Threads for swifthand

  1. 2

    Can anyone who’s tried sorbet comment on why you’d want to use this instead of crystal?

    1. 10

      your question is ‘why x instead of y’, but these two are not direct competitors/alternatives, though both are related to ‘type checking ruby(-esque) code’.

      sorbet is a ‘checker’ that can be used on top of an existing ruby codebase, which also means it can be gradually adopted. this is similar to mypy in the python world.

      crystal, on the other hand, is a separate (though similar looking) programming language with no direct interop with the ruby ecosystem. instead, it compiles to machine code, as opposed to running inside a ruby interpreter/vm.

      1. 1

        yeah crystal is like elixir… similar syntax but distinctly different and incompatible.

        1. 1

          Thanks. “Ability to gradually migrate” does answer the question

        2. 4

          completely different languages and runtimes?

          it’s my understanding that sorbet, and the new type checking functionality built into ruby 3 using RBS, can be adopted gradually in existing ruby projects

          crystal would require a rewrite

          1. 1

            I am more interested in that exactly: What have people’s experiences been between RBS and Sorbet? They seem to approach the same problem, but I suspect they’re not entirely overlapping. As such, I find myself wanting to gather the experiences of someone who has tried both before deciding which I might gradually introduce into a project.

            1. 6

              I wrote an online poker virtual table in ruby without any typechecking at the start of the pandemic. It went swimmingly.

              After playing for a few weeks I realized about a dozen tweaks I wanted to make. Diving back into the code was a little difficult and I grabbed Sorbet (my first go at it) and I found it really helped me keep things straight. I used the types sparingly at first, mainly to document my assumptions about parameters for certain functions. In some places the type signatures got complicated and I took that as a hint to refactor. Decent experience all around. The worst part was the way it loaded gems when probing for type signatures in my dependencies. Thankfully that was a smaller project, probably 2k LoC with a half dozen dependencies. I can’t imagine how a large rails application would fare in that process.

              Later RBS was released and I figured I’d port my game over to it, since RBS’s official status seemed to challenge Sorbet and the future might demand I make the change. I didn’t like any part of it. The definitions being in a separate file was probably the worst part: it meant that those useful notes about my assumptions (in the form of type signatures) were an extra step away. The error messages coming from the Steep tool were significantly less understandable than Sorbet’s, and it ran slower in most cases too.
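
              For anyone who has not seen the two side by side, here is a rough sketch of the contrast described above (the class and method are hypothetical; the Sorbet half needs the sorbet-runtime gem, and the RBS half is shown as the separate file it would live in):

              ```ruby
              # Sorbet: the signature sits inline, right above the method it documents.
              # typed: true
              require "sorbet-runtime"

              class Table
                extend T::Sig

                sig { params(seat: Integer, chips: Integer).returns(T::Boolean) }
                def seat_player(seat, chips)
                  seat >= 0 && chips > 0
                end
              end

              # RBS: the same information lives in a separate file, e.g. sig/table.rbs:
              #
              #   class Table
              #     def seat_player: (Integer seat, Integer chips) -> bool
              #   end
              ```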

              My current day job doesn’t use ruby but if it did I wouldn’t necessarily advocate for my team to adopt either tool. If someone else did, I’d be happy to help bring in Sorbet and I would argue strongly against RBS. The experience of no type checking was better than RBS.

              1. 1

                Thanks for sharing your experiences! I’ll probably try out Sorbet first on a smaller project, and get a feel for it.

        1. 25

          From 2013 to 2020 I used this article’s recommended solution, in various configurations: mostly LUKS with btrfs, sometimes LVM, sometimes mdadm, sometimes letting btrfs handle it, and all sorts of scripts to handle the rest. Although performance was not always amazing, I never had problems with it. This included years when people would swear btrfs was going to eat my children. It was always a bit of a burden to set up, monitor and maintain, but I got used to it. I use ZFS now, but I still have a few things that I like better about LUKS+btrfs, and some of them are mentioned in this article.

          I can comfortably use either, and I do not feel particularly partisan in the matter.

          What tipped the scales for me was threefold: ZFS’ encrypted send/recv, the ease of tooling around it, and the overall approach to on-line data integrity (as explained by @wizeman). Combined, these now feel like “table stakes” for me to consider a combination storage-and-backup solution.

          To a lesser extent (i.e. not a deciding factor), I have enjoyed things like ZED, which I believe stems from what the article calls a “layering violation”. I hated that at first, honestly. It sounds silly, but after a near-decade of the previous tools, the idea of not lego-ing together block devices in my own bespoke manner was a big mental barrier. In the long run it has not mattered, and seems to be the source of features that I benefit from.

          The one point in the article I have to individually disagree with is the claim that “checksumming is usually not worthwhile”. I do not agree, not at all. Even with ECC memory, the number of random hardware issues I have seen has taught me that I want every tool imaginable at my disposal, at every link in the chain. This isn’t just about RAM or drives, either: bad SATA cables, overheating HBAs, transient voltage issues on a PCH, oh my! Name any minor component between your CPU and the storage media, and it can go wrong there! For anyone who has not used smartd, I recommend checking that out as well.

          As a final caveat, my use cases mean I do not use dedup or compression. Whether they are a benefit, a detriment, or better solved in other ways is not part of my calculus. They are not reasons that I use ZFS, but they also are not reasons I would avoid it. It strikes me as flawed reasoning to say “ZFS offers compression, but compression is often not useful” as a reason to avoid ZFS.

          1. 61

            Please don’t pay attention to this article, it’s almost completely wrong. I think the author doesn’t know what he’s talking about. I will go point by point:

            • Out-of-tree and will never be mainlined: Please don’t use zfs-fuse, it’s long been abandoned and OpenZFS is better in every respect. The rest of the points I guess are true (or might be, eventually, sure).
            • Slow performance of encryption: This seems to be completely wrong. I believe OpenZFS re-enabled vector instructions with its own implementation of the kernel code that can no longer be used; that change was merged many months after the Linux kernel disabled vector instructions.
            • Rigid: This was done deliberately so people like the author don’t shoot themselves in the foot. It would actually have been easier to make the vdev hierarchy more flexible, but ZFS is more strict on purpose, so users don’t end up with bad pool configurations.
            • Can’t add/remove disks to RAID: I guess this is still true? I’m not entirely sure because I’m not following OpenZFS development closely nor do I use RAID-Z.
            • RAID-Z is slow: As far as I know this is correct (in terms of IOPS), so RAID-Z pools are more appropriate for sequential I/O rather than random I/O.
            • File-based RAID is slow: OpenZFS can now do scrubs and resilvers (mostly) sequentially, so this point is wrong now.
            • Real-world performance is slow: I wouldn’t call it slow, but ZFS can be slower than ext4, sure (but it’s also doing a lot more than ext4, on purpose, such as checksumming, copy-on-write, etc).
            • Performance degrades faster with low free space: The free-space bitmap comment is just weird/wrong, because ZFS actually has more scalable data structures for this than most other filesystems (such as ext4). It might be true that ZFS fragments more around 80% utilization than ext4, but this is probably just a side-effect of copy-on-write. Either way, no filesystem will handle mostly full disks very well in terms of fragmentation, so this is not something specific to ZFS, it’s just how they (have to) work.
            • Layering violation of volume management: This is completely wrong. You can use other filesystems on top of a ZFS pool (using ZVols) and you can use ZFS on top of another volume manager if you want (but I wouldn’t recommend it), or even mix it with other filesystems on the same disk (each on their own partition). Also, you can set a ZFS dataset/filesystem mountpoint property to legacy and then use normal mount/umount commands if you don’t like ZFS’s automounting functionality.
            • Doesn’t support reflink: This is correct.
            • High memory requirements for dedupe: The deduplication table is actually not kept in memory (except that a DDT block is cached whenever it’s read from disk, as any other metadata). So as an example, if you have some data that is read-only (or mostly read-only) you can store it deduped and (apart from the initial copy) it will not be any slower than reading any other data (although modification or removal of this data will be slower if ZFS has to keep reading DDT blocks from disk due to them being evicted from cache).
            • Dedupe is synchronous: Sure it’s synchronous, but IOPS amplification will mostly be observed only if the DDT can’t be cached effectively.
            • High memory requirements for ARC: I don’t even know where to begin. First of all, the high memory requirements for the ARC have been debunked numerous times. Second, it’s normal for the ARC to use 17 GiB of memory if the memory is available otherwise – this is what caches (such as the ARC) are for! The ARC will shrink whenever memory is otherwise needed by applications or the rest of the kernel, if needed. Third, I use OpenZFS on all my machines, none of them are exclusively ZFS hosts, and there is exactly zero infighting in all of them. Fourth, again, please just ignore zfs-fuse, there is no reason to even consider using it in 2022.
            • Buggy: All filesystems have bugs, that’s just a consequence of how complicated they are. That said, knowing what I know about the ZFS design, code and testing procedures (which is a lot, although my knowledge is surely a bit outdated), I would trust ZFS with my data above any other filesystem, bar none.
            • No disk checking tool: This is actually a design decision. Once filesystems get too large, fsck doesn’t scale anymore (and it implies downtime, almost always), so the decision was made to gracefully handle minor corruption while the machine is running and being used normally. Note that a badly corrupted filesystem will of course panic, as it likely wouldn’t even be possible to recover it anymore, so it’s better to just restore from backups. But you can also mount the ZFS pool read-only to recover any still-accessible data, even going back in time if necessary!

            In conclusion, IMHO this article is mostly just FUD.

            1. 21

              This is actually a design decision.

              A question on my mind while reading this was whether the author knows ZFS well enough to be making some of these criticisms honestly. They seem like they should, or could. I am not attacking their intelligence; however, I would prefer to see a steelman argument that acknowledges the actual reasons for ZFS design choices. Several of the criticisms are valid, but on the topics of fsck, ARC and layering the complaints appear misguided.

              I spent 7 years using the solution they recommend (LUKS+btrfs+LVM) and have been moving to ZFS on all new machines. I’ll make that a separate top-level comment, but I wanted to chime in agreeing with you on the tone of the article.

              1. 7

                I’m not convinced the check tool is really unnecessary. It’s not something I want to run on mount or periodically; I want a “recovery of last resort” offline tool instead, and it doesn’t have to scale because it’s only used when things are down anyway. If there’s enough of a use case to charge for this, there’s enough to provide it by default.

                1. 4

                  In general we try to build consistency checking and repair into the main file system code when we can; i.e., when doing so isn’t likely to make things worse under some conditions.

                  It sounds like what you’re after is a last ditch data recovery tool, and that somewhat exists in zdb. It requires experience and understanding to hold it correctly but it does let you lift individual bits of data out of the pool. This is laborious, and complicated, and likely not possible to fully automate – which is why I would imagine many folks would prefer to pay someone to try to recover data after a catastrophe.

                2. 5

                  Dedup does generally have high memory requirements if you want decent performance on writes and deletes; this is a famous dedup limitation that makes it not feasible in many situations. If the DDT can’t all be in memory, you’re doing additional random IO on every write and delete in order to pull in and check the relevant section of the DDT, and there’s no locality in these checks because you’re looking at randomly distributed hash values. This limitation isn’t unique to ZFS, it’s intrinsic in any similar dedup scheme.

                  A certain amount of ZFS’s nominal performance issues come from the fact that ZFS does more random IOs (and from more drives) than other filesystems do. A lot of the stories about these performance issues date from the days when hard drives were dominant, with their very low IOPS figures. I don’t think anyone has done real performance studies in these days of SSDs and especially NVMe drives, but naively I would expect the relative ZFS performance to be much better these days since random IO no longer hurts so much.

                  (At work, we have run multiple generations of ZFS fileservers, first with Solaris and Illumos on mostly hard drives and now with ZoL on Linux on SATA SSDs. A number of the performance characteristics that we care about have definitely changed with the move to SSDs, so that some things that weren’t feasible on HDs are now perfectly okay.)

                1. 2

                  I have a new laptop that’s been sitting for 3 weeks, and I’ve yet to migrate to using it. I’m not one of those people who moves things over all at once. I prefer to use my (infrequent) changes in primary personal devices as an opportunity to reorganize and reconsider my data and applications.

                  Upside is vast, downside is that I need to find the time to do it.

                  1. 3

                    As someone who has the same story of “used it for a while, eventually went all-in” with Linux Mint + Cinnamon, I’ve been keeping my eye on Pop. I’ve daily-driven it for a month at two different times. It seems to check basically all the same boxes as Mint + Cinnamon, and in an alternate reality I could have just as easily ended up a happy Pop user instead.

                    The one thing I appreciate in Mint, thus far, has been the dedication to making upgrades between minor releases feel less dangerous (or possible at all). That is one aspect I have found lacking in many other Ubuntu-based distros that try to match the 6-month cycle of their upstream. Does anyone have experiences to share upgrading between versions of PopOS (rather than the full backup-wipe-reinstall)?

                    1. 3

                      I run PopOS on one of my systems and upgraded from a prior version to 21.04 a while back. I kind of expected to end up doing a wipe + reinstall but wanted to try out the upgrade path just to see if anything broke - I don’t think anything did.

                    1. 6

                      The last two weeks I have had a few occasions to discuss Ruby 3 with friends, and wonder about when libraries will begin to ship features that lean into more parallel or asynchronous workflows. The exact example one of them raised was “Even just allowing independent database queries to run concurrently would be amazing” and lo, Relation#load_async.
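
                      For anyone who has not tried it yet, the API is pleasantly small. A rough sketch, assuming a Rails 7 app with hypothetical Post.published and Comment.recent scopes:

                      ```ruby
                      # Each load_async schedules its query on Rails’ background
                      # thread pool immediately, instead of lazily on first access.
                      posts    = Post.published.load_async
                      comments = Comment.recent.load_async

                      # Both queries have been running concurrently; calling to_a
                      # blocks only for whatever time is still remaining on each.
                      posts.to_a
                      comments.to_a
                      ```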

                      As exciting to me is the promise of removing some of the gotchas and edge cases of the Rails autoloader. I really should dig into that Zeitwerk upgrade guide, because I do worry for how many projects I have with autoloader workarounds that I will need to seek out and undo.

                      1. 2

                        I’ve been doing async in Ruby for over a decade. While the Ruby 3 features are useful, they don’t add anything fundamentally new.

                        1. 1

                          Care to elaborate in what form?

                          1. 1

                            Started out with callbacks and deferrables, worked with one of the inventors of fiber-based “invisible async”, most recently promise-based

                          2. 1

                            This is pretty disingenuous. You could do async Ruby before Ruby 3, but to really do it you needed an external library like eventmachine. The language itself prevents true async because of a GIL. Ruby 3’s features make async much more practical to achieve with the base language itself.

                            1. 1

                              GIL prevents proper multiprocessing/threading, which is orthogonal to async I/O

                              1. 1

                                You can do async I/O with a GIL, but it makes it much less useful practically.

                                1. 2

                                  IME the GIL makes no difference, since anything that benefits from async is IO-bound and doesn’t use even one full core of CPU anyway…
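
                                  A minimal sketch of that point under MRI: here sleep stands in for an IO-bound call, since it releases the GVL exactly like blocking on a socket does, so threads overlap even though only one can execute Ruby code at a time.

                                  ```ruby
                                  require "benchmark"

                                  # Stand-in for an IO-bound call (HTTP request, DB query):
                                  # sleeping releases the GVL, just like blocking on a socket.
                                  def slow_io
                                    sleep 0.1
                                  end

                                  # One after another: roughly 0.3s of wall time.
                                  serial = Benchmark.realtime { 3.times { slow_io } }

                                  # In threads: the waits overlap, so roughly 0.1s,
                                  # GVL notwithstanding.
                                  threaded = Benchmark.realtime do
                                    3.times.map { Thread.new { slow_io } }.each(&:join)
                                  end
                                  ```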

                        1. 2

                          Visiting family and pre-preparing various Christmas food. As my parents have gotten older we’ve learned it is easier to prep as much as possible about a month in advance, such as dough for sweets, or pre-cooked pierogis, and freeze it until the actual week of the holiday. They’ll find things to stress about in that week to fill the space, but at least one big chunk of it is done.

                          We tend to give out a lot of holiday food to friends and family. Baking a few dozen mini-loaves of zucchini bread, hand-filling 300 pierogis, and so on, requires a team effort for the whole weekend.

                          1. 1

                            I have been curious to try out pierogis! Any good recipes you can share (secret ones are totally fine 😅)

                            1. 2

                              The dough has simple ingredients, but an involved process. I’ll send a DM later today.

                          1. 7

                            It’s going to be interesting to have a new DE available by default in a popular distro. I’m not sure why we ended up in this situation, but in almost every case you get the default Gnome, or you have to click through alternatives/spins/whatever to get something else. This is a welcome change.

                            1. 7

                              I agree with you, but OTOH, Ubuntu did it with Unity and a lot of people hated that.

                              I am running the Ubuntu Unity remix right now, on a PC wiped and reinstalled just this month. IMHO it’s still a great desktop and one of the best ever for Linux – but a lot of people hated it. One of the things that saddened me were people complaining that it couldn’t do things that it did easily, readily and by default – but they hadn’t learned how. (Example: multiple complaints that if you had an active icon in the dock thing for an app, you couldn’t open a new window. You can: you just middle-click the icon for a new instance. In this regard, it’s better than macOS, its inspiration.)

                              Sadly, 26 years after Windows 95, that is the only desktop metaphor most people know. It’s a good one, but there is a lot more to desktop GUIs than Win95, and most of the clones don’t even implement one of the most basic features: the ability to move the taskbar to a vertical position on one edge, where the contents remain horizontally arranged.

                              KDE, Cinnamon, MATE, Lumina – all Win95 ripoffs and can’t do this.

                              Xfce & LXDE/LXQt are the only ones I know that do it well.

                              1. 2

                                You can: you just middle-click the icon for a new instance. In this regard, it’s better than macOS, its inspiration.

                                I’m not remotely surprised people are mad - convention in desktops is that middle-click doesn’t exist (or is optional), due to historical reasons of some mice literally not having a middle-click. Due to this convention, most people don’t even think to try middle-clicking, and will either left- or right-click only.

                                The ‘problem’ here is that people don’t learn new paradigms formally - they learn by example, from using existing programs. So new entrants with new paradigms have to teach their paradigm at the same time as they’re introducing it, which means it’s impossible to be both radically innovative (as in, innovating at the very roots of your design) and intuitive/beginner-friendly. An aspect of a program can either be familiar or fundamentally novel, not both.

                                1. 2

                                  I see what you mean, but in this instance and context, I beg to differ.

                                  Middle-clicking doesn’t do anything significant on macOS outside of web browsers, as far as I’m aware. On Windows, it triggers a free-scroll-by-dragging mechanism that I think was originally implemented by Logitech for their 3-button PS/2 and RS-232 mice, long before USB or scroll-wheels were common.

                                  But on xNix it’s an established mouse gesture with 3 meanings.

                                  1. Paste the currently-selected text at the cursor
                                  2. On a window title-bar: send this window to the back of the Z-stack, behind all other windows
                                  3. (As on Windows & macOS) Open a new browser tab to display the link being middle-clicked.

                                  AFAICR this works on FreeBSD as well as on essentially all Linuxes. I haven’t tried OpenBSD, NetBSD, Minix 3 or Plan 9 extensively enough to say.

                                  The middle-click-to-open-a-new-instance behaviour is standard in browsers across all 3 major OS families. The fact that Apple and most PC laptops don’t give you a middle button doesn’t invalidate this. Clicking the scroll wheel has done it on all mice and trackballs for 20+ years, and if you don’t have a middle button, clicking both left + right together has done it for 25+ years.

                                  This is not new or weird or radical. If people don’t know it’s there, that’s their problem, not that of Unity programmers. Or they could have taken 2min to read the single-page of quick-start instructions that Ubuntu resorted to displaying over the wallpaper when you first log in.

                                  Once again, I am frustrated and angry because most people are ignorant and lazy, so they didn’t know how to use something good and attractive and pleasing and useful, so handy functionality that I liked and used daily for many years was taken away from me.

                                2. 1

                                  Cinnamon supports vertical taskbars. One on each side, if you want. Am I misunderstanding something about your complaint?

                                  1. 3

                                    It supports vertical panels.

                                    A panel has the contents arranged in the same direction as the panel. So if the panel is vertical, the contents are also vertical.

                                    A taskbar is different. The contents are always horizontal, whatever the orientation of the panel. Xfce calls this a “deskbar” and can do both.

                                    Here are examples of vertical taskbars:

                                    Here’s a broken one in MATE:

                                    Notice that the menu is on its side, the spacing is all over the place, the launcher icons are GIGANTIC because they scale with panel width, and there’s a vertical column of status indicators.

                                    Cinnamon does something similar; I lose ¼ of the panel to a vertical column of status indicators, and I can’t have a neat single-icon-height row of them. Even the clock wraps onto 2 lines, one for the hours, and below that, the minutes. Dash-to-panel under GNOME Shell is the same.

                                    A vertical taskbar retains the horizontal orientation of the controls on the panel, but the panel itself is vertical. Make it wider, your app-switcher buttons get wider but don’t change size; you get more status icons in each row. The clock and start button stay the same size.

                                    1. 2

                                      KDE can do most of this. Here’s mine

                                      I never tried to change the start or the clock size. It’s not obvious, but maybe I’m missing some options?

                                      1. 1

                                        I may have a screenshot of KDE $LATEST’s horribly broken attempt to do it somewhere, from the last time I tried. :-D

                                        Yeah, no, KDE can’t do it well at all. They never thought of the use case so it’s untested and it shows. The start button is square and scales to the width of the panel so it becomes HYYYUUUGE. Ditto the clock. The window-switcher applets – there are 2 to choose from, because KDE – try to scale the buttons to fill the available space which breaks my muscle memory.

                                        Belatedly looks at your screenshot Ha! Actually that nicely illustrates exactly the things I mean! Thank you.

                                        KDE has an option for everything, yet the devs never thought of setting size constraints on toolbar items in a generic way (the smart thing to do, IMHO, and AFAICT what LXDE/LXQt does), or at least size constraints on things you may not want to scale (e.g. the launcher menu and clock, which is what Xfce does).

                                        Like it or not, the Win95-style desktop was invented by Microsoft, and in my not-remotely humble opinion, any desktop that seeks to implement the same style of desktop – taskbar, Start menu, system tray/notification area, etc. – should at least imitate the basic functionality that was in the original 26Y ago.

                                        Xfce does this, not terribly elegantly but in a nicely-customisable way. LXDE/LXQt do it, in a simpler way that’s less customisable but works. GNOME 2/MATE, Cinnamon, KDE 3/4/5 and just about every other Win95-like desktop I’ve tried on Linux or FreeBSD over the last 26 years fail to do so.

                              1. 4

                                Just moved. Final days of overlap between the two leases, so going to deep clean the old place and take photos of my efforts. If my landlord tries to stiff me on my deposit, I will be able to push back.

                                All other time (at new location) will be spent digging through boxes for a very specific item that I need in the moment. Along the way I will pass over twenty other items, until I later need one of those and think to myself “Ah crap I thought I just saw that?!” and go back to digging.

                                I am very bad at moving.

                                1. 2

                                  Just moved. Can confirm the ah hell, where was that thing I just had.

                                1. 12

                                  Ruby has had this for a little while, and I accept that this is a popular language feature… HOWEVER: the more I see this sprinkled around code, the worse of a code smell I find it to be. I have actually seen a style emerge where people default to using this rather than traditional method-calling, and it drastically increases the complexity of a piece of code. It’s almost never tested behavior, it’s more of a way to avoid failing-fast, ensuring that tracking down the true source of a problem is much, much harder.

                                  Like any tool it can be useful and it can also be misused - I have to accept that it’s here to stay in null-permitting languages. For now it bums me out, though.

                                  1. 3

                                    Yes. I think it should only be used at the boundaries as part of quick and dirty validation. If you’re using it outside of a constructor/converter, you’re using it wrong.

                                    1. 3

                                      Speaking from TypeScript, in which you can constrain away nulls at any declaration:

                                      I think it’s a matter of the data you’re dealing with. If a data type has optional properties, it should be fair game for use with optional chaining. Getting an optional result from a property lookup means you can collapse multiple null checks into one and get code that’s easier to read, for instance.

                                      But first, the data type has to be suitable. Most functions’ argument types ought to disallow undefined / null, if what the function would do in that case is return undefined / null anyway. Push the responsibility for that check back to the caller, and it will both reduce the function’s complexity and prevent calling it with optional-chained expressions. “For best results, squeeze tube from bottom.”

                                      1. 2

                                        Clojure has a threading macro which allows this, and doesn’t turn into a code smell but a warning that there will be special casing for nil and the check for a valid value is in that section. Normally the behavior it guards is tested or relied on elsewhere, and not just a throwaway “don’t crash here please.”

                                        1. 2

                                          In Ruby code at work, we have a strict rule against chaining the safe navigation operator.
                                          Single use is discouraged, but allowed in situations like simple presentational code, or if used in a way that does not propagate null values further, e.g. when the code provides a default:

                                          user.favorite_color&.titleize || "None"

                                          In cases like this it is being used as a slightly more terse replacement for a ternary operator:

                                          user.favorite_color.present? ? user.favorite_color.titleize : "None"

                                          I don’t care for it myself, but enough of the team does, so the compromise has been allowing these singular uses, but never chaining.

                                          I agree with you. Chaining is a big red flag. If one is drilling down into some crazy deep object chain as in user&.company&.address&.state, it is a sign that something else is wrong, and the need to reach for the safe navigation operator is a smell pointing to that.
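
                                          A tiny sketch of why the chained form hides the broken link (these Structs are hypothetical stand-ins for real models):

                                          ```ruby
                                          User    = Struct.new(:company)
                                          Company = Struct.new(:address)
                                          Address = Struct.new(:state)

                                          user = User.new(nil) # a user whose company was never set

                                          # Each &. quietly maps a missing link to nil, so the nil
                                          # that comes out the far end says nothing about which
                                          # link was actually missing:
                                          state = user&.company&.address&.state
                                          # state is nil -- but was it company, address, or state?

                                          # Without &., the failure names the exact broken link:
                                          #   user.company.address
                                          #   # => NoMethodError: undefined method `address' for nil
                                          ```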

                                        1. 6

                                          I submitted it because I see it as relevant and anti-pitchfork enough to be worth spreading.

                                          Nice to see them self-hosting it and correcting the misunderstandings. I wonder why people are against opt-in usage statistics.

                                          1. 24

                                            Nice to see them self-hosting it and correcting the misunderstandings. I wonder why people are against opt-in usage statistics.

                                            There is a cultural element in software that can never be meaningfully separated from the strictly technical aspects, and this is just one of them. I don’t necessarily agree with this position (i.e. opposition to usage statistics, even opt-in) but I can sort of see where it’s coming from.

                                            First, there’s a general, and probably at this point well-deserved, opposition to tracking technologies, because usage statistics have been misused often enough, and for long enough, that it’s hard to trust anyone gathering stats anymore, even if it’s done in good faith, and even if it’s done by trustworthy parties.

                                            One way to look at it is that Google, Facebook & friends have ruined stat collection for all of us, I guess? The fact that it’s opt-in isn’t really relevant. The idea is that there’s a high chance that you are eventually going to get screwed because that’s what the tracking industry does. It’s a shady industry that attracts shady people, and that results in shady business decisions even in matters that are not related to data collection, because that’s how things run in a shady industry. It’s sort of like why some people don’t want to do business with oil & gas companies. Spilling oil into baby dolphin territory is opt-in, and you might think that if that’s a legitimate concern, you just opt out – but even if you do, you’re still going to have to deal with a lot of shady crap, because that’s what that industry is like. It rarely happens that the same people who do shady crap drilling for oil (or misusing private data) are okay the rest of the time.

                                            In other words, there’s a concern that a) you are going to be affected even if you opt out of data collection, and b) that if you do opt in, there’s a high chance that you are going to get screwed over, no matter what the fine print says today.

                                            Second, there’s a more subtle effect involved, where analytics are opt-in, but a lot of development decisions are based only on collected data. There was an article on the frontpage here a while back about how Mozilla used that to deprecate the ALSA interface, leaving only the PulseAudio interface in, and thus proceeded to piss off orders of magnitude more people than their tracking showed. The general feeling in this case is that analytics, sure, is technically opt-in, but if you depend on that piece of software for anything important, then buddy you’d better opt in or the deprecation hammer will fall right on top of the features you use.

                                            And third, which is an important thing to remember: Audacity is open source software. It contains contributions from lots of volunteers all over the world, some of whom might have never chosen to contribute to a piece of software that uses analytics, even if they’re opt-in. I can see why some of them would be pissed.

                                            Edit: all that aside, there’s still the matter of the cultural element i mentioned above. It’s just that the tide is turning against data collection in general. It doesn’t have to be a rational thing, people don’t have to logically justify their choice of software or hardware. Whether they’re justified in their belief is irrelevant after a point. Lots of things in our culture can’t be logically justified and we still do them.

                                            1. 8

                                              The general feeling in this case is that analytics, sure, is technically opt-in, but if you depend on that piece of software for anything important, then buddy you’d better opt in or the deprecation hammer will fall right on top of the features you use.


                                              This is why I generally (though still selectively) opt in on many cross-platform desktop applications: to represent my presence as a Linux desktop user, on behalf of the thousands who I know won’t. If a company doesn’t know you (i.e. people with your use patterns) use their thing, eventually a report to this-or-that VP of Product is going to have metrics showing that you don’t exist, and don’t deserve support or further effort. In the case where you actually do exist, this can be inconvenient.

                                              I participate in even more time-consuming tasks, like responding to Lenovo’s customer research surveys (and others). In free-form responses, I drop in upbeat excitement on behalf of whatever niche or minority usage that I honestly represent (features, industry focus, you name it: I want to be heard).

                                              That said, I am under no illusion that folks will change on this, and do as I do. Least of all those persnickety Linux users.

                                            2. 7

                                              Usage statistics are a side channel. They’re explicitly a way to exfiltrate data.

                                              1. 1

                                                Usage statistics are valuable only to the maintainers, not to the users, at least not in a manner that’s direct enough to be observed. For example, a maintainer who collects nothing recently failed to notice that very few users completed a certain task, then made that task mandatory. The result was quite unpleasant.

                                                Usage statistics can be said to be one of the cheapest, most effective ways to notice user problems. Much less skewed than problem reports, much less effort than running focus groups.

                                                For someone who gets the software for free, there’s little reason to consider whether a particular feature makes developing the software simpler. Really, if you don’t pay for the software or its development, if you don’t even know the names of the developers, why would you accept something that might be a security risk, just to simplify some unknown people’s work? And it may be a security risk, because as Corbin notes, those statistics are a way to exfiltrate data from your system.

                                              1. 19

                                                I find it strange that statements like “training this model had a CO2 footprint of 5 cars over their life time” are not put into more context. How often do they train a model? How many people does it serve?

                                                Conceivably, the researchers working on it also had cars, which might already exceed this carbon impact. Five cars for a model of huge worldwide impact doesn’t necessarily seem like a lot.

                                                Edit: possibly the paper does have more context, of course.

                                                1. 7

                                                  Agreed. It’s interesting to note the exponential growth, though. Their 2017 model took 27 kWh - less than the energy in a single US gallon of gasoline. Also note that BERT and its derivatives have really captured imaginations world-wide, and the approach people use seems to be to throw more data and processing power at it. It’s not just Google doing this, it’s dozens of dumb startups.

                                                  The problem with judging any activity by its carbon emissions is that we’re likely to need ALL available energy, fossil or renewable, for the purpose of transitioning to a fossil-free economy by 2050, if we want to have a shot at RCP2.6. In that light, any economic activity - whether it’s training a neural network or selling hot-dogs - that’s not aimed at reducing carbon emissions is somewhat unethical.

                                                  1. 11

                                                    Not to get too political, but our society seems extremely inept at solving pretty much any problem of any worth, whether that’s climate change, Google/Facebook knowing all sorts of things about you, all sorts of muckery with food (varying from palm oil being in damn near everything to food being flown in from the other side of the world), sweat shops in Bangladesh and similar countries making our clothes, etc. etc. etc. Most polls show a vast majority of people don’t like any of these things, but … nothing happens, in almost any country.

                                                    In short, pointing fingers at Google and such with “you should not do that” is probably the wrong strategy, and instead it might be smarter to reconsider how we deal with these problems in the first place. I have some ideas about this, but I won’t expand on them here. I also think it’s exceedingly unlikely that this will happen anyway, for various reasons, so 🤷‍♂️

                                                    tl;dr: we’re fucked anyway.

                                                    1. 8

                                                      Not to get too political, but our society seems extremely inept at solving pretty much any problem of any worth, […]

                                                      The problem is that as long as the people with purchasing power do not feel the pain, we are happy to pay lip service to such causes, but do not want to drastically alter our way of living to solve these issues. However, if something affects rich nations, then suddenly a lot is possible. E.g. see the SARS-CoV-2 vaccines: Western governments threw billions at them and within a year it was done (based, of course, on prior work on SARS and MERS).

                                                      Of course, climate change affects us all, but rich nations do not really see it yet, with some exceptions (fires in Australia and the US West Coast). And climate change will be too hard to turn around by the time we really start caring.

                                                      1. 6

                                                        Yes. What I was trying to argue was that while saying it’s unethical to spend 600 MWh on a language model is completely true, it’s not particularly insightful, as it’s unethical to spend 600 MWh on almost anything - including those five cars that the previous commenter dismissed as a trifle.

                                                        I actually find that this type of argument - a new technology being unethical because of its embodied energy - understates the actual shape and size of the problem. A lot of our current technological infrastructure is ridiculous, when measured by that same yardstick. But maybe it’s unfair to judge a paper by its editorialized summary.

                                                        1. 1

                                                          Google/Facebook knowing all sorts of things about you

                                                          Most polls show a vast majority of people don’t like any of these things, but … nothing happens, in almost any country.

                                                          Is GDPR not a solution? After all, the reason Google knows so much at this point isn’t really search, it’s the octopus of an advertising business it’s got.

                                                          1. 4

                                                            GDPR essentially says you can continue doing what you did before as long as you ask consent: so, you get popups, and other than that little really changed. The exact interpretation of various things (such as “implied consent” if you never click “accept”) differs per member state, and there’s also the issue of enforcement, which is up to member states. In short, it’s all pretty patchy. And a lot of these popups are designed in such a way that opting out is quite time-consuming (not necessarily on purpose, could just be crap design).

                                                            In the end, I feel GDPR is perhaps well-intentioned, but it’s also designed so that companies can keep doing what they were doing while offering an “opt-out solution”, which in many cases is a faux-solution. If something is widely considered undesirable then it should not be done at all, instead of relying on savvy-enough consumers to hunt for opt-out mechanisms.

                                                            A lot of the other things, such as the “right to access your data” and the “right to have your data removed”, were already part of the laws in many countries before GDPR, but no one paid much attention to that, because who cares what the laws are in some tinpot little European country, right?

                                                          2. 1

                                                            food being flow in from the other side of the world

                                                            Very little food is moved via air freight, no? Doesn’t a massive, massive majority of food transport consist of rail, truck and container ship (and combinations thereof)?

                                                            And I hesitate to ask this, because supply chain logistics feels a bit off-topic for lobsters, but what about that is “muckery” anyway?

                                                            1. 3

                                                              With “muckery” I meant more like sugar being added to a lot of stuff, lemons being coated with wax to make them look nicer in the supermarkets (but it’s not good if you use lemon zest), trans-fats not being very healthy in spite of being marketed as such and industry pretending it’s not a problem, I could go on and on.

                                                              As for logistics, shipping is never really free, especially since a lot of foodstuff are cooled. When I lived in New Zealand things were much more seasonal (you can buy imported tomatoes out-of-season, but you pay ridiculous prices).

                                                              1. 2

                                                                Avocados were infamous for being transported on planes (to Europe and Asia, at least), but I see that they’ve moved on to refrigerated sea containers.

                                                          3. 4

                                                            The entire thing reeks of performative theatre.

                                                            1. 2

                                                              Five cars for a model of huge worldwide impact doesn’t necessarily seem like a lot.

                                                              One such example is the models they use to drive the PUE of their data centers to a record low.

                                                              1. 2

                                                                I agree with the alleged point that the cost of pretraining new language models puts it out of the reach of most researchers, resulting in a certain amount of unfair competition. If I have an idea for a better training objective, I cannot really put it to the test, because I don’t have the resources, while Google could easily do a grid search. I don’t think we have seen such a large imbalance in computational linguistics research before.

                                                                However, most scientific papers that use large transformer networks do not pretrain transformers, but finetune them for a specific task, which typically takes only a few hours on a consumer-level GPU. So, even though the carbon dioxide cost of pretraining a large transformer may be very large, the ‘amortized’ cost is relatively low. Once Google, Facebook, et al. release a new pretrained model, thousands of models (if not more) are created by finetuning that pretrained transformer. So, the CO2 impact per published/deployed model is probably not that much higher than before pretrained transformers.
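The amortization argument above can be made concrete with a back-of-the-envelope calculation. The numbers below are entirely made up for illustration, not from any paper:

```ruby
# One-off pretraining cost spread over every downstream finetuned model,
# plus each model's own (comparatively tiny) finetuning cost.
PRETRAIN_KG_CO2  = 300_000.0 # hypothetical one-off pretraining cost, kg CO2
FINETUNE_KG_CO2  = 5.0       # hypothetical per-model finetuning cost, kg CO2
MODELS_FINETUNED = 10_000    # hypothetical downstream models on one checkpoint

amortized = PRETRAIN_KG_CO2 / MODELS_FINETUNED + FINETUNE_KG_CO2
puts format("amortized cost per deployed model: %.1f kg CO2", amortized)
# 300_000 / 10_000 = 30 kg, plus 5 kg of finetuning = 35 kg per model
```

Under these illustrative numbers, the per-model share of pretraining is on the same order as the finetuning itself, which is the commenter's point.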

                                                                The points about biases in models and their ramifications for society are on the mark. I fear that the author’s points are not compatible with Google’s PR about these models, and we should hold FAANG and others accountable for such issues.

                                                                1. 1

                                                                  My shallow understanding of online AI products gives me the idea this is a step in a periodic CI/CD pipeline. Think nightly builds, but the inputs are both code and training data. I think if only the data changed you could just refine the previous result, but in the case of code changes (capturing a new kind of information, any changes to the network structure) you would have to start over and train from zero. This is just a remote guess; I’d love for someone who knows to speak up.

                                                                1. 2

                                                                  This past March I fell off of many years of Inbox Zero. Last week I started climbing back towards it, so continuing that a bit. I’m about 50% done by volume, but only 25% done in terms of total time/effort.

                                                                  Also rebuilding my personal browser start page. The current design is 4 years old, and I’ve moved most of my browsing into Firefox Containers. In addition to a new design, it’s getting an active feature for the first time. I love containers, but I can’t rely fully on the “Always open domain X in container Y” setting, as I do have some sites I visit as multiple identities. So I’m going to hack around it using a few local domain names, e.g. trello-personal.localhost and trello-work.localhost, that do have an “Always open in X container” setting, and then those will redirect to the desired URL (as the tab will now be in the desired container).

                                                                  It’s a minor thing, but I’ve put it off for almost a year, to see if the feeling of need would fade, but it hasn’t.
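A rough sketch of the redirect piece of that hack, in Ruby: a mapping from local alias hostnames to real URLs, plus a toy server loop in comments. The two Trello aliases come from the comment above; the target URLs and board names are invented placeholders.

```ruby
require "socket"

# Hypothetical alias-to-destination mapping; the real one would hold
# whichever per-container URLs you actually use.
REDIRECTS = {
  "trello-personal.localhost" => "https://trello.com/b/personal-board",
  "trello-work.localhost"     => "https://trello.com/b/work-board",
}.freeze

# Map a Host header (possibly including a port) to a redirect target.
def resolve(host_header)
  REDIRECTS[host_header.to_s.split(":").first]
end

# To try it out (blocks; *.localhost resolves to 127.0.0.1 on most systems):
# server = TCPServer.new("127.0.0.1", 8080)
# loop do
#   client = server.accept
#   request = client.readpartial(4096)
#   if (target = resolve(request[/^Host: (\S+)/i, 1]))
#     client.write "HTTP/1.1 302 Found\r\nLocation: #{target}\r\n\r\n"
#   else
#     client.write "HTTP/1.1 404 Not Found\r\n\r\n"
#   end
#   client.close
# end
```

Because Firefox pins each `*.localhost` alias to a container before the request is made, the tab is already in the right container by the time the redirect lands on the real site.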

                                                                  1. 1

                                                                    Still under CDDL, still can’t shrink pools.

                                                                    1. 17

                                                                      Still can’t shrink pools

                                                                      Device removal exists for some usecases, specifically mirrored vdevs.

                                                                      The CDDL flaming is so predictable at this point that it hurts to argue, so I’ll hold off for the most part. Yes, Oracle is bad because they haven’t relicensed ZFS under the GPL. However, the CDDL enabled the open source components of Solaris to be extricated from Oracle and allowed innovation to continue to happen in the open when Oracle closed off Solaris.

                                                                      One example of OpenZFS’ innovation: we finally have an open source encryption alternative to LUKS that can do snapshot backups to untrusted devices. It’s totally changed my backup workflow. I patiently waited for ZFS encryption to start setting up encrypted-by-default Linux machines with snapshots and transparent backups, and my patience was rewarded. OpenZFS 0.8 changed how I set up machines.

                                                                      1. 2

                                                                        Would you prefer if people didn’t complain about the CDDL whenever ZFS is brought up? Because Oracle and the CDDL are literally the main things which take an otherwise super impressive project and turn it into a project with absolutely no practical applicability.

                                                                        Is it even a good thing at this point that innovation is “allowed to continue” on a DoA filesystem, rather than focusing effort on relevant filesystems?

                                                                        1. 4

                                                                          The bias in this comment is just so painful I don’t even know where to start. I’ve been using ZFS on FreeBSD happily for nearly a decade now, the idea that the filesystem is DoA is just propagandist nonsense.

                                                                          1. 1

                                                                            DoA on the commonly used operating systems. There, happy?

                                                                            1. 4

                                                                              Not really, since I’m not the only one using FreeBSD for a storage server running ZFS, and haven’t been for a very long time. Just because you aren’t using it doesn’t mean it’s not widely used. FreeNAS is very popular among home NAS builders, mostly because of ZFS. Get outside your bubble.

                                                                              1. 2

                                                                                I wonder if there’s a misconception that supporting OpenZFS is supporting Oracle, which is explicitly not the case, considering that OpenZFS has deliberately diverged from Oracle to implement things like non-proprietary encryption.

                                                                                I think there were a few reasons cited for that in the GitHub discussions. One was that the specs for Oracle’s ZFS encryption weren’t available, another was that Oracle’s key management was too complex.

                                                                                1. 1

                                                                                  Personally, I know that supporting ZFS isn’t necessarily supporting Oracle. However, continuing development on ZFS means continuing development on a project which is intentionally license poisoned by Oracle to cripple Linux, which is bad enough in itself.

                                                                                  1. 2

                                                                                    intentionally license poisoned by Oracle to cripple Linux

                                                                                    Do you mean the fact that it originally came out of the Solaris codebase? That was Sun’s call, not Oracle’s. That “Fork Yeah!” video I linked in the top level comment has a nice overview of that bit of history.

                                                                                    FWIW, it also explains that the majority of the ZFS team (and teams for many other Solaris subsystems) immediately quit after Oracle acquired Sun and closed off Solaris. It seems like most of the Solaris team wanted development to be in the open, which is orthogonal to how Oracle does business.

                                                                                    Personally, I’m not seeing the harm in supporting the project. The license is not libre, but this is probably a historical artifact of the competition between Solaris and Linux. Linux won in a lot of regards, but the team behind ZFS doesn’t seem like they’re carrying that historical baggage. If they were, ZFS would have died with Solaris and really would be irrelevant.

                                                                              2. 2

                                                                                There’s no (legal) problem using it on Windows or macOS either. The problem is not the CDDL, it’s the GPL. The CDDL does not impose any restrictions on what you can link it with. The GPL does.

                                                                                1. 1

                                                                                  That’s not entirely fair IMO. You can’t take GPL-licensed code and integrate it into a project under a license which is more restrictive than the GPL, which is entirely reasonable. The issue is that the CDDL is more restrictive than the GPL, so CDDL-licensed code can’t use GPL-licensed code, so ZFS can’t use Linux code.

                                                                            2. 2

                                                                              turns it into a project which has absolutely no practical applicability.

                                                                              I’m really confused, are you arguing that ZFS doesn’t work on widely used operating systems? FreeBSD and Linux are pretty widely used.

                                                                              I was also shooting for a technical discussion about the filesystem instead of bikeshedding the license. There’s a lot of technically interesting things that zero-trust data storage enables, such as cloud storage providers that can’t see your data at rest. I think that’s much more interesting to discuss than this CDDL vs. GPL boilerplate. For example, I’ve got some ideas for web-based ZFS replication projects to make sharing files between different people with ZFS pools easier.

                                                                            3. 1

                                                                              The CDDL flaming is so predictable at this point that it hurts to argue

                                                                              I don’t care to argue about it, but I think the camp that’s unhappy about the CDDL is pretty huge.

                                                                              Yes, Oracle is bad because they haven’t relicensed ZFS under the GPL.

                                                                              I’m not even that picky. I’d settle for MIT, BSD, or even MPL.

                                                                              1. 2

                                                                                The CDDL is similar to the MPL in that it is weak, file-based copyleft. The sizeable difference is that the MPLv2 has an explicit exception allowing it to be relicensed as GPL.

                                                                                1. 1

                                                                                  I meant to say MPLv2. Twas a typo.

                                                                              2. 1

                                                                                Device removal exists for some usecases, specifically mirrored vdevs.

                                                                                I recently tried removing a mirrored vdev from my pool and it worked flawlessly. Pretty nice feature - all data was migrated to the other vdevs in the pool. I’m currently going through my pool and replacing old drives with newer drives after testing them. I am tempted to go from 3 mirrored vdevs (2 TB each) to 2 mirrored vdevs (8TB each) without losing anything but the time required for testing, or going with 3 vdevs again.

                                                                              3. 8

                                                                                Are you a current ZFS user, or are those particular reasons that you don’t use ZFS?

                                                                                After years of waiting, with the release of ZoL 0.8.0, I finally moved all-but-one of my machines from LUKS+btrfs to encrypted ZFS. Four out of five, and so far so good. I am close to, but not yet at the point of, flat-out recommending it as a default to my friends who run desktop Linux. The only features I miss so far are:

                                                                                • The way RAID expansion works under btrfs.
                                                                                • LUKS having had time to be integrated cleanly into various utilities shipped with desktop Linux distros.

                                                                                I am very thankful that RAID-Z expansion is in the works, and I hope my faith in the OpenZFS team will be rewarded the way it was with encryption. But much like how so much of ZFS feels “right”, the way btrfs handles adding drives feels like the way it should have always been, with all file systems.

                                                                                1. 1

                                                                                  DKMS is a bit of a pain. Which distro do you typically work with?

                                                                                  1. 2

                                                                                    I like that NixOS makes it pretty clear that my ZFS module is properly built for the exact kernel version I’m running, FWIW. I’ve had lots of success deploying on the order of tens of currently reliable NixOS machines with ZFS.

                                                                                2. 1

                                                                                  Someone needs to go full RMS on this project and just reimplement the whole thing from scratch. No more CDDL, but all the benefits of ZFS. A man can dream…

                                                                                  1. 2

                                                                                    That would be btrfs. The lengths people go to because of a “wrong open source license” or “not-invented-here syndrome” are mind bending. More power to them, but it’s non-trivial.

                                                                                1. 32

                                                                                  The main content of this post does not seem, to me, to support the primary claim.

                                                                                   The framing of this primary claim is whether (or not) people can collectively use Firefox “for the sake of the web”. Put differently: whether the collective choices of users can provide a marketshare-derived bulwark against a complete Google monopoly (on standards, on the web experience, etc.). The article then complains that using Firefox has become burdensome, and that Mozilla behaves poorly in their opinion.

                                                                                   Those complaints are fine enough to be an article on their own. Certainly there is nothing wrong with expressing how one feels. However, neither the individual pain points, nor disingenuous behavior by Mozilla, actually speak to the premise: whether or not the collective choices of users can provide a marketshare-derived bulwark against a complete Google monopoly. As an overall framing question, the article leaves it unaddressed, except for a few moments of unsupported nihilism.

                                                                                  I should be clear: I do not think the complaints listed are invalid. An actual consequence of these complaints is that the people who are part of that bulwark are probably subjected to a worse web browsing experience than they otherwise could be (e.g. if Mozilla acted differently). That is not good.

                                                                                  A conclusion the article does not draw, but which follows from the previous, is that having a worse experience will likely erode that marketshare over time. This will lead it to be a less effective barrier against Google doing whatever-they-please. That is also not good.

                                                                                  Ultimately, while I understand the criticisms (and agree with some), they don’t actually critique the idea of collective action. Instead there are just appeals to despair and powerlessness. “Nothing here is new”, “we are past the point of no return”, “we are entering a dark age”, and then the sentence that bothered me the most:

                                                                                  And does anyone actually believe, that that sub-segment of all web users, that believe in browser engine diversity, can save anything?


                                                                                  And nothing in this article seems to refute that.

                                                                                  1. 7

                                                                                    The framing of this primary claim is whether (or not) people can collectively use Firefox “for the sake of the web”.

                                                                                    My intention was to ask whether people should individually use Firefox, “for the sake of the web”, at the expense of accepting anything Mozilla decides. Sorry if that wasn’t clear.

                                                                                    Considering the current trends, the increasing popularity of Chrome and of mobile platforms (i.e. Android and iOS), I dismiss the possibility of a collective effort to turn the tide a priori. You’re right that I don’t argue the point of why it’s not possible; it just seems like such a pointless debate, one that depends on entirely contingent factors. I just wanted to offer a pessimistic response to all the “Use Firefox to save the web” articles I have been seeing over the last few months.

                                                                                    1. 10

                                                                                      Fair enough, in so far as you acknowledge the a priori dismissal. If we sat here and ran through all those contingent factors, I would probably agree with you more often than not.

                                                                                      FWIW I do not use Firefox as some civic duty on behalf of the web, and I have not found myself arguing that people should (thus far). But nor do I find the “anti-monopoly bulwark” angle implausible. I use Firefox almost entirely because of the Multi-Account Containers add-on. I legitimately do not know how I would use the web without it. Or at least do not know how I could use it as effectively as I am used to.

                                                                                      I did stubbornly use Firefox mobile for a two-year span, despite it feeling like a worse experience. But as of some time this year, it has been markedly better, in that way that goes unnoticed, so much so that I had not reflected on it until typing this paragraph. It’s that natural tendency to take tools/systems for granted once they have been working smoothly for long enough.

                                                                                      1. 13

                                                                                        FWIW, I do use Firefox as some civic duty on behalf of the web, and it’s becoming a more miserable experience with almost every release.

                                                                                        I’ll definitely have a look at Edge when the Linux version ships with vertical tabs, because I really had enough of the abusive relationship with Mozilla.

                                                                                        1. 6

                                                                                          Seeing as it is roughly on-topic, what are the changes that have made you miserable?

                                                                                          In my text editors, terminal, file manager, and some others, when a sub-option-of-a-sub-option changes, I notice immediately. This article and thread have caused me to realize browsers are an odd exception: I am not that sensitive to little changes in options or details.

                                                                                          I use Firefox primarily (90-95% of browsing), but I do use Chrome partially for work. Aside from (a) Chrome lacking a plugin akin to Multi-Account Containers, and (b) Google blatantly not caring that G Suite runs poorly in Firefox, my experience on web pages feels basically comparable.

                                                                                          1. 6
                                                                                            • Extension system can’t support vertical tabs.
                                                                                            • User styles being on their way out.
                                                                                            • Extensions not working on “special” domains.
                                                                                            • Constantly having to fix styling (e.g. dickbar).
                                                                                            • “Restart Firefox” button doesn’t restart Firefox, broken for years.

                                                                                            It’s death by a thousand cuts.

                                                                                            1. 5

                                                                                              For your first point, I use “Tree Style Tabs”, which I’ve been happy enough with. It’s not quite as seamless as the pre-WebExtensions version, but it does give vertical tabs.

                                                                                              1. 2

                                                                                                I’m aware of all options, and they are all crap. (TST is worse than other options though.)

                                                                                                Sure we can hack tabs into a sidebar, but the extension can’t even disable the “real tab bar”.

                                                                                                1. 2

                                                                                                  A bit of css removes the real tab bar for me. What other options do you think are better than TST?

                                                                                                  1. 2

                                                                                                    A bit of css removes the real tab bar for me.

                                                                                                    That “bit” of CSS has grown to 100 lines on my machine. Plus, userChrome.css is on Mozilla’s kill list anyway, so it’s not something that can be relied upon.

                                                                                                    What other options do you think are better than TST?

                                                                                                    Sidebery is better.

                                                                                                    1. 1

                                                                                                      100 lines? I have this:

                                                                                                      #TabsToolbar, #sidebar-header {
                                                                                                          visibility: collapse !important;
                                                                                                      }
                                                                                                      #TabsToolbar {
                                                                                                          margin-bottom: -21px !important;
                                                                                                      }
                                                                                                      Now, if Mozilla does kill userChrome.css and it stops working, I’ll have to move to another browser. It isn’t any love for Mozilla, at this point, that keeps me with it, just that I’m used to TST and containers. I’ll check out Sidebery (though I am perfectly happy with TST as it is).

                                                                                                    2. 1

                                                                                                      This bit of CSS needs to be updated once every couple releases, because they keep breaking it. And it’s going to stop working anyway, as @soc wrote in a sibling comment.

                                                                                                    3. 1

                                                                                                      I’m OK with Tab Center Redux’s vertical tabs in the sidebar. I have no horizontal tab bar. I also have my bookmarks bar in the same horizontal space next to the URL bar. For added usability, I have the normal toolbar (File/Edit/View/…) in the titlebar.

                                                                                                  2. 4

                                                                                                    For comparison: Out of all of them only the restart option bothers me. And that’s broken only on my linux box.

                                                                                                    1. 1

                                                                                                      I rather like All Tabs Helper’s vertical tabs functionality.

                                                                                          2. 3

                                                                                            many of these changes are in line with Google’s vision for the web, and reflect Mozilla’s reliance on Google. while Mozilla may be the lesser of two evils, it is still evil, and only voting for the lesser evil won’t be enough to improve things.

                                                                                            not to mention that using Firefox is much less significant even than a vote. it helps Mozilla charge more for partnerships where they show their users ads, but if you don’t click on these ads then you aren’t actually helping Firefox because you are reducing the per-user effectiveness of their ad space. rambling now…

                                                                                          1. 1

                                                                                            I stay within the Debian ecosystem for my desktop Linux use (after a solid decade of frequent distro-hopping). For the last 4 years I have mostly used Linux Mint’s Cinnamon version, but I continue to check in on alternatives: Pop, Ubuntu, Debian itself, and Elementary.

                                                                                            I still don’t see myself using Elementary as a daily driver, but I deeply respect their continued focus and refinement of their desktop experience. Both in the design on the surface, and how it all fits together and “feels” out of the box. The way they approach forward-facing statements vis-a-vis Wayland carries a different tone from (my limited readings of) other distros.

                                                                                            If you have never tried Elementary, and have a spare machine (old laptop, etc.) to run an install and play with it for a day or two, I recommend it if only for a point of comparison, to see how their choices/points-of-focus regarding desktop UX make you feel.

                                                                                            1. 1

                                                                                              I am greatly looking forward to RBS, but am sure it will be some time before I am using 3.0 at $WORK. I will likely start a few backlogged toy projects, from a grab-bag that I would normally ignore, just for the excuse to experience RBS.

                                                                                              1. 5

                                                                                                It might be just me, but it seems like Rails Engines is missing in the discussion. How does Packwerk compare to that approach?

                                                                                                1. 5

                                                                                                  I think the core issue they are addressing is not whether components of an application with separate concerns can be isolated for configuration, convenience, development, or testing. That can certainly be done with Engines, or even with a small team and a strong commitment to internal organization/conventions.

                                                                                                  This seems to be “solving” something that lies at a much deeper level, and which those two approaches (or tool-assisted variations thereof, as discussed in the post) would leave unsolved. Namely, that Ruby constant resolution allows you to reference any constant, anywhere, for any purpose, after it is loaded (from anywhere else, even if you didn’t know it!). It is the difference between what happens (in every other module/file/context) after a call to Ruby’s “require” versus after a call to Python’s “import”.

                                                                                                  One could take a self-contained grouping of four models, two controllers, and a few interactors, make them into an Engine, and document that “This mini-domain should be treated as an external API, and only accessed as such!” However, there is nothing stopping someone in the primary application from just referencing one of those models by name deep in some other application code, completely invalidating that attempt at isolation.
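
To make that concrete, here is a minimal sketch (all names hypothetical, not from any real codebase) of why Ruby’s global constant namespace makes that isolation advisory only:

```ruby
# Imagine this class lives inside an isolated Engine, documented as
# "internal -- access only through the engine's public API":
module Billing
  class Invoice
    def total_cents
      10_000
    end
  end
end

# Meanwhile, deep in unrelated application code, nothing stops this
# direct reference: once the file defining Billing::Invoice has been
# loaded anywhere in the process, the constant is reachable everywhere.
invoice = Billing::Invoice.new
puts invoice.total_cents
```

There is no language-level way to mark `Billing::Invoice` as private to its Engine; only tooling (like Packwerk) or code review can catch the second half of this file.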

                                                                                                  This kind of boundary can be enforced with some discipline (docs, code review, or otherwise), but the likelihood that it gets violated scales with team size, codebase size, and rate of change within a Ruby project. Just as noteworthy: these are all efforts that would not be required in other ecosystems, where there is a more restrictive answer to the question “Which constants are available in my current execution context?”

                                                                                                  I have thought about this often as one of the biggest challenges in working with Ruby projects “at scale”, and for my particular areas of interest, this is more of a factor weighing against Ruby than the oft-discussed topic of performance.

                                                                                                  1. 3

                                                                                                    To add to my initial comment, I’ve seen posts from other companies that describe how they’ve managed to use engines to encapsulate and enforce interfaces between parts of their application. Flexport and Root Insurance come to mind.

                                                                                                  2. 4

                                                                                                    Hi! This is a great topic that I should have included in the blog post. Rails Engines is definitely one of the mechanisms you can use to modularize a Rails application. As @swifthand mentioned below, Packwerk is a tool to enforce boundaries. While Rails Engines comes with other functionality, it can only be used to establish boundaries, not enforce them: the modularity isn’t enforced, because constants will still be globally accessible.

                                                                                                    You can try using both Packwerk and Rails Engine in a Rails app though. What do you think?
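
For anyone curious what the enforcement side looks like, a rough sketch of a Packwerk `package.yml` (the component path is made up, and key names may differ across Packwerk versions, so check the README for yours):

```yaml
# components/billing/package.yml -- hypothetical component
enforce_dependencies: true    # referencing an undeclared package is reported
enforce_privacy: true         # only this package's public API may be referenced
dependencies:
  - components/accounts       # packages this one is allowed to depend on
```

Running Packwerk’s check command over the repo then reports constant references that cross these declared boundaries, instead of relying on code review to catch them.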

                                                                                                    I also highly recommend checking out the Rails engine section in my colleague’s blog post -

                                                                                                  1. 2

                                                                                                    This is really neat. I actually introduced a junior engineer to the idea of characterization tests a week ago, and wish I had this to serve as an example. Will show it to him anyway, but also show him how simple the code behind it all is.

                                                                                                    1. 1

                                                                                                      Thanks. Great to see the idea spread.

                                                                                                    1. 3

                                                                                                      Traveling to visit friends and family that I haven’t seen since the beginning of the various coronavirus measures. After an appropriate period of self-quarantine, of course. I moved far away in January, telling them “Oh I’ll come back and visit after I settle in. Maybe 3 months or so?” Hah.