1. 1

    Great post! Love the name too!

    Perhaps it’s for historical reasons, or this may be a dumb question, but why have separate “indirect” level qualifications instead of just having a block that either points to data or points to another pointer? It seems that would allow near-infinite “levels.” The only argument against it I can think of is that you’d need some sort of tag to denote pointer vs. data, but that tag could be a single byte or even a single bit.

    1. 1

      Thanks! I’m glad you liked it. You’re the first one that got the name reference :)

      Re the blocks, it’s probably like you said: for historical reasons. Both the FFS paper [1] and, going back further, “UNIX Implementation” [2] talk about triple indirect pointers without discussing any “tag” (a sketch of the fixed-level scheme follows the references).

      [1] https://people.eecs.berkeley.edu/~brewer/cs262/FFS.pdf

      [2] https://users.soe.ucsc.edu/~sbrandt/221/Papers/History/thompson-bstj78.pdf
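
      For anyone curious, here is a minimal sketch of the classic fixed-level lookup (my own illustration with hypothetical constants, not code from either paper). The point is that the indirection level is implied by the file offset alone, so blocks never need a tag distinguishing “pointer block” from “data block”; the price is a fixed maximum file size.

      const NDIRECT: usize = 12;    // direct block pointers held in the inode
      const PER_BLOCK: usize = 256; // pointers that fit in one indirect block

      // Map a logical block number to a disk block. `inode` holds the direct
      // pointers followed by the single, double, and triple indirect pointers;
      // `read_indirect` reads a block of pointers off disk. Assumes `n` is
      // within the (fixed) maximum file size.
      fn block_for(inode: &[u32; NDIRECT + 3], mut n: usize,
                   read_indirect: impl Fn(u32) -> Vec<u32>) -> u32 {
          if n < NDIRECT {
              return inode[n]; // the offset alone says this is a data pointer
          }
          n -= NDIRECT;
          let (mut level, mut span) = (1, PER_BLOCK);
          while n >= span {
              n -= span;
              level += 1;          // single -> double -> triple indirect
              span *= PER_BLOCK;
          }
          let mut block = inode[NDIRECT + level - 1];
          for _ in 0..level {
              span /= PER_BLOCK;
              block = read_indirect(block)[n / span];
              n %= span;
          }
          block
      }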

    1. 3

      I’m not sure if it’s just me, but to suggest that disk utilisation statistics can be realistically used as a law enforcement weapon in this way seems far-fetched. It’s also worth noting that there are all sorts of reasons why apparent disk utilisation might not be what it appears, including but not limited to overlay mounts, in-memory caching, and compression.

      I’m also not sure that inferring that data exists across virtual machine boundaries is practical either. With lightweight containers maybe, but in a “real” virtual machine with emulated or paravirtual disk controllers, on-disk deduplication is invisible to the virtual machine. The virtual machine will have a filesystem of its own and will report the disk space as used regardless, even if it was deduplicated in the real world, because the filesystem descriptors on the virtual hard disk will say that it is used. How is the virtual machine supposed to know otherwise?

      1. 4

        to suggest that disk utilisation statistics can be realistically used as a law enforcement weapon in this way seems far-fetched

        Very much so. I worked as a digital forensics analyst for several years, and never once did this kind of technique even remotely appear. I won’t go so far as to say that it would never be used; maybe in some crazy high-stakes case something like this could be tried as a very targeted last-ditch effort, but that would be a major exception. In most forensic cases you’re looking either at raw data blocks, where permissions aren’t an issue anyway, or at encrypted blobs, where a technique like this wouldn’t make sense either.

        So perhaps the author is focusing more on typical malicious software.

        1. 3

          VMs were mentioned in the context of timing, not space utilization. I.e., you could (I guess) detect a suspiciously fast sync write and infer that it didn’t have to write the data to the disk because the data had been detected as already being there.
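
          If it helps, the idea reduces to comparing how long a synced write takes for data you suspect already exists versus fresh data. A rough sketch of my own (hypothetical paths; as the reply below notes, caches and storage tiers can easily drown the signal):

          use std::fs::File;
          use std::io::Write;
          use std::time::{Duration, Instant};

          // Time a write followed by a sync to stable storage.
          fn timed_sync_write(path: &str, data: &[u8]) -> std::io::Result<Duration> {
              let mut f = File::create(path)?;
              let start = Instant::now();
              f.write_all(data)?;
              f.sync_all()?; // returns once the data is (claimed to be) on disk
              Ok(start.elapsed())
          }

          fn main() -> std::io::Result<()> {
              let guess = vec![0xABu8; 1 << 20]; // data suspected to exist on disk already
              let fresh: Vec<u8> = (0u32..1 << 20).map(|i| (i % 251) as u8).collect();
              let t_guess = timed_sync_write("/tmp/dedup_guess", &guess)?;
              let t_fresh = timed_sync_write("/tmp/dedup_fresh", &fresh)?;
              // A consistently, suspiciously small t_guess would be the (weak)
              // evidence that the guessed blocks were deduplicated.
              println!("guess: {:?}, fresh: {:?}", t_guess, t_fresh);
              Ok(())
          }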

          1. 4

            Perhaps this can be reproduced in controlled conditions with very specific hardware, but otherwise it seems like a stretch. The moment you move away from a single 5400rpm disk up to a hardware RAID controller or a full SAN appliance, you’re suddenly subject to fabric congestion, flash write caches, and possibly even multi-tiered storage. That’s still ignoring the fact that the emulated/paravirtual disk controller has to handle the writes as if they were really happening, as deduplication won’t happen until the hypervisor writes virtual blocks out to real disk. I suppose the notable exception here is with raw device mappings, or cases where something like iSCSI is used to present a LUN to the virtual machine itself, but you’d still be subject to a whole range of other conditions first.

        1. 6

          This looks like it could be very cool! But it also seems like it’d help to see some use cases in action. Perhaps a screencast or terminal cast could help?

          1. 2

            I’ll try making a video/gif of dijo in action.

          1. 23

            Note that:

            • Browsers are pretty much already “bundled” and exist outside the traditional distribution model. Stable distributions have to take upstream changes wholesale (including features, security fixes, and bug fixes) and no longer cherry-pick just security fixes. Packaging browsers as snaps is merely admitting that truth.

            • The chromium-browser deb is a transitional package so that users who are upgrading don’t end up with Chromium removed. It is done this way for this engineering reason - not a political one. The only (partly) political choices here are to ship Chromium as a snap and to no longer spend the effort of maintaining Chromium packaging as a deb. Background on that decision is here: https://discourse.ubuntu.com/t/intent-to-provide-chromium-as-a-snap-only/5987

            • Ubuntu continues to use the traditional apt/deb model for nearly everything in Ubuntu. Snaps are intended to replace the use case that PPAs and third party apt repositories are used for, and anything else that is already shipped “bundled”. For regular packages that don’t have any special difficulties packaging with the traditional model, I’m not aware of any efforts to move them to snaps. If you want to never use snaps, then you can configure apt to never install snapd and it won’t.

            • Free Software that is published to the Snap Store is typically done with a git repository available so it is entirely possible for others to rebuild with modifications if they wish. This isn’t the case for proprietary software in the Snap Store, of course. The two are distinguished by licensing metadata provided (proprietary software is clearly marked as “Proprietary”). This is exactly the same as how third party apt repositories work - source packages might be provided by the third party, or they might not.

            • Anyone can publish anything to the Snap Store, including a fork of an existing package using a different name. There’s no censorship gate, though misleading or illegal content can be expected to be removed, of course. Normally new publications to the Snap Store are fully automated.

            • The generally cited reason for the Snap Store server end not being open is that it is extensively integrated in deployment with Launchpad and other deployed server-end components, and that opening it would be considerable work. Canonical spent that effort when the same criticism was made of Launchpad, but that effort was wasted: GitHub (proprietary) took over as the Free Software hosting space instead, and nobody stood up a separate Launchpad instance even after it was opened. So Canonical will not waste that effort again.

            • The generally cited reason for the design of snapd supporting only one store is that store fragmentation is bad.

            I hope that sheds some light on what is going on. I tried to stick to the facts and avoid loading the above with opinion.

            • Opinion: Ubuntu has always been about making a bunch of default choices. One choice Ubuntu has made in 20.04 is that snaps are better for users than third party apt repositories (because the former run in a sandbox and can be removed cleanly; the latter gives third parties root on your system and typically breaks the system such that future release upgrades fail). Some critics complain that users aren’t being asked before the Chromium snap is installed. But that would be a political choice. Ubuntu is aimed at users who don’t care about packaging implementation details and just want the system to do something reasonable. Ubuntu’s position is that snaps are reasonable. So it follows that Chromium packaging should be adjusted to what Ubuntu considers the best choice, and that’s what it’s doing.

            Disclosure: I work for Canonical, but not in the areas related to Mint’s grievances and my opinions presented here are my own and not of my employer.

            1. 8

              Thanks a lot. While I don’t agree with the opinion at all, the background explanation is much appreciated.

              1. 7

                The chromium-browser deb is a transitional package

                I can’t speak about Mint, but in Ubuntu the chromium-browser deb installs Chromium as a snap behind the scenes.

                The generally cited reason for the Snap Store server end not being open is that it is extensively integrated in deployment with Launchpad and other deployed server-end components, and that opening it would be considerable work. Canonical spent that effort when the same criticism was made of Launchpad, but that effort was wasted: GitHub (proprietary) took over as the Free Software hosting space instead, and nobody stood up a separate Launchpad instance even after it was opened. So Canonical will not waste that effort again.

                So, unless you’ll own the market with the product, it’s not worth open-sourcing? IMO releasing a product open source is never “wasted effort” because it may prove useful in some capacity whether you as the original author know it or not. It may spawn other ideas, provide useful components, be used in learning; the list goes on and on.

                1. 9

                  IMO releasing a product open source is never “wasted effort”

                  It’s very convenient to have this opinion when it’s not you making the effort. People seem to care a lot about “providing choice” but it somehow almost always translates into “someone has to provide choice for me”.

                  1. 5

                    It’s very convenient to have this opinion when it’s not you making the effort.

                    True. I should have worded that better. I was talking about the case of simply making the source available, not all the added effort to create a community, make a “product”, etc. I still don’t believe companies like Canonical have much of a leg to stand on when arguing that certain products shouldn’t be open source, when open source is kind of their entire thing and something they speak pretty heavily about.

                    1. 4

                      Yep. Just to be clear, open-sourcing code isn’t free. At an absolutely bare minimum, you need to make sure you don’t have anything hardcoded about your infra, but you’ll actually get massive flak if you don’t also have documentation on how to run it, proper installation and operation manuals for major platforms, appropriate configuration knobs for things people might reasonably want to configure, probably want development to happen fully in the open (which in practice usually means GitHub), etc.—even if you yourself don’t need or want any of these things outside your native use case. I’ve twice been at a company that did source dumps and got screamed at because that “wasn’t really open-source.” Not that I really disagree, but if that wasn’t, then releasing things open-source is not trivial and can indeed very much be wasted effort.

                      1. 3

                        That’s true, but that cost is vastly reduced when you’re building a new product from scratch. Making sure you’re not hardcoding anything, for example, is much easier because you can have that goal in mind as you’re writing the software as opposed to the case where you’re retroactively auditing your codebase. Plus, things like documentation can only help your internal team. (I understand that when you’re trying to get an MVP out the door docs aren’t a priority, but we’re well past the MVP stage at this point.)

                        If the Snap Store were older, I would totally understand this reasoning. But Canonical, a company built on free and open source software, really should have known from the start that people were going to want the source code, especially given their experience with Launchpad. I think they could have found a middle ground and said: look, here are the installation and operation manuals we use on our own infra. We’d be happy to set up a place in our docs for instructions for other providers if community members figure that out, and if there’s a configuration knob missing that you need, we will carry those patches upstream. Then it would have been clear that Canonical is mostly interested in its own needs for the codebase, but still willing to be reasonable and work with the community where it makes sense.

                  2. 4

                    Opinion: Ubuntu has always been about making a bunch of default choices. One choice Ubuntu has made in 20.04 is that snaps are better for users than third party apt repositories (because the former run in a sandbox and can be removed cleanly; the latter gives third parties root on your system and typically breaks the system such that future release upgrades fail).

                    I think this is a fine opinion but it seems contradicted by the fact that some packages are offered by both the off-the-shelf repos and snap.

                    1. 3

                      I don’t see a contradiction. Can you elaborate?

                      I did say “better than third party apt repositories”. The distribution has no control over those, so what is or isn’t available in them does not affect my opinion. I’m just saying that Ubuntu has taken the position that snaps (when available) are preferable over packages from third party apt repositories (when available). And what is available through the distribution’s own apt repository is out of scope of my opinion statement.

                      1. 2

                        Ubuntu has always been about making a bunch of default choices.

                        What is the default choice when I type jq in bash?

                        Command ‘jq’ not found, but can be installed with:
                        sudo snap install jq # version 1.5+dfsg-1
                        sudo apt install jq # version 1.6-1

                        It’s fine and a well-opinionated choice that Ubuntu prefers it for third-party things. I feel like a lot of first-party supported utilities are not well opinionated, and I’m left thinking about trade-offs when I go with one over the other.

                  1. 1

                    What is a “Chunked-List”?

                    base_ptr is of type *mut MaybeUninit, so I was skipping a few thousand whole chunks, landing very deep in uninitialized memory.

                    The problem is that ptr.add takes a dimensionless unit (why? C programmers have known this was a mistake for decades, but we can’t fix it anymore! What’s Rust’s excuse!?). You should be able to say something like ptr.add(8, bytes) but can’t. Why?

                    The problem is the second line, &mut buf[0]. This creates a pointer that only has provenance for the first element of the array. Offsetting it and then trying to access another element of the array would be UB.

                    How is that possible?

                    1. 1

                      How is that possible?

                      Because &mut buf[0] as *mut u8 creates a pointer to the first byte of the first element. Then add(1) advances one byte, yet the elements stored in buf are of type i32 (4 bytes). You’re now pointing at the second byte of the first element instead of at the second element.
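
                      A compilable illustration of the mismatch (my own sketch, not the article’s code). Note the pointer is derived from the whole array, so it has provenance over all four elements:

                      fn main() {
                          let mut buf = [0i32; 4];
                          // Derive from the whole array, not &mut buf[0], so the
                          // pointer may access every element (see the provenance
                          // point quoted above).
                          let base = &mut buf as *mut [i32; 4] as *mut i32;
                          let bytes = base as *mut u8;
                          unsafe {
                              let _second_byte = bytes.add(1);  // 1 *byte* in: middle of buf[0]
                              *(bytes.add(4) as *mut i32) = 7;  // 4 bytes in: start of buf[1]
                              *base.add(2) = 9;                 // add() on *mut i32 steps whole elements
                          }
                          assert_eq!(buf, [0, 7, 9, 0]);
                      }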

                      1. 1

                        That’s what I would have thought, but the claim, as I understand it, is that it isn’t pointing to the second byte of the first element but to the random number generator known as “UB”.

                        A language designer says some behaviour is undefined because they don’t know the right answer to what something should be, so they want to leave an implementation free to have another answer if it is convenient.

                        My question is: what other answer does it need to have, such that Rust wants to pretend integers aren’t made of bytes?

                        1. 1

                          Perhaps an implementation could implement arrays such that they would “grow” in the same direction that the stack grows? So e.g. [0, 1] would put 1 at the top of the stack and 0 would be the next element on the stack (as in the next to pop from the stack after 1). And then on x86 (where the stack grows downwards) add(len) (where len is the length of an element of the array in bytes) would take you completely outside of the memory occupied by the array.

                          1. 1

                            That doesn’t sound very plausible. On x86-64, the stack pointer (%rsp) points to the top of the stack, but not to the end of the stack memory. That memory region extends in both directions from %rsp on x86 (and on most architectures). When people say “the stack grows down” they mean that pushing a value gives %rsp a lower virtual address than it had before.

                            The reason for this is clear in the assembly; x86 has indirect addressing!

                            mov (%rsp), %rax    # same as pop %rax; push %rax
                            mov 8(%rsp), %rax   # gets the previously pushed value
                            mov 16(%rsp), %rax  # ... and so on
                            

                            NB: The above only works if you’re pushing pointer-sized values. If you push structures, then 8(%rsp) might actually point into the middle of a 128-bit value.

                            Negative offsets are usually used for “scratch” space, but some ABIs have a “red zone” which is used by the callee, so you might see something like mov -136(%rsp), %rax past that zone if you actually spill registers. Most functions only have a few arguments (and they all fit into registers) so you don’t see stack allocation on x86 very much, but both memory regions are “valid” and “usable” if you understand the ABI.

                            It seems much more likely to me that this has something to do with aliasing.

                    1. 1

                      There is also a rather large social push to reduce the number of dependencies, which is less tangible but definitely contributes to the larger crates in the Rust ecosystem.

                      1. 10

                        I agree the String thing is confusing; in fact the author didn’t list quite a few of the string types that exist, and listed nowhere near the number of string conversions or ways to accept string-y arguments. However, it’s one of those cases where the underlying problem Rust solved with this confusion actually exists in (nearly?) all languages. Rust, in typical Rust fashion, just makes you aware of all the footguns up front. Once you wrap your head around when to use each type, and what their various tradeoffs are, it makes perfect sense. So much so that I’ll get frustrated with other languages that paper over these details, leading to bugs. Bottom line: strings in general are hard, really hard.
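
                        For anyone who hasn’t internalized the zoo yet, a quick sketch of the main standard-library types and why each exists (my own example, not from the article):

                        use std::ffi::OsString;
                        use std::path::PathBuf;

                        fn main() {
                            let owned: String = String::from("héllo");       // owned, guaranteed UTF-8
                            let slice: &str = &owned;                        // borrowed view, still UTF-8
                            let os: OsString = OsString::from("héllo");      // platform-native, not necessarily UTF-8
                            let path: PathBuf = PathBuf::from("/tmp/héllo"); // OsString plus path semantics
                            // Crossing a boundary forces you to handle the non-UTF-8 case explicitly:
                            let back: Option<&str> = os.to_str();
                            println!("{} {} {:?} {:?} {:?}", owned, slice, os, path, back);
                        }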

                        1. 3

                          I think the distinctions that Rust makes are useful and necessary. However, I think one of the problems is that the types are confusingly named. String should have been called StringBuf, and OsString OsStringBuf, just as you have Path and PathBuf.

                          I think an additional problem that makes slices ([T]) and string slices (str) difficult to understand is that they are unsized, built-in types. So people have to understand the difference between e.g. &str and str, and why you cannot just put a str in e.g. a struct. I know that there are good reasons for why string references are as they are, but from a learning-curve perspective I think it would have been easier if string slices were something along the lines of a simple Copy type:

                          struct StringSlice<'a> {
                            buf: &'a StringBuf,  // reference to the owning buffer
                            lower: usize,        // start offset into the buffer
                            upper: usize,        // end offset into the buffer
                          }
                          
                          1. 1

                            Having references to unsized slices is necessary to avoid a performance hit. The StringSlice type above is 1 word larger than &str (which is just a pointer and a length). More importantly it has an additional layer of indirection: buf points to the StringBuf which points to the data, while &str points directly at the relevant data.
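
                            To make the cost concrete (using a stand-in for the StringSlice sketched above):

                            use std::mem::size_of;

                            struct StringBuf; // stand-in for the owning buffer type above
                            struct StringSlice<'a> { _buf: &'a StringBuf, _lower: usize, _upper: usize }

                            fn main() {
                                // &str is a (pointer, length) pair: two words.
                                assert_eq!(size_of::<&str>(), 2 * size_of::<usize>());
                                // The sketched StringSlice is three words, plus one more
                                // pointer chase at every access (buf -> data, instead of
                                // pointing directly at the data).
                                assert_eq!(size_of::<StringSlice>(), 3 * size_of::<usize>());
                            }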

                            1. 2

                              You don’t have to convince me. Like I said, there are good reasons for the current representation. It’s just that it makes (string) slices more opaque.

                              This is also quite the opposite of many other types in Rust, which are transparent and can be understood by just reading the standard library.

                              One of my favorite examples is the BinaryHeap::peek_mut method, which would be completely unsafe in another language (since you can modify the tip of the heap, which invalidates the heap property), but in Rust it can be done without any magic. The borrow system ensures that you can only have one mutable reference (so no one else can have a view of the heap while the heap property is temporarily broken), and the Drop implementation of PeekMut takes care of restoring the heap property when necessary.
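
                              Concretely, with the real std API:

                              use std::collections::BinaryHeap;

                              fn main() {
                                  let mut heap = BinaryHeap::from(vec![3, 1, 4]);
                                  if let Some(mut top) = heap.peek_mut() {
                                      // The max element is mutable here; the heap property
                                      // may be broken right now, but no one else can
                                      // observe the heap while this borrow is live.
                                      *top = 0;
                                  } // PeekMut's Drop sifts the value down, restoring the heap.
                                  assert_eq!(heap.peek(), Some(&3));
                              }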

                        1. -1

                          So, the author hates snaps, and is incapable of downloading and installing the deb themselves using dpkg?

                          I mean, I get it. There are aspects of snaps that are ugly and downright unpleasant for grizzled UNIX veterans (I’m looking at you, oddball sandboxed configuration directory locations!) but they’re almost undeniably a boon for the average end user.

                          Ubuntu is designed to be the Linux distribution for everyone, and that includes decidedly non-technical users who want to just install it and have it Just Work (TM).

                          Now, I can just HEAR you revving up your keyboard for a scathing retort about how they don’t Just Work for your use case, but you’re most likely a highly skilled technical practitioner with years of experience at the UNIX command line and very highly refined tastes around things like packaging, layout and software installation.

                          And that’s great! But maybe you should consider a distribution which expects that and gives you that control by default, rather than one aimed at the lowest common denominator, whose aim is to bring Linux to Uncle Alvin, who’s 92 and just wants a way to browse his Fox News, read his email, and maybe buy a thing or two online.

                          1. 3

                            [The author] is incapable of downloading and installing the deb themselves using dpkg?

                            I think the concern is more that in certain circumstances that .deb simply installs a snap behind the scenes. Further, the worry of many is that this will continue to be the trend, where even more traditional packages are replaced by snaps.

                            Sure, you’ll still have PPAs and can download a random deb from a website (if someone supplies it), but that’s a far step down from an official source.

                            Ubuntu is designed to be the Linux distribution for everyone, and that includes decidedly non-technical users who want to just install it and have it Just Work (TM).

                            Yes and no. They’re also marketed heavily towards server and enterprise environments. Canonical is pushing snaps just as hard in those places too (look at LXD, kernel live patching, even things like NextCloud marketing a snap install, etc.).

                            My personal gripe with snaps is just that the marketing doesn’t match the product. I don’t have good experiences with snaps outside of Ubuntu-based distros even though they’re marketed as perfectly cross-distro. My personal fear is that more and more companies will release snaps of their products because of how hard Canonical is pushing them, while on other distros the experience suffers.

                            1. 2

                              My personal gripe with snaps is just that the marketing doesn’t match the product. I don’t have good experiences with snaps outside of Ubuntu-based distros even though they’re marketed as perfectly cross-distro. My personal fear is that more and more companies will release snaps of their products because of how hard Canonical is pushing them, while on other distros the experience suffers.

                              That’s valid. As I mentioned, I’ve had mixed success with snaps even on Ubuntu (there’s a tendency to contribute snaps that are either busted out of the box or become busted very quickly and are never fixed).

                              I’m also a bit frustrated that Snap versus Flatpak is a thing because more fragmentation is exactly NOT what the Linux desktop needs.

                          1. 3

                            Maybe it’s just because of its Twitter format, but it’s hard to judge the points. Some points I agree with. For some of the criticisms I’m not sure what they are referring to, as the thread only goes surface deep, so it’s hard to judge whether what they’re experiencing is common or they’re just doing something against the grain.

                            Also, some points come off as, “I wanted to do X, but Rust wants me to do Y. I know better than Rust and I hate that it is putting up roadblocks to letting me do what I want.” While this can very much be true in some situations, and I’m not doubting the author’s ability to write correct programs, the number of times I’ve felt the same way, only to find out later that what I wanted to do actually WAS flawed in some way, is higher than I’d like to admit.

                            1. 2

                              “I wanted to do X, but Rust wants me to do Y. I know better than Rust and I hate that it is putting up roadblocks to letting me do what I want.”

                              I don’t read any of the points to mean that. Instead, my understanding is that the author is saying that when ‘trying to go against the grain’ you either have to use unsafe to ‘shut rustc up’ or rely on complex language features, verbose code, or external crates, and that neither option is good. E.g.:

                              In Rust there will be a tension between simple but plenty kernels of unsafe, and trying to avoid unsafe as much as possible using complex language features.

                              The purpose of placating the borrow checker is to guarantee properties of the code, e.g. no data races. That says nothing of the purpose the code is written for. That remains the task of a human†. Verbose, hard-to-read code impedes this. So in some cases the tension is between writing simple code wrapped in unsafe, or harder-to-understand code that has some [important] verified properties.

                              †: Yes, I imagine it is possible to encode business rules in Agda or Coq, but that is hardly the common case when writing Rust.

                            1. 6

                              As much as I dislike snap, this post is overly dramatic. You can easily download the non-Ubuntu Chromium binary and install it without needing snap.

                              The main problems of snap, which are “irreconcilable differences” that will alienate a part of the population, are:

                              1. hardcoded home directory pollution
                              2. user home must be inside /home/
                              3. cannot disable the automatic update feature
                              1. 9

                                You can easily download the non-Ubuntu Chromium binary and install it without needing snap.

                                I suppose they want to use official packages from a reputable repository. Installing binaries manually really is bad practice for security and maintainability reasons.

                                1. 2

                                  I installed the official Chromium .deb for Debian and it works flawlessly. (I prefer Firefox, but Jitsi does not work well in Firefox.)

                                  1. 4

                                    Is that a repository, or a single .deb file? If the latter, that doesn’t get updates along with regular system maintenance. If it’s an external repository, that could be a decent solution depending on how much you trust it.

                                    1. 2

                                      If Chromium is anything like regular Chrome or Firefox, they are updated out of cycle with the rest of the system anyway, unless you happen to turn auto-updates off.

                                      1. 4

                                        At work I’m using Chromium and Firefox from the Debian repositories. Auto-updates are turned off, and updates come through the standard system update mechanism.

                                        Having random binaries update themselves in a system sounds like a recipe for madness to a sysadmin. Also, how does that even work in a multi-user system where they’re installed system wide? Does that mean these binaries are setuid root or something?

                                    2. 2

                                      Jitsi does not work well in Firefox

                                      I keep hearing this, but I use Jitsi from Firefox every day and don’t have any issues. There was a feature missing in Firefox about a year ago that was preventing Jitsi from working. That was reported and eventually fixed, although it took a while to get through the system. Maybe there are still some minor issues, but nothing I have seen that makes me want to switch to Chrome.

                                      1. 5

                                        Firefox’s implementation of WebRTC has some issues that make Jitsi scale poorly when anyone in a call is on Firefox. This is fine for small groups; it only becomes an issue if there are more than 10 or so participants.

                                        1. 2

                                          Ok, thanks for clarifying that. I can confirm I am only using it in small groups.

                                  2. 5

                                    I really don’t understand why Ubuntu pushes snaps when there are Flatpaks (desktop) and Docker (server), unless what they really want is to generate lock-in. I wish they were more collaborative and smarter about what makes them stand out (like being a polished desktop Linux). Point 1 was one of the reasons for me to switch to Fedora.

                                    1. 9

                                      I find the existence of both Flatpak and Snap confusing. They seem to solve a problem that only exists for a limited set of software within an already very limited niche of users. Web browsers on desktop Linux distros seem to be well-served by them, but how many engineer-years have gone into building these things?

                                      I suspect there’s some big benefit/use-case that I’m completely missing.

                                      1. 12

                                        I find the existence of both Flatpak and Snap confusing.

                                        This!

                                        Snap and Flatpak try to solve two completely unrelated problems: application sandboxing and package distribution, and do a notoriously bad job at each one.

                                        Application sandboxing should be an OS feature, not one requiring any action by the potentially hostile application distributors. Thus, it should be able to act upon arbitrary programs. If I want to run “ls” in a controlled container, so be it. Any application, no matter how it is distributed, must be sandboxable.

                                        Package distribution is a different thing. At this point, it seems that nearly all of the problems can be solved by distributing a static executable as a single file.

                                        1. 2

                                          If I want to run “ls” in a controlled container, so be it.

                                          That may be rather difficult. It already needs access to the whole filesystem…

                                          1. 3

                                            But it doesn’t need access to the network, or file contents and it definitely should not be allowed to change anything. Plenty of permissions to restrict.

                                            1. 2

                                              or file contents

                                              Can you restrict that on Linux? Is there a separate permission for reading files and reading directories?

                                              You’d also need a whitelist for reading some files, such as shared libraries and locale.

                                              and it definitely should not be allowed to change anything

                                              Well it has to be able to write to stdout… which could be any file descriptor.

                                              1. 1

                                                Can you restrict that on Linux? Is there a separate permission for reading files and reading directories?

                                                So long as the directory has r-x (octal 5) permissions, and the file does not have the read (r) permission, you can browse the directory but not read the file’s contents.
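
                                                A quick way to see it in action (a sketch using std on Unix; hypothetical paths, and note that root bypasses these checks):

                                                use std::fs;
                                                use std::os::unix::fs::PermissionsExt;

                                                fn main() -> std::io::Result<()> {
                                                    fs::create_dir("demo")?;
                                                    fs::write("demo/secret.txt", "hidden")?;
                                                    // Directory browsable (r-x), file unreadable (no r bits):
                                                    fs::set_permissions("demo", fs::Permissions::from_mode(0o555))?;
                                                    fs::set_permissions("demo/secret.txt", fs::Permissions::from_mode(0o000))?;
                                                    for entry in fs::read_dir("demo")? {
                                                        println!("{:?}", entry?.file_name()); // listing works
                                                    }
                                                    assert!(fs::read_to_string("demo/secret.txt").is_err()); // reading does not
                                                    Ok(())
                                                }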

                                                1. 3

                                                  No I mean is there a way to allow readdir but not read? AFAIK Linux does not have that level of granularity.

                                        2. 1

                                          This is entirely new to me too.

                                          From the wikipedia entry https://en.wikipedia.org/wiki/Snappy_(package_manager):

                                          The system is designed to work for internet of things, cloud and desktop computing.

                                          So it’s a more light-weight Docker I guess.

                                          1. 6

                                            I’m not sure how much more light-weight they can be, given that Flatpak and Snap are both using the same in-kernel container mechanisms (cgroups, namespaces, seccomp etc.) as Docker.

                                            1. 4

                                              Somewhat tangential (maybe you happen to know, or somebody else who does is reading) – is the sandboxing any good these days, and do Flathub applications/other packagers use it? About two years ago, when Flatpak was just getting hot, the flurry of “this is the future of the Linux desktop” posts convinced me to spend a few weekends with it, and it was pretty disappointing.

                                              It turned out that virtually all applications on flathub had unrestricted access to the home directory (and many of them had unrestricted access to the whole filesystem), even though they showed the pretty “sandbox” icon – arguably not Flatpak’s fault, I guess, but not very useful, and also not very reassuring (features that go almost completely unused tend to be broken in all sorts of ways, since no one gets to use them and hit the bugs). Lurking through the bug tracker also painted a pretty terrible picture – obvious bugs, some of which had had serious CVEs assigned, lingered for months. So basically it was (almost) zero sandboxing, done by a system that looked somewhat unlikely to be able to deal with really malicious applications in the first place.

                                              (Edit: I don’t mean that Flatpak, or Snap, are bad as a concept – and I also want to re-emphasize, for anyone reading this in 2020, that all of this was back in 2018 or so. But back then, this looked like years away from being anything near something you’d want to use to protect your data – it wasn’t even beta quality, it was, at best, a reasonable proof of concept.)

                                              Also, even though this was all supposed to “streamline” the distribution process so that users get access to the latest updates and security fixes more quickly, even the most popular packages were hopelessly out of date (as in weeks, or even months) in terms of security fixes. I expect at least this may have changed a bit, given the increase in popularity?

                                              Has any of this stuff changed in the last two years? Should I give it another go this weekend :-) ?

                                              (Edit: I can’t find my notes from back then but trying to google around for some of the bugs led me here: http://flatkill.org/ . There’s a lot of unwarranted snark in there, so take it with a grain of salt, but it matches my recollections pretty well…)

                                              1. 4

                                                It turned out that virtually all applications on flathub had unrestricted access to the home directory (and many of them had unrestricted access to the whole filesystem),

                                                A cursory GitHub search of the Flathub organization shows ~150-200 applications have --filesystem=host or --filesystem=home each. And close to 100 have --device=all. So it seems that a large portion is still effectively unsandboxed.

                                                Lurking through the bug tracker also painted a pretty terrible picture – obvious bugs, some of which had had serious CVEs assigned, lingered for months.

                                                This is a disaster in the making. Outside the standard SDKs that are provided through Flathub, applications compile their own hand-picked versions of… pretty much everything. Just going over a bunch of Flatpaks shows that the dependencies are out of date.

                                                That said, I see what they are aiming for. The broad permissions are caused by several issues that will probably be resolved in time: broad device permissions are often for webcam access, which should be solved by PipeWire and the corresponding portal. The home/host filesystem permissions can partially be attributed to applications which use toolkits for which the portal mechanism isn’t implemented.

                                                The problem that every Flatpak packages their own stuff is more concerning though… I know that the aim is to be distribution-independent, but it seems like a lot could be gained by allowing re-use of regular packages within Flatpaks.

                                              2. 2

                                                I’m thinking more lightweight conceptually: Docker is seen as a sysadmin/devops thing, while Snappy is more like a mobile app.

                                                1. 3

                                                  In practice however it is still a sysadmin thing.

                                        3. 4

                                          You can easily download the non-Ubuntu Chromium binary and install it without needing snap.

                                          Then you’re either stuck using PPAs (which are a no-go for certain environments) or manually updating the deb. Neither is a good option when it should be as easy as getting updates from the official repositories.

                                          1. 0

                                            I’ve found Chris’ recent posts to be increasingly histrionic. He’s otherwise been a reliable read for ages.

                                            1. 1

                                              You say that, but I’d argue it’s a serious bug, or at the very least a WTF moment.

                                              Yes, there’s the FHS - but nowhere (AFAIK) does it say that software should break if you change something like this, which isn’t even an edge case but has been done for decades.

                                              1. 1

                                                I don’t disagree with that. It seems like a poor limitation that deserved more attention from the devs once reported. And it would likely have caused problems at the last place where I was a sysadmin.

                                                What I’m complaining about is the tone with which he’s presented the issue. And it’s not limited to this post; I’ve been reading his blog for about ten years and it’s been a high-quality read for most of that time, until relatively recently, when the tone has become more entitled and (for want of a better word) whingy, which detracts from the substance of what he’s writing about.

                                          1. 1

                                            Fork is the best Git client ever, for Mac and Windows. It used to be free, but when it went to US$50 last month I paid immediately and sighed with relief that the devs now have a revenue stream and can continue working on it. It’s that good.

                                            I’m sure most of you use the Git CLI. I used to. But a good GUI is so much more efficient, letting you scroll through revision trees and inspect diffs and interactively rebase without having to fill your mental working-set with a ton of details of commands and flags.

                                            1. 1

                                              I’ve been a big fan of GitKraken for the same reasons. Although there is a free version, paying for the license is absolutely worth it!

                                            1. 29

                                              The 6-week release cycle is a red herring. If Rust didn’t have 6-week cycles, it would have bigger annual releases instead, but that has no influence on the average progress of the language.

                                              It’s like slicing the same pizza in either 4 or 42 slices, but complaining “oh no, I can’t eat 42 slices of pizza!”

                                              Rust could have had just 4 releases in its history: 2015 (1.0), 2016 (? for errors), 2018 (new modules) and 2020 (async/await), and you would call them reasonably sized, each with 1 major feature, and a bunch of minor standard library additions.

                                              Async/await is one major idiom-changing feature since 2015 that actually caused churn (IMHO totally worth it). Apart from that there have been only a couple of syntax changes, and you can apply them automatically with cargo fix or rerast.

                                              1. 17

                                                It’s like slicing the same pizza in either 4 or 42 slices, but complaining “oh no, I can’t eat 42 slices of pizza!”

                                                It’s like getting one slice of pizza every 15 minutes, while you’re trying to focus. I like pizza, but I don’t want to be interrupted with pizza 4 times. Being interrupted 42 times is worse.

                                                Timing matters. Treadmills aren’t fun as a user.

                                                1. 13

                                                  Go releases more frequently than Rust, and I don’t see anyone complaining about that. Go has had 121 releases, while Rust has had less than half of that.

                                                  The difference is that Go calls some releases minor, so people don’t count them. Rust could do the same, because most Rust releases are very minor. If it had Go’s versioning scheme it’d be on something like v1.6.

                                                  1. 20

                                                    People aren’t complaining about the frequency of Go releases because Go doesn’t change major aspects of the language, well, ever. The most you have to reckon with is an addition to the standard library. And this is a virtue.

                                                    1. 8

                                                      So, what major aspects of the language have changed since Rust 1.0, besides async and perhaps the introduction of the ? operator?

                                                      1. 10

                                                        The stability issues are more with the Rust ecosystem than the Rust language itself. People get pulled into fads and then burned when they pay the refactoring costs to move to the next one. Many of those fad crates are frameworks that impose severe workflow constraints.

                                                        Go is generally far more coherent as an overall ecosystem. This was always the intent. Rust is not so opinionated and structured. This leads to benefits and issues. Lots of weird power plays where people write frameworks to run other people’s code that would just be blatantly unnecessary in Go. It’s unnecessary in Rust, too, but people are in a bit of daze due to the complexity flying around them and it’s sometimes not so clear that they can just rely on the standard library for a lot of things without pulling in a stack of 700 dependencies to write an echo server.

                                                        1. 2

                                                          Maybe in the server/web part of the ecosystem. I am mostly using Rust for NLP/ML/data massaging and the ecosystem has been very stable.

                                                          I have also used Go for several years, but I didn’t notice much difference in volatility.

                                                          But I can imagine that it is different for networking/services, because the Go standard library has set strong standards there.

                                                        2. 6

                                                          Modules have changed a bit, but it was an optional change and only required running cargo fix once.

                                                          Way less disruptive than GOPATH -> go modules migration.

                                                      2. 5

                                                        That is kind of the point. I love both Go and Rust (if anything, I’d say I like Rust more than Go, or would if working out borrow-checker issues weren’t such a painstaking, slow process), but with Go I can update the compiler knowing that code I wrote two years ago will compile and no major libraries will start complaining. With Rust, not so much. Even in the very short time I was using it for a small project, I had to change half of my code to use async (and find a runtime for that, etc.) because a single library I wanted to use was ‘async or the highway’.

                                                        Not a very friendly experience, which is a shame because the language itself rocks.

                                                        1. 9

                                                          In Rust you can upgrade the compiler and nothing will break. The Rust team literally compiles all known Rust libraries before making a new release to ensure they don’t break stuff.

                                                          The ecosystem is serious about adherence to semver, and the compiler can seamlessly mix new and old Rust code, so you can be selective of what you upgrade. My projects that were written for Rust 1.0.0 work fine with the latest compiler.

                                                          The async addition was the only change which caused churn in the ecosystem, and Rust isn’t planning anything that big in the future.

                                                          And Go isn’t flawless either: I can’t upgrade deps in my last Go project, because the migration to Go Modules is causing me headaches.

                                                          1. 3

                                                            Ah, yeah, the migration to modules was a shit show. It took me about six months to be able to move a project to modules because a bunch of the dependencies took a while to upgrade.

                                                            Don’t get me wrong, my post wasn’t a criticism of Rust. As I said, I really enjoy the language. But big changes like async introduce big paradigm shifts that make the experience extra hard for newcomers. To be fair to Rust, Python took 3 iterations or so until they figured out a proper interface for async, while Rust figured out the interface and left the implementation to the reader… which has created another rift for some libraries.

                                                    2. 4

                                                      I can definitely agree with the author: since I do not write Rust in my day job, it is pretty hard for me to keep up with all the minor changes in the language. Also, as already stated in the article, the 6-week release cycle exacerbates the problem.

                                                      I’m not familiar with Rust’s situation, but from my own corporate experience, frequent releases can be awful because features are iterated on continuously. It would be really nice to just learn the final copy of something rather than all the intermediate steps to get there.

                                                      1. 3

                                                        Releasing a “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                                        There’s a chicken-egg problem here. Even though Rust has nightlies and betas, features are adopted and used in production only after they’re declared stable. But without using a feature for real, you can’t be sure it’s right.

                                                        Besides, lots of the changes in 6-week releases are tiny, like a new command-line flag, or allowing a few more functions to be used as initializers of global variables.

                                                        1. 6

                                                          Releasing a “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                                          Design-by-committee can be a lot more thoughtful than design-by-novice. I think this is one of the greatest misconceptions of agile.

                                                          Many of the great things we take for granted were done by committee, including our internet protocols and core infrastructure. There’s a lot of really good engineering in there. Of course there are research projects and prototyping, which are super useful, but it’s a full-time job to keep up with developments in research. Most people don’t have to care about it until it’s stable and published.

                                                          1. 2

                                                            Sorry, I shouldn’t have used an emotionally charged name like “committee”. That was not the point.

                                                            The point is that language features need iteration to be good, but for a language with a strong stability guarantee the first iteration must be the final one.

                                                            So the way around this impossible iteration is to release only the obvious core parts, so that libraries can iterate on the rest. And the rest gets blessed as official only after it proves useful.

                                                            Rust has experience here: the first API of Futures turned out to have flaws. Some interfaces caused unfixable inefficiencies. Built-in fallibility turned out to be more annoying than helpful. These things came to light only after the design was “done” and people had used it for real and built large projects around it. If Rust had held that back and waited for the full async/await to be feature-complete, it’d be a worse design, and it wouldn’t have been released yet.

                                                          2. 3

                                                            Releasing a “final copy” creates design-by-committee.

                                                            I’m not convinced that design-by-crowd is substantively different from design-by-committee.

                                                            1. 1

                                                              Releasing a “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                                              There’s a chicken-egg problem here. Even though Rust has nightlies and betas, features are adopted and used in production only after they’re declared stable. But without using a feature for real, you can’t be sure it’s right.

                                                              I deny the notion that features must be stabilized early so that they get widespread or “production” use. It may well be the case that some features don’t receive enough testing on nightly/beta and that, in order to get more users, they must hit stable, but limited testing on nightly or beta is not a reason to stabilize a feature. Either A) wait longer, until the feature has been more thoroughly tested on nightly/beta, or B) find a manner to get more testers of features on nightly/beta.

                                                              I’m not necessarily saying that’s what happened with Rust, per se, but it’s close, as I’ve seen the sentiment expressed several times over my time with Rust (since the 0.9 days).

                                                          3. 10

                                                            It’s not a red herring. There might be bigger annual releases if there weren’t 6-week releases, but you’re ignoring the main point: Rust changes frequently enough to make the 6-week release cycle meaningful. The author isn’t suggesting the same frequency of changes less often, but a lower frequency of changes - low enough, perhaps, that releasing every 6 weeks would see a few “releases” go by with no changes at all.

                                                            No one is trying to make fewer slices out of the pizza. They’re asking for a smaller pizza.

                                                            1. 7

                                                               How is adding map_or() as a shorthand for map().unwrap_or() a meaningful language change? That’s the scale of changes for the majority of the 6-week releases. For all but a handful of releases the changes are details that you can safely ignore.

                                                               Rust is very diligent about documenting every tiny detail in release notes, so if you don’t pay attention and just gloss over them, only counting the number of headings, you’re likely to get a wrong impression of what is actually happening.
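
                                                               For scale, the entire user-visible change in such a release is an equivalence like this:

                                                               fn main() {
                                                                   let x: Option<i32> = Some(2);
                                                                   // The longhand and the new shorthand do exactly the same thing:
                                                                   let a = x.map(|v| v * 10).unwrap_or(0);
                                                                   let b = x.map_or(0, |v| v * 10);
                                                                   assert_eq!(a, b);
                                                               }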

                                                              1. 3

                                                                How is adding map_or() as a shorthand for map().unwrap_or() a meaningful language change?

                                                                 I think that’s @ddevault’s point: the pizza just got bigger, but it didn’t really get better. It’s a minor thing that doesn’t really matter, but it happens often, and it’s something you may need to keep track of when you’re working with other people.

                                                                1. 9

                                                                   Rust also gets criticised for having too small a standard library that needs dependencies for the most basic things. And when it finally adds these basic things, that’s bad too…

                                                                   But the thing is — and it’s hard to explain to non-users of the language — that additions of things like map_or() are not burdensome at all. From the inside, they’re usually received as “finally! What took you so long!?”.

                                                                  • First, it follows a naming pattern already used elsewhere. It’s something you’d expect to exist already, not really a new thing. It’s more like a bugfix for “wtf? why is this missing?”.

                                                                    Back-filling of outrageously missing features is still a common thing in Rust. 1.0 was an MVP rather than a finished language. For example, Rust waited 32 releases before adding big-endian/little-endian swapping (see the sketch after this list).

                                                                  • There’s cargo clippy, which will flag overly unidiomatic code, so you don’t really need to keep track of it yourself.

                                                                  • It’s OK to totally ignore this. If your code worked without some new stdlib function, it still doesn’t have to care. And these changes are minor, so it’s not like you’ll need to read a book on a new method you notice. You’ll know what it does from its name, because Rust is still at the stage of adding the baby things.
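
                                                                  As a concrete sketch of that endianness example (presumably the to_be_bytes/from_be_bytes family stabilized in Rust 1.32):

                                                                      fn main() {
                                                                          let port: u16 = 8080;

                                                                          // Integer -> bytes in big-endian (network) byte order...
                                                                          let wire: [u8; 2] = port.to_be_bytes();
                                                                          assert_eq!(wire, [0x1f, 0x90]);

                                                                          // ...and bytes -> integer on the way back in.
                                                                          assert_eq!(u16::from_be_bytes(wire), 8080);
                                                                      }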

                                                                  1. 7

                                                                    In the Haskell world, there’s a piece of folklore called the Fairbairn Threshold, even though we have very clean syntax for composing small combinators:

                                                                    The Fairbairn threshold is the point at which the effort of looking up or keeping track of the definition is outweighed by the effort of rederiving it or inlining it.

                                                                    The term was in much more common use several years ago.

                                                                    Adding every variant on every operation to the Prelude is certainly possible given infinite time, but this of course imposes a sort of mental indexing overhead.

                                                                    The primary use of the Fairbairn threshold is as a litmus test to avoid giving names to trivial compositions, as there are a potentially explosive number of them. In particular any method whose definition isn’t much longer than its name (e.g. fooBar = foo . bar) falls below the threshold.
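
                                                                    A rough Rust analog of a below-threshold definition (the name and function here are illustrative, not from the original discussion):

                                                                        // The body is hardly longer than the name, so under the Fairbairn
                                                                        // threshold it arguably shouldn't be named at all:
                                                                        fn is_blank(s: &str) -> bool {
                                                                            s.trim().is_empty()
                                                                        }

                                                                        // Inlining `s.trim().is_empty()` at the call site reads about as
                                                                        // easily as remembering what `is_blank` means.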

                                                                    There are reasonable exceptions for especially common idioms, but it does provide a good rule of thumb.

                                                                    The effect is to encourage simple combinators that can be used in multiple situations, while avoiding naming the explosive number of combinations of those combinators.

                                                                    Given n combinators I can probably combine two of them in something like O(n^2) ways, so without the threshold as a rule of thumb you wind up with a much larger library, but no real greater utility and much higher cognitive overhead to track all the combinations.

                                                                    Further, the existence of some combinations tends to drive you to look for other ever larger combinations rather than learn how to compose combinators or spot the more general usage patterns yourself, so from a POSIWID perspective, the threshold encourages better use of the functional programming style as well.

                                                                2. 1

                                                                  Agreed. It has substantially reduced my happiness all around:

                                                                  • It’s tiring to deal with people who (sincerely) think adding features improves a language.
                                                                  • It’s disappointing that some people act like having no deprecation policy is something that makes a language “stable”/“reliable”/good for business use.
                                                                  • It’s mind-boggling to me that the potential cost of removing a feature is never factored into the cost of adding it in the first place.

                                                                  Mainstream language design is basically living with a flatmate that is slowly succumbing to his hoarding tendencies and simply doesn’t realize it.

                                                                  What I have done to keep my sanity is to …

                                                                  • freeze the version of Rust I’m targeting at Rust 1.13 (I’m not using ?, but some dependencies need support for it), and
                                                                  • play with a different approach to language design that makes me happier than just watching the constant mess of more-features-are-better.
                                                                  1. 2

                                                                    Mainstream language design is basically living with a flatmate that is slowly succumbing to his hoarding tendencies and simply doesn’t realize it.

                                                                    I like that analogy, but it omits something crucial: it equates “change” with “additional features/complexity” – but many of the changes to Rust are about removing special cases and reducing complexity.

                                                                    For example, it used to be the case that, when implementing a method on an item, you could refer to the item with Self – but only if the item was a struct, not if it was an enum. Rust 1.37 eliminated that restriction, removing one thing for me to remember.
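
                                                                    A quick sketch of what that change allows (the types are illustrative, not from the comment):

                                                                        enum Direction {
                                                                            Left,
                                                                            Right,
                                                                        }

                                                                        impl Direction {
                                                                            fn flip(&self) -> Self {
                                                                                match self {
                                                                                    // Since Rust 1.37, Self::Left works here; previously
                                                                                    // you had to spell out Direction::Left in enum impls.
                                                                                    Self::Left => Self::Right,
                                                                                    Self::Right => Self::Left,
                                                                                }
                                                                            }
                                                                        }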

                                                                    Other changes have made standard library APIs more consistent, again reducing complexity. For example, the Option type has long had a map_or method that calls a function on the Some value or, if the Option contains None, uses a default value. However, until Rust 1.41, you had to remember that Results didn’t have a map_or method (even though they have nearly all the other Option methods). Now, Results have that method too, making the standard library more consistent and simpler.

                                                                    I’m not claiming that every change has been a simplification; certainly some have not. (For example, did we really need todo!() as a shorter way to write unimplemented!() when they have exactly the same effect?).

                                                                    But some changes have been simplifications. If Rust is a flatmate that is slowly buying more stuff, it’s also a flatmate that’s throwing things out in an effort to maintain a tidy space. Which effect dominates? As a pretty heavy Rust user, my personal feeling is that the language is getting simpler over time, but I don’t have any hard evidence to back that up.

                                                                    1. 3

                                                                      But some changes have been simplifications.

                                                                      I think what you are describing is a language that keeps filling in gaps and oversights; those are probably not the worst kind of additions, but they are additions.

                                                                      If Rust is a flatmate that is slowly buying more stuff, it’s also a flatmate that’s throwing things out in an effort to maintain a tidy space.

                                                                      What has Rust thrown out? I have trouble coming up with even a single example.

                                                                      As a pretty heavy Rust user, my personal feeling is that the language is getting simpler over time, but I don’t have any hard evidence to back that up.

                                                                      How would you distinguish between the language getting simpler and you becoming more familiar with the language?

                                                                      I think this is the reason why many additions are “small, simple, obvious fixes” to expert users, but for new/occasional users they present a mountain of hundreds of additional things that have to be learned.

                                                                      1. 1

                                                                        How would you distinguish between the language getting simpler and you becoming more familiar with the language?

                                                                        That’s a fair question, and is part of the reason I added the qualification that I can only provide my personal impression – without data, it’s entirely possible that I’m mistaking my own familiarity for language simplification. But I don’t believe that’s the case, for a few reasons.

                                                                        I think this is the reason why many additions are “small, simple, obvious fixes” to expert users, but for new/occasional users they present a mountain of hundreds of additional things that have to be learned.

                                                                        I’d like to focus on the “additional things” part of what you said, because I think it’s key: if a feature is revised so that it’s consistent with several other features, then that’s one fewer thing for a new user to learn, not one more. For example, match used to treat & a bit differently and require as_ref() method calls to get the same effect, which frequently confused people learning Rust. Now, & works the same with match as it does with the rest of the language. Similarly, the 2015 Edition module system required users to format their paths differently in use statements than elsewhere. Again, that confused new users (and annoyed pretty much everyone) and, again, it’s been replaced with a simpler, more consistent, and easier-to-learn system.
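
                                                                        To make the match example concrete, here’s a sketch of the post-change behavior (the “match ergonomics” work; the function and types are illustrative):

                                                                            fn first_char(name: &Option<String>) -> Option<char> {
                                                                                // Before match ergonomics, matching a reference against
                                                                                // Some/None patterns needed `name.as_ref()` (or a deref).
                                                                                // Now matching through the reference just works, binding
                                                                                // `n: &String` automatically:
                                                                                match name {
                                                                                    Some(n) => n.chars().next(),
                                                                                    None => None,
                                                                                }
                                                                            }

                                                                            fn main() {
                                                                                assert_eq!(first_char(&Some(String::from("hi"))), Some('h'));
                                                                                assert_eq!(first_char(&None), None);
                                                                            }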

                                                                        On the other hand, you might have a point about occasional Rust users – if a user understood the old module system, then switching to the 2018 Edition involves learning something new. For the occasional user, it doesn’t matter that the new system is simpler – it’s still one more thing for them to learn.

                                                                        But for a new user, those simplifications really do make the language simpler to pick up. I firmly believe that the current edition of the Rust Book describes a language that is simpler and more approachable – and that has fewer special cases you have to “just remember” – than the version of the language described in the first edition.

                                                                        1. 1

                                                                          A lot of effort is spent “simplifying” things that “simply” shouldn’t have been added in the first place:

                                                                          • do we really need two different kind of use paths (relative and absolute)?
                                                                          • do we really need both if expressions and pattern matching?
                                                                          • do we really need ? for control flow? (sketched below)
                                                                          • do we really need to have two different ways of “invoking” things, (...) for methods (no support for named parameters) and {...} for structs (support for named parameters)?
                                                                          • do we really need the ability to write foo for foo: foo in struct initializers?

                                                                          Most often the answer is “no”, but we have it anyway because people keep conflating familiarity with simplicity.
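
                                                                          Taking the ? bullet as an example, this is the redundancy being pointed at, as a sketch (? also performs error conversion via From, which this omits):

                                                                              use std::num::ParseIntError;

                                                                              // With `?`:
                                                                              fn parse_len(s: &str) -> Result<usize, ParseIntError> {
                                                                                  let n = s.trim().parse::<usize>()?;
                                                                                  Ok(n)
                                                                              }

                                                                              // The match it is (roughly) shorthand for:
                                                                              fn parse_len_match(s: &str) -> Result<usize, ParseIntError> {
                                                                                  let n = match s.trim().parse::<usize>() {
                                                                                      Ok(n) => n,
                                                                                      Err(e) => return Err(e),
                                                                                  };
                                                                                  Ok(n)
                                                                              }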

                                                                          1. 2

                                                                            You’re describing redundancy as if it were some fault, but languages without any redundancy are a turing tarpit. Not only do we not need two kinds of paths, the whole use statement is unnecessary. We don’t even need if. Smalltalk could live without it. We don’t really need anything more than a lambda and a Y combinator, or one instruction.

                                                                            I used Rust v0.5, before it had if let, before there was try!(). It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.

                                                                            So yes, we need these things, because convenience is also important.
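
                                                                            For those who never wrote pre-if-let Rust, the difference looks roughly like this (modern syntax used for both versions):

                                                                                fn main() {
                                                                                    let maybe_port: Option<u16> = Some(8080);

                                                                                    // The old way: a full match, including an empty None arm.
                                                                                    match maybe_port {
                                                                                        Some(port) => println!("listening on {}", port),
                                                                                        None => {}
                                                                                    }

                                                                                    // With if let: the same check without the boilerplate arm.
                                                                                    if let Some(port) = maybe_port {
                                                                                        println!("listening on {}", port);
                                                                                    }
                                                                                }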

                                                                            1. 2

                                                                              You’re describing redundancy as if it were some fault, but languages without any redundancy are a turing tarpit.

                                                                              I’m very aware of the turing tarpit, and it simply doesn’t apply here. A lack of redundancy is not the problem – it’s the lack of structure.

                                                                              Not only do we not need two kinds of paths, the whole use statement is unnecessary. We don’t even need if. Smalltalk could live without it. We don’t really need anything more than a lambda and a Y combinator, or one instruction.

                                                                              Reductio ad absurdum? If you think it’s silly to question why we have both if-then-else and match, why not add ternary operators, too?

                                                                              It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.

                                                                              Pattern matching on options is pretty much always wrong, regardless of the minimalism of the design. I think the only reason Rust users do it is that it makes the borrow checker happy more easily.

                                                                              I used Rust v0.5, before it had if let, before there was try!(). It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.

                                                                              In my experience, the difference in convenience between Rust 5 years ago (which I use for my own projects) and Rust nightly (which is used by some projects I contribute to) just isn’t there.

                                                                              There is no real point in upgrading to a newer version – the only thing I get is a bigger language and I’m not really interested in that.

                                                                3. 1

                                                                  This discussion suffers from “Monday morning quarterbacking” to an extent. We now know (after the fact) which releases of Rust contained more churn than others, “churn” being defined as a change that either introduced a different (usually better, IMO) way of doing something already possible in Rust, or a fundamental change that permeated the ecosystem, either due to being the new idiomatic way or due to being the Next Big Thing that many crates jumped on early. Either way, my code needs to change, due to new warnings (and the ecosystem doesn’t care for warnings) or, since many of my crates are open source, because I’ll inevitably get a PR to switch to the new hotness.

                                                                  With that stated, my actual point is that Rust releases every 6 weeks. Without closely following upcoming releases, I don’t know if the next release (1.43 at the time of this writing) will contain something that produces churn. I don’t know if the release after that will contain big changes. So I’m left with either having to follow all releases (every 6 weeks) or closely follow upcoming releases. Either way I’m forced to stay in tune with Rust development. For many this is fine. However, in my industry (government), where dependencies must go through audits, etc., it’s really hard to keep up. If Rust had “major” (read: churn-inducing) releases every year, or say every 3 years (at new editions), that would be far, far easier to keep up with, because then I wouldn’t need to check every 6 weeks; I could check every year, or every three years, whatever it may be. Minor changes (stdlib additions, etc.) could still happen every 6 weeks, almost as Z releases (in semver X.Y.Z speak), but churn-inducing changes (Y changes) would happen on a set, much slower schedule.

                                                                  1. 2

                                                                    When your deps updated to ?, you didn’t need to change anything. When your deps started using SIMD, you didn’t need to change anything. When your deps switched to Edition 2018, you didn’t need to change anything because of that.

                                                                    Warnings from libraries are not displayed (cap-lints), so even if you use deprecated stuff, nobody will notice. You could sleep through years of Rust changes and not adopt any of them.

                                                                    AFAIK async/await was the first and only language change after Rust 1.0 that massively changed interfaces between crates, causing a necessary ecosystem-wide churn. It was one change in 5 years.

                                                                    Releases are backwards compatible, so you really don’t need to pay attention to them. You need to update the compiler to update dependencies, but this doesn’t mean you need to adopt any language changes yourself.

                                                                    The pain of going through dependency churn is real. But apart from async, it’s not caused by the compiler release cycle. Dependencies won’t stop changing just because the language doesn’t change. Look at JS for example: Node has slow releases with long LTS, the language settled down after ES2016, and IE and Safari put brakes on the speed of language evolution. And yet, everything churns all the time! People invent new frameworks weekly on the same language version.

                                                                  1. 1

                                                                    I came to say this as well. I’m very fond of zola.

                                                                  1. 1

                                                                      My favorite, especially if you’re not used to smaller form factors, is the Vortex rac3r 3 without a doubt! The Vortex pok3r is a great 60% if you’re looking for something smaller.

                                                                    1. 3

                                                                      I use a pretty stock doom-emacs with only a few additional packages

                                                                        Unlike vim, I find emacs to be much harder to simply copy others’ configs into, probably due to how insanely configurable emacs is. But at least that gets me to stick close to stock (with doom as “stock”).

                                                                      1. 18

                                                                        For folks wanting more context on how the “minimum supported Rust version” (MSRV) issue is treated in the ecosystem, this issue has a number of opinions (including my own) and some discussion: https://github.com/rust-lang/api-guidelines/issues/123

                                                                        As far as I can tell, there is no strong consensus on what to do. In practice, I’ve observed generally the following states:

                                                                        1. Some folks adopt an explicit MSRV policy but do not consider it a breaking change to increase it.
                                                                        2. Some folks adopt an explicit MSRV policy and consider it a breaking change to increase it.
                                                                        3. There is no MSRV policy, and the only guarantee you have is that it compiles on latest stable (or latest stable minus two releases).

                                                                        In general, I’ve found that (1) and (2) are usually associated with more widely used crates and generally indicates an overall more conservative approach to increasing the MSRV. (3) is generally the default though, as far as I can tell.

                                                                        There’s good reason for this. Maintaining support for older versions of Rust is a lot of thankless work, particularly if your library is still evolving or if your own crate has other dependencies with different MSRV policies. All it takes is one crate in your dependency graph to require a newer version of Rust. (Unless you’re willing to pin a dependency in a library, which is generally bad juju.) Rust’s release cycle reinforces this. It moves quickly and provides new things for folks to use all the time. Those new things are added specifically because folks have a use for them, so their use can propagate quickly in the ecosystem if a widely used crate starts using it. The general thinking here is that updating your Rust compiler should be easy. And generally speaking, it is.

                                                                        “Maturity” is perhaps the right word, but only in the sense that, over time, widely used crates will slow their pace of evolution and, consequently, slow their MSRV increases. This isn’t necessarily equivalent to saying that “maturity” equals “slow evolution,” because it is generally possible for crates to make use of newer versions of Rust without increasing their MSRV via version sniffing and conditional compilation. (Not possible in every case, but in the vast majority.) But doing this can lead to significant complexity and a greatly increased test matrix. It’s a lot of extra work, and maybe doing that extra work is what this author means by “maturity.” Chances are, though, that’s a lot of unpaid extra work, and it’s not clear to me that that is a reasonable expectation to have.
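
                                                                        For the curious, the version sniffing usually lives in a build script. A minimal sketch, assuming the version_check crate and an illustrative cfg name (autocfg is another common choice):

                                                                            // build.rs
                                                                            fn main() {
                                                                                // If the compiler is new enough, enable code paths gated
                                                                                // behind `#[cfg(has_result_map_or)]`; otherwise the crate
                                                                                // falls back to older APIs and keeps its MSRV.
                                                                                if version_check::is_min_version("1.41.0").unwrap_or(false) {
                                                                                    println!("cargo:rustc-cfg=has_result_map_or");
                                                                                }
                                                                            }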

                                                                        1. 4

                                                                          Perhaps part of the solution could be to make LTS versions of rustc and cargo? That way distro maintainers could preferentially use those, and package maintainers preferentially target those. Make the common toolchain setup procedure apt install cargo instead of curl https://sh.rustup.rs | sh and there’s at least a prayer of people preferring that. Debian 10 currently ships with rustc 1.34 for example, which IMO is a pretty good place to put a breakpoint.

                                                                          But for this to happen there needs to be agreement on what the LTS versions are. If Debian 10 ships rustc 1.34, Ubuntu 20.04 ships 1.37 and Fedora ships 1.12, then as a crate maintainer I’m not going to bother trying to target a useful minimal version, because it’s a lot of random work that will never be perfect. If everyone ships rustc 1.34, then it’s much easier to say to myself “well I’d like this shiny new feature in rustc 1.40 but I don’t really need it for now, it can just go in the next time I’m making a breaking release anyway”. This actually works in my favor, ‘cause then when a user tries to install my software on some ancient random system I can just say “sorry, you have to use rustc 1.34+ like everyone else, it’s not like that’s a big ask”. Then distro maintainers can backport rustc 1.34 to Debian 9 or 8 if they really need to, and only need to do it once as well for most people’s software to work.

                                                                          This happens already, hence why Debian 10 has gcc-7 and gcc-8 packages. It’s fine. The special cases just need to be uncommon enough that it’s not a huge hassle.

                                                                          1. 5

                                                                            Yes, people generally want some kind of LTS story. There was an RFC that was generally positively received about 1.5 years ago: https://github.com/rust-lang/rfcs/pull/2483

                                                                            It was closed due to lack of bandwidth to implement it, but it seems like something that will be revisited in the future. There’s just a ton of other stuff going on right now that is soaking up team bandwidth, mostly in the form of implementing already merged RFCs.

                                                                            1. 4

                                                                              It would be really sad to let Debian hold back Rust version adoption in the ecosystem the way Debian gets to hold back C++ version adoption via frozen GCC.

                                                                              It seems to me it would be a major strategic blunder for Rust to do an LTS instead of the current situation.

                                                                              1. 2

                                                                                Is Debian a factor anymore? I mean, it was always pretty backwards, but does anybody use it or care for it anymore? How independent is Ubuntu from them?

                                                                                I only use fedora/centos/rhel or windows for work. I have only seen Ubuntu in use by others in large real-world deployments, but Debian? Never.

                                                                                1. 3

                                                                                  Is Debian a factor anymore? I mean it was always pretty backwards, but does anybody use it, care for it anymore? How independent is Ubuntu from them?

                                                                                  People do care about Debian and use Debian. That’s fine. What’s not fine is acting entitled to having code from outside the Debian stable archive build with the compilers shipped by Debian stable.

                                                                                  As Ubuntu LTS releases get older, they have similar ecosystem problems as Debian stable generally, but in the case of Rust in particular, Ubuntu updates Rust on the non-ESR Firefox cycle, so Rust is exempt from being frozen in Ubuntu. (Squandering this exemption by doing a Rust LTS would be a huge blunder for Rust in my opinion.)

                                                                                  In my anecdotal experience entitlement to have out-of-archive code build with in-archive compilers is less of a problem with RHEL. People seem to have a better understanding that if you use RHEL, you are paying Red Hat to deal with being frozen in time instead of being frozen in time being a community endeavor beyond the distro itself. Edited to add: Furthermore, in the case of Rust specifically, Red Hat provides a rolling toolchain for RHEL. It doesn’t roll every six weeks. IIRC, it updates about every third Rust upstream release.

                                                                                  1. 3

                                                                                    The company I work at (an ISP & ISTP) uses Debian as the operating system on almost all virtual machines running core software which requires n nines of uptime.

                                                                                    1. 3

                                                                                      I’ve found Debian Stable to be perfectly fine for desktop and server use. It just works, and upgrades are generally pretty smooth. Clearly, you have different experiences, but that doesn’t make Debian “backwards”.

                                                                                      1. 1

                                                                                        One department at my university has been mostly-Debian for 15+ years.

                                                                                        1. 0

                                                                                          I have seen Debian at a university department too, but not at places where actual money is made or work is getting done. I had to use pkgsrc there to get fresh packages as a user, to be able to get my stuff done.

                                                                                          University departments can afford to be backwards, because they are wasting other people’s time and money with that.

                                                                                          1. 3

                                                                                            Every place that I have worked primarily uses Debian or a Debian derivative. (Google used Ubuntu on workstations; at [Shiny consumer products, inc] the server that I was deploying on was Debian, despite the fact that they have their own server OS and they even supported it at the time; and the rest have been smaller firms or I’m under NDA and can’t discuss them). Except for Google, it was always Debian stable. So no, not just universities.

                                                                                            1. 1

                                                                                              BSD Unix was developed at a university.

                                                                                              Linus attended a university when starting to develop the Linux kernel.

                                                                                              The entire ethos and worldview of Free Software is inspired by RMS’ time at university.

                                                                                              The programming darling du jour, Haskell, is an offshoot of an academic project.

                                                                                              I’m really sad so much time and energy and other people’s money have been wasted on these useless things…

                                                                                              1. 2

                                                                                                Nice strawman!

                                                                                                And the infrastructure supporting these was just as backwards for its time as running Debian is now, wasting the time of students and tutors with outdated tools provided by the host institution…

                                                                                                1. 1

                                                                                                  In the comment I replied to first, you write:

                                                                                                  […] a university department too, but not at places where actual money is made or work is getting done

                                                                                                  University departments can afford to be backwards, because they are wasting other people’s time and money with that.

                                                                                                  (my emphasis)

                                                                                                  I find it hard to read these quotes in any other way than you believe that universities are a waste of time and money…

                                                                                                  edit: clarified source of quotes

                                                                                                  1. 4

                                                                                                    I can also mis-quote:

                                                                                                    I find it hard to read […]

                                                                                                    But I’d actually rather read and parse your sentences in their completeness.

                                                                                                    My claims were:

                                                                                                    a) I have only seen Debian used at places where efficiency is not a requirement
                                                                                                    b) Universities are such places

                                                                                                    I didn’t claim they don’t produce any useful things:

                                                                                                    […] University departments can afford to be backwards, because they are wasting other people’s time and money with that.

                                                                                                    Which should be parsed as: university departments are wasting other people’s time and money by not using proper tools and infrastructure, for example by using outdated (free) software. They are being inefficient. They waste student and tutor time, and thus taxpayer money, when not using better available free tools, but it doesn’t matter to them, as it doesn’t show up on their balance sheet. Tutors and students are already expected to do a lot of “off-work hours” tasks to get their rewards: grades or money.

                                                                                                    And yes, they are being inefficient:

                                                                                                    • I had to find floppy disks in 2009 to be able to get my mandatory measurement data off a DOS 5.0 machine at a lab. It was hard to buy them, and hard to find a place where I could read them… This one can be justified, as expensive specialized measurement equipment was in use and only legacy tools supported it.
                                                                                                    • I had to do my assignments with software available only at the lab, running some then-current Debian version shipping only outdated packages. OpenOffice kept crashing, and the outdated tools were a constant annoyance. As a student, my time was wasted. (Until I installed pkgsrc and rolled my own up-to-date tools.)
                                                                                                    • At a different university I have seen students working in Dosbox in 2015, writing 16-bit protected-mode assembly in edit.com and compiling with some ancient MS assembler, because the department thought the basics of assembly programming hadn’t changed since they introduced the curriculum, so they wouldn’t update the tools or the curriculum. They waste everyone’s money; the students won’t use any of it in real life, because they are not properly supervised, as they would be if they were living off the market.
                                                                                                    1. 3

                                                                                                      Thanks for clarifying.

                                                                                                      I realize it might be hard to see for you now, but I can assure you that “the real world, governed by the market” can be just as wasteful and inefficient as a university.

                                                                                                      1. 2

                                                                                                        Unfortunately that is also true. I have seen the “bullshit jobs” (a nice book, btw) side of business from the inside (I was partly a box-ticker for a time), but the enormous waste I saw at universities makes me feel that the useful stuff coming out of them is the exception, the result of herculean efforts by a few working against all odds, with whole institutions working on strangling the people/projects that lead to meaningful results.

                                                                                                        Wasting one’s own money is one thing; I don’t care that much about that. Wasting taxpayer money is not a good move, but to some extent I can tolerate it… Wasting talent and other people’s time is what really infuriates me.

                                                                                              2. 1

                                                                                                I had to use pkgsrc there to get fresh packages

                                                                                                Did you have root privileges as a student?

                                                                                                1. 2

                                                                                                  pkgsrc supports unprivileged mode!

                                                                                                  https://www.netbsd.org/docs/pkgsrc/platforms.html#bootstrapping-pkgsrc

                                                                                                  It worked like a charm.

                                                                                                  But I did actually have root privileges, as the guy responsible for the lab was overburdened and sometimes some of us he trusted helped other students. Still I didn’t use that to alter the installed system, as that would be out of my mandate.

                                                                                        2. 1

                                                                                          Debian 10 currently ships with rustc 1.34 for example, which IMO is a pretty good place to put a breakpoint.

                                                                                          1.34 has neither futures nor async/await, which seriously impact code design. Do I really have to wait for Debian 11 in 2021 to use them?

                                                                                          1. 2

                                                                                            No, if you need them then install a newer rustc and use them. But there’s plenty of code that also doesn’t need futures or async/await.

                                                                                        3. 3

                                                                                          Wow, I wasn’t aware that this issue has an acronym and even a place for discussion. Thanks for the pointer!

                                                                                          widely used crates will slow their pace of evolution and, consequently, slow their MSRV increases.

                                                                                          Exactly what I’m hoping for, and precisely the reason I’m not jumping off the ship :)

                                                                                          maybe doing that extra work is what this author means by “maturity.”

                                                                                          In part, yes, that’s what I meant. The other possibility is to hold off adopting new APIs (as you did with alloc in regex; thanks!). I understand both options are a PITA for library maintainers, and might not even make sense, economy-wise, for unpaid maintainers. Perhaps I should’ve used “self-restraint” instead of “maturity”, but that probably has some unwanted connotations as well.

                                                                                          1. 2

                                                                                            Here’s a cargo subcommand (cargo msrv-table) I hacked together (warning, just a hacky PoC) that displays the MSRV by crate version for any particular crate.

                                                                                        1. 5

                                                                                          I believe the issue here is two-fold: there is no standard practice around the “minimum supported Rust version” (MSRV), and projects typically don’t provide a table mapping MSRV to x.y versions (in the semver sense). As @burntsushi stated, there has been quite a bit of discussion around MSRV practices, and his comment lists the three general (but not standard, as there is none) practices crates adopt.

                                                                                          My personal opinion (and that of my projects) is that an MSRV change should trigger at a minimum a minor version bump, thus allowing downstream crates to use ^x.y version locks in their Cargo.toml (only increase patch version automatically).

                                                                                          Adding an easy table of “Project ver a.b has MSRV of 1.16, while c.d has an MSRV of 1.24, etc.” would make it easier for downstream crates to not only pick a version lock, but upgrade knowingly. Right now it’s a lot of trial and error.

                                                                                          This doesn’t fix everything, as typically there is very little in the way of back-porting features/support to older project versions that coincide with older Rust versions. However, for an unpaid project maintainer, providing the above two items would be a large step in the right direction.

                                                                                          1. 4

                                                                                            Right, yeah. For 1.x crates (or beyond), I generally adhere to the “only bump MSRV in a minor version” rule. I think you were the one who started that. :-)

                                                                                            1. 2

                                                                                              Ah yes, I should have stated I meant >= 1.x, as I also view 0.x as the wild west where (almost) anything goes :-)

                                                                                            2. 2

                                                                                              Did you mean to say ~x.y? Caret (^) is the default. I would probably advise against using ~ deps for libraries, because they can lead to genuinely unsatisfiable dependency graphs. ^-requirements are always satisfiable (with the exception of the links flag, which is an orthogonal thing).
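
                                                                                              For readers unfamiliar with the operators, a sketch of the difference in Cargo.toml terms (crate names and versions are made up):

                                                                                                  [dependencies]
                                                                                                  # Caret (the default, so ^ can be omitted): >=1.2.0, <2.0.0.
                                                                                                  # Any semver-compatible release satisfies it.
                                                                                                  foo = "^1.2"

                                                                                                  # Tilde: >=1.2.0, <1.3.0. If another library in the graph
                                                                                                  # requires ~1.3 of the same crate, no single version can
                                                                                                  # satisfy both, even though 1.2 and 1.3 are semver-compatible.
                                                                                                  bar = "~1.2"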

                                                                                              1. 1

                                                                                                I did, thanks for catching that!

                                                                                            1. 2

                                                                                              Regolith has been my daily driver ever since I learned about it a few months ago (prior to Regolith it was Xubuntu base with custom i3 install, or Fedora w/XFCE base with same custom i3 install). Regolith especially excels for laptops, having that little touch of DE integration makes things like function keys, suspend, etc. Just Work. All this without losing a minimal i3 environment, where I had to spend exactly zero time setting it up. I couldn’t be happier!

                                                                                              1. 5

                                                                                              I’m looking forward to the rest of the series, as I’m a fan of the author and everything they’ve done for Rust. However, with only the first article out thus far, which merely discusses the components that may cause slow compilation, the series so far leads the reader in an overly negative direction, IMO.

                                                                                              Rust compile times aren’t great, but I don’t believe they’re as bad as the author is letting on thus far. Unless your dev cycle relies on CI and full test-suite runs (which require full rebuilds), the compile times aren’t too bad. A project I was responsible for at work used to take ~3-5 minutes for a full build, if I remember correctly. By removing some unnecessary generics, feature-gating some derived impls, feature-gating esoteric functionality, and re-working some macros as well as our build script, the compile times were down to around a minute, which meant partial builds took mere seconds. That, along with test filtering, meant the dev-test-repeat cycle was very quick. Now, it could also be argued that feature gates increase test-path complexity, but that’s what our full test suite and CI are for.
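
                                                                                              Feature-gating a derived impl is a small change; here’s a sketch of the pattern (the serde feature/dependency is illustrative, not from the project above):

                                                                                                  // Cargo.toml would declare serde as an optional dependency and
                                                                                                  // a feature of the same name; the heavy derives then only
                                                                                                  // compile when a consumer actually asks for them:
                                                                                                  #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
                                                                                                  #[derive(Debug, Clone)]
                                                                                                  pub struct Config {
                                                                                                      pub threads: usize,
                                                                                                  }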

                                                                                                Granted, I know our particular anecdote isn’t indicative of all workloads, or even representative of large Servo style projects, but for your average medium sized project I don’t feel Rust compile times hurt productivity all that much.

                                                                                                …now for full re-builds or CI reliant workloads, yes I’m very grateful for every iota of compile time improvements!

                                                                                                1. 7

                                                                                                  It is also subjective. For a C++ developer 5 minutes feels ok. If you are used to Go or D, then a single minute feels slow.

                                                                                                  1. 4

                                                                                                    Personally, slow compile times are one of my biggest concerns about Rust. This is bad enough for a normal edit/compile/run cycle, but it’s twice as bad for integration tests (cargo test --tests) which have to link a new binary for each test.

                                                                                                    Of course, this is partly because I have a slow computer (I have a laptop with an HDD), but I don’t think I should need the latest and greatest technology just to get work done without being frustrated. Anecodatally, my project with ~90 dependencies is ~8 seconds for an incremental rebuild, ~30 seconds just to build the integration tests incrementally, and over 5 minutes for a full build.