1. 9

    I agree the String thing is confusing; in fact, the author didn’t list quite a few of the string types that exist, and covered nowhere near the number of string conversions or ways to accept string-y arguments. However, it’s one of those cases where the underlying problem Rust solved with this confusion actually exists in (nearly?) all languages. Rust, in typical Rust fashion, just makes you aware of all the footguns up front. Once you wrap your head around when to use each type, and what their various tradeoffs are, it makes perfect sense. So much so that I’ll get frustrated with other languages that paper over these details, leading to bugs. Bottom line: strings in general are hard, really hard.

    1. 3

      I think the distinctions that Rust makes are useful and necessary. However, I think one of the problems is that the types are confusingly named. I think String should have been called StringBuf, and OsString OsStringBuf, just as you have Path and PathBuf.

      I think an additional problem that makes slices ([T]) and string slices (str) difficult to understand is that they are unsized, built-in types. So people have to understand the difference between e.g. &str and str, and why you cannot just put str in e.g. a struct. I know that there are good reasons for why string references are as they are, but from a learning-curve perspective, I think it would have been easier if string slices were something along the lines of a simple copyable type:

      struct StringSlice<'a> {
        buf: &'a StringBuf,  // reference to the owning buffer
        lower: usize,        // start of the slice, in bytes
        upper: usize,        // end of the slice (exclusive), in bytes
      }
      
      1. 1

        Having references to unsized slices is necessary to avoid a performance hit. The StringSlice type above is one word larger than &str (which is just a pointer and a length). More importantly, it has an additional layer of indirection: buf points to the StringBuf, which points to the data, while &str points directly at the relevant data.
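        To make the size difference concrete, here is a quick check (my illustration, not the parent’s, with std’s String standing in for the proposed StringBuf):

        use std::mem::size_of;

        // Hypothetical indirect slice from the comment above; String
        // stands in for the proposed StringBuf.
        struct StringSlice<'a> {
          buf: &'a String,
          lower: usize,
          upper: usize,
        }

        fn main() {
          // &str is a pointer plus a length: 2 words (16 bytes on 64-bit).
          println!("&str: {} bytes", size_of::<&str>());
          // The indirect version needs a third word for the extra bound.
          println!("StringSlice: {} bytes", size_of::<StringSlice>());
        }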

        1. 2

          You don’t have to convince me. Like I said, there are good reasons for the current representation. It’s just that it makes (string) slices more opaque.

          This is also quite the opposite of many other types in Rust, which are transparent and can be understood just by reading the standard library.

          One of my favorite examples is the BinaryHeap::peek_mut method, which would be completely unsafe in another language (since you can modify the tip of the heap, which invalidates the heap property), but in Rust it can be done without any magic. The borrow system ensures that you can only have one mutable reference (so no one else can have a view of the heap while the heap property is temporarily broken), and the Drop implementation of PeekMut takes care of restoring the heap property when necessary.
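          For readers who haven’t seen it, a minimal sketch of what that looks like (the heap contents are my own example):

          use std::collections::BinaryHeap;

          fn main() {
            let mut heap = BinaryHeap::from(vec![1, 5, 3]);
            if let Some(mut top) = heap.peek_mut() {
              // While PeekMut is alive the heap is exclusively borrowed,
              // so nobody can observe the broken heap property.
              *top = 0;
            } // PeekMut's Drop sifts the element down, restoring the heap.
            assert_eq!(heap.peek(), Some(&3));
          }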

    1. -1

      So, the author hates snaps, and is incapable of downloading and installing the deb themselves using dpkg?

      I mean, I get it. There are aspects of snaps that are ugly and downright unpleasant for grizzled UNIX veterans (I’m looking at you, oddball sandboxed configuration directory locations!) but they’re almost undeniably a boon for the average end user.

      Ubuntu is designed to be the Linux distribution for everyone, and that includes decidedly non-technical users who want to just install it and have it Just Work (TM).

      Now, I can just HEAR you revving up your keyboard for a scathing retort about how they don’t Just Work for your use case, but you’re most likely a highly skilled technical practitioner with years of experience at the UNIX command line and very highly refined tastes around things like packaging, layout and software installation.

      And that’s great! But maybe you should consider a distribution which expects that and gives you that control by default, rather than one aimed at the lowest common denominator, whose goal is to bring Linux to Uncle Alvin, who’s 92 and just wants a way to browse his Fox News, read his email, and maybe buy a thing or two online.

      1. 3

        [The author] is incapable of downloading and installing the deb themselves using dpkg?

        I think the concern is more that in certain circumstances that .deb simply installs a snap behind the scenes. Further, the worry of many is that this will continue to be the trend, where even more traditional packages are replaced by snaps.

        Sure, you’ll still have PPAs and can download a random deb from a website (if someone supplies it), but that’s a far worse option than getting it from an official source.

        Ubuntu is designed to be the Linux distribution for everyone, and that includes decidedly non-technical users who want to just install it and have it Just Work (TM).

        Yes and no. They’re also marketed heavily towards server and enterprise environments, and Canonical is pushing snaps just as hard in those places too (look at LXD, kernel live patching, even things like NextCloud marketing a snap install, etc.).

        My personal gripe with snaps is just that the marketing doesn’t match the product. I don’t have good experiences with snaps outside of Ubuntu-based distros, even though they’re marketed as perfectly cross-distro. My personal fear is that more and more companies will release snaps of their products because of how hard Canonical is pushing them, while on other distros the experience suffers.

        1. 2

          My personal gripe with snaps is just that the marketing doesn’t match the product. I don’t have good experiences with snaps outside of Ubuntu-based distros, even though they’re marketed as perfectly cross-distro. My personal fear is that more and more companies will release snaps of their products because of how hard Canonical is pushing them, while on other distros the experience suffers.

          That’s valid. As I mentioned, I’ve had mixed success with snaps even on Ubuntu (there’s a tendency for snaps to be contributed that are either busted out of the box or become busted very quickly and are never fixed).

          I’m also a bit frustrated that Snap versus Flatpak is a thing because more fragmentation is exactly NOT what the Linux desktop needs.

      1. 3

        Maybe it’s just the Twitter format, but it’s hard to judge the points. Some points I agree with. For others, the criticism only goes surface deep, so I’m not sure what they’re referring to, and it’s hard to judge whether what they’re experiencing is common or they’re just doing something against the grain.

        Also, some points come off as, “I wanted to do X, but Rust wants me to Y. I know better than Rust and I hate that it is putting up road blocks to letting me do what I want.” While this can very much be true in some situations, and I’m not doubting the author’s ability to write correct programs, the number of times I’ve felt the same way, only to find out later that what I wanted to do actually WAS flawed in some way, is higher than I’d like to admit.

        1. 2

          “I wanted to do X, but Rust wants me to Y. I know better than Rust and I hate that it is putting up road blocks to letting me do what I want.”

          I don’t read any of the points to mean that. Instead, my understanding is that the author is saying that when ‘trying to go against the grain’ you either have to use unsafe to ‘shut rustc up’ or rely on complex language features, verbose code, or external crates, and that neither option is good. E.g.:

          In Rust there will be a tension between simple but plenty kernels of unsafe, and trying to avoid unsafe as much as possible using complex language features.

          The purpose of placating the borrow checker is to guarantee properties of the code, e.g. no data races. That says nothing of the purpose the code is written for; that remains the task of a human†. Verbose, hard-to-read code impedes this. So in some cases there is a tension between writing simple code wrapped in unsafe and writing harder-to-understand code that has some [important] verified properties.

          †: Yes, I imagine it is possible to encode business rules in Agda or Coq, but that is hardly the common case when writing Rust.

        1. 6

          As much as I dislike snap, this post is overly dramatic. You can easily download the non-Ubuntu Chromium binary and install it without the need for snap.

          The main problems of snap, which are “irreconcilable differences” that will alienate a part of the population, are:

          1. hardcoded home directory pollution
          2. user home must be inside /home/
          3. cannot disable the automatic update feature
          1. 9

            You can easily download the non-Ubuntu Chromium binary and install it without the need for snap.

            I suppose they want to use official packages from a reputable repository. Installing binaries manually really is bad practice for security and maintainability reasons.

            1. 2

              I installed the official chromium .deb for Debian and it works flawlessly. (I prefer firefox, but jitsi does not work well in firefox).

              1. 4

                Is that a repository, or a single .deb file? If the latter, that doesn’t get updates along with regular system maintenance. If it’s an external repository, that could be a decent solution depending on how much you trust it.

                1. 2

                  If Chromium is anything like regular Chrome or Firefox, it’s updated out of cycle with the rest of the system anyway, unless you happen to turn auto-updates off.

                  1. 4

                    At work I’m using Chromium and Firefox from the Debian repositories. Auto-updates are turned off, and updates come through the standard system update mechanism.

                    Having random binaries update themselves in a system sounds like a recipe for madness to a sysadmin. Also, how does that even work in a multi-user system where they’re installed system wide? Does that mean these binaries are setuid root or something?

                2. 2

                  jitsi does not work well in firefox

                  I keep hearing this, but I use jitsi from firefox every day and don’t have any issues. There was a feature missing in firefox about a year ago that was preventing jitsi from working; that was reported and eventually fixed, although it took a while to get through the system. Maybe there are still some minor issues, but nothing I have seen makes me want to switch to chrome.

                  1. 5

                    Firefox’s implementation of WebRTC has some issues that make Jitsi scale poorly when anyone in a call is on Firefox. This is fine for small groups; it only becomes an issue if there’s more than 10 or so participants.

                    1. 2

                      Ok, thanks for clarifying that. I can confirm I am only using it in small groups.

              2. 5

                I really don’t understand why Ubuntu pushes Snaps when there are Flatpaks (desktop) and Docker (server), unless what they really want is to generate lock-in. I wish they were more collaborative and smarter about what makes them stand out (like being a polished desktop Linux). Point 1 was one of the reasons I switched to Fedora.

                1. 9

                  I find the existence of both Flatpak and Snap confusing. They seem to solve a problem that only exists for a limited set of software within an already very limited niche of users. Web browsers on desktop Linux distros seem to be well-served by them, but how many engineer-years have gone into building these things?

                  I suspect there’s some big benefit/use-case that I’m completely missing.

                  1. 12

                    I find the existence of both Flatpak and Snap confusing.

                    This!

                    Snap and Flatpak try to solve two completely unrelated problems, application sandboxing and package distribution, and do a notoriously bad job at each one.

                    Application sandboxing should be an OS feature, not requiring any action by the potentially hostile application distributors. Thus, it should be able to act upon arbitrary programs. If I want to run “ls” in a controlled container, so be it. Any application, no matter how it is distributed, must be sandboxable.

                    Package distribution is a different thing. At this point, it seems that nearly all of the problems can be solved by distributing a static executable as a single file.

                    1. 2

                      If I want to run “ls” in a controlled container, so be it.

                      That may be rather difficult. It already needs access to the whole filesystem…

                      1. 3

                        But it doesn’t need access to the network or file contents, and it definitely should not be allowed to change anything. Plenty of permissions to restrict.

                        1. 2

                          or file contents

                          Can you restrict that on Linux? Is there a separate permission for reading files and reading directories?

                          You’d also need a whitelist for reading some files, such as shared libraries and locale.

                          and it definitely should not be allowed to change anything

                          Well it has to be able to write to stdout… which could be any file descriptor.

                          1. 1

                            Can you restrict that on Linux? Is there a separate permission for reading files and reading directories?

                            So long as the directory has r-x (octal 5) permissions and the file does not have the read (r) permission, you can browse the directory but not read the file’s contents.

                            1. 3

                              No I mean is there a way to allow readdir but not read? AFAIK Linux does not have that level of granularity.

                    2. 1

                      This is entirely new to me too.

                      From the wikipedia entry https://en.wikipedia.org/wiki/Snappy_(package_manager):

                      The system is designed to work for internet of things, cloud and desktop computing.

                      So it’s a more light-weight Docker I guess.

                      1. 6

                        I’m not sure how much more light-weight they can be, given that Flatpak and Snap are both using the same in-kernel container mechanisms (cgroups, namespaces, seccomp etc.) as Docker.

                        1. 4

                          Somewhat tangential (maybe you happen to know, or somebody else who does is reading) – is the sandboxing any good these days, and do Flathub applications/other packagers use them? About two years ago, when Flatpak was just getting hot, the flurry of “this is the future of the Linux desktop” posts convinced me to spend a few weekends with it, and it was pretty disappointing.

                          It turned out that virtually all applications on flathub had unrestricted access to the home directory (and many of them had unrestricted access to the whole filesystem), even though it showed the pretty “sandbox” icon – arguably not Flatpak’s fault, I guess, but not very useful, and also not very reassuring (features that go almost completely unused tend to be broken in all sorts of ways, since no one gets to use them and hit the bugs). Lurking through the bug tracker also painted a pretty terrible picture – obvious bugs, some of which had had serious CVEs assigned, lingered for months. So basically it was (almost) zero sandboxing, done by a system that looked somewhat unlikely to be able to deal with really malicious applications in the first place.

                          (Edit: I don’t mean that Flatpak, or Snap, are bad as a concept – and I also want to re-emphasize, for anyone reading this in 2020, that all of this was back in 2018 or so. But back then, this looked like years away from being anything near something you’d want to use to protect your data – it wasn’t even beta quality, it was, at best, a reasonable proof of concept.)

                          Also, even though this was all supposed to “streamline” the distribution process so that users get access to the latest updates and security fixes more quickly, even the most popular packages were hopelessly out of date (as in weeks, or even months) in terms of security fixes. I expect at least this may have changed a bit, given the increase in popularity?

                          Has any of this stuff changed in the last two years? Should I give it another go this weekend :-) ?

                          (Edit: I can’t find my notes from back then but trying to google around for some of the bugs led me here: http://flatkill.org/ . There’s a lot of unwarranted snark in there, so take it with a grain of salt, but it matches my recollections pretty well…)

                          1. 4

                            It turned out that virtually all applications on flathub had unrestricted access to the home directory (and many of them had unrestricted access to the whole filesystem),

                            A cursory GitHub search of the Flathub organization shows ~150-200 applications have --filesystem=host or --filesystem=home each. And close to 100 have --device=all. So it seems that a large portion is still effectively unsandboxed.

                            Lurking through the bug tracker also painted a pretty terrible picture – obvious bugs, some of which had had serious enough CVEs assigned for months, lingered for months.

                            This is a disaster in the making. Outside the standard SDKs that are provided through FlatHub, applications compile their own picked versions of… pretty much everything. Just going over a bunch of Flatpaks shows that the dependencies are out of date.

                            That said, I see what they are aiming for. The broad permissions are caused by several issues that will probably be resolved in time: broad device permissions are often for webcam access, which should be solved by Pipewire and the corresponding portal. The home/host filesystem permissions can partially be attributed to applications using toolkits for which the portal mechanism isn’t implemented.

                            The problem that every Flatpak packages their own stuff is more concerning though… I know that the aim is to be distribution-independent, but it seems like a lot could be gained by allowing re-use of regular packages within Flatpaks.

                          2. 2

                            I’m thinking more lightweight conceptually. Docker is seen as a sysadmin/devops thing, Snappy is more like a mobile app.

                            1. 3

                              In practice however it is still a sysadmin thing.

                    3. 4

                      You can easily download the non-Ubuntu Chromium binary and install it without the need for snap.

                      Then you’re either stuck using PPAs (which are a no-go for certain environments) or manually updating the DEB. Neither is a good option when it should be as easy as getting updates from the official repositories.

                      1. 0

                        I’ve found Chris’ recent posts to be increasingly histrionic. He’s otherwise been a reliable read for ages.

                        1. 1

                          You say that, but I’d say it’s a serious bug, or at the very least a WTF moment.

                          Yes, there’s the FHS, but nowhere does it say (AFAIK) that software should break if you change something like this, which isn’t even an edge case and has been done for decades.

                          1. 1

                            I don’t disagree with that. It seems like a poor limitation that deserved more attention from the devs once reported. And it would have likely caused problems at the last place I was a Sysadmin.

                            What I’m complaining about is the tone with which he’s presented the issue. And it’s not limited to this post; I’ve been reading his blog for about ten years and it’s been a high-quality read for most of that time, until relatively recently, when the tone has become more entitled and (for want of a better word) whingy, which detracts from the substance of what he’s writing about.

                      1. 1

                        Fork is the best Git client ever, for Mac and Windows. It used to be free, but when it went to US$50 last month I paid immediately and sighed with relief that the devs now have a revenue stream and can continue working on it. It’s that good.

                        I’m sure most of you use the Git CLI. I used to. But a good GUI is so much more efficient, letting you scroll through revision trees and inspect diffs and interactively rebase without having to fill your mental working-set with a ton of details of commands and flags.

                        1. 1

                          I’ve been a big fan of GitKraken for the same reasons. Although there is a free version, paying for the license is absolutely worth it!

                        1. 29

                          The 6-week release cycle is a red herring. If Rust didn’t have 6-week cycles, it would have bigger annual releases instead, but that has no influence on the average progress of the language.

                          It’s like slicing the same pizza in either 4 or 42 slices, but complaining “oh no, I can’t eat 42 slices of pizza!”

                          Rust could have had just 4 releases in its history: 2015 (1.0), 2016 (? for errors), 2018 (new modules) and 2020 (async/await), and you would call them reasonably sized, each with 1 major feature, and a bunch of minor standard library additions.

                          Async/await is one major idiom-changing feature since 2015 that actually caused churn (IMHO totally worth it). Apart from that there have been only a couple of syntax changes, and you can apply them automatically with cargo fix or rerast.

                          1. 17

                            It’s like slicing the same pizza in either 4 or 42 slices, but complaining “oh no, I can’t eat 42 slices of pizza!”

                            It’s like getting one slice of pizza every 15 minutes, while you’re trying to focus. I like pizza, but I don’t want to be interrupted with pizza 4 times. Being interrupted 42 times is worse.

                            Timing matters. Treadmills aren’t fun as a user.

                            1. 13

                              Go releases more frequently than Rust, and I don’t see anyone complaining about that. Go has had 121 releases, while Rust has had less than half that.

                              The difference is that Go calls some releases minor, so people don’t count them. Rust could do the same, because most Rust releases are very minor. If it had Go’s versioning scheme it’d be on something like v1.6.

                              1. 20

                                People aren’t complaining about the frequency of Go releases because Go doesn’t change major aspects of the language, well, ever. The most you have to reckon with is an addition to the standard library. And this is a virtue.

                                1. 8

                                  So, what major aspects of the language changed since Rust 1.0, besides async and perhaps the introduction of the ? operator?
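                                  (For context, since ? comes up a lot below: it’s essentially shorthand for matching on a Result and returning early on the error arm. A rough sketch:)

                                  use std::num::ParseIntError;

                                  fn double(s: &str) -> Result<i32, ParseIntError> {
                                    let n = s.parse::<i32>()?; // on Err, return the error to the caller
                                    // The line above expands to roughly:
                                    // let n = match s.parse::<i32>() {
                                    //   Ok(v) => v,
                                    //   Err(e) => return Err(e),
                                    // };
                                    Ok(n * 2)
                                  }

                                  fn main() {
                                    println!("{:?}", double("21")); // Ok(42)
                                  }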

                                  1. 10

                                    The stability issues are more with the Rust ecosystem than the Rust language itself. People get pulled into fads and then burned when they pay the refactoring costs to move to the next one. Many of those fad crates are frameworks that impose severe workflow constraints.

                                    Go is generally far more coherent as an overall ecosystem. This was always the intent. Rust is not so opinionated and structured. This leads to benefits and issues. Lots of weird power plays where people write frameworks to run other people’s code that would just be blatantly unnecessary in Go. It’s unnecessary in Rust, too, but people are in a bit of a daze due to the complexity flying around them, and it’s sometimes not so clear that they can just rely on the standard library for a lot of things without pulling in a stack of 700 dependencies to write an echo server.
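                                    (To make that last point concrete, a std-only echo server really is just a few lines; a minimal sketch, not hardened for real use:)

                                    use std::io::{Read, Write};
                                    use std::net::TcpListener;

                                    fn main() -> std::io::Result<()> {
                                      let listener = TcpListener::bind("127.0.0.1:7000")?;
                                      for stream in listener.incoming() {
                                        let mut stream = stream?;
                                        let mut buf = [0u8; 1024];
                                        // Echo until the client closes. Being a sketch, any I/O
                                        // error takes the whole server down via `?`.
                                        loop {
                                          let n = stream.read(&mut buf)?;
                                          if n == 0 {
                                            break;
                                          }
                                          stream.write_all(&buf[..n])?;
                                        }
                                      }
                                      Ok(())
                                    }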

                                    1. 2

                                      Maybe in the server/web part of the ecosystem. I am mostly using Rust for NLP/ML/data massaging and the ecosystem has been very stable.

                                      I have also used Go for several years, but I didn’t notice much difference in the volatility.

                                      But I can imagine that it is different for networking/services, because the Go standard library has set strong standards there.

                                    2. 6

                                      Modules have changed a bit, but it was an optional change and only required running cargo fix once.

                                      Way less disruptive than GOPATH -> go modules migration.

                                  2. 5

                                    That is kind of the point. I love both Go and Rust (if anything, I’d say I like Rust more than Go, if working out borrow checker issues weren’t such a painstaking, slow process), but with Go I can update the compiler knowing code I wrote two years ago will compile and no major libraries will start complaining. With Rust, not so much. Even in the very short time I was using it for a small project, I had to change half of my code to use async (and find a runtime for that, etc.) because a single library I wanted to use was ‘async or the highway’.

                                    Not a very friendly experience, which is a shame because the language itself rocks.

                                    1. 9

                                      In Rust you can upgrade the compiler and nothing will break. Rust team literally compiles all known Rust libraries before making a new release to ensure they don’t break stuff.

                                      The ecosystem is serious about adherence to semver, and the compiler can seamlessly mix new and old Rust code, so you can be selective about what you upgrade. My projects that were written for Rust 1.0.0 work fine with the latest compiler.

                                      The async addition was the only change which caused churn in the ecosystem, and Rust isn’t planning anything that big in the future.

                                      And Go isn’t flawless either. I can’t upgrade deps in my last Go project, because migration to Go Modules is causing me headaches.

                                      1. 3

                                        Ah, yeah, the migration to modules was a shit show. It took me about six months to be able to move a project to modules because a bunch of the dependencies took a while to upgrade.

                                        Don’t get me wrong, my post wasn’t a criticism of Rust. As I said, I really enjoy the language. But big changes like async introduce big paradigm shifts that make the experience extra hard for newcomers. To be fair to Rust, Python took three iterations or so until they figured out a proper interface for async, while Rust figured out the interface and left the implementation to the reader… which has created another rift for some libraries.

                                2. 4

                                  I can definitely agree with the author: since I do not write Rust in my day job, it is pretty hard for me to keep up with all the minor changes in the language. Also, as already stated in the article, the 6-week release cycle exacerbates the problem.

                                  I’m not familiar with Rust’s situation, but from my own corporate experience, frequent releases can be awful because features are iterated on continuously. It would be really nice to just learn the final copy of something rather than all the intermediate steps to get there.

                                  1. 3

                                    Releasing “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                    There’s a chicken-egg problem here. Even though Rust has nightlies and betas, features are adopted and used in production only after they’re declared stable. But without using a feature for real, you can’t be sure it’s right.

                                    Besides, lots of changes in 6-week releases are tiny, like a new command-line flag, or allowing a few more functions to be used as initializers of global variables.

                                    1. 6

                                      Releasing “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                      Design-by-committee can be a lot more thoughtful than design-by-novice. I think this is one of the greatest misconceptions of agile.

                                      Many of the great things we take for granted are done by committee including our internet protocols and core infrastructure. There’s a lot of real good engineering in there. Of course there’s research projects and prototyping which are super useful but it’s a full time job to keep up with developments in research. Most people don’t have to care to learn it until it’s stable and published.

                                      1. 2

                                        Sorry, I shouldn’t have mentioned the emotionally charged “committee” name. It was not the point.

                                        The point is that language features need iteration to be good, but for a language with strong stability guarantee the first iteration must be the final one.

                                        So the way around such impossible iteration is to release only the obvious core parts, so that libraries can iterate on the rest. And the rest gets blessed as official only after it proves useful.

                                        Rust has experience here: the first API of Futures turned out to have flaws. Some interfaces caused unfixable inefficiencies. Built-in fallibility turned out to be more annoying than helpful. These things came to light only after the design was “done” and people had used it for real and built large projects around it. If Rust had held that back and waited for the full async/await to be feature-complete, it’d be a worse design, and it wouldn’t have been released yet.

                                      2. 3

                                        Releasing “final copy” creates design-by-committee.

                                        I’m not convinced that design-by-crowd is substantively different from design-by-committee.

                                        1. 1

                                          Releasing “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                          There’s a chicken-egg problem here. Even though Rust has nightlies and betas, features are adopted and used in production only after they’re declared stable. But without using a feature for real, you can’t be sure it’s right.

                                          I deny the notion that features must be stabilized early so that they get widespread or “production” use. It may well be the case that some features don’t receive enough testing on nightly/beta, and that a feature must hit stable in order to get more users, but limited testing on nightly or beta is not a reason to stabilize it. Either A) wait longer until it’s been more thoroughly tested on nightly/beta, or B) find a way to get more testers of features on nightly/beta.

                                          I’m not necessarily saying that’s what happened with Rust, per se, but it’s close as I’ve seen the sentiment expressed several times over my time with Rust (since 0.9 days).

                                      3. 10

                                        It’s not a red herring. There might be bigger annual releases if there weren’t 6-week releases, but you’re ignoring the main point: Rust changes frequently enough to make the 6-week release cycle meaningful. The author isn’t suggesting the same frequency of changes less often, but a lower frequency of changes - low enough, perhaps, that releasing every 6 weeks would see a few “releases” go by with no changes at all.

                                        No one is trying to make fewer slices out of the pizza. They’re asking for a smaller pizza.

                                        1. 7

                                          How is adding map_or() as a shorthand for map().unwrap_or() a meaningful language change? That’s the scale of changes for the majority of the 6-week releases. For all but handful of releases the changes are details that you can safely ignore.

                                          Rust is very diligent about documenting every tiny detail in release notes, so if you don’t pay attention and just gloss over them, only counting the number of headings, you’re likely to get a wrong impression of what is actually happening.
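                                          For scale, this is the entire kind of change being discussed (shown on Option here; per a comment further down, Result gained the same method in 1.41):

                                          fn main() {
                                            let n: Option<i32> = Some(2);
                                            // The two-adaptor spelling...
                                            let a = n.map(|x| x * 2).unwrap_or(0);
                                            // ...and the equivalent single call.
                                            let b = n.map_or(0, |x| x * 2);
                                            assert_eq!(a, b);
                                          }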

                                          1. 3

                                            How is adding map_or() as a shorthand for map().unwrap_or() a meaningful language change?

                                            I think that’s @ddevault’s point: the pizza just got bigger but it didn’t really get better. It’s a minor thing that doesn’t really matter, but it happens often and it’s something you may need to keep track of when you’re working with other people.

                                            1. 9

                                              Rust also gets criticised for having too small a standard library, one that needs dependencies for the most basic things. And when it finally adds these basic things, that’s bad too…

                                              But the thing is — and it’s hard to explain to non-users of the language — that additions of things like map_or() are not burdensome at all. From the inside, they’re usually received as “finally! What took you so long!?”.

                                              • First, it follows a naming pattern already used elsewhere. It’s something you’d expect to exist already, not really a new thing. It’s more like a bugfix for “wtf? why is this missing?”.

                                                Back-filling of outrageously missing features is still a common thing in Rust. 1.0 was an MVP rather than a finished language. For example, Rust waited 32 releases before adding big-endian/little-endian swapping.

                                              • There’s cargo clippy, which will flag overly unidiomatic code, so you don’t really need to keep track of it.

                                              • It’s OK to totally ignore this. If your code worked without some new stdlib function, it doesn’t have to care. And these changes are minor, so it’s not like you’ll need to read a book on a new method you notice. You’ll know what it does from its name, because Rust is still at the stage of adding baby things.

                                              1. 7

                                                In the Haskell world, there’s a piece of folklore called the Fairbairn Threshold, even though we have very clean syntax for composing small combinators:

                                                The Fairbairn threshold is the point at which the effort of looking up or keeping track of the definition is outweighed by the effort of rederiving it or inlining it.

                                                The term was in much more common use several years ago.

                                                Adding every variant on every operation to the Prelude is certainly possible given infinite time, but this of course imposes a sort of mental indexing overhead.

                                                The primary use of the Fairbairn threshold is as a litmus test to avoid giving names to trivial compositions, as there are a potentially explosive number of them. In particular any method whose definition isn’t much longer than its name (e.g. fooBar = foo . bar) falls below the threshold.

                                                There are reasonable exceptions for especially common idioms, but it does provide a good rule of thumb.

                                                The effect is to encourage simple combinators that can be used in multiple situations, while avoiding naming the explosive number of combinations of those combinators.

                                                Given n combinators I can probably combine two of them in something like O(n^2) ways, so without the threshold as a rule of thumb you wind up with a much larger library, but no real greater utility and much higher cognitive overhead to track all the combinations.

                                                Further, the existence of some combinations tends to drive you to look for other ever larger combinations rather than learn how to compose combinators or spot the more general usage patterns yourself, so from a POSIWID perspective, the threshold encourages better use of the functional programming style as well.

                                            2. 1

                                              Agreed. It has substantially reduced my happiness all around:

                                              • It’s tiring to deal with people who (sincerely) think adding features improves a language.
                                              • It’s disappointing that some people act like having no deprecation policy is something that makes a language “stable”/“reliable”/good for business use.
                                              • It’s mind-boggling to me that the potential cost of removing a feature is never factored into the cost of adding it in the first place.

                                              Mainstream language design is basically living with a flatmate that is slowly succumbing to his hoarding tendencies and simply doesn’t realize it.

                                              What I have done to keep my sanity is to …

                                              • freeze the version of Rust I’m targeting to Rust 1.13 (I’m not using ?, but some dependencies need support for it), and
                                              • play with a different approach to language design that makes me happier than just watching the constant mess of more-features-are-better.
                                              1. 2

                                                Mainstream language design is basically living with a flatmate that is slowly succumbing to his hoarding tendencies and simply doesn’t realize it.

                                                I like that analogy, but it omits something crucial: it equates “change” with “additional features/complexity” – but many of the changes to Rust are about removing special cases and reducing complexity.

                                                For example, it used to be the case that, when implementing a method on an item, you could refer to the item with Self – but only if the item was a struct, not if it was an enum. Rust 1.37 eliminated that restriction, removing one thing for me to remember.
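                                                A quick illustration of that change (my example, not the parent’s):

                                                #[derive(Debug, PartialEq)]
                                                enum Direction {
                                                  Up,
                                                  Down,
                                                }

                                                impl Direction {
                                                  fn flipped(&self) -> Self {
                                                    // Before Rust 1.37 you had to write Direction::Up here;
                                                    // now Self works for enum variants too.
                                                    match self {
                                                      Self::Up => Self::Down,
                                                      Self::Down => Self::Up,
                                                    }
                                                  }
                                                }

                                                fn main() {
                                                  assert_eq!(Direction::Up.flipped(), Direction::Down);
                                                }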

                                                Other changes have made standard library APIs more consistent, again reducing complexity. For example, the Option type has long had a map_or method that calls a function on the Some value or, if the Option contains None, uses a default value. However, until Rust 1.41, you had to remember that Results didn’t have a map_or method (even though they have nearly all the other Option methods). Now Results have that method too, making the standard library more consistent and simpler.

                                                I’m not claiming that every change has been a simplification; certainly some have not. (For example, did we really need todo!() as a shorter way to write unimplemented!() when they have exactly the same effect?).

                                                But some changes have been simplifications. If Rust is a flatmate that is slowly buying more stuff, it’s also a flatmate that’s throwing things out in an effort to maintain a tidy space. Which effect dominates? As a pretty heavy Rust user, my personal feeling is that the language is getting simpler over time, but I don’t have any hard evidence to back that up.

                                                1. 3

                                                  But some changes have been simplifications.

                                                  I think what you are describing is a language that keeps filling in gaps and oversights; they are probably not the worst kind of additions, but they are additions.

                                                  If Rust is a flatmate that is slowly buying more stuff, it’s also a flatmate that’s throwing things out in an effort to maintain a tidy space.

                                                  What has Rust thrown out? I have trouble coming up with even a single example.

                                                  As a pretty heavy Rust user, my personal feeling is that the language is getting simpler over time, but I don’t have any hard evidence to back that up.

                                                  How would you distinguish between the language getting simpler and you becoming more familiar with the language?

                                                  I think this is the reason why many additions are “small, simple, obvious fixes” to expert users, but for new/occasional users they present a mountain of hundreds of additional things that have to be learned.

                                                  1. 1

                                                    How would you distinguish between the language getting simpler and you becoming more familiar with the language?

                                                    That’s a fair question, and is part of the reason I added the qualification that I can only provide my personal impression – without data, it’s entirely possible that I’m mistaking my own familiarity for language simplification. But I don’t believe that’s the case, for a few reasons.

                                                    I think this is the reason why many additions are “small, simple, obvious fixes” to expert users, but for new/occasional users they present a mountain of hundreds of additional things that have to be learned.

                                                    I’d like to focus on the “additional things” part of what you said, because I think it’s key: if a feature is revised so that it’s consistent with several other features, then that’s one fewer thing for a new user to learn, not one more. For example, match used to treat & a bit differently and require as_ref() method calls to get the same effect, which frequently confused people learning Rust. Now, & works the same with match as it does with the rest of the language. Similarly, the 2015 Edition module system required users to format their paths differently in use statements than elsewhere. Again, that confused new users (and annoyed pretty much everyone) and, again, it’s been replaced with a simpler, more consistent, and easier-to-learn system.
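                                                    A before/after sketch of the match example (my illustration):

                                                    fn main() {
                                                      let opt: Option<String> = Some("hi".to_string());

                                                      // The old way: matching through a reference meant an
                                                      // as_ref() call (or explicit `&Some(ref s)` patterns).
                                                      match opt.as_ref() {
                                                        Some(s) => println!("{}", s),
                                                        None => println!("nothing"),
                                                      }

                                                      // Today matching on &opt just works; `s` binds as &String.
                                                      match &opt {
                                                        Some(s) => println!("{}", s),
                                                        None => println!("nothing"),
                                                      }
                                                    }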

                                                    On the other hand, you might have a point about occasional Rust users – if a user understood the old module system, then switching to the 2018 Edition involves learning something new. For the occasional user, it doesn’t matter that the new system is simpler – it’s still one more thing for them to learn.

                                                    But for a new user, those simplifications really do make the language simpler to pick up. I firmly believe that the current edition of the Rust Book describes a language that is simpler and more approachable – and that has fewer special cases you have to “just remember” – than the version of the language described in the first edition.

                                                    1. 1

                                                      A lot of effort is spent “simplifying” things that “simply” shouldn’t have been added in the first place:

                                                      • do we really need two different kinds of use paths (relative and absolute)?
                                                      • do we really need both if expressions and pattern matching?
                                                      • do we really need ? for control flow?
                                                      • do we really need to have two different ways of “invoking” things, (...) for methods (no support for named parameters) and {...} for structs (support for named parameters)?
                                                      • do we really need the ability to write foo for foo: foo in struct initializers? (see the sketch after this list)

                                                      Most often the answer is “no”, but we have it anyway because people keep conflating familiarity with simplicity.
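                                                      To make the last bullet concrete (the sketch referenced in the list above):

                                                      struct Point {
                                                        x: i32,
                                                        y: i32,
                                                      }

                                                      fn main() {
                                                        let (x, y) = (1, 2);
                                                        // Two spellings of the same initializer: the shorthand...
                                                        let a = Point { x, y };
                                                        // ...and the explicit form it abbreviates.
                                                        let b = Point { x: x, y: y };
                                                        assert_eq!((a.x, a.y), (b.x, b.y));
                                                      }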

                                                      1. 2

                                                        You’re describing redundancy as if it were a fault, but languages without any redundancy are a Turing tarpit. Not only do we not need two kinds of paths; the whole use statement is unnecessary. We don’t even need if. Smalltalk could live without it. We don’t really need anything more than a lambda and a Y combinator, or one instruction.

                                                        I’ve used Rust v0.5 before it had if let, before there was try!(). It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.
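                                                        For anyone who didn’t live through it, the difference (reconstructed in modern syntax):

                                                        fn main() {
                                                          let maybe = Some(5);

                                                          // The old, match-everything style:
                                                          match maybe {
                                                            Some(n) => println!("{}", n),
                                                            None => {}
                                                          }

                                                          // With if let:
                                                          if let Some(n) = maybe {
                                                            println!("{}", n);
                                                          }
                                                        }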

                                                        So yes, we need these things, because convenience is also important.

                                                        1. 2

                                                          You’re describing redundancy as if it were a fault, but languages without any redundancy are a Turing tarpit.

                                                          I’m very aware of the turing tarpit, and it simply doesn’t apply here. A lack of redundancy is not the problem – it’s the lack of structure.

                                                          Not only do we not need two kinds of paths; the whole use statement is unnecessary. We don’t even need if. Smalltalk could live without it. We don’t really need anything more than a lambda and a Y combinator, or one instruction.

                                                          Reductio ad absurdum? If you think it’s silly to question why we have both if-then-else and match, why not add ternary operators, too?

                                                          It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.

                                                          Pattern matching on Options is pretty much always wrong, regardless of the minimalism of the design. I think the only reason Rust users use it is that it makes the borrow checker happy more easily.

                                                          I’ve used Rust v0.5 before it had if let, before there was try!(). It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.

                                                          In my experience, the difference in convenience between the Rust of 5 years ago (which I use for my own projects) and Rust nightly (which is used by some projects I contribute to) just isn’t there.

                                                          There is no real point in upgrading to a newer version – the only thing I get is a bigger language and I’m not really interested in that.

                                            3. 1

                                              This discussion suffers from “Monday-morning quarterbacking” to an extent. We now know (after the fact) which releases of Rust contained more churn than others, “churn” being defined as a change that either introduced a different (usually better, IMO) way of doing something already possible in Rust, or a fundamental change that permeated the ecosystem, either due to being the new idiomatic way or due to being the Next Big Thing that many crates jumped on early. Either way, my code needs to change, because of new warnings (and the ecosystem doesn’t care for warnings), or, since many crates are open source, I’ll inevitably get a PR to switch to the new hotness.

                                              With that stated, my actual point is that Rust releases every 6 weeks. I don’t know whether the next release (1.43 at the time of this writing) will contain something that produces churn without closely following upcoming releases. I don’t know if the release after that will contain big changes. So I’m left either having to follow all releases (every 6 weeks) or closely following upcoming releases. Either way, I’m forced to stay in tune with Rust development. For many this is fine. However, in my industry (government), where dependencies must go through audits, etc., it’s really hard to keep up. If Rust had “major” (read: churn-inducing) releases every year, or say every 3 years (at new editions), that would be far, far easier to keep up with, because then I wouldn’t need to check every 6 weeks; I could check every year, or every three years, whatever it may be. Minor changes (stdlib additions, etc.) could still happen every 6 weeks, almost as Z releases (in semver X.Y.Z speak), but churn-inducing changes (Y changes) would happen on a much slower, set schedule.

                                              1. 2

                                                When your deps updated to ?, you didn’t need to change anything. When your deps started using SIMD, you didn’t need to change anything. When your deps switched to Edition 2018, you didn’t need to change anything because of that.

                                                Warnings from libraries are not displayed (cap-lints), so even if you use deprecated stuff, nobody will notice. You could sleep through years of Rust changes and not adopt any of them.

                                                AFAIK async/await was the first and only language change after Rust 1.0 that massively changed interfaces between crates, causing a necessary ecosystem-wide churn. It was one change in 5 years.

                                                Releases are backwards compatible, so you really don’t need to pay attention to them. You need to update the compiler to update dependencies, but this doesn’t mean you need to adopt any language changes yourself.

                                                The pain of going through dependency churn is real. But apart from async, it’s not caused by the compiler release cycle. Dependencies won’t stop changing just because the language doesn’t change. Look at JS, for example: Node has slow releases with long LTS, the language settled down after ES2016, and IE and Safari put brakes on the speed of language evolution. And yet everything churns all the time! People invent new frameworks weekly on the same language version.

                                              1. 1

                                                I came to say this as well. I’m very fond of zola.

                                              1. 1

                                                    My favorite, especially if you’re not used to smaller form factors, is the Vortex rac3r 3, without a doubt! The Vortex pok3r is a great 60% if you’re looking for something smaller.

                                                1. 3

                                                  I use a pretty stock doom-emacs with only a few additional packages

                                                      Unlike with vim, I find it much harder in emacs to simply copy others’ configs. Probably due to how insanely configurable emacs is, but at least it gets me to stick close to stock (with doom as “stock”).

                                                  1. 18

                                                    For folks wanting more context on how the “minimum supported Rust version” (MSRV) issue is treated in the ecosystem, this issue has a number of opinions (including my own) and some discussion: https://github.com/rust-lang/api-guidelines/issues/123

                                                    As far as I can tell, there is no strong consensus on what to do. In practice, I’ve observed generally the following states:

                                                    1. Some folks adopt an explicit MSRV policy but do not consider it a breaking change to increase it.
                                                    2. Some folks adopt an explicit MSRV policy and consider it a breaking change to increase it.
                                                    3. There is no MSRV policy, and the only guarantee you have is that it compiles on latest stable (or latest stable minus two releases).

                                                        In general, I’ve found that (1) and (2) are usually associated with more widely used crates and generally indicate an overall more conservative approach to increasing the MSRV. (3) is generally the default, though, as far as I can tell.

                                                    There’s good reason for this. Maintaining support for older versions of Rust is a lot of thankless work, particularly if your library is still evolving or if your own crate has other dependencies with different MSRV policies. All it takes is one crate in your dependency graph to require a newer version of Rust. (Unless you’re willing to pin a dependency in a library, which is generally bad juju.) Rust’s release cycle reinforces this. It moves quickly and provides new things for folks to use all the time. Those new things are added specifically because folks have a use for them, so their use can propagate quickly in the ecosystem if a widely used crate starts using it. The general thinking here is that updating your Rust compiler should be easy. And generally speaking, it is.

                                                    “Maturity” is perhaps the right word, but only in the sense that, over time, widely used crates will slow their pace of evolution and, consequently, slow their MSRV increases. This isn’t necessarily equivalent to saying that “maturity” equals “slow evolution,” because it is generally possible for crates to make use of newer versions of Rust without increasing their MSRV via version sniffing and conditional compilation. (Not possible in every case, but the vast majority.) But doing this can lead to significant complexity and a greatly increased test matrix. It’s a lot of extra work, and maybe doing that extra work is what this author means by “maturity.” Chances are though, that’s a lot of unpaid extra work, and it’s not clear to me that that is reasonable expectation to have.
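                                                        As a concrete instance of that “version sniffing” approach, a build script can probe the compiler and emit a cfg flag, so newer APIs are only used when available. A minimal sketch using the version_check crate (assuming version_check = "0.9" under [build-dependencies]; the has_new_api flag name is made up):

                                                        // build.rs
                                                        fn main() {
                                                          // Enable the flag only on new-enough compilers; the crate
                                                          // then gates newer APIs behind #[cfg(has_new_api)].
                                                          if version_check::is_min_version("1.34.0").unwrap_or(false) {
                                                            println!("cargo:rustc-cfg=has_new_api");
                                                          }
                                                        }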

                                                    1. 4

                                                          Perhaps part of the solution could be to make LTS versions of rustc and cargo? That way distro maintainers could preferentially use those, and package maintainers preferentially target them. Make the common toolchain setup procedure apt install cargo instead of curl https://sh.rustup.rs | sh and there’s at least a prayer of people preferring that. Debian 10 currently ships with rustc 1.34, for example, which IMO is a pretty good place to put a break point.

                                                      But for this to happen there needs to be agreement on what the LTS versions are. If Debian 10 ships rustc 1.34, Ubuntu 20.04 ships 1.37 and Fedora ships 1.12, then as a crate maintainer I’m not going to bother trying to target a useful minimal version, because it’s a lot of random work that will never be perfect. If everyone ships rustc 1.34, then it’s much easier to say to myself “well I’d like this shiny new feature in rustc 1.40 but I don’t really need it for now, it can just go in the next time I’m making a breaking release anyway”. This actually works in my favor, ‘cause then when a user tries to install my software on some ancient random system I can just say “sorry, you have to use rustc 1.34+ like everyone else, it’s not like that’s a big ask”. Then distro maintainers can backport rustc 1.34 to Debian 9 or 8 if they really need to, and only need to do it once as well for most people’s software to work.

This already happens, which is why Debian 10 has both gcc-7 and gcc-8 packages. It’s fine. The special cases just need to be uncommon enough that it’s not a huge hassle.

                                                      1. 5

                                                        Yes, people generally want some kind of LTS story. There was an RFC that was generally positively received about 1.5 years ago: https://github.com/rust-lang/rfcs/pull/2483

                                                        It was closed due to lack of bandwidth to implement it, but it seems like something that will be revisited in the future. There’s just a ton of other stuff going on right now that is soaking up team bandwidth, mostly in the form of implementing already merged RFCs.

                                                        1. 4

                                                          It would be really sad to let Debian hold back Rust version adoption in the ecosystem the way Debian gets to hold back C++ version adoption via frozen GCC.

                                                          It seems to me it would be a major strategic blunder for Rust to do an LTS instead of the current situation.

                                                          1. 2

                                                            Is Debian a factor anymore? I mean it was always pretty backwards, but does anybody use it, care for it anymore? How independent is Ubuntu from them?

                                                            I only use fedora/centos/rhel or windows for work. I have only seen Ubuntu in use by others in large real-world deployments, but Debian? Never.

                                                            1. 3

                                                              Is Debian a factor anymore? I mean it was always pretty backwards, but does anybody use it, care for it anymore? How independent is Ubuntu from them?

                                                              People do care about Debian and use Debian. That’s fine. What’s not fine is acting entitled to having code from outside the Debian stable archive build with the compilers shipped by Debian stable.

                                                              As Ubuntu LTS releases get older, they have similar ecosystem problems as Debian stable generally, but in the case of Rust in particular, Ubuntu updates Rust on the non-ESR Firefox cycle, so Rust is exempt from being frozen in Ubuntu. (Squandering this exemption by doing a Rust LTS would be a huge blunder for Rust in my opinion.)

In my anecdotal experience, entitlement to have out-of-archive code build with in-archive compilers is less of a problem with RHEL. People seem to have a better understanding that if you use RHEL, you are paying Red Hat to deal with being frozen in time, instead of being frozen in time being a community endeavor beyond the distro itself. Edited to add: Furthermore, in the case of Rust specifically, Red Hat provides a rolling toolchain for RHEL. It doesn’t roll every six weeks. IIRC, it updates about every third Rust upstream release.

                                                              1. 3

The company I work at (ISP & ISTP) uses Debian as the operating system on almost all virtual machines running core software which requires n nines of uptime.

                                                                1. 3

                                                                  I’ve found Debian Stable to be perfectly fine for desktop and server use. It just works, and upgrades are generally pretty smooth. Clearly, you have different experiences, but that doesn’t make Debian “backwards”.

                                                                  1. 1

One department at my university has been mostly Debian for 15+ years.

                                                                    1. 0

                                                                      I have seen Debian at a university department too, but not at places where actual money is made, or work is getting done. I had to use pkgsrc there to get fresh packages as a user to be able to get my stuff done.

                                                                      University departments can afford to be backwards, because they are wasting other people’s time and money with that.

                                                                      1. 3

                                                                        Every place that I have worked primarily uses Debian or a Debian derivative. (Google used Ubuntu on workstations; at [Shiny consumer products, inc] the server that I was deploying on was Debian, despite the fact that they have their own server OS and they even supported it at the time; and the rest have been smaller firms or I’m under NDA and can’t discuss them). Except for Google, it was always Debian stable. So no, not just universities.

                                                                        1. 1

                                                                          BSD Unix was developed at a university.

                                                                          Linus attended a university when starting to develop the Linux kernel.

                                                                          The entire ethos and worldview of Free Software is inspired by RMS’ time at university.

                                                                          The programming darling du jour, Haskell, is an offshoot of an academic project.

                                                                          I’m really sad so much time and energy and other people’s money have been wasted on these useless things…

                                                                          1. 2

                                                                            Nice strawman!

And the infrastructure supporting these was just as backwards for its time as running Debian, wasting the time of students and tutors with outdated tools provided by the host institution…

                                                                            1. 1

In the comment I replied to first, you write:

                                                                              […] a university department too, but not at places where actual money is made, or work is getting done

                                                                              University departments can afford to be backwards, because they are wasting other people’s time and money with that.

                                                                              (my emphasis)

I find it hard to read these quotes in any other way than that you believe universities are a waste of time and money…

                                                                              edit clarified source of quotes

                                                                              1. 4

                                                                                I can also mis-quote:

                                                                                I find it hard to read […]

But I’d actually rather read and parse your sentences in their completeness.

                                                                                My claims were:

                                                                                a) I have only seen Debian used at places where efficiency is not a requirement
                                                                                b) Universities are such places

                                                                                I didn’t claim they don’t produce any useful things:

                                                                                […] University departments can afford to be backwards, because they are wasting other people’s time and money with that.

Which should be parsed as: university departments are wasting other people’s time and money by not using proper tools and infrastructure, for example by using outdated (free) software. They are being inefficient. They waste student and tutor time, and thus taxpayer money, when not using better available free tools, but it doesn’t matter to them, as it does not show up on their balance sheet. Tutors and students are already expected to do a lot of “off-work hours” tasks to get their rewards: grades or money.

                                                                                And yes, they are being inefficient:

• I had to find floppy disks in 2009 to be able to get my mandatory measurement data off a DOS 5.0 machine at a lab. It was hard to buy them, and to find a place where I could read them… This one can be justified, as expensive specialized measurement equipment was used and only legacy tools supported it.
• I had to do my assignments with software available only at the lab, running some (then current) Debian version shipping only outdated packages. OpenOffice kept crashing, and outdated tools were a constant annoyance. As a student, my time was wasted. (Until I installed pkgsrc and rolled my own up-to-date tools.)
• At a different university, I saw students working in Dosbox in 2015, writing 16-bit protected-mode assembly in edit.com and compiling with some ancient MS assembler, because the department thought the basics of assembly programming hadn’t changed since they introduced the curriculum, so they won’t update the tools and curriculum. They waste everyone’s money; the students won’t use it in real life anyway, because the department is not properly supervised, as it would be if it were living off the market.
                                                                                1. 3

                                                                                  Thanks for clarifying.

I realize it might be hard to see for you now, but I can assure you that “the real world, governed by the market” can be just as wasteful and inefficient as a university.

                                                                                  1. 2

Unfortunately that is also true; I have seen the “bullshit jobs” (a nice book, btw) business from the inside (I was partly a box-ticker for a time), but the enormous waste I saw at universities makes me feel that the useful stuff coming out of them is the exception, the result of herculean efforts by a few working against all odds, with complete institutions working on strangling the people/projects leading to meaningful results.

Wasting your own money is one thing; I don’t care that much about that. Wasting taxpayer money is not a good move, but to some extent I can tolerate it… Wasting talent and other people’s time is what really infuriates me.

                                                                          2. 1

                                                                            I had to use pkgsrc there to get fresh packages

                                                                            Did you have root privileges as a student?

                                                                            1. 2

pkgsrc supports unprivileged mode!

                                                                              https://www.netbsd.org/docs/pkgsrc/platforms.html#bootstrapping-pkgsrc

                                                                              It worked like a charm.

But I did actually have root privileges, as the guy responsible for the lab was overburdened and sometimes some of us he trusted helped the other students. Still, I didn’t use that to alter the installed system, as that would have been outside my mandate.

                                                                    2. 1

                                                                      Debian 10 currently ships with rustc 1.34 for example, which IMO is a pretty good place to put a breakpoint.

1.34 has neither futures nor async/await, which seriously impact code design. Do I really have to wait for Debian 11 in 2021 to use them?

                                                                      1. 2

                                                                        No, if you need them then install a newer rustc and use them. But there’s plenty of code that also doesn’t need futures or async/await.

                                                                    3. 3

                                                                      Wow, I wasn’t aware that this issue has an acronym and even a place for discussion. Thanks for the pointer!

                                                                      widely used crates will slow their pace of evolution and, consequently, slow their MSRV increases.

                                                                      Exactly what I’m hoping for, and precisely the reason I’m not jumping off the ship :)

                                                                      maybe doing that extra work is what this author means by “maturity.”

                                                                      In part, yes, that’s what I meant. The other possibility is to hold off adopting new APIs (as you did with alloc in regex; thanks!). I understand both options are a PITA for library maintainers, and might not even make sense, economy-wise, for unpaid maintainers. Perhaps I should’ve used “self-restraint” instead of “maturity”, but that probably has some unwanted connotations as well.

                                                                      1. 2

                                                                        Here’s a cargo subcommand (cargo msrv-table) I hacked together (warning, just a hacky PoC) that displays the MSRV by crate version for any particular crate.

                                                                    1. 5

I believe the issue here is two-fold: there is no standard practice around a “minimum supported Rust version” (MSRV), and projects typically don’t provide a table mapping MSRV to x.y versions (in the semver sense). As @burntsushi stated, there has been quite a bit of discussion around MSRV practices, and he lists the three general (but not standard, as there is none) practices crates adopt.

My personal opinion (and that of my projects) is that an MSRV change should trigger, at a minimum, a minor version bump, thus allowing downstream crates to use ^x.y version locks in their Cargo.toml (only increasing the patch version automatically).

Adding an easy table of “Project ver a.b has MSRV of 1.16, while c.d has an MSRV of 1.24, etc.” would make it easier for downstream crates to not only pick a version lock, but also upgrade knowingly. Right now it’s a lot of trial and error.

This doesn’t fix everything, as typically there is very little in the way of back-porting features/support to older project versions that coincide with older Rust versions. However, for an unpaid project maintainer, providing the above two items would be a large step in the right direction.

                                                                      1. 4

                                                                        Right, yeah. For 1.x crates (or beyond), I generally adhere to the “only bump MSRV in a minor version” rule. I think you were the one who started that. :-)

                                                                        1. 2

                                                                          Ah yes, I should have stated I meant >= 1.x, as I also view 0.x as the wild west where (almost) anything goes :-)

                                                                        2. 2

Did you mean to say ~x.y? Caret (^) is the default, so a plain x.y requirement already means ^x.y. I would probably advise against using ~ deps for libraries, because they can lead to genuinely unsatisfiable dependency graphs: ~x.y only accepts patch-level updates, so two libraries requiring, say, ~1.2 and ~1.3 of the same crate can never be unified. ^-requirements are always satisfiable (with the exception of the links key, which is an orthogonal thing).

                                                                          1. 1

                                                                            I did, thanks for catching that!

                                                                        1. 2

Regolith has been my daily driver ever since I learned about it a few months ago (prior to Regolith it was a Xubuntu base with a custom i3 install, or a Fedora-with-XFCE base with the same custom i3 install). Regolith especially excels on laptops: having that little touch of DE integration makes things like function keys, suspend, etc. Just Work. All this without losing a minimal i3 environment, and I had to spend exactly zero time setting it up. I couldn’t be happier!

                                                                          1. 5

I’m looking forward to the rest of the series, as I’m a fan of the author and everything they’ve done for Rust. However, with only the first article out thus far, which merely discusses the components that may cause slow compilation, it leads the reader in an overly negative direction, IMO.

Rust compile times aren’t great, but I don’t believe they’re as bad as the author is letting on thus far. Unless your dev cycle relies on CI and full test-suite runs (which require full rebuilds), the compile times aren’t too bad. A project I was responsible for at work used to take ~3-5ish minutes for a full build, if I remember correctly. By removing some unnecessary generics, feature-gating some derived impls, feature-gating esoteric functionality, and re-working some macros as well as our build script, we got compile times down to around a minute, which meant partial builds took mere seconds (a sketch of the derive-gating trick follows below). That, along with test filtering, meant the dev-test-repeat cycle was very quick. Now, it could also be argued that feature gates increase test path complexity, but that’s what our full test suite and CI are for.
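For illustration, feature-gating a derived impl looks roughly like this; a minimal sketch where the serde feature name and the Config type are made up for the example:

    // Cargo.toml would declare the optional dependency/feature, e.g.
    // serde = { version = "1", features = ["derive"], optional = true }
    // The derive below is then only compiled when a user enables it:
    #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
    pub struct Config {
        pub name: String,
        pub retries: u32,
    }

Users who never enable the feature never pay the compile-time cost of the derive expansion.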

Granted, I know our particular anecdote isn’t indicative of all workloads, or even representative of large Servo-style projects, but for your average medium-sized project I don’t feel Rust compile times hurt productivity all that much.

                                                                            …now for full re-builds or CI reliant workloads, yes I’m very grateful for every iota of compile time improvements!

                                                                            1. 7

It is also subjective. For a C++ developer, 5 minutes feels OK. If you are used to Go or D, then a single minute feels slow.

                                                                              1. 4

Personally, slow compile times are one of my biggest concerns about Rust. This is bad enough for a normal edit/compile/run cycle, but it’s twice as bad for integration tests (cargo test --tests), which have to link a new binary for each test.

Of course, this is partly because I have a slow computer (a laptop with an HDD), but I don’t think I should need the latest and greatest technology just to get work done without being frustrated. Anecdotally, my project with ~90 dependencies takes ~8 seconds for an incremental rebuild, ~30 seconds just to build the integration tests incrementally, and over 5 minutes for a full build.

                                                                              1. 3

I really like the inline flow and sequence charts. Since it’s a server, I may be able to use it to replace my current personal wiki running on a home QNAP, which is lacking a few features for me.

For desktop/mobile I’m currently pretty heavily invested in StandardNotes, which looks like it ticks all your boxes as well, but I didn’t see it listed in the comparison.

                                                                                1. 1

I love StandardNotes but have never used it actively apart from mini trials. Although it ticks most of the boxes, it still locks me into their platform, where it’s not possible to import/export a large number of notes. Added a section to the comparison, thanks for the recommendation.

                                                                                1. 2

This solves a common issue I have on work machines: deeply nested files, and how easy it is to lose context about file moves. The only thing I’m missing is a way to drop the file path, relative to my current directory, of a file I select in broot. This may exist already, I just haven’t dug deep enough to find it yet.

                                                                                  1. 2

                                                                                    Do you mean something like z?

                                                                                    1. 1

Similar, yes, but combining this with the other capabilities of broot would be nice, so I can use a single utility. Also, unlike with z, most of the time I don’t need to jump to the location of the file; I need to do $something with it.

                                                                                    2. 1

What do you mean by “drop the file path”?

                                                                                      1. 2

Once I select (or focus) a file in broot and exit, it’d be nice if my current command line were populated with the relative path to the file I selected. For example:

                                                                                        $ pwd
                                                                                        /home/kevin
                                                                                        $ br
                                                                                          [ selects file in broot with absolute path of /home/kevin/foo/bar/baz.txt ]
                                                                                        $ foo/bar/baz.txt
                                                                                        
                                                                                        1. 2

There’s a verb, :pp, which outputs the absolute path. Just like with all verbs, you can define a shortcut for it (for example ctrl-p).

                                                                                          Does that solve your problem?

                                                                                          more on verbs: https://dystroy.org/broot/documentation/configuration/#verbs-shortcuts-and-keys

                                                                                          1. 1

I can make that work, thanks! In a perfect world it’d place the path on the command line rather than just printing it to stdout, but with something like xargs I can still work with this. Thanks for your work on broot!

                                                                                            1. 3

If you’re on zsh you can pop the path into your edit buffer with zle (for example, print -z pushes text onto the buffer stack so it appears at your next prompt).

                                                                                              1. 2

                                                                                                I am. This is exactly what I was looking for, thanks!

                                                                                            2. 1

                                                                                              He’s asking for a way to get at just the relative path. In the example given, just foo/bar/baz.txt, and not the full /home/kevin/foo/bar/baz.txt.

                                                                                              1. 2

I could very easily add a verb for that: just like today’s :print_path, a :print_relative_path.

Please, kbknapp, post an issue or answer here if that’s what you want.

                                                                                      1. 1

Looks like using musl with Rust produces 10 syscalls, 6 of them unique.

                                                                                        rustc -C opt-level=s --target x86_64-unknown-linux-musl hello.rs

                                                                                        1. 2

Something about this article itched at me, and on the orange site user danShumway said this, which hit the nail on the head for me:

                                                                                          If you’re looking at [another] ecosystem and saying, “the number of dependencies is problematic because it takes a long time to review them”, I agree with you. If you’re looking at the Go ecosystem and saying, “there are fewer dependencies, so I don’t need to review them”, then that’s a security antipattern.

For example, the Rust standard library was kept small by design, in acknowledgement that something shouldn’t be “trusted” simply for being part of std.

                                                                                          1. 1

I think it depends on what sort of security you’re hoping the standard library gives you.

A standard library might make poor crypto choices, do funny things with deserialization, or contain any amount of other security-sensitive code that can be a risk, so presence in the standard library isn’t anything like a full seal of approval.

I still think that presence in most languages’ standard library gives you some assurance against the kinds of “supply chain” attacks we’ve recently seen in npm and PyPI. For many libraries, those supply-chain attacks are the primary security issue the library raises.

                                                                                            1. 2

I still think that presence in most languages’ standard library gives you some assurance against the kinds of “supply chain” attacks we’ve recently seen in npm and PyPI.

                                                                                              I don’t think you should be conflating those last two.

                                                                                              The thing people seem to worry about in a “supply chain” attack is that they’re depending on a particular package – let’s say foolib – and one day an evil person compromises the package-registry account of foolib’s maintainer, and uploads new packages containing malicious code, which are then pulled automatically by the build processes of people depending on foolib. I believe that has happened a few times to packages on npm.

                                                                                              But as far as I’m aware, that’s not a thing that has happened to PyPI. All the alleged “supply chain attack” stories I’ve seen about PyPI involved typosquatters who’d register a similarly-named package and hope to trick people into installing it instead of the real thing. So, say, someone registering foo-lib or foo-library and hoping you’d not look too closely and conclude their package was what you wanted. While that’s a thing that definitely needs to be policed by the package registry, anyone with foolib in their dependency list is never at risk of receiving a malicious package in that case. Only someone who adds the malicious typosquat as a dependency is in trouble.

                                                                                              (it’s also something difficult to police in an automated way, because it’s somewhat common for package registries to end up with multiple similarly-named but legitimate packages)

                                                                                              1. 1

Thanks, I thought PyPI had seen both types of attacks, but it appears it’s only been typosquatting.

                                                                                          1. 3

I’ve been enjoying the progress updates on this project! It’s been very helpful in getting into the Rust async/await space. Is there a target timeline for this new scheduler to reach stable/master, or would this require async_std v2?

                                                                                            1. 2

Before Christmas; we usually release on Thursdays. From a user’s perspective, this change is completely transparent.

                                                                                            1. 5

I consider myself pretty keyed in to the Rust community, but I have avoided the async space until it settles (as it appears to be doing \o/). Is there some verbiage on how this compares/relates/contrasts to things like tokio and futures-rs? Because if I consider myself pretty close to the Rust space and don’t know, I’m sure it could begin to look confusing for someone outside the community.

                                                                                              1. 9

async-std is an async port of the Rust stdlib and a new runtime system for async/.await. It is substantially different, though it may look somewhat similar to tokio at a quick glance. async-std is built partially by previous tokio contributors.

futures-rs is the Rust-project-supported library on top of the std::future::Future trait; it provides additional abstract interfaces like the Stream, AsyncRead and AsyncWrite traits. AsyncRead and AsyncWrite are fundamental for doing I/O on sockets and files.

async-std, in contrast to tokio, opts into these interfaces fully. That’s most clearly visible in that tokio doesn’t use AsyncRead and AsyncWrite, but instead its own versions, making them incompatible with any library that wants to use the futures-rs types for interfacing. The notion of “incompatibility” between the two libraries mainly stems from this; that said, a lot of tokio-based libraries can be rather easily ported. (It’s harder for applications that want to be abstract over their runtime!)
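To make the interfacing point concrete, here is a minimal sketch of a runtime-agnostic helper written against the futures-rs traits (assuming the futures crate; read_all is a made-up name):

    use futures::io::{AsyncRead, AsyncReadExt};

    // Works with any reader implementing the futures-rs AsyncRead
    // trait, regardless of which runtime drives the future.
    async fn read_all<R: AsyncRead + Unpin>(mut reader: R) -> std::io::Result<Vec<u8>> {
        let mut buf = Vec::new();
        reader.read_to_end(&mut buf).await?;
        Ok(buf)
    }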

Finally, tokio and async-std have very different stability and documentation cultures. tokio has been unstable over the last 3 years, semi-frequently refactoring its interface. For a long time, docs were basically non-existent. async-std wants to commit to its interfaces (which is why we picked a known good one). tokio has just mentioned that the next version after 0.2 will still not be a stable one.

async-std has come fully documented and with a book since its release, and it is continuing to venture down that line. We are ready to take that responsibility and want to provide a stable foundation for people to work on now.

async-std has also innovated in some areas of the space, e.g. making all kinds of tasks available through a JoinHandle-based system (which tokio just adopted) and providing the first practical implementation of single-allocation tasks (which tokio just adopted).
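For illustration, the JoinHandle-based task API looks roughly like this (a sketch against async-std 1.x):

    use async_std::task;

    fn main() {
        task::block_on(async {
            // spawn returns a JoinHandle, which is itself a future
            // resolving to the task's output.
            let handle = task::spawn(async { 1 + 2 });
            assert_eq!(handle.await, 3);
        });
    }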

It makes us happy that we have something to add to the space. But finally, I think the above philosophical differences make it clear why we can’t operate as the same project.

                                                                                                1. 3

                                                                                                  As I understand it, async-std and tokio overlap substantially, but make different API choices. futures-rs came first, with tokio built to provide additional APIs for it. futures-rs provides the core utilities (the part of the futures API that’s been stabilized in std is quite small), while tokio and async-std provide additional common capabilities on top of it. async-std is newer, and a lot of the folks who work on async-std previously worked on tokio.

                                                                                                  So, to do a little survey:

                                                                                                  • Things found in futures-rs:
                                                                                                    • Stream trait: like a future, but for sequences of values.
                                                                                                    • Sink trait: something that values can be sent to asynchronously, like a channel or a socket.
• A collection of basic executors (needed to actually run the futures built up with the async/await syntax), including block_on, ThreadPool, and LocalPool (like ThreadPool, but executing on a single thread); see the block_on sketch after this survey.
                                                                                                  • Things found in tokio:
                                                                                                    • API for asynchronous file operations
                                                                                                    • API for asynchronous network IO
                                                                                                    • API for asynchronous process spawning
                                                                                                    • API for “futures-aware” synchronization
                                                                                                    • There’s more than this, but the gist is to say that tokio provides a lot of the building blocks you’d want to build real-world async code. In terms of API design, it has its own structure, terminology, and history that looks different from the standard library APIs they are providing async versions of.
                                                                                                  • Things found in async-std:
                                                                                                    • All the same things I listed above for tokio, with an API that looks a lot more like the structure and design of the Rust standard library (hence the name async-std).
                                                                                                    • async-std is also less mature, and not everything is implemented (async process spawning isn’t really ready yet)

                                                                                                  Just an overview, but hopefully it helps!
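To make the executor bullet concrete, here is a minimal sketch driving a future with futures-rs’s block_on (assuming the futures crate):

    use futures::executor::block_on;

    fn main() {
        // block_on runs a future to completion on the current thread.
        let answer = block_on(async { 40 + 2 });
        assert_eq!(answer, 42);
    }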

                                                                                                  EDIT: The exact timeline of futures-rs and tokio has been updated based on a correction from @skade.

                                                                                                  1. 4

                                                                                                    futures-rs came out of the tokio project, and contains a lot of the necessary utilities you’d want to doing practical work with futures, as the part of the futures API that’s been stabilized in std is quite small.

Your characterisation of futures-rs is technically correct, but not historically. futures-rs and tokio have a shared history, but futures-rs (by Alex Crichton) came first and tokio was built as a runtime for it. The projects later split.

                                                                                                    1. 1

                                                                                                      Thanks! I appreciate the correction.

                                                                                                    2. 2

Exactly what I was looking for; this is super helpful, thank you!

                                                                                                  1. 1

I use fish for my shell (with oh-my-fish). This is pretty killer with Ctrl+R (reverse history search) and fzf.