1. 52
  1. 35

    Here’s my take as a library author and as someone who is a bit more conscious of introducing new dependencies:

    • For logging, use the log crate as recommended, for its interface. But for simple cases, you don’t need to bring in another crate for the implementation: implementing the interface yourself is really easy.
    • For arg parsing, I personally don’t bother with structopt. I find it easier to just work with clap directly, which is what structopt uses internally. And with clap, you can disable its default features to slim down the dependencies. In particular, this saves you from bringing in the crates necessary to support the derive feature, assuming you aren’t using some other crate that needs it.
    • For errors, I would recommend just implementing the Error trait yourself. thiserror is nice, though, if you’re already using procedural macros for something else. anyhow is worth a mention for applications.
    • I second serde and serde-derive. They pull their weight and then some.
    • For an HTTP client, I’ve been using ureq, which is much much slimmer than reqwest. (I recently cut 100 dependencies out of my application by making this switch!) But I haven’t done anything other than very simple things here.
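    To make the error-handling point concrete, here’s a minimal sketch of implementing the Error trait by hand; the error type and its variants are made up for illustration:

    ```rust
    use std::error::Error;
    use std::fmt;

    // Hypothetical error type for a small library.
    #[derive(Debug)]
    enum FetchError {
        Timeout,
        Io(std::io::Error),
    }

    impl fmt::Display for FetchError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                FetchError::Timeout => write!(f, "request timed out"),
                FetchError::Io(e) => write!(f, "I/O error: {}", e),
            }
        }
    }

    impl Error for FetchError {
        // Expose the underlying cause, if any, so callers can walk the chain.
        fn source(&self) -> Option<&(dyn Error + 'static)> {
            match self {
                FetchError::Io(e) => Some(e),
                FetchError::Timeout => None,
            }
        }
    }

    // A From impl lets `?` convert io::Error automatically.
    impl From<std::io::Error> for FetchError {
        fn from(e: std::io::Error) -> Self {
            FetchError::Io(e)
        }
    }

    fn main() {
        let timeout = FetchError::Timeout;
        println!("{}", timeout); // request timed out

        let io: FetchError = std::io::Error::new(std::io::ErrorKind::Other, "disk error").into();
        println!("{}", io); // I/O error: disk error
    }
    ```

    That’s the whole contract: Debug, Display, and the Error trait’s optional source method.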

    For applications, I tend to be a bit more relaxed about dependencies. Similarly for internal libraries.
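    As a concrete illustration of the clap point above, trimming default features is a one-line change in Cargo.toml (the version number here is just an example; check which optional features your clap version defines before re-enabling any):

    ```toml
    [dependencies]
    # Opt out of clap's default features to shrink the transitive
    # dependency tree, then turn back on only what you actually use.
    clap = { version = "2.33", default-features = false }
    ```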

    1. 2

      How is the end binary affected by the number of dependencies? Is there some kind of link-time dead-code elimination or similar, or will the end binary easily become huge with code that is never actually invoked?

      1. 10

        It does add a little binary bloat, but personally I don’t care about binary size, because I write things for servers rather than embedded targets, etc… Binary size never eats into any scarce resources in this context for me, so it’s not a metric worth changing any behavior over in my work.

        What does have a massive impact on my overall productivity and happiness is how long I spend waiting for my code to compile and link. I often compile my code dozens if not hundreds of times in a day, across several platforms that I test it on. CI latency is a key bottleneck in my overall development process. Rust has developed a terrible reputation for long compile times, and it can take minutes to build fairly trivial applications sometimes due to them pulling in a massive dependency tree.

        This is completely avoidable, and you can write things that compile as fast as Go, if this is a metric that influences your own productivity. Shallow dependencies, restricted use of (procedural) macros, avoiding unnecessary traits and generics, etc… will speed up compilation significantly. My Rust embedded database, sled, compiles in ~5 seconds, compared to the Go equivalents sometimes taking over a minute, and rocksdb (C++) taking several minutes. As a result, code-compile-test latency is low, which is the primary metric for getting things done in a correctness-critical project over the timescale of years for me.

        If you’re writing things as a learning experience, and you intend to throw the thing away or never really use it, compile times don’t matter that much. I think a lot of the Rust ecosystem is having fun learning the language, and a lot of the published crates reflect this. But if you have the intention of making something that is well-tested while also being somewhat complex (anything that uses a socket or file) you really can’t expect to get the race conditions and (lack of) error handling ironed out without compiling it a significant number of times, breaking it with simple fault injection and property tests, etc…

        For this reason, I argue that compile times are a significant factor in building high-quality software. The time you spend waiting for compilation becomes quite significant over these timescales, and any frustration you were already feeling due to having discovered a bug will be amplified by having to wait for a minute before knowing if your fix may have addressed it. I spend most of my time looking for and fixing my bugs, and minimizing the suspense between saving a file and waiting to see if I fixed the issue has a massive psychological benefit for me. It goes farther than simple opportunity cost, because the time spent coding becomes more enjoyable in addition to simply occupying a higher proportion of the total time at the keyboard.

        I have a more controversial opinion: having high compile times on a crate that you publish is an insult to your users, because it is actually pretty easy to avoid them. I think most people are not aware of how easy it can be, or how rewarding it can feel, to perform this kind of optimization.

        1. 8

          I don’t think adding dependencies is an “insult” to users and I don’t think the Rust compiler is anywhere close to the speed of the Go compiler, no matter how you write your code or how few dependencies you use. With that said, yes, reducing dependencies to me is primarily about reducing compile times and reducing maintenance burden.

          I wrote more about it here: https://old.reddit.com/r/rust/comments/j0ugae/rust_crates_that_do_what_the_go_standard_library/g6y63hq/

          1. 3

            sled is proof that you can have complex rust codebases that compile faster than real-life similarly complex codebases written in go, despite go’s clear line-for-line advantage. Most programs do not benefit from optimizing their compile time, but I would argue that many pieces of code have become foundational in the Rust ecosystem, to the great benefit of their authors, without much consideration given to this impactful metric.

            I think of the “substitution rate” at which an author trades their own convenience for the experience of others as a meaningful metric, despite many authors being unaware of the degree to which this decision lies within their control. Obviously, it’s nice that authors give their stuff away. But to say that they never get anything in return is absurd. The fact that you and I probably never have to do another technical job interview again says something about this. I owe my personal success to my users trusting me, and I believe I owe them a pleasant experience because of this. I do not expect anyone else to feel the same way, but this is my own belief.

            1. 3

              sled is proof that you can have complex rust codebases that compile faster than real-life similarly complex codebases written in go

              Can you show me please? What commands should I run? Which Go repo should I clone to compare it with?

              I don’t really know where you’re going in the rest of the comment. And I certainly have zero expectation that I will be able to skip a technical interview. I had a very active GitHub with tons of projects the last time I looked for a job, and absolutely nobody gave me a pass on a technical interview, nor did I expect them to. And I certainly do not think that authors of open source projects get nothing in return. To be honest, this seems so far off topic that I’m not even sure why we’re talking about it. You said “having high compile times on a crate that you publish is an insult to your users,” and I disagree that it is an “insult.” I don’t know what that has to do with whether I will need to do a technical interview or not.

              1. 3

                So, here’s what I was basing this off of:

                uname -a
                Linux whip 5.7.10_p1-debian-sources #1 SMP Sun Sep 13 18:00:43 CEST 2020 x86_64 Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz GenuineIntel GNU/Linux
                crappy ISP in Germany
                go get github.com/dgraph-io/badger  40.20s user 3.10s system 84% cpu 51.254 total
                git clone git@github.com:spacejam/sled.git sled5 --depth=1  0.15s user 0.04s system 8% cpu 2.279 total
                cargo build  12.16s user 0.87s system 245% cpu 5.299 total

                But after playing with this more I’ve realized that this is not anywhere close to the latency that I see when I run go build directly on the badger repo, which gives me a compile time of:

                go build  1.08s user 0.26s system 133% cpu 1.008 total

                So I’ll retract the claim that sled is faster to compile, and fall back to a subjective claim that sled is pretty fast to compile. Comparing sled to itself over time, it used to take over 50 seconds to compile on my laptop. So I’ve become particularly convinced that Rust can be made to compile faster, given my experience of dropping sled’s compile latency by about 90%.

                The reason I brought up the interview thing is as an example of a common convenience enjoyed by people with exposure. The difference for you seems to be that you looked for a job, rather than engaging directly with one of the people who’ve offered you one. If nobody has sent you an email trying to hire you onto their team, I’d be pretty surprised, given how much exposure you have, and how frequently exactly this situation happens to others in the space. I recommend following up with some of those people next time you’re open to a change, because it puts you in a far more advantageous position to negotiate for time to spend on open source work and other things, should any of their teams be doing something you’re interested in. It sure beats the other process.

                The reason why these common perks are on-topic is because they are a part of the transaction occurring between us and our users. The transaction occurs at the moment they choose to try out our stuff, paying some cost to understand how to use it, paying some cost to wait for it to download and build, and in return we get various benefits over time, many being intangible and difficult to plan for at the beginning. But nevertheless, we get good things.

                But let’s be honest - the Rust ecosystem is in rough shape. They do not get very many good things. I feel as though there is a huge gap between the personal benefits many authors are enjoying and the horrific friction their users put up with. This is one downside of having such a huge beginner base - things become quite popular despite feeling gross, and people learn to ignore their own negative feelings.

                As a professional trainer, I can say confidently that their negative feelings are valid far more often than they believe! The stuff that gives them pain is quite often deserving of critique! But so many newcomers into our ecosystem learn to just accept so much garbage. One thing that may contribute to this is that Rust has a lot of pain in general during the learning process. So it’s like people get accustomed to pain, and then simply expect it with everything they use, despite significant improvements being possible. I choose to be somewhat polemical about this in public because I am honestly quite disappointed at how infrequently I’m able to use anything off-the-shelf. It really seems like people get addicted to the pain of learning the language and learn to perpetuate it through their design decisions.

                1. 3

                  The other thing I’d mention is that you’re comparing go build with cargo build, which I’m not sure is apples to apples. Or at least, it’s worth mentioning when discussing a comparison. The Go compiler doesn’t have a concept of “debug” or “release” binaries; there is only one mode. It’s a bit tough to compare, because in many cases compile times matter for development, and in those cases it’s common to use debug builds in Rust. So if cargo build is as fast as go build, well, then that’s a fair point, because it matters. But still, it’s important to call that out.
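                  For reference, the two modes map to Cargo’s build profiles; the values below are Cargo’s documented defaults, written out explicitly for illustration:

                  ```toml
                  # `cargo build` uses the dev profile; `cargo build --release`
                  # uses the release profile.
                  [profile.dev]
                  opt-level = 0   # no optimization: fast compiles, slower binaries
                  debug = true    # include debug info

                  [profile.release]
                  opt-level = 3   # full optimization: slower compiles, faster binaries
                  debug = false
                  ```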

                  As for the rest of your comment, I think we have different experiences and I’ll leave it at that.

                  1. 1

                    I will hire you immediately if you want to work on distributed KV stores, no technical interview :)

                    Bwahahhaa now you’ve received what I’ve described :P

        2. 1

          I’ve been playing with ureq a bit and I love it. So much easier. Wish I learned about this ages ago.

        3. 8

          This article does a good job of pointing out equivalent Rust crates for feature parity with parts of Go’s standard library. My worry for Rust articles like this is that they might quickly become obsolete, whereas Go articles that demonstrate techniques using the standard library may not.

          Sometimes it feels like I blink and there’s a new error handling crate that everyone just has to use. As a result, the previously reigning error crate du jour becomes unmaintained. It makes for a frustrating experience when searching for what to use, as many of the results still show the abandoned ones and all of their praise. Maybe the trick is just to restrict search results to the past year or something.

          1. 9

            https://lib.rs tracks falling popularity of crates, e.g. https://lib.rs/crates/failure/rev and this analysis is included in search ranking.

            1. 1

              That’s nice, didn’t know that.

            2. 2

              It would be very helpful to add some kind of “deprecated” flag to crates.io, so search results can exclude them. It wouldn’t help with articles, but it would help when searching for a crate on the registry.

              But I’ve got to say: I’m using error crates from 2015 and they’re working fine. Sure it’s not nice to start with something deprecated, but it’s not like they’ll just stop working. At least they didn’t for me. (And with said search feature on crates.io you could figure out the right choice.)

            3. 5

              Rust’s standard library is severely lacking by comparison

              I take some issue with the description that it’s lacking. As far as I understand, this was a deliberate design choice. I’m not familiar enough with it to be able to comment on the actual reasoning behind that decision (maybe someone else can clarify?), but in my own experience, the split between the standard library and other crates for additional functionality has only been positive.

              1. 23

                Yes, it was an active choice to have a small standard library in Rust. But calling it lacking is valid; it’s just the other side of the coin.

                Upsides:

                • you don’t have to decide on the standards and scope of your std*
                • you can easily change stuff in crates and evolve rapidly; you can’t just change a std lib (see Rust’s Error trait)
                • the community can figure out the best designs over time and adopt new things (async)
                • you have a wider selection to choose from and don’t have to “pay” for unused std stuff*
                • fewer deprecated-since-1.0 interfaces (looking at you, Java)
                • less work for the std library team, and the workload is easier to scale
                • build systems like cargo are better, because they have to deliver; otherwise you wouldn’t want to include so many external crates

                Downsides:

                • there is no official standard, even if a de facto one exists (serde), making it harder for new people to figure them out (which is why this post exists…)
                • because crates can do whatever they want, your dependencies aren’t as stable as std-included libraries
                • you can get many competing standards (tokio/async-std/…, actix/tower), which are hard to compare or decide between if you want something that is just stable and has the biggest support in the community
                • you have to decide on your own and hope that you’ve picked the long-running horse for your project

                I think Rust also had a longer road to figuring out best practices for libraries, and things like async/await came late to the party. If some of the current libraries had been included in std two years ago, they’d have had a hard time keeping up with the latest changes.

                *Well you still have to, but it’s easier to say no when you already have a slim std.
                *You could recompile std to not include this, and it’s something that is AFAIK being worked on. But as of now, you don’t get to choose this.

                1. 5

                  I’d like to submit one more downside, based on the cousin thread from @burntsushi:

                  • more concern about the transitive dependencies, and related consequences

                  I appreciate how nice it is to be able to reach for the stdlib and know that you’re introducing precisely 0 new dependencies, without even the mental overhead of auditing the dependency tree or worrying about future maintenance burden, etc.

                2. 2

                  It was an active choice; it’s more maintainable and paired with a good dependency management solution like Cargo you can just release the batteries as versioned libraries (and often people will build better ones). Versioning gives you a lot of freedom in changing the API, reducing the pressure on initial design.

                  It’s still fine to call it lacking IMO. The stdlib is largely “complete” by its own standards but it does lack things others have, and it’s a common thing people from other ecosystems take time getting used to. shrug

                3. 5

                  I’m used to thinking of the Go ecosystem as pretty limited, so this is an interesting look at a few places where the line might run the other way.

                  1. 3

                    This surprises me: I tend to think of the Go language as limited, but things like library support, tooling, and the stdlib are either “just fine” or “pretty good”. What areas do you see as weak?

                    1. 3

                      An example from work the other day: Go has nothing in stdlib to help check if one map is a submap of another. Had to write it ourselves

                      1. 1

                        An example from work the other day: Go has nothing in stdlib to help check if one map is a submap of another. Had to write it ourselves

                        Ah. Heh, I guess that’s where “Go the limited language” bleeds into “Go the stdlib” and that area does get painful.

                        Off to implement Len(), Less(i,j) and Swap(i,j) again…

                    2. 1

                      Depends on which languages you are comparing it to. Most languages that people compare to Go or Rust are many times older, so they are of course going to have a much more mature ecosystem.

                      For its age, I think Go’s ecosystem is very impressive. No doubt this is largely due to it being carried by Google’s name, but a success with an advantage is still a success.

                      1. 1

                        In the context, I meant as compared to rust :)

                    3. 3

                      I completely agree with this sentiment. Rust as a language is better than Go, but Rust’s standard library is sorely lacking. While Go isn’t as batteries-included as Python, it is still much, much better than Rust in that regard. This wouldn’t be so terrible if not for the fragmentation of the Rust community. Which one of the hundreds of HTTP libraries should I use? How is this even a question in 2020? Rust may have succeeded in being a “better C++” on the language front, but it has failed miserably on the ecosystem front.

                      1. 1


                        It uses the error!, warn!, info!, debug! and trace! macros which correlate to the highest and lowest levers


                        1. 2

                          Yep, I have that fix in my repo locally. I’m avoiding pushing it until tomorrow so I can monitor the view count better (my site resets metrics on every deploy).

                          Edit: apparently someone helped me get a CSS fix in, and I pushed that. It has the levers fix too.

                        2. 1

                          Re: “flag” and “global mutable state”: although it’s perhaps unfortunate that the godoc shows package-scope vars in the examples, there’s nothing that says you need to do this. You can bind or set any variable (including a struct field) in the flag call.

                          I typically have an options struct type which holds the command-line info. Calling ParseArgs() on an instance of this struct sets up the option values (via the flag package).

                          Having a struct with methods also provides a nice place to hang additional validation (“don’t set this if that is set”).