1. 65
  1. 20

    People who learned Rust recently often remark on how complicated lifetimes and the error handling are.

    I think experience significantly changes how much this affects you, but it is still true.

    Lifetimes: For a lot of coding, lifetimes in Rust are nearly frictionless for me. And then you want to make an innocent-seeming change and you have to add lifetime annotations everywhere or live with extra copies/reference counting. And the self-reference issue mentioned in the article is spot on. I hit it in real life quite often: wanting to index data in multiple ways (without copies) and not being able to put that easily into the same struct. A lot is possible, but it is work.

    E.g. it would be possible to design a HashMap for values that contain their keys, exposed via a trait. But you cannot use the normal HashMap that way.
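
    A minimal std-only sketch of what such a trait-based map could look like (HasKey, KeyedMap and User are hypothetical names of mine, not an existing API). Note that wrapping the normal HashMap still forces a copy of the key on insert, which is exactly the limitation described above; a truly copy-free version would need its own table implementation that hashes through the trait:

```rust
use std::collections::HashMap;

// Hypothetical trait: a value exposes its own key.
trait HasKey {
    fn key(&self) -> &str;
}

struct User {
    name: String,
    age: u32,
}

impl HasKey for User {
    fn key(&self) -> &str {
        &self.name
    }
}

// Thin wrapper over the std HashMap. It has to clone the key on
// insert, because std's HashMap must own its keys separately from
// the values: exactly the extra copy mentioned above.
struct KeyedMap<V: HasKey> {
    inner: HashMap<String, V>,
}

impl<V: HasKey> KeyedMap<V> {
    fn new() -> Self {
        Self { inner: HashMap::new() }
    }
    fn insert(&mut self, v: V) {
        self.inner.insert(v.key().to_string(), v); // the extra copy
    }
    fn get(&self, k: &str) -> Option<&V> {
        self.inner.get(k)
    }
}

fn main() {
    let mut m = KeyedMap::new();
    m.insert(User { name: "alice".to_string(), age: 30 });
    assert_eq!(m.get("alice").map(|u| u.age), Some(30));
    assert!(m.get("bob").is_none());
}
```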

    Error handling: If you go down the pedantic path of making specific error types: It is a lot of work. The opposite (e.g. with anyhow) is not. What I find hard to do in Rust is the middle ground: general handling of most errors with special treatment of special cases.

    E.g. I recently followed “Crafting Interpreters” and wrote the interpreter in Rust. For return statements he used exceptions in the Java implementation. I wanted to bake that into the error type. But since that also references a return value which is not always thread safe in my implementation (function references), a thread-safe anyhow Error could not be used to wrap it.

    Once I let go of that idea and just switched to function return values that potentially indicated a Return, it was surprisingly easy. But this sort of puzzling until you find a workable solution within Rust’s constraints happens to me from time to time.
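
    A sketch of that return-as-value pattern (toy types of my own, not the actual interpreter code): the block executor returns an enum, and a return statement simply short-circuits upward instead of raising an exception.

```rust
// Control-flow result of executing a block: either fall through
// normally or unwind with a return value.
enum Flow {
    Normal,
    Return(i64),
}

// Toy statements: ordinary work, or a `return` statement.
enum Stmt {
    Work,
    Ret(i64),
}

fn exec_block(stmts: &[Stmt]) -> Flow {
    for s in stmts {
        match s {
            Stmt::Work => { /* evaluate, side effects, ... */ }
            // Propagate the return upward; no exception machinery needed.
            Stmt::Ret(v) => return Flow::Return(*v),
        }
    }
    Flow::Normal
}

// The function-call site is the only place that "catches" the return.
fn call_function(body: &[Stmt]) -> i64 {
    match exec_block(body) {
        Flow::Return(v) => v,
        Flow::Normal => 0, // implicit default return value
    }
}

fn main() {
    assert_eq!(call_function(&[Stmt::Work, Stmt::Ret(7), Stmt::Work]), 7);
    assert_eq!(call_function(&[Stmt::Work]), 0);
}
```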

    1. 4

      just switched to function return values that potentially indicated a Return

      I think I can totally understand what you mean by that. The first few weeks I coded in Rust I also tried to “get it right”: using lifetimes, passing pointers with specific lifetimes in structs that originated from another function, and so on. It was a huge mess and I’d become desperate trying to avoid clones/Rc or return-oriented programming. Now I don’t actually have that problem at all; either I’m doing something very right or very wrong. And no, I’m not using Rc/Arc/Clone all over the place (even though I’d strongly advise every beginner to do that instead of fighting the borrow checker for eternity).

      What I meant to say is that my guess is also that many of these experiences come from people who aren’t writing in-depth data structures like doubly linked lists or B-trees with recursive references. They are just writing some application which doesn’t actually require the magic you’d need for one of those data structures. But for some reason they’re trying to be clever or over-optimize, which ends up causing such problems.

      Overall I get the feeling it all comes down to what I experienced trying to learn Racket for the first time: you’re holding it wrong, you’re still trying to apply the rules of a different language universe.

      making specific error types: It is a lot of work

      I’d highly suggest beginners use thiserror:

      use thiserror::Error;

      #[derive(Error, Debug)]
      pub enum AuthError {
          #[error("unknown data store error")]
          Other(#[from] color_eyre::eyre::Error),
          #[error("invalid login")]
          NotAuthenticated,
      }
      Note that we’re also wrapping an “anyhow”-ish eyre::Error.

      This also allows for stuff like turning errors into HTTP responses for a web service.

      impl ResponseError for AuthError {
          fn error_response(&self) -> HttpResponse {
              match self {
                  AuthError::NotAuthenticated => HttpResponse::Unauthorized().finish(),
                  e => { warn!("{}", e); HttpResponse::InternalServerError().finish() }
              }
          }
      }

      1. 1

        I guess the core message about lifetimes for Rust is: in “normal” programs they are surprisingly low friction with experience. I mostly hit borrow checker errors which are easily solvable locally by a temporary variable. That is amazing because you get real benefits from this.

        But e.g. if you have used stdout so far and want to pass an arbitrary Write for the output to your code, you cannot just add a dyn reference as you would in Java or other languages. You need to do more work. You can use dynamic dispatch plus lifetimes and/or generics, potentially parametrizing a lot of types in your program. Alternatively, you can use Rc<RefCell<_>> or Arc<Mutex<_>>. I am simplifying; depending on your circumstances there are even more choices, but that doesn’t make it simpler.
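
        To illustrate the extra work (toy names of my own): as soon as the writer is stored in a struct rather than passed as a parameter, the struct needs a lifetime parameter, and every type that holds it inherits that parameter.

```rust
use std::io::Write;

// Storing a borrowed trait object forces a lifetime parameter...
struct Reporter<'a> {
    out: &'a mut dyn Write,
}

impl<'a> Reporter<'a> {
    fn report(&mut self, n: u32) -> std::io::Result<()> {
        writeln!(self.out, "processed {} items", n)
    }
}

// ...and every type that contains a Reporter inherits it too.
struct App<'a> {
    reporter: Reporter<'a>,
}

fn main() {
    // Works with stdout or, as here, an in-memory buffer.
    let mut buf: Vec<u8> = Vec::new();
    {
        let mut app = App { reporter: Reporter { out: &mut buf } };
        app.reporter.report(3).unwrap();
    }
    assert_eq!(String::from_utf8(buf).unwrap(), "processed 3 items\n");
}
```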

        I appreciate what Rust is achieving with these constraints (which give you confidence and ultimately higher productivity), but while mostly low friction, lifetimes occasionally slow me down significantly. Honestly, anything else would be a miracle.

        making specific error types: It is a lot of work

        I know and use thiserror, it is great for what it does! I consider using it in practice “a lot of work”, e.g. adding at least one enum for each wrapped lib error.

        You can even wrap anyhow::Error as a source, but then you still need an extra conversion for these errors. Does eyre::Error solve this? (I wouldn’t really know how it could without generic specialization.)

        1. 1

          you still need an extra conversion for these errors

          I typically have something like foo().context("adding xyz")?, which will automatically convert this to eyre::Error (and anyhow has something similar IIRC), then ? will convert it to your thiserror implementation.
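
          The mechanism underneath is the From conversion that the ? operator applies automatically; a std-only sketch of the same idea, with hypothetical error types standing in for what thiserror’s #[from] attribute would generate:

```rust
// Stand-in for a wrapped low-level error (e.g. a DB driver error).
#[derive(Debug)]
struct DbError;

// Stand-in for the application's thiserror-style error enum.
#[derive(Debug)]
enum AppError {
    Db(DbError),
}

// This impl is what thiserror's #[from] attribute generates for you.
impl From<DbError> for AppError {
    fn from(e: DbError) -> Self {
        AppError::Db(e)
    }
}

fn query() -> Result<u32, DbError> {
    Err(DbError)
}

fn handler() -> Result<u32, AppError> {
    // `?` calls From::from to convert DbError into AppError.
    let rows = query()?;
    Ok(rows)
}

fn main() {
    assert!(matches!(handler(), Err(AppError::Db(_))));
}
```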

          1. 1

            That makes a lot of sense for errors that you don’t expect to be handled.

            Thank you.

            1. 1

              That, and with color_eyre you get backtraces on top. So typically for DB calls you don’t expect to fail, and if they do, you appreciate a backtrace.

    2. 18

      Rust, of course, is the only language in existence which offers run-time memory safety without garbage collection.

      I’m not sure this holds; for instance, Swift does memory management without using a GC.

      1. 13

        And ATS, which provides the ability to do pointer arithmetic and dereferencing safely without garbage collection. In general, claims of “only one that does” and “first to do” should be avoided.

        1. 5

          ATS is the first language to have a “t@ype” keyword.

          1. 3

            ATS is very interesting! Thanks for sharing it.

            1. 2

              Check out Aditya Siram’s “A (Not So Gentle) Introduction To Systems Programming In ATS,” it’s a great overview.

          2. 11

            It depends a bit on how you define garbage collection. I’ve seen it used to mean both any form of memory management that does not have explicit deallocation or specifically tracing-based approaches. Swift inherits Objective-C’s memory management model, which uses reference counting with explicit cycle detection. Rust uses unique ownership. C++ provides manual memory management, reference counting, and unique ownership.

            1. 6

              Rust uses unique ownership. C++ provides manual memory management, reference counting, and unique ownership.

              Both Rust and C++ provide all three.

              1. 4

                The problem is that C++ can also provide stuff you usually don’t want in your code, without requiring an unsafe {} block.

            2. 5

              Ada meets that definition, too.

              Even modern C++ could claim to have run-time memory safety without GC, but that obviously requires the developer to use it correctly.

              1. 5

                Ada isn’t completely memory safe, though I would say it’s a lot safer than C or C++ with all of the additional built-in error checking semantics it provides (array bounds checks, pre/post conditions, numeric ranges, lightweight semantically different (derived) numeric types, type predicates). I’ve found it hard to write bugs in general in Ada-only code. It’s definitely worth checking out if you haven’t.

                As for modern C++, it feels like we made these great strides forward in safety only for coroutines to make it easy to add a lot of silent problems to our code. They’re super cool, but they have been a problem area for me.

                1. 3

                  Rust is also not completely memory safe: it has an unsafe escape hatch and core abstractions in the standard library as well as third-party frameworks require it.

                  1. 2

                    I agree. Not a lot of people are familiar with Ada, so my point was to dispel the myth that it is completely safe, while also answering the common reply I’ve seen that “Ada isn’t memory safe, hence you shouldn’t use it.”

              2. 3

                Isn’t atomic reference counting a form of GC as well?

                1. 4

                  One could say that everything that deviates from manual memory management is some form of GC. Still, we have the traditional idea that, generally speaking, GC implies a background process that deallocates objects asynchronously at runtime.

                  1. 3

                    If you think about it, stacks are GC because they automatically allocate and deallocate in function calls. 🤔 That’s why they were called “auto” variables in early C.

                    Galaxy brain: malloc is GC because it manages which parts of the heap are free or not. 🤯

                    1. 1

                      ahahah, great viewpoints! Wow, I learned so much from your comment; it got me looking into “auto” C variables. I never realised that all vars in C are implicitly auto because they get removed when they go out of scope 🤯 (I wonder how they got explicitly removed back then? And how does it all relate to alloca()? Could it have been designed that way to allow for multiple stacks at the same time, with other segments besides the stack segment?)

                      that last bit about malloc is amazing, it is indeed managing virtual memory, it is us who make the leaks 😂

                      1. 1

                        all vars in C are implicitly auto because they get removed when they go out of scope 🤯 (I wonder how they got explicitly removed back then?

                        Back in the day, as I understand it, there was no “going out of scope” in C. All the variables used by a function had to be declared at the very top of the function so the compiler could reserve enough stack space before getting to any code. They only got removed when you popped the stack.

              3. 8

                Zig forces the programmer to worry about error handling. The way I see it, things should not error; if they do, stop everything and print a stack trace, please don’t bother me with it…

                run_this_erroring_function() catch @panic("nope");

                I believe this will produce a stack trace in zig

                Otherwise, a good article!

                1. 21

                  I think that the author’s mindset is perfectly understandable for a data scientist and at the same time absolutely inadequate for software engineering (the author does point out their appreciation for that distinction right after that quote). The two fields (data science, SWE) both make use of a computer, but for very different goals.

                  A high-quality application needs to be able to gracefully handle failures, and explicitness in the language helps do the methodical work required to get there. This is obviously true for stuff like kernel modules, but it’s also true for user-facing applications. I’ve recently used video editing software that would use a lot of RAM while encoding a video, and it would have been absolutely unacceptable for it to crash the instant it hit system limits. Coincidentally, encoding a video would instantly crash Discord and Firefox for me.

                  And it’s not just allocation problems, but also all kinds of errors when there’s unsaved user data at hand, critical systems, etc. In data science crashing is the best strategy because at that point the program will not be computing the right answer anyway, but the same doesn’t hold as true for other applications.

                  Also explicitness helps with deciding when an error can be safely ignored or not. If you’re writing the above mentioned video editing application and trying to print to a log file fails, you might want to continue running anyway as it should not be a show-stopper for a lengthy encoding process, for example. If instead you’re writing a command line tool whose main job is to print to stdout and printing fails, then that’s a different story, which is why Zig has no print statement.

                  1. 3

                    In data science crashing is the best strategy

                    Also, you know very well what your target system looks like and want to run as fast as possible, using as many speed improvements as you can get. For example, we tested OpenMPI stuff on a local node (64+ cores); using the “let it crash” style was easy there, while also compiling on the target hardware. Then we submitted the batch job to the university’s supercomputer, which distributed it across all nodes. Later on you got your reports back.

                  2. 9

                    I might be missing something, but it seems to me that using try everywhere is the most concise way to let an error bubble up without dealing with it.

                    Coming from Python, I admit I was a bit annoyed to use it in places like adding an item to a list, as it can fail to allocate, but in the end you get to like it.

                    1. 2

                      Yeah, you’re absolutely right, I didn’t even think of that!

                      1. 1

                        With ArrayList specifically, you can also use initCapacity() and/or ensureTotalCapacity()/ensureUnusedCapacity() and then appendAssumeCapacity() (without try), which can lead to nicer code in some situations.

                    2. 7

                      The author about generics with comptime functions:

                      Despite my enthusiasm, I am simultaneously a bit apprehensive about how far this can be extended to stuff that can’t be fully inferred at compile time.

                      Funnily, my concerns point in nearly the opposite direction: the mechanism is so general that an IDE or other tool needs to evaluate Zig code for code completion etc.

                      But it does seem super elegant! (No Zig experience here yet.)

                      1. 13

                        That’s true, comptime can do a lot of things that an IDE would have a hard time understanding. The current plan is to add a subcommand to the Zig compiler to make it provide this information to the IDE. This is something that will be explored once the main work on the self-hosted compiler is complete.

                      2. 5

                        Nice article. The author correctly identified places that newcomers feel uneasy with in Rust:

                        • needing to always name all the trait bounds. There’s an RFC for that and I hope it gets implemented this year
                        • lifetimes are something I continuously explain. Though I still feel that the great error messages make up for any trouble you have declaring them
                        • macros may sometimes be overused. The risk of having a hammer, I guess. That said, when used with some care, you can get great results. Serde is a good example here, because the derive macros bridge arbitrary formats and data types
                        • error handling: just have fn main() -> Result<(), Box<dyn std::error::Error>> { .. } and use ? everywhere.
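
                        For the last point, a minimal runnable sketch of that pattern (the run helper is mine, just to keep the fallible part separate):

```rust
use std::error::Error;

fn run() -> Result<i32, Box<dyn Error>> {
    // `?` converts any error implementing std::error::Error into the Box.
    let n: i32 = "42".parse()?;
    Ok(n)
}

fn main() -> Result<(), Box<dyn Error>> {
    println!("parsed {}", run()?);
    Ok(())
}
```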
                        1. 4

                          Not much time to review your code properly, but here are some minor things I found in your Zig code:

                          • Here, you can shorten the syntax: std.os.getenv("HOME").?

                          • Yes, an ArrayList only manages the memory of the underlying array, not the memory of its items.

                          • Since you use an ArenaAllocator and your CLI’s lifetime is short, you probably don’t need to worry about deallocating stuff

                          • Channel.name looks like it can just be a byte array name: []const u8

                          1. 2

                            it has some serious, even prohibitive limitations when it comes to applications that require short run-times and low latency … Julia also suffers from another limitation it shares with many other languages: garbage collection

                            Short runtime + low latency requirements would indicate to me that a natively compiled language with a GC would be the best choice: both C++ and Rust, if used properly, would “waste” time deallocating at the end just to hand the memory back to the OS anyway. With a GC, you get the benefit of never deallocating unless you use up too much memory, which is unlikely in a short-lived program.

                            On Zig:

                            Instead, there are structs dedicated to managing memory known as allocators. One intriguing consequence of this is that different programs are free to take completely different approaches to memory management

                            One can use allocators even in C if one wants to; C++ has them as template parameters in its own standard library, and D even has pre-built ones.

                            1. 1

                              If I had to choose one word to describe the language, it would be “pedantic”. By default, the programmer is burdened with the most laborious error handling I have ever encountered

                              Sounds like a language that I would not want to use myself, but that I would really, really want the people who write my web browser to use.

                              I share the same disappointment at the lack of closures in Zig, though. Then again, I’ve never had to write any code for which GC was a problem or bottleneck, so I can’t speak from experience.