1. 15
    1. 17

      I don’t think this negates the post’s conclusion in any way, but I think claims like these:

      Assume 70% of your code base is wrapped in unsafe: That is still 30% of your code base where you do not have to think about memory safety!

      are oversimplifications, and I wish the Rust community in general were less prone to embracing them. I don’t think they’re inaccurate, but they don’t tell the story of just how much of a giant, finicky, bug-prone moving piece the safe-unsafe interface is.

      A few years ago RUDRA identified a bunch of memory safety bug patterns in unsafe interfaces – straightforward enough that they actually lend themselves to automatic detection. Interfacing unsafe code correctly is extremely difficult, and its role is foundational to the point where a single incorrect interface has repercussions throughout the entire safe codebase (the same paper cites, for instance, a trivial bug in join() for [Borrow<str>]). I’m mentioning RUDRA because its results are widely available and easy to reproduce, and because it collects several classes of bugs in one place, so it’s a good starting point, but those aren’t the only types of bugs in unsafe interfaces.
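
      To make the flavour of those interface bugs concrete, here’s a heavily simplified sketch in the spirit of the join() issue (this is not the actual std code, just an illustration of the trust pattern): a function that assumes a trait implementation behaves consistently across two calls.

      ```rust
      use std::borrow::Borrow;

      // Simplified illustration, not the real std implementation. The capacity is
      // computed from one round of borrow() calls and then trusted during a second
      // round. With push_str an inconsistent Borrow impl merely forces a realloc;
      // the actual bug used unsafe copies that trusted the precomputed length, so
      // a Borrow returning a longer string on the second call became a heap overflow.
      fn join_strs<S: Borrow<str>>(items: &[S], sep: &str) -> String {
          let len: usize = items.iter().map(|s| s.borrow().len()).sum::<usize>()
              + sep.len() * items.len().saturating_sub(1);
          let mut out = String::with_capacity(len);
          for (i, s) in items.iter().enumerate() {
              if i > 0 {
                  out.push_str(sep);
              }
              out.push_str(s.borrow()); // second borrow() call: may not match the first
          }
          out
      }
      ```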

      That 30% isn’t 30% where you don’t have to think about memory safety, it’s 30% where you have to think about the memory safety of unsafe interfaces. If that’s, say, mostly code that manipulates very high-level, well-understood constructs in the standard library that sit several abstraction layers above the first unsafe block, it’s usually a day at the beach. But if that 30% is mostly code that uses the other 70% which you mostly wrote yourself, that 30% of the code probably exposes some of the hardest and most subtle bugs in your codebase.

      I’ve heard arguments in terms of “we would have to use tons of unsafe anyway, so Rust would not help us”, too, and my main gripe with them is that they’re often either uselessly generic, organisational artefacts, or both.

      First, there are extreme cases where it applies entirely (e.g. your whole program just pokes at some config registers, so it’s basically one giant unsafe block) or not at all (your whole program peeks at some config registers, so your unsafe wrappers are inherently infallible), but most real-life cases are nothing like that, making conclusions about safety drawn from line counts alone about as useful as conclusions about productivity drawn from line counts alone.

      And second, depending on what the codebase is, it can have an overwhelmingly positive impact on whatever system it’s integrated in. A device tree manipulation library, for instance, would likely end up with so much unsafe code it’s not even funny, but it would still be enormously helpful because most of the potential for bugs is clustered on the user end of the library, not in the library itself, so being able to integrate primitive manipulation idioms into safe manipulation constructs would matter way, way more than the unsafe block line count would suggest.

      1. 7

        I have mixed feelings about RUDRA. On one hand they’ve shown that fully foolproof interfaces around unsafe code are hard. OTOH the majority of the types of unsoundness they’ve identified require evil implementations of traits, like Borrow borrowing a different thing each time. It’s possible in theory, but it’s not typical code people would write.

        The practical rough problem they’ve highlighted is panic safety. This is something that people writing/reviewing unsafe code need to pay extra attention to.
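
        As a minimal sketch of that hazard (a hypothetical helper, not from any particular crate): if the closure panics midway, the vector’s length has already been raised past the initialized elements, and unwinding will drop garbage.

        ```rust
        fn extend_with<T>(v: &mut Vec<T>, n: usize, mut f: impl FnMut(usize) -> T) {
            v.reserve(n);
            let base = v.len();
            unsafe {
                v.set_len(base + n); // BAD: length is raised before the elements exist
                for i in 0..n {
                    // If f(i) panics here, Vec's Drop runs over uninitialized slots.
                    std::ptr::write(v.as_mut_ptr().add(base + i), f(i));
                }
            }
            // A panic-safe version writes first and calls set_len afterwards, or
            // uses a drop guard that tracks how many slots were actually initialized.
        }
        ```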

        1. 9

          I dunno, I found RUDRA extremely useful back then. I’d just started learning Rust, and of the three classes of bugs it proposed, the only one my code didn’t exhibit was the one about Send/Sync propagation, mostly because the kind of code that would exhibit it was way more advanced than I could’ve written at the time. Hell, it’s probably way more advanced than what I could write today.

          It also highlighted a lot more good points about program correctness in general, not just about memory safety. One of the issues they found, a TOCTOU in join, in addition to being very practical (it’s not just code that people would write, it’s code they literally did write: it was in the standard library), was similar to several bugs I found in code of mine. They were probably unexploitable, they were just “plain bugs” that were obvious in hindsight, but I’d missed them during reviews because they were hidden behind macros. Now every time I see a macro in an unsafe block my heart skips a beat.

          Some of their variants aren’t very practical but if you look at the bugs they found on the crates.io sample, a good chunk of them are in the oh God BUT WHY??! class. When I read them in isolation I like to think they’re not typical of the code I’d write but realistically they are representative of the bugs I wrote when I was cranky, sleep-deprived, under pressure and unchecked by well-meaning reviewers, which describes a good chunk of our industry, sadly.

      2. 5

        FWIW I actually agree entirely with you here. My intent was not to oversimplify, but digging in on these details would have led the post down a very long rabbit hole and I was trying to make it a relatively approachable read, rather than a deeeeeep dive. (I think both are valuable.) And the two parts are really intended to hang together, because:

        That 30% isn’t 30% where you don’t have to think about memory safety, it’s 30% where you have to think about the memory safety of unsafe interfaces.

        …is exactly what the second half is getting at. That’s 100% true, but iff your safe wrappers do their jobs, that’s where that mental overhead all goes. And you have to put in all the rigor there, because otherwise this is exactly right:

        But if that 30% is mostly code that uses the other 70% which you mostly wrote yourself, that 30% of the code probably exposes some of the hardest and most subtle bugs in your codebase.

        The key, to my mind, is that that is also true of the other languages on offer; they just don’t give you any handles for it.

        Again, though, I overall agree with this response.

        1. 3

          Right, I realise it’s probably impossible to go down that rabbit hole and keep the whole thing approachable. I think the trade-offs are worth it regardless of percentages, too – I actually agree with what you wrote entirely. I apologise if I wrote that in terms that were too adversarial. I’m not trying to challenge your point, just to explore that rabbit hole :-D.

          My pet peeve about percentages with this sort of thing is that they tend to hide how hugely variable the difficulty of dealing with unsafe code is inside the very broad field we call “systems programming”. One of the first things I wrote to get the hang of unsafe Rust was a JIT binary translator. That pile of bugs was always like two RETs away from an unsafe block that could panic. The unsafe:safe code ratio was very small but the amount of time (and bugs) I got from interfacing them wasn’t similarly small at all. The interface itself was a very rewarding source of bugs.

          1. 2

            No worries; I didn’t mistake you. I just wanted to be clear about what I was doing: it was either 1,200 words or 10,000.😂

            I think the example you give there is actually the strongest form of the case for “just use Zig”. I still come down on “I would rather isolate to that interface” even though that interface may be hellish, because at least I know where it is, but you’re right that it doesn’t remotely make it easy.

      3. 3

        When I was working on a JVM implementation in Rust, I had to use unsafe for manipulating Java objects on the heap, and my biggest gripe was that the memory model of Rust was not well defined in many cases, effectively repeating what I would have had in C/C++’s case with regard to UB. E.g. data races are permissible in Java.
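
        For what it’s worth, the usual workaround for the data-race point (a guess about your use case, not a claim about your code) is to model racy Java field accesses as relaxed atomics rather than plain reads and writes, since a plain data race is UB in Rust but relaxed atomics are not:

        ```rust
        use std::sync::atomic::{AtomicU64, Ordering};

        // Hypothetical heap slot: Java tolerates benign data races on ordinary
        // fields, but a racing plain u64 read/write in Rust is UB. Wrapping each
        // slot in an atomic with Relaxed ordering keeps the Rust side defined while
        // roughly matching Java's "racy but not undefined" field semantics.
        struct HeapSlot(AtomicU64);

        impl HeapSlot {
            fn load_racy(&self) -> u64 {
                self.0.load(Ordering::Relaxed)
            }
            fn store_racy(&self, v: u64) {
                self.0.store(v, Ordering::Relaxed);
            }
        }
        ```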

        Have there been any improvements in this area since? (I haven’t touched Rust in 2 years, unfortunately.)

        With that said, the tooling has greatly improved (for both C and C++, but of course for Rust as well) and with miri/valgrind/etc I could catch most bugs early in my unsafe blocks.

        1. 1

          JVM implementation in Rust

          Sounds like an interesting project! How do you represent a Java heap in Rust’s type system? Is it a big array of words, or more structured, like: a Java reference is a Rust pointer; a Java object is a Rust struct, etc? I imagine the former since it’s a VM so the Java classes only exist at runtime.

          1. 2

            It’s an array of 64-bit words with a header that points to the class of the given object instance. The JVM’s design is quite cool and the specification is written in a surprisingly accessible way, with many great design decisions (though also some historical baggage); for example, having only single inheritance allows the fields of a subclass to be appended to those of its superclass, so subtyping already falls out of the memory structure.
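
            A rough sketch of the layout described above, with made-up names (field access is where the unsafe inevitably lives):

            ```rust
            // Every object starts with a header slot pointing at its class, followed
            // by 64-bit field slots; a subclass's fields come after its superclass's,
            // so a superclass "view" of the object is just a prefix of it.
            #[repr(C)]
            struct ObjectHeader {
                class: *const ClassInfo, // runtime class metadata
            }

            struct ClassInfo {
                superclass: Option<&'static ClassInfo>,
                field_slots: usize, // total 64-bit slots, including inherited ones
            }

            /// Read field slot `i`; the caller must guarantee `obj` points at a live
            /// object with at least `i + 1` field slots.
            unsafe fn read_field(obj: *const ObjectHeader, i: usize) -> u64 {
                unsafe {
                    let fields = obj.add(1) as *const u64; // fields start after the header
                    fields.add(i).read()
                }
            }
            ```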

        2. 1

          I don’t think so. I think the various initiatives, like the UCG, have made some progress but if anything major has made it to “official” status then I’ve missed it, too.

      4. 2

        A device tree manipulation library, for instance, would likely end up with so much unsafe code it’s not even funny

        As the author of FreeBSD’s dtc, I think you’ve picked a bad example here. The codebase is currently written in C++ and uses safe types (smart pointers, iterators) everywhere except around the file I/O layer. It needs to handle cross references and so would need to use the Rc type in Rust, which is unsafe internally, but I think that’s the only place where you’d need unsafe.

        On the other hand, dtc takes trusted input and processes it, so the benefits from safety are limited to reliability guarantees in the presence of buggy inputs.

        1. 1

          It’s a bad example today, you’re right. My “benchmark” for this comparison was, unfortunately, a similar library I wrote a while back, on a platform whose only C++ compiler was partially C++03 compliant and… not stellar. C++11 was already around so I had some idiomatic prior art to refer to for safe types, but that was about it.

          1. 1

            You’ve made me curious. Was your implementation ever released publicly? I wrote ours in 2013 as part of the removal of GPL’d components from the base system (the original dtc is GPL’d, so I wrote a permissively licensed version that was also intended to be usable as a library, though I don’t think anyone has ever embedded it in another project).

            The original version had a lot of premature optimisation. It had a custom string class that referenced ranges in the input. I eventually moved it over to using std::string because the small-string optimisation in libc++ meant that this generally didn’t increase memory usage and made it much easier to reason about lifetimes (strings are always copied) and even where it did increase memory usage, it was on the order of KiBs for a large DTS file and no one cared.

            1. 1

              No, my implementation was for a proprietary system that was barely more advanced than a fancy bare-metal task scheduler. The device tree compiler was okay. I made the same mistake of using range-referenced strings (in my defense we were pretty RAM-constrained) but it was mostly bump-free.

              Device tree manipulation was the nasty part. We got confident and thought it was a good idea to have the few hotpluggable peripherals managed via the device tree as well, and have them dynamically inserted into the flattened device tree. There were a few restrictions that seemed to make the problem a lot more tractable than it was (e.g. we knew in advance which nodes in the tree could get new children, and the only hotpluggable peripherals were developed in-house, so we didn’t have to reinvent all of the Open Firmware standard), but it was definitely a bad idea. This was all pre-C++11 code (I think it was around 2012, 2013?), so no native unique_ptr/shared_ptr, no rvalue references and so on.

    2. 6

      The second point — ability to wrap unsafe code in safe APIs — is especially important.

      The ability to separate high-risk “hold my beer” code from boring glue code has many nice side effects.

      In a team, you can agree on who writes and reviews unsafe code, and not be afraid that your junior team members will cause some horror heisenbug (there can always be other types of higher-level bugs in safe code, but safe code doesn’t cause them; you’d have them anyway, on top of the memory issues).

      People often move abstractions built from unsafe code (like clever data structures, concurrency primitives, OS or C library wrappers) into separate crates. This makes it easier to thoroughly test and fuzz just the unsafe code, without having to test a larger project as a whole (once you ensure your safe API is indeed upholding safety invariants, the compiler will prove all uses of that code are safe too).
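
      A minimal sketch of that pattern, using the textbook split_at_mut example rather than anything novel: the unsafe block lives in one small function that can be hammered with tests and Miri, and every caller gets a compiler-checked safe interface.

      ```rust
      // Re-implementation of std's split_at_mut, shown only to illustrate the
      // "small unsafe core, safe API" structure. The assert is the invariant the
      // unsafe code relies on; everything outside this function is checked by the
      // compiler as ordinary safe Rust.
      fn split_at_mut<T>(slice: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
          let len = slice.len();
          assert!(mid <= len, "mid out of bounds");
          let ptr = slice.as_mut_ptr();
          unsafe {
              (
                  std::slice::from_raw_parts_mut(ptr, mid),
                  std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
              )
          }
      }
      ```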

      1. 11

        and not be afraid that your junior team members will cause some horror heisenbug

        When I was working in C++, oftentimes the junior team members had enough self-awareness not to be too clever. The problem was when more senior members (me included) thought they knew what they were doing, only to be proven wrong half a year later when someone triggered a bug they had left behind via some ‘unrelated’ change. I have to say that, looking back at that time after working professionally in Rust for a few years, it all looks surreal.

        What I’m trying to say is that, junior or senior, this can happen to all of us when we write unsafe code. So it’s better to write it as rarely as possible (instead of all the time, as with C/C++/Zig).

        1. 8

          Oh god, peak clever, that unsettling part of your career when you’re halfway between too young to be clever and too old for that crap, is the worst. I’ve written my worst code during my peak clever years. Thankfully about half of them were wasted at Big Corp, where I never had to write anything useful, important, or significantly more challenging than a linear search, for that matter, so there was only so much I could screw up. (And of course I still screwed it up!)

          It’s not just the frame of mind you’re in, it’s the whole context around it. You generally hit peak clever around the time folks start moving into management, so even if you’re not the team lead, you’re often senior enough that most people on the team are too early in their careers to tell if your code is really good or really bad, and there are just not that many people more senior than you who are patient enough to tell you how bad it is. They know there’s no reasoning with people at peak clever anyway, they’ve been through it and they want to punch themselves in the face with a chair every time they remember what they were like.

          That’s why every language feature that fails silently and depends on code review or on programmers knowing their limits is doomed to fail: sooner or later it’s going to end up in the hands of someone who’s at peak clever, who will completely screw up and will be too self-absorbed to tell they’ve screwed it up if it doesn’t blow up in their faces.

          I don’t think I was colleagues with anyone here when I was at peak clever but if I was, I, uh, I’m really sorry :(.

      2. 9

        Yep! My really spicy take is that Rust is more helpful in the cases where people claim it isn’t worth it because of the use of unsafe! The ability to provide a safe wrapper around unsafe APIs and isolate it is just incredibly valuable.

      3. 3

        This certainly tracks with my experience. I see relatively small amounts of unsafe code down in the bowels, and it gets extra scrutiny. But it’s very natural to build safe APIs on top of it, and use that in the rest of your program. This may be simply because typing unsafe is an extra, deliberate step that you have to do; the language’s syntax itself encourages the pattern.

    3. 5

      For the Roc programming language, the developers attempted to write everything in Rust. But unsafe Rust was too much of a liability for the runtime, so they rewrote it in Zig. The compiler remains written in Rust.

      https://github.com/roc-lang/roc/blob/main/FAQ.md#why-does-roc-use-both-rust-and-zig

      Another language runtime that switched from unsafe Rust to Zig:

      https://zackoverflow.dev/writing/unsafe-rust-vs-zig/

      If you didn’t click on the links, here’s the Roc experience:

      • We struggled to get Rust to emit LLVM bitcode in the format we needed, which is important so that LLVM can do whole-program optimizations across the standard library and compiled application.
      • Since the standard library has to interact with raw generated machine code (or LLVM bitcode), the Rust code unavoidably needed unsafe annotations all over the place. This made one of Rust’s biggest selling points inapplicable in this particular use case.
      • Given that Rust’s main selling points are inapplicable (its package ecosystem being another), Zig’s much faster compile times are a welcome benefit.
      • Zig has more tools for working in a memory-unsafe environment, such as reporting memory leaks in tests. These have been helpful in finding bugs that are out of scope for safe Rust.

      Summary from the second project (bytecode interpreter and garbage collector):

      • Zig is safer and faster than unsafe Rust.
      • Unsafe Rust is hard. Especially aliasing rules.
      • Zig is faster: From the benchmarks, the Zig implementation was around 1.56-1.76x faster than Rust.
    4. 3

      Memory-safe sections can still experience faults caused by memory unsafety elsewhere.

      1. 8

        That means the unsafe code is unsound, which means it is broken. Unsoundness is one of the few things (possibly the only one) considered severe enough for the Rust stdlib to make breaking changes over.

      2. 5

        That’s like saying that a line of Java code can still cause a segmentation fault. Technically true, but your model of Java assumes a working JVM. Similarly, a line of safe Rust code can cause a segmentation fault, but we assume a working Rust compiler and sound unsafe code. Note that unless you implement foreign function interfaces or other specialized or low-level things, you never need unsafe. As in, you can write Rust code for years and never use the keyword even once.

        1. 2

          While I agree with you about the general message, the correctness guarantees of the JVM and of the unsafe Rust found in the transitive closure of all your code and its dependencies are quite different. And you only need to go off the happy path once; from then on you are on a UB road to hell.

          1. 2

            Yes, they are quite different. But once you start looking at dependencies like tokio as a runtime, similar to the JVM, the concepts connect together. There may be plenty of unsafe in tokio, but I trust that they are as competent as the JVM guys. On top of it, I write and run only safe Rust. And as a bonus, I can choose whatever runtime I want, since it’s compiled into the binary I ship. So I’m not bound to whatever the distro chooses for me.

        2. 1

          Sure, unsafe Rust code is like C or C++ in terms of correctness. Note that this incentivizes exactly the same machismo which C and C++ professionals channel when they confidently emit unproven code.

          1. 4

            I think the Rust culture is different. unsafe is something to be minimized and scrutinized, to be kept contained. I haven’t yet met anyone who is proud to churn out unsafe Rust code when it was possible to do it in safe Rust instead. Whereas I know plenty of people who are proud of their C skills (yet their applications segfault). I certainly hope that people avoid unsafe as much as possible.

            1. 1

              I guess I should have phrased my first post better. An unsafe block can corrupt memory in such a way that all access faults will occur in safe code. The article’s claims about 70% or 30% of code are spurious given that one unsafe block can spoil an entire otherwise-safe application.
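
              A contrived sketch of what I mean: the UB is committed in the unsafe block, but the visible fault, if any, surfaces later, in perfectly safe code.

              ```rust
              fn main() {
                  let mut v: Vec<u8> = Vec::with_capacity(4);
                  unsafe {
                      // BUG: claims 1024 initialized elements that were never written
                      // and that exceed the allocation. This is already UB.
                      v.set_len(1024);
                  }
                  // "Safe" code: the bounds check passes because len() is now a lie,
                  // so the read may touch unallocated memory, crash, or return junk.
                  println!("{}", v[1000]);
              }
              ```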

              1. 4

                One CPU bug, or OS bug, or browser bug can cause any line of JavaScript to fault. It can spoil everything. It’s nothing new that we build safe building blocks on top of unsafe ones. The issue you raise exists in literally every single piece of software that provides any kind of safety against faults.

                Let me give you another example: in C++, there are references (&) that are not nullable and provide a good way to design an API that takes a non-NULL value. Unfortunately, there are many ways for a reference to be NULL anyway if the code is buggy. I debugged plenty of NULL reference segfaults. Yet the addition of references to C++ was an unequivocally good thing, as otherwise everything would have to be passed as a pointer and the programmer couldn’t signal that an API is only for non-NULL values. References provide a safety that, while not perfect, makes life easier and better.

          2. 3

            This is not the case for two reasons. Firstly, unsafe Rust is still less dangerous than C and C++ - like, it is dangerous, but the rules to make stuff ‘work’ are massively less problematic than with C and C++, and getting any given degree of assurance that the code is ‘good’ is more tractable in the unsafe Rust case. Secondly, unsafe Rust is compositionally and incrementally showable as ‘good’ in ways that C and C++ are simply not. For C and C++ you need a new from-scratch assurance process for the combination each time you (for example) use two assured modules together in combination, whereas for unsafe Rust you can do a hugely smaller and more tractable analysis of simply the interface points plus the specific guarantees that the ‘safe’ interfaces require. And for C and C++ you in general need a new from-scratch assurance process every time you make any changes at all, whereas for unsafe Rust you in general can use an incremental assurance process.

            1. 6

              Firstly, unsafe Rust is still less dangerous than C and C++

              I disagree here. One example that we hit in migrating some code from C to Rust involved reading a device register. In Rust, that read was written in an unsafe block, and both the C and Rust versions then returned an enumerated type. The Rust version followed the advice to put as little as possible in the unsafe block and so did nothing other than the MMIO read there. Both versions then checked that the result was a valid enumeration value.

              In the C version, an enum is just some syntactic sugar over integers and so the check was fine. In the Rust version, enumerations are type safe and it’s undefined behaviour for the unsafe block to return an invalid value, so the compiler elided the check. When the hardware gave an out-of-range value, the code behaved wrongly. This was on an embedded device and, as a result, the Rust version was vulnerable to a glitch injection attack that the C version was not.
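
              In case it helps, here is roughly the shape of it (register address, enum, and function names are made up, not the actual code):

              ```rust
              #[derive(Clone, Copy, PartialEq)]
              #[repr(u32)]
              enum DeviceState { Idle = 0, Busy = 1, Error = 2 }

              // The shape of the bug: converting to the enum first makes any
              // out-of-range hardware value instant UB, so the compiler may assume
              // the subsequent check can never fail and drop it.
              fn read_state_buggy(reg: *const u32) -> Option<DeviceState> {
                  let state: DeviceState = unsafe { core::mem::transmute(reg.read_volatile()) };
                  if (state as u32) <= 2 { Some(state) } else { None } // may be elided
              }

              // Safer shape: keep the raw integer until after the range check, so the
              // compiler cannot assume a glitched value is impossible.
              fn read_state(reg: *const u32) -> Option<DeviceState> {
                  let raw = unsafe { reg.read_volatile() };
                  match raw {
                      0 => Some(DeviceState::Idle),
                      1 => Some(DeviceState::Busy),
                      2 => Some(DeviceState::Error),
                      _ => None,
                  }
              }
              ```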

              Unsafe Rust is more constrained in what you are allowed to do than C, but not more constrained in what you can write. This means that it’s easier to introduce undefined behaviour from unsafe Rust than it is in C.

              Secondly, unsafe Rust is compositionally and incrementally showable as ‘good’ in ways that C and C++ are simply not.

              Incrementally, sure. Compositionally? No. This was the point of the RUDRA paper: unsafe Rust can easily come with an implicit contract (either accidentally or documented in the API) that callers must respect. Two different pieces of unsafe Rust code may have incompatible contracts. When we did an analysis of Rust for use at Microsoft, there was an issue on the Rust bug tracker for this which had some nice examples of things that can be expressed in unsafe Rust that are each fine in isolation but not when used together. There are also trivial things like the unsafe package that wraps /dev/mem and lets you poke your entire address space and completely undermine memory safety: real (non-malicious) Rust code would never do something that overtly stupid, but any library that uses unsafe can be guaranteed only to expose interfaces that the library author thinks are safe, not ones that actually are, and especially not ones that actually are in the presence of other arbitrary uses of unsafe. The Rust standard library has had errata as a result of the Rust core team using unsafe incorrectly, so you’ll forgive my skepticism over claims that I get properties I can reason about from random cargo packages’ use of unsafe.

            2. 4

              I hear this a lot, and I’m not sure to what extent it’s true.

              Unsafe Rust still has to obey aliasability-xor-mutability, and violating it leads to UB. In other words, there are extra rules and more opportunities to mess up unsafe Rust and cause UB. IIRC this is why some find it better to use C or Zig in some instances.
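
              A tiny illustration of that extra rule (Miri flags this, while nothing here would even raise an eyebrow in C):

              ```rust
              fn main() {
                  let mut x = 0u32;
                  let p = &mut x as *mut u32;
                  unsafe {
                      let a = &mut *p;
                      let b = &mut *p; // reborrowing through p invalidates `a`
                      *a += 1;         // UB under the aliasing rules: `a` is no longer valid
                      *b += 1;
                  }
              }
              ```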

              Also, Rust isn’t immune to those compositionality problems that C and C++ have. Plenty of unsafe code relies on invariants that are upheld by the safe code around it, and when that safe code behaves unexpectedly, it can cause the unsafe code to cause UB. This means that if two modules interact, they could trigger logic problems in each other which trigger UB in each other’s unsafe blocks.

              It helps to see an unsafe block as a somewhat leaky abstraction, then all the normal downsides of leaky abstractions are also recognizable.

              You might be right in just saying it’s better. I don’t think I’d say it’s “massively” or “hugely” better, but to each their own.

              1. 3

                Like, unsafe Rust isn’t just ‘the rules of C or C++ but with the addition of manually checking for aliasability-xor-mutability’ - the rules are actually very different. The rules of unsafe Rust are fewer and much more comprehensible and checkable and tractable than the rules of C and C++. Most C and C++ programmers simply do not understand the ‘rules’ of the language they program in, and so assume that whatever they do must be what the rules are, and that whatever code they produce is good, when in fact it is not.

                And please read what I wrote. Two Rust codebases that use unsafe blocks do each have to have checked their own unsafe blocks for ‘goodness’ already, but in the combined code you only have to check the API invariants for the unsafe code, as I said, rather than run a new from-scratch unsafe analysis of the composition of the codebases. So the situation is precisely as tractable as your invariants, and gives nice composition and incrementality properties. If you build an API with intractable invariants, you will have a bad time, or if you simply write bad unsafe code but pretend that you checked it, you will have a bad time, but as long as your unsafe code has been checked in isolation and your invariants for the API are tractable, the process will be tractable.

                The process for ‘checking’ C or C++ code, on the other hand, is entirely intractable, and you have to recheck from scratch every time an incremental change is made or checked codebases are combined.

                Edit: that is, the only things that ‘leak’ out of unsafe blocks in Rust that follow ‘the rules’ are the API invariants and any ‘rule breaking’ intended functionality (eg you could write an unsafe block to deliberately mess with some other part of the program’s function by carefully deliberately breaking certain of the rules, but it would be doing that because that was the intended functionality - eg if you wanted to write a crate that leaked memory or allowed UAFs on purpose, for instance).

                1. 3

                  The rules of unsafe Rust are fewer and much more comprehensible and checkable and tractable than the rules of C and C++. Most C and C++ programmers simply do not understand the ‘rules’ of the language they program in,

                  This may be partly true in practice, but the rules of unsafe Rust are currently unwritten,¹ and thus no-one truly knows them.

                  The preeminent work on how to write unsafe Rust correctly, The Rustonomicon: The Dark Arts of Unsafe Rust, is not a true specification or standard (that defines what the rules are) but more like some experienced wizards collecting their advice for apprentice wizards into a draft of a textbook, essentially their educated guesses, evolving as its authors themselves learn more about how to practice their “Dark Arts” relatively safely.

                  ¹ people are working on writing them

    5. 2

      I wonder if the same dynamic that we see on the “type-safety” side is playing out here.

      You would think that you need to prove everything 100%, or that you need 100% code (and branch) coverage in your tests, to be safe. Empirically it turns out that a significantly smaller amount of test coverage is effective (and beyond a point, more hurts more than it helps), that programs written in dynamically typed languages are no more buggy than those written in statically typed languages, and the same goes for programs in type systems with holes in them.

      It could of course be that memory safety is different, but maybe it really is true that most bugs are shallow (matches my experience), and thus even a fairly superficial scan will be effective at weeding them out.