1. 25

  2. 12

    I like to say that of course Rust prevented Heartbleed, because we don’t currently support wrapping malloc ;). That’s of course a bit glib.

    While it’s true that Rust gives you a lot of assistance with writing safe code, unsafe exists, and, as a systems language, we need to give you access to the underlying machine. So Rust will allow you to do all kinds of stupid things if you vouch to the compiler that you’ll handle the safety aspect yourself. This means that, ultimately, you can do any kind of stupid thing in Rust that you can do in any other language.

    An example: a common problem in C code is buffer overflows, which happen when some code thinks an array has more elements than it does. Because C’s arrays don’t carry a length as part of the type, keeping track of an array’s length is error prone. Rust’s arrays carry their length in the type, and vectors and slices carry theirs at runtime, so it’s much harder for this data to get out of sync. But Rust will let you be stupid if you want to:

    fn bad_stuff(v: Vec<i32>) -> i32 {
        unsafe {
            *(v.get_unchecked(v.len() + 1))
        }
    }

    fn main() {
        let vec = vec![1, 2, 3];
        let val = bad_stuff(vec);
        println!("{}", val);
    }

    The bad_stuff function here uses get_unchecked(), which skips the bounds check, to grab some memory past the end of the vector. Rust will happily compile this code: the unsafe block tells the compiler to trust that you’re using get_unchecked() in a safe way, even though it can’t verify that.
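    For contrast, here’s a sketch of what the checked equivalent looks like: the standard get() method returns an Option instead of blindly dereferencing, so the same out-of-bounds index becomes a None you have to handle rather than a read past the allocation.

    ```rust
    // Checked access: get() returns None for an out-of-bounds index
    // instead of reading past the end of the allocation.
    fn safe_stuff(v: &[i32]) -> Option<i32> {
        v.get(v.len() + 1).copied()
    }

    fn main() {
        let vec = vec![1, 2, 3];
        match safe_stuff(&vec) {
            Some(val) => println!("{}", val),
            None => println!("index out of bounds"),
        }
    }
    ```

    No unsafe block is needed here, because the compiler can see that every path through the code is memory safe.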

    Now, there’s a decent discussion that can be had on this topic: ‘usual’ Rust code will of course be significantly safer than ‘usual’ C code, since you have the compiler’s assistance. Unfortunately, security is often about the edge cases. While it’s true that not wrapping malloc would have prevented Heartbleed from happening, and most C projects don’t wrap malloc… it did happen. Once Rust starts getting used in the wild, we’ll see Rust with security problems, I’m sure of it. That doesn’t mean that Rust has no value, just that we need to be realistic about the tradeoffs that any of our technologies make. Rust is not a panacea, even if it does cure some of your aches and pains.

    1. 5

      The thing that surprises me most about this discussion is the fixation with private keys. So we have one bug that lets you read plaintext traffic. Then we have another bug that lets you read a private key, which lets you… read plaintext traffic… if you can intercept it. The second bug is “much worse” than the first?

      (But I did start by demanding more clarity, so I can hardly complain when someone draws a fine line between A and B.)

      1. 5

        Private key compromises are potentially much, much more serious than any other plaintext leak: they allow you to impersonate the legitimate keyholder.

        In the context of TLS, to mount an effective attack, you would need to get past hostname verification, which would require compromising either the server’s DNS or the victim client’s DNS resolution.

        1. 5

          No compromise required. DNS is a stateless protocol on a trivially spoofable transport.

          1. 2

            I think “trivially” overstates the case significantly. Is this answer substantially incorrect?

            1. 1

              Your link is correct as far as I know. I should probably reword what I said to “often trivial”. In any case, it’s not very technically difficult to exploit. A non-local attacker may need to do a little research into your ISP’s configured resolvers, and will rely on winning race conditions and overcoming a small amount of entropy, which could take a non-trivial amount of time to exploit successfully.

          2. 2

            What can or would you do if you could impersonate the legit keyholder?

            1. 2

              Let’s say you managed to extract the victim’s email server’s private key using Heartbleed or some other means. Now, combined with the publicly available certificate of the email server, you can impersonate the victim’s email server, as long as you can convince the victim’s computer to connect to your server instead of the real one.

              If you’re on a LAN with the victim (e.g. at the same coffee shop), you might be able to spoof their DNS directly. Otherwise, targeted client-side malware is probably the most practical way to mess with someone’s DNS these days.

              Once you can get the victim’s browser talking to your server when they think they’re talking to their email server, you stand up a convincingly faked login page, capture their login/password attempts, then, to allay suspicion, move them along to the real email server, either by changing the DNS back, or reverse-proxying their traffic.

              1. 1

                That sounds like a lot of work. Why wouldn’t I use something like heartbleed to simply steal passwords from the server? Leave the whole interception mess out of it.

          3. 3

            The second bug is “much worse” than the first?

            Depends, natch! With the assumption I’m attacking you:

            How often do you rotate your private key? Every year when you pay your CA tax? Less often? Be honest…

            Are you a good boy who uses ephemeral D-H, or do you use RSA (“forward secrecy-schmecrecy!”)?

            Does bug #2 let me hoover up the private key leaving you none the wiser? Or, at least, does it force you to dig through old packet dumps to notice after the fact?

            And most important, who am I and who am I attacking: a common criminal whacking at gmail? Or a nation-state actor that doesn’t mind throwing an exploit at a valuable target?

            I can hardly complain when someone draws a fine line between A and B.

            I want a fine line too. No one ever defines threat models when talking about this stuff and it makes me angry.

            Depending on who you’re worrying about and what you’re doing to mitigate risk, stealing the keys can be equivalent to reading plaintext. But under other models stealing the keys is far more valuable than just reading plaintext.