1. 30
  1.  

  2. 7

    I totally get why the author is frustrated. The tone can be rough, especially when it comes to security issues.

    But if you provide an abstraction that makes memory safety issues extinct (like Rust, like SpiderMonkey, like WebAssembly, like …) then you MUST treat these bugs as severe security issues. That’s only fair.

    boats’ assumption is that the programmer is the attacker here, since they’ve written bad code. But I can totally see a real world application having user-influenced input in both parameters of str::repeat.
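
    As a purely illustrative sketch of that scenario (the function name, parameters, and the web-request framing are all invented here, not taken from any real codebase):

        // Hypothetical: both arguments to str::repeat are user-influenced.
        // On a Rust version affected by the 2018 str::repeat capacity
        // overflow (reportedly fixed in 1.29.1), a large enough
        // pattern.len() * count could overflow the allocation size.
        fn format_divider(separator: &str, width: usize) -> String {
            // `separator` and `width` stand in for values parsed from an
            // untrusted request.
            separator.repeat(width)
        }

        fn main() {
            let separator = String::from("-=");
            let width: usize = 20;
            println!("{}", format_divider(&separator, width));
        }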

    1. 6

      Exactly. The programmer is not the attacker that most people have in mind when issuing CVEs. In general, Rust has a ton of lingering issues that can be leveraged by exploit authors. They used to run the Underhanded Rust competitions, where lots of safe Rust was demonstrated to be exploitable. They stopped running that competition, and Rust people tend to sweep its results under the rug. Want to exploit real Rust applications? Start with that issue list.

      The Rust project does its best to work through that list, but a lot of that stuff is blocked on fundamental architectural changes that have been in progress for years. If it’s your explicit goal, it’s not hard to write a PR that never introduces “unsafe” blocks yet triggers exploitable behavior via these, and other less-known, vectors.

      You can bet that intelligence agencies and certain exploit developers know about this. Only a few Rust people seem to have a sense of real-world exploit ecosystems. Downplaying risks like this just perpetuates myths that ultimately get journalists and other at-risk folks executed. These attacks currently require a lot more skill to carry out because it’s a new attack target, and Rust has totally increased the cost of exploitation, but I think a language that so many at-risk individuals are going to be relying on for things that impact their personal safety has a duty to be clearer about the real risks. Rust is a flawed tool because it was made by humans. We should be honest about that so people don’t make bad bets that get people killed.

      In real-world attacks, bugs get chained together. An unlikely-seeming integer overflow is often triggered through other bugs that seem low-impact and unlikely on their own. Defense in depth is what keeps us safe. Splitting hairs for this stuff saves lives.

      1. 3

        I was with you until you almost implicated Rust developers in the killings of journalists :-)

        I agree that Rust has increased the cost of exploitation significantly, but there are also actors out there who are willing (and likely have already been willing) to accept that cost and do it anyway.

        1. 1

          By increasing awareness of the real risks we are exposed to as members of the Rust ecosystem, we increase the chances that we will discover and prevent such attacks. I see efforts to downplay severity as increasing exploit viability and shelf life. Abusive governments around the world have a clear track record of using exploits to unmask investigative journalists who end up jailed and murdered. If I were selling Rust exploits to those murderers, I’d be happily cheering boats on, especially as Rust privacy and crypto code is being trusted by more and more folks in sensitive situations.

    2. 11

      But it’s important to consider the conditions necessary to achieve this: a memory bug can only exist in the program if the programmer calling str::repeat has passed a string and an integer which could, during program execution, have values that, when multiplied together, overflow the pointer-sized integer on that platform. This is the crucial point: the programmer must implement code in a certain way for any exploit to be achieved; in the language these CVEs use, the “attacker” is the programmer.
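
      To put concrete numbers on the quoted condition, here is a minimal, self-contained sketch; the values are made up purely to show the arithmetic, and checked_mul is used so the wraparound is reported instead of happening silently:

          // The quoted condition: pattern.len() * count must exceed usize::MAX.
          fn main() {
              // Deliberately absurd, made-up values; no real string is this long.
              let pattern_len: usize = usize::MAX / 2 + 1;
              let count: usize = 2;

              // checked_mul reports the wraparound instead of letting it happen.
              match pattern_len.checked_mul(count) {
                  Some(total) => println!("total size: {} bytes", total),
                  None => println!("pattern_len * count overflows usize here"),
              }
          }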

      Does the author think that this isn’t true for the majority of other vulnerabilities? Does the author think that programmers in other languages are deliberately introducing vulnerabilities, and that Rust protects users from programmers who do so? Does the author think that the introduction of vulnerabilities comes with an acknowledgement from the authors that a vulnerability exists, and that they simply decide not to bother fixing it?

      Rust bills itself as an airtight hatchway. Addressing the fact that the programmer is not an airtight hatchway is the entire value proposition of Rust. When Rust (1) tells you something is safe, and (2) this is proven to be false, (3) Rust has a vulnerability. Q.E.D.

      What the fuck?

      1. -4

        Does the author think

        No.

        I’m coming around to the opinion that this is why Rust is popular: It promises stupid programmers that they don’t have to stop being stupid, they just have to use Rust, which will somehow detect their stupidity.

        1. 6

          If I disregard the unnecessary condescending tone of your comment, I actually agree with the second part! Although I also believe that there are very few people (and even far fewer teams) who are smart enough to consistently write secure C and C++.

          1. -1

            If I disregard the unnecessary condescending tone of your comment

            I don’t think there’s any shame in being stupid as long as it’s temporary. We were all stupid at one point, with stupid ideas about doing things, and we can get better. I appreciate it could be said with more tact; pg suggested the term “blub” programmer, but I’m not sure that’s any better, since nobody wants to be a “blub” programmer either.

            Although I also believe that there are very few people (and even far fewer teams) who are smart enough to consistently write secure C and C++.

            That might be true, but I think it has more to do with the fact there are very few people who are smart enough to consistently write secure software, and very little at all to do with C or C++.

            One reason seems to be that many people think that security is some kind of feature or boolean state they can acquire, or that it’s some kind of continuum – that some languages or tools can be “more secure” than others in some absolute way. This is extremely dangerous thinking; it produces exactly the kinds of wrong results you observe, and it creates the same kind of unnecessary friction towards experts (or non-blub programmers) who want to talk about actually solving them. Real security is about identifying what you want to protect against and developing confidence that you (or your customer) are indeed protected from those things, and you have to do it continuously: looking at new attack vectors, understanding how they could affect your domain, and understanding what kinds of things you can do to protect against them.

            If security is a process or discipline that you have to adopt, it is likely to involve unpopular processes and behaviours; the sheer amount of insecure software out there suggests as much. That is to say, if you want to write secure software, you will not be able to use Rust or C or C++ in the way that “most people” use them, and a lot of the social leverage you have for other programming-related things will not be useful, because those are the people writing the insecure software. This can be lonely, and is not attractive to many people, but the alternative is that they’re going to remain stupid in this sense, making bad decisions about security.

            Or they can just choose a language with some security stickers on it.

      2. 7

        We call crypto “broken” whenever a cryptosystem does not live up to the security guarantees its authors published. So if a system that promises to require 2^512 attempts to brute force a key only turns out to require 2^511 attempts, we consider it broken even if 2^511 is still better than others.

        That’s the right thing, because it’s true; if someone who really needed that security guarantee used it, they’d be in trouble. (Please ignore that I chose numbers no one could possibly need…)

        I think we should hold programming languages to the same standard. If they make a guarantee and it turns out not to be true, we should record that as a vulnerability even if the language is still better in respect to a particular threat than most others. Someone who needed that guarantee could try to use it and be harmed.

        A CVE is just a name for a particular kind of problem report. Not a value judgement.

        1. 1

          We call crypto “broken” whenever a cryptosystem does not live up to the security guarantees its authors published. So if a system that promises to require 2^512 attempts to brute force a key only turns out to require 2^511 attempts, we consider it broken even if 2^511 is still better than others.

          I don’t believe this is a majority opinion and in any event, such promises are extremely rare: Every cryptosystem is vulnerable to birthday attacks, so every 512-bit hash function can be “broken” by 2^511 tries, and usually less.

          Bruce Schneier suggests:

          Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute-force.

          and I like that definition because it’s a little more useful and accessible. I can appreciate that in some circles referring to something as “broken” in the sense you describe is common, and it’s certainly academically interesting, but outside of those circles that definition is neither interesting nor common, even amongst professional cryptographers.

          1. 0

            In the Schneier quote you cite, he uses the word “breaking.” That is an example of exactly what I was referencing when I said “broken”.

            1. -1

              I don’t understand what you’re saying. Can you be clear?

              I thought you were saying (for example) that SHA512 is considered “broken” by people who build and/or use cryptosystems.

              1. 0

                I mean that when we reference fruitful results from the activity Bruce Schneier was describing when he said:

                Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute-force.

                the past tense way to reference those results is to say that a cipher has been broken.

                When he says that “a cipher can be exploited with a complexity less than brute-force” he is saying that the cipher violates one of its security guarantees. That doesn’t make it useless, and it may still be more useful than comparable alternatives. But it is said, at that point, to have been broken.

                Similarly, a CVE is a precise name for a security problem that has been reported in a piece of software. It’s not a declaration that the software is useless.

                1. -2

                  When he says that “a cipher can be exploited with a complexity less than brute-force” he is saying that the cipher violates one of its security guarantees.

                  I would not say that is accurate, but I understand what Bruce says because he is clear: It’s you I don’t understand.

                  Bruce absolutely would not consider a 512-bit cipher “broken” if the plaintext could be recovered in 2^511 tries, even though he probably agrees that “broken” has a sense sometimes used by academics that is not used by anyone else.

                  Similarly, a CVE is a precise name for a security problem that has been reported in a piece of software. It’s not a declaration that the software is useless.

                  Maybe you should choose an example that you understand better or that I understand worse.

                  1. 2

                    It’s you I don’t understand.

                    I will cheerfully take ownership of the fact that I will never be able to compose a piece of writing which will be easily understood by an audience who claims not to understand that “broken” is the result of an exercise in “breaking” something.

                    Maybe you should choose an example that you understand better or that I understand worse.

                    I should have left the example off entirely. I thought I made up such a vague, ridiculous, obviously phony example that no one reading it would be even slightly tempted to argue about its details. You have proven me wrong. Thank you.

                    1. 0

                      I should have left the example off entirely.

                      I agree. It’s extremely confusing when non-experts (politicians) talk about something in a manner similar enough to the way experts talk about it, but are not actually experts.

                      1. 2

                        If a “non-expert” confuses you, I would posit that you are neither an expert nor are you equipped to judge expertise.

                        Not that it matters, but in the space where my example was, I had initially written a few sentences about reduced-round compromises being described as “breaks.” I changed it to a made-up, contrived example hoping to stay out of these specific weeds.

                        Since you are an expert, I am sure that I do not need to link Schneier’s or Kelsey’s papers that use that exact term for you. But if I’ve over-estimated your expertise and you’d like to go read them, I remember at least one from the AES competition and at least one from the SHA3 competition. Your favorite search engine or nist.gov can surely help you find them.

        2. 3

          I see where withoutboats is coming from here, but I disagree with:

          the programmer must implement code in a certain way for any exploit to be achieved; in the language these CVEs use, the “attacker” is the programmer. That is to say: it rather involves being on the other side of an airtight hatchway.

          Rust’s claims for safety are stronger than other languages, and are described in the Rustonomicon:

          One may wonder, if BTreeMap cannot trust Ord because it’s Safe, why can it trust any Safe code? For instance BTreeMap relies on integers and slices to be implemented correctly. Those are safe too, right? The difference is one of scope. When BTreeMap relies on integers and slices, it’s relying on one very specific implementation. This is a measured risk that can be weighed against the benefit. In this case there’s basically zero risk; if integers and slices are broken, everyone is broken. Also, they’re maintained by the same people who maintain BTreeMap, so it’s easy to keep tabs on them.

          And IMO, the CVE withoutboats points to breaks this promise: I could very well pass a well-formed usize from the user of, say, a web backend, and still get unsafe behavior. Rust’s promises make me welcome users’ gifts from the other side of the hatchway, even if they are big horses, as long as those horses are made of usize, because Rust told me it would handle the little soldiers getting out of them for me, and in this case, it fails.
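
          For what it’s worth, here is a minimal sketch of the trust boundary that Rustonomicon passage describes; the type and values are made up. A completely safe but contract-violating Ord may make BTreeMap return wrong answers, but it must never cause memory unsafety:

              use std::cmp::Ordering;
              use std::collections::BTreeMap;

              // A safe but deliberately broken Ord: it claims every key is equal
              // to every other key, violating Ord's contract without any unsafe.
              #[derive(PartialEq, Eq)]
              struct BadKey(u32);

              impl PartialOrd for BadKey {
                  fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
                      Some(self.cmp(other))
                  }
              }

              impl Ord for BadKey {
                  fn cmp(&self, _other: &Self) -> Ordering {
                      Ordering::Equal // nonsense, but 100% safe
                  }
              }

              fn main() {
                  let mut map = BTreeMap::new();
                  map.insert(BadKey(1), "one");
                  map.insert(BadKey(2), "two"); // replaces "one": the keys compare equal
                  // The result is logically wrong (one entry instead of two), but
                  // BTreeMap guarantees this stays a wrong answer, not memory unsafety.
                  println!("{} entry/entries", map.len());
              }

          The str::repeat case feels different precisely because the arithmetic lives inside the standard library, the “one very specific implementation” that the quote says is being trusted.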

          1. 2

            Rust’s claims for safety are stronger than other languages

            Than some other languages.

            Not all.

            And even then, only with a very, very narrow definition of safety that doesn’t affect most programmers or most programs.