1. 35

Rust 1.29.1 was released Sept 25, 2018. The release announcement mentions a security vulnerability. It says something about buffer overflow, which seemed odd to me. There’s a link to an email thread describing it. It’s a typical description of a buffer overflow bug.

This vulnerability is an instance of CWE-680: Integer Overflow to Buffer Overflow[1].

The str::repeat function in the standard library allows repeating a string a fixed number of times, returning an owned version of the final string. The capacity of the final string is calculated by multiplying the length of the string being repeated by the number of copies. This calculation can overflow, and this case was not properly checked for.

The rest of the implementation of str::repeat contains unsafe code that relies on a preallocated vector having the capacity calculated earlier. On integer overflow the capacity will be less than required; the unsafe code then writes outside of the allocated buffer, leading to a buffer overflow.
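
To make the failure mode concrete, here is a simplified, hypothetical sketch of the pattern described above (the function name and details are mine, not the actual standard library code): an unchecked multiplication computes the capacity, and the unsafe copy loop trusts it.

  // Sketch of the bug, not something to run: the out-of-bounds writes are
  // undefined behavior.
  fn repeat_sketch(s: &str, n: usize) -> String {
      // With overflow checks off (the default for release builds), `s.len() * n`
      // can wrap around to a small number; wrapping_mul makes that explicit here.
      let capacity = s.len().wrapping_mul(n);
      let mut buf: Vec<u8> = Vec::with_capacity(capacity);
      unsafe {
          let mut dst = buf.as_mut_ptr();
          for _ in 0..n {
              // Writes s.len() bytes per iteration regardless of `capacity`; once
              // the multiplication has wrapped, this runs past the end of the
              // allocation.
              std::ptr::copy_nonoverlapping(s.as_ptr(), dst, s.len());
              dst = dst.add(s.len());
          }
          buf.set_len(capacity);
      }
      String::from_utf8(buf).unwrap()
  }

A defensive capacity calculation uses a checked multiply instead, for example s.len().checked_mul(n).expect("capacity overflow"), so an overflow becomes a panic rather than an undersized allocation.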

Rust is one of several modern languages lauded as “memory safe” and so this kind of error should be fairly rare. I was curious about the code, and the email thread continues:

While the str::repeat function has been in Rust since 1.16.0, this vulnerability was introduced into the standard library in pull request #48657 [2]. The pull request was merged on March 6, 2018 and was first part of the 1.26.0 stable release on May 10, 2018.

Let’s open up that PR https://github.com/rust-lang/rust/pull/48657/. It looks like this change makes a reasonable optimization. I have no context for what prompted the community to desire this optimization. It’s a good speedup. All seems reasonable still.

Flip over to the files tab to see the diff, and it starts to seem a little strange. This change takes a concise and clear functional-style implementation and rewrites it as a long, complex, and apparently buggy imperative one.


This seems like a fairly niche use-case function, and yet it became the vector for a rather serious bug. It makes me wonder why the function exists in the standard library at all, or why it was desirable to update it. I’m biased toward functional programming, and this seems to “confirm my bias”. Perhaps there’s more to the story? Sorry for stepping in without context.

  • What does the community think of the story of this bug? Is there more to it?
  • When are optimizations which reduce readability justified?
  • What values should a programming language’s standard library prioritize?
  1. 22

    This seems like a fairly niche use-case function, and yet it became the vector for a rather serious bug. It makes me wonder why the function exists in the standard library at all, or why it was desirable to update it.

    Everything seems niche until you use it. Then you discover your program’s performance is bottlenecked on some string library function. You can either rewrite the code yourself, one off, with a high probability of disaster, or fix the library and hope others will contribute to making it robust.

    1. 1

      Would you name a use-case where you need 100 repetitions of a string in a buffer and generating the buffer in 428.31ns is too slow but generating it in 52.895ns is fast enough? This request is a little tongue-in-cheek, but if you are willing to share such a use-case, I’ll learn from you.

    2. 16

      This change takes a concise and clear functional-style implementation and rewrites it as a long, complex, and apparently buggy imperative one.

      True, but the numbers in the PR indicate that the new version was faster as n grew. I don’t know how often str::repeat is used, but (and this ties in with the point you make about values) when I use standard library functions in Rust, I expect them to be as performant as possible.

      What does the community think of the story of this bug? Is there more to it?

      Every software project has stories like this. In fact, I believe that one of the remote holes in the OpenBSD install was exactly this, an integer overflow in a calloc. What’s important is that the project react quickly and responsibly; I feel that the Rust team did that here.

      When are optimizations which reduce readability justified?

      In the standard library, I think they are almost always justified, because that code will be executed much more often than it’ll be read. If not, you might end up like OCaml with 2-3 competing standard libraries.

      It’s a shame that there was an issue with str::repeat, but for Rust, this is an uncommon scenario so far. How many functions were rewritten from a simple, but slower, style to a more complex, but faster, style without problems? Hopefully, the story of str::repeat is the zebra.

      What values should a programming language’s standard library prioritize?

      That depends on the programming language. In Rust, I think that safety and speed are the qualities that should be optimized. This has guided the design of Rust for many years: even before 0.1 was released, Rust was GC’ed, but the GC was removed in favor of the current ownership system, because it did not meet the performance expectations that people had of Rust.

      1. 7

        In fact, I believe that one of the remote holes in the OpenBSD install was exactly this, an integer overflow in a calloc.

        Kind of, but the opposite: a caller in sshd was hand-multiplying numbers for malloc instead of using calloc.

        1. 1

          you might end up like OCaml with 2-3 competing standard libraries.

          Is that the story of how the split happened? Inertia due to readability concerns?

          1. 6

            The Jane Street standard library partly came about because common list functions (e.g., map or filter) in the OCaml standard library are not tail-recursive. For many functions, straight recursion is simpler than tail-recursion, but the folks at Jane Street felt that avoiding stack overflow errors was justification enough to have a slightly more complex implementation.

            1. 1

              Thanks.

          1. 1

            They saw it coming, and yet the reduced readability of the code still obscured the bug from everyone.

          2. 11

            This seems like a fairly niche use-case function

            Ehhhh I’d really hesitate to engage in “I’ve never needed this, so it must be trivial” thinking. The second you touch web programming, particularly for things like template generation, your program’s hot spots are instantly “niche” string functionality you’d never have thought twice about in other domains. Also, IIRC Ruby got a 30%+ speedup to Rails boot times out of a similar string optimization. Fast string manipulation can matter a lot.

            What does the community think of the story of this bug? Is there more to it?

            Rust’s overflow handling story isn’t as good as it could be, is the short of it.

            When are optimizations which reduce readability justified?

            When the tradeoff is such that worse performance would result in the code failing to meet its goals. Consider that the Rust stdlib’s primary goal isn’t pedagogical: it’s not aiming to provide nice and readable sample implementations for learning purposes. The Rust stdlib’s goal is to provide production-quality, optimized versions of whatever it contains, such that all Rust programs can reasonably be expected to use stdlib functions whenever they’re available.

            If the stdlib contained implementations that were nice and readable but not as fast as they could be, programs would be encouraged to roll their own re-implementations, repeating subtle bugs throughout the ecosystem and making safety harder. “Weak batteries included” is barely any better than “batteries not included” in standard language libraries.

            1. 1

              Also, IIRC Ruby got a 30%+ speedup to Rails boot times out of a similar string optimization. Fast string manipulation can matter a lot.

              Do you have a link to anything about this? I’d be interested to learn what the change was, and how it had this kind of impact.

              When are optimizations which reduce readability justified?

              When the tradeoff is such that worse performance would result in the code failing to meet its goals.

              I think you’d agree that adding a buffer overflow bug to the standard library of a language marketed as memory safe would also result in the code failing to meet its goals. What is the tradeoff between readability and optimization?

              • Optimized code:
                • Users of low-powered computers can make use of the end product.
                • Costs to companies from compute costs are reduced.
              • Readable code:
                • Code is more likely to be correct.
                • All users are safer from vulnerabilities.
                • Costs to companies from vulnerabilities are reduced.

              These seem pretty evenly matched until I look at it from a consequence vs likelihood perspective. The consequence of vulnerabilities is in the range of impacting national elections, creating global botnets, and permanently infecting device firmware such as your USB or SSD controller. I don’t think the consequence of slow code compares to that.

              • Slow code is moderate-likelihood and low-consequence.
                • Optimizing further reduces the consequence, while increasing the likelihood of vulnerabilities.
              • Vulnerable code is medium-likelihood and high-consequence.
                • Improving readability reduces the likelihood of bugs without impacting their consequences, while increasing the likelihood of slow code.

              You may still disagree, but given the risk, and Rust’s marketing as a memory-safe language, I think the standard library should strive for safety over optimization and encourage its community to do the same throughout the ecosystem.

              1. 1

                When are optimizations which reduce readability justified?

                When the tradeoff is such that worse performance would result in the code failing to meet its goals.

                On the flip side, this is a good justification for Rust having tools like Frama-C, or static analyzers that catch these kinds of bugs in unsafe code. Then developers can drop down to unsafe, optimize for performance, run the analysis, see no warnings, and still hit few of these problems.

              2. 8

                You’re looking at a buffer overflow coming from an unchecked “unsafe” call, which originally allowed for a great speedup by just raw-copying using the unsafe ptr::copy_nonoverlapping function. This is where you will always see potential security bugs: in unsafe code used to improve speed.
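
                For contrast, a safe baseline looks something like this (a sketch of the style, not the pre-PR code): String handles bounds and reallocation, at the cost of the bookkeeping the raw-copy version avoids.

                  // No unsafe code and no capacity arithmetic to get wrong;
                  // String grows the buffer as needed on each push_str.
                  fn repeat_safe(s: &str, n: usize) -> String {
                      let mut out = String::new();
                      for _ in 0..n {
                          out.push_str(s);
                      }
                      out
                  }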

                1. 4

                  When are optimizations which reduce readability justified? What values should a programming language’s standard library prioritize?

                  I want to touch on these two parts together.

                  The standard library of any language should always be as fast as it can possibly be; it’s used constantly, and consistent efforts should be made to improve the speed of the library as a whole.

                  Now obviously correctness needs to be the highest priority, but mistakes happen.

                  1. 2

                    Now obviously correctness needs to be the highest priority

                    Your first point looks right. This one doesn’t. Correctness is a low priority for a lot of projects that get mass adoption. Inefficient solutions get adopted, too, if they meet developers’ other requirements. “Works well enough, free or on the cheap” is the highest priority for most solutions; efficient and/or flexible is the next set; reliable, secure, and recoverable come last. That’s for anything mainstream.

                    1. 1

                      Why do reliability, security, and recoverability take a back seat to correctness? Is that the best policy for the standard library of a language marketed as memory safe?

                      1. 1

                        Read the Worse is Better essays by Gabriel. When investing in a language, you are always balancing performance, QA, and number of features. The more features and libraries with C-like speed the better. If they’re mostly correct, they’ll still get wide uptake due to the Worse is Better effect. I think they also disabled integer overflow checking by default for similar reasons. Maybe misremembering, though!

                        Now, if we’re talking high-assurance deployments, then correctness, reliability, and/or security come first. There are other languages and tools like SPARK and Frama-C that aim for that with their defaults.
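
                        For reference, that recollection matches how Rust behaves today: arithmetic overflow panics when overflow checks are on (the default for debug builds) and wraps silently in default release builds, controlled by the overflow-checks profile setting. The snippet below is my own illustration; the explicit methods behave the same in both modes.

                          fn main() {
                              let len: usize = usize::MAX / 2;
                              let n: usize = 3;
                              // `len * n` would panic in a debug build and wrap in a default
                              // release build; the explicit forms are unambiguous everywhere.
                              println!("{:?}", len.checked_mul(n));  // None on overflow
                              println!("{}", len.wrapping_mul(n));   // wraps, by request
                              println!("{}", len.saturating_mul(n)); // clamps to usize::MAX
                          }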

                    2. 1

                      I’m not sure I understand. Are you saying that correctness is higher priority than speed?

                      1. 1

                        I’m saying that speed is the highest priority except when it impacts correctness. That is to say make the code as fast as possible, but no faster.

                        I might also have a different opinion of correct than @nickpsecurity does.

                        For me, correct doesn’t mean safe or even accurate; it means expected. Does the API produce the expected output given the input? If it doesn’t, then it’s incorrect. I expect C APIs to blow up in my face in well-understood circumstances. But it’s unexpected for a Rust API to have a buffer overflow.

                        So ultimately, you get as fast as you possibly can without negative effects. Negative effects always sneak in, as we’ve seen here, but it’s not the case that the code is faster because it’s incorrect, only that it accidentally became incorrect as a result of making it faster. I think reasonable steps should be taken to ensure correctness, and I don’t think reasonable steps were taken in this case, given that someone rightfully questioned the code in the PR and it went through anyway.

                        That’s my view on it, does that make sense?

                        1. 2

                          That makes sense, though that definition of correctness, being relative to the expectations of a particular tool or environment, seems slippery. Maybe it’s actually a more intuitive definition. I don’t think I expect Python code to be correct in the same way I expect Haskell code to be correct.

                          Thanks for your thoughtful responses.

                    3. 5

                      What values should a programming language’s standard library prioritize?

                      This is a very valid question, but since it is based on what one values, there is no ‘correct’ answer.

                      For me personally, Rust is already in the right ballpark performance-wise. So, I prefer correctness and readability. To me the standard library should (largely) be the place where one could look to see correct and idiomatic use of a language (though most standard libraries fail that test). Unfortunately, many parts of the Rust standard library are not very readable and use raw pointers and unsafe a lot. Of course, to some extent this is unavoidable, since you have to provide safe wrappers around raw memory somewhere.

                      But I agree that this change is a risky optimization of a relatively niche function. Also, the function is not really necessary, since one can implement it (perhaps less efficiently than the old String::repeat implementation, though it does use size hints) using iterators:

                      https://play.rust-lang.org/?gist=1a36c52d1076ae11202594f124c83357&version=stable&mode=debug&edition=2015
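
                      Something along these lines (a sketch; the linked playground code may differ):

                        use std::iter;

                        // Build the repeated string from an iterator instead of manual buffer
                        // arithmetic: collect grows the String as needed, so there is no capacity
                        // calculation to overflow, at the cost of some of the preallocation speedup.
                        fn repeat(s: &str, n: usize) -> String {
                            iter::repeat(s).take(n).collect()
                        }

                        fn main() {
                            assert_eq!(repeat("ab", 3), "ababab");
                        }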