1. 24

  2. 35

    That’s a very clickbaity title, with little relationship to the article.

    The actual article is not a benchmark, but debunking misconceptions people have about Rust’s safety and performance.

    The article is quite OK, actually. I feared worse from PVS-Studio’s blog.

    1. 11

      Yep! It’s a good article that takes apart Anton Polukhin’s FUD fairly well, using code comparisons and pointing out that Mr. Polukhin relied on broken/buggy codegen to make his claims. I’m surprised at that - I would normally expect an official representative to be a lot more cautious and reserved in his claims.

    2. 6

      I, unfamiliar with C++ and Rust, found the discussion on overflow checks difficult to follow, but I found this to be a helpful resource:


      In particular, I liked the list of PRs at the end where Rust’s debug-build checks helped catch real-world errors!
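
      As a concrete illustration (my own minimal sketch, not taken from the linked resource): in a debug build an overflowing `u8` addition panics, while in a default release build it wraps silently; `checked_add` makes the overflow case explicit in every build profile:

      ```rust
      fn main() {
          let x: u8 = 255;
          // `x + 1` would panic in a debug build ("attempt to add with overflow")
          // and silently wrap to 0 in a default release build. `checked_add`
          // surfaces the overflow case explicitly in both profiles:
          match x.checked_add(1) {
              Some(sum) => println!("sum = {sum}"),
              None => println!("overflow detected"),
          }
      }
      ```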

      1. 1

        In particular, I liked the list of PRs at the end where Rust’s debug-build checks helped catch real-world errors!

        What do you mean by “PRs” here? Because I’m seeing this acronym used more and more frequently for what appears to be “bug report” (and not “pull request” or “peer review”, as I would presume), but I can’t make the connection.

        1. 3

          I’ve heard of PR being used for “Problem Report” in the FreeBSD community (I think this even predates the common use of GitHub). Not sure if this has led to some confusion.

          1. 1

            Pull requests are not GitHub’s invention. (If that’s where your comment about GitHub comes from.)

            1. 1

              They aren’t, but I’d argue that it’s what popularised the term amongst the wider programming community.

          2. 2

            PR stands for “problem report”. This usage predates GitHub, for example see FreeBSD documentation.

            1. 1

              Pull requests are not GitHub’s invention. (If that’s where your comment about GitHub comes from.)

        2. 6

          I feel like a lot of what is being discussed here has already been talked about at length in various other posts. Is it not odd that there seems to be a collection of C/C++ users who are misrepresenting Rust’s capabilities? Steve already talked about this in his post You can’t “turn off the borrow checker” in Rust which is mentioned in this article. I’ve seen many false statements across Reddit, HN, Discord, etc, that could easily be resolved by reading the documentation. What is causing this? It’s not like Rust’s documentation doesn’t spell out what it restricts.

          All Rust checks are turned off inside unsafe blocks; it doesn’t check anything within those blocks and totally relies on you having written correct code.

          This is objectively false! Granted, the original video is in Russian, but if you’re giving a talk about Rust it seems like it would make sense to learn what unsafe actually does before presenting your idea of it as fact.
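
          To make that concrete, here’s a minimal sketch (mine, not from the talk or the article) of what `unsafe` actually changes: it permits a handful of extra operations, such as dereferencing raw pointers, while the borrow checker and the rest of Rust’s checks stay fully active inside the block:

          ```rust
          fn main() {
              let mut v = vec![1, 2, 3];
              let first = &v[0];
              unsafe {
                  // Still rejected by the borrow checker, even inside `unsafe`:
                  // v.push(4); // error[E0502]: cannot borrow `v` as mutable
                  let p = v.as_ptr();
                  // Dereferencing a raw pointer is one of the few operations
                  // that genuinely requires an `unsafe` block:
                  println!("{} {}", first, *p);
              }
          }
          ```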

          My greater question is: why does this happen so much? Am I disproportionately seeing more false comments about Rust than most people, or is there a real issue here? In contrast, criticisms of Go tend to be founded on Go’s actual flaws: lack of generics, error handling, versioning, et al. are mentioned, but when it comes to Rust, the argument shifts. Rust has flaws, and they are discussed, but there is quite a lot of misrepresentation, IMO.

          1. 14

            It seems like a fairly normal human reaction, I think. People have invested large portions of their life towards C++ and becoming important people in C++ spaces. In that group of people, most are deeply sensible geeks that have reasonable reactions to Rust. But there will be some that have their own egos tightly coupled with C++ and their place in the C++ community, that see the claims made by Rust people as some form of aggression - attacking the underpinning of their social status.

            And… when that happens, our brains are garbage. Suddenly the most rational person will say the most senseless things. We all do this, I think… most of us anyway. Some are better than others at calming down before they find themselves with all the lizard brain anger organized on a slide deck, clicking through it on stage.

            1. 2

              While I love this explanation, I do want to point out how many deliberate steps it takes to build a misleading slide deck and then speak on stage about it with absurd confidence.

              1. 1

                Hm, that might be true. I think this also happens to a lot of people attacking GraphQL: they do not want to accept an alternative to REST.

              2. 6

                I think these are different crowds: people who use Go instead of X vs. C/C++ people looking into Rust. Based on my very limited experience talking to C/C++ developers, they have this sort of Stockholm syndrome when it comes to programming languages, and they always try to defend the shortcomings of their favorite language. UB is fine because… Overflows are fine because… They don’t see any value in Rust because their favorite language has it all. I don’t know that many Go developers, but the ones I know are familiar with the shortcomings of Go and don’t try to downplay them. All of this is anecdotal and might not represent reality, but it’s one potential explanation of what you observed.

              3. 6

                There’s a bad habit among both C++ and Rust programmers of linking to godbolt, counting instructions, and proclaiming “FASTER!”. Instruction counts only become meaningful when cache-related latency is low, which is unrealistic for most projects. It’s just as bad as comparing lines of high-level code, because accessing some random non-prefetcher-friendly memory from DRAM will cost you hundreds of cycles. Sure, less code means lower I-cache pollution, but it’s not the most important factor in determining the performance of a particular access pattern.

                Do the instructions require pulling in memory from a farther cache/DRAM/network/storage? How predictable are the branches taken in the code? Is the memory read at a constant stride that allows the prefetcher to speculatively fetch it before access? How much irrelevant data gets sucked into caches due to residing on 64-byte cachelines and consuming memory bandwidth? Do writes happen in a loop to more separate cachelines than your core has line fill buffers? Do your writes get improperly forwarded due to 4k aliasing? Instruction counts are such an indirect measurement for “faster” when there are so many factors that significantly impact performance.