1. 18
  1. 20

    This is just nuts IMHO. The guy seems to believe firmly that micro-optimizations at the instruction level are the key to performance, even for programs on the scale of cloud services.

    writing good code with bad programmers is a problem of the XX century when transistors grew twofold every 18 months, and programmers’ headcount grew twofold every 5 years. We’re living in 2023. We have more experienced programmers in the world than ever before in history. And we need efficient software now more than ever.

    The way it looks to me is that we have a hell of a lot of junior programmers who didn’t learn much besides web dev and basic Java in school, and that a lot of software is standing on big piles of open source dependencies that were in many cases written by amateurs, and do not come with any support. So it would be nice if we could make our software more robust.

    On the other hand, the one who pays for your code ineffectiveness is now yourself. Every suboptimal routine shows in your AWS bill.

    Quite a lot of high-performing big-data software is written in languages that are anything but fast at the micro level, Erlang/Elixir for example. Performance here isn’t a matter of wringing every CPU cycle out of a loop, it’s using optimal data structures and algorithms and exploiting parallelism.
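
    The same point can be sketched in C (a toy example of mine, not from the article): two ways to count duplicate values in an array, where swapping the algorithm buys far more than any instruction-level tuning of the quadratic loop ever could.

    ```c
    /* Toy comparison: count values that have appeared earlier in the
     * array. The quadratic scan is the kind of loop no micro-tuning
     * can rescue at scale; sorting first changes the complexity class
     * from O(n^2) to O(n log n) instead. */
    #include <stdlib.h>
    #include <string.h>

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* O(n^2): for each element, scan everything before it. */
    int count_dups_quadratic(const int *v, size_t n) {
        int dups = 0;
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < i; j++)
                if (v[i] == v[j]) { dups++; break; }
        return dups;
    }

    /* O(n log n): sort a copy, then count adjacent repeats in one pass. */
    int count_dups_sorted(const int *v, size_t n) {
        int *tmp = malloc(n * sizeof *tmp);
        if (!tmp) return -1;
        memcpy(tmp, v, n * sizeof *tmp);
        qsort(tmp, n, sizeof *tmp, cmp_int);
        int dups = 0;
        for (size_t i = 1; i < n; i++)
            if (tmp[i] == tmp[i - 1]) dups++;
        free(tmp);
        return dups;
    }
    ```

    Both functions compute the same quantity (elements beyond the first occurrence of each value); only the data structure strategy differs, and that is where the real speedup lives.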

    Most of them [C++ killers], for instance, Rust, Julia, and Cland even share the same backend. You can’t win a car race if you all share the same car.

    It’s spelled “Clang”, and it is a C++ compiler. And of course the optimizer and code-gen are not the only things driving performance — I think we can agree that those languages have different performance characteristics.

    1. 5

      The last example is an ISA proposal and, having skimmed the site, I honestly can’t tell if it’s a real proposal or satire. Everything in it screams ‘I have never built a high-performance chip’.

      The way it looks to me is that we have a hell of a lot of junior programmers who didn’t learn much besides web dev and basic Java in school, and that a lot of software is standing on big piles of open source dependencies that were in many cases written by amateurs, and do not come with any support. So it would be nice if we could make our software more robust.

      I believe that the ‘C++ killer’, if one exists, will have sandboxing as a first-class citizen and mandate it for all foreign code interop. This is why I am so disappointed by Swift and Rust: neither comes with a security model for doing this. I want to be able to grab random crap from GitHub, use it, and reason about the blast radius if an attacker can take complete control over it.

      Performance here isn’t a matter of wringing every CPU cycle out of a loop, it’s using optimal data structures and algorithms and exploiting parallelism

      I couldn’t agree more. This is why Julia and Python are doing so well in HPC: they make it easy to experiment with a load of different shapes of the algorithm, often with C++ or Fortran kernels for individual steps, and find the one that works best on specific hardware.

      And of course the optimizer and code-gen are not the only things driving performance — I think we can agree that those languages have different performance characteristics.

      Exactly true. g++ and gfortran share the same back end, yet provide very different information for alias analysis to take advantage of, so gfortran can often significantly outperform g++. They use the same loop vectoriser, but the GIMPLE from gfortran contains enough information for the vectoriser to know that vectorisation is safe; the C++ version doesn’t. The more information the front end can provide to the mid-level optimisers, the better a job they can do. C++ and Objective-C in clang share most of the front-end logic in addition to the majority of the optimisers, yet you’d struggle to find someone who claims that these languages perform the same.
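
      To make the aliasing point concrete, here is a small C sketch (my illustration, not from the thread): the `restrict` qualifier hands the optimiser the same no-overlap guarantee that Fortran gives the back end for free on dummy arguments, which standard C++ has no way to express (compilers offer `__restrict` only as an extension).

      ```c
      /* Without restrict, the compiler must assume dst and src may
       * overlap: the store to dst[i] could change src[i+1], so the
       * vectoriser has to prove safety or emit runtime overlap checks. */
      #include <stddef.h>

      void axpy_may_alias(size_t n, double a,
                          const double *src, double *dst) {
          for (size_t i = 0; i < n; i++)
              dst[i] += a * src[i];
      }

      /* restrict promises no overlap, like Fortran dummy arguments,
       * so the loop can be vectorised unconditionally. */
      void axpy_restrict(size_t n, double a,
                         const double *restrict src,
                         double *restrict dst) {
          for (size_t i = 0; i < n; i++)
              dst[i] += a * src[i];
      }
      ```

      Both functions behave identically when the arrays don’t overlap; the only difference is what the front end is able to tell the mid-level optimisers.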

      1. 1

          I believe that the ‘C++ killer’, if one exists, will have sandboxing as a first-class citizen and mandate it for all foreign code interop.

        Do you mean that it would sandbox the foreign code or that it would sandbox itself (or both)? In the former case, what if the foreign code is, say, Cocoa? (I assume a replacement for C++ would want to be able to interoperate with arbitrary OS APIs — or, wouldn’t it?)

        1. 1

          The process sandboxing code that I’ve written for Verona would handle cases like Cocoa. The library needs to be able to talk to the display server, but that’s something that the current interfaces can express. I haven’t implemented macOS sandboxing (though it should be possible), but I tested it with an X toolkit. The host program gets a callback when the untrusted library tries to open the socket to the server, validates the request, and forwards the file descriptor. With hardware support like CHERI, this can be almost zero cost. With OS support on conventional hardware, it can be a lot faster than it is today (Doors on Solaris would improve it a lot, for example).
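
          For anyone curious what the descriptor-forwarding step looks like on a POSIX system, here is a generic C sketch (my illustration, not the Verona code) that passes an already-opened file descriptor across a Unix-domain socket using `SCM_RIGHTS` ancillary data; the trusted host would call `send_fd` after validating the request:

          ```c
          /* Generic POSIX fd passing over a connected Unix-domain
           * socket: the kernel duplicates the descriptor into the
           * receiving process via SCM_RIGHTS ancillary data. */
          #include <string.h>
          #include <sys/socket.h>
          #include <sys/uio.h>

          /* Send fd over socket 'chan'. Returns 0 on success, -1 on error. */
          int send_fd(int chan, int fd) {
              char byte = 0;
              struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
              union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
              memset(&u, 0, sizeof u);
              struct msghdr msg = {
                  .msg_iov = &iov, .msg_iovlen = 1,
                  .msg_control = u.buf, .msg_controllen = sizeof u.buf,
              };
              struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
              cm->cmsg_level = SOL_SOCKET;
              cm->cmsg_type = SCM_RIGHTS;
              cm->cmsg_len = CMSG_LEN(sizeof(int));
              memcpy(CMSG_DATA(cm), &fd, sizeof(int));
              return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
          }

          /* Receive a descriptor sent by send_fd; returns it, or -1. */
          int recv_fd(int chan) {
              char byte;
              struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
              union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
              struct msghdr msg = {
                  .msg_iov = &iov, .msg_iovlen = 1,
                  .msg_control = u.buf, .msg_controllen = sizeof u.buf,
              };
              if (recvmsg(chan, &msg, 0) != 1)
                  return -1;
              struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
              if (!cm || cm->cmsg_type != SCM_RIGHTS
                      || cm->cmsg_len != CMSG_LEN(sizeof(int)))
                  return -1;
              int fd;
              memcpy(&fd, CMSG_DATA(cm), sizeof(int));
              return fd;
          }
          ```

          The sandboxed library never opens the display-server socket itself; it asks the host, which decides whether to hand over a descriptor at all.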

      2. 1

        I got the sense the author just addresses very different problems than I do.

      3. 14

        tl;dr, the C++ killers are

        • Spiral, a CMU research project
        • The Python library Numba
        • The ForwardCom ISA proposal

        Interesting resources, terrible framing.

        1. 3

          also bad code is a problem of the past

          1. 4

            I seriously can’t say if this is sarcasm or not.

            1. 3

              yeah idk if the article is sarcasm or not either

        2. 3

          There can’t ever be a C++ killer, by definition. The people still writing in C++ these days think C++ is great, and they’re not going to change. The mistaken belief that they are good enough to write CVE-free software using these tools is how they derive some portion of their identity and self-worth, and they will never give it up. The only thing that will kill C++ is time and the mortality of programmers.