1. 34
    1. 4

      I understand the verbs but none of the nouns.

      1. 1

        Runtime performance will be negatively impacted by IR generation, at least to start. The C++ code jank used to generate was quite optimized.

        I am curious what the compile times will look like once you're eventually aiming for 1:1 performance numbers and potentially re-implementing the Clang optimizations.

        1. 2

          I’m also curious about this, but I’m confident that it’ll remain well below what a C++ front-end (clang) takes to compile 100k lines of C++ (especially with the templates we use). With these numbers, I’m already running a standard set of IR optimization passes (a sketch of what such a pipeline can look like is below). My gut tells me the AST passes required to get unboxing working nicely should be quite light.

          We all know that instincts can be way off when it comes to benchmark results, though. The only way to know for sure is to do it. :)
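
          For reference, here is a minimal sketch of what "a standard set of IR optimization passes" can look like with LLVM's new pass manager (LLVM 14+). The choice of the O2 pipeline and the surrounding boilerplate are my assumptions for illustration, not necessarily what jank runs:

          ```cpp
          // Minimal sketch: run LLVM's stock -O2 module pipeline over an
          // already-generated module. Assumes the new pass manager (LLVM 14+).
          #include "llvm/IR/Module.h"
          #include "llvm/Passes/PassBuilder.h"

          void optimizeModule(llvm::Module &M) {
            llvm::LoopAnalysisManager LAM;
            llvm::FunctionAnalysisManager FAM;
            llvm::CGSCCAnalysisManager CGAM;
            llvm::ModuleAnalysisManager MAM;

            llvm::PassBuilder PB;
            // Register the analyses each manager needs and wire them together.
            PB.registerModuleAnalyses(MAM);
            PB.registerCGSCCAnalyses(CGAM);
            PB.registerFunctionAnalyses(FAM);
            PB.registerLoopAnalyses(LAM);
            PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

            // Build and run the default -O2 pipeline.
            llvm::ModulePassManager MPM =
                PB.buildPerModuleDefaultPipeline(llvm::OptimizationLevel::O2);
            MPM.run(M, MAM);
          }
          ```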

          1. 3

            A data point: I also wrote a compiler that emits LLVM IR directly, in a domain where all the other compilers compile to C/C++. Compile times are 10x better and runtime performance is between 2x and 4x better.

            Clang and all the zero-cost C++ abstractions usually rely on generating a lot of LLVM IR (templates) and then on LLVM to optimize it away. Both steps are quite slow. You can do a lot better by generating the IR you want directly.
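
            To make "generating the IR you want directly" concrete, here is a minimal sketch using LLVM's C++ IRBuilder API. The emitted function and all names are illustrative assumptions, not code from either project:

            ```cpp
            // Minimal sketch: build a tiny "add two i64s" function straight with
            // IRBuilder, rather than lowering C++ templates and hoping the
            // optimizer collapses them.
            #include "llvm/IR/IRBuilder.h"
            #include "llvm/IR/LLVMContext.h"
            #include "llvm/IR/Module.h"
            #include "llvm/IR/Verifier.h"
            #include "llvm/Support/raw_ostream.h"

            int main() {
              llvm::LLVMContext ctx;
              llvm::Module mod("direct_ir", ctx);
              llvm::IRBuilder<> b(ctx);

              // i64 add2(i64, i64)
              llvm::Type *i64 = b.getInt64Ty();
              auto *fnTy = llvm::FunctionType::get(i64, {i64, i64}, /*isVarArg=*/false);
              auto *fn = llvm::Function::Create(fnTy, llvm::Function::ExternalLinkage,
                                                "add2", &mod);

              auto *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
              b.SetInsertPoint(entry);
              b.CreateRet(b.CreateAdd(fn->getArg(0), fn->getArg(1)));

              llvm::verifyModule(mod, &llvm::errs()); // sanity-check the IR
              mod.print(llvm::outs(), nullptr);       // dump the textual IR
              return 0;
            }
            ```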