1. 16

Abstract: “As software becomes larger, programming languages become higher-level, and processors continue to fail to be clocked faster, we’ll increasingly require compilers to reduce code bloat, eliminate abstraction penalties, and exploit interesting instruction sets. At the same time, compiler execution time must not increase too much and also compilers should never produce the wrong output. This paper examines the problem of making optimizing compilers faster, less buggy, and more capable of generating high-quality output.”

  1.  

  2. 4

    So: formal semantics for programming languages and program synthesis, including for the optimization passes. This is quite ambitious and forward-looking. I like the pragmatic approach Zig takes here: a fast compiler for debug builds that produces relatively slow code, and a slow compiler for release builds that produces fast code. Another avenue would be incremental compilation and nanopasses, which would dramatically speed up partial recompilations.
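
    The appeal of nanopasses is that the compiler becomes a long chain of tiny IR-to-IR rewrites instead of a few monolithic passes, so each pass is easy to test and cheap to rerun on just the code that changed. A toy, untested sketch in C (the IR, the op names, and the passes are all invented for illustration, not taken from any real compiler):

    ```c
    #include <stdio.h>

    /* Toy IR: a flat list of ops applied to a single accumulator. */
    typedef enum { OP_ADD, OP_MUL, OP_NOP } OpKind;
    typedef struct { OpKind kind; int operand; } Op;

    typedef void (*Pass)(Op *ops, int n);

    /* Nanopass 1: "x + 0" does nothing, so turn it into a no-op. */
    static void fold_add_zero(Op *ops, int n) {
        for (int i = 0; i < n; i++)
            if (ops[i].kind == OP_ADD && ops[i].operand == 0)
                ops[i].kind = OP_NOP;
    }

    /* Nanopass 2: "x * 1" does nothing either. */
    static void fold_mul_one(Op *ops, int n) {
        for (int i = 0; i < n; i++)
            if (ops[i].kind == OP_MUL && ops[i].operand == 1)
                ops[i].kind = OP_NOP;
    }

    int main(void) {
        Op prog[] = { {OP_ADD, 0}, {OP_MUL, 3}, {OP_MUL, 1}, {OP_ADD, 7} };
        int n = (int)(sizeof prog / sizeof prog[0]);

        /* The "compiler" is just a list of tiny passes run in order; an
           incremental build would rerun them only on the changed pieces. */
        Pass pipeline[] = { fold_add_zero, fold_mul_one };
        for (size_t p = 0; p < sizeof pipeline / sizeof pipeline[0]; p++)
            pipeline[p](prog, n);

        for (int i = 0; i < n; i++)
            printf("%s %d\n",
                   prog[i].kind == OP_ADD ? "add" :
                   prog[i].kind == OP_MUL ? "mul" : "nop",
                   prog[i].operand);
        return 0;
    }
    ```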

    1. [Comment from banned user removed]

      1. 1

        I think Scala pioneered the too-slow-for-its-own-good compiler trend ;) Last time I looked at the benchmarks game, Go was behind Rust, sometimes by a factor of 5x. Projects like LLVM for compiled languages and PyPy for dynamic ones have tremendously helped improve the performance of languages, but I’m sure we can go further. The Stalin Scheme compiler is a good example of what can be done.

    2. 6

      This paper is flogging a dead horse. There are plenty of corner cases to be tweaked, but they don’t add up to much.

      There are bigger improvements to be had by thinking bigger.

      1. 2

        Your “thinking bigger” article is interesting. Morton encoding is cool.

        1. 2

          Interesting! Were there any programming languages that experimented with or used Morton encoding for their arrays?

          1. 1

            It’s an implementation technique. If an array is only ever indexed (i.e., no pointers into it), the compiler can use whatever layout it chooses.
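
            Concretely, the layout change is just a different index function: instead of computing `y * width + x`, the compiler (or a library) interleaves the bits of the two indices so that elements close in both dimensions end up close in memory. A rough, untested C sketch (the function name and sizes are mine, purely for illustration):

            ```c
            #include <stdint.h>
            #include <stdio.h>

            /* Z-order (Morton) index: bit i of x goes to bit 2i,
               bit i of y goes to bit 2i+1. */
            static uint32_t morton2d(uint16_t x, uint16_t y) {
                uint32_t z = 0;
                for (int i = 0; i < 16; i++) {
                    z |= (uint32_t)((x >> i) & 1u) << (2 * i);
                    z |= (uint32_t)((y >> i) & 1u) << (2 * i + 1);
                }
                return z;
            }

            int main(void) {
                /* A 256x256 array in one flat buffer, laid out in Z-order so
                   that neighbours in both x and y tend to share cache lines. */
                static float a[256 * 256];
                a[morton2d(3, 5)] = 1.0f;
                printf("%f\n", a[morton2d(3, 5)]);
                return 0;
            }
            ```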

            1. 1

              I don’t see how a compiler could deduce usage patterns that would benefit from Morton indexing.

          2. 1

            “There are really really big savings to be had by providing compilers with a means of controlling the processors caches, e.g., instructions to load and flush cache lines.”

            It’s true. It’s also field-proven: they’re called scratchpads. They use less circuitry and power, since they’re simple, software-driven stores. However, they have to be used wisely by the compiler, and most compilers on the market don’t do that. So those pushing caches over scratchpads got more sales. Scratchpads are mainly in embedded products now, IIRC.
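
            On commodity hardware, the closest thing to that kind of control is prefetch and flush hints rather than a true scratchpad. A rough, untested sketch of what “load and flush cache lines under software control” looks like, using the GCC/Clang prefetch builtin and the x86 clflush intrinsic (the buffer, strides, and distances are made up for illustration):

            ```c
            #include <immintrin.h>   /* _mm_clflush (x86) */
            #include <stdio.h>

            #define N 4096
            static double buf[N];

            int main(void) {
                double sum = 0.0;
                for (int i = 0; i < N; i++) {
                    /* Hint: pull in a line we'll need soon. */
                    if (i + 64 < N)
                        __builtin_prefetch(&buf[i + 64]);
                    sum += buf[i];
                    /* Evict data we know we're done with, so it doesn't
                       crowd out something more useful. */
                    if (i >= 64 && i % 8 == 0)
                        _mm_clflush(&buf[i - 64]);
                }
                printf("%f\n", sum);
                return 0;
            }
            ```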