1. 47

  2. 10

    It’s a good read, and it’s good to hear that (some) developers are focussing on improving this.

    Slow compile times are one of my major pain points with GHC/Haskell, and I’ve been confused in the past by how little most Haskellers see this as a problem. There seems to be a strong self-reinforcing effect here, where a slow compiler means you lose people who care about compilation speed.

    1. 16

      There seems to be a strong self-reinforcing effect here

      This is a really common failure mode I see in software product evolution. If you do “design by addressing complaints” then you may satisfy existing users, but fail to fix fundamental things that cause people to not show up at all. It’s a form of survivorship bias where you end up putting armor on the fuselage instead of on the engines, where you should.

      It takes real effort to get information out of the users you don’t have because they have less incentive to bother communicating with you, but it’s vital if you want to grow a userbase.

      1. 0

        An interesting but exhausting read; when the author writes

        Too many tired metaphors in this section. Sorry about that.

        he’s absolutely right. I wonder why the author kept all this stuff if he realized it’s an issue. It’s also not just the metaphors – something feels really off about the writing style, but I can’t pinpoint it.

        I experienced the self-reinforcing effect not with compile-times, but with documentation/beginner friendliness in Scala; this effect definitely exists.

        (Languages with bad documentation largely attract people who don’t care about good documentation, so documentation never improves.)

      2. 10

        “Calamity” might be overstating it, but this article is pretty good. I did some work on actually measuring this stuff a while back, if anyone’s interested: https://wiki.alopex.li/WhereRustcSpendsItsTime

        1. 9

          Single-threaded compiler — ideally, all CPUs are occupied for the entire compilation. This is not close to true with Rust today. And with the original compiler being single-threaded, the language is not as friendly to parallel compilation as it might be. There are efforts going into parallelizing the compiler, but it may never use all your cores.

          At the risk of repeating myself: I suggest that anyone working on compilers, and who knows how to read C++, take a look at main/realmain.cc in the Sorbet type checker codebase, which is ~650 lines of code.

          Recent threads that made me think of this:

          https://lobste.rs/s/mrl19l/what_would_programming_language#c_351ml1

          https://lobste.rs/s/iwbio2/why_sorbet_typechecker_is_fast#c_rp4ekk

          This is how parallel compilers should be written. In particular you can hit * in Vim on workers to see which parts are multi-threaded. Hitting * on indexed and gs also helps you follow the data flow.

          Related to my challenge from a year ago (which was widely misunderstood, I’m talking about reading a 70K line codebase in ~100 lines, not making a 100 line compiler):

          https://lobste.rs/s/gdoaj5/challenge_can_i_read_your_compiler_100#c_7u0jpp

          Basically my pet peeve is that a lot of compiler codebases don’t make their data flow explicit, and they scatter it across many files.

          Rust has a nice structure, but it does not mention threading in the lib.rs file here, which to me is evidence that it’s not architected for parallelism (i.e. it supports the quoted claim).

          https://github.com/rust-lang/rust/tree/master/src/librustc_driver

          Basically parallelization has a large effect on the codebase structure. It gives you a more data-oriented design because you have to partition data between threads, and there are certain parts that can’t be parallelized. As I mentioned Sorbet looks like a MapReduce to me, with its “thin waist” of name resolution, and highly parallel parts on either end.

          1. 15
            1. Rust’s type checking takes only a fraction of the compile time. The majority of the time is spent in LLVM, cleaning up verbose IR.
            2. Sorbet has forward-only inference, which is too simplistic for Rust.
            1. 8

              One potential wrinkle here is that to write a MapReduce-style compiler, the language itself needs to be amenable to MapReduce-style processing. Specifically, it should be possible to process each file more or less independently. Java and Rust are good examples of opposing extremes here.

              In Java, each file starts with a package declaration. That means that by parsing a single file in isolation, the compiler can trivially reconstruct the fully-qualified name of the containing class. And that means that computing a Map<FQN, Class> symbol table is an embarrassingly parallel task. Moreover (which is crucially important for the IDE scenario), AddFile and RemoveFile (and ChangeFile as a combination of the two) operations can be implemented with roughly O(1) cost. And, given this symbol table, fully analyzing each specific file is very fast, because you can get any global information the file depends on in constant time.
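
              (A rough illustration of why that map is embarrassingly parallel; this is my own sketch in Rust, not any real compiler, the “parsing” is fake, and rayon is an assumed dependency standing in for the thread pool:)

              ```rust
              // Toy sketch: each Java-like file names its own package, so mapping a
              // file to (FQN, Class) entries needs no global information at all.
              use rayon::prelude::*;          // assumed dependency: rayon
              use std::collections::HashMap;

              struct Class { name: String }

              // Fake per-file "parse": read the package declaration and the class names.
              fn parse_file(src: &str) -> Vec<(String, Class)> {
                  let pkg = src.lines()
                      .find_map(|l| l.trim().strip_prefix("package "))
                      .map(|p| p.trim_end_matches(';').trim().to_string())
                      .unwrap_or_default();
                  src.lines()
                      .filter_map(|l| l.trim().strip_prefix("class "))
                      .map(|c| {
                          let name = c.split_whitespace().next().unwrap_or("").to_string();
                          (format!("{}.{}", pkg, name), Class { name })
                      })
                      .collect()
              }

              // The Map<FQN, Class> symbol table: a parallel map over files, cheap reduce.
              fn symbol_table(files: &[String]) -> HashMap<String, Class> {
                  files.par_iter().flat_map_iter(|src| parse_file(src)).collect()
              }
              ```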

              In contrast, modules in Rust are not flat named things as in Java, but an actual tree. And to construct that tree, the compiler needs to crawl the modules starting from the root. Moreover, due to the way imports, reexports and macros work, to construct a symbol table the compiler needs to run a certain fixed-point iteration algorithm on the set of names visible in each module, and that is hard to make parallel/incremental without some comparatively sophisticated techniques.
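
              (For contrast, here is a toy version of that fixed-point loop; again my sketch, nowhere near rustc’s actual resolver, but it shows why glob imports/re-exports force you to iterate until the per-module name sets stop changing:)

              ```rust
              // Toy fixed-point name resolution: names keep propagating along glob
              // imports/re-exports until no module's visible set changes any more.
              use std::collections::HashSet;

              struct Module {
                  defined: HashSet<String>,   // items defined directly in this module
                  globs: Vec<usize>,          // indices of modules glob-imported here
              }

              fn resolve(modules: &[Module]) -> Vec<HashSet<String>> {
                  let mut visible: Vec<HashSet<String>> =
                      modules.iter().map(|m| m.defined.clone()).collect();
                  let mut changed = true;
                  while changed {             // the fixed-point iteration
                      changed = false;
                      for (id, m) in modules.iter().enumerate() {
                          for &src in &m.globs {
                              let names: Vec<String> = visible[src].iter().cloned().collect();
                              for n in names {
                                  changed |= visible[id].insert(n);
                              }
                          }
                      }
                  }
                  visible
              }
              ```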

              1. 4

                Yes, very true, and having finished the article now, I see it makes tons of great points along these lines, i.e. which problems are due to language design and which are due to compiler architecture. Great post, and I look forward to reading the rest of the series.

                I guess I would modify my statement to say if you’re designing a new language, start with a thread pool in main() and try to pass it to as many stages as possible :)
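
                (Something like this is what I have in mind; a made-up skeleton, not Sorbet’s or rustc’s code, with rayon standing in for the pool:)

                ```rust
                // Hypothetical skeleton: the pool is built once in main() and handed to
                // every stage, so the parallel "map" parts and the serial "thin waist"
                // are explicit at the top level.
                use rayon::prelude::*;
                use rayon::ThreadPool;

                fn parse_all(pool: &ThreadPool, files: &[String]) -> Vec<String> {
                    pool.install(|| files.par_iter().map(|f| format!("ast({})", f)).collect())
                }

                fn resolve_names(asts: &[String]) -> Vec<String> {
                    asts.to_vec()               // placeholder: the single-threaded waist
                }

                fn typecheck_all(pool: &ThreadPool, asts: &[String], _symbols: &[String]) {
                    pool.install(|| asts.par_iter().for_each(|ast| { let _ = ast.len(); }));
                }

                fn main() {
                    let pool = rayon::ThreadPoolBuilder::new().build().unwrap();
                    let files: Vec<String> = std::env::args().skip(1).collect();

                    let asts = parse_all(&pool, &files);        // parallel
                    let symbols = resolve_names(&asts);         // serial
                    typecheck_all(&pool, &asts, &symbols);      // parallel again
                }
                ```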

            2. 5

              I’m looking forward to the rest of the series, as I’m a fan of the author and everything they’ve done for Rust. However, with only the first article out thus far, which merely discusses the components that may cause slow compilation, it leads the reader in an overly negative direction, IMO.

              Rust compile times aren’t great, but I don’t believe they’re as bad as the author is letting on thus far. Unless your dev cycle relies on CI and full test-suite runs (which require full rebuilds), the compile times aren’t too bad. A project I was responsible for at work used to take ~3–5 minutes for a full build, if I remember correctly. By removing some unnecessary generics, feature-gating some derived impls, feature-gating esoteric functionality, and reworking some macros as well as our build script, the compile times came down to around a minute, which meant partial builds were mere seconds. That, along with test filtering, meant the dev-test-repeat cycle was very quick. Now, it could also be argued that feature gates increase test-path complexity, but that’s what our full test suite and CI are for.
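
              (For the curious, the derive-gating I mean looks roughly like this; illustrative names only, not our actual code, and the features themselves are declared in Cargo.toml:)

              ```rust
              // Illustrative only: an optional "serde" feature keeps the derives (and the
              // proc-macro codegen they expand to) out of the default/dev build entirely.
              // Cargo.toml side (assumed): serde = { version = "1", optional = true, features = ["derive"] }
              #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
              #[derive(Debug, Clone)]
              pub struct JobConfig {
                  pub name: String,
                  pub retries: u32,
              }

              // Esoteric functionality compiled only when somebody actually asks for it.
              #[cfg(feature = "diagnostics")]
              pub fn dump_debug_report(cfg: &JobConfig) -> String {
                  format!("{:#?}", cfg)
              }
              ```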

              Granted, I know our particular anecdote isn’t indicative of all workloads, or even representative of large Servo-style projects, but for your average medium-sized project I don’t feel Rust compile times hurt productivity all that much.

              …now, for full rebuilds or CI-reliant workloads, yes, I’m very grateful for every iota of compile-time improvement!

              1. 7

                It is also subjective. For a C++ developer, 5 minutes feels OK. If you are used to Go or D, then a single minute feels slow.

                1. 4

                  Personally, slow compile times are one of my biggest concerns about Rust. This is bad enough for a normal edit/compile/run cycle, but it’s twice as bad for integration tests (cargo test --tests), which have to link a new binary for each test file.

                  Of course, this is partly because I have a slow computer (I have a laptop with an HDD), but I don’t think I should need the latest and greatest technology just to get work done without being frustrated. Anecdotally, my project with ~90 dependencies takes ~8 seconds for an incremental rebuild, ~30 seconds just to build the integration tests incrementally, and over 5 minutes for a full build.