If you want the “tl;dw” summary of this talk (as I see it), it’s this: the compiler should also be the build system, and here’s llbuild, a way of doing that.

I don’t have any serious objection to this. There are already plenty of hack-like mechanisms for getting C/C++ to work nicely with make, ninja, CMake, and friends, and with large builds they’re pretty slow. The library-style design of LLVM lends itself nicely to avoiding all that process startup and inter-process communication, which makes it easier to create structures that can be shared by different parts of the build without resorting to writing things to disk.
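To make that concrete, here is a minimal hypothetical sketch of the in-process idea. This is invented for illustration, not llbuild’s actual API; `BuildEngine`, `addRule`, and `build` are made-up names. The point is that build steps run inside one process and share an in-memory cache, so two consumers of the same header share one parse instead of each spawning a process that re-reads it from disk:

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical illustration of the "library-style" approach: build steps
// run in one process and share in-memory results, rather than each step
// being a separate process that round-trips through the filesystem.
class BuildEngine {
public:
    using Producer = std::function<std::string()>;

    // Register how to produce a named artifact.
    void addRule(const std::string& name, Producer produce) {
        rules_[name] = std::move(produce);
    }

    // Build an artifact, reusing the cached in-memory result if present.
    const std::string& build(const std::string& name) {
        auto cached = cache_.find(name);
        if (cached != cache_.end())
            return cached->second;              // shared; no disk I/O
        std::cout << "building " << name << "\n";
        std::string result = rules_.at(name)(); // may recursively build deps
        return cache_[name] = std::move(result);
    }

private:
    std::unordered_map<std::string, Producer> rules_;
    std::unordered_map<std::string, std::string> cache_;
};

int main() {
    BuildEngine engine;
    engine.addRule("parse:foo.h", [] { return "AST(foo.h)"; });
    engine.addRule("compile:a.o", [&engine] {
        return "obj(a, " + engine.build("parse:foo.h") + ")";
    });
    engine.addRule("compile:b.o", [&engine] {
        return "obj(b, " + engine.build("parse:foo.h") + ")";
    });
    engine.build("compile:a.o");
    engine.build("compile:b.o");  // reuses the in-memory parse of foo.h
}
```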

What irked me about the talk was how C/C++-centric it was, whereas the rhetoric made it sound like something for all languages. If it becomes part of LLVM (it’s currently part of Swift) then it may prove useful for languages built on LLVM, but that remains to be seen (would Rust use it?). Also, llbuild is described here as “a way to build build systems”. So if I use it to build some new build system, how much of LLVM am I going to depend on? That could be a lot of dependency code just to start my build system, and I’m forced to work with C++ libraries.

Frankly, I think this has come about because C/C++ is just so awful to compile in large quantities, especially C++. A lot of that blame can be laid at the feet of the #include “mechanism”. And I say this as someone who genuinely likes C (and no longer hates C++). The talk addresses the textual-include problem with modules and how much they help, although there is more to it than that.
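For anyone who hasn’t felt that pain first-hand: a textual #include pastes the full header (and everything it transitively includes) into every translation unit, so the compiler re-parses the same declarations for every .cpp file that uses them. Modules replace that with an interface that is compiled once and then consumed in precompiled form. A minimal sketch using C++20 named modules (file names are illustrative, and the exact build invocations vary by compiler):

```cpp
// big.cppm -- module interface unit, compiled once.
module;             // global module fragment: textual includes go here
#include <string>
#include <vector>

export module big;

export std::string summarize(const std::vector<int>& xs) {
    return "n=" + std::to_string(xs.size());
}
```

```cpp
// main.cpp -- importer. `import big;` consumes big's compiled interface
// rather than textually re-parsing its declarations the way
// `#include "big.h"` would.
#include <iostream>
#include <vector>
import big;

int main() {
    std::vector<int> xs{1, 2, 3};
    std::cout << summarize(xs) << "\n";
}
```

Even this requires explicit precompile steps today (with recent Clang, something along the lines of `clang++ -std=c++20 --precompile big.cppm`), which is exactly the kind of build-system bookkeeping llbuild is pitched as managing.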

Lastly, I couldn’t help but wonder if maybe, just maybe, builds being so long given the computing speed we have now means you’re doing something else wrong. I neither saw nor heard any acknowledgement of the idea that the systems taking so long to compile might simply be too big, and could be designed in different ways to avoid long compile times in the first place.