1. 17
  1. 2

    Just from seeing the headline I knew it would involve disabling components and targets.

    Don’t try this at home :-)

    Don’t tell me what (not) to do :D

    ninja  9553.96s user 253.71s system 2927% cpu 5:34.99 total
    
    1. 2

      And 48 cores, I call cheats

      1. 6

        Let me introduce you godmode: https://stanford.edu/~sadjad/gg-paper.pdf

        1. 2

          up to 8000 cores from aws lambda

          I guess. Sadly, they didn’t provide the costs for this run.

        2. 1

          Well, I did beat it with just 16 cores (32 hw threads)…

      2. 2

        How does ninja shave a minute off of the build time? What can it do that make can’t?

        1. 4

          I think part of this is down to the way that CMake uses them. The generated Ninja files rely entirely on Ninja for the build; the generated Makefiles invoke CMake bits to print useful messages.

          Ninja does the right thing for output by default: it buffers the output from every command, prints it atomically for any command that produces output, and shows the build command only for build steps that fail (unless you pass -v). With Make, the build steps all inherit the controlling TTY and so their output is interleaved. It’s been years since I used Make with CMake[1], but as I recall it wraps each compile command in a CMake invocation that captures the output and then tries to write it atomically. The CMake build rules also do more to produce pretty output in the form that is the default for Ninja.

          In addition, it’s not about what Ninja can do that Make can’t; it’s about what Ninja doesn’t do. Ninja is not intended as a general-purpose scripting language: it doesn’t have macros, variables that are evaluated by running shell scripts, and so on; it is just a declarative description of build dependencies. It is designed with speed as its only goal, delegating all of the input complexity to a pre-build tool. Make is a far more flexible tool (particularly a modern Make such as GNU Make or BMake) that is designed to be usable without a separate configuration step, even if most uses do add one.

          This makes Ninja simpler to parse and quicker to get to the build step. For example, Ninja doesn’t have suffix rules: CMake is responsible for providing the full set of flags and the build command for every individual rule. I also think Ninja is a little more clever about arranging the dependency tree to maximise parallelism. Not relevant here, but important on more resource-constrained systems: Ninja can also assign different-sized pools to different kinds of jobs, so if link steps take more memory it can reduce the degree of parallelism during linking.
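          To make the contrast concrete, here is a minimal hand-written build.ninja sketch (the rule names, flags, and file names are illustrative, not what CMake actually generates). Every build statement names its rule and inputs explicitly, and the pool caps link parallelism independently of -j:

```ninja
# A pool limiting concurrent link jobs to 2, regardless of -j.
pool link_pool
  depth = 2

rule cc
  command = cc -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  description = CC $out

rule link
  command = cc $in -o $out
  pool = link_pool
  description = LINK $out

# No suffix rules: every output lists its rule and inputs explicitly.
build main.o: cc main.c
build util.o: cc util.c
build app: link main.o util.o
```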

          [1] I have this in a file sourced from my .bashrc so CMake automatically uses Ninja:

          # Set the default generator to Ninja.
          export CMAKE_GENERATOR=Ninja
          # Always generate a compile-commands JSON file:
          export CMAKE_EXPORT_COMPILE_COMMANDS=true
          # Give Ninja a more verbose progress indicator
          export NINJA_STATUS="%p [%f:%s/%t] %o/s, %es "
          
          1. 3

            Ninja’s scheduling is really very impressive.

            Many years ago, I patched Jam to compile faster. My approach was different: When I had a choice of which job to start, I’d prefer the one that depended on the most recent source file. This produced amazingly fast compile times if there were errors. Builds that failed would generally fail very quickly, often in the first second. You can imagine what that does to digressions, and I thought it achieved that without any cost to successful builds.
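            The heuristic can be sketched roughly like this (hypothetical structures; it just assumes each ready job knows its input files and their modification times):

```python
def pick_next(ready_jobs, mtime):
    """Prefer the ready job whose newest input was modified most recently,
    so code that was just edited gets compiled (and fails) first.

    ready_jobs: list of (job_name, [input_files]); mtime: file -> timestamp.
    """
    return max(ready_jobs, key=lambda job: max(mtime[f] for f in job[1]))

# Toy example: util.c was edited last, so its compile job is picked first.
mtime = {"main.c": 100, "util.c": 250, "old.c": 10}
jobs = [("cc main.o", ["main.c"]),
        ("cc util.o", ["util.c"]),
        ("cc old.o", ["old.c"])]
print(pick_next(jobs, mtime)[0])  # cc util.o
```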

            Ninja was the first build tool to prove me wrong. The order in which it starts jobs is better than mine in some/many cases. Drat.

        2. 1

          I wonder if mold (another recent lld alternative) can help with linking here.