What forces cause meta build systems like CMake & Meson to be relatively more successful for C/C++ than direct, first-order build systems?
There are already some good replies here, but I want to add some historical context, because I think it has a huge impact on the popularity of CMake and why newer build systems often just seem to target Ninja.
autotools was a meta build system because it was a natural extension of wanting a portable make-based build system, i.e. it imposed little on the build host other than “looks like Unix”:
In June 1991 I was maintaining many of the GNU utilities for the Free Software Foundation. As they were ported to more platforms and more programs were added, the number of -D options that users had to select in the Makefile (around 20) became burdensome. Especially for me–I had to test each new release on a bunch of different systems. So I wrote a little shell script to guess some of the correct settings for the fileutils package, and released it as part of fileutils 2.0. That configure script worked well enough that the next month I adapted it (by hand) to create similar configure scripts for several other GNU utilities packages. Brian Berliner also adapted one of my scripts for his CVS revision control system.
[…] As I adapted more GNU utilities packages to use configure scripts, updating them all by hand became impractical. Rich Murphey, the maintainer of the GNU graphics utilities, sent me mail saying that the configure scripts were great, and asking if I had a tool for generating them that I could send him. No, I thought, but I should! So I started to work out how to generate them. And the journey from the slavery of hand-written configure scripts to the abundance and ease of Autoconf began.
CMake was said to also aim for wide portability, but it traded out generic shell scripts in exchange for being more portable outside of the Unix world. It also explicitly targeted being able to generate IDE-native files as a feature, which leaves it intentionally as a meta build system.
When you look at the major competitors of the time, you find things like…SCons and Jam, which are relatively slow (okay I’m less certain about the various Jam frameworks here, that part’s pretty fuzzy to me). CMake took some users from SCons for reasons that include the meta build system aspect but also several other notable points:
The KDE individuals who tried to bring SCons into a shape that made it fit for building such a huge project felt they didn’t have any support from the upstream SCons developers. There were major problems building KDE on non-Linux platforms with SCons (e.g. on OS X); in general they felt it did not yet have a mature configuration system. The only option down that road was to create major SCons fixes and patches on their own. Since these changes would not likely be included in the upstream sources, it would require permanent maintenance of the fixes in a separate repository. In effect, this would have amounted to a fork of SCons. KDE developers would have had to maintain the new build system entirely on their own.
[…]
Pros:
- they offered to implement the things which are missing to build KDE
- they have a fully working configure-like framework, that is easy to use
- I managed to build most of kdelibs (KDE3)
- cmake has no other dependencies except for a C++ compiler
- cmake supports basically every UNIX, MS Windows (MSVC, Borland, cygwin, mingw) and Mac OS X
- cmake can generate Makefiles and projects for KDevelop3, MSVC 6,7, XCode
- cmake has a simple syntax
- it features a testing framework
- it supports: compiling libs, apps, KDE kparts, KDE ioslaves, KDE loadable modules, -enable-final, la-file generation
- there is an am2cmake ruby script which does around 90 percent of the work of converting Makefile.am’s to cmake files
(Aside: the issues with the SCons / bksys layering also led the dev behind it to independently fork SCons into the Waf build system, which did get some degree of usage in projects like Samba. GNOME was interested in Waf a long time ago, but some of the intentional design choices were a bit problematic, like the need to have a copy of Waf itself per-project. I often wonder about an AU where Waf is usable system-wide and takes off more…)
Chromium also had issues with SCons’s performance:
When we first started porting Chrome away from just Windows, we intended to use Scons to build Chrome on all our platforms. But early on in development I discovered that Scons, despite its admirable goals of correctness and ease of use, was quite slow — it could take 40 seconds from starting Scons before it decided to build some source.
which led them to develop a custom meta-build system “Gyp”.
And then Ninja happened. Ninja has succeeded massively at doing one thing—executing a build graph—and doing it very well. The aforementioned Gyp ended up adding support for running Ninja from generated IDE project files. Gyp’s successor, GN, can only generate projects that invoke Ninja, because it’s usually faster anyway. Ninja means that you can have Meson be a Python project with a small dev team, and that poses no issues for incremental builds because it doesn’t need to do anything for that anyway. (Meson also got its popularity by explicitly targeting FOSS ecosystem projects still stuck on autotools and trying to avoid many of the issues with CMake.)
C/++ projects don’t usually need the ability to mutate the graph at runtime anyway, since the dependency structure needs to be known ahead of time by nature. Why bother with trying to make your own build executor, then? It will almost certainly not perform as well as Ninja does out of the box without a huge time commitment.
Of course, maybe you still want to do that! Bazel has fancy sandboxing support for hermeticity, and Buck2 has flexible dynamic dependencies, which you need for some use cases Ninja can’t easily accomplish:
Distributed ThinLTO, where the index file says what the dependencies are.
OCaml builds, where the dependencies between source files can only be obtained from running ocamldep.
Erlang header files, where only a subset of the available headers are accessed, which can be determined by reading the source file.
Erlang BEAM files, where some subset of BEAM files must be compiled in a given order, as they provide features like compiler plugins, but most can be compiled in parallel.
But if you’re an organization with a build team large enough to tackle the execution side yourself, and you also want a heavy focus on hermeticity…you probably also control enough of the toolchain to avoid the need for any configuration? Bazel has no native pkg-config support, and Buck2 only added it relatively recently. Most Google projects that use Bazel in practice just end up using the broader platform information (OS, CPU architecture) to configure things, and anything fancier than that requires you to be explicit, e.g. Skia’s Bazel build scripts hardcode avx2 / avx512 support.
TL;DR: before, CMake won out because it let you generate IDE-native builds but also because it was genuinely better than competitors in unrelated ways. Now, a successor meta build system is winning out because Ninja smokes the competition, so you’d probably want to use it anyway. Build systems that need stuff Ninja cannot provide usually come from a context where they don’t care that much about automated configuration, again for unrelated reasons.
(All this being said, if you really do want a first-order build system for C/++, that does exist in projects like xmake, but they’re relatively newer arrivals into this world.)
That configure origin story surprises me in a couple of ways: I thought autoconf was much older, more like Larry Wall’s metaconfig, which dates back to the mid-1980s; and I wonder why they didn’t use metaconfig. Not that metaconfig is much more maintainable than autoconf…
There’s an old paper that describes one project’s choice against metaconfig:
This system [metaconfig] executes simple tests similar to Autoconf, but users are often asked to confirm the results of these tests or to set the results to the proper values. Metaconfig requires too much user interaction to select or confirm detected features and it cannot be extended as easily as Autoconf.
and some more details in the autoconf manual:
The Metaconfig package is similar in purpose to Autoconf, but the scripts it produces require manual user intervention, which is quite inconvenient when configuring large source trees. Unlike Metaconfig scripts, Autoconf scripts can support cross-compiling, if some care is taken in writing them.
I’m not sure why this couldn’t be, say, patched on top of metaconfig, but I’m guessing the rather haphazard evolution of autoconf didn’t leave room for much planning there, beyond an initial “let me write a simple configuration shell script because I don’t want to deal with metaconfig’s manual intervention” that then spiraled.
Those are good points! I think metaconfig became more automatic in the late 1990s, but that was clearly too late.
Its best point was that it had more personality than most build systems. “Congratulations, you aren’t running Eunice!” (It turned out Eunice was a VMS analogue of Cygwin.)
I can answer that with a pretty high degree of confidence: autoconf-style configuration probing. That is, trying to compile or link a test program to discover whether the platform has a certain feature (header, function, etc). When migrating from an autotools-based build system with a bunch of tests like this (as is the case with git), one has two options: redo all these tests in some other way (for example, figuring out which features are known to be available on which platforms and then using macros to detect that) or use a build system that supports the same configuration probing, which narrows it down to CMake or Meson.
In case of CMake, another reason would be support for generating project files (Visual Studio, XCode, etc). In fact, many IDEs now provide built-in support for using CMake as a source of project information.
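For anyone who hasn’t seen this style of probing, here’s a minimal Meson sketch of the idea (the specific checks are made-up examples, not git’s actual ones):

```meson
project('demo', 'c')

cc = meson.get_compiler('c')
conf = configuration_data()

# Each check compiles/links a tiny test program behind the scenes,
# much like an autoconf AC_CHECK_HEADERS / AC_CHECK_FUNCS test.
conf.set10('HAVE_SYS_STAT_H', cc.has_header('sys/stat.h'))
conf.set10('HAVE_STRLCPY', cc.has_function('strlcpy', prefix: '#include <string.h>'))

# The results land in a generated config.h that the sources include.
configure_file(output: 'config.h', configuration: conf)
```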
Hm, configuration probing feels orthogonal to first-order/meta?
Cargo is first-order, but it probes the compiler (nothing too crazy, just running rustc --print cfg), and there’s autoconf-style probing available as a library: https://docs.rs/autocfg/latest/autocfg/.
There are two parts to this question:
Whether pervasive probing (i.e., where you have hundreds of tests for checking every/most headers/functions/etc that you use) is the right approach.
Whether it can be sensibly supported in a native build system that doesn’t necessarily have a separate configuration phase.
Let’s start with the second question. It is awkward to support configuration probing in a native build system because the result needs to be available as one evaluates buildfiles. Concrete example: let’s say you are probing for strlcpy() and if it’s not available, using a shim implemented in strlcpy.c. What this means in practice is that you need the result of this probe to be already known when you decide in your buildfile whether to include strlcpy.c into the build.
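To make the contrast concrete, this is roughly how a meta build system with a separate configure step handles that case; a hypothetical Meson sketch where the probe result is already known by the time the build graph is generated:

```meson
project('demo', 'c')

cc = meson.get_compiler('c')
srcs = files('main.c')

# The probe runs during 'meson setup', so by the time the Ninja graph
# is emitted we already know whether the shim is needed.
if not cc.has_function('strlcpy', prefix: '#include <string.h>')
  srcs += files('strlcpy.c')
endif

executable('app', srcs)
```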
So it has to be some sort of a pre-load step where you run the probes. Two things make it more complicated: firstly, you need to cache the result and, secondly, you would ideally like to run those hundreds of probes in parallel (remember, most of them are trying to compile and/or link a small test). So you need to compile a bunch of translation units, preferably in parallel, and omit redoing the work if it has already been done. Does that ring a bell? Yes, for a native build system, the most natural way would be to build a bunch of targets, but it must somehow happen before loading the buildfiles (or, more precisely, loading must be interleaved with building). I am not aware of any build system to which this update-during-load comes naturally. I am currently trying to retrofit this support to build2 and it is challenging (we need it for something else but it will also allow for configuration probing).
Now to the question of whether pervasive configuration probing is a good idea in the first place. It does have some pretty nasty drawbacks. It is slow, especially if not done in parallel. People routinely forget to drop tests they no longer need. More importantly, it’s brittle. Remember, the idea is to try to compile a test program and, if the compilation fails, assume the feature is not available. But those tests don’t analyze the compiler diagnostics to make sure the compilation failed for this specific reason rather than a myriad of other reasons (latent mistake in the test, compiler misconfiguration, etc). So some (myself included) think that it’s actually a bad idea. It served its purpose when we had a zoo of Unixes, but now everything is fairly standardized. In build2, for example, we went with a different approach of assuming features are available based on platform macros:
https://github.com/build2/libbuild2-autoconf
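The same “assume from the platform instead of probing” idea can be sketched in any build system; a hypothetical Meson version (illustrative platform list, and not how libbuild2-autoconf actually works internally) might look like:

```meson
project('demo', 'c')

# Decide from the target platform instead of compiling a test program.
# The platform list is illustrative, not exhaustive.
bsd_like = ['freebsd', 'openbsd', 'netbsd', 'darwin']
if bsd_like.contains(host_machine.system())
  add_project_arguments('-DHAVE_STRLCPY=1', language: 'c')
endif
```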
Thanks, this makes much more sense now! Basically, existing builds require fully dynamic (monadic, rather than applicative, in build-systems a-la carte terminology) dependencies. That is hard to do and not without tradeoffs. But meta build-system sorta gives you dynamic dependencies “for free”, where “free” means that the user needs to figure out themselves when to re-run configure.
EDIT: and the important aspect here is that “existing builds require” means the builds were written to require it. I agree that there’s probably little fundamental need for probing, that it’s mostly a historical accident.
But meta build-system sorta gives you dynamic dependencies “for free”, where “free” means that the user needs to figure out themselves when to re-run configure.
This is a very charitable characterization. I would put it this way: meta build systems give the user a special pre-build step originally meant for configuration but now reused for all kinds of other things (to compensate for underlying build system deficiencies), most notably source code generation. The problem with this approach is that it partitions the graph into two disconnected sub-graphs (actually, there is normally no explicit graph during the pre-build step), which has exactly the same problems as why “recursive make is considered harmful”.
It is sad to see we are stuck at this local maximum (and a very low maximum it is) with people migrating to Meson (and taking on the Python dependency in the process) instead of to something fundamentally better.
Quoting from A Generation Lost in the Bazaar:
Today’s Unix/Posix-like operating systems, even including IBM’s z/OS mainframe version, as seen with 1980 eyes are identical; yet the 31,085 lines of configure for libtool still check if <sys/stat.h> and <stdlib.h> exist, even though the Unixen, which lacked them, had neither sufficient memory to execute libtool nor disks big enough for its 16-MB source code.
I bet there are tests in Meson projects checking for old enough systems that can’t even execute Meson.
I think one good reason is having to work with different compilers with different flags that are not available in some versions. You don’t always get to choose what compiler will be used, so a lot of compiler/linker flags end up being probed, which is something meta systems are good at. Not that first-order systems are unsuitable for this, but I think Bazel and Buck2 etc. don’t support the use case without making it a pain. (Correct me though, I am not sure.)
I don’t think “know your CFLAGS” is an option for development; it gets messy with different OS/compiler/version/architecture/build-type combinations. Packagers might get away with it though.
Personally, I check the release notes for GCC/clang releases for new flags I want and chuck them into a list that meson probes.
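A rough Meson sketch of that kind of flag probing (the flags here are just placeholders):

```meson
project('demo', 'c')

cc = meson.get_compiler('c')

# Each candidate flag is test-compiled; unsupported ones are silently dropped,
# so older or different compilers don't break the build.
wanted = ['-Wduplicated-cond', '-Wlogical-op', '-fanalyzer']
add_project_arguments(cc.get_supported_arguments(wanted), language: 'c')
```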
What “first-order” build systems are there? Make and Ninja? Ninja is explicitly meant to be autogenerated, so not even going into that, but Make gets incredibly complex and error-prone once you want something more complicated (e.g. third party deps, or windows/cross platform support). Not to say it can’t be done, but it’s often extra work that a nicer build system like Meson abstracts away.
Though I guess there is one build system in relatively wide use that would fit this first-order description: Bazel (not as much in open source though). It does not generate makefiles or anything like that; it runs the compilation commands directly. I guess Zig’s build system fits in this too.
Yeah, that is precisely my question! Bazel, buck, build.zig, nix are in this category, but arguably they are much larger in scope than “I want to build my OSS library/binary”. build2 is the only one I know that addresses this specific problem.
I wouldn’t call make or ninja build systems, they are build engines (they solve only the problem of executing rebuilds, which is just one concern of the problem space of building things).
I’d say nix is closer to a meta-meta-build system. You can use it as the straight and simple one for your own app, but anything upstream packaged in nixpkgs uses the upstream build too - and that’s with its own wrappers/decisions for various systems/compilers on top.
So unless it’s your custom build, nix is likely to be a nix-cmake-ninja-clang stack, or something similar.
Yeah, this is correct, not sure why I lumped nix into the pile, that’s just wrong, thanks!
xmake has shown up a few times on lobsters.
Make can be used as both. You can generate a Ninja-like subset of it
Or it’s very common to use it as a really bad language to configure a build graph (like Starlark, but worse)
I actually used Make this way for the slow build of Oils, and then abandoned it for a Python/Shell/Ninja combo for the fast build of Oils. No more Make - I should have used Ninja all along.
The Android build used to work this way too, before they moved to another Generator/Ninja combo. It was like 500K lines of pure GNU Make. I think Buildroot (a thriving embedded project) is similar – like hundreds of thousands of lines of GNU Make code.
The GNU Make Standard Library is this Lispy thing which even uses Peano number kind of encodings, and it was actually used in Android ….
https://mxe.cc/gmsl.html
I would question that categorization … Bazel/Buck also create a Ninja-like graph internally – they just don’t serialize it to disk. Bazel has a strict separation of stages (or did for at least a decade, probably still does).
Serializing might actually be a speed optimization, counterintuitively …
And I’d say in the open source world you have more heterogeneity, so it’s extremely useful to have MANY tools for different languages generating ONE graph format.
It’s a classic policy-mechanism split - http://www.catb.org/esr/writings/taoup/html/ch01s06.html#id2877777
In Bazel/Buck you do it all with Starlark and approved APIs, but in open source, it’s easier for those tools to evolve organically, Unix-style.
In Oils, I use Ninja with a mini-Bazel-like wrapper I wrote myself, not CMake or Meson. This kind of diversity is a feature, not a bug!
(My wrapper has Python, C++, and Zephyr ASDL support. Zephyr ASDL is a bit like protobuf – it’s an IDL that generates C++ and Python code. Protobuf is arguably a raison d’être for Bazel/Buck)
It is true that you have to know when to run the “configure” part, i.e. after you do a git pull. I haven’t found that to be a big downside, but I could imagine it might be in some cases.
Usually I’m just iterating on the same unit test without pulling though, and using Ninja is very fast/convenient there.
Yeah, it’s not about using a build engine, it’s about being polymorphic in build engines.
Both Meson and CMake can target different build engines, that makes them meta build systems.
OK yeah I was answering: “why have a separate front end and back end?” but there’s also the question of “why have multiple back ends?”
Other people said this, but I think CMake “won” because they bothered to make things work on Windows. They had the Visual Studio project generator, and the makefile generator. And then Ninja came later.
I think Windows basically means you need different back ends. I don’t think Bazel has ever worked on Windows, and it’s also uncommon on OS X.
There is probably some kind of pseudo- / ill-specified narrow waist of an execution graph in many of these tools:
(CMake, Meson, autoconf) x (ninja, Makefile, visual studio project)
I’m so excited that Meson is taking over from CMake. Having a single popular build system is great: when I’m using Meson and a library uses Meson, it’s super easy to add that library as a subproject and use it with dependency(). We were arguably close to that world with CMake, but honestly, CMake is sooooo incredibly horrible to work with that I don’t think many old projects even see the value in switching away from autotools.
To me, Meson represents an adequate build system which gets the job done well enough. And that’s far, far higher praise than I’d ever give to CMake or autotools.
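For anyone who hasn’t used the subproject workflow mentioned above, it looks roughly like this (hypothetical dependency names):

```meson
project('app', 'c')

# Try the system copy first; if it's missing, fall back to
# subprojects/zlib (e.g. pulled in via a .wrap file).
zlib_dep = dependency('zlib', fallback: ['zlib', 'zlib_dep'])

executable('app', 'main.c', dependencies: zlib_dep)
```

If the dependency is found on the system, the fallback subproject is never configured; otherwise Meson builds the bundled copy as part of the same build.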
To give an opposite experience: compared to CMake, whenever I wanted to do some builds on Windows with semi-advanced dependencies (gstreamer, freetype/harfbuzz), Meson has been an excruciating PITA, to the point that I usually just rewrite the build script in CMake for the projects I can maintain at my scale.
Oh, I have no experience trying to use Meson on Windows and it’s a non-concern for me. Windows users can have their Visual Studio and leave the rest of us alone.
This way of thinking is exactly why meson will continue to be pretty much irrelevant in the grand scheme of things: https://trends.google.com/trends/explore?cat=5&date=all&q=cmake,meson,autotools,build2,bazel&hl=en
Windows is still the N°1 operating system by a large margin.
This is great!
Meson seems to be the build system of choice for polyglot open source projects.
Wouldn’t that be Nix and its 80 thousand packages? It would be interesting to have stats on this.
That’s more of a distribution than a build system. Does nix invoke gcc directly when it builds git or does it invoke it via make?
Nix is not a distribution, Nixpkgs is a distribution. Nix itself is a package-level build system, which is used to produce and subsequently apply the nixpkgs distribution. It is not designed as a replacement of a C or C++ style build system, it is meant to create and use packages which each have their own build actions. So for Git, for example, the package derivation would require Meson and GCC from nixpkgs, and would call into the Meson build system, which would itself handle the compiler invocations. A more apt comparison than Meson vs Nix would be perhaps Nix vs FreeBSD’s Ports.
In the case of the Git package, it relies on make. Every Nix derivation relies on make by default (which is why the file I linked to only sets make flags and does not explicitly calls actual commands in the build step), and has access to a plethora of tools by default, including a C compiler.
In my personal projects, I override this behavior to directly call tectonic for LaTeX files, node for Javascript dependencies, asciidoctor for AsciiDoc files, and so on (in the nativeBuildInputs attribute of my Nix derivation, since I only want these tools to be used at compile-time). Writing a Makefile seems to be a better approach, however.
See also: https://lobste.rs/s/pt1p9w/nix_is_build_system
I had a quick look for messages about meson on the git mailing list and a couple of things from yesterday caught my eye:
the git 2.48.0-rc0 announcement mentions “Various platform compatibility fixes split out of the larger effort to use Meson as the primary build tool.”
the what’s cooking notice lists a bunch of work in progress to fill in details of the meson build
Back in September there were
reports from the git contributor meeting and some followup notes on build systems
a discussion about bugs in git’s autoconf build which seems to have kicked off the meson effort in earnest