1. 45

Really nice design and implementation, and apenwarr’s generally enjoyable writing to boot.

djb’s original design is here: http://cr.yp.to/redo.html

I don’t see a “build systems” tag; maybe we should start a meta thread to add one?

  2. 14

    My problem with make is not that it has a bad design. It is not THAT bad when you look at things like CMake (oops, I did not put a troll disclaimer, sorry :P).

    But it only has very large implementations, each with a lot of extensions, none of which are POSIX. So if you want a simple tool to build a simple project, you have to have a complex tool, in many cases with even more complexity than the project itself…

    So a simple tool (redo), available with 2 implementations in shell script and 1 implementation in Python, does a lot of good!

    There is also Plan 9 mk(1), which supports evaluating the output of a script as mk input (with the <| command syntax); this removes the need for a configure script (build ./linux.c on Linux, ./bsd.c on BSD…).

    But then again, while we are at re-designing things, let’s simply not limit ourselves to the shortcomings of existing software.

    The interesting part is that you can implement redo entirely as a tiny shell script (less than 4 kB) that you can then ship along with the project!

    There could then be a Makefile with only

    all:
        ./redo
    

    So you would (1) have the simple build system you want, (2) have it portable, as it would be a simple portable shell script, and (3) still have make build the whole project.
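
    To give a flavour of what the per-target rules look like, here is a minimal, hypothetical sketch of a default.o.do (the file names and compiler flags are just an illustration):

    # default.o.do -- hypothetical redo rule for building any .o file
    # redo runs it with $1 = target, $2 = target without extension, $3 = temp output file
    redo-ifchange "$2.c"   # declare the dependency; redo records it in its database
    cc -c -o "$3" "$2.c"   # write to the temp file; redo moves it over the target on success

    A top-level all.do would then just redo-ifchange the objects it needs and link them.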

    You may make me switch to this… ;)

      1. 1

        Nice! So 2 shell, 1 Python and 1 C implementation.

        1. 5

          There is also an implementation in C++. That site also has a nice Introduction to redo.

          I haven’t used any redo implementation myself, but I’ve been wondering how they would perform on large code bases. They all seem to spawn several processes for each file just to check whether it should be remade. Spawning a process is not a particularly fast operation, and the cost might be prohibitive on larger projects. Does anyone happen to have experience with that?

          1. 1

            Spawning a process is not a particularly fast operation, and the cost might be prohibitive on larger projects. Does anyone happen to have experience with that?

            No experience, but from the article:

            Dependencies are tracked in a persistent .redo database so that redo can check them later. If a file needs to be rebuilt, it re-executes the whatever.do script and regenerates the dependencies. If a file doesn’t need to be rebuilt, redo can calculate that just using its persistent .redo database, without re-running the script. And it can do that check just once right at the start of your project build.

            Since building the dependencies is usually done as part of building a target, I think this probably isn’t even a significant problem on an initial build (where the time is going to be dominated by actual building). OTOH I seem to recall that traditional make variants do some optimisation where they run commands directly, rather than passing them via a shell, if they can determine that the commands do not actually use shell built-ins (not 100% sure this is correct, memory is fallible etc); the cost of just launching a shell might be significant if you have to do it a lot, I guess.

        2. 3

          The biggest problem with Make (imo) is that it is almost impossible to write a large correct Makefile. It is too easy for a dependency to exist, but not be tracked by the Make rules, thus making stale artefacts a problem.
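
          A hypothetical example of how that happens: foo.c includes foo.h, but the rule below never mentions the header, so editing foo.h does not trigger a rebuild and make keeps serving the stale foo.o.

          # foo.o really depends on foo.h via #include, but make is never told,
          # so a change to foo.h leaves a stale foo.o behind
          foo.o: foo.c
              cc -c -o foo.o foo.c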

          1. 1

            I had given serious thought to using LD_PRELOAD hooks to detect all dependencies dynamically (and identify e.g. dependencies which hit the network), but never got around to trying it.

            Anyone heard of anything trying that approach?

          2. 2

            Why this obsession with “simple tools for simple projects” though? Why not have one scalable tool that works great for any project?

            (Yeah, CMake is not that tool. But Meson definitely is!)

            1. 3

              Because I want all my projects to be kept simple. Then there is no need for a very powerful tool to build them.

              On the other hand, if you already need a complex tool to do some job, adding another, simpler tool just sums the complexity of the two, as you will now have to understand and maintain both!

              If we aim for the simplest tool that can cover all the situations we face, we will end up with different tools depending on what we expect.

              1. 3

                Meson isn’t a simple tool; it requires the whole Python runtime just to run --help.

                CMake is a lot more lightweight.

                1. 4

                  Have you appreciated how huge CMake actually is? I had problems compiling it on an old machine, since it required something like a gigabyte of memory to build: a two-stage build that took its sweet time.

                  CMake is not lightweight, and that isn’t its strong suit anyway. On the contrary, it’s good at having everything but the kitchen sink and at being considerably flexible (unlike Meson, which has simplicity/rigidity as a goal).

                  1. 2

                    CMake is incredibly heavyweight.

                  2. 1

                    I would like to see how it would work out with different implementations, and how “stable” Meson is as a language.

                    1. 1

                      Meson is nice, but sadly not suitable for every project. It has limitations that prevent some from using it, limitations neither redo nor autotools have, such as putting generated files in a subdirectory (sounds simple, right?).

                  3. 5

                    Oy, another build tool.

                    I’m kind of weary of seeing them show up. Each is subtle in its own right, with deep strangenesses and incompatibilities. One Ring to Rule Them All would be grand.

                    1. 4

                      An important thing to preserve: for most projects, you can cd into the project directory and type make (sometimes with configure first), and that’s the end of the story.

                      1. 6

                        (this reply may or may not contain trolling)

                        • Doesn’t apply to Windows,
                        • Sometimes you need to run ./autogen.sh,
                        • Sometimes you need to install a few packages (you have to know the names) because autoconf isn’t bundled as one package on most distros,
                        • Sometimes autoconf scripts require their tool packages to be installed in specific versions,
                        • Learning to use autoconf requires you to learn a build system which contains backward compatibility for shells/systems that are installed on maybe 10 machines worldwide.
                        1. 3

                          autotools only needs to be used by developers or maintainers; the release tarballs will not require you to run autogen or to install autoconf/automake

                          1. 4

                            Except in cases where you’re the user and you need to use the git version, because it contains a fix for some obscure bug that only you are encountering ;)

                            1. 4

                              Or, as is increasingly common, there are no releases and the git repo is rolling-release.

                          2. 2

                            And then you spend hours tracing m4 scripts because there’s a bug in autofools.

                            1. 1

                              What a joy to dig into the autogenerated configure file to debug what is going wrong when you statically compile a project with 10+ libraries!

                              1. 1

                                That’s true, but normally you shouldn’t have to dig into autogenerated makefiles unless you’re debugging CMake itself. The standard case is that you debug your build at the CMakeLists level (if you’re using CMake).

                              2. 1

                                Doesn’t apply to Windows,

                                I’m there now with a work project that builds on Linux, with some of it also on Windows, but uses all of CMake and premake on top of autotools, gmake/nmake, and the gcc/VC++ toolchains. For a mixed Python/C project this is too much baggage for external users, so I’m trying the waf build system. I happen to be stuck on a peculiar and possibly locally-inflicted Windows linking behavior, but writing rules in a Python DSL is great.

                                1. 1

                                  I tried waf some time ago. It was nice in the beginning, but after a year of using it I stopped understanding my own build systems, because they were nearly standalone Python programs in their own right. Still, it was better than pure Makefiles.

                                2. 1

                                  Learning to use autoconf requires you to learn a build system which contains backward compatibility for shells/systems that are installed on maybe 10 machines worldwide.

                                  And people like me thank them for that!

                                3. 4

                                  yeah… that is why, despite its manifest and many defects, I tend to default to make for projects that aren’t deeply enmeshed in a single build system. It often just calls out to the ecosystem-specific tooling (my home work these days is mostly OCaml, for instance).

                                  ./build.sh is also a nice standard to have.

                                  I’m not opposed to redo or another build system. But new generalized systems IMO have to be clearly and visibly The Better Way Forward: 10x or more the obvious effectiveness of make, for someone comfortable writing make.

                                  (mumble: maybe if we stopped writing C/C++, the build systems coagulated around that arcane world would stop appearing, letting us get on with writing new Scala build systems)

                                  1. 3

                                    I often write a trivial makefile that calls whatever other build tool I’m using, just to preserve this.

                                    1. 2

                                      In all my projects, the Makefile is the entrypoint to building, developing, testing & sometimes even deploying the software.

                                      Yes, you usually end up calling out to programming-language-specific tools underneath (like mix or cabal), but the ability to organize tasks in a dependency tree and to have a single place where they’re all listed is great. Especially when you come back to a project after a long break.
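
                                      For instance, a thin wrapper Makefile along those lines might look like the sketch below (an Elixir project using mix is assumed purely as an illustration):

                                      .PHONY: build test
                                      build:         # delegate to the language-specific tool underneath
                                          mix compile
                                      test: build    # tasks still form one dependency tree, listed in one place
                                          mix test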

                                    2. 4

                                      One Ring to Rule Them All would be grand

                                      Unlikely. But here’s One Theory to Classify Them All:

                                      (Yes, same work posted twice, six months apart, by different people.)

                                      1. 1

                                        Thanks for the links. I just tied a knot around them to facilitate easier discovery of both for anyone who lands on just one.