1. 35
    1. 4

      I find that the header file problem is one that tup solves incredibly elegantly. It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.

      Not sure if the author is here, but if you are, any plans to support something like that?

      1. 16

        It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.

        So the “proper” way is to intercept the filesystem calls in a non-portable manner and depend on anything the program opens without regard for whether it affects the output or not (like, say, translations of messages for diagnostics). While explicitly asking the preprocessor for an accurate list of headers that it reads is a hack?

        1. 2

          The problem with the second option is that it isn’t portable between languages or even compilers. Sure, both GCC and clang implement it, but there isn’t really a standard output format other than a Makefile fragment, which isn’t ideal if you want to use anything that isn’t make.

          1. 11

            It’s an unfortunate format, but it’s set in stone by now and won’t break. It has become a de facto narrow waist with at least 2 emitters:

            • Clang
            • GCC

            and 2 consumers:

            • Make itself
            • Ninja, which has a very nice and efficient gcc -M parser

            Basically it’s an economic fact that this format will persist, and it certainly works. I never liked doing anything with it in GNU make because it composes poorly with other Make features, but in Ninja it’s just fine. I’m sure there are many other non-Make systems that parse it by now too.
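            For reference, the Ninja side of this is a one-line depfile declaration on the rule. A minimal sketch (rule and file names here are hypothetical):

```ninja
# Ninja re-reads the gcc -M style depfile after each compile and folds
# the discovered headers into its dependency graph.
rule cc
  command = gcc -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  deps = gcc

build foo.o: cc foo.c
```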

            1. 1

              That’s a fair point; I also didn’t know Ninja supported it, but it makes sense. I wonder if other languages support something similar to allow for this kind of thing, though many modern languages just sidestep the issue altogether by making the compiler take care of incremental compilation.

          2. 3

            Most tools could probably read the -M output format and understand it quite easily. It doesn’t use most of what could show up in a Makefile - it only uses single-line “target: source1 source2” rules with no commands, no variables, etc. I imagine if someone wanted to come up with a universal format, it wouldn’t be far off from what’s already there.
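            As a sketch of how little there is to parse (the file names are made up):

```shell
# A depfile of the kind `gcc -MM foo.c` emits: one target, a colon,
# and a whitespace-separated list of prerequisites.
printf 'foo.o: foo.c foo.h util.h\n' > foo.d

# Stripping the target yields the dependency list; a real parser would
# also handle backslash line continuations and escaped spaces.
deps=$(sed 's/^[^:]*: *//' foo.d)
echo "$deps"
```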

        2. 2

          But... don’t you want to update your program when diagnostic messages change? The FUSE mount doesn’t grab e.g. library and system locales from outside the project root, so it only affects the resources of the project being built[1]. Heaven forbid you’re bisecting a branch for a change that is, for reasonable or cursed reasons alike, descended from one of those files...

          For those interested, I’ve pitched tup and mused about this in a previous comment here.

          [1]: Provided you don’t vendor all your dependencies into the repo, which I guess applies to node_modules! Idk off the top of my head if there’s a way to exclude a subdirectory for this specific situation, or whether symlinks would work for controlling the mechanism.

          Edit: Oh, it’s u/borisk again! I really appreciated your response last time this came up and hope you’re doin’ great c:

          Edit 2: Oh, and you work on a build system! I’ll check it out sometime ^u^

      2. 11

        I originally started Knit with the intention of supporting automatic dependency discovery using ptrace. I experimented with this in a tool called xkvt, which uses ptrace to run a list of commands and can generate a Knitfile that expresses the dependencies. However, I think this method is unfortunately more of a hack than -MMD: ptrace is non-portable (not well supported/documented on macOS and non-existent on Windows) and has a lot of complexity for tracing multithreaded processes. A FUSE-based approach like the one used by Tup is similar (maybe more reliable), but requires FUSE (a kernel extension), and automatic dependency discovery can also pick up dependencies that you don’t really want. When I tried to use Tup for a Chisel project, I ran into problems because I was invoking the Scala build tool, which generated a bunch of temporary files that Tup then required to be explicitly listed.

        I think if Knit ever has decent support for an automatic dependency approach, it would be via a separate tool or extension rather than directly baked into Knit by default.
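        For the curious, trace-based discovery of this kind boils down to watching open calls. A rough sketch over canned strace-style output (the paths are hypothetical, and note the locale file that a naive tracer would happily record as a dependency):

```shell
# Canned excerpt of the kind of output `strace -f -e trace=openat cc -c foo.c`
# produces; in a real tracer this would come from ptrace, not a file.
printf '%s\n' \
  'openat(AT_FDCWD, "foo.c", O_RDONLY) = 3' \
  'openat(AT_FDCWD, "foo.h", O_RDONLY) = 4' \
  'openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY) = 5' \
  > trace.log

# Pull out the opened paths and drop absolute (system) ones, leaving a
# candidate dependency list for the rule.
deps=$(grep -o '"[^"]*"' trace.log | tr -d '"' | grep -v '^/')
echo "$deps"
```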

    2. 4

      The most significant reason to use make is its portability and wide availability, so having the option to generate a Makefile is cool - you could use your nice features while developing, but include a Makefile in a distributed tarball so everyone else can build your project too.

      The nonstandard -C subdir method of sub-builds is pretty hacky anyway; the better way of doing it in plain make is to use full paths for subdirectories, which also means make knows the full dependency graph. Recommended reading: “Recursive Make Considered Harmful”.
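      A minimal sketch of the full-paths style (file names hypothetical):

```make
# One top-level Makefile names every object by its full relative path,
# so make sees the whole dependency graph instead of recursing blindly.
app: src/main.o lib/util.o
	$(CC) -o $@ $^

src/main.o: src/main.c lib/util.h
lib/util.o: lib/util.c lib/util.h
```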

      I get that the Knitfile being a Lua script is nice and permits the use of the standard library, but working with Lua’s syntax makes some of it a little complicated - like the way of choosing a different directory being totally different from everything else. Compare with OpenBSD’s /etc/*.conf files, which all share one consistent syntax. Not that you should change it, Lua is great.

    3. 5

      Converting the knit file to both Ninja and shell is cool!

      That’s exactly what I do for Oil – I have a Python build configuration, and it generates Ninja for fast development.

      But we just generate a shell script for the end user build, so you don’t even need Ninja or Make to build the new Oil C++ tarball. All you need is a C++ compiler and shell.
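      The idea can be sketched like this (a toy generator of my own, not Oil’s actual one; the source names and flags are made up):

```shell
# Emit a flat, dependency-free build script from a source list, in the
# spirit of shipping plain shell to end users instead of a build tool.
srcs="foo.cc bar.cc"
{
  echo '#!/bin/sh'
  echo 'set -e'
  for s in $srcs; do
    echo "c++ -O2 -c $s -o ${s%.cc}.o"
  done
  echo 'c++ foo.o bar.o -o app'
} > build.sh
```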

      I haven’t written much about this, but I mentioned it here: http://www.oilshell.org/blog/2022/10/garbage-collector.html#declarative-ninja-mini-bazel

      It uses the pattern that Bazel and Buck use (Buck2 thread: https://lobste.rs/s/b0fkuh/build_faster_with_buck2_our_open_source)

    4. 3

      I wonder why it’s stuck on Lua 5.1. That was last updated in December of 2011 and the language has moved on since then. While I personally didn’t find Lua 5.2 worth using [1], Lua 5.3 was worth the pain (explicit support for UTF-8 in a non-breaking way [2], along with 64-bit integer support). In my experience, the only reason people stick with Lua 5.1 is to remain compatible with LuaJIT [3], so if that is a concern, then okay. But for Lua code I release, I do try to support Lua 5.1 or higher.

      [1] There was a breaking change in the way modules were supported.

      [2] I’m looking at you, Python.

      [3] The author, Mike Pall, did not agree with the changes the Lua team made with Lua 5.2.

      1. 6

        Knit uses GopherLua as its Lua VM, which implements Lua 5.1. It would be great if there were a pure Go Lua VM targeting a more recent version of Lua that is as comprehensive as GopherLua.

      2. 1

        Pretty much all the other Lua implementations target 5.1 too. There are about 4 JVM implementations, at least one CLR implementation, and a couple of native implementations.

        There’s also Roblox’s fork, Luau, which adds features while aiming for backwards compatibility with 5.1.

    5. 3

      This is so exciting! I’ve been searching for something exactly like this. Looking forward to trying it out.

      One thing I use Make for is custom static site generators. Sometimes there is functionality I want that’s not available in the language I’m using, e.g. KaTeX math, which is a JS library. Rather than spawn a new node process for every piece of inline math, I run the JS script as a server that listens on a Unix domain socket.

      Using Make I can automatically start and stop the server as needed, by making the socket an “intermediate” file and having the server automatically stop when the file is removed (importantly, Make always does this, even on error). You can see how I do it here and here. It would be cool if this would work in Knit as well, or if it had more general support for “post-requisites” so that the server wouldn’t need to watch for removal.
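      A minimal sketch of the intermediate-socket trick (the server script and renderer here are hypothetical stand-ins for the real ones):

```make
# make deletes intermediate files once the build finishes (or fails);
# the KaTeX server watches its socket file and exits when it is removed.
.INTERMEDIATE: katex.sock

katex.sock:
	node katex-server.js --socket $@ &

%.html: %.md katex.sock
	./render.sh --socket katex.sock $< > $@
```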