Threads for zby

    1. 3

      I wonder why it’s stuck using Lua 5.1. That was last updated in December of 2011, and the language has moved on since then. I personally didn’t find Lua 5.2 worth using [1], but Lua 5.3 was worth the pain (explicit support for UTF-8 in a non-breaking way [2], along with 64-bit integer support). In my experience, the only reason people stick with Lua 5.1 is to remain compatible with LuaJIT [3], so if that is a concern, then okay. But for Lua code I release, I do try to support Lua 5.1 or higher.

      [1] There was a breaking change in the way modules were supported.

      [2] I’m looking at you, Python.

      [3] The author, Mike Pall, did not agree with the changes the Lua team made with Lua 5.2.

      1. 6

        Knit uses Gopher Lua as its Lua VM, which implements Lua 5.1. It would be great if there were a pure Go Lua VM targeting a more recent version of Lua that is as comprehensive as Gopher Lua.
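
        For reference, embedding Gopher Lua from Go takes only a few lines (a minimal sketch using the github.com/yuin/gopher-lua package; _VERSION is the standard Lua global that names the implemented language version):

        ```go
        package main

        import (
        	"fmt"

        	lua "github.com/yuin/gopher-lua"
        )

        func main() {
        	L := lua.NewState() // a fresh Lua 5.1 VM, implemented in pure Go
        	defer L.Close()

        	// _VERSION reports which language version the VM implements.
        	if err := L.DoString(`result = _VERSION`); err != nil {
        		panic(err)
        	}
        	fmt.Println(L.GetGlobal("result")) // expected: Lua 5.1
        }
        ```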

      2. 1

        Pretty much all the other Lua implementations target 5.1 too. There are about 4 JVM implementations, at least one CLR implementation, and a couple of native implementations.

        There’s also Roblox’s fork, Luau, which adds features while aiming for backwards compatibility with 5.1.

    2. 4

      I find that the header file problem is one that tup solves incredibly elegantly. It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.

      Not sure if the author is here, but if you are, any plans to support something like that?
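
      For context, a minimal Tupfile might look like this (a hypothetical two-rule C build; %f and %o are tup’s input/output placeholders and %B is the basename). Note that no headers are listed anywhere — tup records every file the compiler actually opens:

      ```
      # Compile each .c file; tup auto-detects the headers each compile reads.
      : foreach *.c |> gcc -c %f -o %o |> %B.o
      # Link all the objects into the final binary.
      : *.o |> gcc %f -o %o |> hello
      ```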

      1. 16

        It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.

        So the “proper” way is to intercept the filesystem calls in a non-portable manner and depend on anything the program opens without regard for whether it affects the output or not (like, say, translations of messages for diagnostics). While explicitly asking the preprocessor for an accurate list of headers that it reads is a hack?
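
        To make the comparison concrete, the preprocessor route looks like this (a hypothetical one-header project; -MM prints the dependency rule, while -MMD writes it to a .d file as a side effect of normal compilation):

        ```shell
        # Hypothetical project: one source file including one header.
        printf 'int x;\n' > foo.h
        printf '#include "foo.h"\nint main(void){return 0;}\n' > foo.c

        # Ask the preprocessor exactly which user headers foo.c reads.
        cc -MM foo.c
        # Typical output:
        #   foo.o: foo.c foo.h
        ```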

        1. 2

          The problem with the second option is that it isn’t portable between languages or even compilers. Sure, both GCC and Clang implement it, but there isn’t a standard output format other than a makefile fragment, which isn’t ideal if you want to use anything that isn’t make.

          1. 11

            It’s an unfortunate format, but it’s set in stone by now, and won’t break. It has become a de facto narrow waist with at least 2 emitters:

            • Clang
            • GCC

            and 2 consumers:

            • Make itself
            • Ninja has a very nice and efficient gcc -M parser

            Basically it’s an economic fact that this format will persist, and it certainly works. I never liked doing anything with it in GNU make because it composes poorly with other Make features, but in Ninja it’s just fine. I’m sure there are many other non-Make systems that parse it by now too.
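
            For the curious, wiring this into Ninja takes a few lines of build.ninja (a sketch; with deps = gcc, Ninja parses the depfile after the first build and stores the result in its internal deps log rather than re-reading .d files every run):

            ```
            rule cc
              command = gcc -MMD -MF $out.d -c $in -o $out
              depfile = $out.d
              deps = gcc

            build foo.o: cc foo.c
            ```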

            1. 1

              That’s a fair point; I also didn’t know Ninja supported it, but it makes sense. I wonder if other languages support something similar to allow for this kind of thing, though many modern languages sidestep the issue altogether by making the compiler take care of incremental compilation.

          2. 3

            Most tools could probably read the -M output format and understand it quite easily. It doesn’t use most of what could show up in a Makefile - it only uses single-line “target: source1 source2” rules with no commands, no variables, etc. I imagine if someone wanted to come up with a universal format, it wouldn’t be far off from what’s already there.
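
            Indeed — a parser for that subset fits in a few lines. A sketch in Go (it joins backslash line continuations and splits on whitespace; escaped spaces in paths, which the format also allows, are ignored here):

            ```go
            package main

            import (
            	"fmt"
            	"strings"
            )

            // parseDepfile parses the "target: dep1 dep2" subset of Makefile
            // syntax that gcc -M emits, joining backslash-newline continuations.
            func parseDepfile(src string) (target string, deps []string) {
            	joined := strings.ReplaceAll(src, "\\\n", " ")
            	head, tail, ok := strings.Cut(joined, ":")
            	if !ok {
            		return "", nil
            	}
            	return strings.TrimSpace(head), strings.Fields(tail)
            }

            func main() {
            	target, deps := parseDepfile("foo.o: foo.c \\\n foo.h bar.h\n")
            	fmt.Println(target, deps) // foo.o [foo.c foo.h bar.h]
            }
            ```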

        2. 2

          But… don’t you want your program to be rebuilt when diagnostic messages change? The FUSE mount doesn’t grab e.g. library and system locales from outside the project root, so it only affects the resources of the project being built [1]. Heaven forbid you’re bisecting a branch for a change that is, for reasonable or cursed reasons alike, descended from one of those files…

          For those interested, I’ve pitched tup and mused about this in a previous comment here.

          [1]: Provided you don’t vendor all your dependencies into the repo, which I guess applies to node_modules! Idk off the top of my head if there’s a way to exclude a subdirectory for this specific situation, or whether symlinks would work for controlling the mechanism.

          Edit: Oh, it’s u/borisk again! I really appreciated your response last time this came up and hope you’re doin’ great c:

          Edit 2: Oh, and you work on a build system! I’ll check it out sometime ^u^

      2. 11

        I originally started Knit with the intention of supporting automatic dependency discovery using ptrace. I experimented with this in a tool called xkvt, which uses ptrace to run a list of commands and can generate a Knitfile that expresses the dependencies. However, I think this method is unfortunately more of a hack compared to -MMD because ptrace is non-portable (not well supported/documented on macOS and non-existent on Windows) and involves a lot of complexity when tracing multithreaded processes. A FUSE-based approach like the one used by Tup is similar (maybe more reliable), but requires FUSE (a kernel extension), and has the downside that automatic dependency discovery can sometimes include dependencies that you don’t really want. When I tried to use Tup for a Chisel project, I ran into problems because I was invoking the Scala build tool, which generated a bunch of temporary files that Tup then required to be explicitly listed.
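
        To illustrate what the ptrace/strace-style approach involves (this is not xkvt’s actual code — just a sketch that post-processes strace -f -e trace=openat output, whose line format is assumed here), extracting the files a command successfully opened:

        ```go
        package main

        import (
        	"bufio"
        	"fmt"
        	"strings"
        )

        // openedFiles pulls the quoted path out of successful openat(2) lines
        // as printed by strace, e.g.
        //   openat(AT_FDCWD, "foo.h", O_RDONLY) = 3
        // Lines with a failed result (= -1 ENOENT etc.) are skipped.
        func openedFiles(straceOutput string) []string {
        	var files []string
        	sc := bufio.NewScanner(strings.NewReader(straceOutput))
        	for sc.Scan() {
        		line := sc.Text()
        		if !strings.Contains(line, "openat(") || strings.Contains(line, "= -1") {
        			continue
        		}
        		start := strings.Index(line, `"`)
        		if start < 0 {
        			continue
        		}
        		end := strings.Index(line[start+1:], `"`)
        		if end < 0 {
        			continue
        		}
        		files = append(files, line[start+1:start+1+end])
        	}
        	return files
        }

        func main() {
        	out := `openat(AT_FDCWD, "foo.c", O_RDONLY) = 3
        openat(AT_FDCWD, "missing.h", O_RDONLY) = -1 ENOENT (No such file or directory)
        openat(AT_FDCWD, "foo.h", O_RDONLY) = 4`
        	fmt.Println(openedFiles(out)) // [foo.c foo.h]
        }
        ```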

        I think if Knit ever has decent support for an automatic dependency approach, it would be via a separate tool or extension rather than directly baked into Knit by default.