1. 47

Loose transcript.


  2. 6

    I’m really impressed by Zig’s cross-platform hermetic build feature. It’s valuable, unique, and strongly positions Zig as a “better C”.

    What do I really want from a better C? The ability to write once, run anywhere, for real. The ability to lock in all my dependencies, then build for any platform, confident that the binaries will work and that my code won’t bitrot after a few years. My experience with C, C++ and Java projects on GitHub is that if they haven’t been maintained for the last five years, you cannot build and run them without updating the code, which requires a lot of expertise.

    Zig can produce distro-independent, portable Linux binaries that are statically linked, as long as you don’t have the “wrong dependencies”. I have a forthcoming project that will use WebGPU for cross-platform graphics, and glfw or an equivalent for windows and input events. I’m pretty sure you can’t create static, platform-independent Linux binaries with these dependencies, because Vulkan on Linux requires dynamic linking, and there is no portable ABI that a statically linked binary can use to interface with Vulkan. Either the Linux Vulkan people need to provide this, or there has to be a separate portable runtime (with a portable ABI) that is ported to each platform, and that Zig binaries can dynamically link to.

    I can see that @AndrewRK has done some experiments with portable Vulkan apps in Zig. Although I think there is a lot of work left to be done, I have hope that Zig will mature into the kind of system I’m asking for.

    1. 8

      I think your goals align closely with those of the mach project, right down to the choice of WebGPU and glfw.

      If I understand correctly, they’ve accepted that dynamic linking is sadly required to use Vulkan on Linux and instead focus on a perfect cross-compilation story. You’d need to provide separate builds for e.g. x86_64-linux-gnu and x86_64-linux-musl, but the gnu ABI build should work on all gnu-based systems, barring bugs in mach or its dependencies.
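
      A minimal sketch of that cross-compilation story, assuming a recent Zig toolchain is on PATH (the flags and target triples below are from recent Zig versions and may differ in yours; the trailing glibc version suffix is optional):

      ```shell
      # Build the same program for both Linux ABIs from any host machine.
      printf 'pub fn main() void {}\n' > hello.zig

      # glibc build: dynamically linked, works on gnu-based distros.
      # The trailing .2.28 pins the minimum glibc version to target.
      zig build-exe hello.zig -target x86_64-linux-gnu.2.28 --name hello-gnu

      # musl build: fully static, runs on any x86_64 Linux kernel.
      zig build-exe hello.zig -target x86_64-linux-musl --name hello-musl
      ```

      Shipping both artifacts covers essentially every x86_64 Linux system without any toolchain installed on the build host besides Zig itself.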

      1. 3

        That’s very interesting, thanks for the link!

    2. 4

      I find it interesting that, according to the “Zig Software Foundation transaction list by date” screenshot towards the end of the post (a publicly available document summarizing money flows for the ZSF), Uber is listed as paying twice the amount of “GitHub Sponsors”. There is always a tension in these foundations between small donations and big donors: individual/small donations are decentralized and make everyone feel better, but big sponsors will easily pay substantially more (and this is just one big company using Zig tooling, not even the language). Here it sounds like Uber was kind in negotiating a deal where they don’t have too much power over the ZSF.

      1. 7

        For comparison, the total amount of GitHub Sponsors income in 2021 was $179,608.91. The Uber support contract was for $52,800.

      2. 4

        I’ve been doing a lot of embedded work lately and this kind of makes me wonder what Yocto would look like with the toolchain bits swapped out for zig cc.
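
        As a hypothetical sketch of that swap (not an actual Yocto recipe): most Make/autotools-based packages only need CC and AR overridden, and zig cc accepts a clang-style -target triple, so the idea looks roughly like this:

        ```shell
        # Hypothetical drop-in: point an ordinary C build at zig cc.
        # The aarch64 triple is illustrative; pick whatever your board needs.
        printf 'int main(void) { return 0; }\n' > app.c
        zig cc -target aarch64-linux-musl app.c -o app

        # For a Make-based package, the same idea is just variable overrides:
        # make CC="zig cc -target aarch64-linux-musl" AR="zig ar"
        ```

        Whether Yocto’s sysroot and recipe machinery would tolerate this without deeper surgery is an open question.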

        1. 3

          I wonder how much of this depends on the Zig language, or whether it could be provided standalone; it seems like they just wanted a better distribution method for clang.

          1. 23

            Note that all of the following components, which are essential to the function of these toolchain features, are completely written in Zig:

            • a cross-platform compiler-rt implementation that is lazily compiled from source for the chosen target. All C code depends on this library. (Also all code that uses LLVM, GCC, or MSVC as a backend depends on this library.)
            • a Mach-O linker (macOS)
            • glue code for building lazily from source musl, mingw-w64, glibc stubs, libc++, libc++abi, libunwind. Basically the equivalent of these build systems ported over to imperative .zig and integrated with the job queue & caching system.
            • zig CLI & glue code for passing appropriate flags to clang, such as which C header directories are needed for the selected target

            That said, I am interested in trying to cooperate with other open source projects and sharing the maintenance burden together. One such example is this project: glibc-abi-tool

            1. 6

              Thanks for the reply; it is certainly much more involved than I thought. It is impressive that you are able to deliver such an improvement over the status quo as a sub-project.

              1. 9

                Thanks for the compliment. The trick is that almost all of these components are also used by the Zig language itself, or as prerequisites for the upcoming package manager, so that C/C++/Zig projects can depend on C/C++/Zig libraries in a cross-platform manner. The only real “extra” sub-project that has to be maintained is the integration with Clang’s command line options (which are helpfully stored in a declarative table in the LLVM codebase).

                1. 2

                  > glue code for building lazily from source musl, mingw-w64, glibc stubs, libc++, libc++abi, libunwind. Basically the equivalent of these build systems ported over to imperative .zig and integrated with the job queue & caching system.

                  While what you have achieved is impressive (if my understanding is correct, you rip out these projects’ native build systems and rewrite them using Zig’s build system), I am skeptical that such a low-level approach (i.e., writing the build description in a low-level language like Zig) will scale to a general C/C++ build system/package manager, especially if you want it to support development (rather than just consumption), shared libraries, other C/C++ compilers, etc.

                  I am working on a similar problem (i.e., general-purpose C/C++ build system and package manager) and when I look at libraries like Qt, it’s inconceivable to me that someone would be able (let alone want) to describe its build without a high-level, purpose-built language with a powerful-enough underlying model of build. But perhaps I am wrong.

                  1. 6

                    I don’t know about Qt (may be painful), but from my Rusty experience it’s often surprisingly easy to throw away the whole build system of many C libraries and replace it with just a reliable C compiler.

                    A lot of these complex build scripts reinvent the same defensive patterns for dealing with broken and outdated C compilers, support for exotic platforms, and their own battle with their dependencies on every OS. But all of that is irrelevant if you already have a good C compiler and a portable package manager.

                    The remaining part is finding where the sources are and setting a few #defines for feature flags, which isn’t that hard.
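
                    A toy illustration of that last step (the file names and the HAVE_FANCY_API macro are made up; the point is that feature flags become plain -D switches, with no configure probing):

                    ```shell
                    # Write a source file whose behavior is gated on a feature macro.
                    printf '%s\n' \
                      '#include <stdio.h>' \
                      'int main(void) {' \
                      '#ifdef HAVE_FANCY_API' \
                      '    puts("fancy");' \
                      '#else' \
                      '    puts("fallback");' \
                      '#endif' \
                      '    return 0;' \
                      '}' > feature.c

                    # The "build system" is then one compiler invocation per target,
                    # with the known-good flags for that platform baked in.
                    cc -DHAVE_FANCY_API feature.c -o feature
                    ./feature   # prints "fancy"
                    ```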

                    1. 2

                      Sure, for simple libraries that are written in (now) portable C this can be easy (you no longer need to check for stdlib.h, as this article eloquently points out). But there is still variability across platforms when you do more interesting stuff (or where Windows is concerned). In build2, for example, we have an autoconf build system module that handles this without resorting to slow and brittle configuration probing (i.e., trying to compile/link test programs).

                      I also assume your build scripts are not geared for development, only consumption. For example, I doubt you extract the header dependency information (say, with the -M GCC/Clang option family) and then update the necessary files if any of the relevant headers change (though I could be wrong). To put it another way, there is a vast gap between doing a one-shot, consumption-oriented C/C++ build and doing accurate change tracking for incremental builds (which, it seems, is what the Zig toolchain is aiming for).
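
                      For the record, the -M family mentioned above works roughly like this (a minimal sketch; -MMD emits a make-compatible dependency file as a side effect of compilation):

                      ```shell
                      # A translation unit that includes a local header.
                      printf 'int add(int a, int b);\n' > util.h
                      printf '#include "util.h"\nint add(int a, int b) { return a + b; }\n' > util.c

                      # -MMD writes util.d listing the user headers util.o depends on;
                      # an incremental build system re-runs this step whenever any of
                      # those headers changes.
                      cc -MMD -c util.c -o util.o
                      cat util.d   # util.o: util.c util.h
                      ```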

                      In fact, if you think about it, the whole thing is rather backwards: you have a “higher-level” Rust package manager which you hacked to build and distribute “lower-level” C libraries. Wouldn’t it be more natural to use a “lower-level” C/C++ package manager with proper support for C libraries to also build and distribute Rust/Zig/etc. packages? At least in the ideal world?

                      1. 1

                        You’re right about limiting it to one-shot downstream use only. I’m fine with that, because I only use the “transplanted” build system to get access to existing/legacy C code. All the new code I write is in other languages, whose native dev experience is not affected by the complexity of building C.