1. 110
    1. 28

      Exciting times.

      I’ve been sneaking it in at work to replace internal tools that have a 1.5-second startup delay and 200+ MB of runtime dependencies with fast, static little Zig exes that I can cross-compile to every platform used in the workplace.

      1. 16

        I find your story more flattering than any big tech company deciding to adopt Zig. Thank you very much for sharing!

        1. 8

          My impression of Zig and you all who are behind it has been that you care about these use cases at least as much as enabling big complex industrial applications, and not only in words but in action. :)

          I actually started out with Rust, which I thought would be more easily accepted. I work in the public sector and tech choices are a bit conservative, but Rust has the power of hype in addition to its nice qualities, and has some interest from techy people in the workplace.

          But then the easiest way to cross-compile my initial Rust program seemed to be to use Zig, and I didn’t really want to depend on both of them!

          1. 7

            Seems like Go would be a natural choice. Far more popular than Zig and cross-compiles everywhere. Why Zig?

            1. 19

              Not OP, but I can’t stand programming in Go. Everything feels painful for no reason. Error handling, scoping rules, hostile CLIs, testing, tooling, etc.

              My greatest hope for Zig is that I can use it to replace Go, not just to replace C.

              @kristoff what’s your take on that? Given that Zig has higher-level constructs like async/await built-in, with the support of higher-level APIs, are there reasons programming in Zig can’t be as convenient as programming in higher-level languages like Go?

              1. 16

                I’m not going to argue with that, but if you’re my report and you’re building company infrastructure in some esoteric language like Zig, making it impossible to find team members to maintain said infrastructure after you leave, we’re going to have a serious talk about the clash between the company’s priorities and your priorities.

                OP said “sneaking in at work”. When working in a team, you use tooling that the team agrees to use and support.

                1. 11

                  Oh, can’t disagree there. I’m just hoping that someday I can replace my boring corporate Go with boring corporate Zig.

                2. 7

                  Two half-baked thoughts on this:

                  1. small, well-scoped utilities should not be hard for some future engineer to come up to speed on, especially in a language with an ever-growing pool of documentation. if OP was “sneaking in” some Brainfuck, that’s one thing. Zig? that’s not a horribly unsafe bet - it’s a squiggly brace language that looks and feels reasonably familiar, with the bonus of memory management thrown in

                  2. orgs that adhere religiously to “you use tooling that the team agrees to use and support” tend to rarely iterate on that list, which can make growth and learning hard. keeping engineers happy often entails a bit of letting them spread their wings and try/learn/do new things. this seems like a relatively lower-risk way to allow for that. mind you, if OP were “sneaking in” whole database engines or even Zig into hot-path app code without broader discussion, that’s a whole other problem, but in sidecar utility scripts? not much worse than writing a Bash script (which can often end up write-only anyway) IMO

                  1. 9

                    Pretty much this, in my case.

                    The “sneaking” part was not entirely serious.

                    I have used it before at work to implement external functions for Db2, which has a C API that is very easy to use from Zig: import the C headers, write your code, add a C ABI wrapper on top. I was using it just as “a better C” in that case.

                    And while we mostly use “boring” old languages, there are some other things here and there. It’s not entirely rigid, especially not outside of the main projects.

                  2. 3

                    (1) assumes that there is no cost to adding an additional toolchain simply because it’s for a small/self-contained utility, which I’d hope people understand is simply not true

                    (2) you’re not wrong about tooling conservatism, but that’s because your statement (1) is false - adding new tools has a real cost. The goal of a project is not to help you learn new things; that’s largely a happy coincidence. More to the point, you’re artificially limiting who can fix things later, especially if it’s a small out-of-the-way tool - once you’re gone, if any issues arise, any bug fix first requires learning a new toolchain not used elsewhere.

                3. 2

                  At least in my own domain (stuff interacting with other stuff on internet) I could say the same thing about Go, or most languages that aren’t Java/JS/C#/PHP/Python/Ruby. Maybe we will get to live in the 90’s forever :)

                4. 2

                  I am not a Zig user, but a Go user, yet I disagree about the team part.

                  In my experience that’s not really true, and my assumption here is that this is because it’s not just fewer people looking for a job using language X, but also fewer companies for these developers to choose from.

                  More than that, I’d argue that the programming language might not be the main factor. As in, that’s something you can learn if it’s interesting.

                  Of course all of that depends on a lot of other context as well. The domain of the field that you’ll actually work on, the team, its mentality, frameworks being used, alignment of values within the profession and potentially ones outside as well.

                  I also would assume that using Zig, for example, might make it a lot easier to find a fitting candidate when compared to, let’s say, Java, where you might get a very low percentage of applications where the candidates actually fit. Especially when looking for a less junior position. Simply because that’s what everyone learns in school.

                  So I think having a hard time finding (good) devs using Zig or other smaller languages (I think esoteric means something else for programming languages) is not a given.

                5. 1

                  That’s completely right, and also a bit sad.

              2. 4

                I don’t think that Zig can be a Go replacement for everyone, but if you are comfortable knowing what lies behind the Go runtime, it can be. I can totally see myself replacing all of my usage of Go once the Zig ecosystem becomes mature enough (which, even optimistically, is going to take a while; Go has a very good ecosystem IMO, especially when it comes to web stuff).

                Zig has some nice quality of life improvements over Go (try, sane defer, unions, enums, optionals, …), which can be enough for me to want to switch, but I also had an interest in learning lower level programming. If you really don’t want to learn anything about that, I don’t think Zig can really be a comfortable replacement, as it doesn’t have 100% fool-proof guard rails to protect you from lower level programming issues.

                1. 1

                  How does Zig’s defer work differently than Go’s?

                  1. 5

                    In Go, deferred function calls inside loops will execute at the end of the function rather than the end of the scope.
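                    A tiny Go sketch of the gotcha (the deferInLoop helper is hypothetical, just for illustration): every defer queued inside the loop fires only when the function returns, in LIFO order, not at the end of each iteration:

                    ```go
                    package main

                    import "fmt"

                    // deferInLoop records the order of events to show that defers
                    // queued inside a loop only fire when the enclosing function
                    // returns (last-in-first-out), not at the end of each iteration.
                    func deferInLoop() (events []string) {
                    	for i := 0; i < 3; i++ {
                    		i := i // capture the per-iteration value (needed before Go 1.22)
                    		events = append(events, fmt.Sprintf("open %d", i))
                    		defer func() {
                    			// runs only after "loop done" is appended below
                    			events = append(events, fmt.Sprintf("close %d", i))
                    		}()
                    	}
                    	events = append(events, "loop done")
                    	return // named result, so the deferred appends are visible to the caller
                    }

                    func main() {
                    	for _, e := range deferInLoop() {
                    		fmt.Println(e)
                    	}
                    }
                    ```

                    So the order comes out as open 0, open 1, open 2, loop done, close 2, close 1, close 0 - whereas a Zig defer in the loop body would run at the end of each iteration’s block.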

                    1. 4

                      Oh I didn’t realize Zig had block scoped defer. I assumed they were like Go. Awesome! Yeah that’s a huge pain with Go.

              3. 3

                I “agree to disagree” on many of the listed issues, but one of them sincerely piqued my interest. Coming from Go and now Rust (and before that C, C++, and others), I am actually honestly interested in Zig (as another tool in my toolbox), and have tried dabbling in it a few times. However (apart from waiting for better docs), one thing I’m still super confused by, and unsure how I should approach, is in fact error handling in Zig. Specifically, that Zig seems to be missing errors with “rich context”. I see that the issue is still open, so I assume there’s still hope something will be done in this area, but I keep wondering: is this considered not a pain point by Zig users? Is there some established, non-painful way of passing error context up the call stack? What do experienced Zig devs do in this area when writing non-trivial apps?

                1. 4

                  I see that the issue is still open, so I assume there’s still hope something will be done in this area

                  You are right, no final decision has been made yet, but you will find that not everybody thinks that errors with payloads are a good idea. They clearly are a good idea from an ergonomics perspective, but they also have some other downsides, and I’m personally in the camp that thinks not having them is the better choice overall (for Zig).

                  I made a post about this in the Zig subreddit a while ago: https://old.reddit.com/r/Zig/comments/wqnd04/my_reasoning_for_why_zig_errors_shouldnt_have_a/

                  You will also find that not everybody agrees with my take :^)

                  1. 2

                    Cool post, big thanks!!! It gives me an understandable rationale, especially one that makes sense in the context of Zig’s ideals: squeezing out performance (in this case esp. allocations, but also potentially useless operations) wherever possible, in simple ways. I’ll need to keep the diagnostics idea in mind for the next time with Zig, then, and see what I think about it after trying. Other than that, after reading it, my main takeaway is that I was reminded of a feeling I got some time ago: that errors & logging seem to be a big, important, yet still not well understood nor “solved” area of our craft :/

                    1. 1

                      I used zig-clap recently, which has diagnostics that you can enable and then extract when doing normal Zig error handling. I think that’s an okay compromise. And easier than all those libraries that help you deal with the mess of composing different error types and whatnot.

              4. 2

                Out of curiosity, what issues have you encountered when it comes to scoping rules in Go?

                1. 6

                  There are gotchas/flaws like this: https://github.com/golang/go/discussions/56010

                  I feel like I run into shadowing issues, and then there are things like assigning to err a bunch of times: when you want to reorder things you have to toggle := vs =, or maybe you resort to err2, err3, etc. In Zig all that error-handling boilerplate is gone and operations become order-independent because you just try.
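                  To illustrate the shuffle (a hedged sketch; readPort and checkRange are made-up helpers, not from any real codebase): the statement that first introduces err must use :=, and every later assignment must use =, so rearranging the fallible steps means editing the operators too:

                  ```go
                  package main

                  import (
                  	"fmt"
                  	"strconv"
                  )

                  // readPort parses a port number and then validates it. The first
                  // assignment introduces err with :=; the second reuses it with =.
                  // Reorder or insert a step and the operators have to change with it.
                  func readPort(s string) (int, error) {
                  	port, err := strconv.Atoi(s) // := because err is new here
                  	if err != nil {
                  		return 0, err
                  	}
                  	err = checkRange(port) // = because err already exists
                  	if err != nil {
                  		return 0, err
                  	}
                  	return port, nil
                  }

                  func checkRange(p int) error {
                  	if p < 1 || p > 65535 {
                  		return fmt.Errorf("port %d out of range", p)
                  	}
                  	return nil
                  }

                  func main() {
                  	p, err := readPort("8080")
                  	fmt.Println(p, err)
                  }
                  ```

                  Insert another fallible call in between and you’re back to deciding which line keeps the := or inventing err2 - exactly the boilerplate that Zig’s try sidesteps.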

                  And don’t get me started on the fact that Go doesn’t even verify that you handle errors, you need to rely on golangci-lint for extra checks the language should do…

                  Edit: also as Andrew points out, Go doesn’t block-scope things when it should: https://lobste.rs/s/csax21/zig_is_self_hosted_now_what_s_next#c_g4xnfw

                  Edit: ohh yeah part of what I meant by “scoping” was also “visibility” rules. It’s so dumb that changing the visibility (public/private) of an identifier also makes you change its name (lowercase vs uppercase initial letter).

                  1. 2

                    It’s so dumb that changing the visibility (public/private) of an identifier also makes you change its name (lowercase vs uppercase initial letter).

                    Especially since some people write code in their native language(s) (like at my job), and not all writing systems even have this distinction.

            2. 4

              I’ve had better results (and more fun) with Rust and Zig in my personal projects, so Go didn’t really cross my mind.

              If it was already in use in this workplace, or if there had been interest from coworkers in it, I might agree that it would be a “natural choice”.

              Edit: I think it’s also easier to call Zig code from the other languages we use (and vice versa), than to call Go code. That might come in handy too.

          2. 1

            Just curious what platforms you’re targeting? x86 vs. ARM, or different OSes, etc.?

            1. 1

              The platforms are those used by people in the organisation, currently Linux, Windows, and macOS. Mostly x86, some ARM.

      2. 2

        Ah, the golang niche! Good to see :D

      3. 2

        Do the other devs all know Zig? If not, seems like a large downside is that tools formerly editable by anyone become black boxes?

        To be clear, I hate unnecessarily bloated tools too. I’m just considering the full picture.

        1. 2

          They don’t/didn’t, but it’s a quick language to pick up when you already know C and a few others.

          I didn’t know the previous language before I started contributing to these tools either.

          It was pretty easy when someone else had already done the foundation, and I think/hope I am providing a solid foundation for others as well.

      4. 1

        What’s the delay from? Are these Java tools?

        1. 2

          It’s node/TypeScript. It used to be worse. :)

    2. 9

      The main goal of this first iteration is to enable simple usage of dependencies to start building a package ecosystem, and to make sure that we can easily package C/C++ projects, not just Zig.

      After having gone through this process (i.e., replacing a hodge-podge of build systems with build2) in 300+ C/C++ packages, one especially nasty thing about quite a few of them is the dynamic probing (i.e., compile/link tests) of the target with Autoconf checks (or their CMake equivalents) in order to generate the config.h file. Our solution is the libbuild2-autoconf build system module. I wonder what Zig’s plan is?

      1. 9

        Wow - I’m checking out libbuild2-autoconf now. This is impressive, I can only imagine the pain you had to go through to make this.

        From the look of it, our strategy will be similar to your pragmatic approach. I’d be interested in comparing notes and collaborating wherever it makes sense to!

        If you want something to look at, here’s an experimental SDL prepared as a Zig package. Essentially, it’s a fork of upstream with the build system replaced by build.zig. As you noted, the config.h is problematic, and here I didn’t really solve it in a satisfying way - I prebuilt it for a few targets and then manually tweaked it. It also doesn’t solve dynamic linking against X11 on Linux. So yeah, we’ve got some problems to solve.

        1. 4

          Sure, we would be happy to collaborate. I suppose you could easily reuse our Autoconf checks if you are happy with the overall approach. And we sure would be glad to reuse any that you implement.

          A more radical idea would be to reuse the build2 build system (which is available as a C++ library) in Zig. Specifically, you could try replacing the “engine” that’s inside build.zig with it. This would not only give you access to the Autoconf functionality, but also to the 300+ C/C++ packages I mentioned above.

          I have even more radical ideas, but this is probably already pushing it ;-).

    3. 3

      building the compiler itself used to require 9.6GB of RAM, while now it takes 2.8GB

      Wow, how can a compiler for a language like Zig use that much memory?

      1. 6

        One reason would be that the entire project is one giant compilation unit.

        1. 3

          Is it? Then the size of the project would be limited by available memory or CPU architecture. Doesn’t the Zig compiler support separate compilation?

          1. 1

            What’s wrong with whole program compilation? http://mlton.org/References.attachments/060916-mlton.pdf

            1. 8

              That it requires 9.6GB of RAM?

              1. 7

                Optimizing compilers and linkers use a lot of RAM, because it is generally accepted that developers would prefer compile time be shorter, and they will happily trade RAM for that - it is much easier to double the RAM in your machine than it is to double the CPU performance.

                This post does say “hey, we’ve made compilation use less RAM”, which I’m going to guess means some particular section was using a data structure that had the trade-off skewed, or was configured with the wrong trade-off.

                Sure, you can point to old compilers that used less RAM, but they produced worse code, took longer, or both.

                There are plenty of reasons to bash Zig, but this just isn’t one of them.

                1. 8

                  I think the real tradeoff is that C compilers used to operate a line at a time, more or less. That warps the language and requires things like forward declarations and the preprocessor.

                  (The preprocessor also enables separate compilation – parallelization by processes and incremental builds, which is nice.)

                  But no sane language would make those language concessions now, including Zig.

                  Still I would like to read a blog post about why self-hosted Zig requires 2.8 GB of RAM. I’m not saying it is too much, but I think it would be instructive.

                  Especially after the talk about data-oriented programming and Zig’s tokenizing / parsing / AST (which I found useful).

                  I thought the Zig compiler was around 100K lines of code, not 1M lines of code. So, very naively speaking, that would be 28 KB of memory per line, which is a ~1000x blowup on the input size.

                  The code representation doesn’t require that much blowup – it’s 10x at most. So what are the expensive algorithms responsible for the other 100x? Type checking, executing comptime, register allocation, etc.?

                  That would be a very interesting analysis.

                  1. 9

                    I was curious too, so I learned how to use massif (turned out to be near-trivial) and collected this data:


                    Appears to be mostly LLVM. So another interesting data point would be how much memory is used when Zig builds itself without LLVM involved.

                    Zig’s non-LLVM x86 backend is not capable of building Zig yet, but I can offer a data point on building the behavior tests, which total about 31,000 lines: peak RSS of 90 MiB

                    Another data point would be using that contributor’s branch that improves the C backend mentioned in the post. I’m actually able to use it to translate the Zig self-hosted compiler into C code. Peak RSS: 459 MiB. Visualization: https://i.imgur.com/ww23lx3.png So here it looks like the culprit is, again, buffering the whole .c file output before writing it. I would expect the upcoming x86 backend to have an even better memory profile than this, since it does not buffer everything in memory.

                    1. 4

                      Ah thanks, well this makes me realize that I read the title / first two paragraphs and assumed that pure Zig code was taking 2.8 GB :-/ Judging by other comments, I was probably not the only one who thought that …

                      This makes more sense – from what little I know about LLVM, there are lots of hard algorithms that are quadratic or exponential in time or space, and heuristics to “give up” when passes use too many resources. I’d definitely expect the hot-patching Zig compiler (a very cool idea) to use less memory, since the goal is to generate code fast, not fast code.

                      I also sympathize with the 2 years of “under the hood” work – it’s a similar story for https://www.oilshell.org right now!

                      1. 4

                        Yeah it’s a bit awkward to disambiguate between these things:

                        • frontend in C++, backend in LLVM (“old bootstrap compiler”)
                        • frontend in Zig, backend in LLVM (“self-hosted compiler”)
                        • frontend in Zig, backend in Zig (“fully self-hosted compiler”)
                        • frontend in C, backend in C, outputs C (“new bootstrap compiler”)

                        All of these are relevant codebases at this point in time. What would you call them?

                        1. 3

                          I don’t have that strong an opinion, but I would say “self-hosted front end” makes sense – i.e. “Zig’s front end is self-hosted” – but I wouldn’t yet say “the Zig compiler is self-hosted”.

                          I don’t think of the last one as a “bootstrap” compiler. The term “bootstrapping” is overloaded, but I think of that as the first one only – the thing you wrote before you had Zig to write things in!

                          If the purpose of the last one is to run on architectures without LLVM, then maybe “generated compatible compiler” or “generated compiler in C”?

                          That said, I probably don’t understand the system enough to suggest good names. I can see why the last one would be used for “bootstrapping” a new platform. (“Turning up” instead?)

                          stage{0,1,2,3} might be OK too – what I do is link all terms to the “glossary”


                          Although that doesn’t necessarily help people writing on other sites! What I really do is explain things over and over again, while trying to converge on stable terms with explicit definitions … but the terms change as the code changes, and the project’s strategy changes, so I understand the problem :)

                          edit: I think the first one “bootstraps the Zig language” and the second one “bootstraps a new platform with Zig”, which are related but different things. I think having 2 different words for those could reduce confusion, but you would probably have to invent something (which is work, but IMO fun)

                        2. 1

                          frontend in C, backend in C, outputs C (“new bootstrap compiler”)

                          Interesting, didn’t realize y’all were doing a new bootstrap compiler too. Just wondering why it doesn’t make sense to have the fully self-hosted compiler compile to C for bootstrapping?

                          1. 2

                            I’m fine with doing that for a little while but it does not actually solve the bootstrapping problem. Bootstrapping means starting with source, not an output file (even if that output file happens to have a .c extension).

                          2. 1

                            IIRC, that’s the plan

                  2. 2

                    Still I would like to read a blog post about why self-hosted Zig requires 2.8 GB of RAM ….

                    That gets right to the point, thanks. I’ve read that the compiler is 200 kLOC, but a 500x blowup is still very, very large; assuming that the backend is still LLVM, we can assume that the old frontend was responsible for the additional 6.8 GB, which makes it even more mysterious.

                2. 3

                  it is much easier to double the RAM in your machine than it is to double the CPU performance.

                  But doubling RAM is only half of the story: you also need adequate memory bandwidth. You can see the effect of this in how poorly (non-Pro) Threadrippers scale for C++ compilation.

                  1. 1

                    Yes, but it’s still easier to double the RAM than to double the CPU runtime performance - as the last decade has shown, the bulk of the big compute increases come from increasing the number of cores, which in general does not seem to benefit compilation of individual translation units.

                    Obviously you would ideally have something that is faster and uses less RAM, but that’s true of anything with a trade-off: we’d rather there not be one :)

                    So I would rather something use more RAM and get things done faster than skimp on memory out of some arbitrary, misplaced fear of using the available system resources.

                    A lot of my work in JSC was directly trading memory for page load time, and people seem to have found that to be the correct trade-off.

                    1. 1

                      TU? plt?

                      1. 2

                        Updated the comment to remove the abbreviations, sorry.

                        TU = translation unit, i.e. the piece of code a compiler is “translating” to assembly or what have you; for example, a single .c file (including all the included headers, etc.) would be a TU.

                        PLT: super ambiguous here, sorry; here it is page load time, but contextually you could reasonably have thought programming language theory, though that would make the sentence even more confusing :)

                      2. 1

                        Translation Unit

                        Programming Language Theory

                        1. 1

                          Wrong in the latter! :)

                          Plt here is page load time :)

                          1. 1

                            Aha! Thank you for the correction

                            1. 2

                              Correction is a strong word - I used “plt” in a conversation about programming languages while not referring to programming languages; what could go wrong? :D

                3. 2

                  Optimizing compilers and linkers use a lot of ram

                  I started building compilers three decades ago and have a formal education in doing so, and there are a lot of compilers still developed today which use less RAM for the same source code size, are faster, and don’t generate worse code. Therefore it’s a fair question what the Zig compiler does differently.

                  1. 2

                    there are a lot of compilers still developed today which use less RAM for the same source code size, are faster and don’t generate worse code.

                    Now I’m curious! Which compilers were you thinking of here?

                    1. 1

                      E.g. all C and C++ compilers I worked with; my favorite is still GCC 4.8.

                      1. 1

                        Interesting…given that apparently most of Zig’s memory usage turned out to be LLVM [0], I’d be really surprised if clang used less RAM and ran faster for the same size C++ code, but I have very little firsthand experience with either clang or sizable C++ code bases, so maybe that’s just my misimpression of clang and C++!

                        edit: just realized it’s also possible that clang isn’t one of the C++ compilers you’ve worked with.

                        [0] https://lobste.rs/s/csax21/zig_is_self_hosted_now_what_s_next#c_bspzkq

                        1. 3

                          To provide some figures, I built my most recent LeanQt release (https://github.com/rochus-keller/LeanQt) with different compilers on x86 and x86_64.

                          Here are the cloc results:

                          LeanQt-2022-10-21/core $ cloc .
                               491 text files.
                               491 unique files.                                          
                                 3 files ignored.
                          http://cloc.sourceforge.net v 1.60  T=3.81 s (128.1 files/s, 76031.5 lines/s)
                          Language                     files          blank        comment           code
                          C++                            209          32448          64890          98328
                          C/C++ Header                   265           9203          11711          68493
                          Objective C++                   10            387            484           1961
                          IDL                              1             12              0            963
                          Assembly                         3            106             67            682
                          SUM:                           488          42156          77152         170427

                          Here is the result of gcc version 4.8.2, Target: i686-linux-gnu

                          /usr/bin/time -v ./lua build.lua .. -P HAVE_CORE_ALL
                          	Maximum resident set size (kbytes): 119912

                          Here is the result of gcc version 5.4.0, Target: x86_64-linux-gnu

                          /usr/bin/time -v ./lua build.lua .. -P HAVE_CORE_ALL
                          	Maximum resident set size (kbytes): 286628

                          Here is the result of Apple LLVM version 7.3.0 (clang-703.0.31), Target: x86_64-apple-darwin15.6.0

                          /usr/bin/time -l ./lua build.lua ../LeanQt -P HAVE_CORE_ALL
                           117989376  maximum resident set size

                          Conclusion: A C++ code base comparable in size with the Zig compiler requires less than 300 MB of RAM with all tested GCC and Clang versions; Clang on x86_64 even requires less than half the RAM of GCC on x86_64 (118 MB vs. 287 MB).

                          1. 2

                            Wow, not at all what I would have guessed - thank you so much for sharing!

                        2. 1

                          I’m also using Clang/LLVM; the most recent version on my M1 Mac is 13.x, though my favorite version is still 4.x.

                          1. [Comment removed by author]

                  2. 1

                    Sure, I would guess that there are places where they have chosen a clearer/more easily understood architecture and algorithms because modern hardware changes the trade-offs.

                    Even individual TUs in clang can easily hit gigs of RAM at a time, and LTO modes easily make it insane.

                    You could argue “lazy devs aren’t doing what we had to do in the past”, but by that logic you should turn around and say “why bother making CPUs faster, or making systems with more RAM”.

                    I don’t have three decades of compiler experience, unless you consider my masters or work on the Gyro patch for Rotor, which you could argue is “compiler work” - but as none of this was real production-level code, I wouldn’t count it in this context. So let’s say I have somewhere in the 10-15 year range of production work shipping to consumers and developers, and I think that my experience is sufficient here.

                    1. 1

                      unless you consider my masters …

                      That was not the point. The point was that I came across a lot of compilers and languages, even in a time when 100 MB was an incredible lot of RAM, and that I cannot explain why a language like Zig, which is not the most complex language there is (certainly less complex than C++), requires ten to a hundred times more memory than e.g. the C++ compilers I have ever dealt with. Even if we can make educated guesses, they are still guesses.

                      1. 1

                        That’s literally the first part of my answer:

                        Sure, I would guess that there are places where they have chosen clearer/more easily understood architecture and algorithms because modern hardware changes the trade offs.

                        Increasing the available resources (RAM, CPU time) is an enabler: you no longer have to worry about every single byte, or every single cycle, in order to make a robust and usable compiler. Presumably, if people start making large-scale projects in Zig, the trade-offs between simplicity and ease of development vs. performance will change; I assume the 9.6 GB -> 2.8 GB reduction was the result of something along those lines.

              2. 2

                It now takes 2.8 GB of RAM, so I’m not sure what your point is?

                1. 4

                  That brings us back to my original question: doesn’t the Zig compiler support separate compilation? 2.8 GB is still too much if you e.g. want to compile on a 32-bit x86 Linux machine.

                  1. 1

                    Not that this is the only answer, but you can easily cross compile with Zig. So you need not compile on an x86 machine.
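
                    For what it’s worth, a sketch of what that looks like (the exact target triple spelling varies between Zig versions, e.g. older releases used `i386-` instead of `x86-`, so treat these as illustrative):

                    ```shell
                    # Build a Zig program for 32-bit x86 Linux from any host
                    zig build-exe main.zig -target x86-linux-gnu

                    # The bundled clang can cross-compile C code the same way
                    zig cc -target x86-linux-gnu -o app main.c
                    ```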

                  2. 1

                    Separate compilation doesn’t get you a whole lot - with the standard one-to-one mapping of TU to process, you end up using more memory concurrently than a single process would.

                    The reality is that a lot of compile-time performance is achieved by trading off against memory use (that’s how TU-per-process works), and a lot of the more advanced optimization algorithms for runtime have very large working sets - which you can carefully reduce the size of, but frequently at the cost of compile time again.

                    For many compiler devs the implementation complexity required for fast compilation in a 32-bit address space is not worth it on its own, let alone the opportunity cost of doing that instead of something else.

                    1. 1

                      Separate compilation doesn’t get you a whole lot

                      Is this the confirmation that Zig doesn’t support separate compilation?

                      the implementation complexity required for fast compilation in a 32bit address space is not worth it on its own

                      I can easily compile the Linaro ARM GCC, which is even bigger than Zig’s 200 kLOC, on an x86 Linux machine, and it doesn’t use more than a few hundred MB of RAM to work. Zig is supposed to be a “better C”, isn’t it?

                      1. 2

                        is this confirmation…

                        No idea. I don’t like Zig, as I disagree with a bunch of their core language design decisions, so I haven’t investigated any implementation details :)

                        I can compile…

                        Again, no idea about the Zig compiler, but all 32-bit compilers explicitly drop a bunch of compile-time performance optimizations due to address-space constraints. The extreme case is compilers from a few decades ago that did essentially line-by-line compilation because anything more burned too much RAM - it’s why [Obj-]C[++] all require forward decls (though in C++ templates also don’t help :) )
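
                        The forward-decl point is easy to see in plain C (toy example, names are mine, not from any real compiler):

                        ```c
                        #include <stdio.h>

                        /* A single-pass compiler reads top to bottom, so `main` is
                         * compiled before it has ever seen `square`. This prototype
                         * is what lets it emit the call without knowing the body;
                         * without it, modern C compilers reject the call outright. */
                        int square(int x);

                        int main(void) {
                            printf("%d\n", square(7));
                            return 0;
                        }

                        /* The definition can then live anywhere later in the TU. */
                        int square(int x) {
                            return x * x;
                        }
                        ```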

                        1. 3

                          Here is some x86 and x86_64 data with compiler versions between 2013 and 2019: https://lobste.rs/s/csax21/zig_is_self_hosted_now_what_s_next#c_2lc2dk. Much less than the available memory is used, even on 64 bit systems.

                          1. 2

                            Thanks for all that work!

                            My opinion is very much that, as a more recent language, Zig took the reasonable approach of using standard data types and algorithms, rather than the custom-everything approach that llvm, clang, gcc, etc. take.

                            Because of your work I’m now curious just how much memory is saved in llvm+clang (I know next to nothing about the gcc code base) by the large amount of effort put into keeping memory use down (llvm originated in an era with much less RAM, clang somewhat less so; gcc is literally decades older, so presumably it also does fairly complicated things to keep sizes down).

                            But the big thing that’s different is that the older compilers made a lot of core architectural decisions in earlier times that result in much more complicated data types than I suspect the Zig implementation has. There are numerous different versions of the common core data types in llvm specifically to keep memory use down; other things, like how the IR, ASTs, etc. get kept around, make the types themselves obnoxious, but also impact the architecture, since removing info from some types means you need to have fast ways to get that info again.

                            Why would a new language take on that complexity - especially if it’s trying to be welcoming to new devs? (Imagine if you were introduced to C or C++ by some absurd macro & template monstrosity instead of clean, easy-to-comprehend code.)

            2. 2

              The presentation you linked mentions compiling a 100 kLOC program with less than 1 GB of RAM. If Zig is about 150 kLOC (as I think I saw elsewhere in the thread), then I think it supports Rochus’s point that 9 GB was pretty heavy.

              Of course, the presentation is from 2006, and the MLton authors were probably more concerned with 32-bit builds, so they might have made more of an effort to keep memory use low. (Which is to say it was different because things were different. Woo-hoo!)