1. 4

    const maxInt = std.math.maxInt;

    Just curious, is this considered idiomatic Zig? It looks similar to a construct in other languages, where a shorter alias is introduced for a commonly used function. However, that’s not the case here: it’s only used once. Is that just the way import notation works, or is it superfluous and could it be replaced with const max_stack_size = std.math.maxInt(u8);?

    1. 7

      It’s superfluous and could be replaced with const max_stack_size = std.math.maxInt(u8);. It probably ended up that way because the code used to be @maxValue(u8) (a builtin function); however, that builtin was removed from the language in favor of the std lib function std.math.maxInt. The simplest way to update all the std lib code was to put const maxInt = std.math.maxInt; at the top and replace @maxValue(u8) with maxInt(u8).

      1. 5

        The design of Zig is such that types, functions, modules and constants are essentially all considered first-class values. This is just the idiomatic way to do aliases as such, and whether you do this is a matter of personal preference. That said, it seems to be pretty common for Zig code I’ve seen not to use qualified lookups - maybe just to give it a more ‘C-like’ feel?

      1. 2

        I wonder if there’s a place for a language that is much closer to C++ but with Rust’s approach to memory management, to make porting easier.

        1. 4

          Yeah I wonder about this too. It seems like ownership is here to stay, with:

          It seems like D and Nim have closer C interop than Swift or Rust, although I don’t have direct experience with them.

          One thing I like about Zig is it integrates with C almost as well as C++ does (by including libclang in the compiler):


          So maybe someone can think of a way to bolt it on to Zig (optionally?), although it seems pretty far from its philosophy.

          The problem I see is that ownership is a global property like other kinds of type safety. I think you need a language that also easily supports isolated processes and message passing to have a migration path (and I agree that the migration path in Rust is asking a lot).

          That is, basically follow the approach in:

          Some thoughts on security after ten years of qmail 1.0

          with some parts of the codebase in legacy languages and some in a safe language like Rust.

          1. 3

            Safety is on my radar. I’m on board with the “safer unsafe” thing, for example.

            I’m also planning on doing some research into making Zig safer. I have a plan for the following kinds of unsafety:

            • Use-after-free
            • Invalid pointer cast

            Some stuff is already solved (has safety protections), such as:

            • Wrong union field access
            • Double await / invalid resume
            • Integer arithmetic overflow
            • more

            The goal is to end up with a short list of undefined behavior that is not safety-protected.

            All that said, the caveat is that safety protections can be entirely disabled in any given scope. I would expect most projects to keep the safety on, identify bottlenecks, and then disable safety in the bottlenecks only.

            Applications that could ship with all safety off would be:

            • Shrink-wrapped video games
            • Well-tested & fuzzed embedded device code, that needs to have small binary size & needs to max out the hardware performance
            • Software that is going to run in a sandbox anyway, for example the WebAssembly target
          2. 4


            I’m skeptical there is a better arrangement of these things, but I invite anyone who wants to help to experiment with finding a better way! Just know the road has been paved with a lot of failed attempts.

            I became a Rust booster not because it’s the perfect language, or even the one I love the most, but because I believe it does the best job balancing safety and practicality.

            Or for the Stargate fans: “many have said that, but you are the first I believe could actually do it”.

            1. 3

              There are many things Rust could do to ease porting from C++. For example, C++ has function overloading and Rust doesn’t. Entirely trivial (just use different names for different overloads), but also annoying when you are trying to do 1:1 port.

              1. 1

                Unfortunately, overloading and type inference make an explosive combination. Rust at best could allow overloading by number of arguments (equivalent to optional arguments).

              2. 2

                And the community, which went viral or something like it. It’s rare to achieve that kind of momentum in a language, especially sustained momentum. A new language probably won’t re-create it, just based on the odds. Better to build on what has momentum.

                1. 2


                  I keep trying to use Rust for random projects and hitting walls. I’ll keep trying to find something I like it for. Chances are I’ll end up doing some of it at work, since more and more of the components of the day job are moving to it.

                2. 3

                  I would love to have slices in C! Just this one thing alone would improve C safety, and help automatic porting of C to Rust.

                  But given that C99 tried to specify array sizes (with the very awkward syntax void f(int size, int arr[static size])), and after 20 years nobody uses it and compilers don’t do anything useful with it, I don’t have any hope for anything like this catching on and making a dent.
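For contrast, here is a minimal sketch in Go (where slices are built in) of what a fat-pointer slice buys you over a raw C pointer: the length travels with the pointer, so every index is bounds-checked at run time.

```go
package main

import "fmt"

// sum iterates a slice; the slice header carries the length,
// so the loop never runs past the underlying array.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	buf := []int{1, 2, 3}
	fmt.Println(sum(buf)) // 6

	// An out-of-range index is caught at run time instead of
	// silently reading adjacent memory, as a raw C pointer would.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("caught:", r)
		}
	}()
	i := len(buf) // one past the end, computed at run time
	_ = buf[i]    // panics; recovered above
}
```

A slice in C would amount to passing this (pointer, length) pair as one value, which is exactly what the bounds check needs.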

                  But there’s a proposal for lifetime annotations in C++: https://herbsutter.com/2018/09/20/lifetime-profile-v1-0-posted/

                  1. 2

                    Check out D. One of its main developers wrote C++ compilers, knew its pain points all too well, and wanted C++‘s power without all its problems. It’s GC by default, with no-GC allowed for performance. It compiles really fast. They’ve recently been looking at how to merge Rust-like safety into it.

                    Edit: I don’t know if they’re close enough for porting or anything. It’s just one of the best C++ alternatives I know.

                  1. 12

                    What’s the hot take? This is just saying it’s good

                    1. 13

                      I think it was a big mistake not accepting the try proposal. I was actually thinking to myself, man, Go is catching up to Zig’s error handling abilities, this is closing the gap. But now I’m raising an eyebrow and wondering what they’re thinking. The reasoning for closing the proposal seems to be “we didn’t explain it well enough”. That’s kinda odd.

                      1. 4

                        I was also surprised, and kinda pity this decision, from the practical point of view of a professional Go developer.

                        That said, I do kinda see some arguments that could be what made them pause.

                        Or at least personally to me, I think the thing that made me the most uneasy about this proposal, was that it basically encouraged a significantly different approach to error handling than what was “officially” considered idiomatic in Go beforehand. Notably, if your code was written in the “idiomatic way”, you wouldn’t really be able to change it to anything better with try. And what I mean by “idiomatic” here, is adding some extra info/context to the error message, and maybe also doing some extra logic in the “if err” block. As opposed to a trivial if err != nil { return err }, which is not considered officially idiomatic in Go - though quite commonly found in real life code (esp. with helper libs like log15.v2 etc.).
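To make the distinction concrete, here is a minimal Go sketch of the two styles being contrasted (write is a hypothetical stand-in for some failing I/O call):

```go
package main

import (
	"errors"
	"fmt"
)

// write is a hypothetical stand-in for some failing I/O call.
func write(path string) error {
	return errors.New("device not ready")
}

// saveBare passes the error up unchanged: the trivial
// `if err != nil { return err }` style.
func saveBare(path string) error {
	if err := write(path); err != nil {
		return err
	}
	return nil
}

// saveAnnotated adds context before returning: the style
// considered officially idiomatic.
func saveAnnotated(path string) error {
	if err := write(path); err != nil {
		return fmt.Errorf("cannot write %s: %v", path, err)
	}
	return nil
}

func main() {
	fmt.Println(saveBare("/mnt/pendrive/foo/bar"))      // device not ready
	fmt.Println(saveAnnotated("/mnt/pendrive/foo/bar")) // cannot write /mnt/pendrive/foo/bar: device not ready
}
```

try would shorten the first form but offers nothing for the second, which is the pattern the bulk of idiomatic Go code uses.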

                        This is a somewhat subtle thing, and I’m not sure if I managed to explain it well enough (I can try giving more examples if you’d like, maybe in priv if you prefer). I’m not even 100% sure if that’s their actual main reason, but personally to me, that was the one thing that made me feel not 100% good about this proposal from the beginning. And kinda helps me rationalize the rejection, although from short-sighted immediate perspective I’d much prefer to have the try.

                        1. 3

                          Thanks for this explanation! This made a lot more sense to me than the official explanation on the issue tracker.

                          As a counter-argument (not to you but for the proposal in general) try allows the language to automatically add context to an error as it bubbles up the stack. For example, it could add the source file, line, column, function name where the try occurred. Once the error reaches a point where the application reports the error or decides to panic because of it, the resulting data attached to the error explains exactly what the developer needs to know about what happened.
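For illustration, here is a hedged Go sketch of what such automatic context could look like if done by hand with runtime.Caller (the helper name located is made up):

```go
package main

import (
	"fmt"
	"runtime"
)

// located is a hypothetical helper sketching what an automatic `try`
// could record: the file, line, and function of the return site.
func located(err error) error {
	if err == nil {
		return nil
	}
	pc, file, line, ok := runtime.Caller(1) // 1 = our caller's frame
	if !ok {
		return err
	}
	name := "?"
	if fn := runtime.FuncForPC(pc); fn != nil {
		name = fn.Name()
	}
	return fmt.Errorf("%s:%d (%s): %v", file, line, name, err)
}

func readConfig() error {
	// With a try-like construct this wrapping could happen implicitly
	// at every propagating return.
	return located(fmt.Errorf("config missing"))
}

func main() {
	fmt.Println(readConfig())
}
```

A language-level try could do this at every propagation point with zero syntax, which is what error return tracing automates.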

                          In Zig this is called error return tracing and I’m getting really positive feedback about it.

                          1. 3

                            Yep, you’re technically not wrong (regarding the counter-argument). The thing is, stack traces are somewhat controversial in the Go community.

                            I mean, it’s technically easy to attach a stack trace to an error in Go when it’s created, if that’s what you want (my last employer’s codebase works that way). If you mentally take a step back however, there are some interesting issues with stack traces, and especially if you try to use only stack traces when dumping error messages:

                            • stack traces are tightly coupled to a specific version of a codebase; a very next commit may make line numbers in your stack trace invalid/misleading; thus, you must track the codebase of a binary very well; as a corollary, stack traces mean little when analyzed in isolation, without source code.
                            • pure stack traces will still lack context that may be important at intermediate steps/levels of the stack (e.g. values of some local variables that may help when debugging, or may otherwise shed more light on what was the meaning of the call in a particular frame).
                            • also, stack traces tend to be noisy, making it somewhat tedious to find actually important information in them.
                            • stack traces are arguably developer friendly, but not very end-user friendly.

                            In contrast, an error message “officially” seen as “idiomatic” by the Go “fathers” could look kinda like below, when emitted from a program (with no formal stack trace):

                            error: backup of /home/akavel to /mnt/pendrive failed: cannot write /mnt/pendrive/foo/bar: device not ready

                            With some care, such an error message can be short, informative, give some potentially important extra context (e.g. the /home/akavel path as the source of the backup), be time-proof, self-contained, arguably more end-user friendly, and still usually make it possible to trace a concrete call stack in the codebase that emitted the error (though with some extra work compared to raw stack traces).

                            I don’t claim they are perfect, or that they are strictly better than stack traces. But I do understand their advantages and find this dilemma quite interesting and challenging (no clear winner to me). Also, it’s worth noting that this area is now being further explored by the Go 2 proposals, esp. the “error values” one, with regard to how the ergonomics here could be further improved, both for error message writers and readers.
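For the curious, a message in exactly that layered shape can be built by wrapping at each call level; the paths and names below come from the example message, and writeFile/backup are hypothetical stand-ins:

```go
package main

import "fmt"

// writeFile is a hypothetical lowest layer; it reports its own
// context (the path) plus the underlying failure.
func writeFile(path string) error {
	return fmt.Errorf("cannot write %s: %v", path, fmt.Errorf("device not ready"))
}

// backup adds one more layer of context on the way up the stack.
func backup(src, dst string) error {
	if err := writeFile(dst + "/foo/bar"); err != nil {
		return fmt.Errorf("backup of %s to %s failed: %v", src, dst, err)
	}
	return nil
}

func main() {
	fmt.Println("error:", backup("/home/akavel", "/mnt/pendrive"))
	// error: backup of /home/akavel to /mnt/pendrive failed: cannot write /mnt/pendrive/foo/bar: device not ready
}
```

Each frame contributes one human-written clause, so the final message reads as a sentence rather than a list of program counters.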

                            1. 2

                              That’s a fair point about different versions of code with regards to stack traces.

                              And I do like your example quite a bit. That really is an amazingly helpful error message, both to end users and to developers. If Zig had the ability to do hidden allocations like Go does I would be all over this.

                              Even without that though, maybe there is a way… perhaps with something like this proposal.

                              1. 3

                                If you haven’t yet, please try and take a look at what is explored in the relevant Go 2 design draft (https://go.googlesource.com/proposal/+/master/design/go2draft-error-values-overview.md). Even if you don’t understand the whole context, I think there are some potentially interesting thoughts and considerations there. Please note also this is a very early stage “thought experiment”/exploration, that is not even at a proposal/RFC stage yet (or rather, it is something that could match the “request for comments” phrase if it was treated literally and without historical baggage).

                                As to the Zig proposal you linked, one thing that’s sorely missing for me there in order to fully understand it, is what kind of error messages/traces this could enable. I don’t see any example output there. Would it allow printing extra information as part of Zig’s “error return tracing”? Or let the programmer build error messages like what I’ve shown? I don’t know Zig enough to understand the unstated consequences. So, we’re now in a reversed situation, where previously I explained to you the unstated Go context that you didn’t have, and now it’s me who doesn’t have Zig context ;)

                              2. 2

                                I actually don’t find the ‘idiomatic Go error message’ example very helpful. I can see why the error happened, but what can I do to fix it, and where can I do that? These are really the kinds of questions stack traces answer. I don’t find line numbers shifting over time to be a very compelling argument against them. Stack traces are meant to be used to jump to the exact lines the error travelled through. Typically when you’re debugging, you already know the specific commit you’re debugging: some commit SHA of a deployment. So you would already have that commit checked out, and can trace the error with accurate line numbers.

                                1. 1

                                  In this particular example case, the answer to your question (what, where) would be something like: “Insert the backup pendrive back into the USB port”. That’s not something one could fix in code in any way, so actually, stack traces are of completely no use here! (Ok, one could maybe make the code ignore the error and go on, but anyway, the message would still have to land in logs.)

                                  Other than that, as I said, the “idiomatic” errors are not perfect and have disadvantages vs. stack traces, the main (only?) important one being how super easy and powerful it is to jump through code when you do have the call stack with line numbers and do know the commit SHA. And please note that the Go 2 design drafts do try to explore if and how the advantages of both approaches could be fused together.

                              3. 2

                                try allows the language to automatically add context to an error as it bubbles up the stack

                                Nothing in the proposal mentioned anything like this, and if you mean that users could combine try with deferred functions that annotated all errors returned in the function scope the same way, well, (a) that was already possible, and (b) it’s significantly worse than doing individual in-situ annotations, because (i) it physically separates error generating statements from the code that handles them, and (ii) it forces all errors that escape a function to be annotated the same way.
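A tiny Go sketch of the deferred-annotation pattern being described, showing drawback (ii): every error that escapes the function gets the identical prefix (fetch and parse are hypothetical stand-ins):

```go
package main

import "fmt"

// fetch and parse are hypothetical stand-ins; only fetch fails here.
func fetch(name string) error { return fmt.Errorf("connection refused") }
func parse(name string) error { return nil }

// loadProfile uses the deferred-annotation pattern: one deferred
// function decorates every escaping error, so a failure in fetch
// and a failure in parse would both carry the exact same
// "loading profile" prefix, with no call-site-specific context.
func loadProfile(name string) (err error) {
	defer func() {
		if err != nil {
			err = fmt.Errorf("loading profile for %s: %v", name, err)
		}
	}()
	if err := fetch(name); err != nil {
		return err
	}
	return parse(name)
}

func main() {
	fmt.Println(loadProfile("akavel"))
}
```

Note the pattern needs a named result parameter so the deferred closure can rewrite the return value; it works today, with or without try.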

                              4. 1

                                Like you, I’d prefer to have try immediately, but I agree it introduces a style which is not idiomatic.

                              5. 1

                                I also feel like they abandoned the proposal mostly to keep the peace in the community. That said, catch and try fit better in Zig because you have error traces. Go doesn’t have error traces, and this is why people insist on decorating errors, and why they dislike try, which only permits returning “naked” errors.

                              1. 3

                                I’d like to see a feature comparison of D and Zig.

                                1. 5

                                  I admire Walter Bright and the D language & community. I think the project sets a great example, and parts of Zig were inspired by or at least informed by the D language. That said, in the spirit of friendly competition, here’s one difference: how lean the binaries are:

                                  $ zig build-exe hello.zig --release-small --strip --single-threaded 
                                  $ strip ./hello
                                  $ ./hello
                                  Hello, World!
                                  $ ls -ahl ./hello
                                  -rwxr-xr-x 1 andy users 6.0K Jul  3 19:18 ./hello
                                  $ ldd ./hello
                                  	not a dynamic executable


                                  $ dmd hello.d -O -release -inline -boundscheck=off 
                                  $ strip hello
                                  $ ./hello
                                  Hello, World!
                                  $ ls -ahl hello
                                  -rwxr-xr-x 1 andy users 699K Jul  3 19:13 hello
                                  $ ldd ./hello
                                  	linux-vdso.so.1 (0x00007fffc8591000)
                                  	libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007ff03913a000)
                                  	libm.so.6 => /usr/lib/libm.so.6 (0x00007ff038fa4000)
                                  	librt.so.1 => /usr/lib/librt.so.1 (0x00007ff038f9a000)
                                  	libdl.so.2 => /usr/lib/libdl.so.2 (0x00007ff038f95000)
                                  	libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007ff038d7f000)
                                  	libc.so.6 => /usr/lib/libc.so.6 (0x00007ff038bc7000)
                                  	/usr/lib/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007ff03915d000)

                                  I’m sure there are plenty of things D could show off in this thread. And please let me know if there are other flags I could pass above.

                                  1. 3

                                    Using your example, I used D’s betterC mode


                                    dmd -betterC hello_betterc.d

                                     Getting an 8.2K executable:

                                     $ ls -ahl hello_betterc
                                     -rwxr-xr-x 1 v v 8.2K Jul 4 01:46 hello_betterc

                                     $ ./hello_betterc
                                     Hello betterC

                                     $ ldd hello_betterc
                                     	linux-vdso.so.1 (0x00007fff79d32000)
                                     	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc80ce40000)
                                     	/lib64/ld-linux-x86-64.so.2 (0x00007fc80d433000)

                                     $ cat ./hello_betterc.d
                                     extern(C) void main()
                                     {
                                         import core.stdc.stdio : printf;
                                         printf("Hello betterC\n");
                                     }

                                     $ dmd --version
                                     DMD64 D Compiler v2.086.1-beta.1

                                    (sorry, had to make several formatting updates to the post)

                                    1. 2

                                      Ah! Thank you for sharing this. I tried to use -betterC but I didn’t know how to get past the compile errors from my hello world code:

                                       import std.stdio;
                                       void main()
                                       {
                                           writeln("Hello, World!");
                                       }

                                      which gave:

                                      /usr/include/dmd/std/stdio.d(3806): Error: template std.stdio.File.LockingTextWriter.put cannot deduce function from argument types !()(string), candidates are:
                                      /usr/include/dmd/std/stdio.d(2865):        std.stdio.File.LockingTextWriter.put(A)(scope A writeme) if ((isSomeChar!(Unqual!(ElementType!A)) || is(ElementType!A : const(ubyte))) && isInputRange!A && !isInfinite!A)
                                      /usr/include/dmd/std/stdio.d(2895):        std.stdio.File.LockingTextWriter.put(C)(scope C c) if (isSomeChar!C || is(C : const(ubyte)))
                                      hello.d(5): Error: template instance `std.stdio.writeln!string` error instantiating

                                      I see now that the trick was to use core.stdc.stdio.printf.

                                  2. 3

                                     I have only read about Zig, but they feel very different to me on a high level.

                                    Zig wants to be a simple language: Few features that can be combined in powerful ways.

                                    D wants to be a comprehensive language: Many features so wherever you go you have the right tool for the job.

                                  1. 7

                                    As the developer of a (formerly) single-letter-named language myself, it’s tempting to believe that we’ve created something groundbreaking and fresh. But it shouldn’t come as a surprise that there’s decades-old research into the topic of this language’s forte.

                                    The silver lining is that it’s possible to find informative educational content if you know where to look — such as this video, which sheds some light on the internal workings of the compiler: https://www.youtube.com/watch?v=Paj43J0pi4s

                                    1. 2

                                      underrated post

                                      1. 1

                                        there’s decades-old research into the topic of this language’s forte.

                                        That would be a surprise to me, considering the nature of the language:

                                         It is a single-paradigm, multi-tenant friendly, Turing-incomplete programming language that does nothing but print one of two things:

                                        • the letter h
                                        • a single quote (the Lojbanic “h”)
                                        1. 1

                                          Apparently you didn’t watch the video.

                                          1. 4

                                            Ah, got me. I actually clicked the link, but iterated through my tabs backwards and by the time I got back to it I was completely confused how I had gotten there. But I have small children so I didn’t question it too much before closing it :p

                                      1. 1

                                        From the reasoning outlined, the author points to no-GC-needed and direct C compatibility as key drivers for Kitlang (or at least that’s what I understood).

                                         Would be interested to know why other languages that have optional GC and zero-friction interoperability with C did not fit (e.g. BetterC (a subset of D), Zig, and Nim with its realtime GC or GC off).

                                         Overall though, for my own education, seeing a general-purpose language built by one person in less than a few years, and learning from it, is very useful, and inspiring too.

                                        1. 3

                                          This page has C, C++, Rust, Zig, and Haxe: https://www.kitlang.org/comparisons.html

                                        1. 1

                                          Any game demos yet? I’m excited to see what people come up with.

                                          1. 41

                                            Am I the only one shocked by the poverty wages paid in open source? I make more a day than that project makes a month and not by a small margin.

                                            The current open source licenses have failed us completely when middlemen make billions while coders make less than minimum wage.

                                            1. 11

                                              Am I the only one shocked by the poverty wages paid in open source? I make more a day than that project makes a month and not by a small margin.

                                              Probably because giving away something for free and then holding out your hat afterward in expectation of payment is a shitty business model. It barely works for some musicians, and it doesn’t work at all as a faceless GitHub account on the Internet.

                                              If you want to make money doing what you love to do, you have to create a workable business model around it. And the thing about businesses is that sometimes they work and sometimes they don’t. Just because you create a thing doesn’t mean you are suddenly entitled to receive a profit from it. The thing you make or do needs to have value to others, you have to prove that value to them, you need a system for delivering that value, and then make it easy for them to pay you for it. There are plenty of companies doing well enough with their own spin on a combination of open source with commercial opportunity. Like any other business, it’s not easy and many fail. But it is possible.

                                              The current open source licenses have failed us completely

                                              Incorrect, they work exactly as they were intended to work. The majority of open source software is given away with no expectation of anything tangible in return. Open source software (generally) gets written because someone had an itch to scratch and wanted to share the results of their work with a community. The authors may have many motivations for giving away their code (altruism, recognition, social interaction) but none of them are to make a bunch of money.

                                              Finally, I have no opinion of the musl project but if they actually want donations, they’re doing a very good job of hiding that fact on their website.

                                              1. 4

                                                Really? He makes only slightly less than half of what I make (and it probably goes a lot further for him), and I consider myself well-compensated by the standards of the amount of work I put in (if not well-compensated by the standards of this industry, where dev wages are hugely inflated).

                                                1. 4

                                                  You are not the only one.

                                                  1. 3

                                                    The current open source licenses have failed us completely

                                                    Some licenses are way more permissive towards freeloading than others.

                                                    1. 1

                                                      That’s true, but there’s another twist to it, right?

                                                      Giving away the source code freely and then having freeloading (e.g., not pushing changes upstream or sharing source code they link with) services live behind an inscrutable wall (network service, usually) makes catching violations very difficult. At least with a binary you can decompile it and get an idea of what libraries were used; there is no such low-hanging fruit for web applications if even the slightest effort is put into it.

                                                      1. 0

                                                        Yes, SaaS makes violations difficult to catch. However, licensing can make such “dark patterns” too risky or too impractical to use at a large scale.

                                                        Unfortunately, a critical mass of license adoption is needed for this to work. If 90% of software stays under very permissive licenses, freeloading will not stop.

                                                    2. 1

                                                      It depends on purchasing power, right?

                                                      There are countries even in continental Europe where a small Patreon campaign can match a lawyer’s salary.

                                                      Also, a reason not to donate to larger projects through Patreon is that it’s impossible for the project management to accommodate everyone. People without that much disposable income, who would prefer to pay for specific features, even if those are on the roadmap but not a priority, may choose to keep their money instead.

                                                      Or buy into a more commercial solution to get what they need without the open-source politics.

                                                      1. -5

                                                        A single lawyer can’t save a company several million in opex a month. I have.

                                                        I don’t quite understand why the go-to examples people use when trying to explain why I should be getting paid less are jobs with no inherent ability to scale, be it lawyer, doctor or building architect, whose only reason for being highly paid is a cartel keeping wages inflated. Here’s hoping we import enough Cuban doctors or Latvian lawyers that their wages reflect the difficulty of their job and the demand for it.

                                                        1. 24

                                                          Please don’t dismiss other people’s work just because you don’t understand what they do. Of course a single lawyer can save a company millions in operating expenses per month, and their profession has been doing it for far longer than we have.

                                                          1. -5

                                                            [[citation needed]]

                                                            I have seen teams of lawyers cost the opposing side tens of millions pretty easily; I have never seen them save money inside a company that wasn’t being sued. In short, a zero-sum profession, with a high bar to entry and a marvelously developed class consciousness. Good job if you can get it. I only wish developers could develop that sense too, because we add actual value in the trillions.

                                                            1. 13

                                                              I have seen teams lawyers cost the opposing side tens of millions pretty easily, I have never seen them save money inside a company that wasn’t being sued.

                                                              There’s a good reason you might not see it. By having proper policies and procedures in place to comply with the law, lawyers can save a company money by understanding the law and ensuring that they don’t get fined or sued. For example, breaking wage and hour laws in New York can be very expensive. One of the fines listed is $50 per employee per day. There are like 65,000 fast food workers alone in NYC. If all of the companies failed to comply, that would total over $3 mil / day in just fines. That’s before all the lawsuits that would probably also show up.

                                                              Also interesting, if the guys racking up tens of millions lose their case, they might end up paying those tens of millions back to the people they sued because of laws regarding recovery of attorney’s fees, or counter-suits.

                                                              In short, a zero-sum profession, with a high bar to entry and a marvelously developed class consciousness. Good job if you can get it. I only wish developers could develop that sense too, because we add actual value in the trillions.

                                                              I’m wary any time someone talks about how great software engineering is or developers are. Were the people who wrote the code to do spoofing and layering adding actual value? How about the engineers and developers behind the Clipper chip?

                                                              1. 4

                                                                It’s not always a zero-sum game; there are (unethical) agencies that make multiple millions of euros a year by sending cease-and-desist letters for alleged (and often false) copyright violations. Sadly, it took many years until this practice was prevented by the government in Germany, maybe because about half of the politicians are lawyers themselves.

                                                                Update: typo

                                                                1. 3

                                                                  You can bet Goodwill consulted a lawyer before implementing this cost-cutting strategy.

                                                              2. 8

                                                                A single lawyer can’t save a company several million in opex a month.

                                                                Not to take away from your point, but they absolutely can: M&A, restructuring, downsizing, RightSizing™. And so on. I personally know a lawyer whose sole job is to fly around BoA offices around the world shitcanning [redundant] people.

                                                                I don’t quite understand why the go to example of people trying to explain to me why I should be getting paid less…

                                                                Because people like free stuff and one obvious way they get free stuff is if you work for free.

                                                                However, if Zig is successful, Andrew will likely get hired at a big marketing company like Google or Facebook, where he’ll lead a charge to zig all the things with a nice fat salary doubled up with lots of RSUs. It’s a bold move, and not for everyone, but using an “open source career” to bootstrap an enterprise retirement is easy enough that (while the markets are good) people are doing it by accident.

                                                                1. 16

                                                                  That’s not my plan. I might consider working at a place like Mozilla but never Google or Facebook. I’m looking into starting my own non-profit company. Do you know how amazing it is to not have a manager?

                                                                  1. 7

                                                                    Can I suggest you start a for-profit company instead and make a nice life for yourself? There’s nothing unethical about charging customers money for the tools you build. It’s worked quite nicely for me and I hate to see fellow OSS enthusiasts scrape by and play down their own value to society.

                                                                    1. 6

                                                                      I appreciate that you’re looking out for my interests, but why not start a non-profit company and make a nice life for myself and others? Non-profits are allowed to charge customers money for tools. There’s nothing stopping me from having a nice salary in a non-profit.

                                                                      1. 4

                                                                        What is the benefit of a non-profit vs a privately owned company that can do what it wants? I suppose I can see why a programming language steward company might be a non profit.

                                                                        1. 3

                                                                          While @mperham is one of the best examples I know of for turning a profit and contributing to the community (maybe followed by Richard Hipp/sqlite), may I augment his suggestion with that of a Benefit corporation if that suits your priorities better?

                                                                          It seems to me that the bigger problem for you (vs @mperham) is that almost everybody expects language toolchains to be free at this point (there are some exceptions, but most of those seem like legacy / gigantic enterprise work).

                                                                          But either way, I hope to see you continue the great work!

                                                                      2. 2

                                                                        Reminds me of a quote from one of my favorite movies:

                                                                        Free winds and no tyranny for you? Freddie, sailor of the seas. You pay no rent. Free to go where you please. Then go. Go to that landless latitude, and good luck.

                                                                        For if you figure a way to live without serving a master, any master, then let the rest of us know, will you? For you’d be the first person in the history of the world.

                                                                        Lancaster Dodd

                                                                      3. 0

                                                                        Not to take away from your point, but they absolutely can: M&A, restructuring, downsizing, RightSizing™. And so on. I personally know a lawyer whose sole job is to fly around BoA offices around the world shitcanning [redundant] people.

                                                                        I said a single lawyer; any push like that would require a team of at least a dozen. On average, sure, a team of 50 lawyers can, for a small investment of 10 million, get you a couple of billion in ROI.

                                                                        1. 3

                                                                          I’ve personally seen a single lawyer acting as in-house counsel and compliance officer in a heavily regulated space save the company millions in potential fines, and tens (if not hundreds) of thousands in filing and process fees.

                                                                  1. 32

                                                                    This guy is a far better hype man than he is anything else.


                                                                    Try to use it yourself if you think it is good. Send me a message later if I was right or wrong.

                                                                    I didn’t say he was bad, actually I think he is a great hype man, one of the best in this space. The way he appeals to surface level curiosity while keeping facts just obscure enough to solicit donations[1] is a work of art. It is a valuable lesson to know how to manipulate people at a large scale. If he starts a marketing consultancy, I think he might do very well.

                                                                    To the author of V: I really want you to succeed; if everything promised worked as advertised, it would be a great thing for everyone.

                                                                    [1] https://www.patreon.com/vlang

                                                                    1. 3

                                                                      In fairness I assumed this was work done in spare time and not something being paid for with $800 monthly donations, maybe I should have done more research.

                                                                      1. 3

                                                                        I am a bit jealous, I don’t know how to start an OSS project with so many donations. Being liberal with what you promise seems to help.

                                                                        I hope it is a “fake it till you make it” more than a “cut and run”. Obviously, if everything promised worked as advertised, it would be a great thing for everyone.

                                                                        1. 3

                                                                          Rates vary and my city tends to be expensive, but 800/mo seems unlikely to fund more than a little hobby work a week. I suppose if you were already set up (no debts to service) you could make it work in a cheap area…

                                                                          1. 7

                                                                            I think lots of stuff like this is done for ego and personal satisfaction rather than the money. The money is icing on the cake. There’s another benefit that hyping projects might bring. So, the job interview goes like this:

                                                                            Interviewer: “Do you have any prior projects with the impact that justifies the high-paying position you’re applying for?”

                                                                            V developer: “I made a language that (big claims here). Easier than Rust. I open-sourced it on Github. As you see, it had four thousand stars in no time.”

                                                                            Interviewer: “Wow, that’s pretty impressive to pull that off and get so many users.”

                                                                            Yet, hardly anything was delivered. Add this project as another data point on why Github stars are meaningless to me. If anything, they make me more skeptical of a project.

                                                                            1. 3

                                                                              I never said it is livable, I just think that is enough incentive to exaggerate the truth.

                                                                        2. 3

                                                                          Are you aware your comment above, as it appears at the time of me writing this reply, is a raw and pure ad hominem, with absolutely no merit-based argument at all? Could you please try to provide some verifiable technical criticism instead?

                                                                          1. 27

                                                                            Ok, the language itself has more than 3800 stars on github, but most of the libraries in this repository are empty stubs:

                                                                            Let’s compare to zig, an equivalent project made by a serious person with many contributors and 3000 stars on github:

                                                                            The difference? The V guy is a better hype man. The Zig guy spent far more time working on things. I didn’t say he was bad, actually I think he is a great hype man.

                                                                            1. 25

                                                                              musl-libc is another example of a project that has very little hype, but represents a significant contribution to the open source community at large, with incredibly high quality standards. It’s been around for years, yet it makes less than V on Patreon.

                                                                              I honestly feel a little guilty about this. I’ve done some marketing to hype up Zig and so I’m making more than Rich does with musl, even though it has been around longer than Zig as well. Furthermore Zig ships with musl and a lot of the standard library is ported from musl, so the Zig project benefits in a huge way from musl. I’m donating $5/mo on Patreon to musl, but I dunno, it doesn’t seem fair to give that low amount. But I’m also not making a living wage yet, so… it also doesn’t feel right to just give a large portion of my income away.

                                                                              1. 6

                                                                                For now everyone should look after themselves and not feel guilty, but I do think collectively we need to do something to fix the broken nature of open source funding.

                                                                                1. 6

                                                                                  And create a funding model that isn’t contingent on HN hype/Github stars.

                                                                                  1. 2

                                                                                    This is tantamount to, “nullify the power social influence has over allocation of resources.” Not saying some progress can’t be made, but realize what you’re up against here.

                                                                                    There are some political ideologies that inadvertently end up trying to shift society towards social power being even more influential rather than less. Given how easy it is to monopolize celebrity, this is bad for solidarity & equality.

                                                                                2. 1

                                                                                  Thank you so much! That is exactly the kind of a comment I was hoping for, backed with some concrete references. This gives so much more substance and weight/value to me as a reader than the original one, significantly boosting the speed with which I can evaluate the subject. Thanks again!

                                                                                  1. -1

                                                                                    Language version 0.0.12, which has just been released, does not have very mature libraries yet. Developed by 1 person. Sorry about that.

                                                                                    1. 5

                                                                                      For all the hype you’ve been generating, I think people expected some kind of 1.0 release, or at least all the promised features to actually be implemented.

                                                                                  2. 9

                                                                                    From the patreon page:

                                                                                    C/C++ translation
                                                                                    V can translate your entire C/C++ project and offer you the safety, simplicity, and up to 200x compilation speed up.
                                                                                    Hot code reloading
                                                                                    Get your changes instantly without recompiling!
                                                                                    Since you also don't have to waste time to get to the state you are working on after every compilation, this can save a lot of precious minutes of your development time.

                                                                                    Maybe it will reach these goals one day, but it doesn’t do them now.

                                                                                    I will give you some time to find that in the repository and verify it.

                                                                                    1. 0


                                                                                      Full code next week, but you can already build a major file transpiled from C to V and replace the object file in the project.

                                                                                      1. 5

                                                                                        Is that C++ too? How about the hot code reloading? It isn’t a full project either, like you say.

                                                                                        1. 5

                                                                                          That’s one file though, not the entire game like you seem to be implying.

                                                                                    2. 1

                                                                                      I don’t know, man - just from a few moments’ glance through this it looks like a serious effort that probably took him a significant amount of time. I can’t imagine it would be so great to have invested so much time and then see people being so critical, especially in an unfounded way.

                                                                                      1. 22

                                                                                        There’s some context here: for the past few months he’s been making pretty extreme claims about V’s features, and attacking anyone who expressed any skepticism. He also refused to explain how any of it works, just saying we should wait until the open release. Well… it’s the open source release and it turns out the skeptics were right.

                                                                                        1. -1

                                                                                          I never attacked anyone. Do you have examples of my attacks or extreme claims?

                                                                                          1. 42

                                                                                            My examples will be released in 2 weeks.

                                                                                        2. 7

                                                                                          It isn’t unfounded, see comment above - and any time anyone posts anything, you can expect criticism. It happens all the time.

                                                                                          I didn’t say he was bad, actually I think he is a great hype man.

                                                                                          1. -1

                                                                                            I only receive constant criticism like from you in this thread :) I don’t bother to post criticism. Any examples?

                                                                                            1. 5

                                                                                              I’d like to note that, to me, most of your replies here just look like damage control. This kind of comment is usually not well-received in technical forums such as Lobste.rs. Don’t respond to every comment you don’t like - let your work speak for itself!

                                                                                        3. 1


                                                                                          Can you please point out specific things that don’t work as expected or things that I “hype” on the website?


                                                                                          1. 22
                                                                                            • hot code reloading
                                                                                            • C++ translation
                                                                                            • “a strong modularity” (this isn’t grammatically correct)
                                                                                            • compiles to native binaries without dependencies…except libcurl or other libraries you obviously depend on, not to mention a C compiler. That is a dependency whether you like it or not.
                                                                                            • you mention doom being translated to V, your repo only shows what looks like a single file being translated
                                                                                            • you give an example of a “powerful web framework” yet show no code to back up that claim
                                                                                            • you do concurrency in the most lazy and inefficient way possible and promise to have something that big companies struggle with by the end of this year
                                                                                            • you claim to be able to cross-compile…except from non-macOS to macOS
                                                                                            • from the code I read, it looks like you only support amd64 processors
                                                                                            • you claim no null yet have both perfect C interoperability and optionals. What happens when the C libraries the user wants to use require the use of null? What happens when the optional is not filled with the target data?

                                                                                            Need I go on?

                                                                                            1. 4

                                                                                              Can you substantiate your claim that

                                                                                              V compiles ≈1.2 million lines of code per second per CPU core

                                                                                              from the website?

                                                                                              1. 4

                                                                                                Generating 1.2 million lines of code with this shell snippet (the original pseudocode, made runnable):

                                                                                                echo "fn main() {" > 1point2mil.v
                                                                                                for i in $(seq 1200000); do echo "println('hello, world ')"; done >> 1point2mil.v
                                                                                                echo "}" >> 1point2mil.v

                                                                                                I got the following error:

                                                                                                $ time v 1point2mil.v
                                                                                                pass=2 fn=`main`
                                                                                                panic: 1point2mil.v:50003
                                                                                                more than 50 000 statements in function `main`
                                                                                                        2.43 real         2.13 user         0.15 sys

                                                                                                Note that this is 50,000-ish lines of code in TWO SECONDS.
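Taking those numbers at face value (roughly 50,000 statements before the panic, in about 2.1 s of user time), the observed rate is a long way from the claimed 1.2 million lines per second; a quick sketch:

```python
# Rough throughput comparison using the figures reported above.
lines = 50_000          # statements compiled before the panic
seconds = 2.1           # user time from the `time` output above
observed = lines / seconds
claimed = 1_200_000     # lines/sec claimed on the V website
print(f"observed: ~{observed:,.0f} lines/sec")               # ~23,810
print(f"claimed rate is ~{claimed / observed:.0f}x higher")  # ~50x
```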

                                                                                                1. 3

                                                                                                  I patched the compiler to remove the arbitrary 50,000-statement restriction (and also disabled calling out to the C compiler, so that it doesn’t artificially inflate compilation times), and I got these times:

                                                                                                  % time compiler/v test.v
                                                                                                  14.58user 0.60system 0:15.61elapsed 97%CPU (0avgtext+0avgdata 1311836maxresident)k
                                                                                                  0inputs+91504outputs (0major+316215minor)pagefaults 0swaps

                                                                                                  Edit: remembered another fun one: the generated C has a baseline of 69 warnings (I did not make that up), then one more for every string literal, since the type of `tos` uses `unsigned char *` instead of `char *`.

                                                                                                  Some other things I also noticed:

                                                                                                  • The 1000 byte global allocation at the start is used for format string literals suffixed with a !. If you generate a string that exceeds this length, the generated code aborts with an assertion failure from malloc().
                                                                                                  % cat test.v
                                                                                                  fn main() {
                                                                                                    x := 1
                                                                                                    println('$x [redacted repetitions]'!)
                                                                                                  }
                                                                                                  % v test.v; clang ~/.vlang/test.c -w; ./a.out
                                                                                                  malloc(): corrupted top size
                                                                                                  fish: “./a.out” terminated by signal SIGABRT (Abort)
                                                                                                  • V supports goto but doesn’t check whether labels are defined, so they crash in the C compiler;
                                                                                                  • V doesn’t ever convert its integer literals to integers internally; there’s a check for division by zero, but it’s comparing the literal to '0' - the string. So this won’t compile:
                                                                                                  fn main() {
                                                                                                    x := 1 / 0
                                                                                                  }

                                                                                                  But this does:

                                                                                                  fn main() {
                                                                                                    x := 1 / 00
                                                                                                  }
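The division-by-zero bullet describes a classic bug class: checking a literal’s source text rather than its parsed value. A hypothetical Python sketch of the difference (the function names are mine, not V’s):

```python
# Hypothetical sketch of the bug class described above: a compiler detecting
# division by zero by comparing the divisor's token text to "0".

def naive_div_by_zero(divisor_token: str) -> bool:
    # Buggy: only catches the exact spelling "0".
    return divisor_token == "0"

def correct_div_by_zero(divisor_token: str) -> bool:
    # Correct: parse the literal first, then compare numerically.
    return int(divisor_token) == 0

assert naive_div_by_zero("0")
assert not naive_div_by_zero("00")  # "00" slips past the text comparison
assert correct_div_by_zero("00")    # even though its value is still zero
```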
                                                                                            1. 1

                                                                                              And here it is for Rust (I didn’t do it, but here it is).

                                                                                            1. 10

                                                                                              This is noteworthy - according to the notes this is release-candidate-1. Congrats to the Nim team & contributors for getting to this point. I’ll be curious to find out how much the adoption rate of Nim changes once 1.0 is reached.

                                                                                              1. 14

                                                                                                Myself this week. I’ve only gotten 5 hours of sleep per night in the last week. It’s getting bad.

                                                                                                1. 7

                                                                                                  That sucks, enjoy catching up sleep.

                                                                                                  1. 1

                                                                                                    I absolutely plan to enjoy every moment of it.

                                                                                                  2. 4

                                                                                                    Update: I’m in the hospital :(

                                                                                                    EDIT: I was discharged and told to not do something again. I’m okay otherwise.

                                                                                                    1. 1

                                                                                                      Oh shit, what?

                                                                                                      1. 2

                                                                                                        Apparently my anxiety is that bad

                                                                                                        1. 3

                                                                                                          Sending my support. Anxiety is really tough - i’ve been there.

                                                                                                          1. 2

                                                                                                            That sucks, I hope you feel better soon.

                                                                                                            1. 2

                                                                                                              They gave me lorazepam, I feel peaceful but my body isn’t calming down yet

                                                                                                              1. 2

                                                                                                                Glad you got some treatment relatively quickly.

                                                                                                                I’ve made my rounds around here and I think the Jewish has a very well-oiled machine in emerg compared to the others. Wherever you are, it looks like it’s going better for you.

                                                                                                      2. 2

                                                                                                        Here’s some empathy and some advice. Feel free to take or leave either as you see fit.

                                                                                                        Damn I hope you feel better soon. I’ve been enjoying reading your blog posts and I selfishly want you to get your sleep so I can keep enjoying your work.

                                                                                                        Intense exercise works for me. When I physically exhaust my body, sleep comes more easily. Also it helps with my motivation and general sense of well-being.

                                                                                                        1. 2

Thanks! I think the lorazepam at the hospital helped the most.

                                                                                                        2. 2

Buff yourself, like in World of Warcraft. Life is just like that, just with different magic.

While you don’t have a potion of rejuvenation IRL, there are some nice buffs for sleep optimization - you could do any/all of Ubiquinol, Piracetam, Spirulina, Vitamin C, K2 MK-7, Retinol, ALCAR.

                                                                                                          Or you can be boring and sleep :)

                                                                                                          1. 3

                                                                                                            I generally try to avoid giving specific medical advice unless invited to, as there’s no way to know a person’s situation unless they volunteer it, and people with chronic health issues are generally well aware of the options that people are prone to suggesting. If I were asked to suggest medical treatment to look into for this scenario, it would most likely be treatment aimed at improving sleep duration and quality, as that’s generally probably the highest priority for a sustainable lifestyle.

                                                                                                            I am not trying to make you feel bad for weighing in - I can see that you’re trying to help - but I do think it’s important that things like this should be opt-in rather than opt-out, so I’m hoping you’ll reflect on it.

                                                                                                            1. 1

Recommending OTC supplements is in the domain of nutrition; it should not generally be considered ‘medical treatment’. The potential for harm is virtually non-existent here; the worst case I can think of is that nothing happens.

                                                                                                              1. 3

                                                                                                                My concern is more about the psychological harm that people often experience as a result of hearing the same advice over and over, to the point that it’s too exhausting and burdensome to even reply to it most of the time. Longer-term, it is often the case that the potential frustration of receiving unsolicited advice becomes an incentive to not talk about health issues at all. When nobody talks about health issues because of the potential reactions, existing societal stigmas are reinforced.

                                                                                                                1. 0

What concerns me is this self-appointed babysitting of grown-up people. Maybe that works where you work, but your reasoning here really concerns me. Furthermore, as far as I can see, our pony dude is more than capable of speaking for herself.

People should raise their own kids and do that stuff at home. Picking on random strangers on the internet and trying to re-educate them is not a virtue in my world.

                                                                                                                  1. 1

                                                                                                                    I offered advice that you can think about, or not, as you wish.

                                                                                                          2. 1

                                                                                                            Wednesday update: I had 6.5 hours of sleep and a happy dream instead of a horrific nightmare. Things are getting better.

                                                                                                          1. 5

                                                                                                            This version supports the prefers-color-scheme media query. I’m excited about this because it means people who prefer light and people who prefer dark can both get their way, with a CSS-only solution, and a setting at the browser scope (or possibly even system scope) rather than individual website settings.

                                                                                                            You can see it in action with the zig documentation

                                                                                                            1. 3

I like Nim. A big part of my opinion comes down to the onboarding process for new users. With Nim it’s dead simple. You download a zip:


                                                                                                              only 18 MB! Extract, create a file:

                                                                                                              echo "hello world"

                                                                                                              and compile:

                                                                                                              nim compile hello.nim

and that’s it. People like to tout Rust, but Rust can’t do this:


                                                                                                              1. 4

                                                                                                                I have deployed a stripped down version (only nim and nimble binaries, and the stdlib) of the static binary distribution for GNU Linux x64 for my team, and I believe it’s < 8MB.

                                                                                                                1. 2

                                                                                                                  This is really interesting.

                                                                                                                  andy@xps ~/d/n/nim-0.19.6> ls bin/
                                                                                                                  7z.dll                 libpng12.dll        nimgrep.exe     png.dll
                                                                                                                  7zG.exe                libpng3.dll         nimsuggest.exe  SDL2.dll
                                                                                                                  7-zip.dll              libssl-1_1.dll      pcre32.dll      SDL2_ttf.dll
                                                                                                                  c2nim.exe              libssl-1_1-x64.dll  pcre3.dll       sqlite3_32.dll
                                                                                                                  libcrypto-1_1.dll      libui.dll           pcre64.dll      sqlite3_64.dll
                                                                                                                  libcrypto-1_1-x64.dll  makelink.exe        pcre.dll        ssleay32.dll
                                                                                                                  libcurl.dll            nimble.exe          pdcurses32.dll  ssleay64.dll
                                                                                                                  libeay32.dll           nim.exe             pdcurses64.dll  vccexe.exe
                                                                                                                  libeay64.dll           nimgrab.exe         pdcurses.dll    zlib1.dll

In particular I see vccexe.exe and makelink.exe, which I am guessing are Microsoft Visual Studio’s C++ compiler and linker?

                                                                                                                  1. 4

From the Nim screencasts, the lead dev Araq uses Windows as his daily machine, which is a stark contrast to nearly every other language. It makes some sense that he makes it work really well there.

                                                                                                                    1. 2

                                                                                                                      This might also be the reason why Nim is really bad at dealing with symlinks.

                                                                                                                      1. 1

                                                                                                                        You should open an issue for whatever symlink issue you are seeing: https://github.com/nim-lang/Nim/issues.

While Araq uses Windows, there are a lot of stakeholders and other devs on GNU/Linux or macOS type systems (even RPi, etc.)

                                                                                                                        The dev team is very responsive in replying to/fixing the bug reports.

                                                                                                                          1. 1

                                                                                                                            I see .. that indeed is bad.

                                                                                                                            1. 1

Looks like it was closed without a reason - why didn’t you follow up?

                                                                                                                  1. 1

                                                                                                                    Great talk. I have a couple questions that weren’t answered in the Q&A and I don’t see answered on the Zig website.

Does the Zig frontend compile-to-C under the hood? This is something that irked me about some other newer modern languages in this same space like Nim, due to the inadvertent undefined behavior that it could create rather than going directly to IR (which to be clear could also create other UB, but at least it could be well defined for the language instead of being a bug introduced by the C conversion).

                                                                                                                    How is cross-compiling support for other OSes? In the shown slide, all of the targets were Linux on some arch and libc, but is it just as easy to cross-compile from one OS to another? Actually, this looks supported according to the website!

                                                                                                                    1. 11

Does the Zig frontend compile-to-C under the hood? This is something that irked me about some other newer modern languages in this same space like Nim, due to the inadvertent undefined behavior that it could create rather than going directly to IR (which to be clear could also create other UB, but at least it could be well defined for the language instead of being a bug introduced by the C conversion).

This is a fallacy: “compiling to X” doesn’t mean you inherit its flaws - that’s the whole point of compiling. UB is a worry when you’re writing C manually, but when compiling to C it’s only a problem if there is a bug in the Nim compiler.

                                                                                                                      Asm has an “unknown opcode exception”, C doesn’t. And C compiles to asm.

                                                                                                                      1. 6

                                                                                                                        Does Nim put checks in place for things like signed overflow? If so, how does it affect performance?

                                                                                                                        1. 1

                                                                                                                          Yeah, it does. Haven’t benchmarked it personally so not sure. This is of course all customisable, so if you’re feeling brave you can disable these checks.

                                                                                                                          1. 1

                                                                                                                            Thanks for the response, and doing overflow checks really does seem like the right thing. I remain curious about the effect on performance, as in my opinion this is a serious drawback of C as a compilation target.

                                                                                                                      2. 5

                                                                                                                        Does the Zig frontend compile-to-C under the hood?

                                                                                                                        LLVM-based (like Rust and Crystal), from the looks of it, though Zig’s written in itself now.

                                                                                                                        1. 21

                                                                                                                          though Zig’s written in itself now.

                                                                                                                          Clarification: the self-hosted compiler is not able to build anything beyond hello world yet. However the zig compiler that is shipped on ziglang.org/download is in fact a hybrid of C++ and Zig code. It’s actually pretty neat how it works:

                                                                                                                          1. Build all the compiler source into libstage1.a, and userland-shim.cpp into userland-shim.o.
                                                                                                                          2. Link libstage1.a and userland-shim.o into zig0.exe. This is the C++ compiler, but missing some features such as zig fmt, @cImport, and stack traces on assertion failures.
                                                                                                                          3. Use zig0 to build stage2.zig into libuserland.a.
                                                                                                                          4. Link libstage1.a and libuserland.a into zig.exe, which has features such as zig fmt, @cImport, and stack traces on assertion failures.

Think about how cool this is: in step 4, the exact same library file is linked against a self-hosted library rather than a C++ shim file, and so the re-linked binary gains extra powers!

                                                                                                                          1. 1

                                                                                                                            I love PLs but I’ve rarely sat down and done the work to piece through how magical this stuff is. Thanks!

                                                                                                                        2. 4

                                                                                                                          I’d say the Zig cross-compilation story is even better than Go’s. And that’s a really hard bar to meet.

                                                                                                                        1. 1

This seems like a blurb that reiterates the myth that garbage-collected languages are inherently bad for performance, wrapped around a paper that reinvents hardware-assisted garbage collection, an idea that has existed since the ’80s.

                                                                                                                          1. 16

                                                                                                                            The fact that an idea has existed since the ’80s should not be grounds for dismissal.

                                                                                                                            From the article, regarding prior work:

                                                                                                                            While the idea of a hardware-assisted GC is not new [5]–[8], none of these approaches have been widely adopted. We believe there are three reasons:

1. Moore’s Law: Most work on hardware-assisted GC was done in the 1990s and 2000s when Moore’s Law meant that next-generation general-purpose processors would typically outperform specialized chips for languages such as Java [9], even on the workloads they were designed for. This gave a substantial edge to non-specialized processors. However, with the end of Moore’s Law, there is now a renewed interest in accelerators for common workloads.
                                                                                                                            2. Server Setting: The workloads that would have benefitted the most from hardware-assisted garbage collection were server workloads with large heaps. These workloads typically run in data centers, which are cost-sensitive and, for a long time, were built from commodity components. This approach is changing, with an increasing amount of custom hardware in data centers, including custom silicon (such as Google’s Tensor Processing Unit [10]).
                                                                                                                            3. Invasiveness: Most hardware-assisted GC designs were invasive and required re-architecting the memory system [8], [11]. However, modern accelerators (such as those in mobile SoCs) are typically integrated as memory-mapped devices, without invasive changes.
                                                                                                                            1. 3

                                                                                                                              You’re right, I didn’t mean to dismiss the paper, just the article that makes it sound like a novel idea. On the contrary, I’m glad that HWAGC is getting attention at this age.

                                                                                                                            2. 3

                                                                                                                              Is it a myth? There seems to be a trend here that the non-garbage-collected languages are faster: https://benchmarksgame-team.pages.debian.net/benchmarksgame/which-programs-are-fast.html

                                                                                                                              1. 5

                                                                                                                                It’s got a bit of truth in it, in that most garbage collectors commonly used are mediocre at best and our modern processors are optimised towards dynamic allocation.

                                                                                                                                jwz has a good summary of the GC ordeal.

                                                                                                                                1. 3

                                                                                                                                  I don’t understand your citation. I have read that article, but what does it have to do with dynamic allocation?

                                                                                                                                  Although I generally like jwz’s writing, I’m not convinced by the second citation either, because it’s over 20 years old and basically saying that at that time the only good GCs were in Lisp implementations. Something that surveys the current state of the art would be more convincing.

                                                                                                                                  Personally I think GC’s are invaluable, but you can always do better by applying some application-specific knowledge to the problem of deallocation. There are also many common applications and common styles of programming that create large amounts of (arguably unnecessary) garbage.

                                                                                                                                2. 8

Garbage collected environments can—and often do—have higher overall throughput than manually managed memory environments. It’s the classic batch processing trade off: if you do a lot of work at once it’s more efficient, but it won’t have the best latency. In memory management terms, that means memory won’t be freed right away. So GCs need some memory overhead to continue allocating before they perform the next batch free.

                                                                                                                                  This is one of the reasons iPhones circa 2014 ran smoothly with 1GB RAM, but flagship Androids had 2-3GB. Nowadays both have 4GB, as the RAM needed to process input from high resolution cameras dwarfs the memory concerns of most applications on either phone.

Note that the latency from batching frees (i.e. garbage collection) doesn’t refer to GC pause time. So-called “stop the world” collection phases exist only in mediocre GC systems, like Java’s, because they’re relatively easy to implement. ZGC looks promising, 10+ year old GC technology is finally making it to the JVM!

                                                                                                                                  C/C++ also make it easier to control memory layout and locality, which massively improves performance in some cases. But as jwz points out in the post linked by @erkin, GC languages could provide similar tools to control allocations. No mainstream GC language does, probably since the industry already turns to C/C++ for that by default. Unity comes close, they made a compiler for a C# subset they call HPC#, which allows allocation and vectorization control. And I think D has some features for doing this kind of thing, but I wouldn’t call D mainstream.

                                                                                                                                  1. 5

                                                                                                                                    Chicken Scheme, uses stack for the ‘minor’ garbage collection.

                                                                                                                                    https://www.more-magic.net/posts/internals-gc.html “… This minor GC is where the novelty lies: objects are allocated on the stack. … “

Stack memory, by itself, is typically a contiguous block (a well-optimized part of virtual memory). I think the general trend in GC is to have ‘categories’ of memory allocation models, where optimization is done per category. The determination of categories is done at compile time, with the possibility of an allocation graduating (or moving) from one category to another (not using ‘categories’ in the algebraic sense here).

                                                                                                                                    1. 1

                                                                                                                                      I’m not sure I buy that argument, because GC inherently does more work than manual memory management. It has to walk the entire heap (or parts of it for generational GC), and that’s expensive. You don’t need to do that when manually managing memory. Huge heaps are still an “unsolved problem” in GC world, but they aren’t in manual world.

                                                                                                                                      I’m a fan of GC, but I think the problem is that you can only compare the two strategies with identical/similar big applications, because that’s where GC really shines. But such pairs of applications don’t exist AFAICT.

                                                                                                                                      1. 4

                                                                                                                                        It has to walk the entire heap

                                                                                                                                        No. There are many more interesting strategies than simple generational heaps.

                                                                                                                                        You don’t need to do that when manually managing memory

Instead you just call malloc and free all the time - functions that lock, can contend with other threads, and need to manipulate data structures. Compare that to alloc in a GC world: an atomic increment of an arena pointer.

                                                                                                                                        Huge heaps are still an “unsolved problem” in GC world

                                                                                                                                        Azul’s GC manages hundreds of GBs without any global pauses. Huge enough for you? ZGC uses similar strategies, copying many of those ideas that have been around since 2008 (to my knowledge, likely earlier).

                                                                                                                                        Unimplemented in common open source languages != unsolved.

                                                                                                                                        But such pairs of applications don’t exist AFAICT

                                                                                                                                        Compare IntelliJ and Qt Creator. Only it’s not a fair comparison, since IntelliJ runs on Oracle / OpenJDK Java, and that JVM still doesn’t have any good garbage collectors (ZGC is still in development / who knows if it will ever actually ship).

                                                                                                                                        P.S. I love all the work you’re doing on Oil, and your blog posts are fantastic!

                                                                                                                                        1. 4

                                                                                                                                          Eh your reply reads very defensively, like I’m attacking GC. In fact most of my posts on this thread are arguing for GC for lots of applications, and I plan to use it for Oil. (Thanks for the compliment on the blog posts. I’ve been on a hiatus trying to polish it up for general use!)

I’m just pointing out that GC has to do work proportional to the number of objects on the heap. What are the other strategies that don’t need to do this? Whether the GC is generational, incremental, or both, you still have to periodically do some work for every single heap object. I think the article points out that incremental GC increases the overhead in exchange for reducing pauses, and I saw a recent retrospective on the new Lua GC where they ran into that same issue in practice.

                                                                                                                                          (In Go, I think it’s a bit worse than that, because Go allows pointers to the middle of objects. Although Go also has escape analysis, so it’s probably a good engineering tradeoff, like many things Go.)

                                                                                                                                          I also think you’re comparing the most naive manual memory management with the most sophisticated GC. Allocators in the 90’s weren’t very aware of threads but now they certainly are (tcmalloc, jemalloc, etc.)

                                                                                                                                          Many (most?) large C applications use some kind of arena allocation for at least some of their data structures. For example, the Python interpreter does for its parse tree. It’s not just malloc/free everywhere, which is indeed slow.

                                                                                                                                          Anyway, my point is that the batch processing argument tells some of the story, but is missing big parts of it. It’s not convincing to me in general, despite my own preference for GC.

                                                                                                                                          Unsurprisingly, the comparison is very complex and very application-specific. It’s hard to make general arguments without respect to an application, but if you want to, you can also make them against GC because it fundamentally does more work!

                                                                                                                                          1. 5

                                                                                                                                            I don’t think you’re attacking GC, but I think you’re wrong about performance. Comparing jemalloc to G1GC, and similar generational collectors in common dynamic languages, you’re spot on. Generational collection shines in web application servers, which allocate loads of young garbage and throw it all away at the end of a request. But for long lived applications like IntelliJ it’s not so hot. Last I checked (which was a while ago) IntelliJ’s launcher forced the concurrent mark and sweep GC. Stuff like Cassandra and Spark are really poorly served by either of those GCs, since neither prevents long pauses when the heap is really large.

                                                                                                                                            Batching pretty much does cover the argument. Assume you have a page of memory with dozens of small object allocations. Which is faster, individually calling free on 90% of them, or doing a GC scan that wastes work on 10% of them? As long as you can densely pack allocations, GC does very well.

                                                                                                                                            Arenas are certainly a big win, but in many ways a page in the young generation is like an arena. Yes, manual memory management will always win if you perform all of your allocations in arenas, freeing them in groups as the application allows. Commercial databases do so as much as possible. gRPC supports per-request arenas for request and response protos, a clever answer to the young generation advantage for short stateless request / response cycles.

                                                                                                                                            I might be wrong, but I don’t think you’re considering that GCs can use MMU trickery just as much as anyone else. Suppose instead of periodically scanning the old generation, you occasionally set 2MB hugepages of old objects as no read no write. If it’s accessed, catch the segfault, enable the page again, and restart the access. If it’s not accessed after a while, scan it for GC. Having access statistics gives you a lot of extra information.

                                                                                                                                            Now imagine that instead of the random musings of some internet commenters, you have really clever people working full time on doing this sort of thing. There’s a whole world of complex strategies enabled by taking advantage of the memory virtualization meant to emulate a PDP-11 for naive programs. Normal incremental collection has more overhead because the GC has to redo more work. ZGC avoids lots of this because it maps 4 separate extents of virtual memory over the same single contiguous extent of physical RAM, and the GC swaps out pointers it’s looking at for different virtual addresses to the same physical address. Trap handlers then patch up access to objects the GC is moving.

                                                                                                                                            The whole “GC is linear wrt total heap size” conventional wisdom is a myth, perpetuated by mainstream GCs not doing any better.

                                                                                                                                            [GC] fundamentally does more work!

                                                                                                                                            It’s still a win for a GC if executing more instructions results in fewer memory accesses. CPUs are wicked fast, memory is slow. Back when I worked on a commercial database engine, loads of our code paths treated heap memory like most people treat disk.

                                                                                                                                            1. 1

                                                                                                                                              The whole “GC is linear wrt total heap size” conventional wisdom is a myth, perpetuated by mainstream GCs not doing any better.

                                                                                                                                              Which GCs avoid doing work linear in the # objects on the heap? Are you saying that ZGC and Azul avoid this?

                                                                                                                                              This isn’t something I’m familiar with, and would be interested in citations.

                                                                                                                                              I still think you’re comparing the most advanced possible GC with the status quo in manual memory management, or more likely lots of old C and C++ code. (There is apparently still work on manual memory management like [1].)

                                                                                                                                              The status quo in GC is basically what JVM, .NET and various dynamic language VMs have.

                                                                                                                                              But I’m interested to hear more about the advanced GCs. How do they avoid doing work for every object?

                                                                                                                                              The permanent generation is something I’ve seen in a few designs, and the idea makes sense, although I don’t know the details. I don’t think it’s free, because whenever you have generations, you have to solve the problem of pointers across them, e.g. with write barriers. Write barriers incur a cost that doesn’t exist in manual management schemes. How much I don’t know, but it’s not zero.

                                                                                                                                              I’d rather see concrete use cases and measurements, but if we’re making general non-quantitative arguments, there’s another one against GC.

                                                                                                                                              [1] https://news.ycombinator.com/item?id=19182779

                                                                                                                                              EDIT: To see another reason why the batch argument doesn’t work, consider Rust. Roughly speaking, the Rust compiler does static analysis and inserts deallocation at the “right” points.

                                                                                                                                              It does not “batch” the deallocation or allocation, as far as I know. Are you saying that Rust programs are slower than the equivalent garbage collected programs because of that? Maybe they would be if marking the heap were free? But it’s not free.

                                                                                                                                              Anyway, my goal isn’t to get into a long abstract argument. I’d be more interested in hearing about GC strategies to minimize the total overhead of scanning the heap.

                                                                                                                                              1. 2

                                                                                                                                                Which GCs avoid doing work linear in the # objects on the heap?

                                                                                                                                                There are lots of ways to skip work. As you said, the Go GC handles pointers to the middle of objects. Using similar principles a GC can handle pointers to a page of objects, and free the whole page together. You also mentioned Go’s escape analysis at compile time. Do the same escape analysis and count the number of escaped objects for an entire region of memory, dump it when it hits zero. Mark regions that contain external references: if a page never had objects with pointer fields, or if all the pointer fields reference within the page, why scan its pointers before releasing the memory?

                                                                                                                                                I still think you’re comparing the most advanced possible GC with the status quo in manual memory management, […] The status quo in GC is basically what JVM, .NET and various dynamic language VMs have.

                                                                                                                                                I’m refuting your claims that “GC inherently does more work than manual memory management” and that large heaps are an “unsolved problem.” Large heaps aren’t unsolved. GC doesn’t “inherently” do more work. And regardless, number of operations doesn’t equal efficiency.

                                                                                                                                                It does not “batch” the deallocation or allocation, as far as I know. Are you saying that Rust programs are slower than the equivalent garbage collected programs because of that?

                                                                                                                                                Of course Rust programs aren’t always slower. But garbage collected programs aren’t always slower either, that’s my entire point here.

                                                                                                                                          2. 3

                                                                                                                                            Instead you just call malloc and free all the time, functions that lock, can contend with other threads, and need to manipulate data structures. Compare to alloc in a GC world: atomic increment of arena pointer.

                                                                                                                                            I’m not an expert in GC theory but this looks like a bold statement to me. I think that efficient allocation schemes in a multithreaded environment are both doable and rather common, and I think that memory allocation in most GC implementations is far more expensive than a single atomic pointer increment. I totally understand that in some cases, state-of-the-art GC can match the performance of manual memory allocation, but I have yet to see a proof that GC is always better than no GC.

                                                                                                                                            1. 4

                                                                                                                                              I’m not saying GC is always better, just that it can be, and often is. Plenty of C/C++ code does ordinary per-object malloc and free, especially C++ when virtual classes come into play. For those applications I claim a sufficiently advanced GC would have higher throughput.

                                                                                                                                              As I discussed in my other comment, you can optimize manual memory code to only allocate into arenas, and free arenas at exactly the right time. Having domain knowledge about when an arena can be freed will certainly be better than the GC guessing and checking.

                                                                                                                                              allocation in most GC implementations is far more expensive than a single atomic pointer increment

                                                                                                                                              Correct. But I also claim most mainstream GC implementations are mediocre. If the environment doesn’t support object relocation by the GC, the allocator needs to fill gaps in a fragmented heap. When the GC can compact a fragmented heap and release large chunks of contiguous memory, the allocator barely has to do anything.

                                                                                                                                      2. 4

                                                                                                                                        The problem is that such benchmarks usually compare hand-tuned, optimized-to-death code examples, which are simply not representative of the way code is written in the wild.

                                                                                                                                        If you have unlimited time to tune and optimize, the fewer amenities the language/runtime has, the better, so non-GC-languages will always have an edge in this scenario.

                                                                                                                                        BUT: 99.9% of the time, this is not the scenario anyone is operating under.

                                                                                                                                        1. 2

                                                                                                                                          I don’t think it’s a myth either – the issue is more complex than that. It depends a lot on the application, etc.

                                                                                                                                          But I also don’t think the benchmarksgame is a strong indication either way, because those programs are all 10 or 100 lines long. You can always do better on small programs without garbage collection – i.e. by applying some problem-specific knowledge to the code.

                                                                                                                                          Large applications are where garbage collection shines, because I think the number/probability of memory bugs like use-after-free or memory leaks scales nonlinearly with application size. But that’s exactly where it’s hard to do any meaningful comparison, because we don’t have 2 independent implementations of large applications.

                                                                                                                                          My suspicion is that GC / manual memory management isn’t really a useful variable to separate good and bad performance. It’s kind of like how the language doesn’t have too much bearing on how good an application is. There are good and bad apps in C, in Java, in Python, in Lisp, etc. The difference between good and bad apps is very large, and whether they use garbage collection probably isn’t the most salient factor. You can be very sloppy about memory with GC, or you can be disciplined.

                                                                                                                                          Also, FWIW I have been making a list of C and C++ programs that use some kind of garbage collection / automatic memory management. So far I have

                                                                                                                                          If anyone knows of others, I’m interested! I’ve been thinking about garbage collection in Oil’s implementation (both for its own structures and user structures, but more of the former right now.)

                                                                                                                                          1. 3

                                                                                                                                            You are incorrect about that GCC link. One, it links to a very old version of the docs; more modern docs are here: https://gcc.gnu.org/onlinedocs/gcc-9.1.0/gcc/Garbage-Collection.html Two, it IS a feature for the programs it compiles, using the Objective C runtime.

                                                                                                                                            1. 2

                                                                                                                                              Hm it looks like I have the wrong link, but GCC does do some garbage collection?


                                                                                                                                              I’ve never worked on GCC, but I think I read a comment on Hacker News that said it used GC which led me to Google for it.

                                                                                                                                              Yeah I just checked the source and this looks like it’s in the compiler itself and not the runtime library:

                                                                                                                                              ~/src/languages/gcc-9.1.0/gcc$ wc -l ggc*
                                                                                                                                                1018 ggc-common.c
                                                                                                                                                 322 ggc.h
                                                                                                                                                 118 ggc-internal.h
                                                                                                                                                  74 ggc-none.c
                                                                                                                                                2647 ggc-page.c
                                                                                                                                                 525 ggc-tests.c
                                                                                                                                                4704 total

                                                                                                                                              This garbage-collecting allocator allocates objects on one of a set of pages. Each page can allocate objects of a single size only; available sizes are powers of two starting at four bytes. The size of an allocation request is rounded up to the next power of two (`order’), and satisfied from the appropriate page.

                                                                                                                                              And it looks like it’s used for a lot of core data structures:

                                                                                                                                              tree-phinodes.c:      phi = static_cast <gphi *> (ggc_internal_alloc (size));
                                                                                                                                              tree-ssanames.c:      ri = static_cast<range_info_def *> (ggc_internal_alloc (size));
                                                                                                                                              tree-ssanames.c:  new_range_info = static_cast<range_info_def *> (ggc_internal_alloc (size));
                                                                                                                                              tree-ssa-operands.c:      ptr = (ssa_operand_memory_d *) ggc_internal_alloc

                                                                                                                                              This isn’t surprising to me because I have found memory management for huge tree/graph data structures in compilers to be a big pain in the butt, and just this subdir of gcc is 1M+ lines of code. You really do want a garbage collector for that class of problems. However Clang/LLVM appears to use the C++ “ownership” style without GC. It seems very verbose though.

                                                                                                                                            2. 3

                                                                                                                                              If anyone knows of others,

                                                                                                                                              Some game engines have their own garbage collection. I did not use any myself, but check https://wiki.unrealengine.com/Garbage_Collection_%26_Dynamic_Memory_Allocation

                                                                                                                                              probably other C++ game engines offer something in this area too.

                                                                                                                                              1. 2

                                                                                                                                                Thanks that’s a great example! I found this link to be pretty good:


                                                                                                                                                I think the commonality is that games have huge graph data structures, just like compilers. On the other hand, scientific programs have more flat arrays. And web servers written in C++ have request-level parallelism / isolation, so memory management can be simpler. So yeah it depends on the app, and for certain apps garbage collection is a huge win, even in C++.

                                                                                                                                        1. 5

                                                                                                                                          @andrewrk Would it be a lot of work for you to add an android target to your awesome Zig cross-compilation machinery? Ideally, without requiring me to install the whole Android NDK? In my spare time, I’m working on a lean “toolchain” enabling building .apk files without having to install Android SDK nor NDK. Currently, I have working minimal prototypes of:

                                                                                                                                          Now I need something that could build JNI .so files for Android, that I could embed in .apk files. Currently I’m generally targeting Nim, but it requires installing Android NDK, and I haven’t tested yet if the Android target is still working in Nim. If I could build Android .so JNI files with Zig, it could sway me its way… at least for this stage of the effort… and, once I get it to work, I would be more than happy to do a writeup for the Zig community on how to build Android APKs this way, kinda in return!… *nudge, nudge, wink, wink* also much tempting, no? ;P Also, maybe I could then make Nim use the Zig’s embedded C compiler instead of the Nim’s default GCC/MinGW, for some kind of an unholy marriage of Nim+Zig?

                                                                                                                                          1. 3

                                                                                                                                            I’ll encourage you to check out DockCross. I was able to build Nim binaries for the android-arm64 target with minimal effort. Of course, it only has the r16 version of the NDK but maybe that’s a good thing? I don’t have any Android experience per se but maybe this helps.

                                                                                                                                            https://github.com/nim-lang/nightlies/blob/master/nightlies.sh#L136 https://github.com/nim-lang/nightlies/blob/master/dx.sh

                                                                                                                                            1. 2

                                                                                                                                              Thanks! DockCross may be interesting, though rather as an intermediate step, as I’d prefer to avoid having to install docker as well. I’m aiming to minimize the size and dependencies. Did you manage to build a .so library usable in an .apk? That’s what I’m interested in for JNI.

                                                                                                                                            2. 2

                                                                                                                                              This is definitely something I want to explore. I don’t have much experience with Android development so it would be a big time investment personally to look into this, but I would be happy to provide guidance to someone who wants to go down this path. If ability to cross compile for Android out-of-the-box added less than ~10MiB to the total download size of a given tarball on ziglang.org/download, I think that would definitely be worth it.

                                                                                                                                              As for Nim they should just do exactly what Zig does and use clang/llvm libraries instead of depending on a system C compiler installation. I don’t see any advantages to the way they have it set up.

                                                                                                                                              1. 1

                                                                                                                                                Hm; so, what would be the first steps I’d need to take, if I wanted to try exploring this path? Assuming I’m on Windows and have the Android NDK (or sources?) downloaded? I don’t have much experience with Android dev either, but willing to try and see how far I can get with some initial low hanging fruit(s).

                                                                                                                                                1. 1

                                                                                                                                                  The first step would be determining what is the ABI of those .so files. What functions do you need to export? What types do you need to know about? What are the files in the NDK and what are they used for?

                                                                                                                                                  What ABIs does native Android code have available to the system? Are they defined by the NDK? Are they stable? How often do they change?

                                                                                                                                                  In summary, the first step is figuring out, in low level details, how the whole thing works.

                                                                                                                                                  1. 1

                                                                                                                                                    As to the ABI (and syscalls, etc.), I’d imagine that’s probably encoded in some patchset they maintain over Clang, no? Would you mean having to reimplement the whole diff, either by rev. eng., or by finding the sources of the actual patchset online and just grabbing them? Plus understanding them. I’ll try to look around if the patchset is somewhere to be found, hopefully it could shed some light.

                                                                                                                                                    edit: FYI & FTR, this seems to be a potentially good docs starting point for further reading and exploration. Though still a bit too high-level. A bit more is here. Anyway, the sources seem to be available. That said, given that I don’t have enough knowledge about clang, I don’t currently see clearly any low hanging fruits for me here, unfortunately. (Hmm, unless maybe cloning this and clang and running a diff? Hmm, but for this I’d need a Linux box…) So, for the time being, I think I’ll back off from this avenue. But I’ll keep it in mind, maybe it’ll grow on me enough to reevaluate it one day. Thanks for the heads up!

                                                                                                                                                    edit 2: As of Android 7.0: Clang has been updated to 3.8svn (r243773, build 2481030). Note that this is now a nearly pure upstream clang.

edit 3: There’s some more info on how to retrieve the toolchain sources, but it seems to use some repo tool, and I’m currently not able to work out the actual underlying git repo & commits from it :( I believe the clang sources are at: https://android.googlesource.com/platform/external/clang; IIUC the branch upstream-master might be “vanilla clang”, and dev or master might be the one with patches applied. Not confirmed yet.

                                                                                                                                            1. 1

                                                                                                                                              Language-wise, why would one use Zig over C? From this page, it doesn’t seem like Zig is different from C outside of minor syntactic sugar.

                                                                                                                                              Toolchain-wise, it makes the case for why the zig command is useful, which makes sense to me, since this is an area where C is deficient.

                                                                                                                                              1. 15

Huh? That’s what the whole page is about. C doesn’t have: generic data structures and functions, compile-time reflection and code execution, order-independent top-level declarations, non-nullable pointer types, allocator types, etc.

                                                                                                                                                None of those things are “minor syntactic sugar”.

                                                                                                                                                If you don’t understand what those things mean, then it would be better to frame your comment as an honest question. Otherwise, what you are asking is explained very well by the article.

                                                                                                                                                1. -5

                                                                                                                                                  Outside of compile-time reflection, I consider those other features minor syntactic sugar. Sugar aside, for what it’s worth, I’ve never felt that the C language was lacking for not providing those features. As an experienced C programmer, pet features aren’t enough of an incentive for me to migrate to Zig, and I don’t think Zig can compete with C on the basis of pet features.

                                                                                                                                                  1. 14

                                                                                                                                                    Generics and non-nullable pointers are certainly not minor syntactic sugar, and to me they make Zig potentially more expressive and safer than C. Doesn’t hurt code re-usability either to have generic data structures.

                                                                                                                                                    1. -4

                                                                                                                                                      Would you stop writing C and exclusively use Zig in all previous situations where you would have used C just because it has generics and non-nullable pointers?

                                                                                                                                                      1. 4

                                                                                                                                                        When it reaches 1.0, yes, sure, anything to avoid C’s army of footguns.

                                                                                                                                                        1. 1

                                                                                                                                                          What features in 1.0 are you anticipating?

                                                                                                                                                          1. 3

The same features it already has, mostly, but with a self-hosted compiler and better stdlib and tooling (like editor plugins). I’m not the person to ask. The biggest 1.0 feature is language stability, which is required for adoption.

                                                                                                                                                            1. 1

                                                                                                                                                              What makes you think Zig is not stable enough today? For what have you used it?

                                                                                                                                                    2. 10

                                                                                                                                                      Did you read this section? https://ziglang.org/#Performance-and-Safety-Choose-Two

                                                                                                                                                      Based on the fact that you’re saying “pet features” I think you either didn’t see this, or didn’t grasp the implications of this.

                                                                                                                                                      1. -3

                                                                                                                                                        Putting the reading comprehension abilities of your target audience into question when they aren’t convinced by your arguments is probably not the best way of increasing the user-base of your project.

                                                                                                                                                        As far as detecting integer overflows at runtime goes, clang and gcc have that ability with UBSAN. https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html

                                                                                                                                                        1. 3

                                                                                                                                                          Have you ever tried to turn on multiple of the sanitizers at once?

                                                                                                                                                          1. -4

                                                                                                                                                            No, what is your point? I only need UBSAN to replicate the behavior of Zig.

                                                                                                                                                            1. 6

                                                                                                                                                              Incorrect. UBSAN does not catch:

                                                                                                                                                              • invalid union field access
                                                                                                                                                              • unsigned integer overflow

More capabilities are planned. Clang does not support more than one sanitizer at once; if you want UBSAN and ASAN you’re out of luck. [incorrect, see below]

It also does not allow you to selectively disable checks in the bottlenecks of your code. [also incorrect]

                                                                                                                                                              1. 1

UBSAN provides -fsanitize=unsigned-integer-overflow, which does catch unsigned integer overflow. I’m not sure what exactly you mean by invalid union field access, but UBSAN also provides -fsanitize=alignment, which catches misaligned structure dereferencing.

ASAN only catches memory errors; I don’t think it catches what you are referring to as invalid union field access, so I’m still not sure what your point is about running ASAN and UBSAN simultaneously. As I said before, UBSAN is enough to replicate Zig’s features and much more.

                                                                                                                                                                Clang does not support more than one sanitizer at once; if you want UBSAN and ASAN you’re out of luck.

                                                                                                                                                                This claim is simply false, this works fine:

                                                                                                                                                                clang -fsanitize=address -fsanitize=undefined foo.c

                                                                                                                                                                It also does not allow you to selectively disable checks in the bottlenecks of your code.

                                                                                                                                                                That’s also false, see __attribute__((no_sanitize("undefined"))) https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html#id10

                                                                                                                                                                1. 4

                                                                                                                                                                  I stand corrected. I have corrected the misinformation in my posts, but you have not corrected the misinformation in yours.

                                                                                                                                                                  UBSAN provides -fsanitize=unsigned-integer-overflow, which does catch unsigned integer overflow.

This is inherently flawed, since unsigned integer arithmetic in C is defined to wrap around (modulo 2^N). There will be false positives.

                                                                                                                                                                  I don’t need to win this argument. I was misinformed on a couple points which make my position less of an open and shut case, fine. There are plenty of other talking points. See you in the thread for 0.5.0 release notes, I look forward to reading your comments then.

                                                                                                                                                                  1. 0

                                                                                                                                                                    I stand corrected. I have corrected the misinformation in my posts, but you have not corrected the misinformation in yours.

                                                                                                                                                                    What misinformation have I put forth?

                                                                                                                                                                    This is inherently flawed since unsigned integer arithmetic in C is defined to be twos complement wraparound. There will be false positives.

Just because it’s defined doesn’t mean it’s intentional. In 80% of cases (read: a Pareto majority), unsigned integer overflow in C is not intended.

                                                                                                                                                                    See you in the thread for 0.5.0 release notes, I look forward to reading your comments then.

                                                                                                                                                                    Believe it or not, it’s not my mission in life to trash your language. I believe I gave fair feedback. Many of the people here may pretend to support you, but it doesn’t seem like many of them will actually use the language when asked directly. If you want to eventually appeal to the majority of real C programmers, and not a minority of hobbyist enthusiasts, it would be wise to not take negative feedback defensively.

                                                                                                                                                                    1. 5

                                                                                                                                                                      I believe I gave fair feedback.

                                                                                                                                                                      To me it seems more like, for whatever reason/motivation, that you just came to this thread to do some trolling and to generally be combative with folks.

                                                                                                                                                                      1. 2

                                                                                                                                                                        He just said in his opinion the features are not worth making the switch. That is just an opinion - an honest one at that. I don’t know if I would call it trolling…

                                                                                                                                                                        1. 1

Hmm, trolling is defined as feigning a position in bad faith. I’m not feigning any position; I genuinely believe the feedback I am providing, and I’m providing it in good faith. That’s actually the opposite of trolling. What statement specifically made it seem to you like I was trolling?

                                                                                                                                                                          1. 4
                                                                                                                                                                            1. -3

                                                                                                                                                                              Ugh an accusation of sealioning requires an assumption of bad faith, which is an act of bad faith itself. You have no rational basis to assume bad faith on my part, and criticism itself isn’t an indication of bad faith.

                                                                                                                                                        2. 6

                                                                                                                                                          I somewhat agree that if you look from a big picture perspective, many of the features are just syntactic sugar. There are not significantly more things you can do with zig that you can’t do with C. I also think that the sum is greater than the parts, and all the ergonomic features will make a large difference in how pleasant it is to write code.

                                                                                                                                                          Things like defer statements and automatic error code types may be enough to push bug counts down for example.

                                                                                                                                                          Generics and a compelling dependency manager probably would be a fundamental shift in how people use the language vs C too.

                                                                                                                                                          1. 4

Just try to imagine what kind of changes would be required for C compilers to implement this “minor syntactic sugar”. On top of all the things that were already mentioned, there’s some support for error handling, and it seems to have some sort of type inference.

                                                                                                                                                            It’s perfectly fine you don’t necessarily consider these features important enough to use Zig in favor of C but technically speaking, these differences are quite major.

                                                                                                                                                            From Wikipedia:

                                                                                                                                                            Language processors, including compilers and static analyzers, often expand sugared constructs into more fundamental constructs before processing, a process sometimes called “desugaring”. […] This is, in fact, the usual mathematical practice of building up from primitives.

Transcompilation from Zig to C would be far beyond a simple expansion (assuming you don’t consider C to be syntactic sugar for assembly, in which case yes, it would be).

                                                                                                                                                            1. 3

                                                                                                                                                              C++ competed with C on the basis of having extra features like that. There’s a chance for Zig to as well, esp if maintaining simplicity.

As far as syntactic sugar goes, I think anything that increases productivity and maintainability is a net benefit. You might keep writing and rewriting more code in C, for nothing, to perform the same tasks. At the same time, there might be other people who don’t want to work with 1970s limitations if a language without them is available in 2019. They’ll gladly take a chance to reduce the code required, make fewer breaking changes, and catch more errors at compile time.

                                                                                                                                                              1. -1

                                                                                                                                                                My argument isn’t that language features aren’t good. If you want an advanced language, use Rust or C++. But if you want something simple like C, is using Zig really worth it?

                                                                                                                                                                Be honest here, is Zig’s language feature set compelling enough to you such that you will abandon C, and exclusively use Zig in all places you would have previously used C?

                                                                                                                                                                1. 9

                                                                                                                                                                  But if you want something simple like C

The C standard isn’t simple. A subset of Rust might even be simpler than analyzing all of that plus the compiler behaviors. We’re talking about a complicated language that looks simple vs. another language that might be equally or less complicated. Could be an advantage.

                                                                                                                                                                  “and exclusively use Zig in all places you would have previously used C?”

You keep saying that, but it’s nonsense. I always watch out for people saying a proposal should “always” or “never” have some property. They’re often setting something up to fail rather than trying to understand real-world pros and cons. The reason is that there’s a large variety of contexts that a solution might apply to… or not. There might be uses for C that a C replacement isn’t as appropriate for, especially while the replacement is less mature, without decades of ecosystem investment. More likely there will be “some places” or “most places” to use Zig, with others using C. Over time, some people might hit the “all places” point. Not a good starting point for evaluation, though.

                                                                                                                                                                  1. -5

                                                                                                                                                                    There might be uses for C that a C replacement isn’t as appropriate for.

                                                                                                                                                                    Interesting logic. The fact that you have avoided answering whether or not Zig replaces C for you, when that is the explicit intended purpose of Zig, shows that your defense of Zig is sort of phony so really not worth acknowledging.

                                                                                                                                                                    1. 6

                                                                                                                                                                      Zig is an alpha language in development. C took some time to get where it is. You’re fine with that but Zig has to, in version 0.4, immediately replace a language with decades of development in all situations. You should ditch C, too, on that basis since people with high-level languages and efficient assembly were saying similar things about it early on. Alternatively, allow Zig a chance to develop and attempt to prove itself incrementally like C did.

                                                                                                                                                                      1. -3

                                                                                                                                                                        You’re arguing against a point I never made. I’m criticizing Zig on the features it’s offering today, not what it may provide tomorrow. Could Zig be a viable replacement for C in the future? Sure, we agree there. Am I saying that development of Zig should halt? No.

                                                                                                                                                                        Now be honest here, is Zig’s current language feature set compelling enough to reasonably consider it a replacement for C today? Doesn’t have to be in all cases, let’s say most cases.

                                                                                                                                                                        1. 5

                                                                                                                                                                          Now be honest here, is Zig’s current language feature set compelling enough to reasonably consider it a replacement for C today?

                                                                                                                                                                          Depends on the use case and compiler quality. I think one should assume answer is no by default until proven otherwise. It’s currently at a stage where hobbyists with an interest might apply it to various problems C is used for. That will produce the field data needed to evaluate what it can do now and how it might be improved.

You moved the conversation pretty far, though. You originally accused it of just being syntactic sugar vs C. That’s like saying C is just syntactic sugar for assembly, knowing there’d be a lot of work to make a compiler for that “syntactic sugar”, versus doing all the same things as easily in assembly. Zig is doing a mix of syntax, semantics, and analysis to attempt to bring benefits over what C and its compilers do. The compile-time programming alone looks like a huge improvement over C’s preprocessor.

                                                                                                                                                                          1. 1

                                                                                                                                                                            Depends on the use case and compiler quality. I think one should assume answer is no by default until proven otherwise.

                                                                                                                                                                            Thanks for agreeing with me. By default I wouldn’t use Zig over C either. It might be useful if you let the creators know what it would take for you to generally default to using Zig over C.

                                                                                                                                                                            That’s like saying C is just syntactic sugar for assembly

                                                                                                                                                                            Nope, there is a specific definition of syntactic sugar, and C is most definitely not syntactic sugar for assembly. I would consider Zig’s small set of language features syntactic sugar for C.

                                                                                                                                                                            1. 3

If it is syntactic sugar, you should be able to throw it together in an afternoon or so by integrating it with C. Nothing semantic or deep that poses challenges. Whereas he’s been working full time on a language that already does things C can’t without going far from the language.

                                                                                                                                                                              1. -2

                                                                                                                                                                                If it is syntactic sugar, you should be able to throw it together in an afternoon or so integrating it with C.

                                                                                                                                                                                Yes, with the exception of compile-time reflection, I can implement all of Zig’s language features in C using a combination of macros and a small support library.

                                                                                                                                                                                1. 3

                                                                                                                                                                                  with the exception of compile-time reflection, I can implement all of Zig’s language features in C using a combination of macros and a small support library

                                                                                                                                                                                  I don’t care about Zig one way or the other, but this is a gross misrepresentation. In Zig, quote, “A generic data structure is simply a function that returns a type”. You cannot implement that with a combination of macros. You could implement a set of predefined parameterised types (lists, sets, so on), but that is not the same thing as supporting declaration of generic types directly in the language, particularly with the flexibility that Zig does provide.