1. 1

    memory size is almost unlimited

    This is both mathematically false and false in practice. The entire article is based on this premise.

    Memory is a limited resource. Memory allocation can fail. Any language that denies this fact cannot be used for 100% reliable software.

    1. 10

      I wonder how much security review maintainers actually do. Reviews are difficult and incredibly boring. I have reviewed a bunch of crates for cargo-crev, and it was very tempting to gloss over the code and conclude “LGTM”.

      Especially in traditional distros that bundle tons of C code, a backdoor doesn’t have to look like if (user == "hax0r") RUN_BACKDOOR(). It could be something super subtle like an integer overflow when calculating lengths of buffers. I don’t expect a volunteer maintainer looking at a diff to spot something like that.
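      To make the point concrete, a length calculation of the kind described can wrap around silently in C. A minimal sketch of the idea with made-up sizes (the specific values are hypothetical), using Rust’s wrapping_add to emulate C’s unsigned wraparound:

      ```rust
      fn main() {
          // Hypothetical sizes: an attacker-controlled length plus a small header.
          let payload_len: u32 = 0xffff_fff0;
          let header_len: u32 = 0x20;
          // In C, unsigned addition silently wraps; emulate that explicitly here.
          let alloc_size = payload_len.wrapping_add(header_len);
          // The "total" is tiny, so a later copy of payload_len bytes would
          // run far past the undersized buffer.
          assert_eq!(alloc_size, 0x10);
          println!("alloc_size = {:#x}", alloc_size);
      }
      ```

      Nothing here looks like an obvious backdoor in a diff, which is exactly the problem.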

      1. 4

        a backdoor doesn’t have to look like if (user == “hax0r”) RUN_BACKDOOR()

        Reminded me of this attempted backdoor: https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-attempt-of-2003/

        1. 3

          I assume that as long as we keep insisting on designing complex software in languages that can introduce vulnerabilities, then such software will contain such vulnerabilities (introduced intentionally or unintentionally).

          I think integer overflow is a great example. In C & C++ unsigned integers wrap on overflow and signed integers exhibit UB on overflow. Rust could have fixed this, but overflow is unchecked by default in release builds for both signed and unsigned integers! Zig doubles down on the UB by making signed and unsigned integer overflow both undefined unless you use the overflow-specific operators! How is it that none of these languages handle overflow safely by default?! (edit: My information on Zig was inaccurate; it actually does have checks built into the type system (error system?) of the language!)

          Are the performance gains really worth letting every addition operation be a potential source of uncaught bugs and vulnerabilities? I certainly don’t think so.

          1. 6

            Rust did fix it. In release builds, overflow is not UB. The current situation is that overflow panics in debug mode but wraps in release mode. However, it is possible for implementations to panic on overflow in release mode. See: https://github.com/rust-lang/rfcs/blob/master/text/0560-integer-overflow.md
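            Independent of the build profile, Rust also exposes explicit methods whose overflow behavior is always defined; a minimal sketch:

            ```rust
            fn main() {
                let x: u8 = 255;
                assert_eq!(x.checked_add(1), None);   // overflow reported as None, in any profile
                assert_eq!(x.wrapping_add(1), 0);     // explicit two's-complement wrap
                assert_eq!(x.saturating_add(1), 255); // clamp at the type's maximum
                println!("ok");
            }
            ```

            These behave identically in debug and release builds, so code that cares about overflow can opt in regardless of the default.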

            Obviously, there is a reason for this nuanced stance. Because overflow checks presumably inhibit performance or other optimizations in the surrounding code.

            1. 2

              To clarify what I was saying: I consider overflow and UB to both be unacceptable behavior by my standard of safety. Saying that it is an error to have integer overflow or underflow, and then not enforcing that error by default for all projects, feels similar to how C has an assert statement that is only enabled for debug builds (well, really just when NDEBUG isn’t defined). So far the only language I have seen which can perform these error checks at compile time (without using something like Result) is ATS, but that language is a bit beyond my abilities.

              If I remember correctly, some measurements were taken and it was found that always enabling arithmetic checks in the Rust compiler led to something like a 5% performance decrease overall. The Rust team decided that this hit was not worth it, especially since Rust is aiming for performance parity with C++. I respect the team’s decision, but it does not align with the ideals that I would strive for in a safety-first language (although Rust’s primary goal is memory safety, not everything-under-the-sun safety).

              1. 4

                That’s fine to consider it unacceptable, but it sounded like you thought overflow was UB in Rust based on your comment. And even then, you might consider it unacceptable, but surely the lack of UB is an improvement. Of course, you can opt into checked arithmetic, but it’s quite a bit more verbose.

                But yes, it seems like you understand the issue and the trade off here. I guess I’m just perplexed by your original comment where you act surprised at how Rust arrived at its current state. From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference. (Or rather, at least that was the hope as I remember it at the time the RFC was written.)

                1. 2

                  Oh no, I actually am glad that at least Rust has it defined. It is more like Ada in that way, where it is something you can build static analysis tools around, instead of the UB-optimization wild west like in C/C++.

                  From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference.

                  Yes, I believe that was in the original RFC, or at least it was discussed a bunch in the GitHub comments, but “once technology erased the performance difference” is different than “right now in shipping code after the language’s 1.0 release”. I would say it is less surprise that I feel and more a grumpy frustration: one of the reasons I picked up (or at least tried to pick up, RIP) Rust in the first place was because I wanted a more fault-tolerant systems programming language as my daily driver (I am primarily a C developer). But I remember being severely disappointed when I learned that overflow checks were disabled by default in release builds, because that ends up being only marginally better than everyone using -fwrapv in GCC/Clang. I like having the option to enable the checks myself, but I just wish it was universal, because that would eliminate a whole class of errors from the mental model of a program (by default).


            2. 3

              With the cost, it depends. Even though overflow checks are cheap themselves, they have a cost of preventing autovectorization and other optimizations that combine or reorder operations, because an abort/exception is an observable side effect. The branches of the checks are trivially predictable, but they crowd other branches out of branch predictors, reducing their overall efficiency.

              OTOH overflows aren’t that dangerous in Rust. Rust doesn’t deal with bare malloc + memcpy. Array access is checked, so you may get a panic or a higher-level domain-specific problem, but not the usual buffer overflow. Rust doesn’t implicitly cast to a 32-bit int anywhere, so most of the time you have to overflow 64-bit values.
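              A small illustration of the bounds-checking point (a toy example, not from the discussion above): an out-of-range index can never silently read past the buffer.

              ```rust
              fn main() {
                  let buf = vec![1u8, 2, 3];
                  // The non-panicking accessor reports an out-of-range index as None...
                  assert_eq!(buf.get(10), None);
                  // ...while buf[10] would panic at runtime rather than overflow the buffer.
                  println!("ok");
              }
              ```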

              1. 3

                How is it that none of these languages handle overflow safely by default?!

                The fact that both signed and unsigned integer overflow is (detectable) UB in Zig actually makes Zig’s --release-safe build mode safer than Rust’s --release with regards to integer overflow. (Note that Rust is much safer, however, with regards to memory aliasing.)

                1. 1

                  I stand corrected. I just tested this out and it appears that Zig forces me to check an error!

                  I’ll cross out the inaccuracy above…and I think I’m going to give Zig a more rigorous look… I’m quite stunned and impressed actually. Sorry for getting grumpy about something I was wrong about.

                2. 2

                  I don’t think so either. I did a benchmark of overflow detection in expressions and it wasn’t that much of a time overhead, but it would bloat the code a bit, as you have to not only check after each instruction, but also use only instructions that set the overflow bit.
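                  In Rust this check-after-each-operation pattern is exposed directly as overflowing_add, which typically compiles to the add instruction plus a test of the CPU’s overflow/carry flag; a small sketch:

                  ```rust
                  fn main() {
                      // Wrapped result plus a flag indicating whether overflow occurred.
                      let (result, overflowed) = 250u8.overflowing_add(10);
                      assert_eq!((result, overflowed), (4, true)); // 260 mod 256 = 4
                      println!("ok");
                  }
                  ```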

              1. 59

                Update: been working on a better approach to these problems that leave affected users feeling less put-out. I’ll be starting with a better email template in the future:


                In the future I’ll be working on better automated soft limits, so that users aren’t surprised by this.

                @sjl: after thinking it over more, I was unprofessional and sarcastic with you. I apologise.

                1. 41

                  I think it would be beneficial for you to take on the mindset that your users’ use cases are always valid, by definition, as a premise. Whether or not your service can handle their use cases is a separate question, but this idea that you know better than your users what they should be doing is not going to accomplish your goals.

                  As another example, I happen to need 2-3 more GiB RAM than the sr.ht freebsd build services offers at the moment, and have offered to up my monthly donation to account for the resource usage, and you’ve turned me down, on the grounds that I’m fundamentally abusing computer hardware in some moral way. As a result, Zig freebsd builds have many of the tests disabled, the ones where the bootstrapping compiler is a bit memory hungry. Zig’s FreeBSD users suffer because of this. And as a result, when someone else offers me a different FreeBSD CI service with more RAM, I’m ready to accept it, because my use case is valid.

                  1. 6

                    Could linking with the Boehm conservative GC work as a stopgap? I think it won’t require any code changes.

                    1. 4

                      Something Andrew doesn’t mention here is why he needs 2-3 GiB more RAM: because, by design, his compiler never frees memory. Nearly all of that RAM is dead memory. In order to accommodate this use-case, I’d have to provision dedicated hardware just for Zig. Sometimes, use-cases are wrong, and you need to correct the problem at the source. Just because someone is willing to throw unspecified sums of money at you to get their “use-case” dealt with doesn’t mean it’s worth dealing with. I have finite time and resources, and maybe I feel like my time is better spent implementing features which are on-topic for everyone else, even at the expense of losing some user with more money than sense.

                      1. 44

                        even at the expense of losing some user with more money than sense.

                        I really hope you change your tune here. Insulting users is pretty much the worst thing you could do.

                        Another thread recently talked about the fact that compilers don’t free memory, because the goal of a compiler is to be as fast as possible, so they treat the heap as an arena that the OS frees. Compilers have done this for 50+ years; Zig isn’t special here.
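                        That arena strategy can be sketched in a few lines (a toy illustration only, not how any particular compiler is implemented):

                        ```rust
                        // A toy arena: allocations are appended and everything lives until
                        // the arena itself is dropped, mirroring compilers that treat the
                        // heap as one big arena and let the OS reclaim it at exit.
                        struct Arena<T> {
                            items: Vec<T>,
                        }

                        impl<T> Arena<T> {
                            fn new() -> Self {
                                Arena { items: Vec::new() }
                            }
                            // Allocation is just an append; the returned index is the handle.
                            fn alloc(&mut self, value: T) -> usize {
                                self.items.push(value);
                                self.items.len() - 1
                            }
                            fn get(&self, id: usize) -> &T {
                                &self.items[id]
                            }
                        }

                        fn main() {
                            let mut arena = Arena::new();
                            let a = arena.alloc("node-a");
                            let b = arena.alloc("node-b");
                            assert_eq!(*arena.get(a), "node-a");
                            assert_eq!(*arena.get(b), "node-b");
                            // No individual frees: the whole arena is released at once.
                            println!("ok");
                        }
                        ```

                        The trade-off is exactly the one under discussion: allocation is a cheap append and there is no per-object bookkeeping, at the cost of peak memory holding everything ever allocated.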

                        1. 4

                          I didn’t mean to imply that Andrew doesn’t have sense, but that the hypothetical customer-thats-always-right might not.

                          As for compilers never freeing to be fast, utter bollocks. So the compiler should OOM if I have <8G of RAM to spare? Absolutely nuts. Freeing memory is not a performance bottleneck.

                          1. 38

                            Your reasoning is sound, your wording and phrasing choices are not. In what I’ve read you don’t come off as witty when you’re dealing with a paying customer and telling them they can’t do something which I also think is unreasonable, you come off as a dick. That’s how it appears. I don’t have any problems with you or your services and I think you working on this stuff is awesome… but I wouldn’t pay for insults in addition to whatever else you might provide.

                            1. 5

                              As long as I’ve known him Drew has pretty consistently been like this. It’s not a bad thing. It’s quite refreshing actually.

                              1. 35

                                It’s refreshing to have a business make fun of you?

                                1. 9

                                  It’s quite refreshing to see someone willing to say ‘no you’re wrong’ instead of the typical corporate ‘the customer is always right’ bullshit so many people here have obviously come to expect.

                                  Sometimes the customer is wrong.

                                  1. 34

                                    It’s OK for both people to be right, and the customer to stop paying for the service and walk away. It’s then OK for the customer to go tell people about how they were treated. Hopefully that happens more.

                                2. 24

                                  As a former moderator in prior communities, I politely disagree. Folks that are never not toxic are a serious liability and require special effort to handle well. I recall one memorable day when Drew dared me to ban him; I should have let the emotions flow through me and removed him from the community.

                                  Also, as a former business owner, I politely disagree that this is good business practice.

                                  1. 16

                                    I agree it’s good, now I know to avoid this business!

                            2. 14

                              CPU speed vs memory usage is a fundamental resource tradeoff that occurs all the time in computing. Just because you disagree with where on the spectrum someone has chosen to aim their design doesn’t mean they’re stupid. Especially when they too are a mostly-one-person project operating on limited resources.

                              It’s PERFECTLY VALID to say “I don’t have time to accommodate this one special case, sorry”. It is NOT perfectly valid to say “you are stupid for needing this special case, go away”. Money vs. person-time is another fundamental resource tradeoff where different people have different priorities.

                              1. 21

                                Regardless of the use case, I’d really rather not have my SCM platform making discretionary decisions about what I’m working on. The users aren’t paying for you to audit them, they’re paying for the services provided by the software. If you want your service to come with the exemption that you get to unilaterally decide whose content is allowed and whose content isn’t allowed, you’re free to do that. Just expect the community to nearly unanimously respond with “we’ll go elsewhere”

                                1. 7

                                  He’s not making ‘discretionary decisions about what [you’re] working on’. I don’t see Drew saying ‘you can’t use this service because I don’t like the way your compiler is designed’. He’s saying ‘provisioning dedicated hardware for specific projects is a lot of time and effort that I don’t have, so I’d need to have a really really good reason to do it, no matter how much money you’re willing to throw at me, and you haven’t given me one’.

                                  Every service out there gets to decide what is allowed and what isn’t. Look at the terms of service of any file or repository hosting service anywhere. GitHub, GitLab, Bitbucket, imgur, pastebin services… ALL of them make it clear in their terms of service that it’s entirely up to their whim whether they want to host your files or not.

                                  1. 32

                                    Drew is literally commenting on a particular user’s project, and how its design is a problem, so I have no idea what you’re talking about:

                                    Something Andrew doesn’t mention here is why he needs 2-3 GiB more RAM: because, by design, his compiler never frees memory. Nearly all of that RAM is dead memory.

                                    As for compilers never freeing to be fast, utter bollocks.

                                    @andrewrk can hopefully clarify, but I thought his offer to up monthly donations was to improve sr.ht’s FreeBSD offering, in general, not necessarily to only improve Zig builds (Zig builds would improve as a byproduct of improving the FreeBSD infrastructure). If the donations were only to be used to improve Zig-specific experiences, then I understand the argument that Drew doesn’t want to commit to that.

                                2. 12

                                  It just seems weird to me that one of your criteria for whether or not to give a customer resources is based on a personal audit of their code. Are you going to do this for every customer?

                                  1. 3

                                    I completely understand the concern here, and take it very seriously. I usually don’t dig into the repo at all and just reach out to the user to clarify its purpose. In this case, though, the repo was someone’s personal website, and named as such, and connecting the dots did not require much.

                                    1. 2

                                      As explained downthread, it’s “Alert fires -> look for what’s caused the alert -> contact customer whose repo tripped the alert”.

                                  2. -2

                                    ‘The customer is always right’ is nonsense.

                                    1. 10

                                      Nobody’s suggesting otherwise.

                                      1. -8

                                        Literally everyone else in this thread is acting like the customer is always right.

                                  3. 18

                                    You handled this very professionally and courteously. I plan to continue to use sr.ht for many happy years to come.

                                    1. 6

                                      You are under no obligation to explain or justify your business model to anyone, or, on a personal level, what self-sustainability, your own peace of mind, well-being, or definition of meaningful sustainable work is.

                                      There is a particular mode of doing business these days which people inside that paradigm often do not understand that they are inside and therefore apply force to get others to conform.

                                      You’re breaking old paradigms and inventing new ways of running organisations and that is brave, ground breaking and commendable and politically powerful.

                                      I hope issues like this do not deter you one bit from blazing your own trail through the fucked up world that is tech organisations in late stage capitalism, and I hope you share as much as you can about how you’re doing personally and in ‘business’.

                                      1. 2

                                        git-lfs implementations often don’t allow reclaiming unreachable blobs: once you push a binary blob, even on a branch that you deleted, it will take up space forever.

                                        Maybe it is worth investigating git-annex while you’re on this topic.

                                        1. 5

                                          Yeah, git-annex is also something I intend to study. I’m only just setting up large file servers for blob storage; figuring out how to apply them is the next step.

                                      1. 2

                                        I think this misses the point of the previous article. I don’t think the previous article is shaming languages for providing more features or anything like that. It highlighted the fact that a bunch of languages in their compiled form do additional things that are unnecessary for what the code does. This is where the author misses the point. E.g. when talking about stack traces, the author seems not to know that the Zig (safe) build does have stack traces using DWARF while only making 3 calls and weighing 11KB.

                                        The “lets include repeating in the program” section misses the point again. This wasn’t any kind of performance benchmark. It’s more of a compiler optimization benchmark.

                                        Then we go to excusing Go. Yeah, sure, we want fast builds. But do we really need all of that stuff that is entirely unnecessary for the execution of the given code? So why have it? Do we use reflection in our code? Do we use multi-threading? Do we create any objects that would require error handling? Does the task benefit from knowing whether the streams block? Should this program care about signals? Do we need to know the executable path? No. No we don’t, and this is additional code that complicates debugging. Now some might say it doesn’t complicate it, but if you need to debug a performance bug on startup, all of this different stuff just gets in the way while not being necessary.

                                        1. 2

                                          Fair enough. I will grant that most existing high level languages aren’t particularly good at optimizing programs which merely print “hello world” to the screen.

                                          It’s a very silly thing to optimize for, with no connection to the challenges of real-world software development, but there you go.

                                          My “lets include repeating in the program” section was to illustrate why “slow” startup could be a problem in theory, but then to illustrate why merely eliminating code bloat isn’t always the best avenue for improving performance. (Also, “slow” here is not an Electron app. We’re talking about a few milliseconds; it’s completely imperceptible.)

                                          In my experience debugging a high level language like Go is much, much easier than debugging raw assembly. But then again I’ve only ever had to get that low-level maybe once or twice in my career. (Whereas I have to fix bugs in Go/Python/C#/Java code all the time)

                                          And I’ll take a fast build that makes a 5MB binary over a slow build that makes an 11KB binary. 5MB means nothing on my laptop, where I do most of my primary development. But a 1 second build vs a 3 minute build means a whole lot.

                                          Also FWIW, I used Go because it’s what I knew. But the same reasoning applies to any of the languages. I bet if you dig into what Rust or Java is doing, there are also abundantly good reasons for the syscalls there.

                                          1. 2

                                            Numbers mean something. 5 MB doesn’t matter in terms of a hard drive, but it matters. That’s at least 2 orders of magnitude over what the size could be. Comparing any binary against a hard drive is such a cop-out; that comparison hasn’t been meaningful in 20 years.

                                            Instead of hand waving it away, it would be better if you could explain what is being gained out of those extra 4 MB.

                                            1. 2

                                              The size of the binaries is a product of many things:

                                              1. Statically linking libraries instead of dynamically linking them
                                              2. Debugging information
                                              3. Pre-compiling object files to improve compiler performance
                                              4. Language features preventing dead-code analysis from eliminating all unused code

                                              Now certainly folks could spend time improving that, and there are issues in the go repo about some of them, but in practice a 5MB binary doesn’t matter in my everyday life. It doesn’t use up much space on my hard drive. I can upload them very quickly. They start plenty fast and servers I run handle them just fine.

                                              Why prioritize engineering resources or make the Go compiler slower to fix something that matters so little?

                                              1. 1

                                                …you did it again. You compared the size of the executable against the size of your hard drive. That’s the very thing that I just commented against in my last comment. Comparing an executable’s size against hard drive size hasn’t been a meaningful comparison in at least 20 years. Why do you keep doing it? Can’t you find a better comparison?

                                                1. 1

                                                  I’ve lost the issue here.

                                                  Why is a 5MB binary a problem?

                                                  1. 2

                                                    I never said a 5MB binary was a problem. I said comparing the size of an executable to your hard drive size is a meaningless comparison, and has been for a couple of decades.

                                                    I’ve said pretty much this same comment 3 times now; am I not being clear?

                                                    1. 2

                                                      OK. I agree a 5MB binary is not a problem.

                                                      1. 1

                                                        I also agree with you that short sighted comparisons are wrong.

                                          2. 1

                                            E.g when talking about stack traces, the author seems to not know that the Zig(safe) build does have stack traces using DWARF while only making 3 calls and weighing 11KB.

                                            The zig programs were built with --strip which omits debug info. A minimal program capable of displaying its own stack traces with --release-safe comes out to ~500KB. That’s the size of both the DWARF info, and the code to parse it and utilize it to display a stack trace.

                                            But yeah it’s still only 3 syscalls. The third one is a segfault handler to print a stack trace. The std lib supports opt-out with pub const enable_segfault_handler = false; in the root source file (next to main).

                                          1. 4

                                            Interesting. I would say that today a proper hello world should at least ensure it outputs unicode/utf8, and maybe do l10n.

                                            I wonder why c99 w/puts is so bloated?

                                            1. 7


                                              print "👋🌍\n"


                                              1. 3

                                                For example, although I’d go with something more along the lines of regular utf8 multibyte, like Japanese or Norwegian (partly because they are languages I know).

                                                One probably really should do a right-to-left language too, for good measure.

                                                And as mentioned, basic localization (l10n) might be useful too - unfortunately that does impose a bit more - finding a way to encode and look up translations.

                                                1. 5

                                                  Outputting utf8 is the same thing as outputting bytes. “👋🌍\n” is a valid string literal in C and you can print it to stdout. Applications and especially libraries rarely need to decode utf8.
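                                                  A small sketch of this point, assuming a terminal that accepts UTF-8: the string’s bytes can be written directly, with no decoding step involved.

                                                  ```rust
                                                  fn main() {
                                                      // UTF-8 text is just bytes: writing it out requires no decoding.
                                                      let s = "👋🌍\n";
                                                      assert_eq!(s.len(), 9); // byte length: two 4-byte emoji plus '\n'
                                                      print!("{}", s); // forwards the raw bytes to stdout unchanged
                                                  }
                                                  ```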

                                                  1. 1

                                                    Sure, nothing wrong with using emoji. I tend to prefer Norwegian mostly because I know I have proper font support everywhere for multibyte sequences.

                                                    On another note, for many, many applications dealing with text, you probably want to do things like sort, which implies decoding in some form. And maybe case-insensitive sort and/or comparison.

                                                    For web servers and similar applications, you want to be aware of encodings, even if all you do is crash and burn on non-utf8 data.

                                                    Outputting “bytes” to a “text” stream can be a little unreliable as well.

                                            1. 2

                                              I reviewed a system76 laptop back in 2014. I have avoided System76 ever since, but perhaps it’s time to give them another chance. Have they addressed the problems noted in that earlier review? Particularly the one about the power cable not fitting in well.

                                              1. 4

                                                Yep, the power cable fits in perfectly (and the power brick is much nicer and smaller as well). The touchpad has physical buttons now too; I don’t think I’d buy a laptop that doesn’t, except for the MacBooks.

                                              1. 10

                                                 I would prefer we maybe even remove programming-language-specific tags. IMO Zig is not ready for its own tag. Where is the COBOL tag? Where is the Forth tag? So on and so forth.

                                                I think there is enough posting about it to warrant a tag.

                                                All the links you posted are months apart from each other.

                                                1. 15

                                                  Lobsters seems to get more Zig submissions than COBOL submissions. That is very unreflective of the real world, but Lobsters tags are for Lobsters submissions, not for the real world.

                                                  1. 1

                                                    Lobsters has more Nim submissions than Zig submissions. And if I use any other metric, Zig is non-existent; e.g. GitHub repositories: it has fewer than 10.

                                                    1. 1

                                                      I don’t actually care about whether lobste.rs has a zig tag, but this claim is simply false:

                                                      GitHub reports 136 public repositories which both have the zig tag manually selected by the maintainer and use the Zig language.

                                                      1. 1

                                                        Oh, I was searching via language, Github must not have indexed those projects yet, only 2 results show up: https://github.com/search?q=language%3Azig

                                                        Still 1/10th the size of Nim, though that may be due to differences in age.

                                                  2. 6

                                                    In regards to the COBOL and FORTH tags, I have not seen nearly as many active COBOL and FORTH posts as I have Zig posts. Also the creators/maintainers of those languages are not as active as the creator/maintainers of Zig are on this site.

                                                    I think that the discussion on whether any language should have a tag (which is a fine discussion to have) is separate from if Zig should have a tag. In my opinion the question is, given that we have language specific tags should Zig get its own?

                                                    1. 3

                                                      There are languages with a vastly larger audience that don’t have a tag, and imho shouldn’t have a tag, my favorite language included. Tags are not a magical boon to a language, in fact they are liable to have an opposite effect if the audience isn’t large enough. I am not a fan of Zig, but I do think it deserves a fair shot as much as any other. If given the chance, perhaps it will one day turn into a language I’m excited about. Without that fair shot, that day will never come.

                                                      1. 3

                                                        I don’t think that the main purpose of this site is to be an incubator for languages, but a place for people to discuss articles related to tech. So, to elaborate on my post above, in my opinion the question is*, “would a tag help users of this site find/avoid a topic?”, not “would the tag help the growth of said topic?”.

                                                        *given that we have language specific tags and given how tags are currently being used to filter for topics

                                                        1. 3

                                                          I don’t think there is enough audience or activity to make a zig tag useful or interesting, either to filter or search for. I would probably filter it out if it existed, but it would not be a very useful or good filter because it would filter out an article a month. If we are going to create a new tag I would actually prefer something that covers Rust, C, C++, Zig, D, Go and the various other low/no gc languages that primarily value performance.

                                                    2. 3

                                                      Agreed. As an alternative proposal, perhaps having tags for types of languages would be more useful. For example, a tag for languages that are typically compiled (e.g. C, Go, Rust) and a tag for languages that are typically interpreted (e.g. Python, Bash).

                                                      (Yes, yes, there are languages that may fall in one category or another based on the context, but in those cases, the appropriate tag can be used by the poster; I’m not suggesting that the tags draw hard lines based on the language being discussed)

                                                      1. 3

                                                        A little finer grained than that would be nice but yes. The ML tag gets used by OCaml, F#, Haskell etc.

                                                    1. 15

                                                      I think this is a small front in a greater battle - should the application developer be in charge of distribution, or should the operating system vendor? This battle gets more complex with, e.g., the fact that Linux distributions aren’t very good at ABI stability, and with language-specific package managers.

                                                      You can regenerate them from autoconf, but I don’t think many do? Even systems like FreeBSD ports just use existing configure scripts.

                                                      If you patch the configure script, you have to regenerate it, so…

                                                      1. 6

                                                        I think you hit on the crux of the issue. As an upstream application developer, I want my users to get the intended experience of my application, including access to the latest releases. As a package maintainer, I want my users to get the most integrated system that works for them.

                                                        One classic example of this dissonance is the naming of nodejs on Debian. node was already taken by another package, so Debian devs had to do a lot of patching to rename the binary. This was not a popular decision with nodejs devs.

                                                        1. 1

                                                          A better alternative: the application developers are responsible for making their software easy to package. The package maintainers do the packaging. Everybody wins.

                                                        1. 16

                                                          I submitted this because I want to show off something in particular: async/await code that works in both evented I/O mode and normal (blocking) mode. The readme file explains it at the bottom of the “Performance” section.

                                                          I think a lot of people have not realized yet that zig’s async/await is actually intended to be a high level “express the concurrency in your business logic” sort of thing, and I hope this can show it off a bit.

                                                          Also is Zig noteworthy enough now to get its own lobste.rs tag? :-)

                                                          1. 4

                                                            I would think that would be decided by Zig users :). However I might caution you that most people use tags as a filter rather than a search term: searching for Zig is somewhat trivial (it’s not a particularly common word), but filtering out Zig requires a tag. It may be nice, however, if there were a tag for discussions of languages that use manual memory management or are gc-lite, like Rust.

                                                            1. 2

                                                              Perhaps it’s because it’s late here, I’m tired, and I’m also getting a cold, but I’m not sure I get the point you’re trying to make here. Am I right to believe that this example throws shade at the famous node.js “factorial server” from years ago that showed computation kills an I/O based event loop, and that Zig doesn’t have this problem?

                                                              And, as a follow-up: is the reason for this that you’ve made the computation a multiplication of 2 numbers, and then yield back to the event loop for the next call up the stack?

                                                              1. 1

                                                                The point is that there’s no “async version” of the standard library, and when one makes a package in zig, they don’t have to choose whether to make an async version or not. You can write the same code, which expresses concurrency, but that works in applications that want to use evented I/O or blocking.

                                                                1. 1

                                                                  no “async version” of the standard library, and when one makes a package in zig, they don’t have to choose whether to make an async version or not.

                                                                  Got it. This wasn’t something that I actually considered when reading this. Not sure why…

                                                                  1. 1

                                                                    Does that mean that the std lib will undergo asyncification?

                                                                    1. 2

                                                                      No, and that’s the point!

                                                                      You can write the same code, which expresses concurrency, but that works in applications that want to use evented I/O or blocking.

                                                              1. 12

                                                                Why is a Go version from 3 years ago used? What is the point of this comparison? If it is about performance, then it is strange to use such an old version of one of the languages. If it is about source code comparison, then what is the point of including the benchmarks? When I saw that Go from 3 years ago was used, I initially suspected malicious intent, and then I noticed that some of the Zig examples are calling C code for some reason. If there is some kind of benchmark being performed, then the goal posts should be clearly defined and the methodology should be honest.

                                                                1. 5
                                                                1. 15

                                                                  The Zig gmp versions call C code, and are no longer really “Zig”, or a fair comparison to the Go code. You can use cgo to do the same with Go for an equal comparison.

                                                                  1. 7

                                                                    It looks like for the direct Zig vs Go comparisons, Go clearly wins. Now in full generosity to the author it appears they were trying to show how GMP versions are fast, but it’s unclear why that should be notable here.

                                                                    fact-linear.go - 0.11 seconds
                                                                    fact-linear.zig - 0.51 seconds
                                                                    fact-channel.go - 0.10 seconds
                                                                    fact-channel.zig - 1.0 seconds
                                                                    fact-linear.go - 19.86 seconds
                                                                    fact-linear.zig - 46.43 seconds
                                                                    fact-channel.go - 20.2 seconds
                                                                    fact-channel.zig - 23.6 seconds
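For readers who haven’t opened the repo: a linear big-int factorial like fact-linear.go presumably boils down to a loop over math/big multiplications, roughly like this sketch (the function name is mine, not from the repo):

```go
package main

import (
	"fmt"
	"math/big"
)

// factorial computes n! with arbitrary-precision integers, which is
// roughly where a benchmark like fact-linear.go spends its time.
func factorial(n int64) *big.Int {
	result := big.NewInt(1)
	for i := int64(2); i <= n; i++ {
		result.Mul(result, big.NewInt(i))
	}
	return result
}

func main() {
	fmt.Println(factorial(20)) // prints 2432902008176640000
}
```

This is also why GMP dominates: for large n the bottleneck is the big-integer multiplication itself, not the language driving the loop.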
                                                                    1. 6

                                                                      it’s unclear why that should be notable here.

                                                                      Quoting the article:

                                                                      The most important thing to note here is that the fastest performance was a single-threaded implementation that linked gmp, which has high performance big-integer multiplication.

                                                                      So including gmp was a smart decision by Brendon to point out that hey, big caveat in this example, the bottleneck is large integer multiplication, and both Go and Zig’s std lib big ints are not even close to gmp’s performance.

                                                                      Which is why I’m trying to draw attention to the actual thing I’m trying to post about which is the fact-await.zig example demonstrating the same code working with evented I/O enabled and disabled. I think the significance of this has still not quite sunk in.

                                                                      1. 11

                                                                        I think perhaps you might want to reword your article to include some of these ideas more explicitly (and perhaps redundantly), and earlier on in the article. If you wish to share effectively what you have done here, you have to word it in such a way that it gets received in the way that you expect. Test it out on some readers you can trust to be honest and direct. If you were trying to communicate that the bottleneck for both languages is large integer multiplication, that’s a fine and acceptable thing to say, but it wasn’t clear to me as a reader. Certainly at a glance you can concede that it appears as though you’re comparing Go to Zig with GMP, as though Go can’t use GMP even though it can [1].

                                                                        [1] https://github.com/ncw/gmp

                                                                    2. 5

                                                                      GMP versions are clearly marked GMP and I don’t see anything unfair. I think if you send PR to add cgo/GMP version it will be accepted.

                                                                      1. 19

                                                                        Just the three letters “gmp” don’t mean much; I had to look at the Zig source code to see it’s just calling C code; it could be a Zig library, for example. Right now someone looking at the overview will go away with the impression that Zig is significantly faster than Go, which is simply not the case.

                                                                        I think if you send PR to add cgo/GMP version it will be accepted.

                                                                        Probably, but I don’t think it’s my job to correct misleading benchmarks on the internet, and I don’t think that “send a PR” is a constructive reply to this kind of feedback.

                                                                        1. 3

                                                                          The article links to gmplib.org… You had to look at the Zig source code, but you could also have been an attentive reader. I tend to agree that the article could have been even clearer, and wording improvements will also be accepted with gratitude.

                                                                          I don’t think that “send a PR” is a constructive reply to this kind of feedback.

                                                                          Your “cgo” comment was clearly a feature request and not a bug report, and I believe “send a PR” is appropriate.

                                                                          1. 13

                                                                            I don’t know why you’re being so defensive; comparing a program that embeds C against one that doesn’t is clearly comparing apples to oranges. That it favours the language the author is trying to promote is not a good look, to say the least.

                                                                            If you need to carefully read everything, follow links, and have knowledge of what certain three-letter acronyms are, then you’re just being misleading. It’s no different than starting a chart at non-0 to exaggerate trends, for example. Sure, the information is in there, but it is also easy to miss. Any casual reader – which is most readers – will take away that Zig is faster than Go, which is not demonstrated by this particular benchmark. This is rather interesting, as Zig’s homepage claims that “Zig is faster than C”.

                                                                    1. 20

                                                                      What are the alternatives? Any other CDN offering free services for open source projects?

                                                                      1. 12

                                                                        What exactly do you need?

                                                                        1. 20

                                                                          ziglang.org is a static site with no JavaScript and no server-side code. The home page is 22 KB data transferred with cold cache. The biggest service that CloudFlare offers is caching large files such as:

                                                                          These are downloaded by the CI server for every master branch push and used to build & run tests. In addition to that, binaries are provided on the download page. These are considerably smaller, but as the number of users grows (and it is growing super-linearly), the cost of data transferred was increasing fast. My AWS bill was up to $20/month and doubling every month.

                                                                          Now these assets are cached by CloudFlare and my AWS bill stays at ~$15/month. Given that I live on donations, this is a big deal for me.

                                                                          1. 13

                                                                            You might consider having a Cloudflare subdomain for these larger binaries so that connections to your main website are not MITM’d. Then you could host the main website wherever you please, and keep the two concerns separable, allowing you to change hosting for the binaries as necessary.

                                                                            1. 4

                                                              If I were in this situation I would be tempted to rent a number of cheap (~€2.99/month) instances from somewhere like Scaleway, each with Mbps bandwidth caps rather than x GB per billing period, and have a service on my main server that would redirect requests to mirror-1.domain, mirror-2.domain, etc., depending on how much bandwidth each had available that second.
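A minimal sketch of such a redirect service, in Go (the mirror names are hypothetical, and a real version would periodically measure each instance’s spare bandwidth rather than hard-coding it):

```go
package main

import (
	"fmt"
	"net/http"
)

// Mirror pairs a hypothetical mirror URL with the bandwidth
// (in Mbit/s) it currently has to spare.
type Mirror struct {
	URL   string
	Spare float64
}

// pickMirror returns the URL of the mirror with the most spare
// bandwidth, or "" if none has any headroom.
func pickMirror(mirrors []Mirror) string {
	best, bestSpare := "", 0.0
	for _, m := range mirrors {
		if m.Spare > bestSpare {
			best, bestSpare = m.URL, m.Spare
		}
	}
	return best
}

func main() {
	mirrors := []Mirror{
		{"https://mirror-1.example.org", 12.5},
		{"https://mirror-2.example.org", 80.0},
	}
	// Redirect each download request to whichever mirror is least loaded.
	http.HandleFunc("/download/", func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, pickMirror(mirrors)+r.URL.Path, http.StatusFound)
	})
	fmt.Println(pickMirror(mirrors)) // prints https://mirror-2.example.org
	// http.ListenAndServe(":8080", nil) // omitted in this sketch
}
```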

                                                                          2. 17

                                                                            Fastly does: https://www.fastly.com/open-source

                                                                            Amazon also has grant offerings for CloudFront.

                                                                            1. 9

                                                                              Avoid bloating your project’s website, and use www. Put all services on separate subdomains so you can segregate things in case one of them gets attacked. If you must use large media, load them from a separate subdomain.

                                                                              edit: Based on the reply above, maybe IPFS and BitTorrent to help offload distributing binaries?

                                                                              1. 5

                                                                                I use Dreamhost for all of oilshell.org and it costs a few dollars a month, certainly less than a $15/month AWS bill.

                                                                                I don’t host any 300 MB binaries, but I’d be surprised if Dreamhost couldn’t handle them at the level of traffic of Zig (and 10x or 100x that).

                                                                                10 or 15 years ago shared hosting might not be able to handle it, but computers and networks got a lot faster. I don’t know the details, but they have caches in front of all their machines, etc. The sys admin generally seems very competent.

                                                                                If I hosted large binaries that they couldn’t handle, I would either try Bittorrent distribution, or maybe create a subdomain so I could easily move only those binaries somewhere to another box.

                                                                                But I would bet their caches can handle the spikes upon release, etc. They have tons of customers so I think by now the industry learned to average out the traffic over all of them.

                                                                                BTW they advertise their bandwidth as unmetered / unlimited, and I don’t believe that’s a lie, as it was in the 90’s. I think they can basically handle all reasonable use cases and Zig certainly falls within that. The only thing you can’t do is start YouTube or YouPorn on top of Dreamhost, etc.

                                                                                FWIW I really like rsync’ing to a single, low-latency, bare-metal box rather than using whatever “cloud tools” are currently in fashion. A single box seems to have about the same uptime as the cloud too.

                                                                                1. 7

                                                                                  A single box seems to have about the same uptime as the cloud too.

                                                                                  That’s… unpleasantly true. Getting reliability out of ‘the cloud’ requires getting an awful lot of things exactly right, in ways that are easy to get wrong.

                                                                                  1. 2


                                                                                  2. 5

                                                                                    I’ll add that I was a DreamHost customer because they fight for their users in court. The VPSes are relatively new. The customer service is hit and miss according to reviews.

                                                                                    Prgmr.com I recommend for being honest, having great service, and hosting Lobsters.

                                                                                    One can combine such hosts with other service providers. The important stuff remains on hosts dedicated to their users more than average.

                                                                                  3. 4

                                                                                    Free just means you aren’t paying for it. This means someone else is paying the cost for you. Chances are their $$$‘s spent is going to do something good for them, and not so good for you. Perhaps the trade off is worth it, perhaps it isn’t.

                                                                                    Assuming the poster is accurate and Cloudflare is a front for US intelligence, does it matter for what you are using it?

                                                                                    Of course, should the US Government be able to spy on people through companies like this is an entirely different question, and one that should see the light of day and not hide in some backroom somewhere.

                                                                                    1. 13

                                                                                      Free just means you aren’t paying for it. This means someone else is paying the cost for you. Chances are their $$$‘s spent is going to do something good for them, and not so good for you. Perhaps the trade off is worth it, perhaps it isn’t.

                                                                                      In Cloudflare’s case, one fairly well documented note is that free accounts are the crash test dummies:


                                                                                      The DOG PoP is a Cloudflare PoP (just like any of our cities worldwide) but it is used only by Cloudflare employees. This dogfooding PoP enables us to catch problems early before any customer traffic has touched the code. And it frequently does.

                                                                                      If the DOG test passes successfully code goes to PIG (as in “Guinea Pig”). This is a Cloudflare PoP where a small subset of customer traffic from non-paying customers passes through the new code.

                                                                                      I’d be curious if those customers are rebalanced and how often!

                                                                                      1. 30

                                                                                        Using free tier customers as limited guinea pigs is honestly a brilliant way to make them an asset without having to sell them to someone else. Whatever else cloudflare is doing with them, that one’s a really cool idea.

                                                                                    2. 3

                                                                                      Netlify is an option if your site is static, and it offers free pro accounts for open source projects (there’s also a tier that’s free for any project, open source or not, which has fewer features).

                                                                                      Disclaimer: I work there.

                                                                                      1. 1

                                                                                        Not free, but DigitalOcean Spaces (like S3) is $5/month for up to something like 5 GB, and includes a free CDN.

                                                                                      1. 5

                                                                                        Friendly warning to anyone manually updating, Firefox will trash any settings you have regarding updating. So if your machine is set for manual updates, and you install Firefox 70, it changes to automatic updates. Mozilla refuses to fix this:


                                                                                        1. 22

                                                                                          Firefox works fine when managed by, for example, a Linux distribution’s package manager. This necessarily means that the auto updater is disabled. So this use case is handled fine. It’s also open source and Mozilla is non-profit, so I’m not sure why you’ve chosen Firefox as your evil straw man. Your bug report is not productive and your childish antics have only wasted developers’ time, who have plenty of actual work to do.

                                                                                          1. 2

                                                                                            I think the bug report is legitimate, as are the explanation, resolution and workaround that have been provided. Your reply seems unnecessarily harsh to me, though I agree it would be appropriate for the submitter to accept the Firefox response.

                                                                                          2. 18

                                                                                            Tbh your behaviour on that bug wasn’t really acceptable. When a maintainer closes an issue please don’t re-open it simply because you disagree. As for the “bug” itself, you were given two possible workarounds. Yet for some reason you still expect developers to spend their time catering to your very obscure edge case.

                                                                                            1. 9

                                                                                              That’s interesting framing.

                                                                                              They broke the feature. It’s not me asking for something new; it’s me asking them to restore the previously working behavior. Before, if you wanted to permanently disable updates, you just went into about:config, toggled a preference, and were done. Now you have to manually create a JSON file, a file which is removed the next time you manually update. Just because you call it a “very obscure edge case” doesn’t make it so.
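For readers who haven’t hit this: the JSON file in question is Firefox’s enterprise policy file, policies.json, placed in a distribution directory next to the binary. It looks roughly like this (check Mozilla’s policy templates for the authoritative schema):

```json
{
  "policies": {
    "DisableAppUpdate": true
  }
}
```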

                                                                                              I imagine, especially in the Lobsters community, and especially in the current environment of well-deserved distrust of tech companies, that it’s not as obscure as you think or hope it is.

                                                                                              LOL just checked your bio:

                                                                                              Engineering productivity at Mozilla


                                                                                              1. 10

                                                                                                There is an XKCD which is precisely applicable to your situation.

                                                                                                Your use case is an edge case as far as the Mozilla developers are concerned, and since it’s a tiny, tiny minority of users relying on it (i.e., you), the rest of the userbase comes first. Consider yourself fortunate that the Mozilla developers were kind enough to tell you how to do what you want to do in future.

                                                                                                This is the price of using a piece of software with millions of users in a unique fashion I’m afraid - your use case is never going to trump those millions of users.

                                                                                                1. 11

                                                                                                  You seem to be under the impression that there’s a smoky back room at Mozilla HQ where they twirl their mustaches and cackle about how they’re deliberately taking away your freedom. Admittedly it’s been some years since I worked for Mozilla, and I was A) remote employee and B) didn’t work on the browser, but if there is such a room I certainly never heard about it, let alone got invited to go see it.

                                                                                                  And what I think from reading that bug report is that you used a feature (the distribution directory) for a purpose it wasn’t intended for, and then were unhappy when it behaved as documented, because it turned out not to support the use case you want. Your use case is apparently very important to you personally, but that doesn’t mean it has to be important to them, or that they have to support it. They have the freedom to decide not to support your use case; you have the freedom not to use their software. And in fact you’re better off than you’d be with some browsers, because you also have the freedom to grab the source, modify it to suit your use case, and use and distribute your fork. But you don’t and never will have a right – moral, legal, or otherwise – to force them to support your use case.

                                                                                                  1. 4

                                                                                                    You seem to be under the impression that there’s a smoky back room at Mozilla HQ where they twirl their mustaches and cackle about how they’re deliberately taking away your freedom.

                                                                                                    That is a strawman.

                                                                                                    Changes like this always have a reason. Usually, someone runs a study, or reviews a retrospective, and finds that like 20% (I don’t know the number, but I’m sure I could find it if I looked) of Firefox users had auto-updates disabled by some adware installer or whatever. And the only way Mozilla knows of preventing other software running on the same Windows machine from changing a setting is to hardcode it into the executable, where the Windows code integrity system will ensure it doesn’t get changed.

                                                                                                    That doesn’t change the fact that the solution here is removing power from the end user in ways that are frequently quite harmful. I’m pointing directly at Page Translator here. That kind of “collateral damage” is extremely messed up.

                                                                                                    Hard-coding auto-updates into the EXE probably isn’t that bad (running an outdated browser with known CVEs on the Internet is just stupid). Neither is the whole practice of shipping a blocklist (obviously, allowing the blocklist to be disabled in about:config, where adware installers can change it, would completely defeat the purpose of the blocklist). The fact that it ended with an add-on that clearly isn’t adware getting blocked, on the other hand, is a scandal.

                                                                                                    1. 7

                                                                                                      I read the Bugzilla bug linked from that post, and it appears that there was a policy change from “side-loaded extensions can execute remote-source code” to “they can’t”.

                                                                                                      I have a hard time seeing that as “a scandal”. Especially given how many times we’ve seen the pattern of an extension/add-on that used to be safe and gets taken over by an entity who abuses the extension’s privileges to do malicious things.

                                                                                                      So it seems there’s been a decision that nobody gets trusted to execute remote code from an add-on, and while there are certainly going to be examples like the translator add-on that intuitively feel like they should get special exceptions to that policy, special exceptions for the “good” add-on authors don’t scale.

                                                                                                      Meanwhile, the “freedom” arguments almost always really boil down to demanding that someone else write software in a way that the “freedom” supporter prefers. And I don’t see any principle of software freedom which supports forcing other people to write the things you want.

                                                                                                    2. 4

                                                                                                      You seem to be under the impression that there’s a smoky back room at Mozilla HQ where they twirl their mustaches and cackle about how they’re deliberately taking away your freedom.

                                                                                                      You missed last week’s meeting, BTW.

                                                                                                      1. 3

                                                                                                        We mostly talked about you and then assigned all the bugs to you. This is what happens when you miss a meeting.

                                                                                                    3. 7

                                                                                                      Across the industry, the risk of users running outdated versions of software is huge. It is prudent to take measures to auto-update and ensure that users are on the latest versions of software to reduce this risk. An inconvenience to you is a huge boon to me, and to the industry as a whole.

                                                                                                      1. 4

                                                                                                        That’s a false choice. If a user understands the consequences and is warned loudly and clearly before making this type of change, they should be allowed to make it. That’s why sudo and UAC on Windows exist.

                                                                                                        1. 4

                                                                                                          TBH, users have been trained to click “Allow” and “I Agree” until the dialogs go away.

                                                                                                          Decisions such as enabling auto-updates for everyone are based on statistics: they are much more likely to help than to hinder. Just displaying a dialog box is not enough these days.

                                                                                                          I too would like it if there were a simple toggle somewhere in settings to disable updates, but Firefox is a more complex project than people realize, and there are a ton of checks, balances, and teams working on different aspects of what is in essence a little virtualized operating system with a poor choice of view model for apps.

                                                                                                          I understand why you’re frustrated; I have my pet bugs too. The good thing is that you can change stuff: you can engage in constructive dialog and send a patch. And, like everyone, you need to be prepared for the team that develops the app not to want your patch or feature.

                                                                                                          The good news is that even if they don’t want that feature, there is nothing stopping you from making your own build at home or forking. Still, if you’re forking because you distrust big tech and don’t want auto-updates, then how the hell do you expect a single person to maintain security updates for a browser? I think auto-update is really good and brings in a ton of fixes.

                                                                                                          1. 2

                                                                                                            That’s a bad analogy. You should probably think about how the browsers that come with Windows get updated. Mozilla needs to weigh multiple issues and stakeholders here; it does not serve them well to cater to edge cases which significantly increase risk.

                                                                                                            This is also not a user “rights” issue, as your language suggests. Your rights are to take the open source code and make your own build if what Mozilla provides doesn’t work for you. Your time might be better spent looking at the overall update space and lobbying for a solution that gives you more of what you want while aligning with the high-level goals of Mozilla.

                                                                                                          2. 3

                                                                                                            I don’t think users end up in about:config without meaning to and accidentally turn off automatic updates. Mozilla’s response to this issue seems silly, since they should never have removed the about:config switch in the first place. At the same time, Mozilla’s decision to remove the distribution directory seems to positively affect a large number of non-technical users who would receive a custom copy of Firefox (maybe alongside another piece of software) and then try to install vanilla Firefox.

                                                                                                            Good for non-technical users, bad for corporations trying to unify rollouts of software updates. Users like @cup are probably insignificant to Mozilla in making this decision (as they represent a very small vocal minority of the Firefox userbase).

                                                                                                            1. 8

                                                                                                              I don’t think users end up in about:config without meaning to and accidentally turn off automatic updates.

                                                                                                              Third-party software, installed on the same computer, does that. A lot of the blocked add-ons have block descriptions like “overrides search behavior without user consent or control”. Because it’s adware.

                                                                                                              I’ve seen people get infected with that kind of thing; it comes bundled with another application that they installed. If it’s in about:config, then the third-party installer can just change it. If it’s hardcoded in the EXE, then the app can’t change it without re-signing Firefox, which will get their signing key revoked by Microsoft and Apple, and likely get them sued by Mozilla for trademark infringement.

                                                                                                              1. 6

                                                                                                                That’s sneaky. I guess the fact that I wasn’t aware of that goes to show that Mozilla had more of a point here than I thought they did. Thank you for informing me.

                                                                                                                I know that sandboxing on the desktop would solve this particular problem, but I’m afraid of the consequences of that on software development and particularly people learning to code. It’s pretty difficult to be exposed to real-world programming on a locked-down mobile device.

                                                                                                                1. 1

                                                                                                                  I don’t think that’s a problem.

                                                                                                                  • There are perfectly good ways that an operating system vendor could allow users to turn off the sandbox without allowing arbitrary applications to do that. I particularly like the Chromebook method where you use a literal jumper on the motherboard to switch it off; it’s really just a skeuomorph, to make sure the human understands that they’re doing something to their computer at a low level, but it seems effective enough.

                                                                                                                    Unfortunately, Mozilla’s attempt to ship an actual operating system didn’t go anywhere, so they never got the chance to implement anything like that.

                                                                                                                  • Breaking out of the sandbox is really only necessary if you want to do systems programming. I love systems programming, but most software is application code, and most of that is written with sandboxed systems like web-based JavaScript and spreadsheet macros in Excel. I can write that kind of stuff on locked-down mobile devices right now.

                                                                                                              2. 1

                                                                                                                Companies trying to unify rollouts of software are part of the problem. Many major malware incidents get investigated, and the findings include well-meaning administrators who wished to unify and manage updates but failed to update in certain situations.

                                                                                                                We shouldn’t maintain the pretense that admins will get this right 100% of the time. They are people and they will fail. Their efforts are best spent elsewhere, including work to encourage devs to test compatibility, giving admins more of a stake in software acquisitions, etc.

                                                                                                            2. 5

                                                                                                              Thanks for reporting this behavior, and also thank you for pointing out the potential conflict of interest of the person who criticized you for the way you reported it. It’s really unfortunate that the Mozilla devs are optimizing for silently updating the browser and making it difficult for users to disable this behavior. If anyone is aware of a fork of Firefox that doesn’t do this, I’d love to hear about it.

                                                                                                              1. 16

                                                                                                                The initial report was fine; it’s the way they kept reopening it that isn’t.

                                                                                                        1. 32

                                                                                                          First off, thanks for asking! One of the most demotivating things is when upstreams don’t care!

                                                                                                          I am speaking from the perspective of an OpenBSD port maintainer, so everything will have a fairly OpenBSD-centric slant.

                                                                                                          Responses to your questions:

                                                                                                          What can be done to minimize friction with building and installation?

                                                                                                          Use one build tool. Know it well. Oftentimes the build tool has already solved problems you are attempting to solve. For example:

                                                                                                          The other end of “not using one tool” is reaching for other build tools from inside an existing one. This just makes things suck, as packagers have to untangle a heap of things to make the program work.

                                                                                                          Is a curl | bash pattern okay enough for packagers or should it be expected that they will always try to build things the hard way from source?

                                                                                                          Always build from source. I call these scripts “icebergs”, as they often look small until you glance under the surface. 90% of the time, things that “install” via this method use #!/bin/bash, which is absolutely not portable. Then, typically, the script will pull down all the dependencies needed to build $tool (this usually involves LLVM!!).. then the script will attempt to build said deps (99.9% of the time without success) with whatever build tool/wrapper the author has decided to use. Meanwhile, all the dependencies are available as packages (most of the time with system-specific patches to make them build), and they could have simply been installed using the OS’s package manager.
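                                                                                                          As a sketch of the alternative (the helper and tool names here are just placeholders, not a real standard), an install script can stay in plain POSIX sh and simply check that the packaged dependencies are present, instead of fetching and rebuilding them:

```shell
# Hypothetical sketch: verify build dependencies portably, then tell the
# user to install anything missing with the OS package manager, rather
# than pulling down and rebuilding the world. `command -v` is POSIX;
# bashisms like `type -p` or `[[ ... ]]` are not.
check_deps() {
    missing=""
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
    done
    if [ -n "$missing" ]; then
        echo "missing build dependencies:$missing" >&2
        echo "install them with your OS package manager" >&2
        return 1
    fi
    echo "all build dependencies present"
}
```

                                                                                                          Something like `check_deps cc gmake pkg-config` covers most of what these icebergs actually need, without assuming bash or rebuilding LLVM.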

                                                                                                          Are there entire languages or ecosystems to avoid for software that desires to be packaged?

                                                                                                          NPM. We tried to make packages out of npm early on and ultimately hit version gridlock. There are still things we package that build npm things (firefox, chromium, kibana) - but they are shipping a pre-populated node_modules directory.

                                                                                                          Are there projects that serve as positive or negative examples that could be referenced?

                                                                                                          I have linked a few examples above, but if you want many, dig down to the patches directory for any port in the OpenBSD ports tree, you can see exactly what we deal with when porting various things.

                                                                                                          My goal with the links is not to say “look how stupid these guys are!”, it’s simply to point out that we could all stand to know our tooling / target users a bit better.

                                                                                                          In general

                                                                                                          The best thing you can do is understand the tools you are using. Know their limits. Know when a tool is the right one for a given job.

                                                                                                          Also don’t think of your build environment as something that end users will have to deal with. Doing so creates a false sense of “I must make this easy to build!” and results in non-portable things like curl | bash icebergs. The vast majority of the time it will be package maintainers who are working with it.

                                                                                                          1. 11

                                                                                                            For fun I made a quick and dirty list of ports and their respective patch count, here are the top 4:

                                                                                                                 710        chromium
                                                                                                                 609        iridium
                                                                                                                 331        qtwebengine
                                                                                                                 249        posixtestsuite
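                                                                                                            For anyone who wants to reproduce this, a quick-and-dirty sketch over a ports checkout (the /usr/ports path and the category/port/patches layout are assumptions about the tree being scanned; the helper name is made up):

```shell
# count_patches: print "<count> <port>" for every port directory that
# carries patches, largest first. Assumes a category/port/patches layout
# under the given ports tree.
count_patches() {
    portsdir=${1:-/usr/ports}
    for d in "$portsdir"/*/*/patches; do
        [ -d "$d" ] || continue
        n=$(find "$d" -type f | wc -l)
        port=${d#"$portsdir"/}
        printf '%6d  %s\n' "$n" "${port%/patches}"
    done | sort -rn
}
```

                                                                                                            Then `count_patches /usr/ports | head -n 4` produces a list like the one above.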
                                                                                                            1. 8

                                                                                                              I agree with nearly everything you said here, but there is one thing I would like to push back on:

                                                                                                              NPM. We tried to make packages out of npm early on and ultimately hit version gridlock. There are still things we package that build npm things (firefox, chromium, kibana) - but they are shipping a pre-populated node_modules directory.

                                                                                                              I went through the very painful process of packaging my own upstream software for Debian and ran into this rule about having to package all the dependencies, recursively. I don’t think a policy that mandates all NPM packages be separate system packages is reasonable. I ended up rewriting most of my own dependencies, even creating browserify-lite, because reimplementing Browserify from scratch with no dependencies was easier than packaging them. Mind you, this was a mere build dependency.

                                                                                                              On top of this, some JS dependencies are small, and that’s fine. But the Debian ftpmasters didn’t accept dependencies smaller than 100 lines, while also not accepting a pre-bundled node_modules.

                                                                                                              The policy is outdated. I’ve heard all the arguments, I’m familiar with the DFSG, but I think it’s too strict with regard to static dependencies. Static dependencies are sometimes quite appropriate. In particular, just bundle the node_modules folder. It’s fine. If there’s a package in particular that should be system-wide, or you need patches for the system, go for it: make that package a system package, and allow multiple versions to solve version lock.

                                                                                                              1. 7

                                                                                                                Static dependencies are sometimes quite appropriate. In, particular, just bundle the node_modules folder. It’s fine.

                                                                                                                It’s not. If it’s just a “blob” basically, how do you then know when one of those modules is in need of a security upgrade? The entire concept of packaging is based on knowing which software is on your system, and to avoid having multiple copies of various dependencies scattered around.

                                                                                                                The real problem is the proliferation of dependencies and the break-neck speed at which the NPM ecosystem is developed.
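                                                                                                                To be fair, a vendored tree is not completely opaque; a packager can at least enumerate what is bundled and diff it against advisories. A crude sketch (the helper name is invented, and sed here stands in for a real JSON parser):

```shell
# list_bundled: print "name version" for every package.json found under
# a vendored node_modules directory, so the contents can be checked
# against security advisories. Crude: a real implementation would use a
# proper JSON parser rather than sed.
list_bundled() {
    find "${1:-node_modules}" -name package.json -type f | while read -r f; do
        name=$(sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$f" | head -n 1)
        version=$(sed -n 's/.*"version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$f" | head -n 1)
        [ -n "$name" ] && [ -n "$version" ] && printf '%s %s\n' "$name" "$version"
    done | sort -u
}
```

                                                                                                                That still leaves the packager, not the package database, responsible for actually doing the diffing, which is the part distros object to.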

                                                                                                                1. 2

                                                                                                                  how do you then know when one of those modules is in need of a security upgrade?

                                                                                                                  This is the most common argument I hear. The answer is simple: the security upgrade is upstream’s problem.

                                                                                                                  The distribution / package manager must accept this fact. Upstream could, for example, have a security issue that is in its application code, not any of its dependencies. Due to this fact, distros already must track upstream releases in order to stay on top of security upgrades. It is then, in turn, upstream’s job to stay on top of security upgrades in its own dependencies. If upstream depends on FooDep 1.0.0, and FooDep releases 1.0.1 security update, it is then upstream’s job to upgrade and then make a new upstream security fix release. Once this is done, the distro will already be tracking upstream for security releases, and pull this fix in.

                                                                                                                  I don’t buy this security upgrade argument. Static dependencies are fine.

                                                                                                                  The real problem is the proliferation of dependencies and the break-neck speed at which the NPM ecosystem is developed.

                                                                                                                  The real problem is how productive the community is and how successful they are at code reuse? No. The real problem is outdated package management policies.

                                                                                                                  1. 3

                                                                                                                    This is the most common argument I hear. The answer is simple: the security upgrade is upstream’s problem.

                                                                                                                    That’s insane. First of all, CVEs are filed against specific pieces of software. For truly critical bugs, distribution maintainers often coordinate disclosure in such a way that the bugs are fixed before details are released into the wild. This allows users to patch before being at too much risk.

                                                                                                                    Expecting that all upstreams which include said package statically will update it in a timely manner means that those upstream packages need to be conscientiously maintained. This is a pipe dream; there are lots and lots of packages which are unmaintained or maintained in a haphazard way.

                                                                                                                    As a user, this is exactly what you don’t want: if there’s a critical security update, I want to be able to do “list packages”, check if I’m running a patched one, and if not, a simple “update packages” should pull in the fix. I don’t necessarily know exactly which programs are using exactly which dependencies. Using a package manager means I don’t have to (that’s the entire point of package managers).

                                                                                                                    So, if some packages on my system included their dependencies statically, I would be at risk without even knowing it.
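                                                                                                                    That “check if I’m running a patched one” step is, at bottom, just a version comparison against the fixed version from the advisory. A rough sketch (the helper name is made up; real package managers do this properly, e.g. dpkg --compare-versions, and `sort -V` is only a stand-in for their ordering rules):

```shell
# is_patched: succeed if the installed version is at least the fixed one.
# `sort -V` approximates package-manager version ordering; it is not
# identical to dpkg's or a BSD pkg tool's comparison rules.
is_patched() {
    fixed=$1
    installed=$2
    [ "$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n 1)" = "$fixed" ]
}
```

                                                                                                                    With statically bundled dependencies there is no per-module version recorded in the package database to feed into a check like this, which is exactly the problem.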

                                                                                                                    If upstream depends on FooDep 1.0.0, and FooDep releases 1.0.1 security update, it is then upstream’s job to upgrade and then make a new upstream security fix release.

                                                                                                                    So instead of updating 1 package, this requires the (already overworked) package maintainers to update 50 packages. I don’t think this is progress.

                                                                                                                2. 4

                                                                                                                  I agree, individual packaging of recursive deps is a nightmare.

                                                                                                                  The one thing that makes it a “feature” on OpenBSD is the forced use of ftp(1), which has extra security measures like pledge(2). I trust ftp(1) much more than npm/node/other tools when it comes to downloading things.

                                                                                                                  One super downside to the “use ftp to fetch” method is that unless the tools in question (npm, go get, gem) expose their dependency resolution in a way that can be used by the ports framework/ftp.. you are basically stuck re-implementing dependency resolution, which is stupid.

                                                                                                                  I didn’t know about the 100-line limit. That’s interesting! I can see it being a pain in the butt, but I also appreciate the “minimize dependencies” approach!

                                                                                                                3. 2

                                                                                                                  This is all excellent advice. I would also add that commercial and other financially backed publishers can’t outsource this, and can’t shortchange this. It’s important to have a distro liaison if you care about the end user experience. If you don’t have this you end up in the GNOME or Skype or NPM or Docker situations. Invest in testing the packages. Monitor the bug reports. Stay ahead of changes on the dependency tree. And why not, sponsor the bug bash and the BoF and all of that.

                                                                                                                1. 3

                                                                                                                  I haven’t looked in years but last I checked the source code for this game was a really interesting combination of Haskell, Pascal, and C++.

                                                                                                                  1. 1

                                                                                                                    Still is!

                                                                                                                    1. 2

                                                                                                                      Haskell looks like it’s the server, and C++ seems to be the GUI, but what are they using Pascal for?

                                                                                                                      1. 1

                                                                                                                        The whole core engine is in Pascal.

                                                                                                                    2. 1
                                                                                                                    1. 4

                                                                                                                      Musl has some really nice commit messages: https://git.musl-libc.org/cgit/musl/log/

                                                                                                                      1. 3

                                                                                                                        I had a brief scan through the comments when this was on HN, and I wasn’t satisfied with any of the excuses for not using Linux. Everything mentioned was way less of a problem than the things outlined in this blog post.

                                                                                                                        1. 12

                                                                                                                          The guy is a professional Mac developer. Isn’t that a good enough reason to stay on MacOS?

                                                                                                                          1. 2

                                                                                                                            Yeah that’s a good reason. That’s why I have macOS, Windows, and Linux computers as well.

                                                                                                                            Of the three, Linux gives me the least overall amount of sysadmin chores to do.

                                                                                                                          2. 7

                                                                                                            I am on Mac for work (Linux at home). Once you install all of the GNU utils, it acts a lot like a locked-down Linux machine would.

                                                                                                            But I find that using the same OS as your coworkers relieves stress when a program does not work. If I am on Mac and Cisco web meetings crashes, it is the software’s fault. If I am on Linux and it breaks, in the eyes of my employer it is my fault for being the one guy in the office using Linux.

                                                                                                                            1. 4

                                                                                                                              This is also a good reason, although depending on your job you might not need to do this. At my last job, the fact that I ran NixOS while everyone else was on macOS made it possible to run the code locally and make improvements to the build system while others had to ssh to dev environments and were not in a good position to fearlessly fiddle with their environment, and thus not the build system either.

                                                                                                                          1. 3

                                                                                                                            The primary issue I have with using the prefers-color-scheme media query is it doesn’t allow me to specify which theme I’d like per site. If you’re considering implementing this, I’d suggest adding a setting to your site with three options: light, dark, and system. That way, users on systems that don’t have OS-level theme support can still benefit from a dark/light mode, and users have the ability to override the system default should they choose to do so.
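                                                                                                                            A minimal sketch of that three-option setting (the `data-theme` attribute and variable names here are hypothetical, not from any particular site): the default follows the OS via `prefers-color-scheme`, and an explicit choice stored by the site’s toggle overrides it.

                                                                                                                            ```css
                                                                                                                            /* "System" option: follow the OS preference by default. */
                                                                                                                            :root { --bg: #fff; --fg: #111; }
                                                                                                                            @media (prefers-color-scheme: dark) {
                                                                                                                              :root { --bg: #111; --fg: #eee; }
                                                                                                                            }

                                                                                                                            /* Explicit "light"/"dark" choice (attribute set by the site's
                                                                                                                               toggle, e.g. from localStorage) wins over the media query. */
                                                                                                                            :root[data-theme="light"] { --bg: #fff; --fg: #111; }
                                                                                                                            :root[data-theme="dark"]  { --bg: #111; --fg: #eee; }

                                                                                                                            body { background: var(--bg); color: var(--fg); }
                                                                                                                            ```

                                                                                                                            Leaving the attribute off entirely gives the “system” behavior, so users who never touch the setting get the OS-level theme automatically.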

                                                                                                                            1. 10

                                                                                                                              Strong disagree. This feature is actually well-designed in that the configuration is correctly external to the website. If the user says they prefer dark, then they prefer dark. Leave it to browser extensions - which already exist - to allow overriding this setting on a per-website basis.

                                                                                                                              1. 3

                                                                                                                                That’s like saying because a user allowed notifications on one site, they must want notifications on every site.

                                                                                                                                1. 4

                                                                                                                                  No, it’s like saying sites should not make it configurable whether they notify you; instead they should rely on the browser’s permissions system.

                                                                                                                              2. 2

                                                                                                                                It may be fine to have an override if “system” is the default. The problem with any other approach to user selection is that if you switch your system theme, any page you visit will be jarring. It’s like having flash photography right in your face while browsing, and it is a serious problem. An analogy to this might be the way TV commercials used to be super loud. Sure, you could use the volume controls to make it work for you, but for that short period you have to deal with something unpleasant.

                                                                                                                                If I had to choose between dark mode by system setting and no dark mode, I’d choose the former. For me, this is a clear step forward as it creates a consistent level of lightness. If a selection requires user intervention, then there is always going to be the same problem. Similar issues would happen if it required JavaScript to run (how does it even start displaying a partially loaded page? or what if JS is disabled out of privacy or efficiency concerns?).

                                                                                                                                One way to improve it might be to have the browser support a per-site override rather than having each site implement one. In that case, this CSS media query is still the right way to go.

                                                                                                                              1. 5

                                                                                                                                Some devs are not very happy with that new notary thing.

                                                                                                                                1. 4

                                                                                                                                  It only applies to binaries downloaded with a browser. Anything from a game launcher or similar is unaffected.

                                                                                                                                  1. 1

                                                                                                                                    Isn’t it related to every executable (and kext) that is being built by every developer?

                                                                                                                                    1. 3

                                                                                                                                      No, it only applies to binaries marked as quarantined. It’s up to the transferring application to set that extended attribute. Compilers don’t. Browsers do.

                                                                                                                                      Stuff you build for yourself is unaffected. Same goes for whatever pre-built binary brew downloads.
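                                                                                                                                      As an illustrative sketch (file paths here are hypothetical), you can inspect the quarantine attribute on macOS with the `xattr` tool:

                                                                                                                                      ```shell
                                                                                                                                      # A file saved by a browser carries the quarantine xattr
                                                                                                                                      # (value is roughly "flags;hex-timestamp;agent;UUID"):
                                                                                                                                      xattr -p com.apple.quarantine ~/Downloads/some-tool.zip

                                                                                                                                      # A binary you compiled locally has no such attribute,
                                                                                                                                      # so Gatekeeper never evaluates it:
                                                                                                                                      xattr -p com.apple.quarantine ./my-own-build

                                                                                                                                      # Removing the attribute (on your own machine, at your own
                                                                                                                                      # risk) stops Gatekeeper from checking that file:
                                                                                                                                      xattr -d com.apple.quarantine ~/Downloads/some-tool.zip
                                                                                                                                      ```
                                                                                                                                      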

                                                                                                                                      1. 1

                                                                                                                                        Compilers don’t. Browsers do.

                                                                                                                                        The Notary service does it. Not the browser. The browser (well, only Safari AFAIK) leaves a note so that the OS can tell you where a document, file or binary came from. But that is not the notarization process. That happens on Apple’s servers. You have to send your app to the Notarization API, and you get back a binary with some special signed metadata attached.

                                                                                                                                        If it were as simple as the browser adding this then every piece of malware would be doing that.

                                                                                                                                        Note that this does not cost money. You don’t need a $100 developer subscription. All you need is an Apple ID.

                                                                                                                                        1. 1

                                                                                                                                          I was talking about setting the quarantine xattr. Unless an executable has that flag set, the OS will execute it even when it’s not signed.

                                                                                                                                          Of course notarization has to be done by Apple and not each user individually. That was the main goal of the change.

                                                                                                                                    2. 1

                                                                                                                                      I don’t get it, how is downloading a binary with a browser different than a game launcher?

                                                                                                                                      1. 3

                                                                                                                                        A game launcher (e.g., Steam) is verified. It’s now Steam’s job to police the contents of their platform. If they fail, Apple can blacklist Steam for everyone at a moment’s notice, so Valve is incentivized not to ship malware through Steam.

                                                                                                                                        1. 2

                                                                                                                                          Browsers set the gatekeeper flag, game launchers don’t. It sounds stupid.

                                                                                                                                          1. 0

                                                                                                                                            Browsers don’t set any flags.

                                                                                                                                            1. 2


                                                                                                                                              I beg to differ

                                                                                                                                              1. 1

                                                                                                                                                Yes, but this is not used by Gatekeeper to decide whether or not to run a binary. This is just for the notification you will see in the Finder when you open the app. Notarization is a signing process. If it were just as simple as adding some metadata to a file, then every piece of malware would be doing that.