1. 7

    I don’t see why we can’t have secure and open systems. Doesn’t open source show this is just as feasible as security through obscurity?

    It’s great that you are happy to have Apple repair your device. I’ve chosen this myself in the past. But just as I wouldn’t want to be forced to take my car to the dealer, I don’t want to be forced to go to Apple to repair my phone.

    Sorry, but I don’t find your arguments compelling. We should have parts and specifications regardless of how long devices last and how beneficial it is to integrate components.

    1.  

      It’s not security through obscurity, it’s security through physical barriers. If it’s harder to get to the components, it’s harder for an attacker to mess with them. That’s why a Trusted Platform Module that holds keys is in the CPU package instead of being a separate component. If an SSD or RAM can’t be pulled out of its socket, that prevents some attacks on it, or at least makes them a lot more difficult and time-consuming, meaning you’ve blocked evil-maid attacks and probably evil-semi-skilled-police-forensics-lab attacks.

      1.  

        Ok, but how does a manufacturer providing components to third parties negate any of this? If they provide a security module, it won’t have the original keys. If they integrate RAM into the CPU, they can still provide replacements for those components.

    1. 12

      This is the first time I’ve seen a lawyer argue that a license that differentiates between commercial and non-commercial use is a thing that you can do. The other advice I’ve read suggests that it’s too fraught with corner cases to be allowed. For example:

      • If I use a music player to listen to music on my headphones while I’m working, is that a commercial use?
      • If I put ads on my blog, is it a commercial use of the text editor that I use to write the entries or the CMS that I use to host it?
      • If a company uses the software for a purpose that is not connected to their commercial activities, is that a commercial use?
      • If I use the software for some free project and someone decides later to give me a donation to support my work, am I now violating the license?
      • Does a fee-paying university count as a commercial use? What if it’s also a registered charity?

      Splitting the world into commercial and non-commercial activities is normally a problem that lawyers say is too hard.

      1. 20

        Well, Creative Commons did it back in the day by adding the NC variants, in an intentionally flexible way:

        The inclusion of “primarily” in the definition recognizes that no activity is completely disconnected from commercial activity; it is only the primary purpose of the reuse that needs to be considered.

        In their experience there weren’t many conflicts over the definition. So I guess (as was recently said about engineering) “commercial use is like pornography, I know it when I see it” – and that’s good enough.

        1. 5

          In all honesty, the whole software licensing idea is a bit annoying and not as useful as most people think.

          I put the MIT license on code I push to GitHub because they force me to use a license. But in truth, I have no way to enforce it in most cases. Nor would I care about offenses in many cases.

          I wish software authors would be less possessive about their code and put the focus on the code itself rather than overhead. I miss the days when one would post code online and whoever wanted it could do whatever they wanted with it, without bringing the boring licensing discussions to attention. Attribution would naturally occur to an acceptable level, given a good community with enough well-intended people.

          I also don’t quite agree with the concept of paying for a copy of software and not being able to do whatever one wants with it, within reasonable limits such as non-usurpation. I understand it is a reality today, and perhaps even the practice best adapted to today’s economy, but it is a practice that should be questioned. Is it really ethically correct? I don’t think so.

          1. 17

            For me, licenses are not for the code publishers, but rather the code consumers.

            If you publish code without a license, then in my jurisdiction it’s technically copyrighted by default. I’m legally not allowed to use it at all, and open myself up to legal liability if I do. After I make my millions, how do I know you won’t some day take me to court and demand a percentage of that? By putting a license on your code, you’re giving people peace of mind that you’re not gonna turn around and try to sue them later.

            1. 6

              Agreed. At work a few years ago I copied-and-pasted a page of useful code from a gist I found on GitHub, including the comment identifying the author, and I added a comment saying where I got it from.

              Before our next release, when we had to identify any new open source code we were using, I added a reference to that file. The legal department then became worried that there was no license associated with it. Someone ended up tracking down the author and asking him, and the author assured us he had no claim on it and put it in the public domain.

            2. 8

              I wish software authors would be less possessive about their code and put the focus on the code itself rather than overhead.

              Unfortunately, this attitude only leads to mass exploitation of developers and enrichment of corporate interests.

              The world is full of assholes who will take advantage of the free work and good will of others and give nothing back.

              The world is also full of useful idiots who will give over their stuff to the aforementioned assholes and then, years later after discovering that you can’t pay rent with Github stars or HN posts, cry and piss and moan about how they were ripped off.

              So, yeah, licenses are important.

              1. 10

                You can’t “exploit” someone by taking [a copy of] what they’re giving away for free. Free means free.

                If you create stuff and don’t charge money for it but have the expectation that people will give you money for it anyway, or at least recompense you somehow … then you are either living in a small traditional village culture, or an anarchist commune. In both of those environments there is such a social contract*. If you’re not, you are indeed an idiot, unless you encumber your software with a license that forces such recompense.

                I don’t believe most open source contributors who don’t use copyleft licenses are idiots. I believe they genuinely make their software available for free and don’t expect to see a dime directly from it.

                In my case I do so to give back to the world, and because having people use and appreciate what I’ve made makes me feel good, and because it enhances my reputation as a skilled dude to whom my next employer should pay a handsome salary.

                * I highly recommend Eric Frank Russell’s 1940s SF story “…And Then There Were None”, about a colony planet that adopts such a society, inspired by Gandhi, and what happens to a militaristic galactic empire starship that rediscovers the planet.

                1. 6

                  You can’t “exploit” someone by taking [a copy of] what they’re giving away for free.

                  I would argue that you absolutely can if you take something offered freely, make a profit at it, and do not somehow pay that back to the person who helped you out. It’s somewhat worse for many maintainers because there is active pressure, complaining, and hounding to extract still further value out of them.

                  I don’t believe most open source contributors who don’t use copyleft licenses are idiots. I believe they genuinely make their software available for free and don’t expect to see a dime directly from it.

                  Not idiots–useful idiots. It’s a different thing.

                  I think there is for many of us a belief that we give away our software to help out other developers. I think of neat little hacks I’ve shared specifically so other devs don’t ever have to solve those same problems, because they sucked and because I have myself benefited from the work of other devs. This is, I would argue, an unspoken social compact that many of us have entered into. That would be the “not directly see a dime” you refer to, I think.

                  Unfortunately, it is obvious that as a class we are not recouping the amount of value we generate. It is even more painful because it’s a choice that a lot of developers–especially web developers, for cultural and historical reasons–sleepwalk through.

                  Consider Catto and Angry Birds, right? Dude wrote Box2D (without which you don’t really get Angry Birds as a physics game) and never saw (as reported anyways) a red cent of the 12B USD in revenue they booked in 2012. That’s insane, right? There’s no world in which that is just.

                  (One might argue “@friendlysock, ours is not a just world.” In which case, sure, take all you can and give nothing back, but fucking hell I’m not gonna pretend I don’t find it in equal measure sad and offensive.)

                  Our colleague I’m responding to is exactly that sort of person that a company, investor, or founder loves–yes, yes, please, don’t think too hard about licenses, just put your work in the public domain! Don’t worry your pretty little head about getting compensated for your work, and most certainly don’t worry about the other developers you put out of a job! Code wants to be free, after all, and don’t fret about what happens to development as a career when everything we need to write has either been written or can be spun whole-cloth by a handful of specialists with the aid of GPT descendants!

                  I suspect our colleague means well, and lord knows I wish I could just focus on solving neat problems with code, but we can ill afford to ignore certain realities about our industry.

                  1. 5

                    I would argue that you absolutely can if you take something offered freely, make a profit at it, and do not somehow pay that back to the person who helped you out.

                    Nah, I’ve published MIT stuff, and my take is - go for it, commercialize the hell out of it, you don’t have to pay me anything.

                    The point of MIT is to raise the state of the art, to make the solution to a problem universal. That includes corporations. No reciprocity is required: the code being there to be used is the point of releasing it.

                  2.  

                    If you’re not, you are indeed an idiot, unless you encumber your software with a license that forces such recompense.

                    Well, that’s exactly the point of the article. If you don’t want to be exploited, don’t use MIT, but instead use this or this license.

                  3.  

                    I think you overestimate how common those “idiots” are (I disagree that the world is “full” of them, as snej explains in the sibling comment), maybe due to the occasional cases that get a lot of attention, and I think you underestimate how a spirit of giving can benefit the commons, for genuine non-financialized benefit to the giver and others. Copyleft hasn’t solved the domination problem, and with AI-(re)written code being a likely dominant future force, I won’t be surprised to see license relevance decline. There are other approaches to the world’s problems than licenses, and maybe in some cases restrictive licenses trap us in local minima.

                  4. 4

                    I feel similarly, in terms of over-focusing on licenses, and I don’t care what the not-well-intentioned people do with most of the code I put online; not that I would never speak out, but life’s too short and I’d rather focus on other ways to convey my values and have a positive impact. (This isn’t a statement against other people using copyleft or non-commercial licenses; I still consider using them in some cases.) Two licenses that might fit those goals better than MIT are the public-domain Unlicense and the WTFPL.

                    With the future looking like it’ll be full of AI-assisted code trained on every open codebase, we need solutions other than licenses more than ever. “Computer, generate me a program in Zig that passes the LLVM test suite in the style of Fabrice Bellard.”

                      1. 5

                        The problem with some licenses like the Unlicense is that not all jurisdictions allow you to voluntarily place your work in the public domain, so in those jurisdictions the license is void.

                        1. 5

                          Thanks for pointing that out; do you know what the best alternative is? The Unlicense Wikipedia page says the FSF recommends CC0 instead.

                          From Wikipedia on CC0:

                          Or, when not legally possible, CC0 acts as fallback as public domain equivalent license.

                          1.  

                            The Unlicense is also intended to do exactly that. The “Anyone is free…” and the “AS IS” paragraphs are the fallback.

                            1.  

                              While the FSF recommends the CC0 for non-software content, they do not recommend it for software. The OSI has similar concerns.

                        2.  

                          Jim Weirich (author of rake, rest in peace) used the MIT license for most of his work but a few smaller projects used this simple license:

                          You are granted permission to read, copy, modify, redistribute this software or derivatives of this software.

                          It’s important to grant at least some license, otherwise (as I understand it) in the US you do not have any rights to make copies of the work unless you are the copyright holder or are granted a license. There is a lot of old software in the world where the author has passed away or otherwise moved on, without ever granting an explicit license, leaving the software to sit unused until the copyright expires.

                          (I am not a lawyer and this is not legal advice)

                          1.  

                              What happens if you copy-paste a 20-line script from a blog and include it in the product of a private company of yours which doesn’t publish its code?

                              It’s not like the open source police will read all your source files and search line by line to try to find it out there on the web. If anything, most companies have a ton of low-quality code that no one wants to look at.

                            1.  

                              I think you are making the point that a license does not in practice restrict someone from using your code under terms not granted by the license; I agree.

                              You wrote that you wished “software authors would be less possessive about their code and put the focus on the code itself rather than overhead”. I also agree with that sentiment, but I do not believe that implies publishing code “without bringing the boring licensing discussions to attention” (which I interpreted as “without a license”) is the best path to putting the focus on the code.

                        3. 3

                          The most common thing that I see is a pair of products. Product Community Edition is MIT or BSD or AGPL or, occasionally, GPL, and comes with a git repo and a mailing list, and a refusal to take patches unless accompanied by an IP transfer. It’s always free.

                          Product Business Edition or Enterprise Edition is licensed on commercial terms and includes at least one major feature that businesses feel is a must-have checkbox item, and some amount of support.

                          I used to see a bunch of open source products where the main (usually sole) dev sold a phone app that went with the product, in order to raise some money. That seems less popular these days.

                          1.  

                            As you and I have discussed here before, it is quite reasonable to talk about Free Software licenses which are effectively non-commercial. The licenses I enumerated at that time are uniform in how they would answer your questions: yes, all of those things are allowed, but some might be unpalatable to employers. Pleasingly, on your third point, a company would be afraid to try to use Free Software provided under these licenses, even for purposes outside their charter of commerce.

                            1.  

                              I got something slightly different from reading the post; it’s not “you can differentiate between commercial and non-commercial” in a license; it’s “if you want to differentiate between commercial and non-commercial, then don’t dual-license using the MIT license, because that creates ambiguity”.

                              1. 5

                                Just to be pedantic, it doesn’t create ambiguity. MIT pretty much lets anyone use it, where your intention was probably not that. Therefore, the issue isn’t ambiguity, it’s redundancy.

                              2.  

                                I don’t see why one couldn’t write a software license that differentiates between commercial and non-commercial use, using whatever criteria the license writer wants for edge cases. That will probably end up not being a free software license - a license that attempts to judge what kinds of uses of software count as “commercial” and legally forbid them limits user freedom to use that software in a way incompatible with the notion of free software - and this will affect free software advocates’ willingness to use software licensed under such terms. But there are plenty of non-free software licenses in this world, what’s one more?

                              1. 10

                                +1. I’ve been bouncing off of LISP’s [REDACTED] syntax since 1980 and have no desire to spend a bunch of time trying to read or write it, but SICP is an important book that I’d like to read.

                                JS is by no means my favorite language either, but I’ve used it before, and it’s fairly readable. Its approachability and popularity should increase the audience for the book, which can’t be a bad thing.

                                1. 3

                                  It’s funny because I was reading through the version cndreisbach posted above, and it looks so strange to me without the S-expressions. Like, in my head the underlying assumption is that everything is S-expressions and I work in various syntactic sugars over them.

                                  1. 2

                                    If it were sugary enough I wouldn’t mind, but I haven’t seen any LISPs that allow a more, uh, human-oriented syntax. (For example, something on the level of Smalltalk syntax would be nice.)

                                    1. 5

                                      There is OpenDylan which is lispy but with a more algol-adjacent syntax.

                                      You could do a more natural language version of Lisp with Racket. It has all the tools at your disposal there from the start. You might enjoy Beautiful Racket.

                                      1.  

                                        I’ve heard Racket has good tools for building new syntaxes (syntaces?). Maybe I’ll try it out.

                                      2.  

                                        For me, a lifelong beginner, LISPs are far simpler to understand and program in than most other languages.

                                        1.  

                                          I mean when I’m writing Haskell or Prolog or C#, they feel like sugar. In many cases I prefer sugar. It’s nice to read and helps my visual system find errors.

                                    1. 2

                                      Pascal enforced this — there was no “return” at all, you had to fall off the end of the function. I remember that it made error handling really awkward: if your function sequentially called a bunch of functions that might return errors, you generally ended up with a lot of deeply-nested IF blocks.
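
                                      Roughly the shape that forced, sketched in C-style code with hypothetical step functions, next to the early-return style Pascal didn’t allow:

                                          enum Status { OK, ERR_ONE, ERR_TWO, ERR_THREE };

                                          // Hypothetical fallible steps.
                                          Status step_one()   { return OK; }
                                          Status step_two()   { return OK; }
                                          Status step_three() { return OK; }

                                          // Single-exit style: every fallible call adds a level of nesting.
                                          Status run_single_exit() {
                                              Status result = OK;
                                              if (step_one() == OK) {
                                                  if (step_two() == OK) {
                                                      if (step_three() != OK) result = ERR_THREE;
                                                  } else {
                                                      result = ERR_TWO;
                                                  }
                                              } else {
                                                  result = ERR_ONE;
                                              }
                                              return result;  // the one and only exit point
                                          }

                                          // Early-return style, for contrast.
                                          Status run_early_return() {
                                              if (step_one() != OK)   return ERR_ONE;
                                              if (step_two() != OK)   return ERR_TWO;
                                              if (step_three() != OK) return ERR_THREE;
                                              return OK;
                                          }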

                                      1. 2
                                        1.  

                                          Nim doesn’t enforce this but follows this tradition. If the last statement in a block produces a value that isn’t assigned to anything, it is returned; alternatively, you can set the implicit result variable.

                                        1. 10

                                          It’s not about modernity, but ability to fix things in a timeframe shorter than decades. C has fully embraced ossification as a symbol of stability and compatibility, and C++ has made the C community prejudiced against any new language features.

                                          In C, there’s no hope of getting even the smallest, most broken things fixed in a reasonable time. It takes 5 years or longer to get something into the standard (if it gets in at all), several years until it’s implemented, then a few more years before laggard Linux distros update their compilers, and then projects will wait a couple more years just in case someone still has an old compiler. If you support MSVC, that’s an extra 10 years of waiting until Microsoft puts it on their roadmap and writes a sub-par implementation. And then you will still run into projects that insist on staying on C89 until the end of time.

                                          In Rust, for small warts, the time between “this is stupid, we should fix it” and having it fixed in production is 3 months, maybe a year.

                                          1. 10

                                            There came a time when I got tired of running as fast as I could just to stay in place. And it seems to me that with modern languages du jour, that’s all you are doing—running as fast as you can just to stay in place.

                                            1. 3

                                              Depends what you use — Node.js has a senseless policy of making semver-major ( = breaking) releases every 3 months, which ripples through the JS ecosystem. Swift went through a lot of churn too.

                                              OTOH Rust stabilized over 6 years ago, and has kept very stable. I have a project from 2016 that uses Rust + JS + C. The Rust builds perfectly fine, unchanged, with the latest compiler. The JS doesn’t build at all (to the absurd level that the oldest Node version compatible with my OS is too new to run the newest versions of packages compatible with my JS). I’ve also had to fix the C build a few times. C may be dead as a language, but clang/gcc do change and occasionally break stuff. C also isn’t well-isolated from the OS, so it keeps tripping up on changes to headers and system dependencies. In this regard Rust is more stable and maintenance-free than C.

                                              1. 2

                                                Not sure where you are getting your information about Node.js releases. New LTS (even-numbered) major versions are released every year, with minor non-breaking version updates more frequently.

                                                1. 1

                                                  Sorry, every 6 months: v16 2021-04-20, v17 2021-10-19, v18 2022-04-19. This isn’t merely a numbering scheme, they really reflect semver-major breaking changes. I’d prefer Node not to be on v18.x.x, but on v1.18.x, maybe v2.9.x. LTS doesn’t fix Node’s churn problem. It’s merely a time bomb, because you have to upgrade to a newer LTS and face the breakage eventually.

                                                  I much prefer Rust’s approach of staying on 1.x.x forever, because I can upgrade the compiler every month, keep access to latest-greatest features, and add newly-released dependencies, but still be able to build a 6-year-old project with 6-year-old dependencies and upgrade it on my own schedule.

                                                2. 1

                                                  C may be dead as a language

                                                  It’s not dead as much as usually not used as the primary language for applications. It has a bunch of niches in which it’s used as a low level glue, and there’s quite a few embedded developers who use it.

                                                  on changes to headers and system dependencies

                                                  Programs which break due to header changes are often the result of people not properly including all of the things that they need. This can be a very hard problem to solve because of how header files can be implicitly included, though there are some tools like IWYU which can help mitigate the problem.
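
                                                  A minimal illustration of that failure mode (whether this exact example compiles depends on the standard library’s internal headers, which is the point):

                                                      #include <iostream>  // on many implementations this transitively pulls in <string>

                                                      int main() {
                                                          std::string s = "hello";  // relies on the implicit include; IWYU would flag it
                                                          std::cout << s << '\n';
                                                      }

                                                  When the library reshuffles its internal headers, code like this stops building; the fix is to include what you use and add #include <string>.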

                                                  1. 3

                                                    I mean dead in terms of future evolution. TFA shows that even such a basic thing as parsing a number has a poor API that could have been fixed at any point in the last 40 years, but wasn’t. Users of C are so used to dealing with such papercuts that often there’s no will to improve anything, so these problems will never be fixed.
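
                                                    For reference, this is roughly the dance a careful caller has to do with strtol today (the helper is hypothetical, but the errno/endptr contract is strtol’s own):

                                                        #include <cerrno>
                                                        #include <cstdlib>
                                                        #include <optional>
                                                        #include <string>

                                                        // Parse an entire string as a long, catching every failure
                                                        // mode strtol can signal.
                                                        std::optional<long> parse_long(const std::string& s) {
                                                            const char* begin = s.c_str();
                                                            char* end = nullptr;
                                                            errno = 0;  // strtol sets errno on overflow but never clears it
                                                            long v = std::strtol(begin, &end, 10);
                                                            if (end == begin) return std::nullopt;     // no digits at all
                                                            if (*end != '\0') return std::nullopt;     // trailing garbage
                                                            if (errno == ERANGE) return std::nullopt;  // out of range for long
                                                            return v;
                                                        }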

                                                    As for headers, I mean things like Linux distros changing headers in /usr/include. I don’t really have control over this in C — pkg-config or equivalent will give me whatever it has. In Rust/Cargo I have semver ranges and lockfiles that give me compatible dependencies.
                                                    Same goes for compilers. There are known footguns like -Werror, but also subtler ones due to implementation changes. For example, apple-clang started “validating” header include paths for whatever SDK policy they had. Using GCC instead isn’t easy or reliable either, because Apple keeps adding non-standard C to their headers, even stdio.h. OTOH Rust works as a reliable baseline, and isolates me from these issues.

                                                3. 2

                                                  C++ for all its faults is a decent middle ground in this respect. It’s steadily adding good features, without breaking backward compatibility (except in a few edge cases).

                                              1. 28

                                                I don’t see this as a case for a modern language, as much as a problem with working with C.

                                                This has worked in Ada since its initial version (Ada83): it throws Constraint_Error on out-of-range and bad input, works similarly for floats with Float'Value(S), and even for custom enumerations with their names with Enum_Type'Value(S).

                                                X : Integer := Integer'Value (S);
                                                

                                                Strings are also not null terminated, but there’s Interfaces.C.Strings if you need to convert to C-style to work with an imported function.

                                                1. 19

                                                  I think here “modern” means “newer than 1970”.

                                                  1. 14

                                                    Lol in that case, C counts (created in 72).

                                                    1. 4

                                                      EDIT: I think I wildly misinterpreted your point. I was considering “modern” by going off of this quote:

                                                      one of the several more modern languages that have sprung up in the systems programming space, at least for newly written code. I make no secret that I love Rust and Zig both, for many similar reasons. But I also find Nim to be impressive and have heard great things about Odin.

                                                    2. 11

                                                      The C++11 versions of these functions also throw exceptions on out-of-range or parse-error conditions. This makes me a bit sad, because I normally disable exceptions in C++ - I wish the standards group would change their stance on subsetting and properly standardise the no-exceptions dialect.
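
                                                      Concretely, that interface (std::stoi and friends) reports failure only by throwing, which is what makes it a non-starter with exceptions disabled:

                                                          #include <iostream>
                                                          #include <stdexcept>
                                                          #include <string>

                                                          int main() {
                                                              try {
                                                                  std::cout << std::stoi("123abc") << '\n';  // prints 123; trailing junk is silently ignored
                                                                  std::stoi("junk");                         // throws std::invalid_argument
                                                              } catch (const std::invalid_argument&) {
                                                                  std::cout << "no conversion possible\n";
                                                              } catch (const std::out_of_range&) {
                                                                  std::cout << "value does not fit in int\n";
                                                              }
                                                          }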

                                                      1. 7

                                                        I haven’t tried them yet, but C++17 added from_chars and to_chars. They don’t throw exceptions or support localization.
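
                                                        They report errors through a returned result struct instead; roughly like this (C++17, <charconv> plus <system_error>):

                                                            #include <charconv>
                                                            #include <iostream>
                                                            #include <string_view>
                                                            #include <system_error>

                                                            int main() {
                                                                std::string_view in = "42";
                                                                int value = 0;
                                                                auto res = std::from_chars(in.data(), in.data() + in.size(), value);
                                                                if (res.ec == std::errc())
                                                                    std::cout << "parsed " << value << '\n';
                                                                else if (res.ec == std::errc::invalid_argument)
                                                                    std::cout << "not a number\n";
                                                                else  // std::errc::result_out_of_range
                                                                    std::cout << "out of range for int\n";
                                                            }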

                                                        1. 2

                                                          Zounds! I did not know of these.

                                                          1. 2

                                                            Thanks, that looks like exactly the API that I was looking for! Apart from the not supporting localization bit - it would be nice if they could take an optional std::locale. That said, another of my pet peeves with the C++ committee’s aversion to subsetting is that higher-level functionality such as locales is mandatory in all environments. You can sort of get around this by defining precisely one locale (“C” or “POSIX”, the only one that POSIX requires to exist), but then you’re relying on your compiler to do a lot of tricky dead-code elimination.

                                                            1. 3

                                                              Not supporting a locale is a goal of those interfaces:

                                                              Unlike other formatting functions in C++ and C libraries, std::to_chars is locale-independent, non-allocating, and non-throwing. Only a small subset of formatting policies used by other libraries (such as std::sprintf) is provided. This is intended to allow the fastest possible implementation that is useful in common high-throughput contexts such as text-based interchange (JSON or XML).

                                                        2. 7

                                                          I’d consider Ada a modern language! At the very least, one of the earliest languages with modern sensibilities.

                                                          1. 8

                                                            I would consider newer versions of Ada (especially Ada 2012) to be modern as well. My point was that the author was emphasizing much newer languages, and I was addressing the fact that this has been solved in an older language for a long time.

                                                        1. 4

                                                          This seems like a workaround for using a bad diffing program. On macOS, use the opendiff that ships with Xcode; it graphically highlights changes within a line. It’s great, as long as it’s not one of the odd years where Apple breaks it for no good reason. I’m sure there are similar diff programs for other platforms.

                                                          1. 2

                                                            I see your point, but I like this style of linebreaks primarily because it helps my writing. I like seeing the lengths of my sentences. Are any wildly long? If so, I’ll see it right away. Are the lengths too similar and monotonous? If so, I’ll see it right away. Also, I write more slowly if I enter a linebreak after every sentence.

                                                            The discussion at https://sembr.org (which Tenzer suggested) talks about the benefits to writers and readers without any special focus on diffing. Maybe I should have posted that instead, but I only learned about it this morning!

                                                            1. 2

                                                              Agreed. And when this style was invented, it was a workaround for primitive line-oriented editors like ed where making changes in a line was a lot harder than operating on the line as a whole.

                                                              I used to see Usenet posts and emails like this back in the day, from people who presumably either composed them in ed or had just got in the habit of writing everything that way. It’s kind of weird to read, like everything becomes a kind of modern poetry.

                                                              The various Git GUI apps I’ve used over the years tend to highlight changed words too. (I currently use Fork, which is so good I almost never use the CLI anymore.) It’s very useful even in source code.

                                                              1. 1

                                                                Fair enough, though I partly disagree for the reasons I gave to @carlmjohnson.

                                                                You both may find this funny (or unsurprising): the comments to the original post have pretty much this same debate. Some people respond that the advice doesn’t apply now that we don’t all write in ed and have shitty diff tools. Other people reply that they like the tip for writing and not just for tools.

                                                            1. 2

                                                              We were promised 1/10th of the $200 million, or $20 million in stock, on completion. $10 million to me, $5 million to Ed, and $5 million to Karen Crippes,

                                                              This is very far removed from the compensation I am used to seeing, is this normal and I’m just oblivious?

                                                              1. 7

                                                                A separate issue, but apparently they didn’t receive those stocks. (Very much a tangent, but maybe of interest.)

                                                                1. 5

                                                                  I was at Apple at the time, in the macOS division, and this was very far from normal, at least in my experience.

                                                                  I think in this case it has something to do with the very large expense of not doing this, the mind-numbing grunge of the actual work, and the very small number of people actually qualified to do it. Kind of a perfect storm.

                                                                  1. 1

                                                                    It looks quite high, but it’s not unusual for companies to give occasional one-off bonuses that may be a multiple of their normal compensation to employees who were instrumental in shipping products that brought in large amounts of revenue. From the article, it sounds as if the author was someone who was right at the top of the engineering track. Levels.fyi doesn’t have data from people that high up in any of the big tech companies except Google (where it has a single data point for an E9 engineer making around $4.5M/year). Two levels below that at Apple they have someone making over $1M/year.

                                                                    I’m not sure exactly how you can extrapolate across the industry from this. At Microsoft, the base salary scales roughly linearly with level but the stock and bonus amounts scale with some polynomial factor (on the assumption that the more influence you have over the overall success of the company, the more your total compensation should reflect this).

                                                                    Even accounting for inflation, this looks like it’s a large but not unbelievable bonus amount for one of the most senior engineers for completion of a project that saved the company a much larger amount. That said, it sounds as if the team were never actually paid this bonus, so who knows? I guess the lesson is that if you’re promised a large bonus in advance, get it in writing.

                                                                    1. 1

                                                                      Answered in the very next paragraph:

                                                                      I got the $10 million, because it was going to be my job on the line, and potentially, my ability to work in industry at a high level, ever again, in the future.

                                                                    1. 2

                                                                      This looks very impressive!

                                                                      I see that macOS, Windows and Linux are supported; has anyone tried to use it on iOS? Android?

                                                                      1. 2

                                                                        It builds OK on iOS and Android, but hasn’t been tested there yet.

                                                                      1. 3

                                                                        This project seems weirdly pessimal:

                                                                        • Non-commercial research (so you lose the possibility of making a fortune), but working in secrecy like a corporate product lab (so you lose cross-pollination of ideas.)
                                                                        • Everyone’s in one office but they don’t talk to each other informally or have friendly social connections

                                                                        Maybe it’s no surprise that I’ve never heard of them. And even after reading this article I have no real idea of what they’re trying to create, other than that it might be some kind of ubiquitous computing environment like PARC worked on in the 90s.

                                                                          I guess when it finally reaches 1.0 they will announce it to the world in a splashy press conference and explain how it will change society? The last thing I remember being introduced that way was the Segway.

                                                                        1. 5

                                                                          Seems like the exact things this author struggles with are the ones that make Zig good for systems programming.

                                                                          1. 28

                                                                            Really? The biggest complaints I saw here were “poor/missing/incomplete documentation”, “awkward APIs”, and “poor tooling” (upgrades, package management, etc.)

                                                                            I do a lot of systems programming in C++, and I don’t consider any of those to be pluses.

                                                                            1. 6

                                                                              awkward APIs

                                                                               I’d guess that, depending on how bare-metal you are, you’ll appreciate passing around your allocators, defining print yourself, disallowing non-efficient/static bitshifts, and some other things that become annoyances when you just want to write stuff on a typical x86 machine, where you can allow for some overhead and have a defined set of minimum expectations.

                                                                               That said, I don’t agree with this either: I think you can allow specifying your own allocators or print functions without sacrificing usability for everything higher level. We’ll see how well the custom allocator addition to Rust works out, but my guess is that it’ll work fairly well. And I think core vs std is a nice concept for this duality of “machine” expectations in Rust. See also here for an idea of injecting allocators by default without passing them around all the time in Rust (which definitely is hidden logic/structure, and is a point where I can feel the discussion of complexity).
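
                                                                               For a feel of that explicit-allocator pattern in C++ terms, std::pmr (C++17) is arguably the closest analogue; a minimal sketch:

                                                                                   #include <memory_resource>
                                                                                   #include <vector>

                                                                                   // The caller decides where allocations come from by passing a
                                                                                   // memory resource explicitly, much like passing an allocator in Zig.
                                                                                   std::pmr::vector<int> fill(std::pmr::memory_resource* alloc) {
                                                                                       std::pmr::vector<int> v{alloc};  // all growth goes through `alloc`
                                                                                       for (int i = 0; i < 100; ++i)
                                                                                           v.push_back(i);
                                                                                       return v;
                                                                                   }

                                                                                   int main() {
                                                                                       char buffer[4096];
                                                                                       // Bump allocation out of a stack buffer, falling back to the
                                                                                       // default resource when it runs out.
                                                                                       std::pmr::monotonic_buffer_resource pool(buffer, sizeof buffer);
                                                                                       auto v = fill(&pool);
                                                                                       return v.size() == 100 ? 0 : 1;
                                                                                   }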

                                                                               And for me, the string matching and printing example Rust had back then was definitely a reason to take a look at it. “C(++) with sane string handling?”

                                                                              poor/missing/incomplete documentation

                                                                               For docs or tooling I won’t judge a pre-1.0 language. Rust just had huge momentum, and @steveklabnik and others did an awesome job on the docs.

                                                                              1. 4

                                                                                you’ll appreciate passing around your allocators

                                                                                There are times I’ve appreciated setting my allocator, customizing options, etc, but I can’t say I see why I’d appreciate passing it explicitly at each call site that requires one.

                                                                                I’m open to learning that I should appreciate that.

                                                                              2. 3

                                                                                I am a sample size of one but the Zig IRC channel seemed pretty hostile the few times I’ve stuck my head in there. I’ve seen Andrew permaban at least one person for what seemed like a pretty trivial thing.

                                                                                1. 5

                                                                                  That doesn’t match my experience in the slightest. First of all, I don’t think I’ve seen more than 3 people banned over the last 2 years I’ve been there and always with good reason. The vibes I get are anything but hostile.

                                                                                  I’d suggest spending a longer amount of time before drawing your conclusions.

                                                                                  Anyhow, there are public logs of the IRC channel here: https://github.com/marler8997/zig-irc-logs with a WIP frontend: https://marler8997.github.io/zig-irc-webpage/

                                                                                  And older pre libera.chat logs with a nicer frontend: https://freenode.irclog.whitequark.org/zig/2021-05-10

                                                                                2. 1

                                                                                  “poor/missing/incomplete documentation”

                                                                                  From https://news.ycombinator.com/item?id=29966743, as linked by @roryokane:

                                                                                  The author also mentioned issues with the autogenerated docs for the standard library. Those docs are currently incomplete and in fact greet you with this message as soon as you open them:

                                                                                  These docs are experimental. Progress depends on the self-hosted compiler, consider reading the stdlib source in the meantime.

                                                                                  I’ve seen some comments here about how recommending to read the source code is unhelpful. I vehemently disagree because of practicality first (if something is not documented elsewhere, then that’s the best you can do) and second because reading the source code should not be considered something primitive that developers used to do before discovering fire.

                                                                                  I think the docs have a ways to go. However, I find myself checking the source regularly even when using C or C++. I think having well-written source code should be step one, while documenting that code well should be step two. So while I think Zig’s docs have a long way to go, I also think they’re on the right track.


                                                                                  “awkward APIs”

                                                                                  Is this a direct quote? I’m not seeing it anywhere in the article. Perhaps you could remind me where it is brought up.


                                                                                  “poor tooling” (upgrades, package management, etc.)

                                                                                  Those are all problems. They are also problems that can be solved by adding additional tools (i.e., $ zig <blank>). Thus, I don’t consider them weaknesses in the language. They have much more to do with the limited development power behind the language. That’s not a pass, though; just a reason.


                                                                                  Here’s my take:

                                                                                  • If you’re looking for a kinda-mature language with promise that already blows C out of the water, Zig is a good choice.
                                                                                  • If you’re looking for a fully developed ecosystem with package management up the wazoo, obviously Zig isn’t there yet. Objectively speaking, it’s a matter of scale. In my personal view, it’s only a matter of time.
                                                                              1. 2

                                                                                 Hey, thanks for sharing my blog post! Let me know if you have any other resources; I will be happy to add them to the blog post 😊

                                                                                1. 1

                                                                                  Is the topic intended to be limited to distributed-in-a-data-center servers, or do you plan to cover peer-to-peer networks as well, stuff like Dat and IPFS?

                                                                                1. 2

                                                                                   What’s interesting about the UI? Simply that it’s vector-based?

                                                                                  1. 13

                                                                                     I also find the tabbed windows interesting – each tab can be a different app

                                                                                    1. 5

                                                                                      Like Haiku!

                                                                                      1. 2

                                                                                        Fluxbox also offers that.

                                                                                    2. 2

                                                                                       If we’re raising questions: what’s with “boots in seconds”? Don’t all OSes do that?

                                                                                      Edit: not that this doesn’t look interesting, it’s just that that particular boast caught my eye.

                                                                                      1. 17

                                                                                         There’s a great wiki page for FreeBSD that Colin Percival (of tarsnap fame) has been maintaining on improving FreeBSD boot time. In particular, this tells you where the time goes.

                                                                                         A lot of the delays come from things that are added to support new hardware or simply from the size of the code. For example, loading the kernel takes 260ms, which is a significant fraction of the 700ms that Essence takes. Apple does (did?) a trick here where it did a small amount of defragmentation of the filesystem to ensure that the kernel and everything needed for boot were contiguous on the disk and so could be streamed quickly. You can also address it by making the kernel more modular and loading components on demand (e.g. with kernel modules), but that then adds latency later.

                                                                                        Some of the big delays (>1s) came from sleep loops that wait for things to stabilise. If you’re primarily working on your OS in a VM, or on decent hardware, then you don’t need these delays but when you start deploying on cheap commodity hardware then you discover that a lot of devices take longer to initialise than you’d expect. A bunch of these things were added in the old ISA days and so may well be much too long. Some of them are still necessary for big SCSI systems (a big hardware RAID array may take 10s of seconds to become available to the OS).

                                                                                        Once the kernel has loaded, there’s the init system. This is something that launchd, SMF, and systemd are fairly good at. In general, you want something that can build a dynamic dependency graph and launch things as their dependencies are fulfilled but you also need to avoid thundering herds (if you launch all of the services at once then you’ll often suffer more from contention than you’ll gain from parallelism).
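
                                                                                         The dependency-graph part is a topological sort at heart; a toy sketch with hypothetical service names (sequential here, where a real init system would launch ready services in parallel, with a bound to avoid the herd):

                                                                                             #include <iostream>
                                                                                             #include <map>
                                                                                             #include <queue>
                                                                                             #include <string>
                                                                                             #include <vector>

                                                                                             int main() {
                                                                                                 // service -> services it depends on (hypothetical names)
                                                                                                 std::map<std::string, std::vector<std::string>> deps = {
                                                                                                     {"network", {}},
                                                                                                     {"syslog", {}},
                                                                                                     {"sshd", {"network", "syslog"}},
                                                                                                     {"httpd", {"network"}},
                                                                                                 };

                                                                                                 std::map<std::string, int> unmet;  // count of unmet dependencies
                                                                                                 std::map<std::string, std::vector<std::string>> dependents;
                                                                                                 for (auto& [svc, ds] : deps) {
                                                                                                     unmet[svc] = static_cast<int>(ds.size());
                                                                                                     for (auto& d : ds) dependents[d].push_back(svc);
                                                                                                 }

                                                                                                 std::queue<std::string> ready;
                                                                                                 for (auto& [svc, n] : unmet)
                                                                                                     if (n == 0) ready.push(svc);

                                                                                                 while (!ready.empty()) {
                                                                                                     std::string svc = ready.front(); ready.pop();
                                                                                                     std::cout << "starting " << svc << '\n';  // a real init would fork/exec here
                                                                                                     for (auto& d : dependents[svc])
                                                                                                         if (--unmet[d] == 0) ready.push(d);  // launch once dependencies are fulfilled
                                                                                                 }
                                                                                             }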

                                                                                        On top of that, on *NIX platforms, there’s then the windowing system and DE. Launching X.org is fairly quick these days but things like KDE and GNOME also bundle a load of OS-like functionality. They have their own event framework and process launchers (I think systemd might be subsuming some of this on Linux?) and so have the same problem of starting all of the running programs.

                                                                                         The last bit is something that macOS does very well because they cheat. The window server owns the buffer that contains every rendered window and persists this across reboot. When you log back in, it displays all of your apps’ windows in the same positions that they were, with the same contents. It then starts loading them in the background, sorted by the order in which you try to run them. Your foreground app will be started first, and so the system typically has at least a few seconds of looking at that before you focus on anything else, and so it can hide the latency there.

                                                                                         All of that said, for a desktop OS, the thing I care about the most is not boot time, it’s reboot time. How long does it take between shutting down and being back in the exact same state in all of my apps that I was in before the reboot? If I need a security update in the kernel or a library that’s linked by everything, then I want to store all state (including window positions and my position within all open documents), apply the update, shut down, restart, reload all of the state, and continue working. Somewhat related, if the system crashes, how long does it take me to resume from my previous state? Most modern macOS apps are constantly saving restore points to disk, and so if my Mac crashes then it typically takes under a minute to get back to where I was before the reboot. This means I don’t mind installing security updates and I’m much more tolerant of crashes than on any other system (which isn’t a great incentive for Apple’s CoreOS team).

                                                                                        1. 1

                                                                                           And Essence basically skips all of that cruft? Again, not to put the project down, but all that for a few seconds doesn’t seem like much, once a week.

                                                                                           I don’t think I reboot my Linux boxes more often, and even my work Windows machine only sometimes reminds me that I must reboot once a week because of company policy.

                                                                                           Maybe if I had an old slow laptop it would matter to me more. Or if I was doing something with low-power devices (but then, I would probably be using something more specialised there, if that was important).

                                                                                           Again: impressive feat, and good work; I hope they make something out of it (in the long run, I mean). But doesn’t Google also work on Fuchsia, and Apple on macOS? They probably have a much better chance of becoming the new desktop leaders. I don’t know; this seems nice, but I think the biggest benefit is in what the authors will learn from the project and apply elsewhere.

                                                                                          1. 2

                                                                                             And Essence basically skips all of that cruft? Again, not to put the project down, but all that for a few seconds doesn’t seem like much, once a week.

                                                                                            It probably benefits from both being small (which it gets for free by being new) and from not having been tested much on the kind of awkward hardware that requires annoying spin loops. Whether they can maintain this is somewhat open but it’s almost certainly easier to design a system for new hardware that boots faster than it is to design a system for early ’90s hardware, refactor it periodically for 30 years, and have it booting quickly.

                                                                                             But doesn’t Google also work on Fuchsia, and Apple on macOS? They probably have a much better chance of becoming the new desktop leaders. I don’t know; this seems nice, but I think the biggest benefit is in what the authors will learn from the project and apply elsewhere.

                                                                                             I haven’t paid attention to what Fuchsia does for userspace frameworks (other than to notice that Flutter exists). Apple spent a lot of effort on making this kind of thing fast, but most of it isn’t really to do with the kernel. Sudden Termination came from iOS but is now part of macOS. At the OS level, apps enter a state where they have no unsaved state and the kernel will kill them (the equivalent of kill -9) whenever it wants to free up memory. The WindowServer keeps their window state around so that they can be restored in the background. This mechanism was originally so iOS could kill background apps instead of swapping, but it turns out to be generally useful. The OS parts are fairly simple; extending Cocoa so that it’s easy to write apps that respect this rule was a lot more difficult work.

                                                                                        2. 5

                                                                                          In the demo video, it booted in 0.7s, which, to me, is impressive. Starting applications and everything is very snappy too. The wording of the claim doesn’t do it justice though, I agree with that.

                                                                                          1. 3

                                                                                            Ideally you should almost never have to reboot an OS, so boot time doesn’t interest me nearly as much as good power management (sleep/wake).

                                                                                            1. 3

                                                                                              How many people live in this ideal world where you never have to reboot the OS?

                                                                                              1. 6

                                                                                                IBM mainframe operators.

                                                                                                1. 4

                                                                                                  It’s not never, but I basically only reboot my Macs and i(Pad)OS devices for OS updates, which is a handful of times per year. The update itself takes long enough that the reboot-time part of it is irrelevant - I go do something else while the update is running.

                                                                                                  1. 3

                                                                                                    I think it’s only really Windows that gets rebooted. I used to run Linux and OpenBSD without reboots, sometimes for years, and like you I only reboot macOS when I accidentally run out of laptop battery or do an OS update.

                                                                                                  2. 3

                                                                                                    I dunno; how many people own Apple devices? I pretty much only reboot my Macs for OS updates, or the rare times I have to install a driver. My iOS devices only reboot for updates or if I accidentally let the battery run all the way down.

                                                                                                    I didn’t think this was a controversial statement, honestly. Haven’t Windows and Linux figured out power management by now too?

                                                                                                    1. 1

                                                                                                      pretty much only reboot my Macs for OS updates, or the rare times I have to install a driver

                                                                                                      That’s not “never” - or are macOS updates really so few and far between?

                                                                                                    2. 1

                                                                                                      I feel like this is one of those things where people are still hung up on the days of slow HDDs and older versions of Windows bloated with all kinds of software running at startup.

                                                                                                      1. 1

                                                                                                        It depends a bit on the use case. For client devices, I agree, boot time doesn’t matter nearly as much as resume speed and application relaunch speed. For cloud deployments, it can matter a lot. If you’re spinning up a VM instance for each CI job, for example, then anything over a second or two starts to be noticeable in your total CI latency.

                                                                                                    3. 2

                                                                                                      If it boots fast, does sleep matter?

                                                                                                      1. 6

                                                                                                        It does unless you can perfectly save state each time you boot. And boot in less than a second.

                                                                                                        1. 4

                                                                                                          Only if it can somehow store the entire working state, including unsaved changes, and restore it on boot. Since that usually involves relaunching a bunch of apps, it takes significantly longer than a simple boot-to-login-screen.

                                                                                                          This isn’t theoretical. Don’t you have any devices that sleep/wake reliably and quickly? It’s profoundly better than having to shut down and reboot.

                                                                                                          1. 2

                                                                                                            Only if it can somehow store the entire working state, including unsaved changes, and restore it on boot

                                                                                                            That’s another interesting piece of the design space. I’ve seen research prototypes on Linux and FreeBSD (I think the Linux version maybe got merged?) that extend the core dump functionality to provide a complete dump of memory and associated kernel state (open file descriptors). Equivalent mechanisms have been around in hypervisors for ages because they’re required for suspend / resume and migration. They’re much easier in a hypervisor because the interfaces for guests have a lot less state: a block device has in-flight transactions, a network device has in-flight packets, and all other state (e.g. TCP/IP protocol state, file offsets) is stored in the guest. For POSIXy systems, various things are increasingly difficult:

                                                                                                            • Filesystem things are moderately easy. You need to store the inode and offset. If another process modifies the file while you’re suspended then it’s not really different than another process modifying it while you’re running. Filesystem locks are a bit more interesting - if a process holds a filesystem lock and is suspended to disk, what should happen? You probably don’t want to leave the file locked until the process is reloaded, because it might not be. On the other hand, it will probably break in exciting ways if it silently drops the lock across suspend / resume. This isn’t a problem if you’re suspending / resuming all processes at the same time.
                                                                                                            • Network connections are basically impossible, which makes them easy: you just drop all connections and restore listening / bound sockets. Most programs already know how to handle the network going away intermittently (see the sketch after this list).
                                                                                                            • Local IPC can be interesting. If I create a pipe and fork, and then one of the two processes is frozen to disk, what should happen? If both are frozen and restored together, ideally they’d both get the pipe back in the same state, which means that I need to persist a UUID or similar for each IPC connection so that restoring groups of processes (e.g. a window server and an application) can work.
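
                                                                                                            To make the second point concrete, here’s a minimal sketch (plain POSIX sockets; the function name and retry policy are mine, purely illustrative) of the reconnect pattern that lets a program shrug off connections dropped across a suspend / restore:

                                                                                                                #include <arpa/inet.h>
                                                                                                                #include <cstdint>
                                                                                                                #include <netinet/in.h>
                                                                                                                #include <sys/socket.h>
                                                                                                                #include <unistd.h>

                                                                                                                // If a restore (or any outage) drops the TCP connection, the
                                                                                                                // client just notices the failure and dials again.
                                                                                                                int connect_with_retry(const char *ip, std::uint16_t port) {
                                                                                                                    for (;;) {
                                                                                                                        int fd = socket(AF_INET, SOCK_STREAM, 0);
                                                                                                                        sockaddr_in addr{};
                                                                                                                        addr.sin_family = AF_INET;
                                                                                                                        addr.sin_port = htons(port);
                                                                                                                        inet_pton(AF_INET, ip, &addr.sin_addr);
                                                                                                                        if (connect(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) == 0)
                                                                                                                            return fd;   // connected (or reconnected after a restore)
                                                                                                                        close(fd);
                                                                                                                        sleep(1);        // back off, then try again
                                                                                                                    }
                                                                                                                }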

                                                                                                            If you have this mechanism and you have fast reboot, then you don’t necessarily need OS sleep states. If you also have a sudden-termination mechanism then you can use this as a fallback for apps that aren’t in the sudden-termination state.

                                                                                                            Of course, it depends a bit on why people are rebooting. Most of the time I reboot, it’s to install a security update. This is more likely to be in a userspace library than the kernel. As a user, the thing I care most about is how quickly I can restart my applications and have them resume in the same state. If the kernel / windowing system can restart in <1s that’s fine, but if my apps lose all of their state across a restart then it’s annoying. Apple has done a phenomenal amount of work over the last decade to make losing state across app restarts unusual (including work in the frameworks to make it unusual for third-party apps).

                                                                                                            1. 1

                                                                                                              All my devices can sleep/wake fine, but I almost never use it. My common apps all auto-start on boot, and with my SSDs I boot in a second or two - about the same as coming out of sleep, honestly; in both cases the slowest part is typing my password.

                                                                                                              1. 1

                                                                                                                On my current laptop, I open the lid, it wakes up instantly, I enter the password, and the state is exactly as I left it. (And it probably writes fewer gigabytes to disk than hibernation does, too.)

                                                                                                    1. 10

                                                                                                      I’m disappointed that this doesn’t discuss, specifically and in more detail, the fact that thoughtlessly adding abstractions to problems often makes them worse in the long run (it certainly makes systems harder to reason about). This seems to be a problem peculiar to software, owing to the “frictionless” nature of complexity in software compared to physical machines. And it is related to just attempting to throw more power at these problems rather than improving the programmer’s ability to wield the power (i.e. the knowledge and ingenuity codified in extant tools) already at hand.

                                                                                                      Also, it seems the author fails to reason solidly about technical debt: it makes sense to take on loads of it if the objective is to hit million-x growth and then worry about the fallout later (and if the first-to-market advantage remains extraordinarily high). That approach is certainly used inappropriately by non-startups, but failing to see that this stems from the foolish attempt to “do things like a startup” when you can’t expect explosive growth doesn’t help the argument.

                                                                                                      I agree with most of the author’s comments; I just think that the post could be a lot stronger.

                                                                                                      1. 10

                                                                                                        “Any problem can be solved by adding a layer of abstraction, except for the problem of too many layers of abstraction.” (Also seen as “…except for performance problems.”)

                                                                                                        1. 1

                                                                                                          Sometimes a performance problem can be fixed by adding another layer of indirection which reads the information from all the other layers of stuff and generates the code that you would’ve written if there had been only one layer which directly solved the problem. ;)

                                                                                                      1. 4

                                                                                                        To my knowledge, the paper is quite off-base in its discussion of LMDB; it might be correct for System R, but the papers on LMDB describe the page management very clearly, and it’s not based on multiple memory mappings at all. Instead, it’s copy-on-write to a free page; two special pages in the file store the roots of the trees and are updated alternately. Updated pages are written back using file I/O.

                                                                                                        A lot of the criticism of mmap has to do with attempting to use a writeable mapping. I totally agree with that; faulting changed pages back to the file is too uncontrollable, plus there’s the danger of stray writes corrupting the file.

                                                                                                        I’m willing to accept that mmap isn’t a good idea for a high-scale server DBMS running on big-iron CPUs. But that’s not the only use case for databases. I think mmap is great for smaller use cases, like client-side databases. It’s faster (LMDB smokes SQLite) and requires less tuning, since there’s no read-buffer cache whose size you have to set. Allocating most of your RAM for caches is fine when the entire computer is a dedicated DB server, but it’s a terrible idea on a home PC and even worse on mobile.
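
                                                                                                        For a sense of how little tuning is involved, here’s a minimal session against the LMDB C API (a sketch: error handling omitted, and the map size is the one knob you set):

                                                                                                            #include <lmdb.h>

                                                                                                            int main() {
                                                                                                                MDB_env *env;
                                                                                                                mdb_env_create(&env);
                                                                                                                mdb_env_set_mapsize(env, 1UL << 30);  // max DB size; no cache size to tune
                                                                                                                mdb_env_open(env, "./db", 0, 0664);   // "./db" must be an existing directory

                                                                                                                MDB_txn *txn;
                                                                                                                MDB_dbi dbi;
                                                                                                                mdb_txn_begin(env, nullptr, 0, &txn);
                                                                                                                mdb_dbi_open(txn, nullptr, 0, &dbi);

                                                                                                                MDB_val key{5, (void *)"hello"}, val{5, (void *)"world"};
                                                                                                                mdb_put(txn, dbi, &key, &val, 0);
                                                                                                                mdb_txn_commit(txn);  // copy-on-write pages land here, then the meta page flips

                                                                                                                mdb_env_close(env);
                                                                                                            }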

                                                                                                        The experimental key-value store I was building last year is inspired by LMDB but can use either mmap or a buffer cache for reads. (It always uses buffers for writes.) On my MacBook Pro, mmap is significantly faster, especially if I’m conservative in sizing the read cache.

                                                                                                        It would be interesting if the authors repeated their benchmarks on less behemoth-sized systems, like maybe a Core i7 with 16GB RAM. I suspect that the cache-coherency slowdowns wouldn’t be as bad in a CPU with 8 cores instead of 64.

                                                                                                        1. 2

                                                                                                          I also have fairly good experience with LMDB (compared to the much more feature-rich SQLite). I am wondering whether using mmap for application-level caching will continue to improve over time as OS swap subsystems continue to optimize SSD-based swap-space handling.

                                                                                                          I had seen that DragonFly BSD made a specific, concentrated effort in this area: https://leaf.dragonflybsd.org/cgi/web-man?command=swapcache&section=ANY Perhaps systems like LMDB can take advantage of such OS-specific tuning sooner rather than later.

                                                                                                          Many I/O systems that optimized their performance (scheduling, etc.) for spinning disks in the past are gradually changing to take advantage of faster solid-state disks - and, with that, giving higher priority to design choices that improve cache-line consistency and reduce context switching.

                                                                                                        1. 7

                                                                                                          This seems fine to me … but let me just peevishly point out that we don’t even have a tag for audio/music, which is far more mainstream than VR/AR. (There is an audio tag but it’s to mark posts that link to audio files like podcasts.)

                                                                                                          1. 2

                                                                                                            What they benchmarked was only the space efficiency of various encodings. It’s disappointing that they didn’t look at the performance of encoding the data, or of accessing all or part of it.

                                                                                                            1. 3

                                                                                                              Agreed; on the other hand, that would be a benchmark of implementations, not specifications.

                                                                                                            1. 2

                                                                                                              It’s good to see Secret Handshake being used for things outside of SSB! For testing your implementation, in case you haven’t seen it, there’s shs1-test, which does conformance tests in a language-agnostic way.

                                                                                                              1. 1

                                                                                                                Thanks! I had not seen that. I’ll add a test using it when I get a chance.

                                                                                                              1. 2

                                                                                                                My favorites from this list are constrained auto and the mathematically correct comparison functions. (I’ve periodically inherited code bases that were scary to change but would produce many, many signed-vs-unsigned warnings when actually compiled with warnings enabled.)

                                                                                                                1. 1

                                                                                                                  using enum makes me very happy. I can finally write code like this:

                                                                                                                  struct Foo
                                                                                                                  {
                                                                                                                          enum X { A, B };
                                                                                                                          using enum X;
                                                                                                                  };
                                                                                                                  
                                                                                                                  Foo::X x = Foo::A;
                                                                                                                  

                                                                                                                  Previously, X was its own symbol namespace, and so the last line ended up needing to be:

                                                                                                                  Foo::X x = Foo::X::A;
                                                                                                                  

                                                                                                                  If the enum has a useful and meaningful name, then this is incredibly verbose. Within a class you can avoid name conflicts (don’t declare two enums with identically named values in the same class; it’s just a bad idea) and the values remain scoped to the class.

                                                                                                                  I’ve already used non-type template parameters to pass strings as template arguments, and even pairs of strings and values to define compile-time maps. These are really fun.
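
                                                                                                                  For anyone who hasn’t seen the trick, it hinges on a small structural wrapper type (a sketch; fixed_string and named_tag are illustrative names, not a standard facility):

                                                                                                                      #include <algorithm>
                                                                                                                      #include <cstddef>
                                                                                                                      #include <type_traits>

                                                                                                                      // A structural type holding a string literal, usable as a C++20 NTTP.
                                                                                                                      template<std::size_t N>
                                                                                                                      struct fixed_string {
                                                                                                                          char value[N]{};
                                                                                                                          constexpr fixed_string(const char (&s)[N]) { std::copy_n(s, N, value); }
                                                                                                                      };

                                                                                                                      // The string is now effectively a template argument.
                                                                                                                      template<fixed_string Name>
                                                                                                                      struct named_tag {};

                                                                                                                      static_assert(!std::is_same_v<named_tag<"width">, named_tag<"height">>);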

                                                                                                                  Similarly, I’m already using the structure initialisation syntax to provide optional named arguments to functions.
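
                                                                                                                  That pattern looks roughly like this (an illustrative sketch, not code from any particular API):

                                                                                                                      #include <string>

                                                                                                                      // Every field has a default, so callers name only what they change.
                                                                                                                      struct RenderOptions {
                                                                                                                          int dpi = 96;
                                                                                                                          bool antialias = true;
                                                                                                                          std::string font = "monospace";
                                                                                                                      };

                                                                                                                      void render(const std::string &text, RenderOptions opts = {}) { /* ... */ }

                                                                                                                      void example() {
                                                                                                                          render("hello");                                // all defaults
                                                                                                                          render("hello", {.antialias = false});          // one named override
                                                                                                                          render("hello", {.dpi = 300, .font = "serif"}); // fields in declaration order
                                                                                                                      }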

                                                                                                                  I’m a bit sad that it’s taken 30 years to get a starts_with and ends_with method on strings but strings in C++ are still a bit of a mess. Baking the representation of strings into its interface is one of the biggest mistakes a language / standard library can make and it’s very hard to fix that in C++ now.

                                                                                                                  contains is similar. It’s the most common operation that I do with any set data type, and with std::set I used to need to do a lookup and then compare the iterator against end(), which is verbose and doesn’t make its meaning instantly obvious.
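
                                                                                                                  Concretely, the before-and-after looks like this (assuming C++20 for the new spellings):

                                                                                                                      #include <set>
                                                                                                                      #include <string>

                                                                                                                      void example() {
                                                                                                                          std::string path = "/tmp/report.txt";
                                                                                                                          bool in_tmp = path.rfind("/tmp", 0) == 0;  // pre-C++20 prefix check
                                                                                                                          bool in_tmp2 = path.starts_with("/tmp");   // C++20
                                                                                                                          bool is_txt = path.ends_with(".txt");      // C++20

                                                                                                                          std::set<int> seen{1, 2, 3};
                                                                                                                          bool old_way = seen.find(2) != seen.end(); // verbose, indirect
                                                                                                                          bool new_way = seen.contains(2);           // says what it means
                                                                                                                      }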

                                                                                                                  Heterogeneous lookup is also something I’ve been missing for a while. Objective-C lets you look up using any type that implements the compare methods correctly. This is somewhat error prone because it’s hard to ensure that [a isEqual: b] == [b isEqual: a] in the presence of subtyping. The C++ approach is much cleaner and it lets me do things like have an owning object (smart pointer, wrapper around a file descriptor, and so on) in the container and use the non-owning variant for lookups. I have some slightly convoluted code that works around not being able to do this today, which I’m looking forward to updating to C++20.
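
                                                                                                                  The simplest illustration is strings (a sketch; the owning / non-owning case works the same way with a custom transparent comparator):

                                                                                                                      #include <set>
                                                                                                                      #include <string>
                                                                                                                      #include <string_view>

                                                                                                                      // std::less<> is a transparent comparator, which is what enables
                                                                                                                      // heterogeneous lookup.
                                                                                                                      std::set<std::string, std::less<>> names{"alice", "bob"};

                                                                                                                      bool known(std::string_view name) {
                                                                                                                          // Compares the string_view against the stored std::strings directly;
                                                                                                                          // no temporary std::string is allocated for the lookup.
                                                                                                                          return names.contains(name);
                                                                                                                      }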

                                                                                                                  I’m also using the source location stuff already in some custom invariant-checking functions that use libfmt to provide pretty error messages when I hit an invariant violation (the annoying thing about C’s assert is that it can tell you x != y but it can’t tell you what x or y is).
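
                                                                                                                  A minimal sketch of that kind of helper (the name invariant_eq is mine, and it assumes libfmt is available):

                                                                                                                      #include <cstdlib>
                                                                                                                      #include <source_location>
                                                                                                                      #include <fmt/core.h>

                                                                                                                      // On failure, report where the invariant was violated and the values
                                                                                                                      // involved - the part that plain assert() can't tell you.
                                                                                                                      template<typename T>
                                                                                                                      void invariant_eq(const T &x, const T &y,
                                                                                                                                        std::source_location loc = std::source_location::current())
                                                                                                                      {
                                                                                                                          if (x != y) {
                                                                                                                              fmt::print(stderr, "{}:{}: invariant violated: {} != {}\n",
                                                                                                                                         loc.file_name(), loc.line(), x, y);
                                                                                                                              std::abort();
                                                                                                                          }
                                                                                                                      }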

                                                                                                                  As with C++14 and C++17, there’s nothing there that is completely revolutionary but lots of things that each make my life a little bit easier.

                                                                                                                  Oh, and one more thing: std::atomic now has basic futex-like behaviour. Unfortunately, I didn’t notice that the implementation in libc++ is far from ideal until we’d shipped a binary release, and now it’s part of the ABI. Microsoft STL does it a better way. This should let simple futex use cases work without any platform-specific code.
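
                                                                                                                  For reference, the new interface in its simplest latch-like use (C++20):

                                                                                                                      #include <atomic>
                                                                                                                      #include <thread>

                                                                                                                      std::atomic<int> ready{0};

                                                                                                                      void waiter() {
                                                                                                                          ready.wait(0);        // blocks while ready == 0, futex-style; no spin loop
                                                                                                                      }

                                                                                                                      void signaller() {
                                                                                                                          ready.store(1);
                                                                                                                          ready.notify_all();   // wakes every thread blocked in wait()
                                                                                                                      }

                                                                                                                      int main() {
                                                                                                                          std::thread t(waiter);
                                                                                                                          signaller();
                                                                                                                          t.join();
                                                                                                                      }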

                                                                                                                  1. 2

                                                                                                                    What was the issue with libc++’s atomic?

                                                                                                                    1. 1

                                                                                                                      For std::atomic::wait and std::atomic::notify_*, there’s an interface between the code in the header and the library. This interface is the simplest case for something like a futex.

                                                                                                                      Most operating systems have something like a futex, at least for this case: in the wait path, the kernel acquires a lock indexed by the object’s address, compares the object’s value against something provided, and (if they still match) releases the lock and blocks the calling thread. The corresponding wake call acquires a lock indexed by the address (so, the same lock as the wait), signals all waiters, and returns. Userspace code on the wake path does an atomic exchange to modify the value and see whether there are waiters, and then issues the wake if there are any. If a waiter races, then it will either fail the compare that the kernel does with the lock held and not block, or it will have begun waiting just before the wake and be woken by it.
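
                                                                                                                      On Linux, that protocol is the futex(2) syscall more or less verbatim; here is a sketch of the wait / wake pair described above (Linux-specific; a production version would also use FUTEX_PRIVATE_FLAG and handle EINTR):

                                                                                                                          #include <atomic>
                                                                                                                          #include <climits>
                                                                                                                          #include <linux/futex.h>
                                                                                                                          #include <sys/syscall.h>
                                                                                                                          #include <unistd.h>

                                                                                                                          static long futex(void *addr, int op, int val) {
                                                                                                                              return syscall(SYS_futex, addr, op, val, nullptr, nullptr, 0);
                                                                                                                          }

                                                                                                                          std::atomic<int> word{0};

                                                                                                                          void wait_until_nonzero() {
                                                                                                                              int v = word.load();
                                                                                                                              while (v == 0) {
                                                                                                                                  // The kernel re-checks word == 0 under its internal lock
                                                                                                                                  // before sleeping, so a racing wake cannot be lost.
                                                                                                                                  futex(&word, FUTEX_WAIT, 0);
                                                                                                                                  v = word.load();
                                                                                                                              }
                                                                                                                          }

                                                                                                                          void wake_all_waiters() {
                                                                                                                              word.store(1);
                                                                                                                              futex(&word, FUTEX_WAKE, INT_MAX);  // wake every waiter on &word
                                                                                                                          }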

                                                                                                                      The C++ atomics model requires these operations to work with std::atomic&lt;T&gt; for all values of T. This means that types that the platform’s futex equivalent does not natively support must be implemented by maintaining a local table mapping from addresses to lock words. The decision about whether to use the underlying value directly with the kernel interface or to go through the look-aside table forms part of the ABI contract between the header and the standard library. In libc++, there were two poor design choices:

                                                                                                                      • The header defines a single type that is used natively as the key: a 32-bit integer on Linux, a 64-bit integer on all other platforms.
                                                                                                                      • The header uses type-based overloading to dispatch to this version.

                                                                                                                      This is fine for Linux today, where futex supports only 32-bit keys, and for macOS, where the equivalent supports only 64-bit ones, though it’s not fantastic because Linux may add a futex64 or even a futex128 at some point. It is far less fine for FreeBSD, where the interfaces support 32-bit keys on all targets but 64-bit keys only on 64-bit targets (32-bit PowerPC, for example, lacks 64-bit atomics), which means that FreeBSD can’t support an ABI that has been defined to use only 64-bit keys on 32-bit platforms. And it is annoying on Windows, where the native interfaces support all power-of-two sizes from 1 to 8 bytes, but we have to use the indirection layer (which can add false contention) for everything other than 8 bytes.

                                                                                                                      Ideally, the library interface would take the size of the type so that the set of types that the host platform supports can be modified in the .so without breaking any code linked against it. Making this change in libc++ is an ABI break.
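
                                                                                                                      In other words, something shaped like this (hypothetical names, just to show a size-keyed contract; these are not libc++’s real symbols):

                                                                                                                          #include <cstddef>

                                                                                                                          // Library-side entry points keyed on the object's size, so the set of
                                                                                                                          // natively supported widths can change inside the .so without an ABI break.
                                                                                                                          extern "C" {
                                                                                                                              void sketch_atomic_wait(const volatile void *addr,
                                                                                                                                                      const void *expected, std::size_t size);
                                                                                                                              void sketch_atomic_notify(const volatile void *addr, std::size_t size);
                                                                                                                          }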

                                                                                                                    2. 1

                                                                                                                      That “using” in your example isn’t needed with an enum, but it is with an enum class.