1. 16

    An approximate summary based on the titles of the video segments [?], for people who don’t want to / can’t watch the video now:

    • “Language spec”
    • “Self-hosted compiler”
    • “[Merging & discussing PRs and helping people get unstuck - good PRs can also influence our priorities]”
    • “LLVM [got a new] release[, we need to rebase/update]”
    • “Official package manager”
    • “Zig (Bug) Stability Program”

    The video then proceeds to a Q&A, apparently taking ~30 min of the ~50 min video, so there might be some notable content there too.

    1.  

      I guess one of the coolest features of Zig promised for 0.7.0 falls under the “Self-hosted compiler” point.

    1. 1

      So, after reading the post, one thing I’m not sure you’re aware of: Nix/Guix (or really Scheme, in Guix’s case) can in theory generate whatever output scripts you need (that’s in fact part of their allure), so I’d imagine it should be possible to try and use them for Terraform/HCL/whatever else.

      A few other more or less related technologies that you might be interested to read about:

      1. 1

        The Terraform bit was kind of a “hey, now we’re doing all the nodes with this, can we do the infra as well?” thought, but yes, I could do the Terraform generation in Nix/Guix. Given its templating support, maybe the other way around as well (but then I’d probably hit the limits of HCL, so realistically no).
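
        For what it’s worth, Terraform also accepts pure JSON configuration (*.tf.json), so the generation direction can come from almost anything that can emit JSON. A minimal sketch in Python as a neutral stand-in for whatever generator you prefer (all resource names and values here are made up):

        ```python
        import json

        # Hypothetical sketch: Terraform accepts JSON configuration (*.tf.json),
        # so any language that can emit JSON can generate the infra definitions
        # without writing HCL. All resource names/values here are invented.
        nodes = ["web-1", "web-2"]

        config = {
            "resource": {
                "aws_instance": {
                    name: {"ami": "ami-12345678", "instance_type": "t3.micro"}
                    for name in nodes
                }
            }
        }

        with open("main.tf.json", "w") as f:
            json.dump(config, f, indent=2)
        ```

        The same shape would fall out naturally from Nix’s builtins.toJSON or a Guix/Scheme s-expression serializer.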

        I’d heard of Cue/Dhall before, but wasn’t a massive fan of either (partially because of a personal aversion to Haskell that Dhall’s use of lambda symbols really doesn’t help with; and I’m not overly fond of Go either, for Cue’s scripting). I’d like to have a proper go at them and see if the benefits outweigh my revulsion, but I haven’t had time.

        Hermes looks really interesting. I’ve just had a quick dig around, and I haven’t seen anyone doing a “HermesOS” yet, but that would be very much toward my personal sweet spot on such matters. I’d previously dismissed Janet (mostly in an “oh, another small Lisp, yay, I’ll stick to Clojure, thanks” way), but I’m now seeing its potential in this sort of system-config space.

        1. 1

          Of Cue and Dhall, IMO Cue is the interesting one. One thing I had issues with when trying it a couple of years ago was poor error handling (roughly boiling down to the infamous “?” error message of old compilers), with no clear will to improve when I reported it. To me that was the main thing that stopped me from using it then; no idea if they’ve improved it since. (Modulo the fact that I am personally rather a fan of Go, so I have no problem with that particular aspect.) Because of that, I was loosely exploring the idea of learning microKanren to see if I could write an “alternative Cue” with better error handling. However, I failed to grasp microKanren quickly enough for leisure-time hacking… which is somewhat fortunate, as I’m already spread far too thin among hobby projects.

          As to Hermes, there’s the companion hpkgs, though indeed I’m not aware of a HermesOS.

      1. 11

        There are basically five classes of programs that the author discusses:

        1. Fully dynamically linked programs. Only Python programs are of this form.
        2. Partially dynamically linked programs. This describes C and C++ using dynamic libraries. The contents of the .c files are dynamically linked, and the contents of the .h files are statically linked. We can assume that the .h files are picked up from the system that the artifact is built on and are not pinned or bundled in any way.
        3. Statically linked programs without dependency pinning. This describes Rust binaries that don’t check their Cargo.lock file into the repository, for instance.
        4. Statically linked programs with dependency pinning. This describes Rust binaries that do check their Cargo.lock file into the repository. (For simplicity’s sake, we can include bundled but easily replaceable dependencies in this category.)
        5. Programs with hard-to-replace bundled dependencies, statically or dynamically linked (for instance, they complain about rustc’s LLVM, which is dynamically linked).

        I think it’s pretty clear that what the author is interested in isn’t actually the type of linking; they are interested in the ease of upgrading dependencies. This is why they don’t like Python programs despite the fact that those are the most dynamically linked. They happen to have tooling that works for the case of dynamically linked C/C++ programs (as long as the header files don’t change, and if they do, sucks to be the user), so they like them. They don’t have tooling that works for updating Python/Rust/Go/… dependencies, so they don’t like them.

        They do have a bit of a legitimate complaint here, in that it takes longer to relink all the statically linked dependencies than the dynamically linked ones, but this strikes me as very minor. Builds don’t take that long in the grand scheme of things (especially if you keep around intermediate artifacts from previous builds). The benefit of avoiding the C/C++ problem, where the statically linked parts and the dynamically linked parts can come from different code bases and not line up, strikes me as more than worth it.

        They seem to be annoyed with case 3 because it requires they update their tooling, and maybe because it makes bugs resulting from the equivalent of header file changes more immediately their problem. As you can guess, I’m not sympathetic to this complaint.

        They seem to be annoyed with case 4 because it also makes the responsibility for breaking changes in dependencies shift slightly from code authors to maintainers, and their tooling is even less likely to support it. This complaint mostly strikes me as entitled; the people who develop the code they are packaging are, for the most part, doing so for free (this is open source, after all) and haven’t made some commitment to support you updating their dependencies, so why should it be their problem? If you look at any popular C/C++ library on GitHub, you will find issues asking for support for exactly this sort of thing.

        Category 5 does have some interesting tradeoffs in both directions depending on the situation, but I don’t think this article does justice to either side… and I think getting into them here would detract from the main point.

        1. 5

          I was especially surprised to see this article on a Gentoo blog, given that, as I remember Gentoo (admittedly from 10-15 years ago), it was all about recompiling everything from source code, mainly For Better Performance, IIRC. And if you recompile everything from source anyway, I’d think that should solve this issue for “static linkage” too? But maybe Gentoo has changed its ways since?

          Looking at some other modern technologies, I believe Nix (and NixOS) also provides this feature of basically recompiling from source, and thus should make working with “static” vs. “dynamic” linking mostly the same? I’m quite sure arbitrary patches can be (and are) applied to apps distributed via Nix. And any time I run nix-channel --update and rebuild, I’m getting new versions of everything AFAIK, including statically linked stuff (obviously also risking occasional breakage :/)

          edit: Hm, Wikipedia does seem to say Gentoo is about rebuilding from source, so I’m now honestly completely confused about why this article is on Gentoo’s blog, of all distros…

          Unlike a binary software distribution, the source code is compiled locally according to the user’s preferences and is often optimized for the specific type of computer. Precompiled binaries are available for some larger packages or those with no available source code.

          1. 11

            “Build from source” doesn’t really solve the case of vendored libraries or pinned dependencies. If my program ships with liblob-1.15 and it turns out that version has a security problem, then a recompile will just compile that version again.

            You need upstream to update it to liblob-1.16 which fixes the problem, or maybe even liblob-2.0. This is essentially the issue; to quote the opening sentence of this article: “One of the most important tasks of the distribution packager is to ensure that the software shipped to our users is free of security vulnerabilities”. They don’t want to be reliant on upstream for this, so they take care to patch this in their packages, but it’s all some effort. You also need to rebuild all packages that use liblob<=1.15.

            I don’t especially agree with this author, but no one can deny that recompiling only a system liblob is a lot easier.

            1. 2

              AIUI the crux of Gentoo is that it provides compile-time configuration - if you’re not using e.g. Firefox’s Kerberos support, then instead of compiling the Kerberos code into the binary and adding “use_kerberos=false” or whatever, you can just not compile that dead code in the first place. And on top of that, you can skip a dependency on libkerberos or whatever, that might break! And as a slight side-effect, the smaller binary might have performance improvements. Also, obviously, you don’t need libkerberos or whatever loaded in RAM. Or even on disk.

              These compile-time configuration choices have typically been the domain of distro packagers, but Gentoo gives the choice to users instead. So I think it makes a lot of sense for a Gentoo user to have strong opinions about how upstream packaging works.
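
              A sketch of what this looks like in practice (the package name is illustrative, though kerberos is a real global USE flag):

              ```shell
              # In /etc/portage/make.conf: disable Kerberos support globally
              USE="-kerberos"

              # Or per package, in /etc/portage/package.use:
              # www-client/firefox -kerberos

              # Then rebuild anything whose effective USE flags changed:
              emerge --ask --changed-use --deep @world
              ```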

              1. 2

                But don’t they also advertise things like --with-sse2 etc., i.e. specific flags to tailor the packages to one’s specific hardware? Though I guess maybe hardware is uniform enough nowadays that a typical Gentoo user wants exactly the same flags as most others?

            2. 4

              This complaint mostly strikes me as entitled; the people who develop the code they are packaging are, for the most part, doing so for free (this is open source, after all) and haven’t made some commitment to support you updating their dependencies, so why should it be their problem?

              Maybe I’m reading too much into the post, but the complaints about version pinning seem to imply that application maintainers should be responsible for maintaining compatibility with any arbitrary version of any dependency the application pulls in. Of course application maintainers want to specify which versions they’re compatible with; it’s completely unrealistic to expect an application to put in the work to maintain compatibility with any old version that one distro or another might be stuck on. The alternative is a combinatorial explosion of headaches.

              Am I misreading this? I’m trying to come up with a more charitable reading but it’s difficult.

              1. 3

                I’m not sure. When I wrote a Lua wrapper for libtls, I attempted to support older versions, but the authors of libtls didn’t do a good job of versioning macros. I eventually gave up on older versions when I switched to a different libtls. I am not happy about this.

              2. 3

                Don’t JVM and CLR programs also do all dynamic linking all the time, or almost so?

                1. 2

                  Er, when I said “Only Python programs are of this form,” I just meant of the languages mentioned in the article. Obviously various other languages, including most interpreted languages, are similar in nature.

                  I think the JVM code I’ve worked on packaged all its (Java) dependencies inside the JAR file, which seems roughly equivalent to static linking. I don’t know what’s typical in the open-source world, though. I’ve never worked with CLR/.NET.

                  1. 3

                    It depends…

                    • Desktop or standalone Java programs usually consist of a collection of JAR files, and you can easily inspect them and replace/upgrade particular libraries if you wish.
                    • Many web applications that are deployed as WAR files on a web container (e.g. Tomcat) or an application server (e.g. Payara) have libraries bundled inside. This is a bit ugly and I do not like it much (you have to upload big files to servers on each deploy); however, you can still do the same as in the first case: you just need to unzip and re-zip the WAR file.
                    • Modular applications contain only their own code, plus a declaration of their dependencies in machine-readable form. So you deploy small files, e.g. on an OSGi container like Karaf, and dependencies are resolved during the deploy (the metadata lists the needed libraries and their supported version ranges). In this case you may have a library installed in many versions, and the proper one is linked to your application (other versions and other libraries are invisible even though they are present in the runtime environment). The introspection is very nice: you can watch how the application is starting, see whether it is waiting for some libraries or other resources, install or configure them, and then the starting process continues.

                    So it is far from static linking, and even if everything is bundled in a single JAR/WAR, you can easily replace or upgrade the libraries, or do some other hacking or studying.
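
                    Since a WAR is just a zip archive, the “unzip and re-zip” step can even be scripted. A hypothetical sketch in Python (the file and jar names are made up for illustration):

                    ```python
                    import zipfile
                    import shutil

                    # Hypothetical sketch: a WAR file is a plain zip archive, so replacing a
                    # bundled library means rewriting the archive with one entry swapped.
                    # All file names here (app.war, liblob-*.jar) are invented.

                    def build_demo_war(path):
                        # Stand-in for a real deployment artifact.
                        with zipfile.ZipFile(path, "w") as war:
                            war.writestr("WEB-INF/web.xml", "<web-app/>")
                            war.writestr("WEB-INF/lib/liblob-1.15.jar", b"old bytes")

                    def replace_jar(war_path, old_name, new_name, new_bytes):
                        tmp_path = war_path + ".tmp"
                        with zipfile.ZipFile(war_path) as src, zipfile.ZipFile(tmp_path, "w") as dst:
                            for item in src.infolist():
                                if item.filename != old_name:  # drop the old jar
                                    dst.writestr(item, src.read(item.filename))
                            dst.writestr(new_name, new_bytes)  # add the fixed one
                        shutil.move(tmp_path, war_path)

                    build_demo_war("app.war")
                    replace_jar("app.war", "WEB-INF/lib/liblob-1.15.jar",
                                "WEB-INF/lib/liblob-1.16.jar", b"new bytes")

                    with zipfile.ZipFile("app.war") as war:
                        print(sorted(war.namelist()))
                    ```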

              1. 3

                Writing a GUI app to help me do and manage backups of my photos. After my phone’s SD card quit on me, this jumped to the top of my list of free-time projects. And as I couldn’t find an existing one that would fit me, I obviously have to write my own from scratch, sigh. I first tried to do it in Go, but was surprised there’s no sensible embedded DB package in the ecosystem. So I switched to Nim, and surprisingly it’s going notably better. I’m using SQLite for the DB and wNim for the Windows GUI. Where Go certainly shines vs. Nim, though, is concurrent programming and io.Reader composability. Nim has some threading support; I’ve never used it before, so hopefully once I learn it, things will improve. Though after Go, coding threads in a language without an explicitly documented, human-readable memory model feels like blindly running through a minefield.

                1. 1

                  I think lots of people switch to Go because of Nim’s poor documentation. Nim has many features (maybe twice as many as Go), but you can’t find docs or tutorials for them. The reason is most probably that there is no foundation or the like behind Nim.

                  1. 1

                    Hm, I just invented the following quip: I could probably characterize these three languages on a D&D alignment scale:

                    • Rust - Lawful Good
                    • Go - Neutral Good
                    • Nim - Chaotic Good

                    I’d tweet it if I used twitter or mastodon ;)

                1. 20

                  I’m a bit confused by the conclusion of this article.

                  At this point I think it is actually pretty stable. I consider the project a big success.

                  A huge part of this has been due to Haskell and its excellent ecosystem

                  But throughout the article, you see the failings of the Haskell ecosystem: no IDE, poor refactoring tools, ridiculously slow compile times, a poor standard library, and unmaintained libraries. What exactly is the “excellent ecosystem” here? It sounds like this project succeeded despite Haskell being the main language.

                  Personally I think it’s fine to use whatever you want, but this article reeks of a rose-colored reimagining of what Haskell is, because the author enjoys coding in Haskell. Haskell isn’t typically used in production systems for projects like this, and this seems to be another anecdote as to why it isn’t production-ready rather than a success story.

                  1. 18

                    Haskell is in kind of an odd place: there are some things the author calls out that are problems (and thankfully the community has recently been taking great strides toward addressing a lot of them), but for a lot of people the benefits outweigh the problems. The challenge, of course, is that a lot of the things that are great about Haskell don’t exist in other languages at all, and that makes it particularly hard to tell a compelling story about it. “Everyone” knows you need an IDE to be successful in writing code, but nobody misses tooling like Hoogle, because it just doesn’t exist outside of the Haskell ecosystem. “Everyone” knows that build times are important, but Haskell is one of the few communities that have really embraced Nix (although that’s not a Haskell-specific tool), widely using it for distributed caching, builds, and reproducibility.

                    I don’t claim haskell is the best language, or that it’s right for everyone, but there are good reasons to use it in production. The particular strengths and weakness of haskell mean you’re paying quite different costs than you would with some other more mainstream languages, but for a lot of people that really is worth it, and I don’t think it’s just for the rose colored glasses.

                    1. 10

                      It was hard to include everything I wanted to say as it is already quite long. I’m trying to offer an evaluation of using Haskell in prod. As such I am making sure to point out all the pain points, but I also point out some really strong areas.

                      I guess what I don’t spell out is that Haskell the language (and many of the libraries) is what I would describe as ‘best in class’. You write less code, which is simpler (once you learn Haskell) and has much less scope for bugs to creep in, compared to popular programming languages. Despite the warts in Haskell and its ecosystem, it is stellar. And my conclusion is that it works well in prod too.

                      1. 5

                        Thanks a lot for this article! That’s exactly why I found it so interesting: a really practical and non-obvious list of pros and cons. It’s not often that I see articles like this. If you disregard the flamewar, the article got quite a few upvotes, which means a lot of people found it interesting too :)

                        1. 2

                          Thanks!

                    1. 2

                      Cool, I will probably try this out. Right now I’m using https://stackedit.io which is in-browser and has live preview. Even though it’s a browser app, it’s very fast. It doesn’t hit the network at all unless you tell it to. It’s essentially a “local first” app.

                      Does anyone use another markdown editor with live preview? I see VSCode. I saw things like

                      https://github.com/marktext/marktext

                      but I didn’t want to use an Electron app. I also looked at:

                      https://remarkableapp.github.io/linux/screenshots.html

                      but it wasn’t quite what I wanted.

                      1. 2

                        I’m a sucker for “WYSIWYG” for Markdown. I tried marktext for a while, but for some reason I don’t recall now, I eventually came back to https://www.zettlr.com/. It’s still Electron AFAIK, but I haven’t seen a non-Electron WYSIWYG Markdown editor yet, nor have I written one myself.

                        1. 2

                          I got a lot of alternative suggestions on the Linux subreddit. See if any of them fits your needs.

                        1. 5

                          I don’t get this.

                          It’s tagged [rust], but the project seems to be in Python?

                          Someone (the maintainers?) added a Rust dependency, and now the software won’t build on some architectures?

                          I’ll unflag this as off-topic if I get a good explanation of why this is on-topic, and not some misguided attempt to get more upvotes on a random GH issue.

                          1. 12

                            Judging from the sheer number of comments and reactions on the issue, this is a big deal. I’ve never seen a GitHub issue this active. It is a Python project, but the main problem is that LLVM/Rust is not available on some platforms. So I think both tags are justified.

                            1. 12

                              It’s a real-life case of a practical problem at an intersection of quite a few technical and social topics important for programmers (package management, backwards compatibility, security, cross-platform-ness, FFI), with no clear “perfect” solution, which makes various compromises in how to solve it worth pondering and discussing as to possible consequences/pros and cons.

                              A high “value density” case study, with the potential to yield lessons that can later inform smaller day-to-day cases by giving insights into the potential consequences of specific decisions.

                              1. 3

                                Yes, the discussion here has been enlightening. The discussion on GH was way too in medias res for someone who wasn’t already caught up on the issue to get much out of it.

                                For example, I’m still unclear about exactly what “pyca” is. It’s not a standard part of Python? If it isn’t, why is this project specifically so interesting that people who build docker images use it?

                                A thoughtful blog post (like the top comment by @ubernostrum) would have been a much better submission imho.

                                But I’ve removed my flag now, anyway.

                                1. 4

                                  pyca stands for Python Cryptographic Authority. It’s an organization that groups Python libraries and tools related to cryptography, to allow better interoperability between them. The term “Authority” is, in my opinion, a little bit excessive, as nobody requires you to use them; but as they are used by building-block tools, you still use them transitively, hence the shitload of comments in the issue.

                                  This is not the only “Authority” in the Python world; there is also the Python Packaging Authority (pypa), the Python Code Quality Authority (PyCQA), and maybe others that I don’t know about. As they often involve Python core developers, their “Authority” status looks genuine.

                              2. 8

                                I found it to be relevant and informative, but that’s probably just because my day job includes writing and maintaining Python and because I use Rust in several of my side projects.

                              1. 2

                                FWIW, you can shorten the last 2 lines in mode() to the following idiom: return mode_map[m] or m.
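
                                For readers following along in Python: there, indexing a missing dict key raises instead of yielding a nil-like value, so the same fallback idiom uses dict.get. A sketch with invented mode values (the original mode_map contents aren’t shown here):

                                ```python
                                # Sketch of the same "map it if known, else pass it through" idiom
                                # in Python. The mode_map contents are invented for illustration.
                                mode_map = {"r": "read", "w": "write"}

                                def mode(m):
                                    # .get(m, m) returns the mapped value if present, else m itself;
                                    # unlike `mode_map[m] or m`, it also behaves correctly if a
                                    # mapped value happens to be falsy.
                                    return mode_map.get(m, m)

                                print(mode("r"), mode("x"))  # → read x
                                ```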

                                1. 1

                                  wow, that looks great. I’m surprised by how clean and straightforward it is.

                                  1. 1

                                    Heh, neat. Thanks.

                                  1. 3

                                    Does anyone know when the videos might get uploaded after the fact? I won’t be able to attend live… or are there already some recordings of the live streams available somewhere?

                                    1. 2

                                      In a normal year the livestream videos are edited, reviewed, and approved pretty quickly by the speakers and room admins, usually in 1-7 days. This year the videos are all (99%) pre-recorded, but I think the editing is happening anyway, optionally to include the Q&A.

                                      Videos will be uploaded to the page of each individual talk in the room schedules.

                                    1. 9

                                      If you’re interested, see also my Go port of the project (now 59 commits behind upstream); a visual comparison of the results is available at: http://akavel.github.io/ditaa/

                                      1. 26

                                        These articles are a lovely read. As the articles explicitly discuss, this documents the theory and some of the implementation details of RE2. RE2 and these articles also directly inspired Go’s regexp package (authored by Russ himself) and Rust’s regex crate (authored by me).

                                        One of the main motivations of RE2 itself was that most popular regex engines were implemented via backtracking. Depending on the regex, executing a backtracking implementation may take exponential time in the size of the input, which can lead to things like REDoS. RE2 on the other hand, guarantees that it will always execute a regex search in O(mn) time, where m ~ len(regex) and n ~ len(haystack). For folks familiar with RE2 and these articles, that’s not news. So with that in mind, I figured I’d document some of the downsides of implementing regexes using the approach described by RE2. (And of course, I do not mean for the list to be exhaustive.)

                                        For context, and for folks not familiar with RE2 internals, RE2 is actually composed of a number of regex engines. In general, each regex engine occupies a particular space in performance vs features. That is, the faster the engine, the less versatile it is.

                                        • The Pike VM is the most versatile. It most closely resembles Thompson’s construction followed by an NFA simulation with the additional power of tracking capture group match locations. It can service all requests but has very high constant factors. This makes it extremely slow in practice, but, predictably slow.
                                        • The “bitstate backtracker” implements the classical backtracking algorithm, but maintains a bitset of which states have been visited. This maintains the algorithmic complexity guarantees at the expense of some overhead. The size of the bitset is proportional to len(regex) * len(haystack), so it can only be used on small regexes and small haystacks. But, it is in practice a bit quicker than the Pike VM. Other than the restrictions on sizes, it can service all requests.
                                        • The “one pass NFA” is like the Pike VM, but can only execute on regexes that never need to consider more than one possible transition for any byte of input. This leads to a substantial reduction in constant factors when compared to the Pike VM. For the subset of regexes on which this can execute, it can service all requests.
                                        • Finally, the “lazy DFA” or “hybrid NFA/DFA” is very much like a DFA, except it compiles itself at search time. In a large number of cases, this provides the performance benefits of a traditional DFA without having to pay the exorbitant cost of building the entire DFA before searching. This is the fastest regex engine in RE2, but can only provide the starting and ending offsets of a match. It cannot report the positions of capturing groups. (Getting the starting location of a match actually requires a separate reverse scan of the haystack, starting at the end of the match.)

                                        OK, so with that in mind, here are some downsides:

                                        • Even though RE2 and its ilk will never take exponential time on any input, they can still take a long time to execute. While search time is usually said to be linear in the size of the haystack, this assumes that the size of the regex is held as a constant. As I said above, execution is actually O(mn), so if your regex is large, then search time can be especially slow. For example: https://github.com/golang/go/issues/7608
                                        • DFAs don’t handle look-around too well. And if you try to add it (even for a small fixed size), it can blow up their size pretty quickly. RE2 does add limited single-byte look-around support to the DFA to deal with ^, $ and \b. Unfortunately, this prevents things like $ from treating \r\n as a line terminator and \b from being Unicode-aware. Rust’s regex crate does make \b Unicode-aware by default, so this is a pain point.
                                        • In order to resolve the capture group locations, RE2 will typically first run the DFA to find the span of the entire match and then one of {Pike VM, bitstate backtracker, one-pass NFA} to find the spans of each capturing group. This results in three passes over the matching portion of the text (twice with the DFA), which adds overhead to each match that a pure backtracking implementation doesn’t typically have. Moreover, the Pike VM and bitstate backtrackers are quite slow on their own, typically much slower than the backtracking implementations found in places like PCRE. (When PCRE doesn’t exhibit catastrophic backtracking, of course.)
                                        • Unicode is a tricky thing to figure out. Both RE2 and Go have fairly limited support for Unicode. Things like \w and \s, for example, are not Unicode-aware by default. Neither is \b. Rust’s regex crate, on the other hand, makes all of those things Unicode-aware. Plus a lot of other stuff. The main problem here is that Unicode really explodes the sizes of things, particularly the DFA. For example, if you fully compile the DFA for Unicode-aware \w, its size in memory is about 300KB. But a non-Unicode-aware \w is a mere 1.3KB. Now, this memory usage isn’t fully realized because of the lazy DFA, but it does mean you need to spend more time building out parts of the DFA depending on what kind of text you’re searching.
                                        • For similar reasons, large bounded repetitions aren’t handled well because they lead to very large automatons. When the automatons get large, you wind up with huge constant factors that slow down your search. In general, the worst cases are the things that would otherwise cause a fully compiled DFA to be exponential in size with respect to the regex. For example, [01]*1[01]{N} will produce about 2^N states in a traditional DFA.
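
                                        The Unicode point in particular is easy to see from Python, whose re module (a backtracking engine, unlike RE2, but facing the same semantic choice) makes \w Unicode-aware unless you opt out:

                                        ```python
                                        import re

                                        # Unicode-aware vs ASCII-only \w. Python's re is a backtracking
                                        # engine, not RE2, but it exposes the same semantic split
                                        # discussed above.
                                        print(bool(re.fullmatch(r"\w+", "héllo")))            # True: é is a word char
                                        print(bool(re.fullmatch(r"\w+", "héllo", re.ASCII)))  # False: ASCII \w excludes é
                                        ```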

                                        I’ve never studied a production grade pure-backtracking regex engine in a lot of detail, but one of the main conclusions to draw here is that in order to get a regex engine that is both fast in practice and fast in theory, you need a lot of complexity. That is, you need a lot of different regex engines implemented internally to deal with all of the different cases. If you give up on needing to be fast in practice, then you can just implement the Pike VM and be done with it. At that point, the hardest parts are probably the parser and the Thompson construction.
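
                                        To make that last point concrete, here is a toy, Pike-VM-flavored matcher in Python; this is my own illustrative sketch, not RE2’s code. It handles only single-character literals, '.', and a postfix '*', and it advances a whole set of NFA states one haystack character at a time, so it never backtracks and runs in O(len(pattern) * len(text)):

                                        ```python
                                        # Toy Thompson/Pike-style NFA simulation (illustrative sketch only).
                                        # Supported syntax: single-char literals, '.', and postfix '*'.
                                        # Matches the whole string; never backtracks, so worst-case time is
                                        # O(len(pattern) * len(text)).

                                        def compile_pattern(pattern):
                                            """Parse into (char, starred) tokens; token i is also NFA state i."""
                                            tokens, i = [], 0
                                            while i < len(pattern):
                                                starred = i + 1 < len(pattern) and pattern[i + 1] == '*'
                                                tokens.append((pattern[i], starred))
                                                i += 2 if starred else 1
                                            return tokens

                                        def match(pattern, text):
                                            tokens = compile_pattern(pattern)

                                            def close(states):
                                                # Epsilon closure: a starred token may be skipped entirely.
                                                out, stack = set(states), list(states)
                                                while stack:
                                                    s = stack.pop()
                                                    if s < len(tokens) and tokens[s][1] and s + 1 not in out:
                                                        out.add(s + 1)
                                                        stack.append(s + 1)
                                                return out

                                            states = close({0})
                                            for ch in text:
                                                nxt = set()
                                                for s in states:
                                                    if s < len(tokens):
                                                        c, starred = tokens[s]
                                                        if c == ch or c == '.':
                                                            # A starred token may match again; otherwise advance.
                                                            nxt.add(s if starred else s + 1)
                                                states = close(nxt)
                                            return len(tokens) in states

                                        print(match("a*b", "aaab"), match("a*b", "b"), match("a*b", "ba"))  # → True True False
                                        ```

                                        A real Pike VM additionally threads capture-group slots through each state, which is where much of the extra complexity, and the constant-factor cost mentioned above, comes from.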

                                        With all that said, Hyperscan also guarantees linear time, but it goes about the problem in a very different way.

                                        1. 1

                                          Could you have a really dumb (exponential time) backtracking implementation that counts how many times it has backtracked, and bails out and switches to the O(nm) implementation stack once it has backtracked more than say 2 times per input byte?

                                          I expect this would not be an improvement to predictability. :)

                                          1. 2

                                            In theory yes. But usually you use backtracking to provide additional features that can’t be provided using finite automata. So in order for your idea to work, you also need to restrict the regex features used.

                                            IIRC, PCRE has a backtracking limit feature. That’s one of the reasons why its searches can fail. (Searches in RE2 or the regex crate can never fail.)

                                          2. 1

                                            Thank you for the awesome writeup! Do you have a version of it on your blog? I’m pretty sure a lot of people would find it interesting!

                                            I’m curious if you explored PEGs too, and LPEG in particular, and if yes would it be a huge stretch to ask if you fancied a similar writeup and in particular comparison to RE2? I know it’s a huge ask, but that’s probably the only chance I’ll ever have to pose it with a non-0 probability of getting an answer, so I’m not gonna skip it :D

                                            1. 3

                                              No, I don’t, and no, I haven’t really explored PEGs. Sorry! PEGs live in a different domain than general-purpose regex engines, I think, which is why I haven’t explored them much.

                                              As for whether I could write such a post, maybe. The problem is that blog posts take a long time to write. I have a couple in the pipeline, but they’ve been there for over a year already. :-( I started a notes repo with the intent of publishing lower effort and possibly non-technical posts. So maybe I could put it there.

                                              1. 2

                                                In the back of my head, I’m slowly baking a thought about how to make a lowest-effort pipeline for picking some comments I wrote here or on HN and converting them into something like the “notes” you mention. Not yet sure how to present them to make the result attractive/approachable for readers. (Especially the “question” part.) Some “Socratic” dialogs, or what?

                                          1. 7

                                            I don’t want to 💩 on the author’s writeup here, because it is a decent one. I’m using it to launch another public objection to Go Generics.

                                            A lot of proposals for and write-ups about Go Generics seem to miss that there’s a very large group of Go users who object to Generics, and for good reason. It’s not because this group questions the efficacy of generics in solving very specific problems very well – objectors are generally well attuned to Generics’ utility. What’s objected to is the necessity of Generics. The question we pose is: do we need generics at all? Are the problems that Generics solve so important that Generics should pervade the language?

                                            From the author’s conclusion:

                                            I was able to solve a problem in a way that was not previously possible.

                                            Being able to solve problems in new ways isn’t always valuable; it can even be counter-productive.

                                            1. 24

                                              Nothing is necessary except an assembler. Well, you don’t even need the assembler, you can just flip the bits yourself.

                                              Go has an expressiveness gap. It has some big classes of algorithms that can’t be made into libraries in a useful way. Most people advocate just rewriting basically the same code over and over forever, which is kind of crazy and error-prone. Other people advocate code-generation tools with go generate, which is totally crazy and error-prone, even with the decent AST tools in the stdlib. Generics close the gap pretty well, they’re not insanely complex, and people have had decades to get used to them. If you don’t want to use them yourself, don’t use them, but accept that there are people for whom, say, the ability to just go get a red-black tree implementation that they can use with a datatype of their own choosing, without loss of type safety or performance, will greatly improve the usefulness of the language.

                                              Plus, from a purely aesthetic standpoint, it always seemed criminal to me to have a language that has first-class functions, and lexical closure, but in which you can’t even write map because its type is inexpressible.

                                              1. 9

                                                Go has an expressiveness gap.

                                                That’s true. You’ve identified some of the costs. Can you identify some of the benefits, too?

                                                1. 12

                                                  Easy: not having a feature protects you from bright idiots who would misuse it.

                                                  Honestly though, that’s the only argument I can make against generics. And it’s not even valid, because you could say this about almost any feature. It’s a fully general counter-argument: give people hammers, and some will whack each other’s heads instead of hitting nails.

                                                  Assuming basic competency of the users, and assuming they were designed from the ground up, generics have practically no downsides. They provide huge benefits at almost no marginal cost. There is a sizeable up-front cost for the language designer and the compiler writer, but they were willing to pay that kind of price when they set out to build a general-purpose language, weren’t they?

                                                  1. 2

                                                    They provide huge benefits at almost no marginal cost.

                                                    If this huge benefit applies only in a minor part of the project, or even in a minority of projects, then it has to be balanced and thought through.

                                                    Right now, I don’t know many people who work with Go daily telling me that not having generics makes their day a pain.

                                                    Most of them told me that it’s sometimes painful, but that’s actually pretty rare.

                                                    There is a sizeable up-front cost for the language designer and the compiler writer, but they were willing to pay that kind of price when they set out to build a general-purpose language, weren’t they?

                                                    Is the burden really on them? To me it is on the program writer.

                                                    1. 8

                                                      There’s likely a survivorship bias going on there.

                                                      I used Go as a programming language for my side projects for years. The thing that finally got me to give it up was the lack of generics. In writing PISC, the way I had approached it in Go ended up causing a lot of boilerplate for binding functions.

                                                      Go is something I’d happily write for pay, but I prefer expressiveness for my side projects now, as the amount of effort that goes into a side project is a big determining factor in how much I can do in one.

                                                      1. 3

                                                        There is a sizeable up-front cost for the language designer and the compiler writer, but they were willing to pay that kind of price when they set out to build a general-purpose language, weren’t they?

                                                        Is the burden really on them? To me it is on the program writer.

                                                        Assuming we are a collaborative species (we mostly are, with lots of exceptions), then one of our goals should be minimizing total cost. Either because we want to spend our time doing something else, or because we want to program even more stuff.

                                                        For a moderately popular programming language, the users will far outnumber and outproduce the maintainers of the language themselves. At the same time, the language maintainers’ work has a disproportionate impact on everyone else. To such a ludicrous extent, in fact, that it might be worth spending months on a feature that would save users a few seconds per day. Like compilation speed.

                                                        Other features, like generics, will affect fewer users, but (i) they will affect them in a far bigger way than shaving a few seconds off compilation time would, and (ii) those particular users tend to be library writers, and as such they will have a significant impact on the rest of the community.

                                                        So yes, the burden really is on the language creators and compiler writers.


                                                        Note that the same reasoning applies when you write more mundane software, like a train reservation system. While there is rarely any monetary incentive to make that kind of thing not only rock solid, but fast and easy to work with, there is a moral imperative not to inflict misery upon your users.

                                                    2. 5

                                                      I haven’t used Go in anger but here are some benefits from not including generics.

                                                      • Generics are sometimes overused, e.g. many C++ libraries.
                                                      • The type system is simpler.
                                                      • The compiler is easier to implement and high quality error messages are easier to produce.
                                                      • The absence of generics encourages developers to use pre-existing data structures.
                                                    3. 2

                                                      If red-black trees and map were just built in to Go, wouldn’t that solve 90% of the problem, for all practical purposes?

                                                      What I really miss in Go is not generics, but something that solves the same problems as multiple dispatch and operator overloading.

                                                      1. 3

                                                        Sort of, but no. There are too many data structures, and too many useful higher-order functions, to make them all part of the language. I was just throwing out examples, but literally just a red-black tree and map wouldn’t solve 90% of the problem. Maybe 2%. Everyone has their own needs, and Go is supposed to be a small language.

                                                        1. 1

                                                          Data structures and higher-order functions can already be implemented in Go, though, just not by using generics as part of the language.

                                                    4. 15

                                                      Technically Go does have generics; they just aren’t exposed to the end developer, except in the form of the builtin map and array types, and are only usable by the language’s internal developers. So in a sense, Go does need generics, and they already pervade the language.

                                                      I don’t personally have a horse in this race and don’t work with Go, but from a language-design perspective it does seem strange to limit user-developed code in such a way. I’d be curious what your thoughts are on why this discrepancy is OK and why it shouldn’t be fixed by adding generics to the language.

                                                      1. 14

                                                        I don’t personally have a horse in this race and don’t work with Go, but from a language-design perspective it does seem strange to limit user-developed code in such a way.

                                                        Language design is all about limiting user-defined code to reasonable subsets of what can be expressed. For a trivial example, why can’t I name my variable ‘int’? (In Myrddin, as a counterexample, var int : int is perfectly legal and well defined.)

                                                        For a less trivial example, relatively few languages guarantee tail recursion – this also limits user developed code, and requires programmers to use loops instead of tail recursion or continuation passing style.

                                                        Adding generics adds a lot of corner cases to the type system, and increases the complexity of the language a good deal. I know. I implemented generics, type inference, and so on in Myrddin, and I’m sympathetic to leaving generics out (or, as you say, extremely limited) to put a cap on the complexity.

                                                        1. 3

                                                          I see only two legitimate reasons to limit a user’s capabilities:

                                                          1. Removing the limitation would make the implementer’s life harder.
                                                          2. Removing the limitation would allow the user to shoot themselves in the foot.

                                                          Limiting tail recursion falls squarely in (1). There is no way that guaranteeing tail recursion would cause users to shoot themselves in the foot. Generics is another matter, but I strongly suspect it is more about (1) than it is about (2).

                                                          Adding generics adds a lot of corner cases to the type system, and increases the complexity of the language a good deal.

                                                          This particular type system, perhaps. This particular language, maybe. I don’t know Go, I’ll take your word for it. Thing is, if Go’s designers had the… common sense not to omit generics from their upcoming language, they would have made a slightly different language, with far fewer of the corner cases they will inevitably suffer now that they’re adding generics after the fact.

                                                          Besides, the complexity of a language is never a primary concern. The only complexity that matters is that of the programs written in that language. Now, the complexity of a language does negatively impact the complexity of the programs that result from it, if only because the design space is bigger. On the other hand, this complexity has the potential to pay for itself, and end up being a net win.

                                                          Take C++ for instance. Every single feature we add to it increases the complexity of the language, to almost unbearable levels. I hate this language. Yet, some of its features definitely pay for themselves. Range-for, for instance, while it slightly complicates the language, makes programs that use it significantly cleaner (although only locally). That particular feature definitely pays for itself. (We could discuss other examples, but this one has the advantage of being uncontroversial.)

                                                          As far as I can tell, generics tend to massively pay for themselves. Not only do they add flexibility in many cases, they often add type safety (not in C++, they don’t). See for instance this function:

                                                          foo : (a -> b) -> [a] -> [b]
                                                          

                                                          This function has two arguments (where a and b are unknown types): a function from a to b, and a list of a. It returns a list of b. From this alone, there is a lot we can tell about this function. The core idea here is that the body of the function cannot rely on the contents of generic types. This severely constrains what it can do, including the bugs it can have.

                                                          So, when we write let ys = foo f xs, here’s what we can expect before we even look at the source code:

                                                          • Assuming f is of type a->b, then xs is a list of a, and the result ys is a list of b.
                                                          • The elements of ys, if any, can only come from elements of xs.
                                                            • And they must have gone through f.
                                                            • Exactly once.
                                                          • The function f itself does not affect the number or order of elements in the result ys.
                                                          • The elements of xs do not individually affect the number or order of elements in the result ys.
                                                          • The only thing that affects the number or order of elements in the result ys is the size of xs (and the code of foo, of course).

                                                          This is quite unlike C++, or other template/monomorphisation approaches. Done right, generics have the opportunity to remove corner cases in practice. Any language designer deciding they’re not worth their while better have a damn good explanation. And in my opinion, the explanations offered for Go weren’t satisfactory.

                                                          1. 4

                                                            Complexity of a language is the primary concern!

                                                            Languages are tools to express ideas, but expressiveness is a secondary concern, in the same way that the computer is the secondary audience. Humans are the primary audience of a computer program, and coherence is the primary concern to optimize for.

                                                            Literary authors don’t generally invent new spoken languages because they’re dissatisfied with the expressive capability of their own. Artful literature is that which leverages the constraints of its language.

                                                            1. 4

                                                              Literary authors don’t generally invent new spoken languages because they’re dissatisfied with the expressive capability of their own. Artful literature is that which leverages the constraints of its language.

                                                              Eh, I have to disagree here. Literary authors try to stretch and cross the boundaries of their spoken languages all the time, specifically because they search for ways to express things that were not yet expressed before. To give some uncontroversial examples, Shakespeare invented 1700 new words, and Tolkien invented not one but a couple of whole new languages.

                                                              I am but a very low level amateur writer, but I can tell you: the struggle with the tool to express your ideas is as real with spoken languages as it is with programming languages. It is an approach from another direction, but the results from spoken languages turn out to be as imperfect as those from programming ones.

                                                              1. 1

                                                                I’d argue that constrained writing is more common, if nothing else because showing one’s mastery of a shared language is more impressive than adding unknown elements.

                                                                Tolkien’s Elvish languages, while impressively complete, are simply used as flavor to the main story. The entire narrative instead leans heavily on tropes and language patterns from older (proto-English) tales.

                                                                1. 1

                                                                  Yes, you have a point. I mentioned Tolkien because he was the first writer to create a new language that I could come up with. But in the end, if you want to express an idea, then your audience must understand the language that you use, otherwise they will not get your message. So common language and tropes can help a lot.

                                                                  However, I think your mention of constrained writing is interesting. Because in a way, Go not having generics is similar to the constraint that a sonnet must follow a particular scheme in form and content. It is perfectly possible to add generics to Go, the same way it is very possible to slap another tercet at the end of a sonnet. Nothing is stopping you, really. Except that then it would no longer be a sonnet. Is that a bad thing? I guess not. But still, almost no one does it.

                                                                  I’d say that the rules, or the constraints, are a form of communication too. If I read a sonnet, I know what to expect. If I read Go, I know what to expect. Because some things are ruled out, there can be more focus on what is expressed within the boundaries. As a reader you can still be amazed. And, the same as in Go, if what you want to express really does not fit in the rules of a sonnet, or if it is not worth the effort to try it, then you can use another form. Or another programming language.

                                                                2. 1

                                                                  Your points don’t conflict with my points, and I agree with them.

                                                                3. 2

                                                                  Can we agree that the goal of programming languages is to reduce costs?

                                                                  • Cost of writing the program.
                                                                  • Cost of errors that may occur.
                                                                  • Cost of correcting those errors.
                                                                  • Cost of modifying the program in the face of unanticipated new requirements.

                                                                  That kind of thing. Now we must ask what influences those costs. What about increased expressiveness?

                                                                  A more expressive language might be more complex (that’s bad), more error prone (that’s bad), and allow shorter programs (that’s good), or even clearer programs (that’s good). By only looking at the complexity of the language, you are ignoring many factors that often matter a whole lot more.

                                                                  Besides, that kind of reasoning quickly breaks down when you take it to its logical extreme. No one in their right mind would use the simplest language possible, which would be something like the Lambda Calculus, or even just the iota combinator. Good luck writing (or maintaining!) anything worth writing in those.

                                                                  Yes, generics make a language more complex. No, that’s not a good enough argument. If it were, the best language would use only the iota combinator. And after working for years in a number of languages (C, C++, OCaml, Python, Lua…), I can tell you with high confidence that generics are worth their price several orders of magnitude over.

                                                                  1. 2

                                                                    I agree with you that generics can be hugely net positive in the cost/benefit sense. But that’s a judgment that can only be made in the whole, taking into account the impact of the feature on the other dimensions of the language. And that’s true of all features.

                                                                    1. 1

                                                                      Just popping in here because I have minimal experience with Go, but a decent amount of experience in languages with generics, and I’m wondering: if we set aside the implementation challenge, what are some examples of the “other dimensions” of the language which will be negatively impacted by adding generics? Are these unique to Go, or general trade-offs in languages with generics?

                                                                      To frame it another way: maybe a naive take, but I’ve been pretty surprised to see generics in Go being rejected due to “complexity”. I agree that complexity ought to be weighed against utility, but can we be a little more specific? Complexity of what, specifically? In what way will writing, reading, compiling, running, or testing code become more complicated when my compiler supports generics? Is this complexity present even if my own code doesn’t use generics?

                                                                      And just a final comparison on language complexity. I remember when Go was announced, the big-ticket feature was its m:n threaded runtime and support for CSP-style programming. These runtimes aren’t trivial to implement, and certainly add “complexity” via segmented stacks. But the upside is the ability to ergonomically express certain kinds of computational processes that otherwise would require much more effort in a language without these primitives. Someone decided this tradeoff was worth it and I haven’t seen any popular backlash against it. This feature feels very analogous to generics in terms of tradeoffs, which is why I’m so confused about the whole “complexity” take. And, maybe another naive question, but wouldn’t generics be significantly less tricky to implement than m:n threads?

                                                                      1. 5

                                                                        It isn’t just implementation complexity of generics itself. It’s also sure to increase the complexity of source code itself, particularly in libraries. Maybe you don’t use generics in your code, but surely some library you use will use generics. In languages that have generics, I routinely come across libraries that are more difficult to understand because of their use of generics.

                                                                        The tricky part is that generics often provides some additional functionality that might not be plausible without it. This means the complexity isn’t just about generics itself, but rather, the designs and functionality encouraged by the very existence of generics. This also makes strict apples-to-apples comparisons difficult.

                                                                        At the end of the day, when I come across a library with lots of type parameters and generic interfaces, that almost always translates directly into spending more time understanding the library before I can use it, even for simple use cases. That to me is ultimately what leads me to say that “generics increases complexity.”

                                                                        1. 2

                                                                          what are some examples of the “other dimensions” of the language which will be negatively impacted by adding generics?

                                                                          From early golang blog posts I recall generics add substantial complexity to the garbage collector.

                                                                          The team have always been open about their position (that generics are not an early priority, and they will only add them if they can find a design that doesn’t compromise the language in ways they care about). There have been [numerous proposals rejected](https://github.com/golang/go/issues?page=3&q=generics++is%3Aclosed+label%3AProposal) for varied reasons.

                                                                          Someone decided this tradeoff was worth it and I haven’t seen any popular backlash against it

                                                                          There’s no backlash against features in new languages, because there’s nobody to do the backlash.

                                                                          Go has already got a large community, and there’s no shortage of people who came to Go because it was simple. For them, adding something complex to the language is frightening, because they have invested substantial time in an ecosystem because of its simplicity. Time will tell whether those fears were well-founded.

                                                                    2. 1

                                                                      No, expressiveness is the only reason for languages to exist. As you say, humans are the primary audience. With enough brute force, any language can get any task done, but what we want is a language that aids the reader’s understanding. You do that by drawing attention to certain parts of the code and away from certain parts, so that the reader can follow the chain of logic that makes a given program or function tick, without getting distracted by irrelevant detail. A language that provides the range of tools to let an author achieve that kind of clarity is expressive.

                                                                      1. 2

                                                                        I think we are using “expressive” differently. Which is fair, it’s not really a well-defined term. But for me, expressiveness is basically a measure of the surface area of the language, the features and dimensions it offers to users to express different ideas, idioms, patterns, etc. Importantly, it’s also proportional to the number of things that it’s users have to learn in order to be fluent, and most of the time actually exponentially proportional, as emergent behaviors between interacting features are often non-obvious. This is a major cost of expressiveness, which IMO is systemically underestimated by PLT folks.

                                                                    3. 3

                                                                      I implemented generics. You’re trying to convince me that it’s worth implementing generics. Why?

                                                                      Besides, the complexity of a language is never a primary concern.

                                                                      I disagree. I think implementation matters.

                                                                  2. 2

                                                                    That’s an interesting observation; thanks for sharing it.

                                                                    they just aren’t exposed to the end developer

                                                                    I think this supports my point better than I’m able to. Language design is just as much about what is hidden from developers as what is exposed. That generics are hidden from end users is something I greatly appreciate about Go. So when I refer to generics, I’m referring to generics used by everyday developers.

                                                                    I’d be curious what your thoughts are on why this discrepancy is OK and why it shouldn’t be fixed by adding generics to the language.

                                                                    In my opinion the greatest signal that Go doesn’t need generics is the wonderfully immense corpus of code we have from the last decade – all written without generics. Much of it written with delight by developers who chose Go over other languages for its pleasant simplicity and dearth of features.

                                                                    That is not to say that some of us occasionally could have written less code if generics were available. Particularly developers writing library or framework code that would be used by other developers. Those developers absolutely would have been aided by generics. They would have written less code; their projects may have cost less to initially develop. But for every library/framework developer there are five, ten, twenty (I can’t pretend to know) end user application developers who never had the cognitive load of genericized types foisted on them. And I think that is an advantage worth forgoing generics for. I don’t think I’m particularly smart. Generics make code less readable to me. They impose immense cognitive load when you’re new to a project. I think there are a lot of people like me. After years of Java and Scala development, Go to me is an absolute delight with its absence of generics.

                                                                    1. 6

                                                                      In my opinion the greatest signal that Go doesn’t need generics is the wonderfully immense corpus of code we have from the last decade

                                                                      I don’t have a ready example, but I’ve read that the standard library itself conspicuously jumped through hoops because of the lack of generics. I see it as a very strong sign (that’s an understatement) that the language has a dire, pervasive need for generics. Worse, it could have been noticed even before the language went public.

                                                                      If you had the misfortune of working with bright but incompetent architecture astronauts who used generics as an opportunity to build an overly generic behemoth “just in case” instead of solving the real problem in front of them, well… sorry. Still, I would hesitate to blame the language’s semantics for the failings of its community.

                                                                  3. 7

                                                                    I don’t remember the exact details, it was super long ago, but I once wanted to write an editor centered around a nontrivial data structure (“table chain” or “string table” or whatever it was called). The editor also had some display structures (~cells of a terminal). At some point I needed to experiment with rapidly changing the type of the object stored in both the “cells” and the “chains” of the editor (e.g. to see if adding styles etc. per character might make sense from an architectural point of view). If you squint, those are both kinds of “containers” for characters (a Haskeller would maybe say monads? dunno). I had to either manually change all the places where the original “character” type was used, or fall back to interface{}, losing all the benefits of static typing that I really needed. Notably this was long before type aliases, which would possibly have let me push a bit further, though it’s hard for me to recall now. But the pain and impossibility of rapid prototyping at that point was so great that I didn’t see how I could keep working on the project, and abandoned it. Not sure if immediately then, or some time later, I realized that this is the rare moment where generics would be valuable: letting me explore designs I cannot realistically explore now.

                                                                    In other words, what others say: nontrivial/special-purpose “containers”. You don’t need them until you do.

                                                                    Until then I fully subscribed to “don’t need generics in Go” view. Since then I’m in “don’t need generics in Go; except when do”. And I had one more hobby project afterwards that I abandoned for exactly the same reason.

                                                                    And I am fearful, and do lament, that once they are introduced we’ll probably see everyone around abusing them for a lot of unnecessary purposes, and that this will be a major change to the taste of the language. That makes me respect the fact that the Team are taking their time. But I do miss them, and if the Team grudgingly accepts the current draft as passable, that is such a high bar that it makes me extremely excited for what’s to come; it will be one of the best ways this compromise could be introduced, given that most decisions in languages are compromises.

                                                                    1. 6

                                                                      Yeah, Go is very much not a language for rapid prototyping. It expects you to come to the table with a design already in mind.

                                                                      1. 2

                                                                        Umm, what? Honestly not sure if you mean this or are being sarcastic (and if so, I don’t see the point). I prototyped quite a lot of things in Go no problem. I actually hold it as one of my preferred languages for rapid prototyping if I expect I might want to keep the result.

                                                                        1. 5

                                                                          I’m being totally serious. Go is chock full of stuff that makes typical rapid prototyping extremely difficult. A lack of a REPL. Compiler errors on unused variables. Verbose error handling. And so on. All of these things combine to make it harder to “design on the fly”, so to speak, which is what rapid prototyping frequently means.

                                                                          With that said, Go works great for prototyping in the “tracer bullet” methodology. That’s where your prototype is a complete and production quality thing, and the iteration happens at a higher level.

                                                                          1. 1

                                                                            Got it, thanks! This made me realize that I reach for different languages in different cases for prototyping. I’m not yet really sure why. But I feel that sometimes the dynamic types of Lua make me explore faster, whereas sometimes the static types of Go or Nim make me explore faster.

                                                                    2. 4

                                                                      I’m going to assume you’re arguing in good faith here, but as a lurker on the go-nuts mailing list, I’ve seen too many people say “I don’t think generics are necessary” or “I haven’t heard a good enough reason for the complexity of generics”. It’s worth pointing out the Go team has collected feedback. Ian Lance Taylor (one of the current proposal’s main authors) spends a large portion of time responding to emails/questions/objections.

                                                                      I read a comment from someone who was on the Kubernetes team that part of the complexity of the API (my understanding is they have a pseudo-type system inside) stems from the fact that proto-Kubernetes was written in Java, and the differences between the type systems, compounded by the lack of generics, created lots of complexity. (NOTE: I don’t remember who said this, and I am just some rando on the net, but that sounds like a decent example of the argument for generics. Yes, you can redesign everything to be more idiomatic, but sometimes there is a compelling need to do things like transfer a code base to a different language.)

                                                                      1. 1

                                                                        Ouch, I was wondering why the Kubernetes API looks so painfully like Java and not like Go. TIL that’s because it was literally a dumb translation from Java. :/ As much as I’m a pro-generics-in-Go guy, I’m afraid that’s a bad case for an argument, as I strongly believe it is a really awful and unidiomatic API from Go perspective. Thus I by default suspect that if its authors had generics at their disposal, they’d still write it Java-style and not Go-style, and probably still complain that Go generics are different from Java generics (and generally that Go is not Java).

                                                                      2. 3

                                                                        I don’t know if the author’s example was a good one to demonstrate the value of generics, but a cursory look at the diff would suggest he didn’t really gain anything from it. I always thought a huge benefit of generics was it saved you 10s or even 100s of lines of code because you could write one generic function and have it work for multiple types. He ended up adding lines. Granted, the author said it was mostly from tests, but still there doesn’t seem to be any dramatic savings here.

                                                                        1. 3

                                                                          I recommend taking more than a cursory look. The value here is very much in the new library interface. In effect, the package provides generalize channels, and before the change, that generalization meant both a complicated interface, and losing compiler-enforced type safety.
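For a sense of why the new interface matters, here is a sketch (not the actual package’s API) of one generalized channel operation. With a type parameter the caller gets back a typed `<-chan T`; a pre-generics version would have to traffic in `interface{}`, pushing type errors to runtime:

```go
package main

import "fmt"

// Merge fans several input channels into one output channel.
// This is a hypothetical example of a "generalized channel"
// utility, not the API of the package under discussion.
func Merge[T any](ins ...<-chan T) <-chan T {
	out := make(chan T)
	done := make(chan struct{})
	for _, in := range ins {
		go func(in <-chan T) {
			for v := range in {
				out <- v
			}
			done <- struct{}{}
		}(in)
	}
	go func() {
		// Close out only after every input has drained.
		for range ins {
			<-done
		}
		close(out)
	}()
	return out
}

func main() {
	a := make(chan int, 2)
	b := make(chan int, 1)
	a <- 1
	a <- 2
	close(a)
	b <- 3
	close(b)

	sum := 0
	for v := range Merge[int](a, b) {
		sum += v
	}
	fmt.Println("sum:", sum) // sum: 6
}
```

The arrival order of merged values is nondeterministic, but the element type is checked at compile time, which is exactly what the `interface{}`-based version gives up.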

                                                                      1. 1

                                                                        As much as Erlang is an interesting language, I believe its niche is narrower than the one Go fell into in the marketplace, and mostly in the same area of “microservices” (or, I’d generalize, “network services”). So I’m really confused about what you expect of it, if you suggest you don’t feel challenged/fulfilled by this domain. Or, if you do like this domain and just want to get into huge legacy systems (but do you really?), how about some C++ or Java? In other words, what kind of fulfillment are you looking for?

                                                                        1. 2

                                                                          I was wondering if by carefully choosing a technology, one can match projects fulfilling certain expectations. For example, using Go which I believe is still riding a hype-train, you can expect a lot of projects built by inexperienced programmers, writing X in Go (X being a previously used language). Many times I have seen a Ruby or JavaScript code written with Go syntax.

                                                                          What I was curious about is whether, by picking an established environment like Erlang (not Elixir), one can expect that a project using it will consist of professionals who are more interested in building quality software than in chasing the latest trends.

                                                                          I agree that the choice of the technology should be dictated by the problem. For this reason I considered Erlang.

                                                                        1. 12

                                                                          Some more interesting articles on the website, including: http://jehanne.io/2018/11/15/simplicity-awakes.html

                                                                          I wonder why GCC and not e.g. tcc, if simplicity is stated as the primary goal?

                                                                          The author ponders package management in one place, I’m curious if they’d like https://github.com/andrewchambers/hermes

                                                                          1. 4

                                                                            I think the answer to this question is that much software relies on a (GNU) C standard not fully supported by tcc. As far as I know tcc only supports C89 fully (maybe even GNU89 although I am not sure)

                                                                            1. 1

                                                                              Is there an alternative minimal C toolchain that supports C11?

                                                                              1. 7

                                                                                cproc? The vast majority of packages in Oasis Linux are compiled with it.

                                                                                1. 1

                                                                                  Even if it supports C11, that wouldn’t be sufficient. Many applications (Linux, for example) use so-called GNU C extensions, which are only fully supported by clang and gcc.

                                                                            1. 33

                                                                              Disclaimer: I represent a GitHub competitor.

                                                                              The opening characterization of GitHub detractors is disingenuous:

                                                                              The reasons for being against GitHub hosting tend to be one or more of:

                                                                              1. it is an evil proprietary platform
                                                                              2. it is run by Microsoft and they are evil
                                                                              3. GitHub is American thus evil

                                                                              GitHub collaborated with US immigration and customs enforcement under the Trump administration, which is a highly controversial organization with severe allegations of “evil”. GitHub also recently fired a Jewish employee for characterising armed insurrectionists wearing Nazi propaganda as Nazis.

                                                                              It’s not nice to belittle the principles of people who have valid reasons to cite ethical criticisms of GitHub. Even if you like the workflow and convenience, which is Daniel’s main justification, other platforms offer the same conveniences. As project leaders, we have a responsibility to support platforms which align with our values. There are valid ethical and philosophical complaints about GitHub, and dismissing them because of convenience and developer inertia is cowardly.

                                                                              1. 27

                                                                                GitHub collaborated with US immigration and customs enforcement under the Trump administration

                                                                                This makes it sound worse than it actually was: ICE bought a GitHub Enterprise Server license through a reseller. GitHub then tried to compensate by donating $500,000 to “nonprofit organizations working to support immigrant communities”.

                                                                                … other platforms offer the same conveniences.

                                                                                Maybe, but they definitely lack the network effect that was one of the main points for curl to use GitHub.

                                                                                1. 24

                                                                                  The inconsistency is what kills me here. Allowing ICE to have an account became a heinous crime against neoliberalism, meanwhile how many tech companies openly collaborated with the US military while we killed a million innocent people in Iraq? Or what about Microsoft collaborating with our governments surveillance efforts?

                                                                                  I’m not even engaging in what-about-ism here in the sense that you must be outraged at all the things or none. I’m suggesting that ICE outrage is ridiculous in the face of everything else the US government does.

                                                                                  Pick less ridiculous boogeymen please.

                                                                                  1. 20

                                                                                    I see a lot of the same people (including myself) protesting all of these things…

                                                                                    I feel like I should say something to make this remark longer, and less likely to be taken as hostile, but that’s really all I have to say. Vast numbers of people are consistently opposing all the things you object to. If you’re attempting to suggest that people are picking only one issue to care about and ignoring the other closely related issues, that’s simply wrong - factually, that is not what is happening. If you’re not trying to suggest that, I don’t understand the purpose of your complaint.

                                                                                    1. 13

                                                                                      The inconsistency is what kills me here.

                                                                                      Also:

                                                                                      1. Free Software and Open Source should never discriminate against fields of endeavour!
                                                                                      2. GitHub should discriminate against this particular organisation!

                                                                                      and:

                                                                                      1. We need decentralised systems that are resistant to centralised organisation dictating who can or can’t use the service!
                                                                                      2. GitHub should use its centralised position to deny this service to this particular organisation!

                                                                                      Anyway, how exactly will curl moving away from GitHub or GitHub stopping their ICE contract help the people victimized by ICE? I don’t see how it does, and the entire thing seems like a distraction to me. Fix the politics instead.

                                                                                      1. 14

                                                                                        Is some ideological notion of consistency supposed to weigh more heavily than harm reduction in one’s ontological calculus? Does “not discriminating against a field of endeavor” even hold inherent virtue? The “who” and “on what grounds” give the practice meaning.

                                                                                        If I endeavor to teach computer science to under-served groups, and one discriminated against my practice due to bigotry, then that’s bad. If I endeavor to make a ton of money by providing tools and infrastructure to a power structure which seeks to violate the human rights of vulnerable populations, you would be right to “discriminate” against my endeavor.

                                                                                        Anyway, how exactly will curl moving away from GitHub or GitHub stopping their ICE contract help the people victimized by ICE?

                                                                        I don’t think anyone here has suggested that if curl were to move away from GitHub it would have an appreciable or conclusive impact on ICE and its victims. The point of refusing to work for or with ICE or their enablers is mainly to raise awareness of the issue and to build public opposition to them, which is a form of direct action – “fixing the politics”, as you put it. It’s easy to laugh at and dismiss people making noise online, or walking out of work, or writing a heated blog post, but as we’ve seen over the last decade, online movements are powerful forces in democratic society.

                                                                                        1. 8

                                                                                          Is some ideological notion of consistency supposed to weigh more heavily than harm reduction in one’s ontological calculus?

                                                                                          If you’re first going to argue that 1) is unethical and should absolutely never be done by anyone and then the next day you argue that 2), which is in direct contradiction to 1), is unethical and should absolutely never be done by anyone then I think there’s a bit of a problem, yes.

                                                                                          Because at this point you’re no longer having a conversation about what is or isn’t moral, and what the best actions are to combat injustices, or any of these things, instead you’re just trying to badger people in to accepting your viewpoint on a particular narrow issue.

                                                                                          1. 3

                                                                                            If you’re first going to argue that 1) is unethical and should absolutely never be done by anyone and then the next day you argue that 2), which is in direct contradiction to 1), is unethical and should absolutely never be done by anyone then I think there’s a bit of a problem, yes.

                                                                                            does anyone say that though

                                                                                        2. 12

                                                                                          Your first two points are a good explanation of the tension between the Open Source and Ethical Source movements. I think everyone close to the issue is in agreement that, yes, discriminating against militant nationalism is a form of discrimination, just one that ought to happen.

                                                                                          There was some open conflict last year between the Open Source Initiative and the group that became the Organization for Ethical Source. See https://ethicalsource.dev/ for some of the details.

                                                                                          Your second two points, also, highlight a real and important concern, and you’ve stated it well. I’m personally against centralized infrastructure, including GitHub. I very much want the world to move to decentralized technical platforms in which there would be no single entity that holds the power that corporations presently do. However, while centralized power structures exist, I don’t want those structures to be neutral to injustice. To do that is to side with the oppressor.

                                                                                          (Edit: I somehow wrote “every” instead of “everyone”. Too many editing passes, I guess. Oops.)

                                                                                          1. 11

                                                                                            To clarify: this wasn’t really intended as a defence of either the first or second points in contradictions, I just wanted to point out that people’s views on this are rather inconsistent, to highlight that the issue is rather more complex than some people portray it as. To be fair, most people’s worldviews are inconsistent to some degree, mine certainly are, but then again I also don’t make bold absolute statements about these sort of things and insult people who don’t fit in that.

                                                                                            I think that both these issues are essentially unsolvable; similar to how we all want every criminal to be convicted but also want zero innocent people to be convicted unjustly. This doesn’t mean we shouldn’t try, but we should keep a level head about what we can and can’t achieve, and what the trade-offs are.

                                                                                            I don’t want those structures to be neutral to injustice. To do that is to side with the oppressor.

                                                                                            In Dutch we have a saying I rather like: “being a mayor in wartime”. This refers to the dilemma of mayors (and journalists, police, and so forth) during the German occupation. To stay in your position would be to collaborate with the Nazis; but to resign would mean being replaced with a Nazi sympathizer. By staying you could at least sort of try to influence things. This is a really narrow line to walk though, and discussions about who was or wasn’t “wrong” during the war continue to this day.

                                                                                            I don’t think GitHub is necessarily “neutral to injustice”, just like the mayors during the war weren’t. I know people love to portray GitHub as this big evil company, but my impression is that GitHub is actually not all that bad; I mean, how many other CEOs would have joined youtube-dl’s IRC channel to apologize for the shitty situation they’re in? Or would have spent time securing a special contract to provide service to Iranian people? Or went out of their way to add features to rename the default branch?

                                                                                            But there is a limit to what is reasonable; no person or company can be unneutral to all forms of injustice; it would be debilitating. You have to pick your battles; ICE is a battle people picked, and IMO it’s completely the wrong one: what good would cutting a contract with ICE do? I don’t see it, and I do see a lot of risk in alienating the government of the country you’re based in, especially considering that the Trump administration was not exactly known for its cool, level-headed, and calm responses to (perceived) slights. Besides, in the grand scheme of injustices present in the world, ICE seems small fry.

                                                                                            And maybe all tech companies putting pressure on ICE would have made an impact in changing ICE’s practices; I don’t really think it would, but let’s assume it would. What does that mean? A bunch of undemocratic companies exerting pressure to change the policy of a democratically elected government. Yikes? Most of the time I see corporate influence on government it’s not for the better, and I would rather we reduce it across the board. That would also reduce the potential “good influences”, but the bad influences so vastly outnumber the good ones that this is a good trade.

                                                                                            1. 6

                                                                                              Yes, those are all fair and thoughtful points. I agree very much that with any system, no matter how oppressive, if one has a position of power within the system it’s important to weigh how much good one can do by staying in, against how much they can do by leaving. I rather wish I were living in times that didn’t require making such decisions in practice so frequently, but none of us get to choose when we’re born.

                                                                                              On the strategic point you raise, I disagree: I do think the GitHub/ICE issue is a valuable one to push on, precisely because it prompts conversations like this. Tech workers might be tempted to dismiss our own role in these atrocities; I think it’s important to have that reminder. However, I very much acknowledge that it’s hard to know whether there’s some other way that might be better, and there’s plenty of room for disagreement, even among people who agree on the goals.

                                                                                              When I was young, I was highly prone to taking absolute positions that weren’t warranted. I hope if I ever fall back into those old habits, you and others will call me out. I do think it’s really important for people who disagree to hear each other out, whenever that’s feasible, and I also think it’s important for us all to acknowledge the limits of our own arguments. So, overall, thank you for your thoughts.

                                                                                              1. 2

                                                                                                I recently read a really approachable article from the Stanford Encyclopedia of Philosophy (via HN), which I found really interesting and balanced in highlighting the tensions between (in this case study) “free speech” and other values. To me it also shows that those apparent “conflicts of interest” are still rather possible to balance (if not trivially) given good will; and IMO the “extreme positions” are something of a possibly unavoidable simplification, given that even when analyzing the positions of renowned philosophers, skilled at precise expression, it’s not always completely clear where they sat.

                                                                                                https://plato.stanford.edu/entries/freedom-speech/

                                                                                                edit: though I am totally worried when people refuse to even discuss those nuances and to explore their position in this space of values.

                                                                                                1. 7

                                                                                                  Anyone with a sincere interest in educating themselves about the concept of free speech and other contentious issues will quickly learn about the nuances of the concepts. Some people will however not give a fig about these nuances and continue to argue absolutist positions on the internet, either to advance unrelated political positions or simply to wind people up.

                                                                                                  Engaging with these people (on these issues) is generally a waste of time. It’s like wrestling with a pig - you’ll get dirty and the pig enjoys it.

                                                                                                  1. 3

                                                                                                    I’m not sure I agree that anyone who makes a sincere effort will learn about the nuances. The nuance is there, but whether people have the chance to learn it is largely a function of whether the social spaces they’re in give them the chance to. I’m really worried about how absolutist, reactionary positions are the bulk of discussion on social media today. I think we all have an obligation to try to steer discussions away from reductive absolutism, in every aspect of our lives.

                                                                                                    With that said, it’s clear you’re coming from a good place and I sympathize. I only wish I felt that not engaging is clearly the right way; it would be easier.

                                                                                                    1. 5

                                                                                                      I’ll have to admit that my comment was colored by my jaundiced view of the online conversation at this point in time. “Free speech” has become a shibboleth among groups who loudly demand immunity from criticism, and who expect their wares to be subsidized in the Marketplace of Ideas, but who would not hesitate to restrict the speech of their enemies should they attain power.

                                                                                                      I’m all for nuanced discussion, but some issues are just so hot button it’s functionally useless in a public forum.

                                                                                                      1. 3

                                                                                                        I completely understand, and that’s very fair.

                                                                                                        I agree with your assessment but, purely for myself and not as something I’d push on others, I refuse to accept the outcome of stepping back from discussion - because that would be a win for reactionary forms of engagement, and a loss for anyone with a sincere, thought-out position, wherever they might fall on the political spectrum.

                                                                                                        It’s fine to step back and say that, for your own well-being, you can’t dedicate your efforts to being part of the solution to that. You can only do what you can do, and no person or cause has a right to demand more than that. For myself, only, I haven’t given up and I’ll continue to look for solutions.

                                                                                            2. 6

                                                                                              There are a lot of people in the OSS community who don’t agree with your first point. You might find it contradictory, or “wrong” (And sure, I guess it wouldn’t be OSI certified if you codified it in a license). But it’s what a decent part of the community thinks.

                                                                                              And the easy answer to your comment about helping, let’s do the contrary. ICE has policies. Selling them tools to make it easier is clearly helping them to move forward on those policies. Just like AWS was helping Parler exist by offering its infrastructure. You can have value judgements or principles regarding those decisions, but you can’t say that it doesn’t matter at all.

                                                                                              And yeah, maybe there’s someone else who can offer the services. But maybe there are only so many Github-style services out there! And at one point it starts actually weighing on ICE’s ability to do stuff.

                                                                                              Of course people want to fix the politics. But lacking that power, people will still try to do something. And, yeah, people are allowed to be mad that a company is doing something, even if they probably shouldn’t be surprised.

                                                                                              1. 4

                                                                                                And yeah, maybe there’s someone else who can offer the services. But maybe there are only so many Github-style services out there! And at one point it starts actually weighing on ICE’s ability to do stuff.

                                                                                                I’d expect ICE to be more than capable of self-hosting GitLab or some other free software project.

                                                                                                Of course people want to fix the politics. But lacking that power, people will still try to do something.

                                                                                                I don’t think it’s outside of people’s power to do that, but it is a lot harder, and requires more organisation and dedication. And “doing something” is not the same as “doing something useful”.

                                                                                                As for the rest, I already addressed most of that in my reply to Irene’s comment, so I won’t repeat that here.

                                                                                            3. 12

                                                                                              no disagreement with your main point, but… a crime against neoliberalism?

                                                                                              1. 4

                                                                                                I think they mean against the newest wave of liberal politics in the US. Not the actual term neoliberalism which—as you clearly know—refers to something completely different, if not totally opposite.

                                                                                              2. 10

                                                                                                there are active campaigns inside and outside most companies about those issues. It’s not like https://notechforice.com/ exists in a bubble. Amazon, Google, Microsoft, Palantir, Salesforce and many others have been attacked for this. Clearly the DoD created the Silicon Valley and the connections run deep since the beginning, but these campaigns are to raise awareness and build consensus against tech supporting imperialism, concentration camps and many other crimes committed by the American Government against its citizens or foreign countries. But you have to start somewhere: political change is not like compiling a program, it’s not on and off, it’s nuanced and complex. Attacking (and winning) stuff like Project Maven or ICE concentration camps is a way to show that you can achieve something, break the tip of the iceberg and use that to build bigger organizations and bigger support for bigger actions.

                                                                                                1. 1

                                                                                                  Clearly the DoD created the Silicon Valley and the connections run deep since the beginning

                                                                                                  Oh, I’d love to be red-pilled into that!

                                                                                              3. 22

                                                                                                This makes it sound worse than it actually was, ICE bought a Github Enterprise Server license through a reseller.

                                                                                                LA Times:

                                                                                                In a fact sheet circulating within GitHub, employees opposing the ICE contract wrote that the GitHub sales team actively pursued the contract renewal with ICE. The Times reviewed screenshots of an internal Slack channel after the contract was renewed on Sept. 4 that appear to show sales employees celebrating a $56,000 upgrade of the contract with ICE. The message, which congratulated four employees for the sale and was accompanied by emojis of a siren, bald eagle and American flag, read “stay out of their way. $56k upgrade at DHS ICE.” Five people responded with an American flag emoji.

                                                                                                It was not as at arm’s length as they’d like you to believe. Several prominent organisations rejected offers of parts of the $500k donation because they didn’t want to be associated with the ICE contract. Internally the company was shredded as it became clear that GitHub under MSFT would rather be torn apart inside than listen to employees and customers and commit to stop serving ICE in the future.

                                                                                                There were plenty of calls to cancel the contract immediately, which might’ve been a pipe dream, but even the more realistic “could we just not renew it in the future” was met with silence and corporate-speak. Long-serving employees asking “well, if this isn’t too far for us, what concretely would be over the line?” in Q&As were labelled hostile, and most certainly not answered.

                                                                                                1. 15

                                                                                                  We could debate the relative weight of these and other grievances here, but I’d rather not. My point is simply that the ethical concerns are based on reason, and Daniel’s blithe dismissal of them is inappropriate.

                                                                                                  1. 7

                                                                                                    Could you elaborate on the reasons?

                                                                                                    You state that the reasons exist, and you give an example of someone you think github should reject as a customer. But you don’t talk about what those reasons are, or really go into principles, rationales or philosophy at all.

                                                                                                    I worry that without a thought-through framework, your attitude degenerates into mindless shitstorms.

                                                                                                    1. 4

                                                                                                      He has not engaged with the ethical concerns you raise. That may well be because he is simply not aware of them. You are overinterpreting that as “blithe dismissal”.

                                                                                                  2. 10

                                                                                                    The firing of the employee has been reversed.

                                                                                                    1. 10

                                                                                                      Just an honest question: does this poop management actually make them look better to you? Despite it being a reaction to public outrage that would have hurt the company? Like, do you think they did that out of guilt or something like that?

                                                                                                      1. 3

                                                                                                        Considering the fired employee was reinstated and the head of HR resigned, this looks like a much more substantive concession than the employment status Ctrl-Z that internet outrages usually produce.

                                                                                                        1. 3

                                                                                                          how? isn’t “let’s sacrifice a scapegoat without fundamentally changing anything” quite a common strategy?

                                                                                                          1. 2

                                                                                                            None of us know the details of this case. It’s way too easy to form a conclusion from one party, especially if they’re not bound by law from discussing sensitive HR details openly.

                                                                                                            So while I can project a hope that this is a lasting change at GH, you are free to cynically dismiss it as window dressing. The facts, as we know them, support either view.

                                                                                                      2. 16

                                                                                                        Aye, and I commend them for that. But that doesn’t change the fact that “retaliated against an employee who spoke out against Nazism” is a permanent stain on their reputation which rightfully angers many people, who rightfully may wish to cease using the platform as a result. Daniel’s portrayal of their concerns as petty and base is not right.

                                                                                                        1. 2

                                                                                                          Not only that but the HR person who fired him was fired.

                                                                                                          1. 4

                                                                                                            Probably out of convenience, and not actually the person who gave the order. At least, I suspect that’s the case more often than we know.

                                                                                                            1. 5

                                                                                                              The person who resigned was the head of HR. It almost certainly wasn’t the person who made the call, or even their manager, it was likely their manager’s manager. That sends a pretty strong signal to the rest of HR that there will be consequences for this kind of thing in the future.

                                                                                                              1. 1

                                                                                                                Damn, the head of HR!? What a turnover. Maybe that means they’re taking this more seriously than I thought at first.

                                                                                                        2. 7

                                                                                                          Every time someone asked me to move away from GitHub it’s been because “it’s not Free Software” and various variants of “vendor lock-in” and “it’s centralized”. I am aware there are also other arguments, but those were not stated in either of the two instances where people asked me to move away from GitHub. What (probably) prompted this particular Twitter thread also doesn’t mention ICE or anything like that (also: 1 2). Most comments opposed to GitHub on HN or Lobsters don’t focus on ICE either.

                                                                                                          That you personally care a great deal about this is all very fine, but it’s not the most commonly used argument against GitHub.

                                                                                                          There are valid ethical and philosophical complaints about GitHub

                                                                                                          According to your view of ethics, which many don’t share.

                                                                                                          1. 2

                                                                                                            I think that asking someone to change their infrastructure based solely on personal preferences is a step or two too far, be it based on ethics or ergonomics (“all the other code I use is on GitHub, yours should be too”).

                                                                                                            It’s at the very least a bunch of work to move, and the benefit is likely small. You’ve already made a choice when deciding to put your code where it is, so why would you want to change it?

                                                                                                            If asked, I’d recommend using something other than Github to work against the monoculture we’re already pretty deep in, but I don’t see myself actively trying to persuade others to abandon them.

                                                                                                          2. 4

                                                                                                            Isn’t sr.ht hosted and incorporated in the US? Or are only points (1) and (2) valid? :-D

                                                                                                            GitHub also fought the US Gov to get the Iranian developer access to their platform, which is also helping your platform as far as I know. https://github.blog/2021-01-05-advancing-developer-freedom-github-is-fully-available-in-iran/

                                                                                                            Any organization that is large enough will have some incidents which, when cherry-picked, can be used to paint the organization as evil. But really what happens is that they represent humanity. In terms of evil, you don’t have to look far to see much worse groups of people than GitHub.

                                                                                                            IMO a more compelling argument would be centered around how he is an open-source developer depending on a closed platform. Daniel’s utilitarian view is understandable but also short-sighted. He is contributing towards building this monolith just by using it.

                                                                                                            1. 20

                                                                                                              Or are only points (1) and (2) valid? :-D

                                                                                                              None of the points Daniel raises are valid, because they’re strawmen, and bad-faith portrayals of actual positions.

                                                                                              Actual argument: “GitHub, an American company, is choosing to cooperate with ICE, an American institution which is controversial for its ethical problems”

                                                                                                              Bad faith re-stating: “GitHub is American thus evil”

                                                                                                              There is nuance here, and indeed you’ve found some of it, but a nuanced argument is not what Daniel is making.

                                                                                                            2. 6

                                                                                                              collaborated with US immigration and customs enforcement

                                                                                                              I think “is American and thus evil” definitely covers this.

                                                                                                              1. 2

                                                                                                                Why are two [1, 2] of your most popular projects primarily hosted on github?

                                                                                                                1. https://github.com/swaywm/sway

                                                                                                                2. https://github.com/swaywm/wlroots

                                                                                                                1. 19

                                                                                                                  I have been gradually moving off of GitHub, but not all at once. A few months ago I finished migrating all of the projects under my user namespace (github.com/ddevault) to SourceHut. Last week I also announced to my GitHub Sponsors supporters that I intend to leave the program, which is almost certain to cause me to lose money when many of them choose not to move to my personal donation platform (which has higher payment processing fees than GitHub does, so even if they all moved I would still lose money). If you intend to imply that I am a hypocrite for still using GitHub, I don’t think that holds very much weight.

                                                                                                                  Regarding those two projects in particular, some discussion was held about moving to gitlab.freedesktop.org last year, but it was postponed until the CI can be updated accordingly. In any case, I am no longer the maintainer of either project, and at best only an occasional contributor, so it’s not really my place nor my responsibility to move the projects elsewhere. I think that they should move, and perhaps a renewed call for doing so should be made, but it’s ultimately not my call anymore.

                                                                                                                  1. 10

                                                                                                                    If you intend to imply that I am a hypocrite for still using GitHub, I don’t think that holds very much weight.

                                                                                                                    Nope, I was just genuinely curious since I don’t follow you that closely, and hadn’t heard any explanation or reasoning for why those repos are still on github, even though I have heard you explain your position regarding github multiple times. It seemed odd, so I asked.

                                                                                                                    In any case, thanks for explaining! I hope those projects are moved off too (@emersion !)

                                                                                                                    1. 6

                                                                                                                      Cool, makes sense. Thanks for clarifying.

                                                                                                                    2. 2

                                                                                                                      I love that you represent another point of view here. I firmly believe that free software needs free tools. We don’t want history to repeat itself. And yes, there will be some sacrifice involved in the switch.

                                                                                                                      Having watched your actions closely for months, I think you represent how a free software leader should be.

                                                                                                                1. 2

                                                                                                                  A little tangential:

                                                                                                                  I think at this point it’s reasonable to assume that all systems have privilege escalation vulnerabilities: OSes’ attack surfaces are just too big to defend. I would not ever feel confident in assuming that a system is only partially breached once it’s had any hostile code running on it at all.

                                                                                                                  Privilege separation as a defense-in-depth mechanism still makes sense, of course, but in dealing with a breach I would prefer to proceed as though it hadn’t worked.

                                                                                                                  1. 1

                                                                                                                    There are systems like GenodeOS that are purposefully designed to reduce the attack surface through compartmentalization and a minimized trusted computing base (TCB), aiming to make them robust enough not to have privilege escalation vulnerabilities.

                                                                                                                  1. 12

                                                                                                                    (very subjective thoughts, hopefully not too off-topic. I think that this is the right community though)

                                                                                                                    I think the appearance of new “Hobby OSs” is one of the nicest things to happen in recent years. There was a bit of a drought, as some projects slowly died. That’s not to say there aren’t any; there certainly are quite a few that have made constant progress over all these years.

                                                                                                                    However, things like developing an OS mostly for fun seem to be lacking lately. A part of that also seems to be that doing some things just for fun became harder in the mainstream world, if you want to call it that. Doing an app just for fun and distributing it to people is quite a hassle. One needs to pay certain fees for distribution, potentially even get a specific device to program on, and there are usually quite a few things involved in keeping things working on newer phones; not too rarely, certain rules change too.

                                                                                                                    Overall, things seem to move faster, and projects left untouched seem to become obsolete (unusable or close to it) more quickly, in some fields more than others.

                                                                                                                    Maybe it’s just my perception, but it also feels as if the willingness, or let’s say the motivation, to do a bigger project as a hobby in one’s free time is going down. A lot of the time people only do so if compensated (thanks to Patreon, etc., this is easily possible though), or if it at least looks good on the resume.

                                                                                                                    Please don’t get me wrong. I certainly have no intention of telling people what to do with their free time, and I completely understand that things cost money. Please don’t take this as criticism.

                                                                                                                    What I am getting at is that, with the growth in the number of people in IT, it seems that - for lack of a better word - the percentage of people doing “silly little things” is going down, especially when those things cannot be achieved within a few days.

                                                                                                                    I have been wondering why this is. To me a lot of it feels like an increase in “wanting to feel professional” (again, no criticism!), even when not acting so. Maybe it’s also a general society or psychology topic. Maybe it’s how time is given a value: projects that sit somewhere between work (taking effort, like programming) and hobby are harder to categorize than watching a show on Netflix, playing a video game or listening to music. Hobbies that take effort can make people feel like they neither spent their time productively nor used it to relax.

                                                                                                                    A lot of that I perceive as “taking the fun out of computing”, so to say. But OS development projects like these, just like the tildeverse, make me feel like a lot of it is returning after it was partly lost, or at least went unnoticed by me.

                                                                                                                    Curious on whether I am the only person seeing it like that or if you have different views on this.

                                                                                                                    1. 7

                                                                                                                      I too am happy to see these projects again, as hobby OS dev has always been a favourite interest of mine. But I do find it rather depressing that they all seem to be just yet another UNIX-alike, rather than any sort of attempt to do something new, or even to revive something old that was abandoned.

                                                                                                                      1. 4

                                                                                                                        … the percentage of people doing “silly little things” is going down, especially when they can not be achieved within a few days. I have been wondering why this is. To me a lot of it feels like an increase of “wanting to feel professional”

                                                                                                                        Yes, I agree with the sentiment. I think that the invention and spread of the internet in the mainstream has been a double-edged sword. On one hand, it is so much easier now to learn how to do things and to make your creations accessible to the world. On the other hand, this benefit applies to everyone, not just to you, so you suddenly find yourself “competing” with a horde of amateurs and hobbyists just like you.

                                                                                                                        Because if we’re honest, very few people want to make and do things in perfect isolation. There is not always a desire for a monetary reward, but I think that in the overwhelming majority of cases there is a desire for some kind of recognition from peers or others inside or outside the current social circle. But in this new era the bar to get that recognition is getting higher and higher. Not only does the quality of the work rise, but the expectation of what proper recognition is rises too. I might be looking back with nostalgia, but I like to think that 50 years ago, if your mother had some skill in knitting sweaters, her skill would be recognized and valued in her family/village/street. So if she was able to impress 20 people, she would gain some real status and respect. If you were to try to get the same level of respect these days, you would need at least a couple of thousand followers on youtube/instagram/pinterest/whatever. Ideally you should also make some nice extra cash on the side by selling the designs, or the sweaters themselves on etsy, or create tutorials on youtube, or..

                                                                                                                        So the bar is much higher now and distractions are plentiful, so not as many people bother anymore. But that is relatively speaking. I think that in absolute numbers there are still way more people doing interesting stuff; they just get drowned out. But I don’t have any research to back that up.

                                                                                                                        1. 1

                                                                                                                          That’s very insightful. So far I attributed that more to the effects of walled gardens, complexity raising the bar, and the wanting to feel professional (not doing “hacky” things out of love, despite the whole “Do you have passion for X?” in many job ads).

                                                                                                                          But sure, when you look for online likes, comments, stars and subscribers, things are seen differently. And those measures typically don’t convey much. GitHub stars oftentimes don’t even convey user base or general interest (readme-only repos with thousands of stars because they were posted on some news page, without the implementation ever being started). They mostly tell how many people have somewhat actively seen a headline or similar.

                                                                                                                          And of course, with short attention spans, new things constantly popping up, and the “newer is always better” assumption, one has a hard time.

                                                                                                                          The thing with research might turn out to be hard, or at least I don’t know what the right approach is. A longer time ago I actually got interested in different ways of measuring the impact of technologies (of different kinds, purely economical for example). The background was that things like measured programming language popularities seemed off when looking at how they are perceived online, compared to when you looked into the real world.

                                                                                                                          A lot of these are community and philosophy based. To stay with programming language popularity: a project with excellent documentation, clear guides, and its own widely used communication channels tends to have a lot fewer questions on Stack Overflow, etc. A language that is often taught at university, has been hyped, etc. has more. Also, the more centralized a community is, the fewer posts you’ll find with largely the same content.

                                                                                                                          This doesn’t make a huge difference in the large (especially when putting in more factors, you still get a picture), but when it comes to finding patterns it is very easy to end up researching only a specific subset, which might be interesting, but might also lead to in a way self-confirming assumptions. Or in other words, it’s hard to specify parameters and indicators to research without accidentally fooling yourself.

                                                                                                                        2. 3

                                                                                                                          Personally, I disagree. I would conjecture that there are actually more people doing “silly little things” (including the “bigger projects”) than “before”, but there are also many times more people now doing things “for money/popularity” than “before”. It’s just that as a result, those in the first group lost visibility among the second group — esp. compared to the “before” times, when I believe the second group was basically an empty set.

                                                                                                                          As a quick example off the top of my head, I’d recommend taking a look at the Nim Conf 2020 presentations. Having attended this online conference, personally I was absolutely amazed at how one after another of those was, in my opinion, quite a sizeable “silly little thing”. Those specific examples might not be OS-grade projects, but then there’s https://www.redox-os.org/, there’s https://genodians.org/, there’s https://www.haiku-os.org/, there’s https://github.com/akkartik/mu

                                                                                                                          I mean, to see that nerd craziness is alive and well, just take a look at https://hackaday.com/blog!

                                                                                                                          1. 4

                                                                                                                            Thank you for the shout out to Mu! @reezer, Mu in particular is all about rarely requiring upgrades and never going obsolete. It achieves this by avoiding closed platforms, only relying on widely available PC hardware (any x86 processor of the last 20 years), and radically minimizing dependencies.

                                                                                                                            My goal is a small subculture of programmers where the unit of collaboration is a full stack from the bootloader up, with the whole thing able to fit in one person’s head, and with guardrails (strong types, tests) throughout that help newcomers import the stack into their heads. Where everybody’s computer is unique and sovereign (think Starship Enterprise) rather than forced to look and work like everybody else’s (think the Borg). Fragmentation is a good thing, it makes our society more anti-fragile.

                                                                                                                            I’ve been working on Mu for 5 years in my free time, through periods when that was just 1 hour a day and yet I didn’t lose steam. I don’t expect ever to get paid for it, and the goal above resonates enough with me that I expect to work on it indefinitely.

                                                                                                                            1. 2

                                                                                                                              Just wanna say that despite Mu not being a tool I particularly want to use yet, I do read all your stuff about it that I encounter, and I’m very glad you’re out there doing it. And I’m certainly not alone.

                                                                                                                            2. 1

                                                                                                                              Thank you for your response. You give great examples; I actually meant to give them as well. Just to clarify: for me Redox would be part of that new wave (maybe even the start of it), while Haiku is a project that has had continuous progress but is one of the old surviving ones, just like Genode.

                                                                                                                              AROS is another example for an old project.

                                                                                                                              What I meant by things that died during that period was, for example, Syllable and some projects of similar philosophy.

                                                                                                                              I also agree with the sentiment that there are more people, but it doesn’t feel like it grew in proportion (that’s what I meant by percentage). But it feels like it is changing, which I really like. It feels like a drought being over. The Haiku community also had ups and downs.

                                                                                                                              But I also don’t think it’s just operating systems. That’s why I mentioned the Fediverse. A completely different thing seems to be the open source game scene, which also feels like it’s growing again, insanely so. Especially when looking at purely open source games, which feel like they have massive growth now.

                                                                                                                              However, I still have some worries about the closed-platform topic making it harder. Tablets and phones are becoming the dominant “personal computers” (as in things you play games on, do online shopping with, communicate on). And they are very closed. If you wanted to install an alternative OS on your average personal computer in the late 90s or early 2000s, you could, sometimes even on a non-average one. For your average smartphone or tablet that’s a lot less likely, and unlike back then the drive (at large, with some exceptions) seems to be toward things being even more shut off, giving less room to play.

                                                                                                                              I don’t know that area well, but it seems similar things are true for video game consoles: less homebrew, and at least I did not hear about OSes being ported there, which seems a bit odd, given that by all that I know the hardware is now closer to your average desktop computer.

                                                                                                                              I did not know about Mu. Also I will take a look at the Nim Conf. So thanks for that as well!

                                                                                                                              1. 1

                                                                                                                                Not much of a metric, but I guess you could try and graph the announcements on the OSDev.org forum thread year by year and see if there’s anything to read from them. Though on a glance, I’d say they’re probably too sparse to warrant any kind of trendline (but IANAStatistician). And the first one is 2007 anyway, so still no way to compare to “the late 90s or early 2000s”.

                                                                                                                                Yeah, I also kinda lament the closing of the platforms; but on the other hand, Raspberry Pi, ARM, RISC-V, PinePhone, LineageOS… honestly, I think I’m currently more concerned about Firefox losing to Chrome and a possible monoculture here. Truth be told, whether I like it and admit it to myself or not, the browsers are probably in a way the de facto OS now.

                                                                                                                                And as to Fediverse and in general non-OS exciting things, there’s so many of them… for starters just the (for me) unexpected recent bloom of programming languages (Go, Rust, Zig, Nim, but also Janet, Jai, Red, etc. etc. etc.); but also IPFS, dat/hyper, etc. etc; then Nix/NixOS; then just around the corner an IMO potential revolution with https://github.com/enso-org/enso; as you say with games there’s Unity, Godot, the venerable RPGMaker, there’s itch.io; dunno, for me there’s constantly so many exciting things that I often can’t focus exactly because there’s so many things I’d love to play with… but then even as a kid I remember elders saying “you should focus on one thing and not constantly change your interests”, so it’s not that it’s something new…

                                                                                                                            3. 2

                                                                                                                              I wonder how virtualization improvements over time might have also driven some of this?

                                                                                                                            1. 1

                                                                                                                              Looks cool! Is the source code to this project available?

                                                                                                                              1. 2

                                                                                                                                Yes, both the library on which this is built (https://github.com/felixpalmer/procedural-gl-js/) and this specific implementation (https://github.com/felixpalmer/volcanoes-of-japan) are open source.

                                                                                                                                1. 1

                                                                                                                                  you can usually find them if you transform the github.io domain into github.com:

                                                                                                                                  <user>.github.io/<repo> becomes:

                                                                                                                                  github.com/<user>/<repo>

                                                                                                                                  https://github.com/felixpalmer/volcanoes-of-japan/
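
                                                                                                                                  The transformation above is mechanical enough to script. A sketch of it (the helper name `pages_to_repo` is made up, not any GitHub API):

```python
# Hypothetical helper: map a GitHub Pages URL (<user>.github.io/<repo>)
# to its source repository on github.com.
from urllib.parse import urlparse

def pages_to_repo(url):
    parts = urlparse(url)
    # The subdomain before ".github.io" is the user/organization name.
    user = parts.netloc.split(".github.io")[0]
    # The first path segment is the repository name.
    repo = parts.path.strip("/").split("/")[0]
    return f"https://github.com/{user}/{repo}"

print(pages_to_repo("https://felixpalmer.github.io/volcanoes-of-japan/"))
# https://github.com/felixpalmer/volcanoes-of-japan
```

                                                                                                                                  (Project sites served from a custom domain won’t match this pattern, of course.)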

                                                                                                                                1. 13

                                                                                                                                  I’ve just started experimenting with NixOS, and so I’m particularly interested in criticisms of NixOS that might make me want to abandon it now before I get too invested. Here are my thoughts on this person’s arguments:

                                                                                                                                  Small community.

                                                                                                                                  I’m only using NixOS in my capacity as a hobbyist running my own personal server infrastructure. I don’t care about hiring people or paying for support in this context.

                                                                                                                                  Documentation

                                                                                                                                  Yeah, it could be better. Documentation usually could be. On the other hand I do understand functional programming, so I don’t care if the Nix configuration language is hard for someone coming from an OOP background. I think those people should become comfortable with functional programming. I did have a little bit of trouble figuring out exactly what things were possible with nix configuration syntax, which, again, better documentation of the language would fix.

                                                                                                                                  Configuration management.

                                                                                                                                  System state and configuration described in a special file configuration.nix. It’s like an entry point to endless amount of functions and dependencies for your OS.

                                                                                                                                  Yes this is the entire point. I like the idea that I can describe everything relevant about the state of my system in one file. Writing custom systemd services is something anyone using a modern linux with systemd might have to do, this is not unique to NixOS.
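
                                                                                                                                  As a rough sketch of what that looks like (the backup service here is entirely hypothetical; the option names are standard NixOS module options), packages and a custom systemd unit live side by side in configuration.nix:

```nix
{ config, pkgs, ... }:
{
  environment.systemPackages = [ pkgs.git pkgs.htop ];

  # A hypothetical custom service, declared alongside everything else.
  systemd.services.my-backup = {
    description = "Nightly backup (example)";
    wantedBy = [ "multi-user.target" ];
    serviceConfig.ExecStart = "${pkgs.rsync}/bin/rsync -a /home /backup";
  };
}
```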

                                                                                                                                  Kernel upgrade.

                                                                                                                                  I just realized that the version of the kernel on my test NixOS system is 5.4, which is older than I would like. https://nixos.wiki/wiki/Linux_kernel tells you how you can upgrade to the latest kernel, which I just did without trouble.
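
                                                                                                                                  For reference, the wiki’s approach boils down to a single option in configuration.nix (assuming the standard nixpkgs attribute `linuxPackages_latest`):

```nix
# In configuration.nix: track the newest packaged kernel instead of the default.
boot.kernelPackages = pkgs.linuxPackages_latest;
```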

                                                                                                                                  Cloud support

                                                                                                                                  This sounds like some issues specific to AWS (which I don’t use for my personal infrastructure), and with a version of NixOS a couple years old. Not sure if this is relevant to anyone today, but it’s not relevant to me.

                                                                                                                                  Cache

                                                                                                                                  Problems start happening when you spinning up your personal cache, which is used, for example to store your proprietary build artifacts. You always need to keep eye on the community channel version you’re using as a base for your builds. And yes, your own cache size will also grow very fast!

                                                                                                                                  I haven’t done this yet, so I’m not sure how problematic it is in practice.

                                                                                                                                  Security.

                                                                                                                                  People are thinking, that NixOS deterministic approach of dealing with software is very secure. Maybe yes, it is really not possible to hijack the build results once the derivation bin build. But… You always need to pay for it…

                                                                                                                                  During the upgrade from 18.03 to 18.09 Nix community decided to change Docker tools they’ve been use to calculate derivations checksums where you were dealing with Docker images.

                                                                                                                                  As a result we rewrote all Docker based derivation declarations and more over to rebuild them all. It was really painful and unmanaged migration.

                                                                                                                                  I don’t understand from this description what the problem was. It makes sense that when you do an OS update, you’ll have different versions of some of the binaries and need to rebuild things that depend on a hash generated from those binaries. I’m not sure how Docker images are coming into play here, or if this is relevant to what I would do with my linux systems.

                                                                                                                                  Windows support

                                                                                                                                  I’m not sure what it would mean for a linux distribution to “support windows”.

                                                                                                                                  System requirements

                                                                                                                                  I’ve been running the Nix tools on my laptop. Haven’t noticed them taking up too much space so far, and if they do I understand there’s a pruning mechanism.
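
                                                                                                                                  For what it’s worth, the pruning mechanism is the Nix garbage collector; something along these lines (flags as documented for `nix-collect-garbage`):

```shell
# Remove profile generations older than 30 days, then delete
# store paths that are no longer referenced by anything.
sudo nix-collect-garbage --delete-older-than 30d
```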

                                                                                                                                  Anyway I don’t find these points super-convincing, especially not for my own usecase (which is admittedly different from the use case of someone supporting production infrastructure at their day job).

                                                                                                                                  1. 8

                                                                                                                                    I’ve just started experimenting with NixOS, and so I’m particularly interested in criticisms of NixOS that might make me want to abandon it now before I get too invested.

                                                                                                                                    For some quick background, I’m a casual hobbyist Nix user & fan. I first tried installing NixOS maybe ~5 years ago, but shelved it after some heavy & deep experimentation; discovered home-manager since and swear by it now; installed NixOS again recently on a secondary personal laptop that I bought for Linux (my main one is Windows) and now consider it a keeper for purposes of tinkering and Linux-based hobby development.

                                                                                                                                    I will try to reply to your first sentence, though it’s not easy for me to put many thoughts in a concise reply. My main takeaway is probably that it will break in different ways than your now-“HN-standard” Ansible/Chef/… setup. Mind you, not necessarily in more or fewer ways, but different ones. And given that it’s still absolutely an emerging technology, those ways will be comparably less well-known, and the community is smaller (esp. compared to the current gold standard of ArchWiki or askubuntu.com), so at times you might/will be a trailblazer in solving some issue. Also sometimes re-blazing some trail that someone else quite probably blazed before, but most people are still tinkering in their garages, there aren’t that many bloggers & other public knowledge-sharers yet, and that one person you need might not be on IRC just now, or they are but didn’t notice your question.

                                                                                                                                    In other words, it’s an emerging/bleeding-edge technology with all that that typically entails. As to investment, if we knew for sure which investment is the good one, we’d all be millionaires ;) Personally, if you like tinkering and playing, my subjective belief is that it’s a better moment to start playing with Nix than ever before. I would have been hesitant to recommend it to anyone seriously those ~5 years ago, other than as an interesting talking point, but over the last months I’m definitely advertising home-manager left and right to anyone who has enough intellectual curiosity and hacker spirit. Going NixOS is still brave IMO, “jumping in deep water”, but if you like it and manage to stay afloat, you might find you’re reaping some benefits, ahead of the pack splashing merrily in the shallow waters. And if you’re really into deep water, I’d heartily recommend trying to go towards Nix Flakes ASAP. That’s a somewhat higher-risk bet, but I seem to find they’re fixing a lot of issues I had with “classic” Nix originally.

                                                                                                                                    I think one thing that’s kinda still an open question with Nix, which to me personally (bear in mind, hobbyist user) is the only major point of risk where Nix might get into trouble, is managing secrets. I know some people are trying to approach this from various angles, but I haven’t yet seen an approachable (to me) writeup explaining whether someone really found a sensible and practical way to do that. I’m not sure if some solution/workaround will emerge, or whether it will require a major breaking change/evolution in Nix. I personally think there’s some chance this might be a risk to Nix reaching a solid mainstream position sooner or later. (And alternatively, some unexpected future technology might possibly outpace Nix to its own success.) But personally, I see Nix (esp. recently) on a trajectory to mainstream.

                                                                                                                                    Said a (pseudo-)random stranger on teh internets.

                                                                                                                                  1. 1

                                                                                                                                    Does anyone know of some guide on how to add static typing to a language that would be easy to understand for a “commercial” programmer (i.e. avoiding the mathy/CompSci type-theory “equations”, or translating them from CompSci lingo to ELI5 programmer lingo)? Ideally still covering the now-mandatory extensions like composite types (e.g. structs, arrays, inheritance/structural typing, etc.). “Type deduction” (e.g. Hindley-Milner) would be cool but not strictly required (most of the guides I could find couldn’t resist the temptation of going down the CompSci-lingo path).

                                                                                                                                    Apart from trying to understand H-M, I remember also watching some reportedly “simple” YouTube video about mini- (or micro-?) -Kanren, and it seemed to start out somewhat approachable, but the guy was talking rather fast and couldn’t resist spiralling out of control into a rather dense format rather quickly, so I lost it too :’-(

                                                                                                                                    IIUC a GC is something you can even “plug in” as a library (I believe Boehm is distributed as such?), whereas static typing seems to be something you must bake into your language design, and it will actually influence that design a lot, so I’d love to learn about it…

                                                                                                                                    1. 3

                                                                                                                                      IIUC a GC is something you can even “plug in” as a library…

                                                                                                                                      It can be, but it also starts to enable very different ways of thinking about and handling data. A language with GC built into it from the beginning tends to look quite different than one without.

                                                                                                                                      1. 1

                                                                                                                                        Does anyone know of some guide on how to add static typing to a language, that would be easy to understand to a “commercial” programmer

                                                                                                                                        “ELIC”. Nice.

                                                                                                                                        An easy way to implement static typing is to simply implement your interpreter twice: The second “run” is your type-checker, where all of your primitives are implemented in such a way that they return the types of their arguments instead of evaluating them.
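
                                                                                                                                        A minimal sketch of that idea (in Python, over a tiny made-up expression language of tagged tuples): the “second interpreter” maps each expression to a type name instead of a value.

```python
# Sketch: a "second interpreter" that returns types instead of values.
# The expression language (tagged tuples) is invented for illustration.

def infer(expr):
    """Type-check `expr` by re-implementing each primitive so it
    returns the type of its arguments rather than evaluating them."""
    kind = expr[0]
    if kind == "int":                 # ("int", 3)
        return "Int"
    if kind == "bool":                # ("bool", True)
        return "Bool"
    if kind == "add":                 # ("add", lhs, rhs)
        lhs, rhs = infer(expr[1]), infer(expr[2])
        if lhs == rhs == "Int":
            return "Int"
        raise TypeError(f"cannot add {lhs} and {rhs}")
    if kind == "if":                  # ("if", cond, then, else)
        if infer(expr[1]) != "Bool":
            raise TypeError("condition must be Bool")
        then_t, else_t = infer(expr[2]), infer(expr[3])
        if then_t != else_t:
            raise TypeError(f"branches disagree: {then_t} vs {else_t}")
        return then_t
    raise ValueError(f"unknown form {kind!r}")

# (if true then 1 + 2 else 0) type-checks to Int without ever computing 3:
print(infer(("if", ("bool", True), ("add", ("int", 1), ("int", 2)), ("int", 0))))  # Int
```

                                                                                                                                        The nice property is that the checker’s structure mirrors the evaluator’s one-for-one, which is what makes the “implement it twice” framing so approachable.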

                                                                                                                                        IIUC a GC is something you can even “plug in” as a library (I believe Boehm is distributed as such?),

                                                                                                                                        It can be, but these sorts of GC tend not to perform very well, or have limitations that might not be desirable in some production environments, so it usually benefits the language to consider GC carefully.

                                                                                                                                        ECL (a lisp that compiles to C and uses Boehm as its GC, which I used to work on) is significantly slower than many other lisp compilers, and this is a big part of why. Of course the advantage is that interfacing with C code is substantially easier than with other compilers.