Threads for enobayram

    1. 3

      I would love to see this talk if anyone has a recording!

      1. 3

        It’s a live stream with those avatar things.

        1. 7

          Vtubing is a fascinating development that I personally have no appetite for. I’m glad people can represent themselves however they like, though.

          1. 3

            I’ve watched enough YouTube videos in all sorts of styles that I don’t mind that part. I do find the voice hard to listen to for prolonged periods though.

            1. 1

              It is mostly the voice for me, as well.

          2. 2

I find it interesting as well. If I organized a conference, I would prefer the speaker to show their face, not that Japanese cartoon doll. Maybe it's a generational thing, but I have a hard time taking anyone seriously in a formal academic setting with such an avatar 😅. That said, the person behind that thing is pretty smart.

        2. 2

          Oh wow, I didn’t know you could link to interdimensional YouTube from regular internet.

      2. 2

There's a link to the video on there, but the video is almost 4 hours long.

    2. 4

This really is a horrifying outage: session tokens were being returned to the wrong users, allowing them to access each other's data. And it was caused by a Terraform change. Whose unit tests are catching that?

      1. 2

No unit test will catch someone doing something inherently wrong (I can imagine no situation where caching a Set-Cookie header would make sense), especially when it touches auth.

The bigger problem is that the ops people would need some sort of knowledge about webdev, so I'd say this would maybe be mitigated by cross-functional teams and someone (even just accidentally) seeing the commit, or even reviewing it.
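To make the failure mode concrete, here's a toy simulation (entirely hypothetical, not the actual infrastructure involved) of why a cache that stores whole responses, Set-Cookie header included, hands one user's session to the next:

```python
# Hypothetical toy: a cache keyed only on the URL that stores full responses,
# Set-Cookie header included, replays the first user's session to everyone else.
import secrets

cache = {}  # url -> (body, headers)

def login(user):
    """Origin server: issues a fresh session token per user."""
    token = secrets.token_hex(8)
    return f"welcome {user}", {"Set-Cookie": f"session={token}; user={user}"}

def cached_login(user, url="/login"):
    """Broken edge layer: serves the cached response, headers and all."""
    if url not in cache:
        cache[url] = login(user)
    return cache[url]

_, alice_headers = cached_login("alice")
_, bob_headers = cached_login("bob")

# Bob receives Alice's session cookie: exactly the outage described above.
print(bob_headers["Set-Cookie"] == alice_headers["Set-Cookie"])  # True
```
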

        1. 4

No unit test, but good integration tests could have. Especially ones that do pathological things like clicking your submit/login/do-stuff button really fast, and also from different sessions. Sadly, very, very few places have this level of testing; I've never seen it so far.

          1. 4

            IME, the trouble with “this level of testing” is that it’s very hard to have really comprehensive tests that test system-wide corner case behaviors like this AND are not full of non-deterministic false failures. If you commit to that level of testing, you’ll reach a point where you’ll have to rerun the tests 100 times before you manage to observe a run when all of them pass. The end result is that your engineers will be spending an obscene amount of time and effort fighting with these tests and for every bug that these tests catch (once every 4 months, and that is if people didn’t get used to ignoring the failures), you’ll end up spending thousands of programmer hours fighting with the tests. Time that those programmers could’ve used for catching and fixing orders of magnitude more bugs.

            1. 2

              That just points to the need for dedicated people who just write tests. It’s a different skill, and proper tests like that can be built in a way that they aren’t likely to cause false alarms, but it’s fairly difficult and sometimes non-obvious. I’ve been seeing a slow death of testing as a part of a software production pipeline, being absorbed into programming, when IMO these parts should be separate.

              1. 2

                proper tests like that can be built in a way that they aren’t likely to cause false alarms, but it’s fairly difficult and sometimes non-obvious

                I agree with this very much and it’s difficult enough that you usually can’t expect the average programmer (that you will encounter in your company) to write tests like this and you will almost certainly not encounter tests like this when you join a new project. An organizational issue that often makes this problem even worse is that if you have dedicated test engineers, they will typically get a lower pay, so your prospects of keeping a programmer with the required caliber for writing race-free deterministic tests gets even worse. And reconsider the kind of testing GP has brought up:

                Especially ones that do pathological things like click your submit/login/do stuff button really fast, and also from different sessions.

                Writing a test like this in a race-free manner isn’t something a dedicated test engineer can do without changing the application code and the deployment infrastructure. You typically need to turn the code inside out to expose the time-dependent aspects in a way that you can interact with in a controlled fashion. F.x if two parallel DB queries are racing with each other and your test needs to target a particular order, you need to be able to plug in a mock database or at least some sort of middleware or an abstraction layer between the DB and the application to arrange that order in your test. BTW, I’m not at all suggesting that’s how software should be written. I’d much rather spend my time on making sure that this complexity doesn’t arise in the first place, instead of letting it arise and then dance around it with my tests. But left to the average programmer, the complexity will arise and if you have dedicated test engineers in a separate team, they will have to deal with it.
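To illustrate what "turning the code inside out" buys you: once the racy step accepts an injectable synchronization hook, a test can force each interleaving deterministically instead of hammering the button and hoping. A minimal sketch (the class and hook names are made up for illustration):

```python
# Sketch: exposing a race to the test via an injectable synchronization hook.
# `before_commit` is a test-only seam; all names are made up for illustration.
import threading

class Store:
    def __init__(self):
        self.value = None
        self._lock = threading.Lock()

    def write(self, v, before_commit=None):
        if before_commit:
            before_commit()          # test can pause here to force an ordering
        with self._lock:
            self.value = v

def run_with_order(first, second):
    """Deterministically make `first`'s write commit before `second`'s."""
    store = Store()
    first_done = threading.Event()
    t1 = threading.Thread(
        target=lambda: (store.write(first), first_done.set()))
    t2 = threading.Thread(
        target=store.write, args=(second,),
        kwargs={"before_commit": first_done.wait})
    t2.start(); t1.start()           # start order no longer matters
    t1.join(); t2.join()
    return store.value

# Last write wins, and the test controls who writes last:
print(run_with_order("A", "B"), run_with_order("B", "A"))  # B A
```
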

        2. 1

          What’s the point of tests that don’t prevent someone from doing something wrong?

    3. 4

      I’m guessing docker compose isn’t an option here… you have to have some pretty serious infrastructure to not run on a high spec laptop. Or infrastructure you don’t have containers for.

      1. 3

        I think nomad really gives us an option here of running the same thing in prod as on our laptop. Docker compose feels like the wrong hammer for a prod environment.

        1. 2

          Okay, so install nomad on your laptop instead of using docker compose. It’s more about the concept than the specific tool. :)

      2. 2

An interesting direction we've taken for our use case is to generate a single container with a bunch of services in it, managed by process-compose. We use devenv to define the set of services needed for development as a collection of Nix modules, and it sets up a process-compose configuration that will run them. We then create a Docker image with this process-compose setup. The container even has an nginx in it, reverse-proxying various routes on port 8080 to the internal services, as well as a main page (pieced together from the modules) documenting the container's contents, their versions, etc.

        1. 2

          process-compose definitely looks interesting, but I can’t help but wonder if running systemd inside the container would have been sufficient. :)

          1. 1

            interesting, but I can’t help but wonder if running systemd inside the container would have been sufficient. :)

            I’ve never tried running systemd inside a container or seen anyone else do that before, but that’s definitely an interesting idea, particularly since that would allow you to reuse a lot of the NixOS module definitions. That said, process-compose is a really nice fit for this application. When you start it, it shows a simple text UI with the status of all the services and you can scroll through them and see the logs. It also has an HTTP API that exposes the same functionality as that text UI, making it very convenient to manage the environment in tests etc.

    4. 44

      You already have your solution, but you haven’t tried it. NixOS.

      1. 8

        Note that Nix isn’t quite completely deterministic due to the activation script which is a giant pile of shell commands.

(It may be mostly deterministic, but as a user you can add basically whatever you want to it and break this.)

        1. 10

Nix is only as deterministic as its builders and compilers, that's true. It gives you the scaffolding you need, but you still have to ensure you follow the rules. If you use a builder that creates random data somewhere, then you still won't have a deterministic output. That's true of pretty much anything that tries to solve this problem, though. They are all working with tools that make it a little too easy to violate the rules unexpectedly.
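As a toy illustration of that point: the moment a builder lets a random value (or a timestamp) leak into its output, two runs on identical pinned inputs stop producing the same artifact. A hypothetical sketch:

```python
# Toy illustration: a builder that leaks a random build id into its output
# is not deterministic, even though every declared input is pinned.
import hashlib, os

def build_deterministic(source: str) -> str:
    out = f"binary({source})"
    return hashlib.sha256(out.encode()).hexdigest()

def build_impure(source: str) -> str:
    out = f"binary({source}) build_id={os.urandom(8).hex()}"  # sneaky impurity
    return hashlib.sha256(out.encode()).hexdigest()

print(build_deterministic("main.c") == build_deterministic("main.c"))  # True
print(build_impure("main.c") == build_impure("main.c"))                # False
```
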

          1. 7

            Yeah I was actually surprised by this, from some brief experience with a shell.nix

            It seems like it is extremely easy to write Nix files “wrong” and break the properties of Nix

            Whereas with Bazel, the build breaks if you do something wrong. It helps you write a correct configuration

            Similar issue with Make – Make provides you zero help determining dependencies, so basically all Makefiles have subtle bugs. To be fair Ninja also doesn’t strictly enforce things, but it’s easy to read and debug, and has a few tools like a dep graph exporter.

            1. 7

              from some brief experience with a shell.nix

              nix-shell is a bit of a lukewarm compromise between Nix’s ability to supply arbitrary dependencies and the guarantees of the build sandbox. It’s not really trying to be hermetic, just ~humane.

            2. 2

              I would like to hear more about your shell.nix experience. It has some issues, for sure, but I have found the platform to be extremely reliable overall.

        2. 2

          Is that really relevant? A determined user could flash their BIOS with garbage and break anything they want. Does it matter that you can go out of your way to break your own NixOS system? The point is that as a user you’re not supposed to mess with your system manually, but you’re supposed to make changes by changing your configuration.nix. This is not the case for most other distros where you make changes to files under /etc side by side with files maintained by the distro packages.

          1. 3

            The point is that I don’t think anyone is checking. IDK how many options in nixpkgs break this. I don’t think we need to stop the user from purposely breaking it but we should make “official” flags not break it and make it easy for users to not break this.

      2. 6

        I did try Nix (briefly).

        However, I do think there is a place for classical distributions, as there is a place for distributions like Nix, Guix, and even something like GoboLinux.

        1. 24

Pretty much every concern you share in that post is solved by NixOS. Some points:

          • You don’t modify files in /etc, you use the NixOS configuration to do so
          • Changes are idempotent
          • Changes are reversible, there’s a history of config revisions called generations, or you can just remove the config and apply to roll forward
          • System configuration is reproducible and deterministic
          • Services are systemd units declared through the configuration
          • Packages reference their direct dependencies allowing for multiple versions of the same lib/package/etc

          NixOS isn’t perfect by any means, but it is a solution you seem to be looking for. Guix probably as well, but it’s a smaller and more limited distro.

          I took the multiple comments that “NixOS may do this but I’ll have to research” to mean you haven’t tried it. And this entire blog post screams to me that NixOS is a solution for you.

          1. 5

            You don’t modify files in /etc, you use the NixOS configuration to do so

            In a way, that’s second system syndrome. Suse Linux was bashed for doing something like that with YaST for a long time…

            In case of NixOS, I found that new language (not the syntax, the semantics) changed too fast for my liking…

            System configuration is reproducible and deterministic

            With a few caveats: Software might still get updated (I suppose you can lock that down to specific revisions, but who does?). From a bugfix/security perspective this may be desirable, but it’s not entirely reproducible, and updates can always introduce new bugs or incompatibilities, too.

            1. 6

All packages are locked in Nix, so "Software might still get updated" is not a problem there.

              1. 1

       offers “download a specific release tarball of buildInput specs” to improve matters. Other than that, buildInputs = [ bash ] may or may not point at the same version. discusses this for 3 years and there doesn’t seem to be a resolution.

                1. 10

                  You’ll get the same version with the same Nixpkgs, every time. buildInputs = [ bash ] will always point at the same bash package for some given Nixpkgs. Package versioning is a different issue.

                  1. 1

                    I count package versioning as part of the system configuration: Sure, your package names are always the same, but the content might differ wildly.

                    1. 12

                      The content will be the same every time with the same system configuration, assuming you’ve pinned your Nixpkgs version.

                    2. 5

                      I think you’re misunderstanding the way nixpkgs works with versions and what the versions actually mean there. Check it out in practice. Unless you update the channels/flakes or whatever you use between runs, nothing will change - there’s no explicit pinning required.

                    3. 4

                      The way I manage this is using several different versions of nixpkgs; a “known good” version for each package I want to lock, so for agda I would have nixpkgsAgda, the regular nixpkgs which is pinned to a stable release, and nixpkgsLatest which is pinned to master.

                      Most of my packages are on stable nixpkgs. Every now and then when I run nix flake update, it pulls in new versions of software. Things pinned to stable have never broken so far. Things pinned to latest are updated to their latest versions, and things pinned to specific packages never change.

                      While it does involve pulling in several versions of nixpkgs, I build a lot of software from source anyway so this doesn’t matter to me very much. I do hope that nixpkgs somehow fixes the growing tarball size in the future…

                2. 5

                  buildInputs = [ bash ] may or may not point at the same version

                  That’s a snippet of code written in the Nix expression language. bash is not a keyword in that language, it’s just some arbitrary variable name (indeed, so is buildInputs). Just like any other language, if that bash variable is defined/computed to be some pure, constant value, then it will always evaluate to that same thing. If it’s instead defined/computed to be some impure, under-determined value then it may vary between evaluations. Here’s an example of the former:

with rec {
  inherit (nixpkgs) bash;
  nixpkgs-src = fetchTarball {
    sha256 = "10wn0l08j9lgqcw8177nh2ljrnxdrpri7bp0g7nvrsn9rkawvlbf";
    url = "";
  };
  nixpkgs = import nixpkgs-src { config = {}; overlays = []; system = "aarch64-linux"; };
};
{ buildInputs = [ bash ]; }

This evaluates to { buildInputs = [ «derivation /nix/store/6z1cb92fmxq2svrq3i68wxjmd6vvf904-bash-5.2-p15.drv» ]; } and (as far as I'm aware) always will; even if the tarball URL disappears, it may still live on if that sha256 appears in a cache!

                  Here’s an example of an impure value, which varies depending on the OS and CPU architecture, on the presence/contents of a NIX_PATH environment variable, the presence/contents of a ~/.config/nixpkgs/overlays.nix file, etc.

                  with { inherit (import <nixpkgs> {}) bash; };
                  { buildInputs = [ bash ]; }

                  I recommend sticking to the former ;)

                  PS: I’ve avoided the word “version”, since that concept doesn’t really exist in Nix. It’s more precise to focus on the .drv file’s path, since that includes a hash (e.g. 6z1cb92fmxq2svrq3i68wxjmd6vvf904) which is the root of a Merkle tree that precisely defines that derivation and all of its transitive dependencies; whereas “version” usually refers to an optional, semi-numerical suffix on the file name (e.g. 5.2-p15) which (a) is easy to spoof, and (b) gives no indication of the chosen dependencies, compiler flags, etc. which may have a large effect on the resulting behaviour (which is ultimately what we care about).
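That Merkle-tree idea can be made concrete with a toy sketch (deliberately simplified; this is not Nix's actual .drv hashing scheme): a node's hash covers its own content plus its dependencies' hashes, so the same "version" string can name two different derivations.

```python
# Simplified Merkle-style derivation hashing (not Nix's real .drv scheme):
# a node's hash covers its content plus its dependencies' hashes.
import hashlib

def drv_hash(content, deps=()):
    h = hashlib.sha256()
    h.update(content.encode())
    for dep in sorted(deps):   # dependency hashes feed into the parent hash
        h.update(dep.encode())
    return h.hexdigest()[:12]

glibc         = drv_hash("glibc-2.38 source + flags")
glibc_patched = drv_hash("glibc-2.38 source + flags + CVE patch")

bash_a = drv_hash("bash-5.2-p15 source + flags", [glibc])
bash_b = drv_hash("bash-5.2-p15 source + flags", [glibc_patched])

# Same "5.2-p15" version string, different derivations, because a
# transitive input changed:
print(bash_a != bash_b)  # True
```
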

            2. 4

              In case of NixOS, I found that new language (not the syntax, the semantics) changed too fast for my liking…

Do you happen to have an example of such a semantic change at hand? Curious, as Nix is often regarded as rather conservative and trying to keep backwards compatibility as much as possible.

        2. 7

          Part of the reason Nix/Guix take their particular approach is specifically because the global mutable state of a traditional FHS style distribution is nigh-impossible to manipulate in a deterministic way; not just because of the manipulation by the package manager, but because there is no delineation between mutations by the package manager and those by the user. A great example is /etc/passwd, which contains both locally created users as well as “service accounts” the distro maintainers make to separate out some unprivileged services.

          1. 1

            Relevant writing from Lennart Poettering on state in Linux distributions and how to accomplish a factory reset mechanism.

        3. 1

          There is a place for “classical” (read: legacy) distributions, yes, in the same way there is a place for collecting stamps.

      3. 3

        I want to switch to NixOS so badly, but I also want SELinux support.

      4. 1

It needs a lot of work to enforce 1) determinism/reproducibility, 2) non-hysteresis (when you stretch an object and it doesn't return /exactly/ back… like hidden side effects in Ansible), 3) ease of use, & 4) acceptance testing.

    5. 10

      I posted an answer to this on Discourse.

      I’d also like to link to this very relevant new blog post, “Nix Flakes is an experiment that did too much at once…”

      1. 8

        I appreciate your perspective, but I disagree with your opinions. Specifically:

        • We don’t need a version resolver. You may want one, but it is clearly not strictly necessary.
        • flake.nix‘s grammar and semantics are a strict subset of Nix; it’s not inconsistent at all. It is a downgrade in expressive power, and that’s a good thing when we consider POLA and packages-as-capabilities.
        • system must be explicit for good cross-compilation. Yeah, it’s a little irritating, but the alternative is vaxocentrism, and anybody who survived the 32-to-64-bit transition will agree on the value of “i686” and “amd64” being different strings.
        1. 2

          We don’t need a version resolver. You may want one, but it is clearly not strictly necessary.

          If FlakeHub is going to bolt on a centralized version resolver, I really want flakes to properly tackle it in a decentralized manner instead. FlakeHub version resolution meshes poorly with input following and machine-scope flake registries. I am generally worried about FlakeHub becoming how the Nix ecosystem deals with flakes; it’s a glaring point of centralization in a feature set designed for decentralization. Flakes currently only have one enforced point of centralization: the global flake registry, managed by the NixOS foundation.

          system must be explicit for good cross-compilation. Yeah, it’s a little irritating, but the alternative is vaxocentrism, and anybody who survived the 32-to-64-bit transition will agree on the value of “i686” and “amd64” being different strings.

          I agree! System sets also need to be extensible. systems in the centralized flake registry is the only working solution I’ve seen so far, and it’s woefully underused. We need to heavily enforce one convention across the whole flake ecosystem for extensible systems to be usable, or we need to extend Nix with an official flake system registry that’s also extensible.

        2. 2

          I’m curious: Don’t you dislike the fact that basically the entire point of flakes is to avoid the Nixpkgs monorepo? The ability to have a monorepo is a key advantage of free software, and the main reason to avoid it is to support proprietary software. I’d rather we keep the monorepo and just educate people better on how to use it, rather than dismantle it with flakes.

          1. 9

            I’m not entirely sure that is the point; I have a bunch of github projects that are deployed out of flakes. There is little to no reason for putting them into the nixpkgs monorepo, as they’re basically used by me only. There is no wider interest that would justify putting them into nixpkgs and nixpkgs would only add massive delays to trying to push new updates to my servers.

            edit: I would also like to add in a quick edit that nixpkgs contains plenty of non-free software, such as VSCode. Free software does not enable monorepos any more than non-free software does and monorepos can contain both free and non-free software.

            1. 1

              Aren’t you just agreeing with me that the point of flakes is to avoid the Nixpkgs monorepo?

              Anyway, if your projects don’t have dependencies on each other, then you don’t need flakes. If they do, then maybe the common component should be published as something shareable, which could go into Nixpkgs. Or just put the projects which depend on each other into a single repo.

              1. 5

The project in question is a collection of various web services. They do not make sense in isolation but don't share much common code (some do, and the flakes in question point in the right direction), if they're even in the same language.

The issue is, again, that this is code solely deployed by me; parts of it only make sense being deployed by me, and other parts are simply stuff I deploy to servers for others. There is no reason to pull them into nixpkgs and increase the maintenance burden of everyone there, because they would have to maintain some random code I've barely touched in years.

                In this matter, I see nix flakes no different to a Dockerfile; it’s a way for a maintainer to bring availability of their code to a wider audience without having to play the maintainer game of thrones. The upstream is in full control of the deployment and determines the defaults and configurations, as it should be (before going for NixOS I used ArchLinux in part for the reason that they have very minimal patches on top of every package).

There is also no reason to put all my projects into a single repo; I do not like the monorepo approach. It's a repository style propagated by the likes of Google and Microsoft, and it's hostile to small open source projects due to the increased maintenance burden. Nixpkgs gets away with it because it has plenty of subtree maintainers; I don't wanna put that on myself just for my own silly bits of code.

And no, flakes are not there to avoid the nixpkgs monorepo, as evidenced by the fact that almost all of them pull it in for common components. That argument would hold up if the majority, or even a good slice, of flakes purposefully didn't pull in nixpkgs at all to avoid it. But almost all flakes use nixpkgs in various ways.

              2. 5

So nixpkgs should be filled with a bunch of "caterns_repos" folders, one for each developer with a public hobby project? You can surely agree that it doesn't scale there.

                1. 1

Nixpkgs does not scale as it is. This is an absurd suggestion.

              3. 1

                This is not a reasonable suggestion and does not make sense for really any project I think.

          2. 8

            the entire point of flakes is to avoid the Nixpkgs monorepo

            It is a point, but I don’t believe “entire point” does the multitude of motivations for flakes justice. I’ll defer to today’s other frontpage Nix post [1.] and provide this list of Flakes features:

            - Dependency locking/tracking between Nix expressions (flake.lock)
            - Fetching dependencies with a common interface (builtins.fetchTree)
            - Harmonized declarative interface
            - New CLI interface semantics
            - Pure evaluation semantics (not exactly, but currently linked to the Flakes mental model)

I'm not expert enough in Nix/Flakes or the history of Nix to say whether these others are the most salient motivations for flakes, but collectively they constitute a body of work that goes significantly beyond what you describe as avoiding the nixpkgs monorepo.

            The ability to have a monorepo is a key advantage of free software, and the main reason to avoid it is to support proprietary software.

            Similarly, I believe this statement is too absolute. The main reason to avoid a monorepo depends on a user/project/organization’s motivations. E.g. my main reason to avoid nixpkgs for a silly little player [2.] that I develop is that I don’t think it belongs in the nixpkgs monorepo. Few Nix users would benefit from it, as it has few users, and never will have a significant number of users. Going through an additional step of review every time I make a release would take away some of the joy I get out of developing the application. This doesn’t mean di-tui has proprietary aspirations. It’s open source and will remain that way.



          3. 4

            nixpkgs is massive, and it is absolutely inconceivable to be able to store, say, the whole python library ecosystem, plus the whole of nodejs and whatnot. There is simply a limit on what can sanely fit inside it and it is already bordering on that limit.

            I believe the duality of Flakes and a good base monorepo is actually a quite great approach as is: have a rolling release “blob” of mostly working together, common software I can refer to always and let more exotic software be defined and linked from anywhere – you end up with a Flakes file that pins a working nixpkgs.

            1. 1

              it is absolutely inconceivable to be able to store, say, the whole python library ecosystem, plus the whole of nodejs and whatnot

              Why not? I can and do upstream Nix expressions for Python packages I use. I only use what’s packaged in Nixpkgs. Works fine for me.

              There is no scaling limit to Nixpkgs, and I’m wondering what is leading you to believe there is one.

              1. 2

Who approves the PRs? It already takes a lot of time to have something merged, because the maintainers are spread so thin.

                1. 1

                  That doesn’t matter, because you can use your fork while waiting for your PRs to be merged.

          4. 3

            As an outsider that’s an interesting statement!

            Disclaimer: I only use nixpkgs on other Distros, I have never installed NixOS on its own, but I did read up a lot over the years and tinkered, and before flakes I could sum it up as “Nice in theory, but completely unusable for the amount of work I am willing to invest.” - with flakes everything seemed easier and made more sense.

          5. 2

            No, I don’t care whether a ports tree is confined to a single git repository, especially not at the size of something like nixpkgs. Flakes only act as a federation mechanism; the integration of code still happens in the same fashion either way.

            1. 1

              the integration of code still happens in the same fashion either way.

              But that’s not true! Federation in this form requires “stable APIs”, a deformity usually induced by the presence of proprietary software which can’t simply be patched and recompiled for new APIs. “Stable APIs” are only needed when implementers can’t just update their users for the new version. See

              1. 2

                We might be thinking of different actions. To me, the integration of code is what the Nix daemon is doing when I hand it a build request. That request will involve two git repositories already (nixpkgs and the upstream repo for the target package), so I’m neither bothered by the idea that the second repository will have Nix expressions, nor by the idea that the second repository is fetched prior to the first repository.

                At no point here am I advocating for non-free software. I will point out that, by design, a ports tree usually has no problem expressing the integration of free and non-free software; nixpkgs already carries many such expressions, regardless of whether I use them.

                Stability of a Flake API is almost entirely irrelevant from this perspective, because the versions of expressions are locked in flake.lock. Lockfiles provide bitemporality, disconnecting the time of integration from the time of locking.

          6. 1

            That is a great advantage of flakes though. Because you can make much more than just packages. See home-manager, for example. Installing it as a flake is considerably easier and more reliable than as a channel. It makes developing Nix module sets that wouldn’t really fit in NixOS considerably more practical.

      2. 3

Yeah, I thoroughly agree with you there. Using a lockfile without a version solver and a nice UX for upgrading versions is simply asking for so much pain.

        1. 8

          I don’t get the version solver part. Those expressions don’t have versions or version constraints. You refer to some branch or tag and that’s it. An alternative version solver could be built on top, but that would also require flakes themselves to be versioned differently than people do at the moment. (One foo-2.0 doesn’t equal another foo-2.0 when they pull in different refs of the whole system they depend on)

          And that’s more of a nixpkgs thing than a flake thing.

          1. 6

            Version solvers make a lot of sense in systems like apt or pip, in which you can only have at most one version of a dependency so everyone needs to agree on what that version is.

            However, this can have some weird non-local effects. For example, you can have libA depending on libB ^1.2, and in libA’s CI they use libB 1.2.0 to run their tests. Now a downstream project that depends on libA but that also (either directly or via one of its other deps) requires libB ~1.3 would silently use libA with libB 1.3.0, which it was never tested against.
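That interaction can be reproduced with a toy single-version resolver (simplified semver semantics, made-up package data):

```python
# Toy single-version resolver illustrating the non-local effect described above.
# Simplified semantics: "^1.2" allows >=1.2,<2.0; "~1.3" allows >=1.3,<1.4.

AVAILABLE = {"libB": ["1.2.0", "1.3.0", "1.3.5"]}

def allows(constraint, version):
    major, minor, _ = (int(x) for x in version.split("."))
    cmaj, cmin = (int(x) for x in constraint[1:].split("."))
    if constraint.startswith("^"):   # same major, at least the given minor
        return major == cmaj and minor >= cmin
    if constraint.startswith("~"):   # same major.minor
        return (major, minor) == (cmaj, cmin)
    raise ValueError(constraint)

def resolve(pkg, constraints):
    candidates = [v for v in AVAILABLE[pkg]
                  if all(allows(c, v) for c in constraints)]
    return max(candidates)  # everyone shares the newest mutually allowed version

# libA declares "^1.2" and was tested against 1.2.0; the app adds "~1.3".
print(resolve("libB", ["^1.2", "~1.3"]))  # 1.3.5 -- a version libA never tested
```
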

In Nix, everyone can have their own versions of all their deps, and everything is used with the same versions of its dependencies when integrated into a larger project as when it's built as an individual software project. This makes reasoning nice and local, but it also wastes a lot of disk space and network bandwidth.

            I think that’s a decent tradeoff a lot of the time.

            1. 10

              Yeah, it also seems to me like a usual case of people recognizing a pattern they’re familiar with and assuming that they can carry over all of their experience and mindset because of the similarity. There are specific reasons why version resolving works well for programming language package managers, but they don’t necessarily apply to nix flakes. In addition to what you said, here are some differences that come to mind:

              • The public facing interface of a flake is very hard to pinpoint compared to a library in a programming language, especially one with static types. Unless the flake is a Nix-only utility library, pretty much any change would require a major version number bump because of how deeply the consumer might be interacting with your flake and its outputs.
              • Nix doesn’t have a runtime! It’s very important to think carefully about the backwards compatibility of packages in programming languages, because you’re mostly concerned about what happens at runtime, when you, the programmer don’t have the means to swoop in and fix the issue. Of course it’s important to care about your consumers’ builds not failing unnecessarily, but the real nightmare fuel is what can happen at runtime. In that respect, since Nix only exists at build-time, which is when a developer is at hand to fix the issue, version issues are less critical.
              • You are much less likely to be forced to upgrade your flake dependencies. With programming language packages, you often need to upgrade a bunch of mutually interacting packages all at once because everything depends intricately on everything else. But with Nix flakes, it’s so easy to add an input like my-nixpkgs-for-a-very-specific-purpose = github:myorg/myrepo/my-ad-hoc-branch and use it for that specific purpose without disturbing anything else.

              I often see a similar thing happening when people criticize Nix for being Turing-complete or untyped etc. They just carry over reasoning from related but different domains without reevaluating them for the unusual case that Nix is.

          2. 1

            Yes, and right now that is what channels handle. But with flakes, the whole point is that you only change things when you decide; that is what the lock is about. So now you have to take care of every single version change of your dependencies, and how they may not interact well with each other, yourself. It quickly becomes unwieldy.

            1. 2

              I don’t believe that’s a technical problem which can be solved. You always had to do that yourself. The version constraints in libraries are mostly just indicative anyway (we’re fixing some random issue from package updates every other week). If you’re depending on two unrelated projects and reference them both in your flake, they don’t depend on each other, so there would be no constraint to consult; you can’t realistically build a full compatibility matrix of everything installable.

              For example someone writes a Frobnicator which uses /tmp/foo as a lock file and someone else writes a Bloberator which does the same. When they get independent flakes, who would you expect to test all the combinations and where would the constraint of “those two can’t work together” even live? (And how do you express that they can work, but not at the same time)

    6. 9

      Tim sounds like a good staff/principal engineer doing “improve the team” work.

      But, a minor caution.

      The cruel equations come down to this: if you have four developers and a Tim, then the amount of extra work you get out of them by letting Tim mentor instead of deliver has to exceed what Tim would do as just a line developer. Say you have 4 developers who can each do 10 story points. Tim pairs with them, and they can now do 15 while pairing. If Tim can deliver 20 story points when not mentoring, then that’s a debuff of 15 points to the team (30 + 15 + 0 vs 30 + 10 + 20)!
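Spelling out the arithmetic (assuming Tim can only pair with one developer at a time, so only one dev gets the buff at any given moment):

```python
devs, solo, paired, tim_solo = 4, 10, 15, 20

# Tim mentors: three devs work solo, one pairs with Tim, Tim delivers nothing.
team_with_mentoring = (devs - 1) * solo + paired + 0   # 30 + 15 + 0 = 45
# Tim codes: all four devs work solo, Tim delivers his own 20 points.
team_without = devs * solo + tim_solo                  # 30 + 10 + 20 = 60

print(team_without - team_with_mentoring)  # 15-point debuff
```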

      This analysis doesn’t hold true all the time for various reasons, like:

      • The juniors retain their buff when Tim isn’t pairing
      • Tim is only occasionally putting out 20 points, but is usually 5 or less because he’s depressed and lonely and not mentoring
      • Juniors without Tim’s mentorship are unable to work on a story (for example, they can’t deliver without some base level of explanation of a legacy system or similar)

      The really tricky part is that it’s comparatively easy to fake being a Tim if you have some social skills, and then you can easily ask basic questions (consider Eliza Tim: “And how do you know that this is the right data structure?”) and flatter people and help them feel like you’re mentoring them, and that’s the social proof you need to avoid having to do real work. These Fake Tims (and they can happen accidentally from developers who misunderstand their mandate!) take up valuable space in an org chart and left untreated can silently tank the efficiency of your team.

      If you don’t believe that this is possible, consider a common way that orgs fail their juniors: letting them pair with people all the time and yet never drive or get their names on the work/tickets…so when perf review time comes around, there’s no concrete evidence of their competence that can be used to justify raises or promotions. The mechanism of action is similar.

      1. 15

        This makes sense if you think of productivity as the amount of code that is created.

        However, code is technical debt. What a computer engineer actually provides is maintenance of a codebase, availability of a service, and health care of a distributed system. If Tim carries a pager, then Tim is doing most of what he is paid to do!

        1. 3

          This study seems to be about some of the more self defeating aspects of measuring productivity that way

      2. 4

        A similar calculus applies to senior engineers (or any engineer, really) prioritizing internal tooling over feature work.

        The main difference there being that typically once a tool is created everybody can use it, so it tends to be a team-wide buff, and also that it by default survives the departure of the creator (until systems change).

        It’s also possible that the tool moves the team to another Pareto frontier (much like how a mentor can, if the mentees hang on to what they’ve been taught, move the entire team to a new level).

        1. 7

          It’s so depressing making internal tooling that works and does increase productivity when used, but then failing at the adoption step because people are too busy to learn it. The soft skills for pushing adoption through the org take you even further from the feature work. It’s definitely a trap that you can fall into if you aren’t careful.

          1. 10

            For some reason, I always end up finding myself in this position in every org I join and I’ve noticed that getting people to use your tooling is another soft skill you learn by experience, but I think there are some common principles:

            • Make the tools extremely accessible! If it’s possible, deploy the tool as a public web application behind OAuth, it should be one bookmark away and zero mental overhead to reach it. Or if it’s a CLI application, make it accessible without any effort. My tools can usually be run by nix run github:myorg/sometool, zero installation or update effort, one bash alias away. Whatever the medium, make it zero effort to access the tool.
            • Mention it all the time, everywhere without asking people to use it. In every single PR or Slack message that you use it, include the full instructions to access the tool and the steps you followed with the tool. Of course don’t spam communication channels, instead put the details in Slack threads or <details> tags on GitHub. When someone decides to give it a try, they should have a million examples to look at and the examples should show up all over the place.
            • It’s generally the “third generation” that fully adopts the tool. You are the first generation; the people you teach are the second generation. The second generation will never make a full mental commitment. They’ll always think of it as the thing you know about. If you follow the previous two points they’ll start using it here and there, but they won’t ever read the README or --help. But the people that learn from the second generation, aka the third generation, will see the tool as a treasure trove of valuable functionality they can learn to impress the second generation.

            So, make it accessible and mention it without pushing it and then be patient, your real users will come after a delay.

      3. 3

        I think of this in terms of additive and multiplicative effects: how much you add to the team by your own work, and by how much you multiply the efficiency of other people. Some developers have a multiplicative effect of less than one; they make everyone on the team less productive. For junior people that’s fairly common, but the hope is that it’s a short-term thing: you train them and their multiplicative effect at least reaches one (they do work, and neither positively nor negatively affect other people’s work) and ideally exceeds one (they make everyone else’s work better by doing glue activities).

    7. 5

      What if Postgres had a “footgun-avoidance” command, especially for database migrations, that would prevent certain locks from being taken during a session? For example, an access exclusive lock is quite painful on a production system, so it would be great to disable commands using that lock, unless in a maintenance window.

      1. 4

        It’s only really a problem if the lock will be held “too long”, but that depends on the situation - large tables or ones that receive lots of writes.

        1. 1

          Even if the lock is held momentarily, it’s not too hard to introduce deadlocks to the overall system. Especially if more than one table is locked at a time and if the connected applications are running complex transactions.

      2. 2

        I feel like between ANALYZE and the really solid docs this isn’t that big of an issue though.

      3. 2

        Sort of like OpenBSD pledge(2), but for “I promise to acquire no locks other than {set}” instead of making system calls? I like it.

      4. 1

        Maybe lock_timeout is good enough to control how long you’re willing to wait on locks. But I agree that a static analysis could be great too.

        1. 2

          lock_timeout aborts a query instead of waiting for a lock to become available. The problem @tedchs is describing is that the migration would acquire the initial lock (which presumably wouldn’t have to wait, depending on the situation, of course) and then hold it. You’d still have an outage with lock_timeout in such situations.

          1. 1

            For sure, lock_timeout will abort; I forgot to mention that it requires a retry loop (ideally with some exponential/random backoff) for it to be useful, a bit like I did in
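The retry loop might look something like this — a generic Python sketch, not tied to any particular Postgres driver; LockTimeout here stands in for whatever error your driver raises when lock_timeout expires:

```python
import random
import time

class LockTimeout(Exception):
    """Stand-in for the driver error raised when lock_timeout expires."""

def run_with_retries(execute, attempts=5, base_delay=0.05):
    """Run a DDL statement, retrying on lock timeout with exponential
    backoff plus jitter so concurrent retries don't stampede."""
    for attempt in range(attempts):
        try:
            return execute()
        except LockTimeout:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))

# Simulated statement that hits the lock timeout twice, then succeeds:
calls = {"n": 0}
def alter_table():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LockTimeout()
    return "ALTER TABLE done"

print(run_with_retries(alter_table))  # succeeds on the third attempt
```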

    8. 31

      On the subject of simplicity, I think one axis that, IME, divides people 50/50 is whether we’re talking about intensional or extensional simplicity. E.g., which do you think is simpler in a car: automatic transmission or manual transmission? Automatic transmission has simpler external “semantics”, while manual transmission has a simpler implementation.

      Some people think Nix is unnecessary complexity because it wraps everything and does unspeakable things, but I think Nix is essential simplicity, because it does these things to make their semantics simpler.

      1. 7

        New Jersey style vs MIT approach.

      2. 1

        I think nix is essential complexity, because wrapping everything is spackle over a cracked foundation.

        You can’t make something simpler by piling on complexity.

        1. 2

          You can’t make something simpler by piling on complexity.

          This is a tautology from an extensional perspective and largely irrelevant from an intensional one. Thank you for demonstrating the other side of the argument.

    9. 7

      How do we change it?

      1. 26

        Make it a law that paid parking lots have to accept payment by cash?

        1. 25

          “To pay with cash please buy a single-use code in one of the authorized points” (nearest one 2 districts away, opening tomorrow morning).

          I agree with the spirit of what you said though.

          1. 6

            You are experienced with the dark patterns, sir

        2. 3

          Or make it a law that how you pay must be absolutely evident and understandable at a glance to 9 out of 10 randomly selected people. Then, if you find yourself in a situation where it’s not evident how to pay, you just turn on your phone’s camera, record a 360° video, and go about your business knowing that you can easily dispute whatever fee they throw at you.

          1. 1

            This is probably the best answer. No cost to “plot of land for parking” operators, no cost to people. Just record that you couldn’t clearly tell what’s going on and move on with your day.

        3. 1

          Ah yes, big cash boxes under unmotivated observation, sitting out in public. That won’t raise the cost of parking.

          1. 2

            Has parking become cheaper when those boxes were replaced with apps?

            1. 1

              Maybe? This entire discussion is severely lacking in generality. People are extrapolating wildly from one ranty post in one US city. I could fake another rant saying that parking is free as long as you scan your eyeballs with Worldcoin and it would add as much data…

      2. 22

        Plant asphalt-breaking flora at the edges of the lots. Bermudagrass is a good choice if you can obtain it, but standard mint and thyme will do fine for starters. In some jurisdictions, there may be plants which are legal to possess and propagate, but illegal to remove; these are good choices as well.

      3. 17

        We can start by not forcing people to use an app to begin with.

        In Chicago, they have a kiosk next to a row of on-street parking. You just put in your license plate number, and pay with a credit card. No app needed. At the O’Hare airport, short term parking gives you a receipt when you enter the lot. Then you use it to pay when you exit. No app needed.

        1. 12

          Right. The way it used to be everywhere, until relatively recently.

          A root problem is that, for a lot of systems like this, a 95% solution is far more profitable than a 99% solution. So companies will happily choose the former. Mildly annoying when the product is a luxury, but for many people access to parking is very much a necessity.

          So there’s one way to change this: companies providing necessities have to be held to stronger standards. (Unfortunately in the current US political climate that kind of thing seems very hard.)

        2. 2

          You’re talking about public (on-street) parking. This post is talking about private parking lots, which exist for the sole purpose of profit maximization.

          1. 12

            The cities could pass laws to regulate the payment methods. Parking lots that don’t conform can be shut down.

            Depending on the city, getting such regulations passed may be difficult though.

      4. 13

        The way I see it, the issue is that every random company has to do a positively amazing job of handling edge cases, or else people’s lives get disrupted. This is because every interaction we have with the world is, increasingly, monetized, tracked, and exploited. Most of these companies provide little or no value over just letting local or state governments handle things and relying primarily on cash with an asynchronous backup option. Especially when it comes to cars, this option is well-tested in the arena of highway tolls.

        To put it succinctly: stop letting capital insert itself everywhere in our society, and roll back what has already happened.

      5. 7

        First do no harm. Don’t build stuff like this.

        Learn and follow best practices for device independence and accessibility. Contrast. Alt text. No here links. No text rendered with images.

        Those are things we can and should do.

        But likely things like this won’t change until there are law suits and such. Sigh.

      6. 2

        This seems like it’s just some random for-profit Seattle parking lot (cheap way to go long on a patch of downtown real estate while paying your taxes) that, consistent with the minimal effort the owner is putting in generally, has let whatever back-alley knife fight parking payments startup set up shop as long as they can fork over the dough. It is essentially a non-problem. Even odds the lot won’t exist in two years. There are many more worthwhile things to care about instead.

        1. 4

          I disagree. This is going on outside Tier-1 and Tier-2 cities with high population density. Small cities and large towns are finally coming to terms with (using Shoup’s title) the high cost of free parking and replacing meters with kiosks (usually good but not necessarily near where you need to park) or apps (my experience is they’re uniformly bad for all the reasons in the link) to put a price on public parking.

          One nearby municipality has all of:

          • Missing or incorrect signs.
          • Unclear hours. Is it free after 6pm? Sunday? Holidays? This zone? Seasonally?
          • Very few kiosks.
          • QR codes and stale QR codes.
          • Apps acquired by other app companies and replaced.
          • Contracts ended or changed where the QR code or app doesn’t work or worse takes the payment but is invalid (this only happened to me once).

          Even if you’re local and know the quirks you’ll have to deal with it.

        2. 4

          It’s not just “some random for-profit Seattle parking lot”. I’ve run into frustrating and near-impossible experiences trying to pay for parking in plenty of places. Often compounded by the fact that I refuse to install an app to pay with.

          The other day I was so happy when I had to go to the downtown of (city I live in) and park for a few minutes and I found a spot with an old-fashioned meter that accepted coins.

        3. 2

          History does not bear you out.

        4. 1


      7. 1

        Establish a simple interoperable protocol standard, that every parking lot must support by law. Then everyone can use a single app everywhere which fits their needs. I mean, this is about paying for parking, how hard can it be?

        1. 3

          I mean, this is about paying for parking, how hard can it be?

          I think that’s the thing, though. A company comes in to a municipality and says “this is about paying for parking, we make it easy and you no longer have to have 1) A physical presence, 2) Employees on site, or (possibly) 3) Any way to check if people have paid.” They set you up with a few billboards that have the app listed on them, hire some local outfit to drive through parking lots with license plate readers once or twice a day, and you just “keep the profit.” No need to keep cash on hand, make sure large bills get changed into small bills, deal with pounds of change, give A/C to the poor guy sitting in a hut at the entrance, etc.

          I write this having recently taken a vacation and run into this exact issue. It appeared the larger community had outsourced all parking to a particular company who has a somewhat mainline app on the Android and Apple stores, and hence was able to get rid of the city workers who had been sitting around doing almost nothing all day as the beach parking lots filled up early and stayed full. I am very particular about what I run on my phone, but my options were leave the parking lot, drive another 30 minutes in hopes that the next beach had a real attendant with the kids upset, or suck it up. I sucked it up and installed long enough to pay and enough other people were that I don’t see them caring if a few people leave on principle of paying by cash, either way the lot was full.

          I say all this to point out that some companies are well on their way to having “the” way to pay for parking already and we might not like the outcome.

          1. 2

            I get that digital payment for parking space is less labor intensive (the town could also do that themselves, btw), but we can by law force these companies to provide standardized open APIs over which car drivers can pay for their parking spot, why don’t we do that?

            1. 2

              I’m always in favor of citizens promoting laws they feel will improve society, so if you feel that way I’d say go for it! I don’t, personally, think that solves the issue of standardizing on someone needing a smart phone (or other electronic device) with them to pay for parking. That to me is the bigger issue than whose app is required (even if I can write my own, until roughly a year ago I was happily on a flip phone with no data plan). So if this law passes, the company adds the API gateway onto their website and… we’re still headed in a direction for required smart device use.

              But, again, I strongly support engaging with your local lawmakers and am plenty happy to have such subjects debated publicly to determine if my view is in the minority and am plenty happy to be outvoted if that is the direction it goes.

    10. 2

      Does this mean if you host YourNextCoolApp on self-hosted nomad, that you’d need to pay the BSL?

      1. 5

        End users can continue to copy, modify, and redistribute the code for all non-commercial and commercial use, except where providing a competitive offering to HashiCorp.

        Seems like they are okay with it as long as you don’t make a product that competes with them directly

        1. 13

          What if they start (or acquire) a new offering that does compete with me? It seems like this allows them to effectively revoke the license at any time, and I can’t do anything about it.

          1. 10

            What if they decide to exit by selling their IP to a “license troll” company that buys it with the sole intention of extorting money out of anyone that uses their products by threatening them with lawsuits.

            Seriously, the premise seems to be “it’s OK, we’re the good guys”, but we all know how that plays out in the long term in a capitalistic society.

        2. 3

          So what if you sell management of a client’s Nomad cluster? All open, all friendly… but it does involve competing a tiny bit with their managed service options.

          1. 5

            This almost perfectly describes which I believe uses Nomad internally (or at least used to?). I wonder what constitutes competition.

            1. 4

              Indeed. Thankfully for, they are on the tail end of migrating away from Nomad (apps v1) to their own orchestrator (machines and apps v2), so they won’t be impacted.

              It’s probably the final nail in the coffin for Nomad outside of the enterprise use case. I used to root for it as a simpler alternative, but it’s not safe to use for any dev-tooling / dev-experience company.

          2. 1

            I’m not familiar enough with the license discussed, it isn’t even on the repositories yet, so, I don’t know, but it’s an interesting point.

    11. 16

      we can’t figure out how to make them stop

      I would hope that we all recognize that OpenAI could pull ChatGPT from the market, and also that it is within the ambit of consumer-protection agencies to force OpenAI to do so. We could, at any time, stop the lies.

      I suppose that this makes it a sort of trolley problem. We could stop at any time, but OpenAI and Microsoft will no longer profit.

      1. 8

        It’s too late for that now. I have half a dozen LLMs downloaded onto my own laptop - and they’re significantly worse than ChatGPT when it comes to producing lies.

        1. 1

          Ah, you were thinking along different lines with that statement. No worries.

          I read you as saying that ChatGPT, as a product, is misleadingly advertised to consumers as an algorithm which is too smart to lie. This is a misleading statement on the part of OpenAI, and could be construed as false advertising.

          The problem, to me, is not that LLMs confabulate, but that OpenAI is selling access to LLMs without warning their customers about confabulation.

      2. 4

        ChatGPT is pretty darn factual. I’m curious what you’re comparing it to… If we are going to start purging things that lie to us there are other places we should start.

        1. 3

          If you’re going to use a whataboutist argument, you need to actually say “but what about this other thing?” Don’t rely on me to fill out your strawman.

          1. 2

            Please, let’s keep this civil.

            It’s not a fallacious argument, I’m not constructing a strawman or asking you to figure it out as some kind of sinister rhetorical technique meant to deceive you (and if it was, wouldn’t it prove my point?)

            I just wanted to keep things short… But I’m happy to engage.

            Here are a few things which famously lie or tell untruths:

            • advertisers
            • politicians
            • scientists (claims of perpetual motion for example)
            • schoolteachers
            • books
            • news reports
            • illusions (lie to the eyes, or your eyes lie to you)
            • statistics

            It’s not a whataboutism argument I’m trying to make (whatever that is, pointing at the big book of fallacies is the biggest fallacy of them all if you ask me).

            Failing to be factual is not something we should condemn a new tool for, it’s a fundamental part of human existence. It’s claims to the contrary (absolute certainty) which have to be met with skepticism.

            1. 1

                An LLM isn’t a human, so we shouldn’t afford it the credence we usually sign off on as human nature. ChatGPT is not factual; ChatGPT generates statements that generally appear to be factual, to the extent one doesn’t feel the need to fact-check or confirm its statements (at least initially). Comparing a machine that generates lies by its very nature (without malice or want) to human action is a category error. ChatGPT is a computer that lies to us, and “humans lie more!” doesn’t make that observation better or worse (though software that mimics the worst parts of human nature is arguably worse than software which doesn’t). With respect to the above category error, it seems like whataboutism.

              (Hopefully we understand “lie” in the same way with respect to computers as opposed to people, that is, people lie knowingly (else they are simply wrong), whereas computers don’t know anything, so the consensus seems to be an LLM is “lying” when it’s confidently producing false statements. Do correct me if I’m mistaken on that)

              1. 1

                I would include lying in the sense of being factually incorrect in addition to lying in the sense of telling an intentional untruth.

                For what it’s worth, I also believe that GPT has as much or more intentionality behind its statements as you or I… Unfortunately, that is a matter for metaphysics or theology, but I wouldn’t mind hearing anyone’s arguments around that and I have the time.

                I also support the premise of the original article! We should tell people that GPT is capable of lying.

      3. 2

        And also the benefit of having it available is huge

        1. 8

          For who and what? I’ve found them largely useless.

          1. 8

            I use them a dozen or more times a day. I talked about the kinds of things I use them for here:

            1. 2

              This is really useful, thanks.

              It would be much easier to read on a phone if you fixed the meta tags as per - I wrote that post for you and Substack (unfortunately I can’t find any way of contacting them)

          2. 2
            • Use it to discover obscure command line options and use cases of tools I use. It’s often wrong, but the right answer is usually a Google search away.
            • When I narrow down a bug to a file, I just copy paste the code and describe the bug, it occasionally pinpoints exactly where it is and suggests a bad fix.
            • I feed it a JSON value and ask it to write its schema or maybe the NixOS options definition for a configuration structure like it. Unlike a mechanical translation, it uses common sense to deduce which fields have given names and which fields are named in a key:value fashion.
            • Billion other little use cases like that…

            I usually have it open in 5 tabs while I’m working.
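For contrast, here’s roughly what a purely mechanical translation looks like (a toy Python sketch): it has no common sense, so a dict like {"alice": …, "bob": …} becomes fixed, named properties instead of a key:value map, which is exactly the distinction the LLM guesses for you:

```python
import json

def infer_schema(value):
    """Naive, purely mechanical JSON Schema inference."""
    if isinstance(value, bool):        # bool before int: True is an int in Python
        return {"type": "boolean"}
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer_schema(v) for k, v in value.items()}}
    if isinstance(value, list):
        return {"type": "array",
                "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, (int, float)):
        return {"type": "number"}
    if value is None:
        return {"type": "null"}
    return {"type": "string"}

doc = json.loads('{"users": {"alice": {"age": 30}, "bob": {"age": 25}}}')
# Mechanically, "alice" and "bob" become two separate named properties,
# even though a human (or the LLM) would model users as a key:value map.
print(json.dumps(infer_schema(doc), indent=2))
```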

            1. 10

              How does this not drive you insane? Having to question the validity of everything it gives you at every turn sounds exhausting to me. I already find it immensely frustrating when official documentation contains factually incorrect information. I have no time and energy to deal with bugs that could’ve been prevented and going down rabbitholes that lead to nowhere.

              1. 1

                I use mostly perplexity and secondarily bing. It’s good for things where there’s a lot of largely accurate documentation, to generate code examples. It’s effectively a way to have a computer skim the docs for you when you’re trying to figure out how to do a task. You can then integrate snippets into what you’re doing, test them, and consult the cited docs.

                Telling it to rewrite something is often tedious, but can be advantageous when e.g. rushing to get something done.

                Tbh I anticipate that LLM based tools will continue to evolve for code-related tasks as basically better refactoring and automated review engines, and as generators of low stakes text that people then review. They’re not AI but they do provide a new tool for manipulating text, and like all tools are great when used right, but if they’re your only tool you’ll have a bad time.

              2. 1

                In all fairness, I do get more tired per unit time when I deal with ChatGPT. In the past, coding would tire out one part of my brain, but that wouldn’t affect the social side too much. But coding + ChatGPT tires out both parts. That said, if I reflect on how my brain processes the information it gives me, I don’t treat it as a logical statement that needs to be validated, I treat it as a hunch that’s quite likely to be wrong. Whenever I need to pause to think, I jot down my thoughts into the ChatGPT prompt, which, at worst, serves as a note taking medium. Then I press enter and move onto doing something else and check back and skim once it’s finished to see if there’s anything potentially useful. When I spot a potentially useful sentence, I copy-paste it to the prompt, and ask “are you sure?”. It sometimes says, “sorry for the confusion…” so I don’t have to do anything else, sometimes it’ll justify it in a reasonable manner, then I’ll google the statement and its justification and see if it holds water.

                The bottom line is, I think it takes a little bit of practice to make efficient use of it. You need to learn the subtle hints about when it’s more likely to be lying and the kinds of questions that it’s likely to answer well. As you said, it IS tiring to deal with, but with practice you also grow the muscles to deal with it so it gets less tiring.

                1. 7

                  The bottom line is, I think it takes a little bit of practice to make efficient use of it. You need to learn the subtle hints about when it’s more likely to be lying and the kinds of questions that it’s likely to answer well.

                  So it’s like pair-programming with a very confident and possibly sociopathic junior developer?

                  1. 1

                    Yes. But one that has read significantly more documentation than you. In fact, it has read the entire internet.

                    1. 5

                      AKA the “Net of a million lies”. LLMs are backstopped by a vast mass of text that is broadly “true”, or at least internally logical. The poisoning of this well is inevitable as long as text is considered to be semantic content, devoid of any relationship to real facts. And as the entire commercial mainspring of the current internet is to serve ads against content, there will be a race to the bottom to produce content at less and less cost.

                      1. 1

                        Yes, that is true. “Standalone” LLMs will most likely decline in quality over time.

                        There is probably more future in ChatGPTs that are bundled with, or pointed at, specific source material. Something like: you buy all volumes of Knuth’s The Art of Computer Programming and get a digital assistant for free that can help you navigate the massive text.

                        1. 3

                          We’re going to see an example of Gresham’s Law, where bad (LLM-generated content) drives out good (human-generated). In the end, the good stuff will be hidden behind paywalls and strict rules will be in place to attempt to keep it from being harvested by LLMs (or rather, the operators of “legit” LLMs will abide by their requests), and the free stuff will be a sewer of text-like extruded product.

                          This is the end of the open internet.

                2. 3

                  Thanks, that makes sense. I guess I’m too old and grumpy to get used to new tools like this. I guess I’ll just grow irrelevant over time.

                  1. 1

                    Here’s hoping we don’t grow irrelevant before we retire 🍻, but I honestly don’t see ChatGPT as a threat to programmers at all. Quite the contrary, it will bring computing to ever more places and deliver more value, so whatever it is that you’re currently programming for a living, society will need much more of it not less.

            2. 2

              If Google was as useful as it was 5 years ago, I wouldn’t be asking a random text generator how to do things.

            3. 2

              You’d literally rather have a computer lie to you than read a man page or some other documentation?

              I’d have thought the task of extracting schematic information from a structure was well within the realms of a regular tool, that the author could imbue with actual common sense through rules based on the content, rather than relying on a tool that (a) has no concept of common sense, only guessing which word sounds best next; and (b) habitually lies/hallucinates with confidence.

              I don’t want to tell you how to do your job but I really have to wonder about the mindset of tech people who so willing use such an objectively bad tool for the task just because it’s the new shiny.

              Weird flex but ok.

              1. 3

                I’d rather have the computer read that man page or documentation and then answer my question correctly based on that.

                Have you spent much time working with these tools? You may be surprised at how useful they can be once you learn how to use them effectively.

                1. 3

                  Did you miss this part of parent comment (emphasis mine)

                  It’s often wrong, but the right answer is usually a Google search away.

                  1. 3

                    You’re really doing yourself a disservice by depriving yourself of a useful tool based on knee-jerk emotional reactions. Why would you interpret that as the computer lying to you? It’s just a neural network, and it’s trying to help you based on the imperfect information it was able to retain during its training. Exactly as a human would be doing when they say “I’m not sure, but I think I remember seeing a --dont-fromboblugate-the-already-brobulgaded-fooglobs argument in a forum somewhere”. When you google that argument, it turns out it was --no-… instead of --dont-…, the official documentation doesn’t mention that obscure argument, and the only Google hit is a 12-year-old email that would take you weeks of reading random stuff to stumble upon.

                    1. 2

                      “I’m not sure, but I think I remember seeing a --dont-fromboblugate-the-already-brobulgaded-fooglobs argument in a forum somewhere”.

                      But that’s the point. The person doesn’t (unless they’re a psychopath) just hallucinate options out of thin air, and confidently tell you about it.

                      1. 3

                        I don’t know about you, but my own brain hallucinates about imaginary options and tells me about them confidently all the time, so I’m quite experienced in processing that sort of information. If it helps, you could try mentally prepending every ChatGPT response with “I have no idea what I’m talking about, but …”

                        BTW, I’m actually glad for the way the current generation of AI is woefully unaware of the correctness of its thoughts. This way it’s still a very useful assistant to a human expert, but it’s hopeless at doing anything autonomously. It’s an intellectual bulldozer.

              2. 1

                Sometimes it’s far easier to say “how should I do this thing? explain why, and give me an example in x” than to trawl documentation that isn’t indexable using only words I already know.

        2. 1

          Consumer protection is not a matter of costs vs benefits. If a product is unsafe or hazardous to consumers, then it ought to be regulated, even if consumers find it useful.

    12. 17

      I can attest to the advice in this article being extremely good, almost unreasonably so. Pretty much all of the best software I’ve written has been a ‘version 2’ after spending days/weeks/months exploring the design space and then throwing it all away to start again. A few personal examples of this:

      • Veloren’s first engine was our first attempt at writing a game engine in Rust. It superficially worked, but fell foul of many missteps and was plagued by instability, deadlocks, latency, and an abysmal concurrency model. After 9 months of development we ditched it and started from scratch. We took all of the lessons we learned writing the first one and we’ve never looked back. The new engine scales extremely well (better than almost every other voxel game out there, thanks to its highly parallel design built on top of an ECS and careful attention being paid to data access patterns), is easy to work on, is conceptually simpler, is much more versatile, and uses substantially fewer resources.

      • chumsky, my parser combinator library, had a relatively mundane and hacky design up until I decided to rewrite it from scratch about a year ago. I took everything I learned from the first implementation and fixed everything I could, including completely redesigning the recovery and error prioritisation system. It’s now much more powerful, can parse a far wider set of grammars, and is extremely fast (our JSON parser benchmark can often outpace hand-optimised JSON parsers).

      • Tao, my functional programming language (and compiler) went through several revisions that allowed me to explore the best way to design the various intermediate representations and the type solver. The type solver (which supports HM-style inference, generalised algebraic effects, generics, typeclasses, associated types, and much more) is without a shadow of a doubt the single most complex piece of software I’ve ever written, and writing it without it collapsing under its own complexity was only possible because I’d already taken several shots at implementing it and then consciously started afresh.

      I’d argue that consciously prototyping is not simply a nice-to-have, but an essential step in the development of any non-trivial software system and most systemic development failures have their origins in a lack of prototyping, leading to the development team simply not being aware of the shape of the problem space.

      1. 8

        I can’t help but point out that all of your examples are Rust projects. I tend to think that language choice has a big impact on the feasibility of incrementally improving the architecture.

        Back when I used to program mostly in Java and C++ (ages ago), I used to find it extremely hard to make incremental changes to the architecture of my programs. I think that was mostly due to these languages forcing me to bend over backwards. In the case of Java, it was due to how inflexible the language is and in the case of C++, it was because the concern of manual memory management permeated every design decision. The thing with bending over backwards is that you aren’t left with much room to bend any further and any foundational architectural change means mostly a complete rewrite. I suspect that Rust might be suffering from the same thing I experienced with C++.

        As a counterpoint, I’ve been finding it easy to evolve the program architecture with my shifting understanding since I started writing Haskell full time a few years ago. And that’s been the case even in a half-decade-old code base written by a combination of junior programmers and short term contractors.

        All of that said, no language can save you from backwards compatibility baggage. Your API, user-observable program semantics, and old user data lying around all accumulate this baggage over the years, and even Haskell programs grind to a halt trying to juggle all of that. The trouble is, even a total rewrite can’t save you from backwards compatibility…

        1. 8

          I don’t think this really has much to do with the language. I tend to write Rust in a heavily functional style anyway: there’s not much Rust I write that couldn’t be trivially transpiled to Haskell. When I talk about complexity and understanding the design space, I’m not talking about more trivial syntactic choices, or even choice of abstractions available within the language: I’m talking about the fundamental architecture of the program: which systems belong where, how data structures are manipulated and abstracted, how the data I care about is represented and partitioned so as to minimise the complexity of the program as it grows, what aspects of the problem space matter and how the program might evolve as it moves to cover more use-cases. Those are factors that are largely independent of the language, and even more so for Rust/Haskell which have extremely similar feature sets.

        2. 7

          Back when I used to program mostly in Java and C++ (ages ago), I used to find it extremely hard to make incremental changes to the architecture of my programs. I think that was mostly due to these languages forcing me to bend over backwards… I suspect that Rust might be suffering from the same thing I experienced with C++.

          As a counterpoint, I’ve been finding it easy to evolve the program architecture with my shifting understanding since I started writing Haskell full time a few years ago. And that’s been the case even in a half-decade-old code base written by a combination of junior programmers and short term contractors.

          I’ve found that a strong type system is paramount to evolving the program architecture. Haskell is one of the best examples, and Rust’s type system isn’t quite as powerful but near enough for most use-cases. A rewrite of the system, or a portion of it, with a type system that guides you is key to safely evolving the code. Having used Rust professionally for 4 years now, it is far closer to working with Haskell than C++ or Java in terms of incremental changes due to the type system.

          1. 3

            When using a strong type system, I imagine “evolving the program architecture” is simply “rewriting massive swathes of code to satisfy the type checker”. It’s essentially throwing out mostly everything minus some boilerplate.

            1. 2

              The type system is the scaffolding in strongly typed languages. It allows you to refactor in a safe way, because the type checks are thousands of tests that you don’t have to write (and are far more likely to be correct).

              I think a lot of the hesitation with strongly typed languages comes from the unfamiliarity with strong types (they’re a powerful tool), but

              simply “rewriting massive swathes of code to satisfy the type checker”. It’s essentially throwing out mostly everything minus some boilerplate.

              Couldn’t be further from the truth imo. Maybe for someone new, but with an experienced person on the team this wouldn’t happen.

              1. 1

                If you have to re-architect your strongly typed program it WILL cause you to rewrite LARGE portions of the program. I’m not sure what you’re talking about it being “further from the truth”. Are you assuming I have no extensive experience with type systems?

                1. 1

                  If you have to re-architect your strongly typed program it WILL cause you to rewrite LARGE portions of the program. I’m not sure what you’re talking about it being “further from the truth”.

                  As with many things, it depends on the context. Re-architecture which involves changing a core invariant relied upon by the entire codebase? Yes, that will probably require rewriting a lot of code. But this is the case for any language (strongly typed or not).

                  In my experience, dynamically typed codebases have to rely entirely on tests to provide the scaffolding for any refactor, which vary in completeness and can be buggy. Strongly typed codebases get to rely on the type system for the scaffolding, which when done right is the equivalent of thousands of tests that are far more likely to be correct (barring compiler errors). This is night and day when it comes to a large-scale refactor, as being able to lean heavily on the type system to guide can make the difference between a refactor which Just Works and one which has a few more bugs to iron out.
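                  The “type system as scaffolding” point can be shown with a toy Rust sketch (the `FetchResult` type and `describe` function are hypothetical, made up for illustration, not from any real codebase):

```rust
// Hypothetical example: an enum that a refactor might later extend.
#[derive(Debug, PartialEq)]
enum FetchResult {
    Ok(String),
    NotFound,
    // Suppose a refactor adds a `Timeout` variant here: every
    // exhaustive `match` on `FetchResult` stops compiling until the
    // new case is handled, so the compiler walks you through the
    // entire refactor, file by file.
}

fn describe(r: &FetchResult) -> &'static str {
    // Deliberately no wildcard `_` arm: a wildcard would silence the
    // compiler and throw away the scaffolding benefit.
    match r {
        FetchResult::Ok(_) => "success",
        FetchResult::NotFound => "missing",
    }
}

fn main() {
    assert_eq!(describe(&FetchResult::Ok("payload".to_string())), "success");
    assert_eq!(describe(&FetchResult::NotFound), "missing");
    println!("all checks passed");
}
```

                  In a dynamically typed language, the equivalent change only fails at runtime, and only on the code paths your tests happen to exercise.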

                  At the end of the day it all comes down to tradeoffs. Dynamic languages allow you to get away with hacky workarounds, whereas a strongly typed language might make that harder (e.g. require punching a hole in the types, which can require advanced knowledge of the type system). But I take issue with the blanket statement that strongly typed languages require a significant amount of rewrites for any refactor – that is completely opposite of the experience I’ve had (5 years of working with Rust professionally, and a mix of C++/Python/JS for years before that).

                  Are you assuming I have no extensive experience with type systems?

                  I did not assume that originally, and was speaking from my own experience. But given your language, at this point yes I do assume you don’t have extensive experience with type systems :)

                  1. 1

                    The point of my original comment is that no matter the language, you will essentially have to throw everything out.

                    It’s too bad you have to make assumptions; your arguments would have a bit more weight to them otherwise. Really you’re just writing walls of text that say “type system make refactor easy”. The topic is throwing out projects. A re-architecture is going to ripple type system changes everywhere, to the point that you should throw everything out.

                    1. 1

                      Haha I was just being cheeky, I don’t know you or your experience, and my arguments are standalone. We are both responding to a sub-thread about evolving the architecture, which I agree is tangential to the original article. But… if you’re arguing that the original article talks about throwing out everything from a PoC and that is the same for any language, then yes I of course agree with that. That is kind of the point.

                      But just want to note, that is not what we were originally talking about on this thread (I made a comment about evolving architecture being easier based on strong typing).

    13. 47

      “Where do you discuss computer related stuff now?”

      I usually don’t.

      Maybe it’s my age showing and maybe I’m just in an unusually dour mood, but I don’t really think discussion is what happens now in most places.

      In places like Twitter and Mastodon, we have lots of hottakes and shitty opinions by equally shitty people–and I’ve seen enough things posted to such great fanfare that are fundamentally wrong or midwit that I question if any cycles spent yield a dividend (minor exception for purely technical things like certain gamedev and graphics feeds).

      On Reddit and other fora, we have a heavy bias towards people who have the time to goof around on forums instead of, you know, doing things. Said goofing around is frequently enough inane one-upmanship or shilling for whatever their current tech stack is.

      Places like Discord and Twitch are content farms for lonely nerds looking to form parasocial relationships. At some point in the last decade it feels like “discussing technology” turned into just another marketing gimmick or a chance for people with slick production, some technical knowledge, and a desire for attention to peddle themselves and get their fifteen minutes.

      (I unironically suggest 4chan’s /g/ or lainchan, because then at least there is no pretense of quality.)

      I can’t even always count on work to be a place to have technical discussions, because the rejection of engineering in favor of product development on one side and the full-hearted embrace of imposter syndrome and acceptance of mediocrity on the other has put a squeeze on the very notion of technical excellence and expertise. What sort of weirdo derails a sprint pokering by talking about database sharding? What entitled single white dude has the gall to suggest people learn anything about complexity theory or automata in their free time, when it’s well known that literally any request that a worker spend time honing their craft is a massive blow against Labor and a chance to enrich Capital at the expense of underrepresented groups?

      (You laugh or jeer, but I’ve seen variants of both of these play out in real workspaces. This is a real thing that happens–and often with the best of intentions!)


      I think that technical discussion certainly still exists, but there’s just so much garbage and such an aggressive gentrification of the culture on the one side and exploitation on the other that anybody who does have their private little space should quite rightly seek to preserve it and not talk much about it.

      I think the culture–at least the one I grew up in, which is the familiar one to me and the one I miss, quite aside from whether it is objectively morally optimal–has been under active attack from both within and without, and that under such circumstances I despair for the sorts of discussions I used to learn from with the sorts of people I used to enjoy the company of.


      The things I’ve seen work best are people having a space to log/discuss their current problems or current projects, and then having a way to field questions or chat with that as a starting point. Otherwise, you become clogged with a bunch of marketing, dick-waving, shit-stirring, and navel-gazing.

      Don’t waste time “discussing” things with people that aren’t doing anything worth discussing, and don’t confuse volume or novelty for utility.

      1. 30

        Kinda disappointed to see this as the top comment. I generally like this site, but it does seem like there are many threads where a highly-upvoted comment saying “someone else is doing it wrong” sits at the top. (This does not seem to happen on Hacker News; there are different issues there.)

        That may or may not be true, but either way, writing a screed about it doesn’t really solve the problem … especially when the problem is the lack of technical discussions :-P

        FYI I clicked through to your comments, and what I overwhelmingly see is comments about people’s behavior. Not necessarily bad or wrong, but that’s what you seem pre-occupied with.

        I didn’t see any substantive technical comments.

        If you want to have a technical discussion, you can hide the comments you don’t like, and post what you do like … I think you tend to get back what you put out there. At the very least, it will help the site a bit

        1. 8

          writing a screed about it doesn’t really solve the problem

          I give what I consider actionable advice on how to solve the problem of technical discussion at the end: look for places where people who do things talk about the things they’re doing. If that isn’t useful to you, hey, that’s fine.

          The author asked a question, you seem unimpressed with my answer, here I reply with some minor elaboration, and nothing of substance is accomplished. This is the sort of discussion that led to my current position.

          I overwhelmingly see is comments about people’s behavior.

          I don’t see the same thing; I see comments on:

          • Reflecting on software not being hard physical labor and reminding teams of that as a manager.
          • Reflecting on it being okay to charge money for software.
          • Noting that a rust project with a single binary is friendly from an ops standpoint.
          • Being amused at the singularity being stopped by copyright enforcement.
          • Complaining about the misuse (based on my own experience) of feature flags in production apps.
          • Pointing out that old DOS games kinda shipped their own OS.
          • Making a joke about alternative uses for digital watches.

          Logging out, there are a few extra comments that show up:

          • Asking a question about how much people ran into Fediverse peering issues
          • Explaining why I flagged a story about an employment change.
          • Expressing a concern about the push for a Zig book this early.
          • Expressing that I think it’s not okay to remove submissions with popups.
          • (follow-on to the above) explaining my concern about the misuse of the precedent.
          • Explaining a tagging suggestion.

          If anything, I think that somewhat substantiates my claim that I don’t frequently discuss a lot of technical details in places like this anymore.

          I didn’t see any substantive technical comments.

          You might also enjoy my story submissions, and I’ll note that the lookback capability for user comments is, I believe, limited to one page, and my posting history is long.

          If you want to have a technical discussion, you can hide the comments you don’t like, and post what you do like

          That is one way to play the Lobsters MUD, yes.

          1. 7

            Right, this is my point

            Don’t waste time “discussing” things with people that aren’t doing anything worth discussing

            my claim that I don’t frequently discuss a lot of technical details in places like this anymore.

            What are you doing / building that’s worth discussing? Honest question – I don’t know. Many people have a link in their profiles, or I can tell from their past comments

            i.e. you seem to have set up a self-fulfilling prophecy. There is A LOT of technical discussion on this site

            1. 6

              Not the author, but was there a claim of something discussion-worthy being built?

              Also how come you focus mostly on someone writing a post rather than the content of the post? I ask because at least back in Usenet times that was considered impolite.

              Sure when someone always likes to troll people they’ll be ignored after some time, but I don’t see that here.

        2. 5

          GP has a history of this sort of rhetoric; i’d go on, but u/aphyr said it much better than i could a few years back.

          i really enjoy this site, but it’s disappointing to log on and see a thread like this dominated by discussion that defeats the purpose of the thread in the first place.

          i’m also disappointed in myself that this is what i’m choosing to contribute, but i have a similar desire as the OP’s and i was hoping that the comments would be a place where we could all talk about what works and what doesn’t in this context!

          1. 4

            Yeah, I think there are a bunch of people like aphyr – people who have done interesting things in the systems programming space, but stay away because of the bad attitudes.

            I don’t actually mind one comment like that – there will always be a few differing opinions – but my issue is when it sits at the top of the thread, and invites a pile-on of negativity. It’s just not interesting. It’s boring.

            Personally I have found a few great and prolific contributors to Oils through this site, so I see value in staying. Although they don’t seem to post much! There is often an inverse correlation between the people doing the talking and the people doing things.

      2. 19

        4chan’s /g/ and/or lainchan are the best forums to visit if you’re a teenager who wants to massively stunt your learning in favour of wasting time on the internet. What a misuse of my life.

        1. 9

          Okay, perhaps…but have you installed Gentoo, LFS, Arch, done anything with LLMs or Stable Diffusion, messed around with plan 9, or any of those other things that show up in threads?

          There’s a lot of trash, and some occasional neat stuff–but I find it more earnest in its buffoonery than other places. If it’s not to your liking, that’s cool too.

          1. 4

            I’ve never hung out on 4chan, yet I have heard about all those things. In fact, I’m pretty sure classic Gentoo-bashing site “Gentoo is for Ric*rs” predates 4chan. Plan9 certainly does.

            1. 2

              Sure, my point is that a lot of people’s first introduction to those topics was probably through /g/. The saga of the great attempt at revisiting Plan 9 some years ago by /g/ is a whoooole thing.

      3. 10

        OP: Hey guys, where to go for interesting discussions?

        Most upvoted comment: Nowhere. Crowd: Best answer!

        I kind of sometimes wish that social hubs like this would hide the username and hide the upvote counter. Because sometimes I have a feeling that comments are upvoted because of who puts them, and because others upvoted it, not because of what’s inside it.

        1. 2

          Oh wow, amazing how I hadn’t noticed that GP’s comment was only a single word. Here I was thinking that the comment had struck a nerve about the cultural decay around my favourite craft and that I enjoyed the reflection on the state of various communities, but what do I know, I’m just an idiot that upvotes whatever is the top comment, not a genius like you that can read other people’s minds.

        2. 1

          I’d like to believe there’s a bit more depth to my comment than you’re giving credit for, but I will thank you for providing an example of why I don’t often do technical discussion in public.

          Consider: If somebody like yourself can so easily and willfully misread and misrepresent a relatively straightforward handful of paragraphs with little outside context, what are the odds of conducting a useful conversation on an involved technical topic that has nuance and requires experience?

      4. 8

        While I agree with a lot of what you said, I definitely disagree with your take on Discord. I’ve found, for example, the Nim discord server to be full of helpful people discussing all kinds of programming-related things, especially the gamedev/graphics people. Ditto for the r/EmuDev and Zig Discord servers.

        I’ve certainly come across my fair share of shitty Discord servers though.

        1. 11

          Discord isn’t a specific place in and of itself; it’s a (non-free, proprietary, and centrally-managed) platform upon which other communities build spaces for chatting, and the individual quality of all of those spaces is what is actually meaningful. It makes as much sense to criticize the discussion quality of Discord as a whole as it does to criticize IRC as a whole, or Facebook messenger as a whole.

        2. 4

          I think that’s fair; I have a bias against technical communities setting up shop in walled gardens they don’t control.

          1. 1

            While I understand your point, and agree that communities shouldn’t rely on proprietary platforms, this way of phrasing the problem is poor. Practically no community relies fully on infrastructure they “control”. Whether that’s the public IRC server or the internet connections individuals are using.

        3. 1

          The strictly linear model makes it impossible to follow discussions. I definitely prefer the Nim forum.

          1. 1
            1. 1

              That’s useless when people aren’t actually using them to group discussions. I just checked that there aren’t any threads in Nim’s #main.

      5. 4

        The things I’ve seen work best are people having a space to log/discuss their current problems or current projects, and then having a way to field questions or chat with that as a starting point.

        This is an area where I believe Digital Gardens [0] can actually generate great discussion if their platform allows for two way communication.


      6. 1

        literally any request that a worker spend time honing their craft is a massive blow against Labor and a chance to enrich Capital at the expense of underrepresented groups?

        Gold, Jerry! Gold!

    14. 8

      While this certainly fits with my experience, what about people who don’t get joy from programming, don’t want to learn new stuff, don’t find the puzzle fun?

      1. 15

        Maybe they are in the wrong industry; I don’t think anyone is happy at a job that sucks the fun out of life.

        1. 18

          I’m not sure people are in general expected to be happy in their job? So long as it pays the bills?

          1. 6

            I find it hard to imagine a professional football player who doesn’t, or at least didn’t for a substantial amount of time in the past, like playing football. I also can’t imagine the same for an influential physics professor. I’m willing to believe that not all jobs are equal in this sense. I have a burning passion for programming and I still have to push myself hard in order to endure the mental pain of trying to align a hundred stars and solve difficult programming challenges. I can’t imagine how one could motivate oneself to suffer that 8 hours a day without feeling the kind of joy that comes with finding the solution.

            It’s hard to describe this to non-programmers, but I believe I have the right audience here. Programming is a very stressful job. Not stressful like a surgeon or a stock broker who get stressed due to what’s at stake, but stressful because you have to push yourself to your limits to turn your brain into a domain specific problem solving apparatus and find the solution.

            BTW, I know that there are a lot of programming jobs out there which don’t resemble what I’m describing here at all, but there are jobs like this too; we just don’t have a different name for them.

            1. 2

              I have a burning passion for programming and I still have to push myself hard in order to endure the mental pain of trying to align a hundred stars and solve difficult programming challenges.

              There is so much programming out there where you do some boring crud service on some db or where you assemble 4 different json blobs in a different format and pass it to the next microservice or cloud endpoint. That’s not truly exciting or challenging.

              1. 1

                I know that and I respect those jobs and programmers, but as I’ve mentioned, some programming jobs require constant puzzle solving and creativity. I think my comment would be more agreeable if I said “compiler engineer” or “game AI engineer” or “database engineer”, but I don’t know of any term that can be used for those jobs collectively. Maybe we need a term like “R&D programmer”, or maybe I should just have said “R&D engineer” and decoupled my point from programming per se.

          2. 5

            I think most people strive to be happy in their jobs, but yes, the main factor for having one is to not starve or be homeless.

            1. 8

              I’ve seen clock-in clock-out devs who didn’t give a shit about anything they did. Took no joy nor pride in their work. They were government contractors and so they did the absolute least possible (and least quality) that the gov asked for and would accept, and no more. They didn’t seem to care about what they got personally out of their jobs, they seemed to think it was normal. Drove me nuts, quit the company in 6 months.

              1. 2

                I had the exact same experience with some additional slogging through warehouses (cutting cardboard; I wish I were joking) and testing security hardware while waiting for a security clearance shortly after OPM got hacked (~6 months to get the clearance). Then to finally be surrounded by people warming their chairs, I couldn’t stand it. I understand the need to have stability in your job but pride is also important, at least to me.

        2. 8

          It depends on why you do it. Let’s not forget that programming is a very well paid profession. Maybe you use the good salary to finance the lifestyle you want to have (buy a house/apartment, have kids, maybe expensive hobbies). I can certainly imagine a more fun place to work than my current job, but the pay is very good. Therefore I stay, because it enables my family and me to have the life we want.

          1. 3

            Thanks, those are very interesting points. Indeed, I think there are a lot of reasons to take a job besides fun, and that is very respectable. On the other hand, I would argue that people who have fun doing it stand a better chance of performing well and improving their skills in the long run.

            1. 4

              That is interestingly quite controversial in the research and we have solid data pointing to both.

              Note also that not having fun does not equate to sucking your soul out of you.

              Being meh about a job is ok. That is the case of nearly everyone.

              1. 1

                Thanks so much for the reply! It would be so nice if you could point me to some of this research!

    15. 13

      It’s a little ironic the title is about shooting yourself in the foot, which is something most of us have learned to avoid WITHOUT actually experiencing it.

      1. 12

        I think it works quite well as an analogy though, because one of the ways that we’ve learned not to shoot ourselves in the foot is by dropping things on our feet as children. We know that this hurts and so we extrapolate that a bullet would hurt more, but we have a visceral immediate reaction to the thing that will cause pain. Similarly, most of the things in this article aren’t real foot-shooting (they weren’t in production, they were quickly fixed) but they were toe-stubbing events that led the author to being more careful of his feet in the future.

      2. 5

        I agree with your sentiment in general. I also find it idiotic when I read people write things like “We’ve done X, but it resulted in catastrophe Y, but we’ve learned from the experience and are now better for it” where, for any half-sane person, Y is an obvious outcome of X and nobody in their right mind would give it a try to begin with. However, IMO the points mentioned in this article don’t fall under this category. The things the author resisted at first are truly complicated and I agree that people shouldn’t accept such things as ground truth until convinced otherwise by strong evidence.

      3. 3

        I’ve dropped plenty of heavy objects on it though!

    16. 9

      Write a service stub: test against a stub object that internally implements the semantics of the service.

      Isn’t this… a mock?

      1. 5

        I feel like there’s probably one of those tedious arguments about super-picky distinctions between “mocks” and “stubs” and “test doubles” here, similar to the arguments about “unit” versus “integration” versus “end-to-end” tests, that’s unlikely to produce any satisfactory result. So I’d just call it a mock :)

      2. 2

        I think what’s meant to be avoided here is the kind of mocking where you “inject” case by case behaviour to the mock right there in the test case. Like, manually adding a user to the mock response of an insert endpoint. This is distinct from somebody implementing the semantics of an external service as an independent project without any direct reference to a test suite, maybe except for defining the scope.
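        To make that distinction concrete, here is a minimal sketch in Python; the service and its method names are made up for illustration:

```python
# A per-test mock: behaviour is injected case by case, right in the test.
class MockUserService:
    def __init__(self, canned_response):
        self.canned_response = canned_response

    def get_user(self, user_id):
        # Same canned answer regardless of what happened before.
        return self.canned_response


# A service stub: internally implements the semantics of the service,
# so an insert actually affects later reads.
class StubUserService:
    def __init__(self):
        self._users = {}

    def insert_user(self, user_id, name):
        self._users[user_id] = {"id": user_id, "name": name}

    def get_user(self, user_id):
        return self._users.get(user_id)


stub = StubUserService()
stub.insert_user(1, "alice")
assert stub.get_user(1)["name"] == "alice"  # consistent with the insert
assert stub.get_user(2) is None             # real semantics, not canned data
```

        The point is that the stub’s answers stay consistent across the whole test, because they fall out of one shared implementation of the service’s semantics rather than per-test wiring.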

    17. 16

      No longer worried about a job! Got one thanks to a friend who is also on this site (Thank you!)

      Don’t start for a few weeks now, so I have to decide how to spend my now-vacation time! Should be fun and relaxing.

      Probably work a bit on a small-community “D&D Social” webapp i’m working on.

      1. 3

        Congratulations about the job!

        1. 1

          Thanks! I’m pretty pleased to have moved on from the last place. It was pretty toxic.

      2. 2

        Enjoy those few weeks, that’s by far the greatest kind of vacation you can get, because not only do you not have to work, you also get to enjoy a fully free mind that doesn’t have to worry about the tasks that wait for you afterwards.

    18. 68

      * for some stuff[1]

      i am the creator of htmx & obviously glad to see it get some attention, but I also don’t want to overpromise what it can achieve. I think it makes a lot more possible within the hypermedia paradigm of the web but, of course, there are times when it’s the right choice and times when it isn’t.

      i have been working on a free book on hypermedia-based systems with a few other authors here:

      [1] -

      1. 2

        I don’t get why htmx allows HTTP verbs other than GET and POST. From my point of view it adds a layer of complexity without real benefit.

        1. 13

          Are you saying DELETE and PUT should be replaced by POST? The idempotence property of these operations would be lost if you did that.

          1. 6

            Yes. Other verbs are a waste of effort. They don’t benefit anything and it adds another design decision you don’t need to make.

          2. 1

            I would say that in 99% (or more) of existing HTTP requests, DELETE and PUT are replaced by POST, and using something different is likely to break something, for little benefit.

            For example, if you make two DELETEs, which should be idempotent, and your backend treats them in a non-idempotent way, your app suddenly has a bug which can be hard to reproduce.

            1. 17

              Not sure I get your point. If your backend is treating deletes as not idempotent, you’re already wrong. And deletes in particular seem quite easy to make idempotent: just check whether the thing has already been deleted before deleting.
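              A sketch of what that looks like, as a hypothetical framework-free handler (names and status codes are illustrative; some APIs prefer 404 for the second call, but the state transition is idempotent either way):

```python
# Idempotent delete: a second DELETE for the same id is not an error;
# the end state (row gone) and the outcome are the same either way.
comments = {33: "first!"}

def delete_comment(comment_id):
    comments.pop(comment_id, None)  # deleting an absent row is a no-op
    return 204

assert delete_comment(33) == 204
assert delete_comment(33) == 204  # repeating the request changes nothing
assert 33 not in comments
```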

        2. 11

          Being limited to GET and POST was the biggest mistake forms ever made, and it held the web back for years with shitty hacks like ?_method=DELETE

          1. 4

            What does it mean to delete something? Should I use DELETE if it’s a soft delete? What if it can be undone for three hours? What if it deletes one thing but adds an audit record somewhere else?

            DELETE adds nothing. The point is “can an intermediary cache this?” If yes then use GET. If not, POST.

            1. 2

              But then how are you going to indicate a delete action? Just make up something to stuff in the post body? How is that any better?

              1. 1

                URL paths are function names. The arguments to a GET function are query parameters. The arguments to POST functions are a JSON body (or form fields if you’re doing AJAX). You make up requests and responses that fit the domain instead of assuming everything is a resource with the same verbs. I’m also against putting resource IDs into URLs for APIs (you can do it for stuff end users see to make the URLs pretty, but not for APIs).

                1. 10

                  URL paths are function names.

                  What does the R stand for

                  1. 11

                    clearly the R stands for RPC ;)

                  2. 2

                    Right tool for the job.

                2. 6

                  You make up requests and responses that fit the domain instead of assuming everything is a resource with the same verbs.

                  Aren’t you just stating that you prefer RPC, without actually engaging with the argument for hypermedia?

                  The argument is that “Uniform Interface Constraint” (resources with a standard set of methods) allows you to make general clients that can interact with your resources with no out of band information (like API docs), a la web browsers + html.

                  Admittedly, what you describe is, in fact, how the majority of most APIs today work, and there is an interesting discussion about why that is if hypermedia supposedly has so many advantages.

                  I think your argument would be more interesting if you stated why you think that goal doesn’t have value, especially given that the web itself works that way, that it solves problems like API versioning, and so forth.

                  I’m also against putting resource IDs into URLs for APIs

                  Why do you not like this specifically, out of curiosity? What do you prefer?

                  1. 1

                    Second question first: resource IDs belong in query parameters. HTTP has many overlapping ways of sending data from client to server and server to client. The client can send information as a method verb, a URL path, a query parameter, a header, or a request body. There has to be some system to organize the arbitrary choices. The system is:

                    • Verb is for read vs write.
                    • URL path is the function to call.
                    • Query parameters are the arguments to reads. Bodies are the arguments to writes.
                    • Header is for authentication.

                    IDs are prettier in the URL path, but for an API, that doesn’t matter. You just need a convention that is easy to follow.
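                    Under that convention, a route table might look roughly like this (endpoint and function names are made up for illustration):

```python
# Hypothetical route table in the "URL path is the function name" style:
# reads take query parameters, writes take a JSON body.
COMMENTS = [{"id": 33, "author_id": 7, "text": "hi"}]

def find_comments(author_id):      # GET /find-comments?author_id=7
    return [c for c in COMMENTS if c["author_id"] == author_id]

def delete_comment(comment_id):    # POST /delete-comment {"comment_id": 33}
    COMMENTS[:] = [c for c in COMMENTS if c["id"] != comment_id]

ROUTES = {
    ("GET", "/find-comments"): find_comments,
    ("POST", "/delete-comment"): delete_comment,
}

ROUTES[("POST", "/delete-comment")](comment_id=33)
assert COMMENTS == []
```

                    The resource-oriented spelling of the same write would be DELETE /comments/33; the two styles are mechanically interchangeable, which is part of why this debate never resolves.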

                    As for Hypermedia, I just care that the website is good for users and developers. Whatever the theory is behind it isn’t very important. It’s great that web browsers are universal tools, but that’s true for JSON APIs too, so “hypermedia-ness” only matters if it makes development easier or harder. I think probably HTMX is easier for most web apps, so that’s why I like it. Even then, I’m more partial to Alpine.JS because I feel like the core motivation for a lot of the HTMX partisans is just wanting to avoid learning JS.

                    1. 2

                      Thanks for explaining.

                      I feel like the core motivation for a lot of the HTMX partisans is just wanting to avoid learning JS.

                      I know this is a selling point, but fwiw I have a lot of JS experience, consider myself good with it, have used React and other similar frameworks, but still like the simplicity of the old MPA model – htmx being for me just an upgrade. Having just the single data model of the resources on the server to think about it.

                      The system is:….

                      Agree with your point about “overlapping ways of sending data from client to server,” and also agree that having any clear system is more important than “the best system,” if there is one. I guess I’m not seeing why the classic system of “url path to describe resource”, “http verb to describe action” is not good enough… It can be stilted and noun-y, yes, but imo that’s not enough reason to throw away a perfectly good existing “system,” as it were.

                      1. 1

                        I don’t like the classic “everything is a noun” system because it ends up needing a lot of requests to get anything done, and requests are the slowest thing you can do with a computer.

                        I’m working with the MailChimp V3 API this week, and to send a campaign takes three requests: 1. Create campaign 2. Set content 3. Send. In MailChimp API V2, it’s a single request, but they’re finally shutting it down at the end of the month, so I’m being forced to make the change. Because it’s three requests instead of one, I probably need to put it into a queue now because it won’t be reliably fast enough to get done in one request for my users, so now I have to deal with all the complications of asynchrony, and for what? The old API was better.

                        1. 1

                          Seems like a reasonable complaint, but orthogonal to the naming/noun-ness of the system. They could continue to support the version you prefer with something like POST “” or “outbox” or whatever. That is, from a pure naming perspective, you can always transform back and forth between the two systems.

                          To your point, classic REST typically has aggregate endpoints return a list of ids, and then you have to make requests (possibly parallel) to each id resource to get the details, which is cumbersome. But nothing forces you to follow that if it doesn’t suit your use-case.

                3. 3

                  I call this “BrowserTP” since it’s a squished down version of HTTP based on what browsers supported in the early 00’s.

                  I would say thusly: HTTP is an RPC protocol where the method name (or function or procedure name if you wish) is the method (sometimes called the verb). The URL path is the object that you are calling the method on. The arguments are query string + body as you say.

                  And sure, I can say “please create a deletion for the element in this collection with id 1”; it’s not wrong per se, but why wouldn’t I just say “please delete element 1”?

        3. 7

          because those are part of the HTTP spec and have reasonable meanings useful for implementing resource-oriented url schemes:

          DELETE /comments/33

          vs.

          POST /comments/33/delete

        4. 2

          I agree that they aren’t really useful, but it takes very little effort for the author to add them and a lot of people want them.

          Aside from the obviously minor difference of sending POST /thing/delete vs DELETE /thing, other HTTP verbs can introduce additional overhead with CORS:

          Additionally, for HTTP request methods that can cause side-effects on server data (in particular, HTTP methods other than GET, or POST with certain MIME types), the specification mandates that browsers “preflight” the request, soliciting supported methods from the server with the HTTP OPTIONS request method […]

          - MDN: Cross-Origin Resource Sharing (CORS)

          1. 3

            I was going to argue the opposite with approximately the very same line quoted. If you and your webapp rely on the browser for some of the protection against some level of cross-site request forgery attacks, you can use the verbs as they were intended and rely on the browser to enforce them to be usable according to CORS rules.

            Guess why the spec lists the X-HTTP-Method-Override header (and friends) among the forbidden headers in CORS? Lots of vulnerable web pages, that’s why :(

            1. 1

              CORS still works with GET and POST, it just doesn’t require multiple requests.

        5. 2

          I disagree, they actually simplify things by disentangling POST from update and deletion actions.

          Edit: in other words, they add a couple terms to our vocabulary…I don’t see how they add an entire layer.

          1. 1

            “Layer” is maybe the wrong term (I’m French).

            What I want to say is that it adds complexity: if you have a reverse proxy, it has to support the new verbs; if you have logs, they should be aware of these verbs; and if something in your infrastructure used to ignore DELETE and suddenly supports it, it may suddenly delete unwanted things.

    19. 4

      Be wary that while the article contains some interesting technical details, it doesn’t suggest any mitigation strategies other than using their commercial product.

      1. 3

        TBF the article is an identification and explanation of the issue and already 3,500 words; the only “mitigation” it mentions (more or less as an aside) is keeping track of when Postgres screws itself up.

        However it does conclude with a “to be continued” for the next instalment of the series.

        And awareness of issues is an important tool in an ops’ belt.

      2. 2

        Yeah, there’s something really eerie about the article. In a way, it makes me think that it betrays its reader. Let me try to put it to words: The feeling I get is that the article wasn’t written to inform the reader, it was written to scare the readers and lead them towards an intended conclusion for commercial benefit. In that sense I think it’s manipulative and exploitative.

        To be honest, I want to think that everything written in this article is obvious to anyone who’s using Postgres seriously in production. We know that these are the weak spots of Postgres’s MVCC implementation (as trade-offs for properties we desire), so you design your database interaction to avoid them. It’s like saying human legs are the worst because if you kick them from this precise direction you can break their knees.

        1. 2

          Accidental duplicate comment, by the way.

          1. 1

            Oops thanks.

    20. 2

      So so true. I spent about 10 years of my life trying to find something better than C for programming stuff that needed strong control over memory. You can do it in C#, Lisp, etc but it requires incredibly detailed knowledge of the implementation.

      1. 5

        C++? The adoption/learning curve is so shallow — for instance you can keep writing C code but just use (one or more of) std::string, std::vector and std::unique_ptr, and most of your memory management code and its bugs go away. And of course use new/delete/malloc/free if you really need to.

        1. 9

          After writing an OS in C++, I really hate having to go back to C. I have to write far more code in C, and (worse) I have to think about the same things all of the time. Even just having RAII saves me a huge amount of effort (smart pointers are a big part, but so is having locks released at the end of a scope). For systems code, data structure choice is critical and C++ makes it so much easier to prototype with one thing, profile usage patterns, and then replace it with something else later.

          1. 1

            Would you think it would be helpful to have a C w/ Lisp-style metaprogramming to implement higher-level constructs that compile down to C? And then you use what you’re willing to pay for or put up with?

            One that got into semi-usable form was ZL Language which hints at many possibilities in C, C++, and Lisp/Scheme.

            Since you wanted destructors, I also found smart pointers for C whose quality I couldn’t evaluate because I don’t program in C. It looked readable at least. There’s been many implementations of OOP patterns in C, too. I don’t have a list of them but many are on StackOverflow.

            1. 5

              Or you could just use a language that’s widely supported by multiple compilers and has these features. I implemented a bunch of these things in C, but there were always corner cases where they didn’t work, or where they required compiler-specific extensions that made them hard to port. Eventually I realised I was just implementing a bad version of C++.

              In the example that you link to, for example, it uses the same attribute that I’ve used for RAII locks and for lexically scoped buffers. From the perspective of gcc, these are just pointers. If you assign the value to another pointer-type variable, it will not raise an error and will give you a dangling pointer. Without the ability to overload assignment (and, ideally, move), you can’t implement these things robustly.

              C++ metaprogramming got a lot better with constexpr and the ability to use structural types as template arguments. The only thing that it lacks that I want is the ability to generate top-level declarations with user-defined names from templates.

              1. 1

                The only thing that it lacks that I want is the ability to generate top-level declarations with user-defined names from templates.

                Can you elaborate on this? I wonder if it’s related to a problem I have at work.

                We have a lot of std::variant-like types, like using BoolExpr = std::variant<Atom, Conjunction, Disjunction>;. But the concise name BoolExpr is only an alias: the actual symbol names use the full std::variant<Atom, Conjunction, Disjunction>. Some of these variants have dozens of cases, so any related function/method names get reaallly long!

                I think I would want a language feature like “the true name of std::variant<Atom, Conjunction, Disjunction> is BoolExpr”. Maybe this would be related to explicit template instantiation: you could declare this in bool_expr.h and it would be an error to instantiate std::variant<Atom, Conjunction, Disjunction> anywhere else.

                1. 2

                  The main thing for me is exposing things to C. I can use X macros to create a load of variants of a function that use a name and a type in their instantiations, but I can’t do that with templates alone. Similarly, I can create explicit template instantiations in a file (so that they can be extern in the header) individually, but I can’t write a template that declares extern templates for a given template over a set of types and another that generates the code for them in my module.

                  The reflection working group has a bunch of proposals to address these things and I’ve been expecting them to make it into the next standard since C++17 was released. Maybe C++26…

              2. 1

                My motivation was this. At one point, I was also considering embedded targets which only support assembly and C variants.

                What do you think of my Brute-Force Assurance concept that reuses rare, expensive investments in tooling across languages?

                1. 2

                  I think platforms without C++ support are dying out. Adding a back end for your target to LLVM is cheaper than writing a C compiler and so there’s little incentive to not support C++ (and Rust). The BFA model might work, but I’d have to see the quality of the code that it generated. Often these tools end up triggering UB, which is a problem, or leave the kind of microoptimisations that are critical to embedded systems out and impossible to add in the generated code.

                  1. 1

                    Makes sense. Fortunately, there is more work happening for LLVM targets. Thanks for the review!

        2. 3

          a) this started in 2002 when half that stuff didn’t exist, and

          b) C++ is a hateful morass of bullshit and misdesign, and that won’t change until they start removing things instead of adding it.

          Yes, I am biased. Not going to change though.

          1. 1

            Pretty sure at least string and vector existed in 2002; not unique_ptr but you can implement that yourself in 10 minutes.

            1. 5

              You couldn’t implement unique_ptr in 2002 with the semantics that it has today. unique_ptr requires language support for move semantics in order to give you that uniqueness promise automatically and move semantics came to C++ in 2011.

      2. 3

        but it requires incredibly detailed knowledge of the implementation.

        Not just that - you actively need to work around the problems and limitations of the runtime. E.g. when garbage collection bogs your application down, you need to start creating object pools. Hence, you end up manually managing memory again - precisely the thing you tried to avoid in the first place. Many runtimes do not let you run the garbage collector manually or specify fine-grained garbage collection settings. In addition to that, an update to the runtime (which you often do not control because it’s just the runtime that is installed on the user’s machine) can ruin all your memory optimizations again and send you back to square one, which is a heavy maintenance burden. It just doesn’t make any sense to use these languages for anything that requires fine-grained control over the execution. Frankly, it doesn’t make any sense to use these languages at all if you know C++ or Rust, unless the platform forces you to use them (like the web pretty much forces you to use JavaScript if you want to write code that is compatible with most browsers).
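        The object-pool workaround mentioned above can be sketched like this; a toy Python illustration of the idea, not any particular runtime’s API:

```python
# Minimal object pool: reuse instances instead of allocating per frame,
# trading GC pressure for manual lifetime bookkeeping.
class Particle:
    def __init__(self):
        self.x = self.y = 0.0
        self.alive = False

class Pool:
    def __init__(self, size):
        self._free = [Particle() for _ in range(size)]  # allocate up front

    def acquire(self):
        p = self._free.pop()   # no allocation on the hot path
        p.alive = True
        return p

    def release(self, p):
        p.alive = False
        self._free.append(p)   # returned for reuse, never collected

pool = Pool(64)
p = pool.acquire()
pool.release(p)
assert pool.acquire() is p  # the same object comes back: reuse, not GC
```

        Note that this is exactly the manual lifetime management the GC was supposed to spare you: forget to release, or release twice, and you have a leak or an aliasing bug again.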

        1. 2

          It’s been a long time (over a decade) since I had to deal with GC problems being noticeable at an application level. A lot of these problems disappeared on their own, as computers became faster and GC algorithms moved from the drawing boards into the data center. (I was working on software for trading systems, algo trading, stock market engines, operational data stores, and large scale distributed systems. Mostly written in Java.)

          In the late 90s, GC pauses were horrendous. By the late aughts, GC pauses were mostly manageable, and we had had enough time to work around the worst causes of them. Nowadays, pauseless GC algorithms are becoming the norm.

          I still work in C and C++ when necessary, just like I still repair junk around the house by hand. It’s possible. Sure, it would be far cheaper and faster to order something brand new from China, but there’s a certain joy in wasting a weekend trying to do what should be a 5 minute fix. Similarly, it’s sometimes interesting to spend 40 person years (e.g. a team of 10 over a 4 year period) on a software project in C++ that would take a team of 5 people maybe 3 months to do in Go. Of course, there are still a handful of projects that actually need to be built in C or C++ (or Rust or Zig, I guess), but that is such a tiny portion of the software industry at this point, and those people already know who they are and why they have to do what they have to do.

          You said “It just doesn’t make any sense to use these languages for anything that requires fine-grained control over the execution.” But how many applications still require that level of fine grained control?

          1. 4

            For literally decades, people have been saying that GC has now improved so much that it’s become unnoticeable, and every single time I return to try it, I encounter uncontrollable, erratic runtime behavior and poor performance. Unless you write some quick and dirty toy program to plot 100 points, you will notice it one way or another. Try writing a game in JavaScript - you still have to do object pooling. Or look at Minecraft - the amount of memory the JVM allocates and then frees during garbage collection is crazy. Show me a garbage collector and I’ll show you a nasty corner case where it breaks down.

            Similarly, it’s sometimes interesting to spend 40 person years (e.g. a team of 10 over a 4 year period) on a software project in C++ that would take a team of 5 people maybe 3 months to do in Go.

            Okay, I’m not a big C++ fan but this is obviously flamebait. Not even gonna comment on it further.

            But how many applications still require that level of fine grained control?

            A lot. Embedded software, operating systems, realtime buses, audio and video applications… Frankly, I have a hard time coming up with something I worked on that doesn’t require it. Not to mention, even if the application doesn’t strictly require it, a GC is still intrinsically wasteful, making the software run worse, especially on weaker machines. And even if we say performance doesn’t matter, using languages with GC encourages bad and convoluted design and incoherent lifetime management. So, no matter how you look at it, GC is a bad deal.

            1. 3

              Okay, I’m not a big C++ fan but this is obviously flamebait. Not even gonna comment on it further.

              I managed a large engineering organization at BigTechCoInc for a number of years, and kept track (as closely as possible) of technical projects and what languages they used and what results they had. Among other languages we used in quantity: C, C++, Java, C#. (Other languages too including both Python and JS on the back end, but not enough to draw any clear conclusions.) The cost per delivered function point was super high in C++ compared to everything else (including C). C tended to be cheaper than C++ because it seemed to be used mostly for smaller projects, or (I believe) on more mature code bases making incremental changes; I think if we tried building something new and huge in C, it may have been as expensive as the C++ projects, but that never happened. Java and C# are very similar languages, and had very similar cost levels, much lower than C or C++, and while I didn’t run any Go projects, I have heard from peers that Go costs significantly less than Java for development (but I don’t know about long term maintenance costs). One project I managed was implemented nearly simultaneously in C++, C#, and Java, which was quite illuminating. I also compared notes with peers at Amazon, Facebook, Google, Twitter, Microsoft, eBay, NYSE (etc.), and lots of different financial services firms, and their anecdotal results were all reasonably similar to mine. The two largest code bases for us were Java and C++, and the cost with C++ was an order of magnitude greater than Java.

              Embedded software, operating systems, realtime buses, audio and video applications

              Sure. Like I said: “Of course, there are still a handful of projects that actually need to be built in C or C++ (or Rust or Zig, I guess), but that is such a tiny portion of the software industry at this point, and those people already know who they are and why they have to do what they have to do.”

              Or look at Minecraft - the amount of memory the JVM allocates and then frees during garbage collection is crazy.

              This is absolutely true. The fact that Java works at all is a freaking miracle. The fact that it manages not to fall over with sustained allocation rates of gigabytes per second (mostly all tiny objects, too!) is amazing. That Minecraft succeeded is a bit shocking in retrospect.

              1. 1

                Very interesting. Do you have more fine-grained knowledge about the cost per delivered function point with respect to C++? Is the additional cost caused by debugging crashes, memory leaks, etc.? Is it caused by additional training and learning or tooling and build systems? Does the usage of modern C++ idioms make a difference? Or does everything simply take longer, death by a thousand cuts?

                1. 4

                  Some more data points occurred to me. I was thinking about an old presentation I did at a few different conferences on the topic, e.g.

                  Specifically, looking at areas that Java was able to leverage:

                  • gc (enabled cross component memory management without RAII)
                  • simpler builds
                  • elimination of header file complexity
                  • binary standard for build outputs
                  • dynamic linking as a concept well-supported by the language
                  • good portability
                  • more rigid type system defined
                  • reflection (enabling more powerful libraries)
                  • elimination of pointers, buffer over-runs, etc.

                  My thinking has evolved in the subsequent decade, but there are some key things in that list that really show the pain points in C++, specifically around the difficulty of re-using libraries and components. But the other thing that’s important to keep in mind is that the form of applications has changed dramatically over time: An app used to be a file (.bin .com .exe whatever). Then it was a small set of files (some .so or .dll files and some data files in addition to the executable). And at some point, the libraries went from being 1% of the app to 99% of the app.

                  Just like Java/C# ate C++’s lunch in the “Internet application” era, some “newer” platforms (the modern browser plus the phone OSs) show how ill-equipped Java/C# are, although I think that stuff like React and Node (JS) are just interim steps (impressive but poorly thought out) toward an inevitable shift in how we think about applications.

                  Anyhow, it’s a very interesting topic, and I wish I had more time to devote to thinking about this kind of topic than just doing the day job, but that’s life.

                2. 3

                  I’m going to go into opinion / editorial mode now, so please discount accordingly.

                  • C++ isn’t one language. It’s lots of different languages under one umbrella name. While it’s super powerful, and can do literally everything, that lack of “one true way to do everything” really seems to hurt it in larger teams, because within a large team, no two subgroups end up using exactly the same language.

                  • C++ libraries are nowhere near as mature (either in the libraries themselves, or in the ease of using them randomly in a project) as in other languages. It’s very common in other languages to drag in different libraries arbitrarily as necessary, and you don’t generally have to worry about them conflicting somehow (even though I guess they might occasionally conflict). In C++, you generally get burnt so badly by trying to use any library other than boost that you never try again. So then you end up having to build everything from scratch, on every project.

                  • Tooling (including builds) is often much slower and quite complicated to get right, particularly if you’re doing cross platform development. Linux only isn’t bad. Windows only isn’t bad. But Linux + Windows (and anything else) is bad. And compile times can be strangely bad, and complex to speed up. (A project I worked on 10+ years ago had 14 hour C++ builds on Solaris/Sparc, for example. That’s just not right.)

                  • Finding good C++ programmers is hard. And almost all good C++ programmers are very expensive, if you’re lucky enough to find them at all. And a bad C++ programmer will often do huge harm to an entire project, while a bad (for example) Python developer will tend to only shit in his own lunchbox.

                  I think the “death by 1000 cuts” analogy isn’t wrong. But it might only be 87 cuts, or something like that. We found that we could systematize a lot of the things necessary to make a C++ project run well, but the list was immense (and the items on the list more complex) compared to what we needed to do in Java, or C#, etc.

                  1. 3

                    Finding good C++ programmers is hard

                    This depends a lot on your baseline. It’s easier to find a good C++ programmer than a Rust programmer of any skill level. Over the last 5 years, it’s become easier to find good C++ programmers than good C programmers. It’s orders of magnitude easier to find a good Java, C#, or JavaScript programmer than a good C++ programmer and noticeably easier than finding C++ programmers of any competence level.

            2. 2

              Embedded software, operating systems, realtime buses, audio and video applications…

              Yep! In other words, almost all the things I’m most interested in!

              So, no matter how you look at it, GC is a bad deal.

              …Ok I gotta call you out there. :P There’s plenty of times when a GC is a perfectly fine and/or great deal. The problem is just that when you don’t want a GC, you really don’t want a GC, and most languages with a GC use it as a way to make simplifying assumptions that have not stood the test of time. I think a bright future exists for languages like Swift, which use a GC or refcounting and have a good ownership system to let the compiler optimize the bits that don’t need it.

              1. 3

                It’s a bad deal you can sometimes afford to take when you have lots of CPU cycles and RAM to spare ;-)

                Don’t get me wrong, I’m open to using any tool as long as it gets the job done reliably. I wouldn’t want to manage memory when writing shell scripts. On the other hand, the use-case for shell scripts is very narrow, I wouldn’t use them for most things. The larger the project, the more of a liability GC becomes.

                1. 2

                  It’s a bad deal you can sometimes afford to take when you have lots of CPU cycles and RAM to spare ;-)

                  It’s not always that clear cut. Sometimes the performance gains from being able to easily use cyclic data structures that model your problem domain and lead to efficient algorithms can significantly outweigh the GC cost.

                2. 1

                  Ok, fair. :-) Hmmmm though, I actually thought of a use case where GC of some form or another seems almost inevitable: dealing with threads and/or coroutines that have complex/dynamic lifetimes. These situations can sometimes be avoided, but sometimes not, especially for long-running things. Even in Rust it’s pretty common to deal with them via “fiiiiiiine just throw the shared data into an Rc”.

                  Also, since killing threads is so cursed on just about every operating system as far as I can tell, a tracing GC has an advantage there in that it can always clean up a dead thread’s resources, sooner or later. One could argue that a better solution would be to have operating systems be better at cleaning up threads, but alas, it’s not an easy problem.

                  Am I missing anything? I am still a novice with actually sophisticated threading stuff.

                  1. 2

                    dealing with threads and/or coroutines that have complex/dynamic lifetimes

                    The more code I write, the more I feel that having a strong hierarchy with clearly defined lifetimes and ownership is a good thing. Maybe I’m developing C++ Stockholm syndrome, but I find myself drawn to these simpler architectures even when using other languages that don’t force me to.

                    About your point with Rc, I don’t think this qualifies as a garbage collector, because you don’t delegate the cleanup to some runtime; you still delete the object inside the scope of one of your own functions (i.e. the last scope that drops the object), and thus on the time budget of your own code. Additionally, often just a few key objects/structs need to be wrapped in a std::shared_ptr or Rc, so the overhead is negligible.

                    Also, since killing threads is so cursed on just about every operating system as far as I can tell

                    Threads are supposed to be joined cooperatively, not killed (canceled). At the point of being canceled, the thread might be in any state, including inside a critical section holding a mutex. This will almost certainly lead to problems down the road.

                    But even joining threads is cursed, because people do things like sleep(3), completely stalling the thread, which makes it impossible to terminate it cooperatively within a reasonable time frame. The proper way for a thread to wait is to wait on the thing you actually want to wait on, plus a cancellation event that is triggered when the thread needs to be joined. So you wait on two things at the same time (also see select and epoll). It’s not so much the OS that is the problem (though the OS doesn’t help, because it doesn’t provide simple-to-use, good primitives) but the programmer.

                    Threads should clean up their own state upon being joined. The owner of the thread, the one who called join (usually the main thread), will clean up any remains, like entries in thread lists. There should never be an ownerless thread. Threads must be able to release their resources and be stopped in a timely manner anyway, for example when the system shuts down, the process is stopped, or submodules are detached. Here, a garbage collector does not provide much help.

          2. 3

            The Go projects I’m (somewhat) involved with still very much have GC related performance issues. Big-data server stuff. Recent releases of Go have helped, though.