Threads for colemickens

    1. 28

      Some good responses to this post on Mastodon:

      Red Hat: those who use open source code and don’t contribute back are “a real threat to open source companies everywhere” I call them: users. I fight for the users.

      https://mastodon.social/@geerlingguy/110612337176962302

      How are you going to get people to contribute back if they aren’t first users? Sure not everyone will, but if you don’t let them be users first no one will

      Well, they were freeloading off your work, now you are both even. And everyone else has lost something.

      Also, worth reading Jeff Geerling’s response to the original post: https://www.jeffgeerling.com/blog/2023/im-done-red-hat-enterprise-linux

      My two cents: I’ve seen a lot of people say that this is a perfectly predictable move by Red Hat, that in order to turn a profit companies will want to capture the value of their work. Sadly, this is textbook enshittification: a healthy ecosystem doesn’t let one company capture all of the value of a community, including a community built around open source software. The value of the community comes in part from letting other people and businesses enjoy some of the value that they are also contributing. Capturing more / too much of the value risks destroying the community, and if that happens, sadly in the end not only will Red Hat no longer be able to capture that value, the community will also lose the value that it contributed when the business+community dies or is killed by the corporation.

      I don’t use RHEL or the now threatened clones, but… Seeing this power grab has made me nervous about all of the other stuff Red Hat is responsible for, directly or indirectly, including my primary computer’s current OS, Fedora, and the configuration management software behind all my web infrastructure, Ansible.

      It really feels like the pace of enshittification has been accelerating lately, with Twitter, Reddit, and other entities I’ve relied on for years suddenly becoming so bad so quickly that it’s become intolerable for me.

      1. 20

        Some good responses to this post on Mastodon:

        Red Hat: those who use open source code and don’t contribute back are “a real threat to open source companies everywhere” I call them: users. I fight for the users.

        https://mastodon.social/@geerlingguy/110612337176962302

        Now let’s look at the actual wording in the blog post:

        Simply rebuilding code, without adding value or changing it in any way, represents a real threat to open source companies everywhere.

        I don’t particularly agree with this statement, but it’s a quite different one from what Jeff Geerling is making it out to be. Sadly, a certain section of critics always resort to such tactics whenever an OSS company does something they don’t like.

        this is textbook enshittification: a healthy ecosystem doesn’t let one company capture all of the value of a community, including a community built around open source software.

        This is very confused mumbo-jumbo that sounds vaguely economic, but isn’t really. It takes five seconds of thought to realize there are other significant Linux distros, and Red Hat is nowhere near capturing the entire community.

        1. 14

          It takes five seconds of thought to realize there are other significant Linux distros, and Red Hat is nowhere near capturing the entire community.

          How many of those distros managed to avoid adopting things that Red Hat pushed on the ecosystem and controlled, such as PulseAudio and systemd?

          1. 18

            Or like Pipewire so you can actually use the AAC codec with your Bluetooth headphones, damn Red Hat for improving the desktop experience.

            1. 13

              Yeah… I’m no RH or RHEL fan but GNOME, PulseAudio, PipeWire, a number of Wayland-adjacent projects, including leaning-in on Firefox’s Wayland support… There’s a lot of things RH touches that I appreciate. I also worked at a large tech company and realize that the folks making the decisions about RHEL source availability are probably not the people spearheading these efforts, or even the ones setting technical direction.

            2. -1

              yeah because bluetooth headphones are an improvement of the desktop experience.

              1. 2

                Are you implying that they’re not? Because in fact, they are for me, I happen to use a desktop with BT headphones.

          2. 16

            In other words, what you’re asking is: how many of those distros managed to avoid the free software and labour that Red Hat funded for them and the rest of the Linux community? Also, give it a rest about systemd already. It’s over, and it won for technical reasons. It’s not some huge conspiracy theory.

            1. 1

              You’re the one theorizing a conspiracy wherein Red Hat investors and managers act altruistically, against the economic incentives they would have to follow if they didn’t conspire.

              What else could you mean by saying Red Hat does things “for them and the rest of the Linux community”?

              1. 1

                So let me get this straight, according to you, I am saying that Red Hat (which has been an open source company from day one) is engaged in a conspiracy to do open source work. Did I understand right?

                1. 1

                  I am asking how else we should interpret “for them and the rest of the Linux community,” other than the idea that they acted altruistically, which would require a conspiracy.

          3. 8

            Or DRI. The Direct Rendering Infrastructure and Direct Rendering Manager (DRI/DRM) are a joint effort from a bunch of contractors, including Red Hat. It was an open secret that kernel modesetting (KMS) was pushed in order to have a faster, prettier boot screen for some unknown client of Red Hat; I remember standing next to the DRI/DRM maintainer while Linus yelled at him about this.

          4. 4

            Those distros have only themselves to blame for those things, in particular. I’ll not shed tears for them.

            1. 6

              I have a lot of sympathy with them. FreeBSD managed not to adopt these things but ended up having to do a lot of work. When a single company is pushing a vertically integrated solution and has enough developers to be able to upstream changes that add it as a dependency across a load of packages, it’s very hard to keep things working if you don’t adopt their solution.

              1. 2

                My memory could be fuzzy, but I recall a lot of capitulation and even mockery of folks’ concerns in and by those distros. That said, I suppose you’ve got a point. :)

              2. 1

                OTOH that work could be shared if major linux distros did not all capitulate.

      2. 8

        My two cents: I’ve seen a lot of people say that this is a perfectly predictable move by Red Hat, that in order to turn a profit companies will want to capture the value of their work. Sadly, this is textbook enshittification: a healthy ecosystem doesn’t let one company capture all of the value of a community, including a community built around open source software. The value of the community comes in part from letting other people and businesses enjoy some of the value that they are also contributing. Capturing more / too much of the value risks destroying the community, and if that happens, sadly in the end not only will Red Hat no longer be able to capture that value, the community will also lose the value that it contributed when the business+community dies or is killed by the corporation.

        I don’t think you’re actually saying what amount of value Red Hat was capturing, how much more it’s capturing now, and why you think the lower amount is correct. Calling it “enshittification” does not make an argument.

      3. 8

        How are you going to get people to contribute back if they aren’t first users? Sure not everyone will, but if you don’t let them be users first no one will

        From the original article, emphasis mine:

        We also provide no-cost Red Hat Developer subscriptions and Red Hat Enterprise Linux (RHEL) for Open Source Infrastructure. The developer subscription provides no-cost RHEL to developers and enables usage for up to 16 systems, again, at no-cost. This can be used by individuals for their own work and by RHEL customers for the work of their employees. RHEL for Open Source Infrastructure is intended to give open source projects (whether or not they’re affiliated with Red Hat in any way) access to no-cost RHEL for their infrastructure and development needs.

        16 machines is a pretty generous free license, definitely not a barrier to entry.

        1. 12

          The barrier to entry is automating the fetching of one of those licenses for a CI/CD system, then making sure you don’t run more than 16 of them in parallel (Docker containers each count as a license, BTW, so it’s not even full VMs), then making sure you check that license back in.

          On top of that the licensing system is fickle, and sometimes just doesn’t license a system or takes a while to do so, so now your CI/CD pipeline is sitting idle while RHEL sorts out its entitlement and gives you a certificate for use with their repositories to fetch the packages you need to run your compile/integration testing…

          Lastly you are required to renew the license on a yearly basis and there is no automated way to do that, so every year your CI/CD pipeline will need to get updated with new secrets because it stops working.

          None of those limitations existed with CentOS. I could easily grab the RPMs I needed from a mirror, I could easily cache those packages (legally) and install them on a system without first having to register it as licensed. I didn’t have to figure out how to distribute the right license key for registering the system with RHEL in the first place and keep it safe…

          It’s not something that can’t be overcome and solved, it’s just an additional hurdle that you have to jump over, and yet one more thing that can go wrong/has to be debugged/tested/validated.
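
          For concreteness, here’s a rough, untested sketch of what that hurdle looks like inside a CI job; `RH_ORG_ID` and `RH_ACTIVATION_KEY` are placeholder variables, and a real pipeline would also need the parallelism and yearly-renewal bookkeeping described above:

```shell
# Register a RHEL CI worker on spin-up and always return the entitlement,
# even when the build fails. Untested sketch; credentials are placeholders.
set -e
trap 'subscription-manager unregister || true' EXIT

subscription-manager register \
  --org "$RH_ORG_ID" \
  --activationkey "$RH_ACTIVATION_KEY"

dnf -y install gcc make   # entitled now, so package fetches work
make all test             # the actual CI workload
```

          Every line here is a failure point that simply did not exist when the packages could be fetched anonymously from a CentOS mirror.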

          1. 1

            It’s not something that can’t be overcome and solved, it’s just an additional hurdle that you have to jump over, and yet one more thing that can go wrong/has to be debugged/tested/validated.

            Which is no barrier to a big company that would blow past the 16-system limit, but is instantly a barrier for small one-man teams.

    2. 0

      I’d be interested in the backstory here: was Red Hat ever profitable before these changes? Did something stop them from turning a profit when they did before? Or did someone at IBM just decide they could be squeezed for more profit?

      1. 10

        A quick search for their financial reports showed that, as of 2019, they were making quite large piles of money. I was quite surprised at how much revenue they had: multiple billions of dollars in revenue and a healthy profit from this.

        It’s not clear to me the extent to which CentOS affected this. My guess is that, by being more actively involved for the past few years, they’ve made it easier to measure how many sales that get as a conversion from CentOS users and found that it’s peanuts compared to the number of people that use CentOS as a way of avoiding paying an RHEL subscription.

        I didn’t see any post-acquisition numbers, but I wouldn’t be surprised if they’re seeing some squeeze from the trend towards containerisation. Almost all Linux containers that I’ve seen use Ubuntu if they want something that looks like a full-featured *NIX install, or Alpine (or something else tiny) if they want a minimal base layer. These containers then get run in clouds on Linux VMs that don’t have anything recognisable as a distro: they’re a kernel and a tiny initrd with just enough to start containerd or equivalent. None of this requires a Red Hat license, and Docker makes it very easy to use Windows or macOS as the client OS for developing them. That’s got to be eroding their revenue (and a shame, because they’re largely responsible for the Podman suite, which is a much nicer replacement for Docker, but probably won’t make them any money).

        1. 3

          I’d say they’re not hurting in the containerization space, with OpenShift as the enterprisey Kubernetes distro, quay as the enterprisey container registry, and the fact that they own CoreOS.

          1. 2

            If you’re deploying into the cloud, you’re using the cloud provider’s distro for managing containers, not OpenShift. You might use quay, but it incurs external bandwidth charges (and latency) and so will probably be more expensive and slower than the cloud provider’s registry. I don’t think I’ve ever seen a container image using a CoreOS base layer, though it’s possible, but I doubt you’d buy a support contract from Red Hat to do so.

            1. 2

              You’re missing that enterprises value the vendor relationship and the support. They can and will do things that don’t seem to make sense externally but that’s because the reasoning is private or isn’t obvious outside that industry.

              I’ve never seen a CoreOS-based container but I’ve seen a lot of RHEL-based ones.

              1. 1

                You’re missing that enterprises value the vendor relationship and the support.

                Possibly. I’ve never had a RHEL subscription but I’ve heard horror stories from people who did (bugs critical to their business ignored for a year and then auto closed because of no activity). Putting something that requires a connection to a license server in a container seems like a recipe for downtime.

                1. 2

                  I expect that big enterprise customers will not suffer stamping a license on every bare-metal host, virtual machine, and container. My experience is that connectivity outside the organization, even in public cloud, is highly controlled and curtailed. Fetching from a container registry goes through an allow-listed application-level proxy like Artifactory or Nexus, or through peculiarly local means. Hitting a license server on the public internet just isn’t going to happen. Beyond a certain size these organizations negotiate terms, among them all-you-can-eat and local license servers.

                  It’s going to be interesting.

      2. 4

        All this is easily findable on the Internets, but the tl;dr - yes. Red Hat was profitable. That’s Red Hat’s job, to turn a profit. It’s also Red Hat’s job to remain profitable and try to grow its market share, and to try to avoid being made irrelevant, etc.

        Being a public company means that shareholders expect not only profit, but continual growth. Whether that’s a reasonable expectation or healthy is a separate discussion, but that’s the expectation for public companies – particularly those in the tech space. IBM paid $34 billion for Red Hat and is now obliged to ensure that it was worth the money they paid, and then some.

        If RHEL clones are eating into sales and subscription renewals, Red Hat & IBM are obliged to fix that. I don’t work at Red Hat anymore, but it’s no secret that Red Hat has a target every quarter for renewals and new subscriptions. You want renewals to happen at a pretty high rate, because it’s expensive to sign new customers, and you want new subscriptions to happen at a rate that not only preserves the current revenue but grows it.

        That’s the game, Red Hat didn’t make those rules, they just have to live by them.

        Another factor I mean to write about elsewhere soon is the EOL for EL 7 and trying to ensure that customers are moving to RHEL 8/9/10 and not an alternative. When CentOS 7 goes EOL anybody on that release has to figure out what’s next. Red Hat doesn’t have any interest in sustaining or enabling a path to anything other than RHEL. In fact they have a duty to try to herd as many paying customers as possible to RHEL.

        So it isn’t about “aren’t they making a profit today?” It’s about “are they growing their business and ensuring future growth sufficiently to satisfy the shareholders/market or not?”

      3. 4

        My guess is that revenue was expected to start declining. Density of deployments has been rising rapidly since Xen and VServer arrived. Red Hat had to adjust pricing multiple times to cope, but I don’t believe they were able to keep up with the trend.

        Nowadays with containers, the density is even higher. We are at PHP shared hosting level density, but for any stack and workload. For simple applications, costs of running them are approaching the cost of the domain name.

        Instead of a fleet of 10 servers, each with its own subscription (as you had in 2005-2010 with RHEL 4 & 5), you now have just a 2U cluster with a mix of VMs and containers, and just two licenses.

        And sometimes not even that. People just run a lightweight OS with Docker on top pretty frequently.

        This is a band-aid on a bleeding wound, I believe.

        They should be pursuing some new partnerships. It’s weird that e.g. Steam Deck is not running an OS from Red Hat. Or that you can’t pay a subscription for high quality (updated) containers running FLOSS, giving a portion of the revenue to the projects.

        1. 3

          The Steam Deck might be a poor business case for Red Hat or Valve. Since the Steam Deck hardware is very predictable and it has a very specific workload, I don’t know if it would make sense to make a deal with Red Hat to support it. It would be a weird use case for RHEL/Red Hat, too, I think. At least it would’ve been when I was there - I know Red Hat is trying to get into in-vehicle systems, so there might be similarities now.

          1. 1

            I am not saying Red Hat should be trying to support RHEL on a portable game console. It should have been able to spin a Fedora clone and help out with the drivers, graphics and emulation, though.

            Somebody had to do the work and they made profits for someone else.

            Concentrating on Java for banks won’t get them much talent and definitely won’t help get them inside the next generation of smart TVs that respect your privacy. Or something.

          2. 1

            It would be a weird use case for RHEL/Red Hat, too. I know Red Hat is trying to get into in-vehicle systems so there might be similarities now.

            One business case for Red Hat would be a tremendous install base, which would increase the raw number of people reporting bugs to Fedora or their RHEL spin. And that in turn could lead IVI vendors to a really battle-tested platform+software combo. Just don’t let them talk directly to the normal support other companies are paying for.

            1. 3

              My understanding is that Canonical has benefitted hugely from WSL in this regard. It’s practically the default Linux distro to run on WSL. If you want to run Linux software and you have a Windows machine, any tutorial that you find tells you how to install Ubuntu. That’s a huge number of users who otherwise wouldn’t have bothered. Ubuntu LTS releases also seem to be the default base layers for most dev containers, so if you open a lot of F/OSS repos in VS Code / GitHub Codespaces, you’ll get an Ubuntu VM to develop in.

    3. 1

      If I recall correctly, they’re publishing riscv64 builds for this release too! Ball’s in your court, GHC: please publish bindists so we can bootstrap on riscv64 properly. AFAICT it’s the last thing holding me back from claiming that basically everything in my NixOS config works across x86/aarch64/riscv64.

    4. 12

      I hate it when GitHub hijacks my Firefox search functionality, and even more so when it doesn’t do it consistently.

      1. 2

        Doesn’t the article say that GitHub considered hijacking the search hotkey, but decided to use the browser’s native search in the end? What’s wrong with their current solution?

      2. 1

        Do they still do it for you? They very recently stopped doing it for me in the non-edit view (even when logged in).

        They also fixed the mismatched tab width issue I had with the new view, so I might have been reverted to the old view.

      3. 1

        They broke ctrl-clicking links in source repos as part of this too. Gotta love it.

    5. 56

      I can get behind renewing the push for copyleft, and even attributing much of the original promotion to RMS, but c’mon: the artful B/W portrait, the repeated call-outs to his prescience, etc. read like hagiography, brief mention of “toxicity” aside.

      The dude is a misogynistic ideologue who abused his platform as a Free Software pioneer to subject other people to his gross views.

      So yeah, support copyleft, but don’t sweep aside how the FSF backed RMS and ignored his willingness to blithely dismiss child abuse as “not that big of a deal”.

      1. 9

        He’s definitely an ideologue, but for the cause of software freedom, which is a good thing. Labeling him a misogynist is just a politicized insult, based on disliking gender-adjacent political opinions he has expressed at some point, or just finding him personally awkward and spergy. There’s a hell of a lot of prominent technologists who I’d want to see ostracized for their tech-unrelated stated political views ahead of Stallman.

        1. 40

          Labeling him a misogynist is just a politicized insult, based on disliking other political opinions adjacent to gender he has expressed at some point

          This can be used to handwave away any level of complaint. After reviewing the GeekFeminism wiki article, with citations, I feel comfortable saying that I’m not throwing my lot in with him.

          There’s hell of a lot of prominent technologists who I’d want to see ostracized for their tech-unrelated stated political views ahead of Stallman.

          He has said and done plenty of things directly related to tech or software projects, so this too comes off as handwavy and dismissive.

          I think RMS was/is right about a lot and I don’t care about canceling him, or being upset, but I’m not going to bat for him either, and I’d love to have a figure like him, that I could fully respect and endorse.

      2. -1

        a misogynistic ideologue who abused his platform as a Free Software pioneer to subject other people to his gross views.

        This is the sort of accusation that nowadays just rolls off my brain like water off a duck’s back.

        Oh so he has cooties huh. He’s “gross” and “misogynistic” which basically means the 7/10 mean girls trying to play queen bees of the autists find him unbearable. That’s all. That’s what you sound like: a moralistic christian nun, the kind that people hated as nurses, because you can tell deep down they think you deserve your suffering.

        And when you say “abuse” what you actually mean is that he earned his way into his position, but others who are resentful and jealous didn’t. They hate that he doesn’t share their views that every statement and assertion should be padded and bubble wrapped to avoid misinterpretation by people who get off on being offended.

    6. 9

      I’m a notorious flakes opponent, so take my comment with a grain of salt, but I read a lot of things written by newcomers to Nix like this:

      I see words like “flakes” and “derivations,” and I currently don’t know what they mean.

      And it is really quite jarring. Flakes have such a PR machine behind them (through the consulting companies that push their adoption) that newcomers believe they’re some fundamental concept that needs to be understood, whereas in reality they are an abstraction layer on top of the fundamental concepts and - in my opinion - should not be used by people who haven’t understood what’s happening below. In beginner chats (e.g. the @ru_nixos Telegram group) a huge portion of posts are problems people have with flakes, which they only have because of flakes, and which they would be able to solve if they had understood the fundamentals.

      I’m just braindumping here, this isn’t intended to attack the newcomers that think like this, but the people that keep pushing out blog posts etc. that perpetuate it.

      I don’t understand Nix’s language syntax, but it’s enough like JavaScript and Python that I can fake my way through at this point. But to use Nix effectively, I’m obviously going to need to learn the language.

      Check out https://code.tvl.fyi/about/nix/nix-1p

      1. 23

        Flakes have such a PR machine behind them (through the consulting companies that push their adoption)

        This seems to imply that the consulting companies have something to gain by pushing flakes, but that doesn’t seem very plausible to me.

        If anything, flakes solve problems that non-experts would have shot themselves in the foot with, which is something consulting companies could bill for.

      2. 9

        For me personally, flakes were what unlocked Nix and NixOS. Inputs are explicit and locked, as opposed to living outside the declaration, as channels do. (Technically inputs don’t have to be pinned, because of the flake registry, but I wouldn’t use that.) With flakes, config/code sharing between NixOS system declarations has a structure I can follow and build on. They’re not perfect by far, but without them I would probably not have gone as deep into the ecosystem.

        Anecdotally I see just as many people in Discord struggling with flake-specific problems as I see struggling with channel problems.

        1. 2

          Flakes vs. channels is a false dichotomy though, both are bolted on features on top of the core concept. Note that almost no experienced, non-flake users use channels, instead preferring to just pin nixpkgs commits directly.
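
          For reference, pinning a nixpkgs commit directly looks something like this minimal sketch; `<commit>` and the sha256 are placeholders for a real revision (`nix-prefetch-url --unpack` prints the hash):

```nix
# Import a fixed nixpkgs revision, bypassing channels entirely.
let
  nixpkgs = builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<commit>.tar.gz";
    sha256 = "<sha256>";  # placeholder; substitute the real hash
  };
  pkgs = import nixpkgs { };
in
  pkgs.hello
```

          Bumping the pin is then an ordinary commit that changes the URL and hash, reviewable like any other change.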

          1. 11

            I don’t follow this at all; despite considering myself more than proficient in Nix, I don’t even know how to use NixOS without channels or flakes. niv doesn’t even have an example of how to use it for NixOS configurations. Certainly as far as I know, nixos-rebuild is going to invoke the channel by default to build the config. I guess for non-NixOS scenarios, I see the point (builtins.fetchTarball to get a nixpkgs).

            And I can’t count the … 3, 4, 5 dozen times that users have had issues because of channel management. And the complexity of supporting them because you have to interrogate the state of the world to determine if they’re on the right channel, oh no, are they really on the right channel since root/users have different channels, etc, etc.

            That having been said, I do wish that there was a clearer, up-front understanding that flakes is “sugar” on top, and doesn’t fundamentally change the Nix underneath. (well, pure eval is a big thing too, but that feels like a different point).

          2. 10

            Could you elaborate on what “core concept” you’re referring to?

            Side note, “almost no experienced, non-flake users use channels” is a heck of a statement, do you have any evidence to back that up?

          3. 6

            Comments like this make me legit mad. I have spent actual time trying to learn this stuff and coming out frustrated, with all resources I’ve found pointing at one of these two solutions, and then this.

            Just get the core concept y’all. Grok it. If you know you know.

          4. 5

            You said flakes are a problem for beginners, and I pointed out channels are also a problem for beginners. Given that channels are the default for both Nix and NixOS installations, this is not a false dichotomy; it’s a valid concern. If we want to talk about experienced users, that’s a different topic and moving the goalposts.

            As you point out flakes are nothing revolutionary, they’re just a set of tooling on top of nix.

          5. 3

            I personally never use channels or flakes - I just manually use nixpkgs as a git submodule and set NIX_PATH. I find that far better.

      3. 4

        Literally every guide that’s coming out now takes flakes as a starting point.

        Surely all of these people must be wrong… How could it be any different?

        But seriously: I haven’t seen even a case made for coherent non-flakes usage of Nix. Let me know what I’ve missed.

    7. 2

      Props to how you’ve phrased this, river, because [the rest of what] I want to say is too blunt. Just more platforms conning/(baiting?) users into filling their site with IP to then monetize via paywalls and nagware. Couldn’t be me. If only there were a relevant current event to show us what is in store…

      Others might disagree, but I’d ask you to consider how readers relate to this type of content when they’re forced to dismiss a popup modal ad in the middle of consuming it, from an emotional and trust standpoint.

    8. 3

      I think that the model underlying git is not well understood by some folks, and I think learning it is unavoidable, if you want to use it effectively.

      But also, the git CLI UX is atrocious. I’d actually be interested to read about its history and how so many commands became overloaded for so many purposes.

      I’m re-setting up CI for my nix monorepo and luckily, from reading git release notes over the years, I know about “git switch” and other newer commands that are more intuitive but boy howdy, it’s amazing how many old resources there are, or even new ones that don’t use these nicer, newer commands.
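
      As a small illustration of those newer, purpose-specific commands (a sketch assuming git >= 2.23, where `git switch` and `git restore` landed):

```shell
# Create a throwaway repo and use the newer commands in place of the
# overloaded `git checkout` / `git reset` forms.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "init"

git switch -c feature            # old way: git checkout -b feature
echo "hello" > file.txt
git add file.txt
git restore --staged file.txt    # old way: git reset HEAD file.txt

git branch --show-current        # prints: feature
```

      Most older tutorials still teach the `checkout`/`reset` spellings, which is exactly the documentation-lag problem described above.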

      Finally, using “jj” for about a month made me upset - git can be good. Like, actually tracking every change, never losing code by accident, easily manipulating many branches - easy. I hope that some notable bloggers can give it a shot and write about it so that it can gain some momentum.

    9. 5

      I’m in the process of abandoning NixOS. It is just not suitable for Unix novices like myself. My most recent build is Linux Mint, which offers a more friendly app install and update experience for the uninitiated. I’m going to rebuild my NixOS machine on that distro soon too.

      Knock on wood I’ve never had to “roll back”, which was the feature of NixOS which at the time was most appealing to me.

      1. 14

        NixOS is a piece of software, so it has no feelings. Welcome back whenever you feel like it.

      2. 8

        NixOS is still hard to get started with, even with lots of *nix experience. Ubuntu or a derivative like Mint is probably a good bet for “normal” users. They provide the vast majority of the value at a fraction of the startup complexity. My own adventure was a little similar:

        1. Tried installing Debian back in the early 2000s, and couldn’t even get Ethernet to work (in those days it was very much not plug-and-play).
        2. Installed a bunch of distros, and was never too happy with any of them.
        3. Ubuntu was the first point of no return. The usability was just next level, and it set a new bar for every Linux distro since then.
        4. Arch Linux was the second point of no return. A rolling distro strangely turned out to be more stable than Ubuntu’s big releases, at the cost of a really complex setup process and clunky package management.
        5. NixOS is the latest point of no return. Being able to configure all of my systems with a few hundred lines of extremely easy to read configuration.nix is vastly preferable to Puppet, never mind manually setting everything up.
      3. 6

        I’m an experienced Linux system administrator and I make heavy use of NixOS for managing my homelab and cloud-hosted VMs, but I still use Arch Linux on my desktop and laptop PCs. NixOS is great for servers but less good at being a desktop OS, and in any case, for the computers I use daily I care more about being able to make quick and arbitrary config changes than I do about codifying their state with a configuration language.

        1. 3

          I care more about being able to make quick and arbitrary config changes than I do about codifying their state with a configuration language.

          So I think that’s more of a home manager thing than a NixOS thing. HM does seem to be the dominant way people use NixOS but I really hated not being able to hot reload my RC files (eg when configuring i3) and decided I wasn’t going to do it. And it works great. I just drop stuff in systemPackages and use the traditional configuration mechanisms and life is good.

          I really think home manager is kinda misguided. Dropping RC files into place was never the painful part of setting up a new machine; that’s the easy part. The sucky bits were remembering what constellation of packages you had installed (solved by NixOS already) and secrets management (eg creating and installing ssh keys, which HM doesn’t really do anything to help you with).

          1. 3

            With my config in HM, I can do things like change a single variable and have my font or primary/secondary colors across all apps/environments change instantly. I can switch between wezterm/alacritty for my main term and things like running prs (pass in Rust) with a single variable change. I have all of my dotfiles version controlled alongside the package versions, meaning that when I rollback to before zellij changed their config format, everything “just works”.

            I used stow for a very long time. HM is to stow what NixOS is to Puppet. Though I do lament the eval/build times making trivial changes a bit annoying. But I find that’s easy to resolve too - when I’m iterating rapidly, I just do it in-place and then hoist that config into Nix/HM whenever it’s done and ready to be a part of my permanent config.

          2. 2

            So I think that’s more of a home manager thing than a NixOS thing. HM does seem to be the dominant way people use NixOS but I really hated not being able to hot reload my RC files (eg when configuring i3) and decided I wasn’t going to do it. And it works great. I just drop stuff in systemPackages and use the traditional configuration mechanisms and life is good.

            I’ve only very recently started using Home Manager, mostly as a replacement for several previous methods of managing dotfiles, but I’m finding that being able to codify a selection of packages I have installed is pretty useful. On the other hand, I’ve already run into some bugs with Home Manager; more importantly, I don’t trust the Nix ecosystem as it stands today to completely reliably manage every system configuration file I have on my system, or build every single package I care about. (NodeJS packages in particular seem prone to breaking during a NixOS deploy, I think because a lot of their derivations were autogenerated from the Node metadata with some tool?)

          3. 1

            Same here. I don’t know why, but I like to get to know the inner workings of things and the whole “Just use home manager” way didn’t appeal to me.

            Now after working out my own configuration setup I don’t see the appeal of it at all.
            And I still have a lot of things I’d like to know about the nix language, but I’m getting there step by step.

        2. 1

          Hell, I even use Manjaro for my several home servers. The Arch distros are very usable and lightweight, and AUR is a godsend.

      4. 2

        It is just not suitable for Unix novices like myself.

        IMO that’s reasonable. When you have more experience, though, it’s definitely worth it to re-evaluate Nix and NixOS for your use cases.

    10. 13

      I feel like this is something nushell is also trying to solve. I’ve not daily-driven nushell, only experimented with it, and I’ve not touched powershell in some years, so I can’t give a definitive answer on it. Both offer better UX and repeatability from a programmatic standpoint.

      ls | where type == "dir" | table
      
      1. 12

        ironically, i think that nushell on *nix systems is a harder sell than powershell on windows because of compat, despite shells being a much larger part of the way people typically work on *nix systems.

        i tried using nushell as my daily driver, and pretty frequently ran into scripts that assumed $SHELL was going to be at least vaguely bash-compatible. this has been true on most *nix boxes for the past 20+ years and has led to things quietly working that really shouldn’t have been written.

        OTOH cmd.exe is a much smaller part of the windows ecosystem and (at least, it seems to me) there is much less reliance on the “ambient default shell behaves like X”, so switching to the new thing mostly just requires learning.

        (i ultimately dropped nushell for other reasons, but this was also over 30 releases ago so it’s changed a bit since then)

        1. 16

          You’ve brought up several valid points, but I want to split this up a bit.

          The reason using something other than bash (or something extremely close, like zsh) is painful is due to the number of tools that want to source variables into the environment. This is actually a completely tractable problem, and my default *nix shell (fish) has good tools for working with it. It’s certainly higher friction, but it’s not a big deal; there are tools that will run a bash script and then dump the environment out in fish syntax at the end so you can source it, and that works fine 95% of the time. The remaining 5% of the time almost always has a small fish script to handle the specific use-case. (E.g., for ssh-agent, there’s https://github.com/danhper/fish-ssh-agent. Hasn’t been updated since 2020, but that’s because it works and is stable; there’s nothing else I could imagine really doing here.) And you could always set whatever shell for interactive-only if you really want it (so that e.g. bash would remain the default $SHELL).

          PowerShell on Windows actually has to do this, too, for what it’s worth. For example, the way you use the Windows SDK is to source a batch file called vcvarsall.bat (or several very similar variants). If you’re in PowerShell, you have to do the same hoisting trick I outlined above, but again, there are known, good ways to do this in PowerShell, to the point that it’s effectively a non-problem. And PowerShell, like fish, can do this trick on *nix, too.

          Where I see Nushell fall down at the moment is in three places. First, it’s just really damn slow. For example, sometimes when I’m updating packages, I see something fly by in brew/scoop/zypper/whatever that I don’t know what it is. 'foo','bar','baz','quux'|%{scoop home $_} runs all but instantly in PowerShell; [foo bar baz] | each { scoop home $it } in Nushell can only iterate on about one item a second. Second, Nushell has no job control, so if I want to start Firefox from the command line, I have to open a new tab/tmux window/what-have-you so I don’t lock my window. And third, it’s still churning enough that my scripts regularly break. And there are dozens of things like this.

          I really want to like Nushell, and I’m keeping a really close eye on it, but, at the moment, running PowerShell as my daily shell on *nix is entirely doable (even if I don’t normally do it). Nushell…not so much.

        2. 7

          You’re absolutely right. It’s a VERY hard sell.

          There are all the software problems, some of which you’ve detailed, and then there’s IMO the even bigger problem - the human problem :)

          UNIX users don’t just use, love, and build with the “everything is a stream of bytes” philosophy; it almost becomes baked into their DNA.

          Have you ever tried to have a discussion about something like object pipelines or even worse yet, something like what AREXX or Apple’s Open Scripting Architecture used to offer to a hardcore UNIX denizen?

          99 times out of 100 it REALLY doesn’t go well. There’s no malice involved, but the person on the other end can’t seem to conceptualize the idea that there are other modes with which applications, operating systems and desktops can interact.

          As someone whose imagination was kindled very early on with this stuff, I’ve attempted this conversation more times than I care to count and have pretty much given up unless I know that the potential conversation partner has at least had some exposure to other ways of thinking about this.

          I’d say it’s kind of sad, but I suspect it’s just the nature of the human condition.

          1. 5

            I believe that, in the end, it all boils down to the fact that plain text streams are human readable and universal. You can opt-in to interpreting them as some other kind of a data structure using a specialized tool for that particular format, but you can’t really do it the other way around unless the transmission integrity is perfect and all formats are perfectly backward and forward compatible.

        3. 2

          I would argue that scripts that don’t include a shebang at the top of them are more wrong than the shell that doesn’t really know any better what to do with them.

          I don’t want to pollute this thread with my love for nushell, but I have high expectations for it, and I’ve previously maintained thousands-of-lines scripts that passed shellcheck and were properly string safe. That’s something I think many people just avoid thinking about. (For example: how do you build up an array of args to pass to a command, and then properly quote them, without hitting the case where an empty array expands to an empty string that inevitably provokes an error in whatever command you’re calling? In nushell this doesn’t matter; you just write let opts = ["foo", "bar xyz"]; echo $opts and the right thing happens.)
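          For anyone who hasn’t hit it, here’s a quick bash sketch of the empty-array quoting pitfall being described (the function and array names are illustrative):

          ```shell
          #!/usr/bin/env bash
          # count just reports how many arguments it received.
          count() { echo $#; }

          opts=()                 # optional flags; may legitimately be empty

          count "${opts[*]}"      # 1 -- collapses to a single empty-string
                                  #      argument, which many commands reject
          count "${opts[@]}"      # 0 -- empty array yields zero arguments

          opts=('-l' 'file with spaces')
          count "${opts[@]}"      # 2 -- word boundaries preserved
          ```

          This is the class of problem nushell sidesteps entirely, since lists are first-class values rather than text that gets re-split.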

          I’ll just leave a small example, so as to not go overboard here. I ought to compile my thoughts more fully, with more examples. But even something like this: https://github.com/colemickens/nixcfg/blob/02c00ef7a3e1e5dd83f84e6d4698cba905894fc7/.github/clean-actions.nu would not exactly be super fun to implement in bash.

          1. 2

            I would argue that scripts that don’t include a shebang at the top of them are more wrong than the shell that doesn’t really know any better what to do with them.

            Oh, the scripts absolutely are the problem. Unfortunately, that doesn’t mean that they don’t exist. Just another annoying papercut when trying out something new.

          2. 2

            My experience with FreeBSD defaulting to csh is that scripts aren’t really the problem. Sure, some of them hard-code bash in the wrong location, but most of them are easy to fix. The real problem is one-liners. A lot of things have ‘just run this command in your shell’ and all of those assume at least a POSIX shell and a lot assume a bash-compatible shell. FreeBSD’s /bin/sh has grown a few bash features in recent years to help with this.

            1. 1

              FreeBSD’s /bin/sh has grown a few bash features in recent years to help with this.

              Oh that’s interesting, didn’t know that but makes sense. I’ve noticed this with busybox ash too – it’s been growing bash features, and now has more than dash, which is a sibling in terms of code lineage.

              A related issue is that C’s system() is defined to run /bin/sh. There is no shebang.

              If /bin/sh happens to be /bin/bash, then people will start using bash features unconsciously …

              Also, system() from C “leaks” into PHP, Python, and pretty much every other language. So now you have more bash leakage …
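              A small illustration of that leakage, assuming bash is installed (a strictly POSIX /bin/sh such as dash rejects this syntax outright):

              ```shell
              #!/usr/bin/env bash
              # Arrays are a bashism, not POSIX sh. If /bin/sh happens to
              # be bash, `sh -c` (and therefore C's system(), Python's
              # os.system(), ...) silently accepts lines like this; under
              # dash the same line is a syntax error. Running it under
              # bash explicitly:
              bash -c 'a=(x y); echo "${a[1]}"'    # prints: y
              ```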

          3. 1

            example, how do you build up an array of args to pass to a command, and then properly string quote them, without hitting the case where you have an empty array

            FWIW my blog post is linked in hwayne’s post, and tells you how to do that! I didn’t know this before starting to write a bash-compatible shell :)

            Thirteen Incorrect Ways and Two Awkward Ways to Use Arrays

            You don’t need any quoting, you just have to use an array.

            a=( 'with spaces'   'funny $ chars\' )
            ls -- "${a[@]}"   # every char is preserved;  empty array respected
            

            The -- protects against filenames that look like flags.

            As mentioned at the end of the post, in YSH/Oil it’s just

            ls -- @a
            

            Much easier to remember :)

            This has worked for years, but we’re still translating it to C++ and making it fast.

            1. 1

              Ah, yes, you certainly know what I (erroneously) was referencing (for others since I mis-explained it: https://stackoverflow.com/questions/31489519/omit-passing-an-empty-quoted-argument). Indeed most times starting with an empty array and building up is both reasonable and more appropriate for what’s being composed/expressed.

      2. 1

        I’m very tempted. I tried it out way back when it first came out and checked back in recently on them, and it’s amazing how far the Nushell project has come.