1. 48

  2. 18

    I was a nixos user for a year or so maybe 4 years ago. I recently learned how containers work at the kernel level, which deepened my understanding of Nix.

    I’m still not sure this is right, but this is how I’ve been thinking about nix:

    Packaging things is hard on Linux because you have dynamic dependencies. Everything written in C probably loads libc, and it gets worse from there: you have all kinds of dependencies that are loaded dynamically at runtime (some crypto library, pthreads, et cetera). If everything were linked into a static fat binary, it would be easy to package and deploy things onto Linux machines, but that’s not the case.

    So one way to view Docker is as a hack around this issue. You can ship an application easily – you can package it up easily – if you put it inside a box, and inside that box you put an entire Linux file system and all its dependencies.

    A second solution is the one that Nix has, which is to rethink all this. When you build things, be very explicit about what the dependencies are, and when you link them, link them by a hash made of all the inputs; then you don’t have collision problems, and you’ve solved the packaging problem for Linux. But it requires changing how you build programs.
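    To make the hash-addressing idea concrete, here’s a toy Python sketch (nothing like how Nix actually computes derivation hashes, and the package names are made up) where a store path is derived from a hash over the name, the version, and every input path:

```python
import hashlib

def store_path(name, version, inputs):
    """Derive a store path whose hash covers all build inputs,
    so builds with different dependencies can never collide."""
    h = hashlib.sha256()
    h.update(f"{name}-{version}".encode())
    for dep in sorted(inputs):  # inputs are themselves store paths
        h.update(dep.encode())
    return f"/nix/store/{h.hexdigest()[:32]}-{name}-{version}"

glibc_237 = store_path("glibc", "2.37", [])
glibc_238 = store_path("glibc", "2.38", [])
ssl_a = store_path("openssl", "3.0.8", [glibc_237])
ssl_b = store_path("openssl", "3.0.8", [glibc_238])
# Same name and version, but different inputs, so different paths:
# both builds can live in the store side by side.
```

    Because the inputs feed the hash, “linking by hash” pins the whole dependency graph, which is where the collision-free property comes from.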

    I’m sure nix experts on here have a deeper understanding of things. But it’s been a useful mental model for me.

    1. 19

      That’s a pretty good way to put it. I think of it even more in terms of namespaces: in the traditional world, the filesystem is one big shared mutable namespace, and everyone has to agree on what e.g. /usr/lib/libssl.so.1 means. This causes the usual problems with shared mutable state that we know from programming languages, but at the systems level instead.

      Docker takes that shared mutable namespace and turns it into a private mutable namespace – implementation-wise, it’s even literally a “mount namespace” on Linux. So you get to keep your existing software that cares about /usr/lib/libssl.so.1, but you lose the global coherent view of things.

      Nix takes that shared mutable namespace and turns it into a shared immutable namespace. Everyone still has to agree on what /nix/store/xzn56dy54k0sdgm4lx98c20r81hq41nl-openssl-3.0.8/lib/libssl.so.3 means, but because of the hash addressing it can really be only one thing. Your existing software now breaks and needs to be rebuilt, but you get a globally consistent namespace, which makes the final running system (but not the build system) easier to reason about.
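      A toy contrast in Python, with made-up package names (real Nix hashes entire build recipes, not just names):

```python
import hashlib

# Shared mutable namespace: one name, last writer wins.
fs = {}
fs["/usr/lib/libssl.so.1"] = "openssl-1.1"
fs["/usr/lib/libssl.so.1"] = "openssl-3.0"  # silently clobbers 1.1

# Shared immutable namespace: the name embeds a hash of what it
# denotes, so a given path can only ever mean one thing.
def nix_path(pkg):
    h = hashlib.sha256(pkg.encode()).hexdigest()[:32]
    return f"/nix/store/{h}-{pkg}"

store = {}
store[nix_path("openssl-1.1")] = "openssl-1.1"
store[nix_path("openssl-3.0")] = "openssl-3.0"
# Both versions coexist; nothing was overwritten.
```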

      In a way, if Docker is the Erlang of software packaging, Nix is the Haskell.

      1. 9

        This is a great read!

        In a way, if Docker is the Erlang of software packaging, Nix is the Haskell.

        Lol, I am stealing this.

        1. 4

          How is Docker Erlang?

          A Dockerfile looks like BASIC to me – no abstraction. Less abstraction than shell.

          Erlang is a functional and concurrent language with Prolog-like syntax. I don’t see the relation to Docker!

          I definitely see Nix and Haskell. Nix is based on lazy expressions, and Haskell is too, although Haskell has a bunch of other stuff like I/O and so forth.

          1. 6

            Dockerfiles indeed offer little to no abstraction. But Docker is more than that. I was thinking more of the runtime view – the way state is isolated into smaller components that run and can fail independently of one another.

            1. 7

              OK I see what you mean, but that “failing independently” is due to the operating system itself – Unix processes, and Linux containers (which fix the “leaks” in processes).

              Let’s not give Docker too much credit, OR too little! It’s really a layer on top of the OS, not the OS itself.

              It should be a small layer, but the implementation is pretty bad and tightly coupled, so it’s a big layer.

              Too much credit: Docker is like Erlang! No, Unix processes are like Erlang – Erlang itself uses the word “process” since the VM is like a little “monoglot” operating system.

              Fault tolerant Linux clusters (“cattle not pets”) are well known at Google / Facebook these days, but they go back to Inktomi in the 90’s:



              As a data point, Google was using Linux containers / cgroups in their clusters starting ~2005, almost a decade before Docker launched.

              Docker had literally nothing to do with this. They actually failed to build their own cloud – the company was called “dot Cloud” before they pivoted to Docker.

              Too little credit: Docker is just LXC, or a bad version of Solaris Zones or Jails (I’ve heard this a lot). No, I would say the central innovation of Docker, and a very useful one, is LAYERS – and the DSL for specifying layers.

              Layers are important for both storage and networking. They solve the “apt-get sprays files all over the file system” problem, and so you can just use apt packages instead of rewriting your whole build system. (This has advantages and disadvantages, but it’s clearly here to stay)
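              The storage half of that can be sketched as an ordered union of per-layer file maps, where later layers shadow earlier ones; this is a rough simplification of the overlay filesystems backing Docker images, with invented paths and contents:

```python
# Each layer maps path -> file contents; an image is an ordered
# stack of layers, and lookup walks from the top layer down.
base = {"/etc/os-release": "debian", "/usr/bin/sh": "dash"}
apt  = {"/usr/bin/python3": "python 3.11"}  # apt's sprayed files stay contained in their own layer
app  = {"/app/main.py": "print('hi')", "/usr/bin/sh": "bash"}  # shadows base's /usr/bin/sh

image = [base, apt, app]

def lookup(layers, path):
    for layer in reversed(layers):  # topmost layer wins
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)
```

              Shadowing is why you can layer an application over stock apt-installed packages without touching how those packages were built.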

              FWIW I built containers from scratch with shell scripts, pre-Docker, and Docker definitely adds something. But it’s more about tools and less about the runtime architecture of your software.

              tl;dr Please don’t say Docker is like Erlang :) It’s sort of a category error, because you can build OCI/Docker containers with both Nix and Bazel. Nix has a functional language, and Bazel has a Python-like language.

      2. 10

        Worth noting that the nix store is mounted read-only. This not only eliminates library impostor attacks (also known as “DLL hijacking”), but also makes it far fewer steps to significantly improve your security profile, since executable files can generally be allowed only in the nix store.

        You also get great amounts of NixOS flexibility on file system segregation to do anything from keeping the nix store on a separate, non-backed-up, volume on fast flash to really fancy things like running with an ephemeral root mount.

        I’m absolutely thrilled about Nix and NixOS. It just completely hit home in the “this is how you are supposed to do computing” kind of way I haven’t felt since comp sci.

        Having been 100% on Mac OS through macOS, I dipped a toe in the Linux waters four years ago and a year later jumped on NixOS. Aside from a couple of Proxmox hypervisors, absolutely every system I touch runs it – including the headless VM we use for in-home game streaming – and it is just an absolute joy.

        At work all services now run on NixOS and nix shell scripts seamlessly support our tools on macOS and Linux. So many prior really annoying challenges have just been obliterated by shared declarative certainty.

        1. 8

          Finding out Nix is 20 years old is like remembering Python is 32 years old. It feels like it came out 5 years ago.

          1. 7

            This is my hot take on Nix, from using it for the last two weeks and thinking about it for a few weeks back in November. I hope a Nix person can read it and correct me.

            Nix is similar to a content-addressable store, like git. The objects in the store are packages, and they are addressed by their dependency graph. Nix the language exists to manage this store and make it possible to create things and put them in the store.

            But it is a language and not just a package manager because it has to subsume all of the other package managers—not just the OS but all the language-specific ones as well—in order to build a complete image. Making it a language and not just a package manager binary of some kind (like RPM) makes sense because there will always be new languages and new package repositories that you need to retrieve artifacts from and integrate into the total system. The fact that Nix is a language is also why NixPkgs can be both so complete and so up-to-date—because commonalities can be abstracted out, reducing the work of generating the packages. Nobody else seems to have figured this out.

            The other noteworthy attributes of the Nix language sort of fall out of the requirements for managing this store. The super powers of isolation, multiple active profiles or shells, and non-interference sort of arise naturally from the structure. Knowing the closure is what enables the super powers of generating container images or AppImages or whatever, from your system.
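            The “knowing the closure” point can be sketched in Python: given a dependency graph (hypothetical packages), the closure of a package is its transitive dependency set, which is exactly what a generated container image or AppImage has to contain:

```python
# Hypothetical dependency graph: package -> direct dependencies.
deps = {
    "myapp":   ["python3", "openssl"],
    "python3": ["glibc", "openssl"],
    "openssl": ["glibc"],
    "glibc":   [],
}

def closure(pkg):
    """Everything pkg needs at runtime, transitively."""
    seen, stack = set(), [pkg]
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(deps[p])
    return seen
```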

            NixOS is basically a program written in Nix that provides a framework for building a bootable Linux system with a big hook in the middle for your configuration. Again, the fact that Nix is a language is what leads to the super powers of being able to take this and turn it into a bootable ISO image or whatever.

            I see Nix as having similar problems to git in that there are a lot of footguns because there are a lot of ways to do things and situations in which those variations might be right, but there are fewer people using it to ascertain the right ways and get the word out about the right ways. Because it’s got fewer users, there’s less content overall, so you often search and find conflicting advice from before and after Flakes existed. And the pre-Flake workflows still work, mostly. In fact, having a good roadmap of how to learn it seems important. I have enjoyed the one in this video, which is briefly: 1. Use NixPkgs, 2. Use nix-env to install packages, 3. Use nix-shell, 4. Learn the basics of Nix (the language), 5. Move from nix-env to nix profile, 6. Rewrite nix-shell to nix flake, 7. Home-manager as a flake, 8. Handle your dotfiles with home-manager, 9. Install and configure NixOS, 10. Learn the rest of Nix (the language).

            My impression is that it is good to say “Care about X and not about the other stuff” so that you don’t feel like you have to learn everything at once. But I do feel like, with Nix, there is a lot there to learn, so you need to believe in the dream, or it isn’t going to work for you. You have to understand why things are the way they are, in terms of how they help to make the dream a reality. Otherwise, you will be frustrated at the weird language, frustrated that it isn’t a normal package manager, frustrated at how hard it is to get things installed, frustrated at missing packages, and so forth. But none of that stuff is frustrating if you understand why you are doing it, what the underlying reasons are for things, and you’ve accepted that you are now a “Nix developer” and not simply a user using some OS or package manager. If you don’t have that dream, then you’ll ask weird questions like, why can’t I keep using maven, why can’t I change something that’s in the store, why can’t I put passwords in my derivations, etc, and then you’ll be offended and turned off when you find out you can’t or shouldn’t.

            1. 3

              Nix is similar to a content-addressable store, like git.

              Maybe someday the Nix store will be truly content-addressed, or “intensional”.

              The fact that Nix is a language is also why NixPkgs can be both so complete and so up-to-date—because commonalities can be abstracted out, reducing the work of generating the packages. Nobody else seems to have figured this out

              I recall that Gentoo abstracts out common elements of package definitions, though their Bash-like language is different from Nix’s lazy functional language and I don’t know that the Bash-like language is as good at abstraction as the Nix language (not that I think Nix couldn’t be outdone in turn in that respect, especially by something statically typed).

              1. 1

                FWIW, I believe the Gentoo “eclass” system being referred to pre-dates Nix entirely.

              2. 3

                because commonalities can be abstracted out, reducing the work of generating the packages. Nobody else seems to have figured this out.

                There’s a lot of abstractions in Debian / debhelpers. They’re unfortunately documented even worse than autoconf. Even since “not your grandpa’s debhelper” https://joeyh.name/talks/debhelper/debhelper-slides.pdf I still see mostly the old patterns repeated.

                1. 2

                  This analysis made me think that you might enjoy reading or at least skimming Eelco’s thesis:

                  The Purely Functional Software Deployment Model - Eelco Dolstra https://edolstra.github.io/pubs/phd-thesis.pdf

                  (I’m not implying that it answers these questions or corrects you or anything. Just a sense, from how you dissected it, that you might find some of the detail there interesting.)

                  1. 3

                    I’m on page 30. :)

                2. 1

                  Does Nix still have value if most of your stuff is statically linked? Memory and disk space are cheap, so more and more stuff seems to be going that way. Admittedly, it’s not as elegant to have lots of fat binaries around.

                  1. 9

                    Yes! You still need to specify your build system, versions of the libraries that get statically linked in, etc. Dynamic linking is just a tiny piece of Nix.

                    1. 6

                      What constitutes “your stuff”? The vast majority of what I’m using NixOS for is configuring a set of virtual machines running all sorts of software I didn’t write personally, and I have no idea whether most of it is statically linked or not - what does it even mean for, say, my random manga-reader app running in a Docker container and written in Crystal to be statically-linked anyway? Nonetheless, I need to configure it correctly or it breaks and I can’t read my manga anymore, and I’d like that configuration to be version-controlled, not break if I change something unrelated it depends on like my database machine’s address, etc.

                      1. 4

                        Yeah lots of value: reproducibility, consistent build tooling, code reuse. The dynamic linking that nixpkgs configures is such a tiny part of using Nix, I really don’t care about it.

                        1. 5

                          I’d like to say that Nix is ‘hybrid linking’: artefacts / outputs / binaries (whatever you call them) look like they are dynamically linked, but nix-build builds them so that in practice they behave as if statically linked.

                          So it is neither dynamically linked like every other Unix/Linux out there, nor statically linked (very popular with Go developers)… it’s both.

                          Some things are shared when possible; other things are not shared (like having two versions of a program or dependency around to keep the software ‘operational’).

                          Nix treats ‘the package manager’ like computer memory: it is allocated (malloc), used, and then can be freed (free), to the effect that once a ‘piece of memory’ (a program or library) is no longer needed, it is simply garbage collected.

                          It’s really that simple! It just treats the package manager (nix) and the operating system (NixOS) as a program.

                          The fact that nix ‘builds’ the Linux kernel, puts libc together, and then adds all the other scripts and programs to make a ‘distribution’ is purely coincidental. It could as easily build a FreeBSD kernel, or a Solaris one, or anything in the past or the future.

                          It’s a build system. ‘If you gave it a pile of bricks and some cement, it would nix-build a wall for you’ and then simply vanish. Hence NixOS is a meta-distribution.

                          Please feel free to fork it / overlay it and build your own. Guix also uses the same paradigm… good ideas get copied, and rightfully so!
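                          The malloc/free analogy can be made concrete with a tiny reachability sketch (made-up store paths): garbage-collector roots such as system profiles keep paths alive, and everything unreachable from them is, roughly, what a Nix garbage collection would delete:

```python
# Store paths and the other paths they reference (their retained deps).
refs = {
    "glibc":       set(),
    "openssl-old": {"glibc"},
    "openssl-new": {"glibc"},
    "myapp":       {"openssl-new"},
}
roots = {"myapp"}  # e.g. the current system profile

def gc(refs, roots):
    """Mark everything reachable from the roots; the rest is garbage."""
    live, stack = set(), list(roots)
    while stack:
        p = stack.pop()
        if p not in live:
            live.add(p)
            stack.extend(refs[p])
    return set(refs) - live
```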

                        2. 4

                          Yes, in fact I think Nix solves a particular problem with static linking very well:

                          When you do need to upgrade a library everywhere, for example to fix a security issue, you need to rebuild everything using that library. Nix makes this really easy and automatic.
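                          The set to rebuild is the reverse-dependency closure of the patched library; a minimal sketch with hypothetical packages:

```python
# Hypothetical graph: package -> direct dependencies.
deps = {
    "app-a":   ["libfoo", "openssl"],
    "app-b":   ["libfoo"],
    "libfoo":  ["openssl"],
    "openssl": [],
}

def needs_rebuild(changed):
    """Every package that transitively depends on `changed`."""
    out, grew = set(), True
    while grew:  # keep sweeping until no new dependents are found
        grew = False
        for pkg, ds in deps.items():
            if pkg not in out and (changed in ds or out & set(ds)):
                out.add(pkg)
                grew = True
    return out
```

                          With input-addressed paths, all of these rebuilds land at fresh store paths automatically, so nothing can keep linking the vulnerable version by accident.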