1. 37
    1. 20

      The great thing about the Nix (and Guix) package managers is that they unite these two worlds. You can install and configure software globally for your system, but in contrast to traditional “program managers” you can install different versions of the same package simultaneously. But you can also locally fetch development dependencies, and while most “module managers” can only provide libraries written in their specific programming language, Nix can provide any library, compiler or build tool (or even graphical IDE, web server, database server, …) written in any language within your local development environment.
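
      For instance, a minimal shell.nix sketch along these lines (package names are just examples from nixpkgs) pulls a compiler, a library and a database server into one local development environment:

        with import <nixpkgs> {};

        mkShell {
          # a compiler, a C library and a server, side by side in one dev shell
          buildInputs = [ gcc openssl postgresql ];
        }

      Running nix-shell in that directory then drops you into the environment.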

      [I also later posted this comment below the article]

    2. 12

      My hope is that in the future the module managers gracefully degrade to a set of standards on module metadata, and the program managers automatically consume this metadata to produce native packages. There will be fights over this, but experience seems to be on the side of distros and the program managers.

      1. 9

        I would love to live in that world. It looks as if the current trend is to avoid program managers and just ship docker containers that have a load of things in them that make it difficult to do any kind of supply-chain audit.

      2. 3

        In part, this is already reality. See for example Gentoo’s hackport, which generates Gentoo build metadata (ebuilds) from Haskell’s cabal files. A similar tool exists for Rust: cargo-ebuild.
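
        As a rough sketch of the cargo-ebuild flow (my-crate is a placeholder project):

          cargo install cargo-ebuild   # install the generator
          cd my-crate/
          cargo ebuild                 # reads the Cargo metadata, emits a Gentoo ebuild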

    3. 7

      Great article, it clarified my own thinking!

      One interesting issue here is how to merge the two. Say you are a distro, and you want to package (by compiling from source) a bunch of programs, some of which are written in Rust. You probably want there to be a single canonical version of the regex or rustls crate (module in the article’s terminology) which is used by all Rust programs. I don’t think we know how to do that properly though!

      The simplest approach is to just trust the programs’ lockfiles, but then you get duplication if one lockfile says regex 1.0.92 and another uses regex 1.0.93. I think that’s the approach used by Nix to package Rust.
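
      As a hypothetical illustration (versions invented), the distro ends up building both of these separately:

        # program A's Cargo.lock
        [[package]]
        name = "regex"
        version = "1.0.92"

        # program B's Cargo.lock
        [[package]]
        name = "regex"
        version = "1.0.93"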

      A different approach is to package each individual module as a separate package, but that is a lot of work, pollutes the package namespace (when a user searches for a program, they see a bunch of libraries), and hits the impedance mismatch between the flexible versioning of the module manager and the rigid versioning of the program manager. I think that’s the approach used by Nix to package Haskell.

      1. 4

        One interesting issue here is how to merge the two.

        I come from a C/C++ background (so shared libraries) and Debian rather than Nix, and to me it seems the right approach is to use the module manager for development and the program manager for end-user delivery. To support this, the module manager should allow “detaching” the package manager part from the “package manager + build system” combo used during development and replacing it with the program manager’s package manager. Then the folks working on the program manager (e.g., Debian developers) can decide which version(s) of the regex library to package, along with the programs that depend on it. That’s the approach we’ve adopted in build2.

    4. 7

      This distinction is very similar to the one made in this article, except it splits the module manager into two subcategories:

      • Language package managers, e.g. go get, which manage packages for a particular language, globally.
      • Project dependency managers, e.g. cargo, which manage packages for a particular language and a particular local project.

      To be fair, many package managers play both roles by allowing you to install a package locally or globally. I tend to think that global package installation is an anti-pattern, and the use cases for it are better served by improving the UX around setting up local projects. For example, nix-shell makes it extremely easy to create an ad-hoc environment containing some set of packages, and as a result there’s rarely a need to use nix-env.
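
      For example, something like this gives you a throwaway shell with the named packages on PATH, with nothing added to any global profile (package names are arbitrary):

        nix-shell -p gcc ripgrep   # ad-hoc environment, gone when you exit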

      1. 13

        I tend to think that global package installation is an anti-pattern

        From experience, I agree with this very strongly. Any “how to do X” tutorial that encourages you to run something like “sudo pio install …” or “sudo gem install …” is immediately very suspect. It’s such a pain in the hindquarters to cope with the mess that ends up accruing.

        1. 3

          Honestly I’m surprised to read that this still exists in newer languages.

          Back when I was hacking on Rubygems in 2008 or so it was very clear that this was a mistake, and tools like isolate and bundler were having to backport the project-local model onto an ecosystem which had spent over a decade building around a flawed global install model, and it was really ugly. The idea that people would repeat those same mistakes without the excuse of a legacy ecosystem is somewhat boggling.

        2. 3

          Gah, this is one thing that frustrates me so much about OPAM. Keeping things scoped to a specific project is not the default, global installation of libraries is more prominently encouraged in the docs, and you need to figure out how to use a complicated, stateful workflow built around global ‘switches’ to avoid getting into trouble.
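
          For what it’s worth, opam 2.x does have project-local switches, you just have to know to ask for them. A sketch (compiler version assumed):

            opam switch create . 4.14.1   # switch scoped to this project directory
            opam install . --deps-only    # dependencies go into the local switch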

        3. 3

          One big exception… sudo gem install bundler ;)

          (Though in prod I do actually find it easier/more comfortable to just use Bundler from APT.)

    5. 5

      This is a useful distinction that helps clarify why we have faulty security expectations from module managers, IMO. We’re used to packages being vetted (to some extent) before they’re included in a program manager — you don’t run any great risks by running apt-get install libxyz. In contrast, running npm install xyz is about as (in)secure as running curl randomdomain.com | bash, but most of us aren’t as cautious as we maybe should be when installing new/unknown modules.

    6. 5

      Another way to think about the split is static vs dynamic binding.

      If I’m shipping something to production, I want all dependencies frozen so that I can test them and know that they work and not change them again until I’m ready. This is static binding.

      If I’m authoring a library or other program designed to slot into someone else’s production system, I want to be able to say “hey as long as you give me version X or greater of dependency Y, I’m cool and can deal with it.” This is dynamic binding.

      I think the thing the world needs is better ability to declare dynamic dependencies across platforms and then transform those dynamic dependencies into static dependencies for production—and then change back to dynamic and refreeze when e.g. there’s a security bug in some library somewhere.
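
      Cargo already models both halves of this within one ecosystem, which makes a decent sketch of the idea:

        # Cargo.toml: the dynamic declaration ("version X or greater of Y")
        [dependencies]
        regex = "1.0"    # any semver-compatible 1.x

        # Cargo.lock (generated): the static binding, one exact version plus checksum.
        # To refreeze after a security fix in the dependency:
        #   cargo update -p regex

      The missing piece is doing the same across ecosystems rather than inside each one.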

    7. 4

      I think this article has it backwards: we got here by assuming that the two types are distinct, but the moment a module starts to interact with ABI surfaces outside of its language, the distinction breaks down. It’s fine to install a Perl module that is written in Perl; but if it starts to depend on system libraries, now we have a program manager providing things that a module manager is consuming. Things go haywire from there, because there’s no way to be certain that a binary module works across different program manager environments, and there’s no way to be certain that upgrading a program won’t break a module. Things like semantic versioning don’t help here, because the program manager can determine nothing is using an old ABI within the program manager’s realm, but it doesn’t understand the module manager.

      That brings us to the current place, where the program manager folks think the module manager folks are breaking them and vice-versa, and users are left assembling things into containers and testing them together. Every upgrade needs to go through that manual validation, because the competing managers can no longer promise that any change is globally safe.

    8. 3

      I like this distinction, except a couple quibbles:

      • What he’s calling a “program package manager” like apt installs libraries too. It installs libc, libopenssl, etc.
      • Nix, Guix, and the experimental Distri probably belong in a different (third) category. They don’t have a single global version number for all libraries. You can install libx 1.0 and libx 2.0 side by side, or two programs that use them side by side.

      I was calling the latter “binary-centric” rather than “library-centric”, and I think new distros will gradually migrate to that model. At least distros that are intended for distributed systems.

      The problem is that right now we’ve sort of repurposed desktop and single node distros (Debian, even Alpine, etc.) for distributed systems, which need more stability and reproducibility. Hence the Docker “hack”, etc.


      Also pip does install binaries like bin/pygmentize. So the space is very confused. I think package managers start out with a certain design and then they have to “go to war” against real use cases and get incoherent pretty quickly. They become filled with special cases. For example Debian’s apt has hacks for resolving dependencies around certain packages, etc.

    9. 1

      This is an insightful and useful article. A few people have already touched on one species of crossover - how to manage modules with your system package manager - but there’s another crossover I’ve been wondering about recently: how to manage programs with your module package manager. Pip, for instance, can install binaries, and Python applications are often installed with it when they aren’t available in a distro’s package listing.

      I’ve been thinking recently about jpm, the Janet package manager, which is in a similar vein. It can not only build but also install binaries.

      Both pip and jpm can install globally using sudo. However, both have a local mode as well. Pip’s local mode is scoped to the user, which means that you can add a directory to your PATH and install binaries without ever running sudo.
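
      For instance, with pygmentize as the example binary (on Linux, user-level scripts land in ~/.local/bin):

        python -m pip install --user Pygments   # no sudo needed
        export PATH="$HOME/.local/bin:$PATH"    # one-time PATH addition
        pygmentize -V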

      Jpm, however, is like npm. Its local mode is project-based, not user-based. In terms of modules this is certainly better: project modules can’t conflict the way that user modules can. However, one loses the ability to install a binary ‘locally’ into a consistent place.

      1. 1

        With npm, I have things installed in “projects” in ~/stuff and then I symlink scripts from ~/stuff/foo/node_modules/.bin into a directory on my PATH. This works well enough for me.
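
        Roughly like this (~/stuff/foo and the tool name are placeholders, and ~/bin is on my PATH):

          cd ~/stuff/foo
          npm install some-tool   # local install, no -g
          ln -s ~/stuff/foo/node_modules/.bin/some-tool ~/bin/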

        1. 2

          What’s the advantage of splitting it up like that? I.e. why not just set npm’s prefix to something in ~ and use npm install -g?
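
          I.e., something along these lines (the prefix directory is arbitrary):

            npm config set prefix ~/.npm-global
            export PATH="$HOME/.npm-global/bin:$PATH"
            npm install -g some-tool   # "global" now means user-local, no sudo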

          1. 1

            Not strongly opinionated on either; it makes it a little easier to cleanly nuke one version of a program in order to replace it with another.

    10. 1

      Reminds me of a different distinction made in this Hackaday article about additions being open or restricted in various package managers, such as npm, crates.io, and PyPI being open to everyone, and Debian being restricted to maintainers.

      Personally, I’m not in love with how many node_modules get pulled in. I do like Python’s system being open to everyone, and it feels like fewer dependencies get pulled in. Probably because of how large the standard library is.