Devbox is not a leaky abstraction: you can use Devbox and never think about or understand a single aspect of Nix, ever.
Only a Sith deals in absolutes.
But seriously, from personal experience, only installing Nix so that I could use Devbox:
I learned the hard way that one must not use the default Nix installer with the “single user” installation method on my Linux distro of choice. One should instead use the installer from Determinate Systems.
I now have Nix stuff in my PATH and other environment variables. I do actually manage my PATH environment variable carefully (because of other tools), which means I need to understand the consequences of putting the Nix entries in front of, or behind, other items in PATH.
I’m not saying Devbox is bad (I’m not finished evaluating it yet). But yeah, I have yet to encounter a leak-free abstraction when dealing with computers.
lima with fixed Alpine packages is damn close on macOS, but then you’re cheating by basically just making a throwaway Linux installation, not a repeatable Mac installation
I guess a more reasonable thing to say would have been that you definitely won’t need to write any Nix (the language) to use it. It’s very solid at abstracting that away. Having an extremely rudimentary understanding of the fact that there’s a Nix store, what it’s doing to your paths, etc., is probably still useful.
How does base Nix not isolate your projects? How is Nix not supported by direnv when you can echo "use flake" >> $PROJECT/.envrc? How is configuring with JSON an improvement to Nix? You’re also going to have an easier time with Nix if you aren’t trying to keep a lot of software pinned at specific versions instead of using a Nixpkgs pin where everything was built & tested to (usually) work together. With most of the pros being inherited from Nix, I don’t get the appeal of abstracting over it in a rigid way rather than learning the tool underneath which will unlock a lot more for you.
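For readers who haven’t seen the direnv integration mentioned above: a per-project environment is just two small files. A sketch (the nixpkgs branch, system, and package choices here are hypothetical):

```nix
# flake.nix — minimal per-project dev shell (illustrative)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix develop` (or direnv's `use flake`) drops you into this shell
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nodejs pkgs.jq ];
      };
    };
}
```

After `echo "use flake" >> .envrc && direnv allow`, the shell environment loads automatically whenever you cd into the project.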
There are a TON of people who would love the joy of isolated, reproducible dev environments but aren’t willing to learn and use Nix. Devbox is for them (and me). If that isn’t you that’s totally cool.
EDIT: oh and:
How does base Nix not isolate your projects?
I mean, yeah you can build that for sure. I was just picturing how on my NixOS machines I install packages globally whenever I need them and I’m not using project-based flakes. The whole Nix column in the comparison was too small a space to actually convey any information like that. shrug
But you are willing to learn to use Devbox-Nix-in-JSON? At this rate of Nix adoption it just looks like delaying the inevitable result of writing Nix directly. Aside from that, it seems incredibly misleading to say Nix does not do per-project isolation just because you’re not doing it already. It’s not just flakes, either: if you have a shell.nix for your project, does that count? (Yes, it totally does.)
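For the record, a shell.nix that gives per-project isolation is tiny — a sketch, with hypothetical package choices:

```nix
# shell.nix — pre-flakes per-project environment (illustrative)
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  # running `nix-shell` in this directory makes these tools available
  packages = [ pkgs.python3 pkgs.ripgrep ];
}
```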
But you are willing to learn to use Devbox-Nix-in-JSON?
Executing trivial Devbox commands is so much easier than writing Nix flakes it’s not even funny. I don’t even edit the JSON file.
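For comparison, the whole flow is roughly `devbox init`, `devbox add nodejs`, `devbox shell`, and the generated devbox.json is little more than a package list. A sketch (exact fields and version syntax may vary by Devbox version):

```json
{
  "packages": ["nodejs@18", "ripgrep@latest"],
  "shell": {
    "init_hook": ["echo 'dev shell ready'"]
  }
}
```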
it seems incredibly misleading to say Nix does not do per-project isolation just because you’re not doing it already.
Sorry, to be clear, I wasn’t justifying that part of the comparison. I was explaining why I made an error that I intend on correcting when I’m next at my laptop, and I’ll revisit the rest of the column keeping flakes in mind.
Learning the tools you use is a part of the job. What happens when I need to apply a patch to a tool or add an overlay in Devbox? It’d be nice to have a configuration language for the task…
There’s also another important missing part to this: Nix can build projects stateless… not just a stateless environment to build stateful projects inside that environment. A dev shell can be used to some extent to onboard someone to the system, but they should eventually be reaching for Nix to build the project too. When you throw a JSON config & abstraction layer atop Nix, you aren’t exposing the user to the better parts (packages, apps, & overlays) of Nix that a flake.nix would eventually lead that user to playing with. It’s getting folks into an elevator cab of a good idea & then taking away the buttons to reach a higher floor.
This is a false dichotomy. At the bottom of the article are links to using flakes in a devbox project: path: path/to/local/flake or github: ... etc. So, users have a fairly accessible path to using native flakes if they want to get the full power of nix tooling.
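That is, a devbox.json package entry can reference a flake directly — a sketch, with hypothetical paths and outputs:

```json
{
  "packages": [
    "path:./my-local-flake#default",
    "github:NixOS/nixpkgs#hello"
  ]
}
```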
I don’t see the links but adding new flake inputs is not the same as adding overlays & overriding derivations with patches which necessarily require some manual intervention.
As Savil said, you can just use Devbox as a way of orchestrating flakes if you want. It’s pretty easy to do.
But more fundamentally we live in different worlds. I’m not interested in bringing Nix to the masses. I’m not interested in learning Nix lang. I’m not interested in converting my entire build pipeline to Nix. I’m only interested in solving the concrete problems of developer environment rot, bootstrapping, config sharing, etc. and Devbox solves all my problems.
How can you see the advantages of a reproducible dev shell & not see the next-step value in the output also being reproducible? Is Nix, not even a complex language, really that big of a barrier?
If you’ve decided that I desperately need the One True Pure Nix™️ in my development environment, and anything short of that is worse than useless, without knowing anything about how I build software, then I think that says more about how much Nix Kool-Aid you’ve been drinking and less about how I build software.
In the time you spent learning & arguing for Devbox you would already know enough Nix to do most things you needed—even beyond a dev shell. There’s nothing Kool-Aid about the language—it’s configuration in an ML dialect + wrapper around Bash & is like picking up Make or YAML.
You can do as you please, but I don’t think taking the Devbox route is a good long-term recommendation for most folks.
I was a bit confused about what this offers relative to an OCI container. If you have a Dockerfile / Containerfile to build your development environment then there’s a load of off-the-shelf tooling that works with it already. If you have a .devcontainer directory in your repository then VS Code or IntelliJ will grab it automatically and use it and so you can provide the environment with GitHub Code Spaces.
This is what we do with CHERIoT. We have a custom LLVM (which needs a moderately fast machine or a lot of patience, and a fair amount of free disk space) and a couple of simulators that are slightly annoying to build (needing OCaml or Verilog tools that most people won’t have installed), so we build a dev container. We then run our CI in the container (both GitHub Actions and Cirrus CI have a ‘run in a container created from this image’ option, so there’s no additional config needed) and people can hit a couple of buttons in the browser from GitHub to have the full developer environment where they can build firmware images and run them in a simulator.
It’s increasingly safe to assume that developers have an OCI container runtime installed.
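A minimal example of that off-the-shelf path: a .devcontainer/devcontainer.json that pulls a prebuilt image (the image name and extension here are hypothetical):

```json
{
  "name": "project-dev",
  "image": "ghcr.io/example-org/dev-image:latest",
  "customizations": {
    "vscode": {
      "extensions": ["ms-vscode.cpptools"]
    }
  }
}
```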
I just don’t want to develop inside a Docker container inside a VM (Docker Desktop).
I don’t want my tooling to all need to be “devcontainer” aware. I don’t want my battery life to suffer from running a VM, etc etc.
I just want software to be running on my machine, contextually available depending on what project I’m working on. It’s all the benefit and none of the drawback.
I just don’t want to develop inside a Docker container inside a VM (Docker Desktop).
Because…?
A lot of my development over the last 20 years has been in VMs and there’s nothing that I miss from not running bare metal. Containers on top of that give me an easier way of isolating the environments for different projects and pulling pre-built development environments. It also has the advantage that my flow for local and remote development is the same, so I can easily move to working on a big server or cloud VM if my laptop is too slow.
I don’t want my tooling to all need to be “devcontainer” aware.
It doesn’t need to be. I can pull a container image, spin up a container and use vim in it. I can bind mount the project from the host and use external editors for the build. Or I can use something that knows about the dev container and have it manage container lifecycle.
I tend to do the former, but recommend the latter for onboarding. Last week I taught a compartmentalisation workshop with CHERIoT and it took about two minutes to have everyone in the room launch a GitHub code space connected to a dev container using our image. No other approach that I’ve tried comes close.
I don’t want my battery life to suffer from running a VM, etc etc.
My battery suffers far more from building LLVM or from running Stable Diffusion than anything VM related. I have a Linux VM managed by Docker, a FreeBSD VM managed by Podman, and a FreeBSD VM that isn’t used for containers. These don’t even show up in the energy consumption monitor when idle, the virtualisation cost is in the noise when they’re actually doing work.
You can do almost all of that in Nix without the container layer. A common Docker pattern is to run package updates in the container, making it stateful/not reproducible, which is not the most common Nix pattern.
Container integration with other services is better, but that’s a maturity/uptake thing rather than a technical limitation.
I just don’t want to develop inside a Docker container inside a VM (Docker Desktop).
Because…?
I’ve spent most of my life in VMs and containers so this isn’t a casual preference. My very first remote development environment used SFTP to mount a remote file system where I would edit my PHP files directly on a server. I started my career with several PCs under my desk, each with one working copy of Microsoft Office’s development environment, ready to be remoted into over Microsoft RDP. When I worked at Meta/Facebook my development environment was a number of ephemeral, monster VMs that I could request and release, that I would remote into through VS Code. My personal setup at home constantly changes (I love playing with development environments almost as much as I love rewriting my personal website’s tech stack), but my last setup before this was an LXC container running on Proxmox that I would access remotely through VS Code’s SSH plugin.
There are so many benefits to these kinds of setups, as you’ve said:
Trivial on-boarding
Reproducibility (not as good as Nix, but better than let’s say instructions on a wiki)
Pick-up where you left off, from any machine
I could go on for hours
I am no stranger to all the pros. But they also come with a lot of little drawbacks that are annoying. Most of them are super obvious, like remote development environments requiring an internet connection.
But as a single example of a non-obvious one, that affects both local and remote containerized development environments: I love my external diff tools like Beyond Compare. Most of my setups made a GUI diff tool impossible (or at least untenably unwieldy), unless it was baked into my primary, development-environment-aware tool, like VS Code’s built-in (but not up to par) diff tool.
I don’t want my tooling to all need to be “devcontainer” aware.
It doesn’t need to be. I can pull a container image, spin up a container and use vim in it. I can bind mount the project from the host and use external editors for the build. Or I can use something that knows about the dev container and have it manage container lifecycle.
If you are still using tools on your local desktop to get GUI tools, like my example of an external git diff tool, then you’re splitting your setup between your containerized development environment and your local machine, deciding when to bridge the gap between the two. You’d have to set up git and Beyond Compare on your local machine. You’d have to replicate your dotfiles. Then you’d bind mount between the two so files are in both places. Just so you can diff some code.
My battery suffers far more from building LLVM or from running Stable Diffusion than anything VM related.
Sure, that’s true of some people. But I’m a web developer and the right tools can mean the difference between 2 hours of battery life and 15.
It’s a bit of a false dichotomy; they just seem similar on the surface. Imagine this being a layer below the OCI container. In fact, devbox generate devcontainer straight up generates a .devcontainer folder you can use in the way you described.
Your .devcontainer by itself is far from reproducible and might fail at any time. If for example you apt-get install your packages in the Dockerfile, then a new team member building the container might find out that the currently distributed package in that distro has a bug that breaks compatibility with your project. The rest of you, working off cached layers and images, might not even have noticed.
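Concretely, the classic culprit looks something like this — the `RUN` layer resolves whatever package versions the distro archive serves on the day it is built, so two builds of the same Dockerfile can differ (base image and packages here are illustrative):

```dockerfile
# Sketch: this layer is only as reproducible as the apt archive at build time
FROM debian:bookworm-slim

# Unpinned: a rebuild next month may pull different package versions,
# while teammates with cached layers keep the old ones
RUN apt-get update && apt-get install -y --no-install-recommends \
      clang \
      ninja-build \
 && rm -rf /var/lib/apt/lists/*
```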
Your .devcontainer by itself is far from reproducible and might fail at any time.
The .devcontainer bit can do one of two things:
Build from a Dockerfile
Pull from a container registry.
I would assume that you would always do the latter for anything non-trivial. We don’t even include the Dockerfile in our repo; it’s in a separate repository, and we have CI to build it (and test that it can build our project with it before pushing it to the container registry). It’s always built in the equivalent of --no-cache mode (no local cache on the CI machines).
This is the kind of thing that CI is designed to do: test whether something works and produce artefacts from it if it does.
Imagine this being a layer below the OCI container. In fact, devbox generate devcontainer straight up generates a .devcontainer folder you can use in the way you described.
So it’s another way of producing OCI containers? I guess the more the merrier (I personally like Buildah). Or does it require building locally? I consider it a failure of our infrastructure if anyone ever has to build the devcontainer (they may choose to build it, or separately build any of the things in it, but they should never need to). About the only reason anyone does at the moment is the person using PowerPC64 Linux: we don’t (yet?) have CI infrastructure for building a PowerPC image.
Devbox is not a leaky abstraction: you can use Devbox and never think about or understand a single aspect of Nix, ever.
This is some serious hubris.
I think there are two core problems with nix-based dev environments:
You do not own your computer. I wish it weren’t true, but OS vendors will put whatever they want on your box to make it work how they want. When there are OS updates, the vendors will change things. If you have software installed on your system that conflicts with that, you lose. Thus, an “automate the install of native software” approach to managing dev environments means any OS update will create a drag as the automation vendor catches up to whatever the changes are. Meanwhile, you and your team are either stuck on a potentially old/insecure OS or you can’t do development.
The abstraction will leak, and thus you will need to learn Nix, and from what I can gather, Nix is incredibly complicated. Given that almost no team writing any software would otherwise need to learn Nix, it means your dev environment—what you need to literally do any work—is dependent on a language and ecosystem you will never be able to build up a competence in.
This isn’t to say a virtualization-based approach doesn’t have problems, but it doesn’t have these problems.
Hubris is a weird word to use, since I didn’t make Devbox. But it is both the goal of the project and my lived experience.
That said, you’re the second person to object to that language, which is a little confusing to me. It’s possible to use a website and not know how the Linux kernel works or how nginx parses GET request headers. Like… absolute abstractions exist in the world…? I must be missing something about how that line is interpreted.
This isn’t to say a virtualization-based approach doesn’t have problems, but it doesn’t have these problems.
VM-based development environments don’t have software updates and never have any leaky abstractions? I can’t think of a leakier abstraction for development environments than VM-based development environments, and I don’t see how either of those issues are made better by them?
Recently I saw something called Garnix and I think just like CDK is a joy to use (not sure if this is controversial, but it’s been pretty great for us) there could be something to the approach of:
Use one of the most popular programming languages in the world
Type everything strongly
Use imperative statements to derive a declarative setup
I love the structure of this article. I’m not shopping for what it’s selling but I read it anyways because it was a pleasure to do so.
@toastal / @WilhelmVonWeiner: I made the correction: https://alan.norbauer.com/articles/devbox-intro#comparison-to-other-developer-environment-setups (and noted the change). Thank you for that feedback.
If this helps me to get over Nix-style config I’ll give it a try. How come it doesn’t get more visibility like Determinate Systems?
It’ll get more visibility soon, I’m pretty sure. As more people try it and see how great it is, it’ll get more hype.
When you try it and love it tell your friends :)
The website is a little light on information. Can you say more on what it is?
Says “comparison to other dev setups” but then doesn’t compare VS Code’s own solution, devcontainers (https://code.visualstudio.com/docs/devcontainers/containers), which is purpose-built for that.
devcontainers are built on Docker, which is in the chart. But yes, I did not explicitly compare Devbox to every single thing.