Once the post got to using a tool to manage tools, I started thinking “this is a job for Nix”, and lo, I was not disappointed. (Well, I do think it’d be worth it to write it in Nix directly, but I concede not everyone wants to know how the sausage is made.)
Now we can drop the cross-platform Makefile hacks we added above and know that developers using Devbox will have the same version of sed available, regardless of the OS they are running on.
Well, the example installed @latest, so I’m not sure that’s actually true. But the spirit is there.
Devbox has a lockfile that can be checked into source control, so as long as the lockfile is intact, all the devs who use it will get the same version. @latest only applies when you update the packages with devbox update.
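For illustration, a minimal devbox.json might look something like this (the package names and version syntax here are assumptions for the sketch; check the Devbox docs for the exact schema):

```json
{
  "packages": [
    "gnused@latest",
    "python@3.10"
  ]
}
```

Running devbox update then refreshes the lockfile, resolving @latest to a concrete version that every developer gets from then on.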
We use a similar approach with Devbox – generate a simple Dockerfile, install Nix, and then use Nix to install your packages. We tried using dockerTools previously, but its cross-compilation was pretty finicky to get right, and Dockerfiles were easier to integrate with existing workflows.
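A rough sketch of that approach (the base image, installer invocation, and package names are illustrative, not Devbox’s actual generated output):

```dockerfile
# Start from a minimal base image
FROM debian:bookworm-slim

# Install prerequisites, then Nix itself (single-user install)
RUN apt-get update && apt-get install -y curl xz-utils \
    && curl -L https://nixos.org/nix/install | sh -s -- --no-daemon
ENV PATH="/root/.nix-profile/bin:${PATH}"

# Use Nix, not apt, to install the project's packages
RUN nix-env -iA nixpkgs.gnused nixpkgs.nodejs
```

The point is that apt only bootstraps Nix; every package the project actually depends on comes from nixpkgs, so the container and the local dev shell draw from the same package set.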
I do wish dockerTools worked better – the layer optimization it provides is really useful if you need to upgrade a few packages.
I think it’s a lot of different factors, many of which aren’t attitudes or beliefs but complex systemic issues that happen in all knowledge fields. At the very least, there’s not much curation of software knowledge. When he says
Where had all the ideas about reusability from the 1970s, the 1980s, the 1990s and all the years after gone?
My first thought is “so where can I read about all these ideas about reusability in one place?” Did someone make a guide to them, or do I have to sift through dozens of primary sources?
It’s a good point. There are books out there which try to collect all the good practices of software development in one place, but a lot of them disagree, and the actual practices differ across software paradigms and use cases.
I also think one reason some of these ideas keep coming up as new is that new companies don’t need a lot of these practices when they start, and only really discover them at a certain size. So maybe each wave of engineers rediscovers this history anew.
As for things like old game saves, old university assignments, and source files for old videos I finished years ago? 99% of that stuff, I would never touch in a decade anyway. It’s a shame to lose it, for sure. But because I couldn’t afford to protect everything I had, I ended up protecting none of what I had. I could have lost so much more.
I slowly came around to this line of thinking for my personal use – have a small number of things I want to back up, focus on securing and storing those safely, and accept that almost everything else is ephemeral. This has the added benefit of making it easy to switch or upgrade machines.
With regard to reproducibility, I don’t really understand how ‘it works on my machine’ is any different from ‘it works on my cloud environment’, unless you are only creating a web application and it is being hosted by the same infrastructure you develop on. If you want to make a desktop or mobile application you are just as likely to get situations where some quirk of how the dev environment is set up makes it behave differently than some subset of customer devices in the wild. The same goes if you decide to change a web application to a different hosting provider.
Even if you are making a web application and hosting on the same platform, different browsers and browser configurations will likely mean that it does not work the same for everyone. Please correct me if I am missing some innovation in the tech that resolves all reproducibility issues.
If your cloud (or any non-local) environment is rebuilt at least a couple of times, chances are very good that some sort of recipe exists, and then it’s a lot easier to compare that to a staging or testing environment. No cruft will accrue over time because you’re always working from a fresh, known set.
Basically the same as CI: you’re starting “fresh” (depending on how many stages you prepare or cache), so you won’t have a random version of some lib somewhere, you won’t have odd env variables from your interactive shell, and so on.
It might be hard to control absolutely everything (e.g., you can’t ensure that the RAM and hardware on your machine is the same as the production machine), but the more things that can be controlled and standardized, the less likely you will be to encounter spurious errors. If you use Nix (or Devbox) to ensure that everyone has the same, reproducible set of packages, you’ve eliminated one major area for quirks.
I’m one of the developers behind devbox, one of the tools described in the article (https://github.com/jetpack-io/devbox). I’m a big believer in the idea that dev environments should be reproducible and portable: take them with you anywhere you want, whether you want to run locally or on the cloud, and ensure you always get the exact same environment no matter what.
We’d love to hear your thoughts on how you’ve used a Cloud Development environment, and what your experience has been with them.
I don’t think I have ever used a “real” cloud environment, but I did programming work on a remote server for a while. Nothing special, just me typing into a couple of terminal windows.
This all went really well, until I started to work while riding the train. It then worked well most of the time, but at least twice during the journey the WiFi would stutter a bit and there would be noticeable lag when typing – occasionally up to a couple of seconds, enough to disturb my thought process so that I completely lost my focus. This was in a West European country with a high standard of internet connectivity.
This inconvenience alone was already enough for me to turn away from the idea of developing in the cloud. Being able to do the same tasks on the machine on my lap without needing any external services is a no-brainer. As developers, we know it is best to avoid external dependencies, and if you are going to include one, especially a flaky one like connectivity, it had better be worth it. The advantages I see listed are nice. But they can also be worked around.
So I agree with the author. There is the opportunity to build some nice things, but I should be able to turn it all off and work offline.
We agree totally. The ideal “cloud environment” is really a portable environment – one where you can easily run it on the cloud, or locally, and where switching is as seamless as possible. This way you can stay productive even if you lose connectivity, or if you are in an environment where you can’t access your dev machine.
Is this really still such a big problem with Python and Node? I never had the need for multiple Go SDKs on my machine.
Yes. Unlike Go, neither Python nor Node has a “Go 1 guarantee” equivalent, and a change of versions very typically breaks a significant amount of working code. As a result, most projects have some system for pinning the Python/Node version and only upgrade periodically. I had a really fun bug in Node a few years ago where they broke Babel, a popular JavaScript transpiler, in something like version 12.5, not even 12.0.
Another difference is that Go, since the addition of modules, will cache downloaded modules at the user level but use module versions specific to each project. Python by default installs packages at the system level: you have to use “virtual environments” to create a project-specific copy of the Python install so that packages install at the project level instead. AFAIK, package downloads are cached at the Python install level, although I could be wrong about this. Node is the opposite: everything is installed to a project-local folder by default, and there’s no download cache shared across projects. (Yarn tried to add cross-project caching, but that version of Yarn never got popular. Deno caches correctly, IIUC.)

The easiest thing to do is to not care about any of this, but on a naively implemented CI system it can add a lot of wait time as you download the same dependencies over and over again. Anyhow, it’s all much more bespoke and harder to ignore if you want sensible defaults than in Go, where you only run into trouble if you want to do something the Go team hasn’t thought of.
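A quick illustration of the Python side of that, using the standard library’s venv module (nothing here is project-specific):

```shell
# Create a project-local environment; packages installed while it is
# active go into ./.venv instead of the system-level site-packages.
python3 -m venv .venv
. .venv/bin/activate

# Inside the venv, `python` and `pip` resolve to the project-local
# copies, because activate prepends .venv/bin to PATH.
command -v python   # prints a path ending in .venv/bin/python
```

Deactivating (or opening a new shell) puts you back on the system Python, which is exactly the per-project switching overhead the comment describes.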
Python by default installs packages at the system level. You have to use “virtual environments” to create a project specific copy of the Python install so that packages instead install at a project level.
This is correct for both Python and Ruby (as well as other languages) – packages and binaries are installed in a central location, and switching between them for different projects can create conflicts or issues unless you use virtual environments. I’ve had a lot of headaches trying to make sure I have the right Ruby installed for projects I’m working on :(
Making this easy was part of the inspiration for Devbox, and Nix provides a great backend to make that possible.
Replacing flake.nix#shells or the legacy shell.nix with a JSON file doesn’t simplify things—it hides them in another layer. This would be a step back: you lose a lot of the flexibility of the Nix language, and you also lose the ability to have more than the default shell. Besides that, they dared use the word “magical” when Nix isn’t magic; also hidden under this veneer is that Nix is best served as the build tool, so the entire build system is reproducible/predictable. The fact that NGINX “no longer work[s] in a standard way” is the feature, because it isn’t mutable.
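For reference, the flake.nix#shells interface being discussed looks roughly like this (a minimal sketch; the package set is just an example):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix develop` drops you into this shell; additional named
      # shells (beyond `default`) can be defined alongside it.
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nginx pkgs.gnused ];
      };
    };
}
```

The extra named shells and the full expressiveness of the Nix language are what the commenter is worried about losing behind a JSON wrapper.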
We’re huge fans of Nix, and Nix flakes are really powerful tools for creating reproducible builds. We first built Devbox as an internal tool to make it easy to create Nix-based dev environments, while providing configuration options that are more familiar to developers who have previously used packages like NGINX. We’re hoping that a more approachable interface for creating dev environments can help drive adoption of Nix overall.
I think this is sending the wrong message. There are a lot of people searching for “how do I get my Nix shell into Docker for deployment” because they missed the point: the dev shell isn’t the killer feature of Nix – replacing Docker, and your dev shell, and your build tool is (or building your container with Nix because deployment constraints dictate containers). You want the build to be top-to-bottom immutable. Creating state at any level of that ruins the reproducibility – including a mutable shell. I get it: Nix is a learning cliff and the dev shell seems like a gentle introduction, but until you go further, you’re missing out on the biggest benefits. And the thing is that setting up the dev shell is the easy part, and getting familiar with Nix the language via a flake.nix or shell.nix is the only way you’ll start to get there (that said, Guile Scheme + Guix is also a worthy alternative to Nix, doing roughly the same thing). So is the idea that developers can’t be asked to learn the tools they are using, so it’s all hidden behind a JSON layer?
Mutability is not necessarily bad if it is declarative rather than imperative. You can still reproduce the entire shell top to bottom with a declarative mutable local store that is ephemeral, which is created at installation time. In this case the reproducibility is not compromised at all since the configuration is still declarative, but it allows a more standard way of using packages that expect mutability. The bad part of mutability is when it is used in an imperative way stored in a permanent location.
On “the dev shell isn’t a killer feature of Nix” – why not? I think it actually can be. Well, I certainly hope so!
Hey, this is a project my company is working on!
We developed Devbox to make it easier to create deterministic dev environments locally with Nix, and then package those same environments with Docker to deploy to the cloud. We have a documentation page with more details at jetpack.io/devbox/.
Happy to answer any questions you might have! It’s still under development and we’d love community feedback
A nice explanation of the benefits over shell.nix/nix-shell would be welcome.
While we use Nix and nix-shell under the hood, we think Devbox provides 3 nice benefits over using nix-shell directly:
1. We provide a simpler interface, so a newcomer to Nix can get their shell up and running without having to learn the Nix language first.
2. The same devbox.json that you use to spin up your shell can also be used to build a Docker container with the same packages.
3. Devbox can autodetect and create a shell + container for you based on the language and framework you are using. For example, if we detect a Python + Poetry project, we can spin up a shell with the required packages installed and the correct build + run steps configured.