But how do you even get anything done without using Kubernetes, Nomad, 3 different service discovery mechanisms, thousands of lines of YAML, and at least two different key/value stores?
I am incredibly thankful for the existence of Docker.
I have less-than-fond memories of trying to set up local instances of web stacks over the past 20 years, and of the pile of different techniques that range from Vagrant boxes (which never built properly) to just trying to install all the things – web server, DB server, caching server, all of it – on a single laptop and coordinating running them.
Now? Docker + docker-compose, and it works. The meme is that Docker takes “works on my machine” and lets you deploy that to production, but the reality is that it goes the opposite direction: it lets you write a spec for what’s in production and also have it locally with minimal effort.
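That local spec can be as small as a single file. A minimal sketch of the idea (the service names, ports, and postgres tag here are illustrative, not from the original comment):

```yaml
# docker-compose.yml — hypothetical two-service stack
services:
  web:
    build: .            # image built from the repo's own Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:15  # pinned major version of the database
    environment:
      POSTGRES_PASSWORD: example
```

With this checked into the repo, “docker-compose up” starts the same stack locally that a server would run.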
Things like k8s I can take or leave and would probably leave if given the choice, but Docker I will very much keep, thank you.
Containers are absolutely here to stay but I think the parent comment is mostly complaining about the vast ocean of orchestration solutions, which have created whole new categories of complexity.
Oh I like Docker, or more specifically the idea of containers. My issue is more that on one side you have containers, and on the other side you have this incredibly complex stack of services you probably don’t need. And yet people tend to lean towards that side, because some blog post told them you really can’t have containers without at least a dozen other services.
Docker and Nix solve a very particular problem that many, but not all, developers experience. If you don’t experience that problem then awesome! Count yourself lucky. But some of us need to use multiple versions of both languages and libraries in our various activities. Some languages provide a way to do this within the language and for specific C libraries, but almost none of them solve it in a cross-language, polyglot environment. For that you will need Nix, Docker, or something in the Bazel/Pants build family.
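For the polyglot case, a Nix dev shell is one way to pin several toolchains side by side. A minimal sketch, assuming current nixpkgs attribute names (python311, nodejs_20, go):

```nix
# shell.nix — hypothetical polyglot dev shell
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  # one shell, several pinned toolchains
  packages = [ pkgs.python311 pkgs.nodejs_20 pkgs.go ];
}
```

Running “nix-shell” in the project directory drops you into an environment with all three, without touching the host system.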
But as you astutely note those are not pain free. You should really only use them if the pain of not using them is worse than the pain of using them.
An interesting observation, mostly about the discussion around this article (and any Docker article): everyone seems to think that the only use case for it is production deployments at scale, but in reality Docker and docker-compose, even with all their warts (and there are plenty), have made it vastly easier for small shops or folks doing self hosting to run complicated server-side applications.
Here’s our app, here’s a nice canned docker-compose that WILL WORK. That’s a significant value add I don’t think gets talked about enough.
This is exactly why I’m a fan of Docker. I’ve been through way too many onboarding processes that took multiple days to get a local dev stack up and running properly, and I have seen the light of “clone this repo, then docker-compose up”.
Have you never been on a Nix’d team? Nix is literally the only tool you need: you can clone the repo, then “nix develop” to get a dev shell, “nix run” to start apps, and “nix build” to create and inspect the whole build locally.
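The three commands above map onto flake outputs. A minimal sketch of such a flake.nix, using pkgs.hello as a stand-in for a real project:

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      # entered with: nix develop
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.go pkgs.gopls ];
      };
      # built with: nix build — and started with: nix run
      packages.x86_64-linux.default = pkgs.hello;
    };
}
```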
Right now, everyone and everything I work with is likely to already have heard of Docker and have some ability to support or work with it. So I’m unsure why I’d want to switch to something that feels less mature, and which has less adoption, just to re-solve a problem Docker has already solved for me.
Simpler to start, harder to get the details right – especially as your matrix of (supported platforms to run and test on × variants of your software to package × external dependencies to handle) grows.
The learning curve with Nix is much steeper – far too steep at the moment, IMO, but it’s being worked on; flakes and the new Nix CLI are a step forward – but Nix covers much more ground than Dockerfiles as well.
Come for the dockerfile replacement, stay for the generic build system, whole-system configurations, home-manager and more.
As someone who has made both Dockerfiles and a flake.nix for a project, as well as currently use both Nix and Docker/OCI tooling almost daily, I just cannot agree with most of your assertions.
Even amongst nix flake enthusiasts there is conflicting info on how to do a nix flake “properly”. The examples in the video showing the problems with Dockerfiles are convoluted. How often do packages go missing from Debian or Ubuntu? If you’re using “ubuntu:latest” and don’t mean to, then you’re doing something wrong (and also unusual) – normally you would pin to a version (e.g. “ubuntu:20.04”) or even specify the exact hash of the image you want to use.
My feeling is that Nix flakes and Dockerfiles prioritize different strengths, and to say one is the replacement for the other is kind of like saying “shoes are better than sandals! you won’t stub your toes and you won’t get wet.” Okay… well, what if that’s not what I prioritize…
I’ve worked with both as well, and I can’t disagree ;)
Even amongst nix flake enthusiasts there is conflicting info on how to do a nix flake “properly”.
Yes, this is unfortunate. But to be fair, flakes are still “experimental” and might not be “there” yet for every possible use case, but they do already support a wide variety of use cases. Especially when combined with flake-aware tools such as colmena or support libraries such as flake-utils(-plus).
My feeling is that Nix flakes and Dockerfiles prioritize different strengths
That’s a good point, one I’ve tried to make as well. Dockerfiles are definitely more “portable” between your co-workers or community members.
But nix(os) offers a rich ecosystem, where you can choose tools and parts to build beautiful OCI images (or whole VMs, from the same expression!) which don’t break because some dependency in your Ubuntu base image does not match with your third-party repo or npm dependency. We’ve all been there, I guess :D
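For the OCI-image case, nixpkgs ships dockerTools for exactly this. A sketch of a layered image built from a Nix expression (pkgs.hello is a placeholder for your own package):

```nix
# image.nix — OCI image from a Nix expression, no Dockerfile involved
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  tag = "latest";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

“nix-build image.nix” produces a tarball you can “docker load”, with layers derived from the dependency graph rather than from RUN lines.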
Sidetrack to say that I’ve pinned to an Ubuntu SHA hash, and that ends up not working after a while when a package is renamed or goes away because of a version bump in the package name itself, or similar. I don’t remember the exact details, but it sucked being bitten by it: pinning the image doesn’t matter when the apt repository changes between image build runs (on the order of months).
Reproducibility is a term which means different things to different people. There are plenty of occasions when nix expressions are pulling down binary blobs. So definitely not reproducible from source in all cases. I like Nix, and use it daily, it’s just more pain than benefit in most places where I use Dockerfiles.
If you watch the talk, I explain that Nix only makes the claim that the inputs will be the same every time. That’s all it does, and all it can do. Build determinism is assisted by the sandbox. Read the section of the thesis called “enforcing purity” to learn more: https://edolstra.github.io/pubs/phd-thesis.pdf
Great presentation on nix flakes (I’ve never used it, but I admit it looks really nice from a reproducibility perspective).
Though I have to point out his initial comparison to Docker is a bit disingenuous IMHO; he basically picks some of the worst things you can do in a Dockerfile for the initial example, including:
FROM ubuntu:latest; it’s best practice to use the most specific tagged image possible, at bare minimum ubuntu:xenial, but even better would be ubuntu:xenial-20210804. Expanding on that it’s also generally better to pull your toolset’s image rather than a generic one if possible, e.g. at work for our Golang projects we use golang:1.xxx where 1.xxx is the version of Go we need. That way it doesn’t change underneath us between builds.
apt-get update && apt-get upgrade; yes, this changes every time. So if you’re really worried about that you should have a “base” Dockerfile image to build, tag that and push it to your registry, and then have your app’s image use that in its FROM line so that you don’t have to worry about that layer ever changing.
CMD ["hello"]; ok, I’ll concede this one since the path can change but usually it’s a better idea to put the entire path to the binary as ENTRYPOINT/CMD so you don’t have to worry about $PATH being wrong (and the risk of this gets lower if you use an intermediate image after the install-ey lines too since that controls the change more).
tl;dr: this is a bad example of a Dockerfile, just like there are probably bad flake.nix out there, this is a REALLY bad Dockerfile example. It might not be quite as reproducible as a flake.nix but you can make OCI containers more reproducible than the example the speaker initially presented.
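Putting the fixes above together, a sketch of the same idea written defensively (the tags, paths, and module layout here are illustrative):

```dockerfile
# Pin the toolchain image to an exact version, not :latest
FROM golang:1.21-bookworm AS build
WORKDIR /src
COPY . .
RUN go build -o /out/hello .

# Small runtime image; could also be an internal pre-built "base" tag
# pushed to your own registry so this layer never changes under you
FROM debian:bookworm-slim
COPY --from=build /out/hello /usr/local/bin/hello
# Full path in ENTRYPOINT so $PATH can't be wrong
ENTRYPOINT ["/usr/local/bin/hello"]
```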
I do want to say, as an outsider, that learning an entire language puts a weird taste in my mouth compared to just describing the state of a container via a Dockerfile, but that might be because I’ve been happily using them for too long. I really need to do more research and fiddle more with nix…
It might be a bad example of a Dockerfile, but sadly I’m seeing ones like it on a daily basis. That’s why I’m hopeful that Nix prevents one from shooting themselves in the foot.
Agreed, if teams don’t have container experts (or expensive consultants) things get out of hand quickly. I wish I could tag my post as a rant because I’m not really “mad” about it.
User error ultimately is the problem and not the tool!
Nah. I don’t use dockerfiles. I don’t use nix. I’ll wait another 5 years for the sediment to settle and let everyone else dig through the shit
apt, mostly, and some ansible. :-P
Can confirm. Have a Pants managed monorepo. It’s painful.
Look forward to watching this talk.
Exactly! Docker-as-convenient-packaging-and-sandboxing-tool as opposed to docker-as-production-deployment-mechanism-at-scale.
It’s a nice presentation. Personally I enjoy using podman-run docker/OCI containers mixed with systemd services on nixos servers.
Well-stocked toolbox vs. search for golden hammer etc.
But a Dockerfile is much simpler and easier to use.
Looks simpler, but has way more pitfalls
At a great cost. That cost is reproducibility. And I value that more than anything. Though different people value different things.
Unfortunately flake.nix is more complicated and harder to use than a traditional default.nix, but one should definitely use Nix rather than Docker.