I’d say the main win from docker is that configuration is generally stored near the containerized service: environment vars, docker compose files, etc., rather than changes to many files in various places in /etc. That’s not an insurmountable barrier to the suggestions, but it is something that helps ensure that software released as a docker image is mostly the same in most deployments. That’s the human part of the answer, not just the technical part.
I also wonder whether there’s a nix alternative approach to https://hub.docker.com/r/containrrr/watchtower (automatic container updates).
Thought-provoking article though. And an interesting contrast to the approach one dev is working on for kubernetes (removing systemd). https://medium.com/@kris-nova/why-fix-kubernetes-and-systemd-782840e50104 via https://news.ycombinator.com/item?id=32888538 A lot of the same arguments there could be made about swapping to systemd managing everything.
Environment vars, docker compose files, etc., rather than changes to many files in various places in /etc.

While docker definitely made it more popular, all those apps still take the environment variables anyway. That means you can configure them either directly through systemd overrides / /etc/default, or through the NixOS config if you use it.
This unfortunately won’t work with things like mysql which handle all variables in the entrypoint instead.
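As a concrete sketch of the systemd-override route mentioned above (the service name and variables are hypothetical, but the drop-in mechanism itself is standard systemd):

    # Hypothetical service "myapp.service"; the same pattern works for any unit
    # whose daemon reads its settings from environment variables.
    sudo mkdir -p /etc/systemd/system/myapp.service.d
    sudo tee /etc/systemd/system/myapp.service.d/override.conf <<'EOF'
    [Service]
    Environment=APP_PORT=8080
    # The leading "-" makes the file optional, /etc/default style.
    EnvironmentFile=-/etc/default/myapp
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart myapp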
Just run a rebuild in a cronjob. The effect will be identical thanks to the idempotency of building Nix derivations.
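A minimal sketch of that, assuming NixOS and channels (the schedule and log path are arbitrary); NixOS also ships a system.autoUpgrade option that wraps essentially the same thing in a systemd timer:

    # /etc/crontab-style entry: pull the latest channel and rebuild nightly.
    # Re-running nixos-rebuild when nothing has changed is effectively a no-op.
    30 4 * * * root nixos-rebuild switch --upgrade >> /var/log/auto-rebuild.log 2>&1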
Second, the software distribution - docker definitely made things easier to ship.

This is quite the understatement.
Linux’s approach of making everything dynamically linked and dependent on everything else, combined with the general complexity explosion we’ve seen in software, means Linux has degraded to the point where you basically can’t run software on it.
Static linking has solved this problem for longer than it has even been a problem, but for GNU-flavoured historical reasons you can’t use it for most software. So instead people have reinvented it, but worse, and the default way to ship software is now a tarball-but-worse containing a disposable operating system that only runs your app.
You still need a host OS to run them on the hardware itself, which will have a half-life measured in months before it self destructs. You can push this out if you never, ever touch it, but even as someone who has repeatedly learned this lesson the hard way (still using Firefox 52, iOS 13, etc.) I still can’t keep myself from occasionally updating my home server, which is generally followed by having to reinstall it.
It really only holds when you’re talking about software which hasn’t been packaged by your host OS tho, right?
If I want to run something that’s in apt, it’s much, much easier to install using apt.
I find it’s easier to bring up a PostgreSQL instance in a Docker container, ready to go, than to install and configure it from apt. Both are pretty easy though.
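For comparison, the two routes side by side (image tag, password, and volume name are arbitrary):

    # Docker: one command; configuration via environment variables, data in a named volume.
    docker run -d --name dev-pg \
      -e POSTGRES_PASSWORD=devpass \
      -p 5432:5432 \
      -v pgdata:/var/lib/postgresql/data \
      postgres:16

    # apt: also short, but the configuration lives under /etc/postgresql/<version>/main/.
    sudo apt install postgresql
    sudo -u postgres createuser --pwprompt dev
    sudo -u postgres createdb -O dev devdb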
I’m on the opposite side of this matter: I have a dev db, I put everything in it, single version, configured once, running ever since. When I played with docker and considered how useful it could be, I decided not to go that direction, because for my use-case docker didn’t seem to add value.
The difference is that you have to learn how apt works if you run an apt-based system. If you learned to use Docker for some other reason (probably for work, because why else would you?) that’s not as widely applicable.
But there’s still a huge difference between learning apt and learning docker.
If you want to do extensive customization, you still have to learn apt to fiddle with the things inside the image itself, plus a lot of docker things on top.
actually, you might argue that docker (and podman) are more applicable because what you learn there can be used on any distro running docker, whereas only knowing how to use apt limits you to only distros that use apt…
Not at all, in the last year or so I’ve had two installs with almost nothing on them (htop process list comfortably fits on 1 page) self destruct (boot into unusable state/refuse to boot) on their equivalents of apt-get upgrade.
I’d recommend trying to understand what exactly happened and what’s failing when you run into situations like that, especially if it happened more than once. Things don’t normally self destruct. Sure, you can run into a bug that renders the system unbootable, but those are pretty rare. A significant part of the world’s computing runs on Linux and it runs for years. If your experience is “will have a half-life measured in months before it self destructs”, it may be worth learning why it happens to you.
Wellllll… Debian systems don’t self-destruct on apt upgrade, but there are many other downstream variants that still use apt but don’t believe in old-fashioned ideas like … making sure things actually work before releasing.
At least, not if you upgrade them regularly. I’ve hit a failure mode with older Debian systems because apt is dynamically linked and so when the package / repo format changes you end up not being able to upgrade apt. This isn’t a problem on FreeBSD, where pkg is statically linked and has a special case for downloading a new version of the statically linked binary that works even if the repo format changes.
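(For reference, that special case is the pkg(7) bootstrap shim in the FreeBSD base system, which can re-fetch the current statically linked pkg at any time. Roughly:)

    # Force re-bootstrap of pkg even if the installed one can no longer
    # understand the repository format.
    pkg bootstrap -f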
Frankly, why would I? 15 years ago I probably would have. Nowadays I understand my time is too valuable for this. When I spend my time learning something, there are so many wonderful and useful ideas in the world to immerse myself in. Understanding why my almost completely vanilla OS nuked itself for the nth time after I used it normally is not one of them.
Windows and Mac both have comfortable access to the good parts of Linux through WSL/docker (WSL is by far the most unreliable thing on my PC despite not even needing to be a complete OS) while also not dropping the ball on everything else. For the one machine I have that does need to be Linux, the actual lesson to learn is to stop hitting myself and leave it alone.
In other circles:
For me the answer is: because I can do something about it, as opposed to other systems. For you the bad luck hit on Linux. I’ve had issues with updates on Linux, Windows, Macs. Given enough time you’ll find recurring issues with the other two as well. The big difference is that I can find out what happened on my Linux boxes and work around that. When the Windows update service cycles at 100% CPU, manually cleaning the cache and the update history is the only fix (I keep running into that on multiple servers). When macOS can’t install dev tools anymore after an update, I can’t debug the installers.
In short: everything is eventually broken, but some things are much easier to understand and fix. For example the first link is trivially fixable and documented (https://wiki.archlinux.org/title/Pacman/Package_signing#Upgrade_system_regularly)
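The documented remedy for a stale Arch install boils down to refreshing the keyring before the rest of the upgrade, something along these lines:

    # As root: the packager keys may have rotated since the last upgrade,
    # so update archlinux-keyring first, then do the full upgrade.
    pacman -Sy archlinux-keyring && pacman -Su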
To largely rehash the discussion on https://lobste.rs/s/rj7blp/are_we_linus_yet, in which a famous tech youtuber cannot run software on Linux:
Given enough time you’ll find recurring issues with the other two as well.

This is dishonest, the rate and severity of issues you run into while using Linux as intended are orders of magnitude worse than on other OSes. In the above, they bricked their OS by installing a common piece of third-party software (Steam). Software which, amusingly, ships with its own complete Linux userspace, another implementation of static linking but worse, to protect your games from the host OS.
because I can do something about it, as opposed to other systems

This is untrue, Windows at least has similarly powerful introspection tools to Linux. But even as someone who ships complex software on Windows (games) I have no reason to learn them, let alone anyone trying to use their computer normally.
For example the first link is trivially fixable and documented

In this case you can trivially fix it, and you can also trivially design the software such that this never happens under normal conditions, but the prevailing Linux mentality is to write software that doesn’t work and then blame the user for it.
This is dishonest, the rate and severity of issues you run into while using Linux as intended are orders of magnitude worse than on other OSes.

It’s not dishonest. This is my experience from dealing with a large number of servers and a few desktops. Including the ability to find actual reasons/solutions for the problem on Linux, and mostly generic “have you tried dism /restorehealth, or reinstalling your system” answers for Windows.
This is untrue, Windows at least has similarly powerful introspection tools to Linux.

Kind of… ETW and dtrace give you some information about what’s happening at the app/system boundary. But they don’t help me at all in debugging issues where the update service hangs in a busy loop, or logic bugs. You need either a lot of guesswork or the source for that one. (Or reverse engineering…)
Meanwhile, the host OSes are refusing to properly package programs written in modern programming languages like Rust because the build system doesn’t look enough like C with full dynamic linking.
What do you mean by this?
I’m a package maintainer for Arch Linux and we consistently package programs written in post-C languages without issue.
Via collaboration and sharing with other distributions, we (package maintainers) seem to have this well under control.
I mean, maybe some distros, but you seem to think all do? That’s incorrect :)
Now with all the things that get better, there are some things which don’t have a great interface yet. The major one is a replacement for docker-compose. You can group the services, make them talk to each other, etc. But, for example, setting up multiple databases is more manual with the systemd/nix combo. You’ll need to either ensure correct privileges on a shared instance, or do extra configuration to run two daemons in parallel. With multiple instances, the need to keep the ports consistent for the backend services is also added - no more: just connect to hostname “db” on default port. It’s not a showstopper though and I expect that some tool will be created to deal with the scenario in the future.

And for me, docker-compose is the docker killer app.
I recognize that Docker and containerization more generally have some serious security problems to contend with, but for self hosting at home, the ultra convenience of docker-compose more than compensates, at least for me.
I can reconstitute the entirety of my environment anytime with a handful of docker-compose files. I hear people saying similar things about Nix, but it’s super interesting to read the author cite the lack of an analogue in the Nix-i-verse.
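A minimal sketch of what that looks like (service names, images, and credentials are invented for illustration; the point is that the app reaches the database simply as hostname “db”):

    cat > docker-compose.yml <<'EOF'
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - dbdata:/var/lib/postgresql/data
      app:
        image: ghcr.io/example/myapp:latest
        environment:
          DATABASE_URL: postgres://postgres:example@db:5432/postgres
        ports:
          - "8080:8080"
        depends_on:
          - db
    volumes:
      dbdata:
    EOF

    docker compose up -d    # or, with the classic tool: docker-compose up -d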
I’m in the same boat. My collection of docker-compose stacks is very configuration driven and reproducible enough for my home environment. I’ve been playing with NixOS but haven’t been able to justify moving over my Docker setup yet.
I’m also still using Docker Machine so I can edit all my configs on a separate machine that isn’t the deploy target. I use dotenv to make it so that Docker Machine automatically executes commands against the correct deploy target when I cd into a dir. Very simple setup that works well.
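As I understand that setup, it amounts to a per-directory environment hook that points the Docker CLI at the remote engine, e.g. with direnv (the machine name is made up):

    # .envrc in the stack's directory: docker-machine env prints the DOCKER_HOST
    # and TLS variables for that machine, so every docker / docker-compose command
    # run from this directory talks to the remote deploy target.
    eval "$(docker-machine env homeserver)"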
I think NixOS containers are probably the Nix analogue to docker-compose, but they’re a lot more clunky (especially when it comes to networking)
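For reference, the imperative flavour looks roughly like this (container name and config invented; the declarative containers.<name> option in configuration.nix is the more common route):

    # Create and start a lightweight NixOS container running PostgreSQL.
    sudo nixos-container create devdb --config '
      services.postgresql.enable = true;
      networking.firewall.allowedTCPPorts = [ 5432 ];
    '
    sudo nixos-container start devdb
    sudo nixos-container root-login devdb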
I feel like it’s not (yet?) well-known, but there’s a project which wraps docker-compose into nix-tooling https://docs.hercules-ci.com/arion/
To maintain compatibility with FreeBSD (although there is no business need for it), we switch our dev env from Linux to FreeBSD regularly. The backend is in Java so this is basically a non-issue. However, setting up Postgres on FreeBSD is different than on Linux, and that has hindered the ‘seamless switching’.
Looking into docker, I realized that it does not work for FreeBSD (or other BSDs for that matter) – and that’s the reason why docker was not considered for PG setup, in our case.
I have not found a docker-compatible PG install/mgmt tooling on freebsd. Otherwise, it would have helped a lot.
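For what it’s worth, the FreeBSD side is only a few commands, they are just different commands; the package name below is an assumption (use whatever version the ports tree currently carries):

    # As root: install the server, enable the rc service, initialise the cluster.
    pkg install postgresql16-server
    sysrc postgresql_enable=YES
    service postgresql initdb
    service postgresql start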
Looking into docker, I realized that it does not work for FreeBSD (or other BSDs for that matter) – and that’s the reason why docker was not considered for PG setup, in our case.

There is work underway to fix this. There are a bunch of layers here:
There’s a shim that manages a container instance, whatever that means. Some shims run full VMs; the default Linux one (runc) manages a tangle of cgroups, namespaces, and so on. runj uses jails on FreeBSD, but it’s not really ready for widespread use. Or, wasn’t last time I tried it. It looks as if a load of changes landed in the last few weeks though.

The shim is used by containerd, which manages snapshots (on FreeBSD, it can use ZFS), fetching images, and so on. I believe the latest version supports FreeBSD, but the container image spec doesn’t yet cover everything that it would want for defining FreeBSD containers.

CNI provides plugins for configuring the network.

The thing people call Docker is moby. There are alternatives such as Buildah. These talk to containerd via a control interface and do some things. moby almost worked with FreeBSD last time I tried it but it wasn’t very reliable. It made a bunch of assumptions about the specific CNI plugins that work on Linux that didn’t work on FreeBSD.

The FreeBSD Foundation is currently hiring (or has just hired, not sure what the status is) a Go developer to work on improving this tooling.
Unfortunately, the Docker tooling is not very usefully modular if you want to replace the lower-level parts of the stack. Even switching from Debian to Arch, for example, requires modifying the Dockerfiles to specify different base layers. I think Buildah might be better here because it replaces Dockerfiles with shell scripts that run commands in a container and so is able to more easily add conditional execution.
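A sketch of that Buildah style, with the base layer chosen by an ordinary shell conditional (image and tag names are arbitrary):

    #!/bin/sh
    # Build roughly the same image from different base layers without
    # maintaining separate Dockerfiles.
    case "$1" in
      arch) base=docker.io/library/archlinux:latest ;;
      *)    base=docker.io/library/debian:bookworm ;;
    esac

    ctr=$(buildah from "$base")
    buildah run "$ctr" -- sh -c 'echo "hello from the build step"'
    buildah config --entrypoint '["/bin/sh"]' "$ctr"
    buildah commit "$ctr" localhost/myapp:latest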
Thank you for the insightful reply. Mimicking interfaces across deep layers within FreeBSD jails, bhyve, and networking – such that the whole ‘user-visible’ Docker ecosystem ‘just works’ – is an arduous undertaking.
It shouldn’t be that bad, I hope. The OCI container infrastructure is pretty modular and supports things like:

Windows containers on Windows, so the filesystem images must not include any Linux-specific assumptions.

Using gVisor or KVM (or Hyper-V, or Xen) and a small Linux VM image instead of cgroups and namespaces to handle the isolation, so it can’t leak too many assumptions about how Linux’s shared-kernel isolation works.

Using a variety of different network sharing or isolation models, so it can’t leak anything about how the network works.

Getting these things to work with FreeBSD jails (and, I hope, bhyve) involves:

Specifying how FreeBSD containers are described (for example, resource limits in something that maps to RACCT, minimum FreeBSD kernel versions, and so on).

Implementing a containerd shim that uses jails (OpenBSD has one that uses their hypervisor to run Linux VMs; I hope FreeBSD will also get one that can use bhyve for Linux and FreeBSD VMs).

Configuring the network setup.

I suspect the last one will be the most complex because FreeBSD makes a bunch of assumptions about how jails are mapped to networks that may be less flexible than the OCI container model expects.
I have not found a docker-compatible PG install/mgmt tooling on freebsd. Otherwise, it would have helped a lot.

I’ve been using CBSD to manage this and the workflow is pretty smooth, but perhaps under-documented. The basic workflow is described in a nice article, which you can use with the correct cbsd form:
https://freebsdfoundation.org/wp-content/uploads/2022/03/CBSD-Part-1-Production.pdf
https://github.com/cbsd/modules-forms-postgresql
It’s been problem free and I quite like the mix of TUI and shell scripting.
Thank you, from what I understood about CBSD, it cannot use docker image definition files. I would not mind executing cbsd-compose up if that’s the only thing we needed to change. But it seems that CBSD would require separate definitions.

True, you need your own definitions, but at least personally I find the CBSD approach comforting in that it just uses Puppet/Chef/Ansible for the configuration. I know those tools and enjoy a lot of their features that aren’t available in the Docker world. I will admit, if you only use CBSD for development that effort is likely not worth it. I’ve taken to just running a docker machine in bhyve for work that uses docker, but I’m not completely happy with it.