This article wouldn’t convince me to use containers instead of using Terraform and Ansible (or perhaps NixOS) to spin up and destroy disposable hosts. Manually messing with production hosts is what gets you into the kinds of trouble the article describes, but infrastructure-as-code approaches avoid most of it.
What’s more, I don’t see how containerizing even solves problems of the form, “I just manually messed with the host and now it won’t boot.” You are still going to need to apply OS updates to the underlying host from time to time, and you can still get it wrong if it’s an ad-hoc process you don’t test first.
Don’t get me wrong: I use containers in production too, but sometimes they don’t buy you much compared to launching a service as a systemd unit if you’re deploying code directly to specific hosts rather than using an orchestration system.
For me, at least, the deciding factor is often whether the application code is a self-contained deployable artifact like a Java app packaged in a fat jarfile or a statically-linked Go program. Those, to me, are less compelling to containerize.
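For what it’s worth, the non-container path for one of those self-contained artifacts is short enough to sketch; the binary path and unit name below are made up for illustration:

```bash
# Minimal sketch: run a self-contained binary (fat jar launcher, static Go binary, etc.)
# as a systemd service, no container involved. Paths and names are placeholders.
install -m 0755 ./myapp /usr/local/bin/myapp

cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=myapp (example self-contained service)
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
DynamicUser=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now myapp.service
```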
All cloud providers have APIs that let you provision and destroy instances. Terraform and Ansible are just layers on top of those APIs and of shell scripting, respectively.
I spin up instances using aws-cli or whatever the equivalent is and throw a shell script at it to set it up.
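Roughly like this, to sketch it, assuming AWS; the AMI ID, key name, and setup.sh are placeholders, and any cloud CLI with a user-data mechanism works the same way:

```bash
# Launch a fresh instance and hand it a bootstrap script via user data.
# Every identifier below is a placeholder for illustration.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key \
  --user-data file://setup.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=app-host}]'

# setup.sh is the version-controlled script that configures the host:
# install packages, drop in the service definition, start it, and so on.
```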
Sure, those specific tools weren’t really the thrust of my argument. No matter what tools you’re using, if you are following a “Create a clean new instance and configure it by running a testable, version-controlled piece of software” approach to system management, you will mostly avoid the problems the article talks about.
The article omits that container upgrades can fail too, so you should test them as well. By the same logic, you can still install OS package upgrades on a staging environment, see whether everything still works, and roll back if it doesn’t…
The only part I’m buying is the case of different JVM or libc versions, where this gets interesting because you can colocate everything on the same server. Although, let’s face it, that’s probably not so common.
This seems to be trying to get me to overcomplicate my production infrastructure just so the author’s favourite toy gets more users…
Containers have their uses, but deploying a bunch of services, all managed by one team, onto the same box is not one of them.
Some of you out there are still stuck on old deployment workflows that drop software directly onto shared hosts.
[…]
There. You’re done. Now you can go live your life instead of updating a million operating system packages.
I honestly don’t follow this logic. Isn’t the point of shared hosting that you pay someone else to update your operating system packages? This looks like a lot more work than what I’m doing now.
I think they mean shared as in “we use this server for more than one thing”.
Now that’s the progress we’ve all been promised, right? We’ve gone from dealing with outdated OS packages to dealing with outdated docker images. :P
If your OS has good tools, it can generate a report of all the versions of the installed packages and correlate that with security fixes, then install the fixes.
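On Debian and its derivatives, for instance, that report-and-fix loop can be approximated with stock tooling; this is a rough sketch, not a complete patching policy:

```bash
# Report the installed package versions, then see what has pending updates.
dpkg-query -W -f='${Package} ${Version}\n' > installed-packages.txt
apt-get update
apt list --upgradable

# The unattended-upgrades package correlates pending updates against the
# security sources and can install just those fixes.
unattended-upgrade --dry-run   # preview what would be installed
unattended-upgrade             # install the pending security updates
```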
If only Docker had shipped with an automated and mandatory reporting mechanism to do the same.
It doesn’t go away, but if you have enough services on one instance, this makes life much easier. Being able to migrate each service on its own to a new version of the OS and allowing the base OS to be updated independently can really help.
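Concretely, something like this, with made-up image names and each service's Dockerfile pinning its own base image:

```bash
# Hypothetical layout: two services on one host, each pinned to its own base image.
# service-a has already been rebuilt against the newer Debian base; service-b has not.
docker build --pull -t service-a:debian12 ./service-a   # its Dockerfile uses FROM debian:12
docker build -t service-b:debian11 ./service-b          # still FROM debian:11

docker run -d --name service-a service-a:debian12
docker run -d --name service-b service-b:debian11

# Meanwhile the host OS underneath gets its own updates on its own schedule.
```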
I use LXC containers in my home lab, with automated backups (local and remote). Before any major changes I just create a new backup (takes a few seconds) and do whatever. When things blow up and I can’t find a quick fix, I just roll back and deal with it later when I have more time. To me this is a good mix of getting to control what I run (rather than a container image that I have to trust was done right) while still getting some of the benefits of containers. Obviously this is a home-lab setting, so it won’t scale too well.
Big fan of LXC here, too.
How do you back them up exactly? I found that creating & exporting snapshots of my containers takes much more than a few seconds.
I use Proxmox! It has a web UI and console utilities to manage LXC containers. You can also run VMs the same way. The backups take a few seconds for small containers, but can take a lot longer depending on the container/VM disk size and the underlying storage (HDD vs. flash, etc.).
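The console side of that workflow looks roughly like this; the container ID and storage name are placeholders:

```bash
# Take a quick snapshot before a risky change, then roll back if it goes wrong.
pct snapshot 101 pre-upgrade
# ...do the risky change inside the container...
pct rollback 101 pre-upgrade

# Scheduled/full backups (what the web UI's backup jobs drive) go through vzdump.
vzdump 101 --mode snapshot --storage local --compress zstd
```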
I love containers for many things, but I wonder how many hours it would take me to put my WordPress site into a container when I can spin up a new server and reload it from my backups in under two hours. (And yes, I’ve done this to spin up a staging server.)
Not everything needs to be in a container. The old “sysadmin builds a server” model still works.
Your build doesn’t have to be good, or scalable. I will take 25 garbage shell scripts guaranteed to run isolated within a container over a beautifully maintained deployment system written in $YOUR_FAVORITE_LANGUAGE that installs arbitrary application packages as root onto a host any day of the week.
I both agree and disagree here. Containers are just a packaging format that is slightly better than the dumpster fire that is most of the Linux packaging out there (I wonder if only NixOS/Guix get it right…). Let’s use containers and save ourselves some of the headaches at $WORKPLACE.
Where I disagree is with the idea that containers are the final solution to our software distribution problem. We should not get complacent about the software distribution story. I remember the times when you could arj x a DOS game or copy a Mac OS binary and it just worked. We can get that back.
My multi-purpose, shared-with-friends VPSs have been around for at least 15 years and they’ve never suffered the fate described here. We use Debian stable and, perhaps most importantly, the vast majority of our services are from regular Debian stable packages.
Slightly off topic: are there clear axes of benefit along which I can see how containers and reproducible builds solve these sorts of problems?
Maybe what I want is a set of axes that shows which problems are solved?