Thank you a ton for this, whoever is behind it. It could probably use a description of how much of what Docker offers could be achieved with LXC alone, but in a more orthogonal way — i.e. integrating better with the existing ecosystem.
I’ve seen projects that only provide a Docker image as a means of deployment, which IMHO is fairly awful.
That’s really bad. It’s not unique to Docker, though—at work we have an application running in AWS which is “deployed” by cloning an already-running instance. Whoever set it up the first time neglected to document what they did. (To be fair, I guess we’re the only people suffering from our poor deployment management in this case.) Virtualization and containerization are not fixes for the sins of the past, just ways of packaging them up nicely so you can pretend they don’t exist.
Immutable code and mutable data separation dictatorship.
Immutable code is a very, very good idea.
Overall the author of this post is very angry about docker, and my guess is they just walked into a poorly maintained ops environment and needed to rage. Many of the things they complain about are wrong (single process only) or misleading (can’t use standard linux tools: if you’re on the host, just ‘docker exec -it <container> bash’ and go crazy).
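To spell that out: standard tools work fine both inside and alongside a container. A quick sketch (the container name `web` here is made up):

```shell
# Open an interactive shell inside an already-running container
# (the container name "web" is hypothetical).
docker exec -it web bash

# Inside that shell, ordinary Linux tools (ps, top, strace, ...) work
# as usual. From the host, the container's processes are also visible:
docker top web
```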
There is also a tinge of vendor lock-in which I find confusing. Using Docker is as much lock-in as using python. At some point you have to go with some technical stack; in the author’s mind this is evil cloud company lock-in and not, you know, engineering.
Any response to this? My impression is that docker’s just a set of tools to streamline interaction with containers, which makes all this feel a little overblown, especially since it’s open source. Having used docker to run generic applications on a bog-standard Debian install with sysvinit (as far from a “DockerOS” as one could credibly get on Linux), a bunch of these criticisms ring false to me. But on the other hand, I’ll be the first to admit I haven’t looked into it at any level of detail.
I’ve just wasted a couple days trying to get some simple stuff working in a docker container. My understanding is that docker is meant to be a solution for dependency hell, and in some cases, it might help. But in my case, it’s a cure that’s worse than the disease.
Docker is a solution for process management and isolation, not dependency hell.
It really shines when you have many processes running across a fleet and want to do rolling upgrades. You can tie your metrics to the docker SHA, so if one SHA is causing trouble you can kill that and roll back to the previous, known-good version.
I can’t think of any open-source solution that lets me do this with arbitrary linux processes other than docker.
I’m confused about what you’re actually saying here. In various places I’ve worked, the deployed service exposed a version that could be used for metrics, and if that version was bad you rolled it back to another one. What is docker doing that’s special?
Normally you’d have to implement the roll forward/back logic yourself. Docker does it in a way that is efficient on the network, guarantees you’re running the version advertised, and limits the harm zombie processes can cause. Having worked in an environment where all three of those issues have been problems, it’s nice to see this done in a principled way.
How is docker doing this in a way that doesn’t require extra work? For example, on a previous system I worked on, we used a configuration management system (ansible) to perform updates; it verified the version afterwards and produced a report. Old versions of the software were kept around for a while in case of rollback. I’m not understanding how docker solves the things you just said, since doesn’t someone still have to tell docker to run the different versions?
Yes, but it handles the download and verification of the image, executing the image, keeping it isolated (from other containers and from the host system), and versioning everything about the application (not just your code, but all libraries and literally the entire filesystem). That isolation is great too, because when you kill a container, you know everything inside is dead. It’s just a good middle layer for all this stuff. If ansible already does all that then great; I’m in a position where I don’t have that, and docker is a nice place to start. In a production environment you’d probably use docker with ansible or fabric: the former for process isolation and lifecycle management, the latter for fleet management.
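A minimal sketch of that lifecycle, with a made-up image name:

```shell
# Download the image; layer checksums are verified during the pull.
docker pull registry.example.com/myapp:2.1.0

# Run it isolated from the host and from other containers.
docker run -d --name myapp registry.example.com/myapp:2.1.0

# Force-removing the container kills every process inside it,
# so nothing lingers on the host.
docker rm -f myapp
```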
To give another example, if an application is misbehaving I can just fire up its docker image on my local machine and poke at it. If I ran the ansible config locally, it would trash my workstation. Also, from what I understand, you can’t apply two different ansible configs at the same time to do differential debugging.
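For instance (image names and tags are hypothetical), you can poke at the exact production image locally, and even run two versions side by side:

```shell
# Open a throwaway shell in the misbehaving production image; --rm
# deletes the container on exit, leaving the workstation untouched.
docker run -it --rm myapp:2.1.0 bash

# Differential debugging: run the known-good and suspect versions
# simultaneously on different host ports.
docker run -d --rm -p 8080:80 --name good myapp:2.0.0
docker run -d --rm -p 8081:80 --name suspect myapp:2.1.0
```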
It’s fascinating to me that microkernels try to solve essentially the same problem as Docker, but using precisely the opposite approach. Docker adds a layer on top of an OS; microkernels remove stuff from the OS.
Sigh. I couldn’t read most of this due to the sweeping generalizations and downright incorrect statements. Honestly, I had trouble reading past the first paragraph with:
Actually most software has to be heavily modified to be run under those conditions, dictated by Docker operating environment.
What? I’ve run my company’s large enterprise software with no changes at all nicely inside a Docker container. I can’t even think of what is meant by this, but with words like “dictated” and “dictatorship” being thrown around, I simply can’t take this seriously.
Docker’s not the be-all and end-all that solves everything, but it definitely has a place and isn’t going away anytime soon.
Am I the only one unable to access it?
I was hoping for more examples of applications that are poor fits for Docker. The article uses Postfix to illustrate Docker’s pain points, but I can’t think of a web application that would want to fork itself and change its permissions. Maybe systems programs just aren’t Docker’s target?
Can one not fork in Docker?
Of course you can. You can easily get a shell running in a container with an already-running application, even. (It’s a good thing, too, or debugging would be an unbearable pain in the ass.) In fact, somebody’s done Postfix. (I can’t vouch for that image, but it’s a safe assumption it at least runs.) This article is very short on details and many of its complaints feel false to me without a lot of clarification.
I’m sorry, but the author here is completely missing the point. The point of Docker is that it isn’t a virtual machine, and the author apparently doesn’t grasp this fundamental concept.