For every article I read that slams Docker, and specifically its use in production, I think about how many teams are using Docker in production successfully. It is important to be aware of a technology’s faults, but I don’t want it to become a meme that you can’t use Docker in production.
Given the hype-based nature of industry software the last few years, I must confess some skepticism about how many companies are using it for established products.
It is entirely probable I’m just ignorant here, but I have heard the same “so many folks are using it in production” claim about Docker, Haskell, Rust, and other semi-niche things often enough that I wonder how much of it is true.
You should also keep in mind that saying you’re using something doesn’t make it true. I know of companies that have put out blog posts claiming they use x to run their y, when the truth is that one engineer played with it once and they’re still running their old-trusty in prod.
It’s being used at scale in quite a number of places. For certain kinds of architectures it’s a huge win. I was just talking to the manager of a group at Facebook who was saying they don’t use any virtualization at all other than containers running on stock hardware. They have a service discovery / management layer that handles keeping track, and it works great.
Do you mean they are using containers at scale or specifically docker? There are other ways to deploy containers than Docker.
Facebook is on a homegrown thing called Tupperware.
I personally only trust what I see, and I see people using Ruby, Redis, Node, Angular, React, Linux, Docker, and so many other technologies, but not quite in the way the tech blogs say.
I always wonder what people mean by production. I’ve used, or tried to use, tons of shit which I was promised was production ready only to discover it really was just shit.
With enough effort, anything can be used in production. That doesn’t mean it’s a good idea.
I find this whole issue really interesting, and this post is really acutely timed for me, thanks for putting it up.
Early trials of Docker put me right off, but I’ve dug into the workstation client recently and I’ve been really pleasantly surprised. Seems a nice, simple way of running jail-like envs with nice isolation, which could most likely replace Vagrant in my workflow - if the deployment story is straight. But looking into that I find a bunch of stories like this, and this one is kind of the icing on the cake.
Is there anyone here on lobste.rs who’s using Docker really successfully in deployment systems and can give an insight into this? What’s the deal, are you getting more or less downtime and hassle? Are you having to hack round things to get things running smoothly like the guy in this post suggests? Do the benefits it brings compensate sufficiently? How comparable is the amount of work you’ve had to do to get a stable Docker workflow in place with what you’d have had to do using another system?
We’re using Docker in production at work, and not looking to back away from that decision.
I’m not gonna sit here and say the original post is wrong - a lot of stuff in it is right. Yes, you need to write a script to clean out images (and it’ll be janky). Yes, something breaks in every release (the last two changed the output format of their syslog adapter, which was frustrating).
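For the curious, the heart of that janky cleanup script is basically a filter over `docker images` output that picks out the untagged (“dangling”) layers. This is a rough sketch, not our actual script; the real one also deals with stopped containers and runs from cron:

```shell
#!/bin/sh
# Rough sketch of the image cleanup: pull the IDs of untagged
# ("dangling") images out of `docker images` output so they can be
# removed. Dangling images show <none> for both repository and tag.
dangling_ids() {
  awk '$1 == "<none>" && $2 == "<none>" { print $3 }'
}

# On the host this is run as something like:
#   docker images | dangling_ids | xargs -r docker rmi
```

The `xargs -r` matters: without it, an empty ID list makes `docker rmi` error out, which is exactly the kind of jankiness I mean.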
Honestly though? It comes down to approach. If Docker doesn’t give you (or a group of people in your organisation) some clear benefits, don’t use it. That’s a cultural issue, not a technical one. If you do decide it’s worth it, then remember this quote from Julia Evans:
You don’t just set up new software and expect it to magically work and solve all your problems—using new software is a process.
Oh, side note: we don’t run our databases (or anything stateful) in containers, but never say never. Docker may not be the container system most suited to it, but I don’t think putting cgroups and namespaces up around a database process is an inherently bad idea.
So, honest, honest question (please don’t tell me it’s just because duuuuuh, Linux users are stupid, hahahha, stupid Linux users)… why are we using Docker instead of BSD jails? I don’t really know much about either, but if jails are what people seem to think we should have used, why didn’t they become the popular option? The top Google hit I can find for this question is that Docker is not at all like BSD jails, without further explanation. So, someone out there thinks that Docker does something people need which BSD jails don’t do. What is that?
And I doubt it is “runs on Linux”, because seeing how the kernel seems kind of incidental (you need a VM anyway to run Docker on Windows and macOS), there must be a deeper reason. Can someone who understands both jails and Docker well enough explain?
Docker provides a lot of management mechanics on top of raw containerization (where, by my understanding, having actually used neither, e.g. LXC is much closer to jails in terms of raw functionality). I’ve personally found the Docker features I’ve used handy, though I can’t speak to how robust, well-designed, or generally applicable any of them are. And I think “runs on Linux”, or more precisely “runs Linux binaries”, is actually a killer feature: there’s a surprisingly large amount of proprietary server software out there that is exclusively for Linux, and for which jails provide zero help. Once you’re using Docker to run your Linux binaries on your Linux servers, the ridiculous contortions needed to also run it on non-Linux systems almost make sense, from the perspective of maintaining a consistent interface.
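Concretely, the management mechanics I have in mind are the image workflow: build a layered image from a Dockerfile, push it to a registry, and run it on any host with a daemon, which jails and plain LXC don’t hand you out of the box. A sketch, with a made-up registry and image name (the commands are echoed rather than executed, since running them needs a live daemon):

```shell
#!/bin/sh
# The build/ship/run workflow Docker layers on top of raw containers.
# registry.example.com/myapp:1.0 is a hypothetical image name; we echo
# the commands so the sketch doesn't require a Docker daemon.
IMAGE="registry.example.com/myapp:1.0"

echo "docker build -t $IMAGE ."   # build a layered image from a Dockerfile
echo "docker push $IMAGE"         # ship it to a shared registry
echo "docker run -d $IMAGE"       # run it on any host with a daemon
```

With jails you can absolutely script the equivalent, but the registry and the content-addressed image layers are the part people seem to actually want.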
Also, Docker has a marketing department, which unfortunately almost always becomes the “killer feature” in a corporate environment.
I’m late to the party here, but figured someone might still get value out of this: We use docker containers to send between 100 and 150 million emails a day, and to keep a few legacy applications together on some old hardware.
It’s a solution that more or less works, but the ‘Docker’ bit is the least reliable part of the whole architecture (CentOS, Docker, postfix, custom scripts). Basic commands often fail and require manual cleanup (e.g. docker attach), and the Docker daemon is a single point of failure.
Networking and logging are more complicated and limited than I feel is necessary, and we don’t do anything with storage except for mounting postfix queue directories into the containers.
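That mounting amounts to a bind mount at container start, roughly like this (the image name is a stand-in, and the real invocation lives in our custom scripts, so the command is echoed here rather than run):

```shell
#!/bin/sh
# Bind-mount the host's postfix queue into the container so queued mail
# survives container restarts. "our-postfix-image" is a hypothetical
# name; echoing avoids needing a Docker daemon in this sketch.
QUEUE=/var/spool/postfix

echo "docker run -d --name mta -v ${QUEUE}:${QUEUE} our-postfix-image"
```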
Would we use it again? Maybe. Our devs say they like Docker, but I think they like the idea of containerization more than they like Docker itself. I don’t see any huge advantages over something like LXC or rkt. I actually came to Docker from LXC, expecting something significantly different or better, and was baffled by the hype and popularity.
Although they’re architecturally different, I really like FreeBSD jails, especially with ZFS, nullfs, and other goodies that don’t exist on Linux. It seems like a much more solid base to build infrastructure on top of. See projects like cbsd (https://www.bsdstore.ru/en/about.html) if you want to see some crazy-cool ideas.
At my last job we ran a few services in Docker on versions 1.8 and 1.9. Garbage collection was the biggest issue I experienced when trying to run many containers on a host: every time we deployed a new image, all the previously running containers would be left behind and eat up disk. Other than that, we never had any major issues, except for stale CID files not getting removed. After reading this I don’t think I’d want to use Docker for anything critical, but it’s definitely nice for running many services in development.
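For anyone hitting the same CID-file annoyance: `docker run --cidfile` refuses to start if the file is already there, so our wrappers ended up deleting it first. A rough sketch (the echo stands in for the real docker invocation):

```shell
#!/bin/sh
# `docker run --cidfile FILE` aborts if FILE already exists, e.g. after
# a crashed run, so clear any stale file before starting the container.
start_container() {
  cidfile=$1; shift
  rm -f "$cidfile"                               # drop the stale CID file
  echo docker run -d --cidfile "$cidfile" "$@"   # echo stands in for exec
}
```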