1. 13
  1.  

  2. 6

    I still don’t understand what all the hype is about. My only experience with Docker so far is that the team who look after our CI infrastructure decided to start “Docker-izing” everything recently. Rather than running build tasks directly on the build server, they’re run inside their own Docker containers. All of the little apps around the infrastructure (IRC bots, deployment tools, etc) run inside Docker containers.

    Perhaps I’m being too cynical, but I fail to see any gain in this. The apps are deployed to their own AWS instances anyway, so they don’t benefit from container isolation. We don’t do anything interesting with Docker on the build server like run jobs in parallel. What I do see is that we now have serious issues every few days. Docker is riddled with bugs (a recent one[0] keeps filling up the disk on the build server), and CoreOS, which we now use instead of Ubuntu, seems to be hugely unstable, with software updates breaking our apps.

    I want to understand the hype, I really do, but I guess so far Docker hasn’t “clicked” for me.

    [0] https://github.com/docker/docker/issues/8693

    1. 15

      Here’s a concrete gain with Docker containers: it turns your app’s compilation phase (whether that’s an ‘actual’ compile, or a ‘git reset to this sha’ step with an interpreted language like Ruby) into a thing which compiles not only the code, but also the environment needed to run that code. What that means practically is that instead of having to configure a bespoke VM to run a particular version of your app, worry about getting users correctly set up for each one, getting the right versions of things installed and so on, all of that happens far earlier, under one big simplifying assumption: “This container will only run one thing, ever.” That assumption is a big deal.
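
      A minimal sketch of that idea, assuming a hypothetical Ruby app (the base image, user, and paths are made up for illustration):

          # Dockerfile (sketch): bake this revision of the code together with its runtime
          FROM ruby:2.1
          # this container will only ever run one thing, so one user is enough
          RUN useradd -m myapp
          COPY . /home/myapp/app
          WORKDIR /home/myapp/app
          # dependencies are resolved at build time, not at deploy time
          RUN bundle install
          USER myapp
          CMD ["bundle", "exec", "rackup", "-p", "8080"]

      Build it once in CI (something like docker build -t myapp:abc123 ., where the tag is the git SHA) and the resulting image is the code plus everything that code assumes about its environment.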

      If I’m spinning up a cloud deployment, I’m looking to minimize cost, which means I’m looking to optimize the amount of horsepower I’m renting against the maintenance cost of keeping multiple apps running on that consolidated machine. For small deployments, the number of machines will be low enough, and the cost savings of consolidation small enough, that I probably won’t bother to consolidate. As I grow, I might try to put a couple of apps on the same machine, and if those apps previously assumed anything about that environment and I try to change it? Bugs for days. With Docker, I can let an app assume whatever it wants. Developers have a perfect replica of what I’m going to deploy that they can do whatever they want with. Not only that, changes to that environment are version controlled, meaning I can employ things like git bisect to figure out where a particular infrastructural problem pops up. Can you do that with Ansible or Chef? Sure, but have fun watching your VM go up and down for a few weeks; Docker is way faster to build, and that makes things much nicer.

      So the big win from a pure ‘deploy the things’ Ops perspective is consistency. Consistency for developers (they get to test their code on precisely the same environment it will run in on production later). Consistency for me (an Ops guy), because I don’t have to set up a Ruby VM or a Java VM or a whatever; I just set up a Docker VM (probably a few) and toss a container on it. Ultimately, it means consistency in the deploy process, which means an app that’s more stable and responsive to change. It also vastly simplifies the deploy process: CI builds a container, I download it and replace the existing container with the new one. Done. That’s all, no thinking “Oh, for the Ruby one I have to do a git remote update ; git reset, but for Java I need to copy this war over, except this is the jboss stack so I need to blah blah blah”; it’s just a Docker container, and that stuff happens for me automatically. Sure, I could have scripting to do this at deploy time, but it’s a hell of a lot faster to download a container and run it than to have it chew through a long process that could’ve been done and cached well before now (indeed, that’s a way to think about Docker: it caches a deploy up to the actual ‘run on production’ part).
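
      A sketch of that “swap the container” deploy, with the registry and names made up for illustration:

          # pull the image CI just built, then swap it in for the running one
          docker pull registry.example.com/myapp:abc123
          docker stop myapp && docker rm myapp
          docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:abc123

      The same three commands work whether the thing inside is Ruby, a war file, or anything else.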

      Also, versions of the app are not just bundled with their environment; they’re bundled and versioned. There’s a SHA associated with each iteration of an app. If I need to roll back, it’s no longer ‘how do I undo all these things?’, it’s ‘put the old Docker container up, restore the database(s) to the last backup’. Those databases, notably, can be Docker containers too, which you can save in-flight as incremental backups, versioning not only your code and its environment, but your database and all its data. That’s gold: I can afford drive space, and when backup restoration is as easy as ‘docker run db:v1.2.3-hourly-1500’, that’ll make any Ops person happy as a pig in shit.
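
      A sketch of what that looks like (the tags and names are made up, and it assumes the database keeps its data inside the container rather than on a volume, which is what makes commit-style snapshots possible):

          # snapshot the running database container as an hourly, versioned image
          docker commit db db:v1.2.3-hourly-1500
          # rolling back is just running the old images again
          docker run -d --name myapp registry.example.com/myapp:abc122
          docker run -d --name db db:v1.2.3-hourly-1500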

      Now, where Docker really gets interesting is in encapsulating one-off processes. Developers (at least where I work) are finicky. They want hard things to be simple, they want really hard things to be trivial, and it always has to work or they whine and whine and whine. So take a build system: some of the devs use IntelliJ and its build system, some use Eclipse and its build system, some use the ant scripts we use in production, and some pray to the FSM to return them the compiled code. This results in some serious inconsistency. Compounding this, some of the team is on Mac, some on Linux, some on Windows. How can we develop a totally consistent build process to run on three different platforms?

      Docker.

      I built a docker image that encapsulates the CI ant scripts. It has all the bells and whistles needed by the devs, and it exposes itself as a single-shot docker container. Now, instead of building via Eclipse or IntelliJ, they build via docker (they configured their IDEs to use this command as the build tool). They don’t have to remember the ant invocation; it’s just docker run -v /output:/path/to/output build_the_app and away it goes. I get to have a build script that runs everywhere in a totally consistent way, on CI and on dev machines. The devs get to avoid this whole class of “It’s in Eclipse but not IntelliJ” bugs, and I have a uniform interface to hide behind as I convert the build scripts from ant to Gradle. It’s friggin' brilliant.
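
      Roughly what that looks like, with the image contents invented for illustration (the real thing wraps our CI ant scripts):

          # Dockerfile for the build image: the ant scripts plus the JDK they need
          FROM ubuntu:14.04
          RUN apt-get update && apt-get install -y openjdk-7-jdk ant
          COPY ci-scripts /ci-scripts
          # single-shot container: run the build, write artifacts to /path/to/output, exit
          ENTRYPOINT ["/ci-scripts/build.sh"]

      Devs and CI both invoke it the same way: docker run -v /output:/path/to/output build_the_app.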

      This ability to get some relatively free cross-platformability, to encapsulate the environment things run in, and moreover to version that environment, is wildly useful to a guy trying to tame a wild infrastructure. In my industry (healthcare), auditability is king, and being able to tell an auditor that this isn’t just the same code, it’s the same code, environment, and configuration, bit for bit, as we built in CI, is like music to their ears. Every single piece of the infrastructure has a totally unique identifier, so I can prove beyond a shadow of a doubt that any piece of the infrastructure is exactly what I intended to put there. If Docker did nothing else but encapsulate the environment and code and assign that identifier, it’d still be revolutionary. The other uses are just icing on the cake.

      EDIT: Forgot in my original: wrt CoreOS, I think it’s too unstable for use as well. We run Ubuntu 14.04 on our docker servers, and we’re planning a move to RHEL, using Kubernetes to allow easier scaling. I haven’t seen the file handle bug hit me yet; indeed, my docker experience has been largely bug free (and my use of it is not small), but it is certainly a pretty nascent project. I still think the benefits outweigh the risks.

      1. 7

        Kubernetes

        Yay! =D

        Wonderful comment, I agree. Except I hope docker itself dies in a horrible horrible fire. The implementation is a nightmare and the docker guys don’t know how to run a project. I hope they get a real competitor soon. Proprietary, compatible forks exist, but I don’t know of any that intend to go open source.

        1. 4

          A lot of good projects start out poorly. I’m interested in seeing competitors, but I would rather see the docker folks step up to the challenge and improve their product than simply go away.

          1. 2

            What about Rocket by CoreOS?

            1. 1

              I have high hopes. Unfortunately I don’t know much about it, and thus can’t comment on its quality. Rocket was announced shortly before I moved my systems to FreeBSD and jails.

            2. 1

              … the docker guys don’t know how to run a project.

              Could you expand on this?

              I maintain a few open source projects so I know how hard it can be, though none with the volume of Docker.

              What are they doing wrong? What should they do to improve?

              1. 4

                They are building as many new features as they can without stabilizing their existing feature base, and bolting stuff on in order to get new features quickly rather than carefully considering the best designs. One artifact of this philosophy is the locking in docker: the devs seem to consider for about 3 seconds whether something is thread safe, and if not they just add a mutex. As a consequence, docker does basically nothing concurrently. Try adding 20 containers at the same time; it won’t happen at any reasonable speed. The problem is begging to be solved by the actor model, but instead they choose sync.Mutex.

                Edit: I’m exaggerating a bit, but parallel creation of containers is a big feature that doesn’t exist only because of bad design.

                As for what they are doing wrong, they are blazing forward trying to impress the community with flashy features. Yet most of the serious users of docker just want the core functionality to work well. That’s my interpretation of what’s happening anyway.

                1. 3

                  I assume you’ve run your share of open source projects, so you know that big sweeping changes are far easier to integrate early than later in a project’s life. Given that v1.0 is barely 6 months old and there are 867 contributors pumping features into the project, locking it down and refusing contributions from the dozens (hundreds?) of vendors who are trying to make progress would be catastrophic to the youth and future of the project.

                  If you artificially stunt the growth of a project just to catch your breath and refactor some locks that may only be bothering people who aren’t willing to fix them, you create a very real risk of losing all momentum and starting to rot slowly while a fork continues happily on its way.

                  Perhaps our experiences of running open source projects differ vastly, but from my vantage point I see the team doing the best job they can—certainly a better job than I could.

                  If you’re up for opening a PR (or already have?) to increase the parallelism of the container creation, I’d be happy to collaborate on it with you.

                  1. 1

                    Those are very good points. Though I think there is a balance between adding features and ensuring quality. I don’t think docker is at risk of losing momentum right now. The only proprietary forks I know of are designed to fix the performance issues docker has.

                    Working on docker parallelism would be interesting, but I got out of containers for a reason. If I were to choose a containerization project at this time, I would prefer to build something docker-like for FreeBSD, but I’m not likely to do that either.

              2. 8

                I think you could simplify much of that to “it’s kind of like a fancy chroot plus a tarball of an entire system” or maybe “kind of like bsd jails or solaris zones”.

                The experiences I have had with docker so far involved a team at $dayjob using it, and it simply made things more complex instead of less, reinforced sloppy practices (the docker container becomes a magical custom environment thingy, and the app borderline refuses to work outside of it), and made any underlying issues with the shipped apps far more difficult to debug. Currently not a fan. Maybe it would be better if they used it better.

                I guess it makes more sense if your app stack sprays files all over the place and/or is unfortunate enough to be bundled with most distros/oses (like python) and thus you are guaranteed that any installed version is going to be old as hell.

                1. 3

                  Thank you so much for writing this up.

                  1. 2

                    If I’m understanding correctly, the big advantage is that I can deploy as many applications as I want on a machine and have their environments be completely isolated, meaning I never actually have to worry about other applications on that box requiring a specific version of something I might want a different version of, etc.? Seems like that would make a lot of sense once you have a large development team that loves to pile a lot of applications onto the same machines, and the teams don’t want to care about what the other teams are doing.

                    Interestingly enough we drop a dozen different applications on our boxes, but our tooling is sufficiently consistent that we don’t too often run into issues of conflicting dependencies. It helps that we ship binaries directly to production (no need for VMs or interpreters), but that addresses only a small fraction of the potential issues I can foresee.

                    1. 4

                      That’s a big advantage. The environment-isolation is nice because it allows dependencies and their configurations to move independently. It also acts as a nice organizing principle.

                      In my case, we support 2 main stacks (Ruby, JVM/Tomcat) and a few other smaller stacks. We also use 3 different databases (Oracle 11g, Oracle 12c, and Redis). This is split across about a dozen apps. Some of the Java apps run on 1.8, some on 1.7, one on 1.6. The Ruby apps are newer and all run on 2.1.2. Upgrading through versions of software is a somewhat laborious process because we’re pretty heavily regulated (we work with clinical trials, which means working with drugs, which means we’re good buddies with the FDA guy (by which I mean he hates us and wants to fine us into oblivion, which is the natural state of the FDA auditor)). Docker acts as a really convenient way to unify that mass of different stacks into a single deployable unit.

                      That’s not the only advantage, though: the ability to encapsulate the execution environment and make it part of your revision history is valuable even if you only have one stack. Docker is lightweight compared to Vagrant+chef / Vagrant+puppet / Vagrant+ansible; it’s just a step above shell scripts, so it’s easy to bring into your codebase (one file in your repo), and it gives you a consistent environment which is disposable, versioned, and versioned-after-compiled as well. The latter means the built product is also associated with a ‘version’ unique to the Dockerfile that created it, which is useful for ensuring the right thing is going to production (admittedly that’s not super hard in general, but it does assert a bit more about the environment than just an embedded git sha).

                      That low barrier to entry can pay off in development/CI too. If you’ve ever fought with Jenkins to get it to run your tests because you can’t figure out what user it wants to pretend to be today (can’t tell that I’ve had that problem, huh?), docker is a good solution. Jenkins can just run a docker container that has a ci-runner script; you mount the application into the container as a volume (by passing -v $JENKINS_VAR_THAT_POINTS_AT_THE_REPO:/code), and it’ll go to town. You can also use that same container locally, which eliminates the impedance problem of ‘passes on my machine, not on CI’. Ditto with builds (as I mentioned above).
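
                      A sketch of that Jenkins step (the ci-runner image and script path are invented; $WORKSPACE is Jenkins’ name for the variable that points at the checked-out repo):

                          # CI: run the test suite inside the same container the devs use locally
                          docker run --rm -v "$WORKSPACE:/code" ci-runner /code/script/ci
                          # locally, the only difference is which directory gets mounted
                          docker run --rm -v "$PWD:/code" ci-runner /code/script/ci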

                      Another neat idea is to use docker to create a DIY build farm. Use some clustering service on top of a few beefy AWS servers, something really friendly like Shipyard. Give developers the shipyard tool and a container that builds your app, puts it in S3, then prints the URL. Now instead of building your app on your little dinky laptop, you can give it to Hanz and Franz up in cloud land and let them do the hard work. Your clustering service (Shipyard in this case) will automatically load-balance the builds across workers; add some autoscaling, and a big company with a nontrivial compile time can get a lot of value with comparatively little effort. Not only that, the same system can work as a general job-execution framework: make a docker container that does the job, run it against the cloud, done. That latter part isn’t just me speculating; I have docker containers that do arbitrary jobs like uploading things to offsite storage, and while I don’t have a cluster set up to run them (we have clusters for actually running apps, I haven’t built one for jobs), it could be a pretty neat way to share resources.
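
                      As a sketch, with the image and bucket names purely hypothetical (the build-and-upload logic lives inside the image):

                          # kick the build off against the cluster instead of your laptop;
                          # the container compiles the app, uploads it to S3, and prints the URL
                          docker run --rm -v "$PWD:/code" -e S3_BUCKET=example-builds build-and-publish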

                      1. 1

                        Interestingly enough we drop a dozen different applications on our boxes, but our tooling is sufficiently consistent that we don’t too often run into issues of conflicting dependencies.

                        That’s also been my experience. I hear a lot of complaints from people using Python and Ruby, though, who prefer virtualenv type setups, and now something like Docker. Perhaps it’s something to do with the way packages work in those ecosystems? I’ve used them only for small scripts so I don’t have much insight into the issues with packaging up big Python or Ruby apps.

                        In the C ecosystem, and others that work similarly, soname versioning has been good enough for my needs. It’s not really a problem to have multiple versions of a library installed simultaneously, and often Debian/Ubuntu have even taken care of the packaging for you—if you need libfoo3 for some apps and libfoo4 for others, no problem, just depend on both. And there’s always statically linking binaries as an escape hatch.
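
                        For example, sticking with the hypothetical libfoo naming above, the two runtime packages install side by side on Debian/Ubuntu and each app links against the soname it was built for:

                            # both soname versions coexist; nothing conflicts
                            sudo apt-get install libfoo3 libfoo4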