1. 24
    1. 34

      I think you could reimplement it easily yourself with a small shell script and some calls to mount; but I haven’t bothered.

      I don’t have the expertise to criticize the content itself, but statements like the above make me suspect that the author doesn’t know nearly as much about the problem as they think they know.

      1. 32

        This reminds me of a trope in the DIY (esp. woodworking DIY) world.

        First, show video of a ludicrously well equipped ‘starter shop’ (it always has a SawStop, Powermatic Bandsaw, and inexplicably some kind of niche tool that never really gets used, and a CNC router).

        Next, show video of a complicated bit of joinery done using some of the specialized machines.

        Finally, audio: “I used the CNC for this, but if you don’t have one, you can do the same with hand tools.”

        No, asshole, no I can’t. Not in any reasonable timeframe. Usually this happens in the context of the CNC. “I CNC’d out 3 dozen parts, but you could do the same with hand tools.”

        I get a strong whiff of that sort of attitude from this. It may be that the author is capable of this. It may be possible to ‘do this with hand tools’ with a small shell script and some calls to mount. It might even be easy! However, there is a reason Docker is so popular: it’s cheap, it does the job, and it lets me concentrate on the things I want to concentrate on.

        1. 9

          As someone who can do “docker with hand tools,” you and @joshuacc are completely correct. Linux does not have a unified “container API”; it has a bunch of little things that you can put together to make a container system. And even if you know the 7 main namespaces you need, you still have to configure each of them properly.

          For example, it isn’t sufficient to just throw a process in its own network namespace: you’ve got to create a veth pair, put one end of it into the namespace with the process, and attach the other end to a virtual bridge interface. Then you’ve got to decide whether you want to allocate an IP for the container on your network (common in Kubernetes) or masquerade (NAT) on the local machine (common in single-box Docker). If you masquerade, you must add SNAT and DNAT iptables rules to port-forward to the veth interface, and enable the net.ipv4.ip_forward sysctl.
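
          Concretely, just that networking part, done with “hand tools”, might look something like the sketch below (run as root; the interface names and the 10.0.3.x addresses are made up, and a real script would also need teardown and error handling):

              # create a network namespace and a veth pair, one end inside it, one end outside
              ip netns add c1
              ip link add veth-c1 type veth peer name eth0-c1
              ip link set eth0-c1 netns c1

              # attach the host end to a bridge and bring everything up
              ip link add br0 type bridge
              ip link set br0 up
              ip link set veth-c1 master br0
              ip link set veth-c1 up
              ip netns exec c1 ip link set lo up
              ip netns exec c1 ip link set eth0-c1 up

              # give the "container" an address and a default route via the bridge
              ip addr add 10.0.3.1/24 dev br0
              ip netns exec c1 ip addr add 10.0.3.2/24 dev eth0-c1
              ip netns exec c1 ip route add default via 10.0.3.1

              # masquerade outbound traffic and DNAT a host port to the container
              sysctl -w net.ipv4.ip_forward=1
              iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -o br0 -j MASQUERADE
              iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.0.3.2:80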

          So the “small shell script” is now also a management interface for a network router. The mount namespace is even more delightful.

          1. 8

            Exactly this! One of the most egregious things about the ‘… you could do it with hand tools’ is that it is dismissive of people who really can do it with hand tools and dismissive of the folks that can do it with CNC.

            In woodworking, CNC work is complicated, requires a particular set of skills and understanding, and is prone to a totally different, equally painful class of errors than hand-tool work is.

            Similarly, hand-tool work is complicated, requires a particular set of skills and understanding, and is prone to a totally different, equally painful class of errors than power/CNC work is.

            Both are respectable, and both are prone to be dismissive of the other, but a hand-cut, perfect half-blind dovetail drawer is amazing. Similarly, a CNC cut of 30 identical, perfect half-blind dovetail drawers is equally amazing.

            The moral of this story: I can use the power tool version of containers. It’s called docker. It lets me spit out dozens of identically configured and run services in a pretty easy way.

            You are capable of ‘doing it with hand tools’, and that’s pretty fucking awesome, but as you lay out, it’s not accomplishing the same thing. The OP seems to believe that building it artisanally is intrinsically ‘better’ somehow, but that’s not necessarily the case. I don’t know what OP’s background is, but I’d be willing to bet it’s not at all similar to mine. I have to manage fleets of dozens or hundreds of machines. I don’t have time to build artisanal versions of my power tools.

        2. 2

          And then you have Paul Sellers. https://www.youtube.com/watch?v=Zuybp4y5uTA

          Sometimes, doing things by hand really is faster on a small scale.

          1. 2

            He’s exactly the guy I’m talking about though in my other post in this tree – he’s capable of doing that with hand tools and that’s legitimately amazing. One nice thing about Paul though is he is pretty much the opposite of the morality play from above. He has a ludicrously well-equipped shop, sure, but that’s because he’s been doing this for a thousand years and is also a wizard.

            He says, “I did this with hand tools, but you can use power tools if you like.” Which is also occasionally untrue, but the sentiment is a lot better.

            He also isn’t elitist. He uses the bandsaw periodically, power drill motors, and so on. He also uses panel saws and brace-and-bit, but it’s not an affectation; he just knows both systems cold and uses whatever makes the most sense.

            Paul Sellers is amazing and great, and – for those people in the back just watching – go watch some Paul Sellers videos, even if you’re not a woodworker (or a wannabe like me); they’re great and he’s incredible. I particularly like the one where he makes a joiner’s mallet. There are also some videos floating around of him making a cabinet to hold his planes.

      2. 1

        My reaction was “if you had to write this much to convince me that there are easier ways than Docker, then it sounds like this is why Docker has a market.”

        I’m late to the Docker game - my new company uses it heavily in our infrastructure. Frankly, I was impressed at how easy it was for me to get test environments up and running with Docker.

        I concede it likely has issues that need addressing, but I’ve never encountered software that didn’t.

    2. 20

      If you remove the snipes at Docker, the article reads like “Here’s a single tool that can replace a ton of your in-house scripts”, which is typically a win. The fact that Docker is not mysterious is a good thing.

      For example, you talk about using Ansible - if you tried, you could write an article about “Ansible Considered Harmful”. You know, just use Fabric and write your own Python scripts. It’s just running a bunch of scripts on hosts via SSH; that’s easy shit - Ansible is not mysterious at all. Why use Ansible?

      Because it’s usually a waste of energy to write something from scratch when there’s a popular open-source version that mostly works fine and is well-tested and documented - especially if that something is not your core business.

    3. 12

      It’s just installing software. It’s not complicated unless you make it complicated.

      Yeah, no. It would be great if this were the case, if preparing an image were so simple and any complication were our own fault, but alas, we are very far from that. Things would be easier if we never wanted to share parts of our images - but we do. Or if we never wanted to upgrade. Or wanted to upgrade a component in one image, but not in another.

      While I’ll be the last to praise Docker, it does bring some useful tools to the table. There are other solutions that provide similar features, of course, but asserting that preparing an image is only complicated if we make it so is shortsighted.

      I think you could reimplement it easily yourself with a small shell script and some calls to mount; but I haven’t bothered.

      I’d encourage the author to try. It would be an educational endeavour.

      And if you want to do some kind of change tracking as you build the system, you should keep it at the proper layer, … [this-and-that should live here-and-there]

      If only things were this easy! Again, I’d encourage the author to try doing this with anything non-trivial, and maintain it for a few months. Naive ideas like this fall apart very quickly. A shame, really, but they do.

      For most purposes, the main interesting thing that Docker containers provide is isolated networking. That is, Docker containers prevent the application inside the container from binding ports on the external network interfaces. What else prevents applications from using ports? The firewall that you already have installed on your server. Again, pointless abstraction to address already-solved problems.

      As the author correctly states, such isolation is kinda pointless from a security point of view. But it is not security this isolation is useful for; it is isolation itself. It allows the container to not care much about the world outside. It makes it easier to have many small networks, each living in its own little world, where I usually don’t need to adjust the global firewall on the host if I want to launch a new instance. I don’t need to have a global state that knows everything. I can have little components. A lot easier to maintain in the long run. Think of small, composable functions vs a huge single-function state machine.
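
      A small illustration of what I mean (the names and images here are just for show): with Docker I can give a stack its own little network and publish only the one port I care about, without touching the host’s global firewall rules:

          docker network create shop-net                   # private bridge network for this stack
          docker run -d --network shop-net --name shop-db -e POSTGRES_PASSWORD=example postgres:16
          docker run -d --network shop-net --name shop-web -p 8080:80 nginx
          # shop-web reaches shop-db by name inside shop-net; only port 8080 is published on the host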

    4. 10

      The author suggests that you could take Linux cgroups, namespaces, packaging tools and filesystems and build something that would let you package and run isolated services.

      I suggest that a group of people have already done that, and called it Docker.

      1. 1

        I don’t know much about this, but it seems like the author’s saying that while Docker does abstract over these utilities, it also adds some new overhead & complexity in exchange. And from what I can see, it reduces flexibility and interoperability on top of all of that.

        Is Docker’s implementation truly as simple as you suggest? And if so, is unifying & packaging this functionality a better solution than writing a guide on how to configure the set-up yourself? After all, we developers don’t go around saying our computers’ built-in utilities are too complex for web functionality and we should use entire browsers to do the same thing instead! …oh wait.

        1. 2

          Docker is not simple (it’s 10 million+ lines of code!), and you absolutely get new overhead and complexity, as you do with anything that abstracts over a set of components and adds features on top of them. It reduces flexibility, as many abstractions do. But those things are not inherently downsides of Docker - that complexity is what buys the features, and it lets Docker provide a standard approach and a standard set of capabilities.

          The author’s suggestion that you could put those components together in a couple of scripts and get the same results as Docker is pretty silly - people have done exactly that, resulting in projects like bocker (Docker in about 100 lines of bash), which is neat but avoids implementing any of the actually hard parts of Docker, like networking, filesystems or system setup.

          And if so, is unifying & packaging this functionality a better solution than writing a guide on how to configure the set-up yourself?

          I think it absolutely is, in almost any situation. Giving people tools they can use without needing to understand the details is incredibly useful, as it means they can spend their time working on more important things. I find it hard to imagine a situation where the answer to that isn’t yes.

          1. 1

            I understand. In that case, what would be missing if the “hard parts” were implemented in the scripts as well? Or, from another perspective, why couldn’t Docker itself consist of a bunch of shell scripts? Wouldn’t using existing system utilities be a boon for modularity?

            1. 2

              I think you’re conflating “uses existing system utilities” (which Docker does) with “written in shell scripts” (which Docker is not). There’s not really any good reason to write things in sh/bash other than its ease of use for small tasks - it’s certainly not suited for writing anything at all complex.

              1. 1

                Oh. If it already uses system utilities (such as chroot or systemd-nspawn, as the author suggests), then what on earth is the author complaining about? It seems like he’s saying that Docker goes about doing this in a roundabout fashion, but to hear you say it, that is false.

    5. 9

      The article makes some good points; of course there’s not much technically interesting or novel in Docker.

      But the alternatives he suggests would be non-starters at every place I’ve worked, simply because some people insist on using Macs, which don’t support them. Docker let Mac users finally join the containerization party (even if Docker for Mac secretly uses Linux behind the scenes anyway, ssshhh; don’t tell anyone). I still think the story of “how a technically-inferior/uninteresting solution dominated the market by demonstrating that social/perception issues are usually the main thing that drives uptake” is more interesting than the “Docker is bad, m’kay” takes anyway, and there’s a lot to learn there.

      1. 2

        Thanks for thinking of us, poor Mac users :-) I switched from Ubuntu to macOS a few years ago, and one of the things I missed was being able to use Docker natively (without launching a VM myself). The advent of Docker for Mac made things a lot easier (and more standardized) and made me switch my main project from Vagrant to Docker.

    6. 6

      There’s an argument that the really good thing about Docker is not in fact Docker itself, but the Dockerfile. By standardizing on a description for containers, you can now implement containers in whatever way you want, over zones / jails / namespaces or virtual machines, and they’ll work the same.
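
      A toy sketch of what that standardization buys (the image contents are made up, and it assumes podman is installed as a second implementation): the same Dockerfile builds and runs under either tool:

          # Dockerfile
          FROM debian:stable-slim
          CMD ["echo", "hello from a standard container description"]

          # built and run by two different implementations of the same description
          docker build -t hello . && docker run --rm hello
          podman build -t hello . && podman run --rm hello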

    7. 4

      The practical problems with “application containers” are well known. Zombie orphan processes fill up your container and consume resources with no init to reap them; the traditional cron and syslog daemons are not automatically available; etc., etc.

      This (the former) is one of the worst misfeatures of namespaces+cgroups.

      On FreeBSD, jailed processes are just reaped by the normal init. And jail_remove(2) will always force kill all processes in a jail.
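
      On the Linux/Docker side, the usual band-aid is to run a minimal init as PID 1 yourself; for example, Docker’s --init flag puts tini in front of your process to reap orphans and forward signals (the image name below is hypothetical):

          # without an init, PID 1 inside the container is your app and orphaned children are never reaped
          docker run -d --name worker-noinit myapp

          # --init injects a tiny init (tini) as PID 1 to reap zombies and forward signals
          docker run -d --init --name worker-init myapp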

    8. 4

      I skimmed but couldn’t find how “Docker [is] Considered Harmful” in the article.

    9. 5

      “Considered Harmful” Essays Considered Harmful https://meyerweb.com/eric/comment/chech.html

      1. 5

        I really like “Considered Harmful” as an identifier. If I’m trying to find articles skeptical of a position/technology/paradigm the first thing I do is google “X considered harmful”. Usually they’re pretty bad but it at least gives me a place to start.

      2. 1

        The writing of a “considered harmful” essay often serves to inflame whatever debate is in progress…

        Was there ever a debate in progress on this matter? Seems like the article questioned a rather accepted practice/tool.

        The publication of a “considered harmful” essay has a strong tendency to alienate neutral parties…

        I’m a neutral party, but I do tend to be skeptical of new tools. Granted, I’ve never really had a need for them; but if I did, then I wouldn’t really be a neutral party, would I?

        I haven’t been alienated, I’ve been informed.

        A sufficiently dogmatic “considered harmful” essay can end a debate in favor of the viewpoint the essay considers harmful.

        I don’t think this one counts as dogmatic. It generally reads as “there exists a more idiomatic solution”.

        …we’ve seen them a thousand times before and didn’t really learn anything from them…

        I almost always learn something new from “Considered Harmful” essays. They often make me realize there are multiple solutions to a problem that seems to have just one.