1. 42
    1. 32

      I know these are corporations without feelings etc. etc. but I can’t help but feel bad for Docker.

      The “it’s just BSD jails/chroot” argument is like the “Dropbox is just FTP” argument - Docker made containerization mainstream.

      And after Docker created a pretty massive movement to start using and deploying containers, everybody and their mother started writing their own container runtimes: runc, OCI, CoreOS, CRI-O, rkt, podman. Where were these tools before Docker?

      1. 46

        I see where you’re coming from, but as someone who used containers pre-Docker, and who worked on pre-Docker container tech for a hobby project (2011-2015), I think they did a bad job.

        Docker is a pretty low quality piece of software. The superfluous daemon is a major reason (the article mentions this). The security story is another, though they were building on unstable foundations in the kernel.

        So yes they made something popular, and created a de-facto standard. But other people have had to swim in their wake and clean up the mess (OCI, etc.).

        I like cyphar’s blog on these topics: https://www.cyphar.com/blog/

        They also raised a lot of money and hired a lot of people, which I suppose is a good way to build certain things. I’m not sure it is a great way to build container tech, although I’ll take your point that there was little cooperation in the space beforehand.

        Some of the “blame” goes to Google, too, since the kernel features were contributed by them while the user-space tools were “missing”:

        https://en.wikipedia.org/wiki/Cgroups

        But really I think it is more of an issue with the kernel dev model, which is good at copying APIs that AT&T or someone else made, and bad at developing new ones:

        https://lwn.net/Articles/679786/

        As maintainer Tejun Heo himself has admitted [YouTube], “design followed implementation”, “different decisions were taken for different controllers”, and “sometimes too much flexibility causes a hindrance”. In an LWN article from 2012, it was said that “control groups are one of those features that kernel developers love to hate.”

        https://lwn.net/Articles/484251/

        1. 14

          Docker is a pretty low quality piece of software. The superfluous daemon is a major reason (the article mentions this). The security story is another, though they were building on unstable foundations in the kernel.

          Even taken on its own terms (daemon architecture, etc.) Docker is pretty low-quality. The daemon tends to wedge itself into an unresponsive state after running for a few weeks!

          “design followed implementation”

          I love this quote, and I think this is a factor in Docker as well. The image layer system pretty directly exposes union filesystems, but it’s not a particularly efficient solution to the problem of distributing incremental image updates. I think this is part of why some of the Docker community pushes for very small base images, like Alpine — it’s a workaround for an inefficient distribution mechanism.
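
          To make that concrete, here’s an illustrative Dockerfile (the package steps are just examples):

          ```
          # Each instruction below becomes one layer, and layers are shipped whole:
          # editing requirements.txt invalidates the COPY layer and every layer
          # after it, so clients re-download all of them in full. There is no
          # sub-layer delta in the distribution mechanism.
          FROM ubuntu:22.04
          COPY requirements.txt /app/
          RUN apt-get update && apt-get install -y python3-pip
          RUN pip3 install -r /app/requirements.txt
          ```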

          1. 5

            Yes, yes and thrice yes. What I also really don’t understand is why there can’t be a big push for build tooling that takes big chunks of work out of building statically linked, binary-only images, so that you don’t need a whole OS inside the container. I mean, I guess I do understand: there’s clearly no motivation for any of the big companies, who make tons of money out of hosting per-MB container registry storage, to make containers smaller, let alone to expend effort helping others do so. And the whole “resources are cheap, developer time is expensive” line that Rails & DHH popularised back in the day makes everyone think “ahh, it’s half a gig, who cares? Incremental, anyway, amirite?”. Well, I care. Every single byte of every image contributes to energy usage. It makes me cross.
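
            The closest thing today is a multi-stage build; a minimal sketch, assuming a hypothetical Go service (Go being the easy case, since most toolchains make the static-binary step far more work):

            ```
            # Build stage: compile a static binary (CGO_ENABLED=0 avoids libc linkage)
            FROM golang:1.21 AS build
            WORKDIR /src
            COPY . .
            RUN CGO_ENABLED=0 go build -o /server ./cmd/server

            # Final image: just the binary, no OS inside the container at all
            FROM scratch
            COPY --from=build /server /server
            ENTRYPOINT ["/server"]
            ```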

            1. 13

              That’s basically Bazel, open-sourced by Google: https://bazel.build/

              Google doesn’t ship OS images inside its containers, in the style of Docker. (And remember, as mentioned above, many of the Linux kernel container features were developed by Google.)

              Instead they use statically linked binaries. However, it doesn’t really solve the “gigabyte images” problem: a single static dependency graph tends to blow up as well, and you end up with gigabyte binaries.

              Bazel works really well for some cases, namely if most of your code is in C++. It compiles fast and the binaries are reasonably small compared to the optimum.

              In other cases it can be the worst of both worlds, because you have to throw out the entire build system (Makefile, autoconf) and rewrite it in the Bazel build language. You have a maintenance problem, in other words.


              I used to have the same question as you… but then I tried to build containers from scratch, which was one of the primary motivations for Oil.

              I guess the short answer is “Do Linux From Scratch and see how much work it is”. The dependency graph is inherently not well curated.

              Another answer is “try to build a container that makes a plot in the R language, built from scratch”. This is a very difficult problem, more difficult than anyone who hasn’t tried it would think. (e.g. it depends on Fortran compilers, graphics libraries, etc.)


              I think there could be a big push along the lines of “reproducible builds” for “dependency graph curation”, but it is a hard problem: not many people have the expertise, and it requires a lot of coordination … It basically requires fiddling with a lot of gross build systems, i.e. adding more hacks to the big pile that’s already there.


              Another thing I learned ~2012 when looking into this stuff: Version resolution is an NP-complete problem.

              https://research.swtch.com/version-sat

              Also Debian’s package manager is not divorced from its package data. There are hacks in the package manager for specific packages. That is true for most distros as far as I know.

              So again, it’s a very hard problem … not just one of motivation.

              1. 2

                That’s a really insightful and interesting reply. Thank you.

        2. 2

          So would it be safer to say that their marketed/attractive product, even if not technically the best, may have galvanized cooperation and better development in the container space?

          1. 3

            I would say that’s accurate. If raising a bunch of money for a non-commercially-viable company is the only way to do that, then I guess I don’t have any answers … :) But I sure wish there was a better way.

            The old way was that AT&T was a monopoly and hired smart people to design software, which was flawed in its own way too (and Xerox PARC too). Google has a similar role now, in that they open source enormous amounts of code like Android, Chrome, and cgroups, which other people build on, including Docker … But yes it is ironic that Docker motivated Google to work on container tech, when Google had the original kernel use cases for their clusters.

        3. 2

          But really I think it is more of an issue with the kernel dev model, which is good at copying APIs that AT&T or someone else made, and bad at developing new ones:

          FreeBSD Jails and Solaris Zones are both better kernel technologies for deploying containers than the cgroup / namespace / seccomp-bpf mess that Linux uses, so it appears that the Linux kernel devs are not actually very good at copying APIs that other people made either. When Solaris copied Jails, they came up with something noticeably better (Jails have now more or less caught up). Linux, with both Jails and Zones to copy from, made something significantly worse in the name of generality and has no excuse.

      2. 28

        Where were these tools before Docker?

        Lacking a marketing department.

        1. 33

          Like it or not, marketing is a part of software development. The eschew-everything hacker ethos is marketing too, it just has a different target audience.

          1. 1

            Well, I despise the former version of marketing that you mention. Software should stand on its own merits only, not because it’s “first” or whatever.

          2. 1

            The “social” in “social coding” seems to be eating the “coding” more than usual as time progresses.

            I suppose there are some that believe that to be a feature.

            1. 3

              This is a subjective judgment, which I think reflects more on the speaker and their biases than on any objective condition. I don’t share it, for the record — I think programming has been a pathologically asocial discipline for most of its existence, and we’re just now beginning to “course correct” in a meaningful way.

        2. 8

          LXC did have marketing, though admittedly not the sort of blitz Docker went on.

          I think that the Docker difference is willingness to sacrifice security for UX. With LXC you must type sudo all the time. The Docker way is to add yourself to the docker group. Sure, that makes your user root-equivalent, but now all your docker commands work so easily! This produces good word-of-mouth and lots of Medium tutorials, none of which mention the root-equivalence thing.
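
          (For anyone who doubts the root-equivalence: any member of the docker group can do something like this, no sudo required.)

          ```
          # Mount the host's root filesystem into a container and chroot into it;
          # this gives an effectively root shell on the host.
          docker run --rm -it -v /:/host alpine chroot /host /bin/sh
          ```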

          It is probably also important that the Docker CLI runs on a Mac. In that context we might do better to compare Docker to Vagrant than LXC. Even if boot2docker/Docker Desktop/Docker for Mac don’t work that well, they are initially appealing.

          1. 9

            I think by far the biggest problem Docker solved is easy redistribution of images: docker push, docker pull. This is the part that no container system had at the time (AFAIK) and explains much of its popularity.
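
            The whole workflow is two commands on each side, which is the point (the registry name is a placeholder):

            ```
            # On the publishing machine:
            docker build -t registry.example.com/myapp:1.0 .
            docker push registry.example.com/myapp:1.0

            # On any other machine:
            docker pull registry.example.com/myapp:1.0
            docker run registry.example.com/myapp:1.0
            ```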

            Everything else: meh.

            1. 2

              This. I saw one of the initial Docker demos, at PyCon long ago, and this was what made a compelling difference from what LXC provided.

        3. 2

          Pithy and cute, but none of the tools OP mentioned existed before Docker.

    2. 48

      Read the rest of this story with a free account.

      Nah, rather not.

      1. 29

        It’s irritating that this guy has a self-hosted version of this post, but decided to post (or self-promote) the Medium version instead.

        1. 12

          I wonder what the criteria are for getting banned for self-promotion … I also submit articles I wrote here sometimes – am I on the danger list for that?

          1. 35

            He had zero site use outside of promoting his own posts, even after I DM’d him to ask him to stop. I generally give at least one warning PM unless it’s something really abusive, like the LoadMill spam ring.

            1. 6

              @pushcx I’m just wondering if the ban was a tad strict?

              I looked at their profile and while it’s all true (person only posts own articles, links only to medium though they have their own website versions) they do seem to have a high lobste.rs score (8.27) indicating some popularity here for their content (unless you tell me there has been some gaming of the score, in which case, what is wrong with people?)

              I don’t know how Medium works, but I guess there may be some financial reason for the author to drive traffic there? It is a gray area, but the posts all seem to have technical merit and generate useful discussion?

              Anyhow, just wondering.

              1. 6

                I think it’s always the case that if you post more links than comments, your score rises, since more people vote on posts than on comments.

                1. 1

                  Perhaps karma garnered from posts should be balanced against the number of comments, or maybe by comparing the ratio of people who click through (obviously this requires obnoxious tracking :( ) to those who upvote.

              2. 5

                I agree that his posts have generally been pretty good. The part you can’t see is that I sent him a note in June asking him to quit treating the site so exploitatively, and to take a break entirely from posting links to his site. (A pretty formulaic note, at this point.) He did not respond and continued.

                It’s a deliberate strategy of this author to post Medium links. Medium has a Partner Program incentivizing exactly this behavior: clicks = dollars. Another Medium-hosted spammer I’ve banned posted entreporn to YC News today to make money on the (sanitized) story of how he made money spamming links.

                1. 1

                  @pushcx, thank you for taking the time to explain. I can see the long term danger of this behavior.

                  1. 2

                    Thanks for pointing out when I might be making a mistake. I’ve certainly made them before, and I’m glad to have the questioning and feedback.

            2. 3

              Ah ok, thanks!

        2. 6

          On the plus side, I can read the medium version without having to enable javascript…

        3. 4

          Ah, he’s been banned.

          1. 1

            It’s Time To Say Goodbye to MartinHeinz

        4. 1

          You mean this black page with “We’re sorry but frontend doesn’t work properly without JavaScript enabled. Please enable it to continue.”? Sorry, but I had to say it…

    3. 16

      There are competing arguments here.

      What Docker gets us is a single well-trodden path forward. It’s not always a great path, but it is a path. Adding a number of new tools to handle the different aspects of container work gets us several problems. Among them:

      • Are the maintainers up to snuff?

      • Is the code crappy?

      • How does this differ, in subtle and painful ways, from the Docker de facto standard?

      • Cognitive load: how many tools do I have to master to replicate what Docker does out of the box?

      On this specific article - I have a deep suspicion of anything Red Hat makes after systemd, and I notice the OP is pushing Red Hat projects. systemd is one of the worst architectures I have ever seen. Today, I see Red Hat as an IBM consultancy with some taster projects to initiate engagements. I’m tarring with a broad brush, but it’s a working business model that produces poor software.

      N.b., after I wrote the paragraph above, I looked at OP’s website: they are an IBM employee, and this is now vibing like “content marketing”.

    4. 15

      My answer is selfdock. I started it 5 years ago, and it’s been feature complete for almost as long. I was tired of Docker being slow for local development, besides doing everything wrong with disk, memory and security. I wanted to make it fast and right from the ground up. This thing is so lean that it doesn’t even use a heap.

      Besides being daemonless and not requiring root, like all modern contenders, it does away with images, and instead uses a pre-unpacked read-only root filesystem for speed. It can run docker images if you unpack them first.
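
      Unpacking an existing Docker image is easy with stock tooling, e.g. (the image name is just an example):

      ```
      # Flatten a Docker image into a plain rootfs directory
      cid=$(docker create alpine)
      mkdir -p rootfs
      docker export "$cid" | tar -C rootfs -x
      docker rm "$cid"
      ```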

      1. 6

        Gosh… I wish I had discovered that 5 years ago :/ I love the concept; unfortunately, I’m now using podman, which does the same thing with full docker compatibility.

        One question though: isn’t your project using an SUID binary?

        1. 4

          Yes, it’s a suid binary, but it doesn’t give you root. That would be a bug, unlike with Docker. It drops the effective user id before spawning the process that runs in the container.

      2. 2

        This is really cool. Worth its own post.

    5. 8

      Docker deserves credit for popularizing containers and making them easy for developers to use. The problem is that they weren’t content to focus on the developer market: they tried to jump straight to enterprise software, tried to force everything into Docker (e.g. Swarm), and had no respect for their community. The switch to Moby was heavy-handed and displayed a lot of contempt for the open source community around Docker.

      Note that runc was broken out by Docker themselves, though, and they participated in OCI. CoreOS was an OS built around running containers but the CoreOS folks eventually spun up rkt most likely because 1) they found Docker unwilling to accept patches that would make it suitable for production and 2) they had ideas about how to make a container runtime better for production.

      The reason CRI-O exists is because Kubernetes needed a stable container runtime that kept up with its development. Docker and Kubernetes didn’t move at the same speed and updates to one wouldn’t necessarily be compatible with the other. All other considerations aside, those had to be tightly coupled and if the two projects aren’t in sync then at least one is going to suffer.

      Note: I’m probably biased, I work for Red Hat and while there were a lot of good folks at Docker I feel like the company was very heavy handed with the project and Solomon was not a great “BDFL.” The market was theirs to lose, and they lost it. It was entirely avoidable, but if you make it clear as a company and project that what the larger community wants will only happen if they go around you or without you, eventually that’s what happens.

    6. 7

      In scenarios that require precise optimization of which tools are used - sure, let’s use containerd (for example) instead of Docker on our production machines running Kubernetes.

      But, sometimes, “monolithic” tools make sense. I want to use containers in my development workflow, which has a lot of requirements (running, building, inspecting…). What do I need? Just Docker. It’s a no-brainer. And thanks to the OCI specification, that setup can generate images that run in production with a different set of tools.

      People tend to call stuff “monolithic” as if it were an obviously bad thing, but monoliths exist because, sometimes, it just makes sense to have a set of functionalities tied together in a single package that is easier to reason about than a multitude of different packages with their different versions.

      1. 4

        I would be more sympathetic to this argument if Docker wasn’t a gigantic pain in the ass to use for local development.

        1. 3

          I agree. Docker belongs on the CI server, not on your laptop. (Or in prod, for that matter.)

        2. 1

          how’s that?

          1. 3
            1. It’s slow
            2. It’s a memory hog
            3. It makes every single thing you want to do at least twice as complicated
      2. 2

        But, sometimes, “monolithic” tools make sense

        I would even say that it’s the correct approach for innovation, right after R&D and through product differentiation. They went through that quite well. Docker’s problem is no longer an architecture or implementation problem; it’s that their innovation has evolved into a commodity.

    7. 2

      The main point I’m missing in the post and the comments is networking: I tried hard to get a service running in Docker to show up on a public interface, i.e. having all TCP/UDP/SCTP traffic for a given IP address show up inside a given Docker container (e.g. for osmo-mgw assigning RTP ports), and also to get multicast working between Docker containers (e.g. for osmo-hlr and distributed-gsm testing). It’s impossible, apparently! My main reason to switch from Docker would be networking. Any hints on that?

      1. 1

        Use the --network host option, and the container will use the same network stack as the host operating system. This only works on Linux, though.
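
        For example (the image name is a placeholder):

        ```
        # All of the host's interfaces, ports and multicast groups are visible
        # inside the container, since it shares the host's network namespace.
        docker run --rm --network host my-osmo-image
        ```

        Note that -p port mappings are ignored in this mode, since there is no separate network namespace to forward into.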

    8. 1

      “Not every story on Medium is free, like this one. Become a member to get unlimited access and support the voices you want to hear more from.”

      Are paywalled stories not verboten here?

    10. 1

      I work on containers. We have a nice test suite system that lets us write properties that should hold on derivative containers: container starts, something runs, container stops, state saved, another container starts, property checked; that sort of thing. One advantage of the otherwise universally maligned daemon model is that this is possible. We still don’t have an adequate replacement for the Docker API in the daemonless model, so these tests can still only be run under Docker environments.
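
      In CLI terms the shape is roughly this (the names are hypothetical; the real suite drives the daemon’s HTTP API to do the same thing):

      ```
      # Start a container, run a workload, capture the resulting state as a
      # derivative image, then check a property against that saved state.
      docker run -d --name step1 base-image run-workload
      docker wait step1                        # block until the workload exits
      docker commit step1 derived:after-step1  # save container state as an image
      docker rm step1
      docker run --rm derived:after-step1 check-property
      ```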

    11. 0

      Why do people say hello to Docker in the first place? Because unix, Make, compiled code, and shell script glue aren’t powerful enuf? Or to make devops a job title that means something. –Sarcastic and jaded; systemd still sucks

      1. 3

        Well, for example, when you’re not working with compiled code (or you’re working with loads of shared libraries), the system configuration starts playing a big role in how your program behaves.

        For example, you have a Python webserver. Firstly, do you have the right Python available? And if this server does something like PDF rendering it might call out to another application to do so, so is that installed properly?

        I think this stuff is more avoidable with, like… proper compilation artifacts, but it can be painful to reproduce your production system locally when you are doing something more novel than just querying a DB and generating text.
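
        A Dockerfile pins both answers at once; an illustrative sketch (wkhtmltopdf stands in for whatever the PDF step shells out to):

        ```
        FROM python:3.11-slim
        # The external tool the app shells out to for PDF rendering:
        RUN apt-get update && apt-get install -y --no-install-recommends wkhtmltopdf \
            && rm -rf /var/lib/apt/lists/*
        # The interpreter version is fixed by the base image; now pin the libraries:
        COPY requirements.txt /app/
        RUN pip install -r /app/requirements.txt
        COPY . /app
        CMD ["python", "/app/server.py"]
        ```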

        1. 3

          Well, for example, when you’re not working with compiled code

          Have you actually ever worked with a bigger C/C++ application? :) I’ve never seen an ecosystem where it is harder to separate your system’s library versions in /usr from custom-installed stuff. Everything from Python, Ruby, and PHP is so much easier to work with in this regard.

      2. 2

        One of the reasons is because “we have a Kubernetes / OpenShift / Frozbinitz Special cluster” that is supported by “the enterprise”, and I can do my initial work on my laptop with Docker.

        I don’t like Docker the company, but Docker the command line tool is fine. Docker Desktop has the potential to be great, but it seems like it would require Docker the company to be purchased by someone less dependent on registered users as a metric for survival.

      3. 2

        Here’s one example. At my last job, I first introduced Docker to our environment because, sadly, we had a few Node.js applications, written at various points in the company’s history, each of which required a particular version of the Node runtime and particular versions of various modules that weren’t compatible with all the runtime versions we were using.

        It made the production configuration a bit of a nightmare to keep track of. If you wanted to run two of these on the same host, you could no longer just install a Node system package, etc.

        Packaging the applications in Docker containers eliminated all those headaches in one step.

        We were not running Kubernetes or any other orchestration system; we just stopped deploying directory trees to our servers. We still used scripts to do our deployment, but they became much simpler scripts than before.
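
        The end state looked roughly like this (names and versions made up): each app’s image declares its own runtime, so incompatible apps coexist on one host with no system-wide Node install.

        ```
        # app-a was written against Node 8, app-b against Node 14
        docker run -d --name app-a my-registry/app-a:1.0   # FROM node:8 inside
        docker run -d --name app-b my-registry/app-b:1.0   # FROM node:14 inside
        ```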

      4. 1

        Because their code doesn’t scale. Docker is a hardware solution to a software problem.

        1. 1

          Can you elaborate on this? I’ve mostly used Docker to simplify dependency management, nothing whatsoever to do with any kind of scaling concern. How is it a “hardware solution” when it is a piece of software?

          1. 1

            K8s (which is just fancy Docker orchestration) allows you to string containers together. You can find many predefined configs that allow rapid scaling up and down, letting an “app” add additional workers when needed, add those to the load balancer, and tear them down when they are idle. It’s really “neat”, but adding (and subtracting) machines as needed is my point about it being a hardware solution to a software problem.

            Instead of worker threads, processes, and LPC you end up with many machines and RPC. The plus side of going with RPC, of course, is that you get to scale across multiple physical hosts easily, while a thread-aware app now needs to manage both LPC and RPC to scale.

            It’s “expensive” hardware-wise, but look at Rancher: it’s a quick and easy k8s that deploys painlessly. Once you set up shared storage, you can take some predefined app setup, hit the plus button in the UI and watch it auto-scale, then hit minus and watch it right-size.
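
            The plus/minus buttons boil down to the same primitives you can drive from the CLI (the deployment name is a placeholder):

            ```
            # Manually scale a deployment out and back in:
            kubectl scale deployment myapp --replicas=10
            kubectl scale deployment myapp --replicas=2

            # Or let the cluster add/remove workers based on CPU load:
            kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80
            ```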

            1. 1

              Sure, that makes total sense, but you’re talking about Kubernetes, not Docker, no? How does any of that apply if you’re running a fixed set of standalone Docker containers on a fixed set of hosts, say, or running Docker in development environments to get a consistent build toolchain?