1. 37
    1. 64

      I find Docker funny, because it’s an admission of defeat: portability is a lie, and dependencies are unmanageable. Installing dependencies on your own OS is a lost battle, so you install a whole new OS instead. The OS is too fragile to be changed, so a complete reinstall is now a natural part of the workflow. It’s “works on my machine” taken to its logical conclusion: you just ship the machine.

      1. 17

        We got here because dependency management for C libraries is terrible and completely inadequate for today’s development practices. I also think Docker is a bit overkill, but I don’t think this situation can be remedied with anything short of NixOS or unikernels.

        1. 9

          I place more of the blame on just how bad dynamic-language packaging is (pip, npm), intersected with how badly most distributions butcher their native packages for those same dynamic languages. The rule of thumb in several communities seems to be to avoid native packages altogether.

          Imagine if, instead, static compilation were more common (or even just better packaging norms for most languages), and if we had better OS-level sandboxing support!
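
          To make that hypothetical concrete, here’s a hedged sketch of what static compilation buys, using Go purely as an example of a language where fully static binaries are easy:

          ```sh
          # Build a fully static binary that carries its dependencies with it
          CGO_ENABLED=0 go build -o app .

          # Confirm there are no dynamic library dependencies
          ldd app   # prints "not a dynamic executable"
          ```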

          1. 3

            Can you explain what you find bad about pip/npm packaging?

            1. 2

              I don’t think npm’s problems rise to Docker-requiring levels. It has always supported project-specific dependencies.

              Python, OTOH, is old enough that by default (if you don’t patch it with pipenv) it expects to use a shared system-global directory for all dependencies. That setup made sense when hard drive space was precious and computers were offline. Plus the whole v2/v3 thing happened.

              1. 5

                by default (if you don’t patch it with pipenv)

                pipenv is…controversial.

                It also isn’t the sole way to accomplish what you want (isolated environments, which are called “virtual environments” in Python). pipenv doesn’t provide that itself; it provides a hopefully-more-convenient interface to the thing that actually does.
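
                For reference, the thing that actually provides the isolation is the venv module built into Python 3; a minimal sketch:

                ```sh
                # Create an isolated environment in ./.venv (venv ships with Python 3.3+)
                python3 -m venv .venv

                # Activate it; pip now installs into .venv, not the system-global directory
                . .venv/bin/activate
                pip install requests

                # Leave the environment when done
                deactivate
                ```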

        2. 4

          Yes, unikernels and “OS as static lib” seem like the sensible way forward from here to me, too. I don’t know why the idea never caught on.

          1. 4

            People with way more experience than me on the subject have made a strong point about debuggability. Also, existing software and libraries assume that a filesystem and other facilities are there, which aren’t immediately available on unikernels, and rewriting them to be reusable on unikernels is not an easy task. I’m also not sure about the state of the tooling for deploying unikernels.

            Right now it’s an uphill battle, but I think we’re just a couple of years away; we’ll get there eventually.

            1. 6

              Painfully easy to debug with GDB: https://nanovms.com/dev/tutorials/debugging-nanos-unikernels-with-gdb-and-ops - Bryan is full of FUD
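
              The tutorial covers the specifics; the general shape (a sketch, assuming the unikernel runs under QEMU with its GDB stub enabled, and with placeholder file names) is:

              ```sh
              # Boot the image with QEMU's GDB stub (-s listens on :1234, -S waits for a debugger)
              qemu-system-x86_64 -s -S -drive file=unikernel.img,format=raw

              # In another terminal, attach GDB, using the unstripped binary for symbols
              gdb ./my-app
              (gdb) target remote localhost:1234
              (gdb) break main
              (gdb) continue
              ```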

              1. 4

                GDB being there is great!

                Now you also might want lsof, netstat, strace, iostat, ltrace… All the tools that tell you what’s going on at the application-to-kernel interface are gone, because the application is the kernel. Those interfaces are subroutine calls or queues instead.

                It’s not insurmountable but you do need to recreate all of these things, no? And they won’t be identical to what people are used to.

                I guess the upside is that making dtrace, or an analogue of it, in unikernel land is prolly easier than it was in split kernel/userspace land: there’s only one address space in which you need to hot-patch code. :)

                1. 2

                  It obviously wouldn’t be “identical to what people are used to”, though; that’s kind of the point. And you don’t want a narrow slice of a full Linux system with just the syscalls you use compiled in. It’d be a completely different and much simpler system, designed without having to constantly jump up and down between privilege levels, which would make a regular debugger a lot more effective at tracking a wider array of things than it can now, living in the user layer of a full OS.

                2. 2

                  Perhaps some tools you’d put in as plugins, but most of the output from these tools would be better off exported through whatever you use for observability (such as Prometheus). One thing that confuses a ton of people is that they expect to deal with a full-blown general-purpose operating system, which this isn’t. Take your lsof example: suppose I’m trying to figure out which port is tied to which process. Well, in this case you already know, because there’s only one.

                  As for things like strace: we actually implemented something similar a year or so ago, as it was vital for figuring out which applications were doing what. We also have ftrace-like functionality.

                  Finally, as for tool parity: you’re right that if all you use is Linux, everything should be relatively the same; but if you jump between, say, macOS and Linux, you’ll find quite a few different flags and different names.

        3. 1

          Can you clarify further? With your distribution’s package manager and pkg-config, development in C and C++ seems fine. I could see Docker being more of a thing on Windows with C libraries, because package management isn’t really a thing on that OS (although msys seems to have pacman, which is nice). Also, wouldn’t you use the same C library dependency management inside the container?

          Funnily enough, we are using Docker at work for non-C languages (dotnet/mono).

      2. 6

        That’s exactly what I said at work when we began Dockerizing our services: “We just concluded that dependency management is impossible, so we may as well hermetically seal everything into a container.” It’s sad that we’re here, but there are several reasons, both technical and business-related, why I see containerization as useful for us at $WORK.

      3. 6

        Which is what we used to do back in the ’70s and ’80s. Then operating systems started providing a common set of interfaces so you could run multiple programs safe from each other (in theory). Then too many holes started opening up, programs began relying on specific global shared libs/state that would clash, and too many assumptions were made about global filesystem layout. Now we’ve got yet another re-implementation of the original idea, just stacked atop and wrapped around the old, the crud piling up around us comprised of yak hair, old buffer overflows, and decisions made when megabytes of storage were our most precious resource.

      4. 1

        What if I told you that you don’t need an os at all in your docker container? You can, and probably should, strip it down to the minimal dependencies required.
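
        A minimal sketch of that idea, assuming a statically linked Go program (all names here are placeholders): a multi-stage build where the final image contains nothing but the binary.

        ```sh
        # Two-stage Dockerfile: compile in a full toolchain image, then copy
        # only the static binary into the empty "scratch" base
        cat > Dockerfile <<'EOF'
        FROM golang:1.21 AS build
        WORKDIR /src
        COPY . .
        RUN CGO_ENABLED=0 go build -o /app .

        FROM scratch
        COPY --from=build /app /app
        ENTRYPOINT ["/app"]
        EOF

        docker build -t myapp .
        ```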

      5. 0

        This is amazing insight. Wow. :O Saving and sending this.

          1. 1

            Thanks for the laugh :’)

    2. 15

      I have a medium-sized pet project in Rails + Webpack that I develop under docker-compose on a Mac. I edit the code directly on the local filesystem, but everything else runs in docker-compose: a dozen images in total (the Rails app, faye, sidekiq, webpack; nginx, mysql, redis, memcached, elasticsearch, graphite; plus a “shell” container).

      I’ve spent quite some time tuning it (~80 commits over 3 years), but now it can be set up on a fresh laptop in ~20 minutes (mostly spent waiting for downloads and builds). I’ve done it multiple times.

      Also, when you change any dependencies (package.json, Gemfile, the required Ruby version, etc.), they get updated incrementally: you just Ctrl-C and docker-compose up. It all runs on several small shell scripts coordinated by a Makefile. All the dependency packages (yarn and gems) are kept in Docker volumes.
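
      The day-to-day loop looks roughly like this (the service name here is invented):

      ```sh
      # Ctrl-C the running stack, then bring it back up; compose rebuilds only
      # the layers whose inputs (Gemfile, package.json, ...) actually changed
      docker-compose up --build

      # One-off commands run in a throwaway container against the same volumes
      docker-compose run --rm app bundle exec rake db:migrate
      ```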

      When you change application code, both the Rails apps and Webpack update automatically with normal hot reloading.

      All of the uploaded files are kept in mounted directories.

      The configuration is not perfect: it’s slower than bare metal, of course, and Docker for Mac had multiple issues (though it’s much better nowadays). But my main goal was to prevent human mistakes: everything loads precisely the versions that are stored in version control, and the system is completely self-contained. That goal it handles perfectly.

      Ahhh, and of course it’s running with full local HTTPS enabled, no compromises.

      If there is sufficient interest I may post some specific advice about this configuration. I quickly googled for containerized Rails, and the first few articles have issues that my configuration doesn’t :)

    3. 13

      This guy is doing it wrong:

      You want to install a new package? Oh, open a shell into your container, then install it.

      If you need to add dependencies, rebuild. If you really want to run a command inside your running container, docker exec is there for you to wrap with your favorite build tool.
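
      A sketch of that wrapping (the container name and commands are hypothetical):

      ```sh
      # Run a one-off command inside the already-running dev container
      docker exec -it myapp-dev bundle exec rspec

      # Hide the long form behind a tiny wrapper script
      cat > bin/dev <<'EOF'
      #!/bin/sh
      exec docker exec -it myapp-dev "$@"
      EOF
      chmod +x bin/dev

      ./bin/dev bundle exec rspec
      ```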

      And I don’t have a pat answer for compiling in a container, but I will say that your deployment artifact shouldn’t have your compiler in it.

      If you really want a VM-like experience, then by all means use a VM.

      1. 9

        That’s missing the point a bit, but I addressed that in the previous paragraph of the article:

        Or you can install the dependencies into the image, but then you have to rebuild the entire image and install every dependency to just add one for testing something out, adding unnecessary friction.

        It’s quite possibly the right way to do it, but for a development environment I’ve just found it’s a ton of extra friction when testing things out and experimenting.

        The broader point is that everything you do interactively while developing requires some extra steps, and in my experience development takes some interactive steps (to launch a test runner, to run a dev server, etc.).

        1. 1

          in my experience development takes some interactive steps (to launch a test runner, to run a dev server, etc.).

          There’s no reason any of those things need to be interactive for day to day development. Docker is popular with people who want to minimize manual steps. It sounds like your preferred style of working is quite different.

          Incidentally, you can and should run multiple containers for different processes. This usually doesn’t take extra steps day to day, because the usual pattern is to wrap the whole command line with a command runner (make, rake, whatever you like).
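
          For instance (service names invented for illustration):

          ```sh
          # One container per process, each behind a single short command
          docker-compose up -d web          # dev server in the background
          docker-compose run --rm test      # test runner in its own container
          docker-compose logs -f web        # tail the server while you work
          ```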

          Fwiw, I have different patterns for different languages. For Python I usually develop in a virtual env, and replicate that in my docker container for deployment.

          For golang, I compile outside of the container, and deploy the container to kube for testing.

          Assuming you actually want to use docker as part of your development, I’d suggest you figure out the additional tooling to make it convenient for you.

    4. 9

      Add unreliable translation of filesystem events for the purpose of live reload, live rebuild, etc.

      1. 2

        I originally wrote watchexec, a Rust CLI utility focused on running commands when files are modified.

        I suspect about 50% of the GitHub issues involve Docker somehow. Not that they didn’t reference some real concerns, but it felt like Docker was essentially Weird Linux: it would subtly break user workflows, forcing tools to adapt to it.
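
        For context, typical watchexec usage looks like this; the recurring failure mode in those issues was filesystem events never making it across the Docker-for-Mac VM boundary, so triggers like these would silently never fire:

        ```sh
        # Re-run the test suite whenever a .rs file changes
        watchexec -e rs -- cargo test

        # Restart a dev server on every change (-r kills the previous run)
        watchexec -r -- python server.py
        ```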

        I really dislike cottage industries that grow up around stuff like this.

    5. 7

      The big problem with Docker is that it’s touted/envisioned as a way of packaging all the dependencies a tool needs and then running that tool on any machine. What is sorely lacking, and what this article points out, is integration with the surrounding environment. For example, if it were easy, I should be able to list all the Docker containers containing the tools I need and then use them from my favorite shell, without needing to preface anything with docker.

      Luckily for me, it is easy when using Nix, so I don’t look much into docker. Unfortunately, I sometimes work with people that like to suffer and use inferior solutions…

    6. 7

      I’m in a position where I can affect how we build our stack, and we use Docker, but only for deployment. Dev is done using normal processes straight on the host. We’ve rewritten pretty much the entire stack and throughout kept a strict check on dependencies. We avoid NodeJS kitchen sink deps like webpack/babel, ava, etc. Similarly in Rust, we avoid tokio, actix, etc., unless we really benefit from them.

      It takes discipline, but with carefully managed dependencies we have both happy developers and happy ops.

      1. 1

        We avoid NodeJS kitchen sink deps like webpack/babel, ava, etc

        Fwiw one thing I’ve seen work okay is:

        • have an HTTP API backend written for Node using tsc
        • use webpack to produce a single-file JS bundle of the server-side code
        • write a small script that invokes webpack and zips up the output plus all static assets
        • deploy that

        The end result is a zip file of a few MB, despite starting from a node_modules that was much bigger, since only runtime dependencies get pulled in. No need to run either “npm install” or “yarn” on the servers, nor to bundle the entirety of node_modules.

        One downside is that some code does dubious things with __dirname and dynamic require(). You tend to find out about it when deploying to staging.
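
        A sketch of the small script described above (file and config names are illustrative):

        ```sh
        #!/bin/sh
        set -e

        # Bundle the server entry point; webpack follows the require() graph,
        # so only runtime dependencies end up in dist/server.js
        npx webpack --config webpack.server.js

        # Ship the bundle plus static assets; node_modules stays behind
        zip -r deploy.zip dist/ static/
        ```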

    7. 6

      Huh?

      Performance is horrible on Macs.

      No it’s not. And then there are no numbers to back up that claim, just …

      And yet the performance is tangibly worse for Docker on Macs, because you have to run it inside a VM.

      Well, that could be a reason why performance might be worse, but VMs are actually pretty darn efficient these days for many tasks. It certainly in no way justifies the “horrible” in the headline. It doesn’t even justify “tangibly”; for that we would at least need to see some numbers.

      When making performance claims, always be measuring. Otherwise you will get egg on your face.

      Furthermore:

      Windows cleverly skirts this problem by running the Linux kernel directly, not inside a VM;

      That turns out not to be the case. It used to run directly, but, er, performance was “horrible” particularly for file-intensive operations. So with WSL2, Microsoft improved performance (dramatically) by switching to running the Linux kernel inside a VM.

      So exactly the opposite of what the author claims.

      (This is all easily duckable; see for example Initial Impressions of WSL 2 - 13× Faster than WSL 1.)

      1. 1

        Compiling C++ in Docker is definitely not the same as compiling it on bare metal. If I could spit out Linux binaries from a Mac I’d consider it, but the build process for some stuff at $work isn’t Mac-ready.

    8. 4

      Containers for local development should be treated as a last resort. If setting up a development environment for a project on a local machine gives you a headache because of incompatible architectures, dependency conflicts, etc., then containerizing the dev environment might be the lesser of two evils. Applying containerization blindly just gives you the drawbacks described in the article without an immediate benefit. That said, containerizing build/test/staging/production environments is typically a good idea.

    9. 3

      I have a suspicion that a large part of the big discrepancy in people’s reactions to using Docker, especially in development, is whether your dev machine is Linux or not, which people often forget to mention.

      As a long-time Linux user, I already know how to do things like install multiple versions of Postgres on my dev machine (which isn’t hard) and isolate projects as needed (virtualenvs, etc.), in ways that are far more lightweight and convenient than Docker, and I very, very rarely hit “it works on my machine but not in prod”. I imagine that if I were a Mac user, Docker might be a lifesaver.
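
      For the record, a sketch of the multiple-Postgres case on Debian-family systems, where the postgresql-common tooling manages versioned clusters side by side:

      ```sh
      # Two major versions installed side by side, each cluster on its own port
      sudo apt install postgresql-13 postgresql-14

      pg_lsclusters                      # list clusters, versions, and ports
      sudo pg_ctlcluster 13 main start   # start/stop one specific version
      ```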

    10. 2

      I’ve started using Docker for creating CI images. Being able to package all the tools I need for various checks has proven very valuable. I cache only one image and see improved speeds.
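
      The usual shape of that setup is something like this (the registry and file names are placeholders):

      ```sh
      # Bake all the check tooling into one image, push it once, and let CI
      # jobs pull the cached image instead of reinstalling tools every run
      docker build -f ci.Dockerfile -t registry.example.com/myproj/ci:latest .
      docker push registry.example.com/myproj/ci:latest
      ```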

      When it comes to using it for packaging code, I have no experience.

      Do you guys see my use case as valid? Or do you have any suggestions?

    11. 2

      While Docker is probably the most commonly used container runtime, it seems a little disingenuous to present this article as the drawbacks of developing in “containers” when most of the issues mentioned are specific to Docker, and some are even specific to the particular image the author has chosen.

    12. 2

      Wrote an article with similar sentiments: I get why many companies use it, but I wish the downsides were mentioned more often and more explicitly.

    13. 1

      Some (not all) of those issues are solved by tools like toolbox: https://github.com/containers/toolbox

      1. 1

        But then we have yet another tool to learn and understand. I’m with srpablo in that regard: we should always compare the advantages we can actually use from a new technology against the downsides it brings, and then evaluate whether the benefits for our use case (which in most cases are only a subset of the benefits the tool advertises) really outweigh the new abstraction layer, the new tool to learn, and the tool’s downsides.

        btw I recommend the article linked by srpablo.

    14. 1

      Opening a shell into the container is a pain, but tooling will make it easier over time. VS Code already does this well with the Remote - Containers extension.

      1. 1

        Oh, and GitHub’s Codespaces feature really makes it easy too.