1. 19

  2. 3

    > The problem, however, is in how those images get produced. Take https://github.com/CentOS/CentOS-Do... for example, from the official CentOS Dockerfile repository. What’s wrong with this? IT’S DOWNLOADING ARBITRARY CODE OVER HTTP!!!

    What’s wrong with auditing the Dockerfile? Seems to me Docker is a lot more transparent than other methods. Thoughts?

    1. 5

      It’s nice that you can audit them, but they’re all written like this. Docker claims it can be used for reproducible builds, but the first lines in every single Dockerfile are apt-get install a-whole-bunch-of-crap and npm/pip/gem install oh-my-god-thats-a-lot-of-packages. Nobody is actually trying to manage their dependencies or develop self-contained codebases, just crossing their fingers and hoping upstream doesn’t break anything.

      1. 1

        How is this different from build systems that don’t use Docker? Sure, you might be using Jenkins to build stuff (and have to manage those hosts for the OS-level packages), but for the npm/pip/gem/jar dependencies, etc., there’s no difference. You still have to manage your dependencies. In my experience, the Docker stuff helps with the OS-level packages (previously we had multiple Jenkins hosts with package versions specific to particular projects – god help you if you accidentally built your project on the wrong host).

        1. 4

          I use maven, where the release plugin enforces that releases only depend on releases, and releases are immutable, which together means that builds are reproducible (unless someone used version ranges, but the culture is to not do that). You can also specify the GPG keys to check signatures against for each dependency. It’s not the default configuration and there’s a bootstrapping problem (you’d better make sure the version of the gpg plugin cached on your Jenkins machine is one that actually checks), but it’s doable.
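
          To make that concrete, here is a minimal sketch of what the signature check can look like from the command line, assuming the pgpverify-maven-plugin and a conventional keys-map path (both are my choice for illustration, not necessarily what this setup uses):

          ```sh
          # Verify the PGP signatures of the project's resolved dependencies.
          # Plugin choice and the keys-map path are illustrative assumptions.
          mvn org.simplify4u.plugins:pgpverify-maven-plugin:check

          # Trusted key fingerprints per artifact are pinned in a keys-map file
          # (e.g. src/main/resources/pgp-keys-map.list) referenced from the
          # plugin configuration in pom.xml, so a swapped upstream key fails the build.
          ```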

          1. 1

            On personal projects and at work I’ve been putting all the dependencies I use in the source repository. Usually we include the source code; for build tools (premake, clang-format) we add binaries to the repo instead.

            There are never any surprises from upstream, and you can build the code on any machine that has git and a C++ compiler.

            There’s some friction when adding a new library, but I don’t think that’s a bad thing. If a dependency is really too difficult to integrate with our build system, then the code is probably going to be difficult too. If we need to do something easy, people will just write it themselves.
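
            For illustration, that vendoring workflow might look something like this (library name and paths are hypothetical):

            ```sh
            # Vendor a third-party library's source into the repo (names are made up).
            cp -r ~/downloads/zlib-1.2.11 third_party/zlib
            git add third_party/zlib

            # Build tools are committed as binaries rather than fetched at build time.
            git add tools/premake5 tools/clang-format
            git commit -m "Vendor zlib 1.2.11 and check in build tools"
            ```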

        2. 1

          At the risk of stating the obvious: if you audit the Dockerfile and it says “hey we downloaded this thing over HTTP and never checked the signature” there’s no way to tell if you got MITMed.

          1. 3

            Okay, so then you use another Dockerfile (or write your own). This is a very strange tack to take; you may as well say that Rust is an insecure programming language because with a few lines of code you can create a trivial RCE vulnerability (open a listener socket, accept a connection, read a line, spawn a shell command).

            For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked! And when installing via apt isn’t an option, Docker doesn’t keep you from doing the right thing (download over https, check signatures). You’re just running shell commands, after all.
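
            For example, “doing the right thing” inside a Dockerfile RUN step could look roughly like this, here with a pinned checksum rather than a GPG signature (URL and digest are placeholders, not taken from the CentOS Dockerfile in question):

            ```sh
            # Fetch over HTTPS and fail the build if the checksum doesn't match.
            curl -fsSL -o app.tar.gz https://example.com/app-1.2.3.tar.gz
            echo "<expected-sha256>  app.tar.gz" | sha256sum -c -
            tar -xzf app.tar.gz -C /usr/local
            ```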

            1. 1

              My point exactly. There’s nothing wrong with taking an existing Dockerfile that you find to be suspect, beefing it up by correcting some obvious security issues, and resubmitting it as a patch.

              I fail to see what the author of the article thinks is a better alternative. I’m open to being convinced otherwise, but saying it’s actively harmful seems overstated.

              1. 1

                > For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked!

                OK, so the signatures are checked. You still don’t know what version you got.

                1. 4

                  Then pin the damned versions (apt-get install <pkg>=<version>), point at snapshot repos, and upgrade deliberately. This problem is totally orthogonal to Docker: all typical package repos suffer from it, and Nix is the only one I know of that doesn’t.
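
                  A rough sketch of what that looks like in practice (package name and version string are placeholders):

                  ```sh
                  apt-get update
                  apt-cache madison curl                    # see which versions the repo offers
                  apt-get install -y curl=7.47.0-1ubuntu2   # pin the exact version you tested

                  # For fully frozen inputs, point sources.list at a dated snapshot mirror
                  # (e.g. snapshot.debian.org) and bump the date only when you mean to upgrade.
                  ```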

              2. 1

                This is why companies that care host their own registry for Docker images, just like they’ve done for Java, Python, Ruby, etc., for years. It is unfortunate that Docker didn’t design the registry system to be easily proxied, but this is easily worked around with current registry tools (Artifactory, for one).
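
                As a rough illustration (the registry hostname is made up), builds then pull and push only through the in-house registry:

                ```sh
                docker pull registry.internal.example.com/library/centos:7
                docker tag myapp:1.4.2 registry.internal.example.com/myapp:1.4.2
                docker push registry.internal.example.com/myapp:1.4.2
                ```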

            2. 2

              It’s possible I’m not reading this charitably enough, or that I’m utterly misinformed, but:

              > I’ve come to the conclusion that Docker is actively harmful to organizations. Not the underlying technology…I think LXC is fantastic as are cgroups.

              As I understand it, anyway, LXC isn’t quite an underlying technology of Docker. LXC is, like Docker, a collection of user-space utilities that interact with the Linux kernel’s “container” features (namespaces, cgroups) and fancy container-friendly filesystems like btrfs, ZFS, or OverlayFS[1]. Docker and LXC provide roughly the same level of abstraction.
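
              As a quick illustration of the “same level of abstraction” point, both user-space tools end up asking the kernel for the same namespace/cgroup plumbing (commands are illustrative):

              ```sh
              # LXC: create and start an Ubuntu container via the download template.
              lxc-create -n web -t download -- --dist ubuntu --release trusty --arch amd64
              lxc-start -n web

              # Docker: roughly the equivalent, plus image distribution on top.
              docker run -it ubuntu:14.04 /bin/bash
              ```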

              > This is to say nothing about the Docker project itself, which seems to think LTS is for suckers and everything should be bleeding edge. Bleeding edge FS. Bleeding edge Networking. Glibc? Fuck GNU as a staff, open source organization, and as a fucking crew. Let’s switch to Musl.

              I’m on board with the wariness at Docker’s reliance on bleeding edge technologies (it was an enormous pain to run Docker on Ubuntu 12.04, which we are finally moving away from at work), but as far as I know nothing in Docker itself relies on musl. Some folks who build images might prefer to base them on musl-using distributions like Alpine. This jab hardly seems fair to aim at the Docker project itself, though.

              [1]: I believe that Docker was originally a wrapper around lxc, which might be why this misconception persists.

              1. 1

                Maybe there needs to be some kind of URL pinning, where downloaded resources are tied to SHAs. Realistically, installing Node dependencies involves going to the web. But once you have a build that works, you should be able to use whatever you downloaded over and over…
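
                A partial version of that already exists on the npm side: record the exact resolved dependency tree once and commit it, so later builds reuse it instead of re-resolving (a sketch, not a full answer to content-hash pinning):

                ```sh
                npm install
                npm shrinkwrap            # writes npm-shrinkwrap.json with the resolved versions
                git add npm-shrinkwrap.json
                git commit -m "Pin Node dependencies"
                ```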