1. 36

  2. 15

    Man, remember the bad old days where we had to manually deploy security vulnerabilities?

    Thank God Docker is there to back us up. :)

    1. 28

      Yeah, it’s a really strange reversal. For a while, life was hard because you had to download and build software yourself. Then ubuntu/redhat/whatevs came along and it’s just apt-get update to always have the latest. No need to worry about security updates; there’s a whole team of people doing that. Oh, but that’s so hard! I have to install 813231 dependencies for my app stack. So now I download this docker blob which is, again, completely outside my OS’s update mechanism. Now I’m at the mercy of whoever to update it. Even if the teams responsible for these docker images tried to keep them up to date, it’s diffusing the work across way more people, which isn’t actually the way you want to do things. The work is all duplicated.

      Wrong solution for the wrong problem -> more problems.

      1. [Comment removed by author]

        1. 4

          This seems to complicate the admin overhead rather than simplify it, no? I’m running linux distro WoolyGecko, and I install discourse using their preferred docker container, which is built on distro SeedlessPotato. I know nothing about SeedlessPotato, so how do I know to update it? The end game here is I have to learn some new magic apt/yum/apk/pkg incantation for every application. How is that not worse than before?
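
          Roughly the dance I mean, as a sketch; the container name, image, and package manager here are all guesses, since the whole point is that I don’t know what’s inside:

          ```sh
          # step 1: figure out what distro the black-box container actually runs
          docker exec discourse_app cat /etc/os-release

          # step 2: guess the right incantation for that distro
          docker exec discourse_app sh -c 'apt-get update && apt-get -y upgrade'   # Debian/Ubuntu
          # docker exec discourse_app apk upgrade                                  # Alpine
          # docker exec discourse_app yum -y update                                # RHEL/CentOS

          # step 3: remember that all of this is lost the next time the container
          # is recreated from the original image
          ```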

          1. 1

            Yes, it does. You either maintain another layer of OS (which works in my Docker use case) or you trust Joe Blow of the internet to install your server’s OS for you and maintain it at his discretion. Docker is not some magic sysadmin replacement, just as VMs and “the cloud” aren’t either. There are still servers and OSs that have to be maintained by someone. The mentality seems to be: shrug, someone else will take care of it.

            EDIT: On the flip side, the added overhead is easily scriptable and automatable with Docker. Building a container is a lot easier than imaging a server or deploying a VM, although there are tools for those situations that do come close.
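
            For example, a rebuild-and-redeploy can be a handful of lines; the image name, container name, and port below are made-up placeholders.

            ```sh
            #!/bin/sh
            # minimal sketch: rebuild the image, then swap the running container for it
            set -e
            TAG="myapp:$(date +%Y%m%d)"                 # hypothetical image name/tag
            docker build -t "$TAG" .
            docker rm -f myapp 2>/dev/null || true      # drop the old container, if any
            docker run -d --name myapp -p 8080:8080 "$TAG"
            ```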

            1. 8

              For clarity, my perspective is mostly the docker-as-packaging-tool angle. As a deployment option for your own product, do whatever works for you, of course; I can see the advantage. But for things (like discourse) where the trend is to use docker as a magic recipe to resolve insane dependency chains, I’m increasingly concerned it’s a major step backwards.

          2. 1

            Clearly, whatever upgrade mechanism your OS has won’t apply to the containers.
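
            A small illustration of that, assuming a Debian-based host and a hypothetical container named myapp that also happens to be Debian-based:

            ```sh
            # patch the host the usual way
            sudo apt-get update && sudo apt-get -y upgrade

            # the container's filesystem is untouched; it still has pending updates
            docker exec myapp sh -c 'apt-get update -qq && apt list --upgradable'
            ```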

            1. 1

              That’s why you run it in the containers. The first thing I do in a Dockerfile is update the OS.
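
              Something like this sketch, with the base image, app files, and entrypoint as placeholders; the Dockerfile is fed to docker build on stdin just to keep it in one snippet.

              ```sh
              # build with a Dockerfile whose first real step is an OS update
              docker build -t myapp:latest -f- . <<'EOF'
              FROM ubuntu:22.04
              # first thing: pull in the distro's pending (security) updates
              RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
              COPY . /app
              CMD ["/app/run.sh"]
              EOF
              ```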

              1. 4

                Isn’t that frowned upon, though, because it makes your build technically non-reproducible? In other words, if you bring down a running container (they’re supposed to be ephemeral, at least that seems to be the way people do things) and replace it with an “identical” container, the new one might actually have different versions of things, so it isn’t guaranteed to be truly identical.
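
                A quick way to see that drift, assuming a Debian-based image and made-up tags:

                ```sh
                # build the "same" Dockerfile twice, some time apart, and compare package sets
                docker build --no-cache -t app:first .
                docker build --no-cache -t app:second .

                docker run --rm app:first  dpkg -l | sha256sum
                docker run --rm app:second dpkg -l | sha256sum   # differs if the upgrade step pulled newer packages
                ```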

                1. 1

                  Depends on what you are doing. If you want to redeploy the exact same thing, use the same existing image; don’t rebuild it. If you need to rebuild it to update your application, keep your own base OS image that you only update when planned, and rebuild your application image on top of it (sketch below). That puts you in the same situation as deciding when to pull an updated image from the registry.

                  And to clarify terminology: ‘Images’ are built from a Dockerfile and are long-lived and static. ‘Containers’ are running instances of an image and are expected to be ephemeral.
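
                  As a sketch of the “own base image” idea (registry.example.com and the tags are placeholders):

                  ```sh
                  # snapshot a base image you control, refreshed only on your own schedule
                  docker pull debian:bookworm
                  docker tag debian:bookworm registry.example.com/base/debian:2024-06
                  docker push registry.example.com/base/debian:2024-06

                  # the application Dockerfile then builds on that pinned base:
                  #   FROM registry.example.com/base/debian:2024-06
                  # and a plain redeploy reuses the existing app image instead of rebuilding it
                  ```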

                2. 1

                  Doesn’t that apply only when building the image? If you look at where people seem to be going with docker (a distro-independent packaging system you download things from), what you do in your dockerfile doesn’t really seem all that relevant. On the flip side, my OS’s package manager has a lot more introspection than images currently do, as far as I have seen at least.
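
                  For instance (the image name is just an example, and the last command only works if you already know the image is Debian-based):

                  ```sh
                  # host package manager: exact versions, backed by the distro's security tracking
                  dpkg -l | grep openssl

                  # docker image: opaque layer history unless you dig into it yourself
                  docker history --no-trunc someimage:latest
                  docker run --rm someimage:latest dpkg -l | grep openssl
                  ```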

            2. 1

              Wrong solution for the wrong problem -> more problems.

              Or as they say: more monkeys, more problems.

              1. 2

                Fortunately, not my monkeys, not my circus. :)

          3. 5

            I wonder how many of them are actually exploitable given the attack surface Docker actually exposes. Like, I imagine some amount of them are of the form “a user on the system can exploit a race condition in the file system such that…”, or “a user can read privileged files by…”.

            1. 2

              why do you think docker has a smaller attack surface?

              1. 7

                I think it probably has a larger attack surface, but I wonder if these CVEs were relevant to the one it has. I think it’s definitely got a different attack surface.

                1. 1

                  I’m guessing this is because, generally, Docker executes a single process per container, which translates to one entry point per container. (?)

                2. 2

                  That’s a fair question, but there’s a long history of vulns that chain off other “irrelevant” vulns. The file race condition seems meaningless until somebody finds an equally meaningless directory traversal bug and combines the two into RCE. Some number of people are also going to take these images, customize them, and expose that attack surface. “Not exploitable today” has rarely converted to “not exploitable ever”.

                  1. 1

                    Well, maybe. A file race condition doesn’t matter if there is no getty and you believe in cgroups and the cgroup is set up correctly. I’d agree a bug in encapsulation could be amplified, but it seems like there are entire classes of bugs that are made essentially safe by the architectural firewall.