1. 0
  1. 15

    With containers, security generally already comes baked in.

    People really believe this now?

    Using a container, not a virtual machine, the attack surface is already decreased.

    Excuse me. Containers add attack surface to a system, which may very well be running in a VM.

    That’s something to consider before we even consider the security aspect of everyone shipping a bit-rotting copy of their entire fucking stack in a container.

    1. 2

      By default Docker drops all capabilities except those needed, a whitelist instead of a blacklist approach. You can see a full list of available capabilities in Linux manpages.

      From: https://docs.docker.com/engine/security/security/#linux-kernel-capabilities

      If I’m not mistaken, this is actually an improvement. Adding an AppArmor profile is also very easy (it was with systemd, I know).

      I often hear that containers are not as secure as they claim to be, fine, but I never see any proof that it’s less secure than an application run as a default “deploy” user in a VM, without anything security-specific taken care of.

      I’d be happy to be proven wrong, but I don’t see how it’s really less secure than that.

      1. 12
        By default Docker drops all capabilities except those needed, a whitelist instead of a blacklist approach. You can see a full list of available capabilities in Linux manpages.

        From: https://docs.docker.com/engine/security/security/#linux-kernel-capabilities

        If I’m not mistaken, this is actually an improvement. Adding an AppArmor profile is also very easy (it was with systemd, I know).

        Improvement over what? For non-root users, no capabilities is the default. So if you do the right thing, there is nothing to drop, and docker can’t improve upon that.

        Instead, I see containers request a bunch of capabilities, including CAP_SYS_ADMIN, CAP_NET_ADMIN, etcetera, because the capability system is lackluster and people who work with containers seem to have forgotten everything about privilege separation, so they just run shit as root + add a bunch of caps. I don’t see how that can be an improvement over anything except maybe running everything as root.
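        To make the “nothing to drop” point concrete: on Linux, an ordinary unprivileged process already starts with an empty effective capability set, which you can check by reading /proc/self/status. A minimal Ruby sketch (Linux-only; the function name is just illustrative):

        ```ruby
        # Read the effective capability mask of the current process from
        # /proc/self/status (the "CapEff:" line, a hex bitmask).
        def effective_caps(status_path = "/proc/self/status")
          File.foreach(status_path) do |line|
            return line.split[1].to_i(16) if line.start_with?("CapEff:")
          end
          raise "CapEff line not found"
        end

        # Typically 0x0 for a regular non-root user; a root shell (or a
        # default Docker container running as root) shows a non-empty mask.
        puts format("0x%x", effective_caps)
        ```

        In other words, the whitelist baseline Docker advertises is something a correctly unprivileged process on a plain host already has for free.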

        I often hear that containers are not as secure as they claim to be, fine

        I don’t hear containers claiming to be secure, as much as they’re just claiming to be a way for people to ship their entire shit stack so nobody need to worry about updates breaking things.

        I never see any proof that it’s less secure than an application run as a default “deploy” user in a VM, without anything security-specific taken care of.

        I’d be happy to be proven wrong, but I don’t see how it’s really less secure than that.

        I don’t know what this default setup you’re trying to call out actually is. Historically, daemons and operating systems have a whole bunch of security measures built in and enabled by default.

        Either way, “proving it’s less secure” isn’t the way we go about security. Proving that something is secure is what we care about. Unless proven otherwise, it’s insecure. I haven’t seen any study or proof of container security.

        1. 3

          Ok, I understand a bit better your point of view.

    2. 10

      The one problem I’ve had with Docker over the years is how “special” it is. We’ve added so much code to our (Rails) application over the last year just to accommodate the Docker workflow and all of its idiosyncrasies. Kubernetes too. For example, for some insane reason it sends health checks to the container’s web server from the address 127.0.0.1:0, which Ruby’s IPAddr considers invalid. Also, within the last week we’ve been experiencing test failures related to Headless Chrome not being able to run as root, which is the DEFAULT user in Docker containers (how anyone got this past the security censors is beyond me, but there you go, that’s where we’re at these days :P)
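      For what it’s worth, the workaround for that malformed health-check address ends up being small but annoying. This is an illustrative sketch, not our actual code (the trusted_proxy? name is made up): strip a trailing port before handing the string to IPAddr, since IPAddr only parses bare addresses.

      ```ruby
      require "ipaddr"

      # Decide whether a health check came from loopback, tolerating the
      # "127.0.0.1:0" form Kubernetes can send. IPv4 only -- an IPv6
      # literal like "::1" would need smarter port stripping.
      def trusted_proxy?(raw)
        addr = raw.sub(/:\d+\z/, "") # drop a trailing :port, if any
        IPAddr.new(addr).loopback?
      rescue IPAddr::InvalidAddressError
        false
      end

      trusted_proxy?("127.0.0.1:0") # true once the port is stripped
      trusted_proxy?("10.0.0.1")    # false, not loopback
      ```

      Without the sub, IPAddr.new("127.0.0.1:0") just raises IPAddr::InvalidAddressError, which is exactly the failure mode described above.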

      And after all this, running Docker in development is still slow as balls. So much indirection just to save a few questions in the initial setup process? Nobody wants to do it. At the beginning of the year almost all of our platform’s devs were using Docker; now only a handful are still developing in it. So, “Ship the whole stack, not just code” is a great idea in theory, but in practice there ends up being enough difference between dev and prod that you might as well have just run the thing on a regular old VM or bare metal.

      1. 5

        I’ve been saying this for a while now, but maybe my messaging has been off. People look at the k8s and Docker brochure and forget to ask how it’s actually going to fit into their workflows. There is no way around the fact that it’s another layer of abstraction, and in my experience it’s not even a good one for actual production systems.

        It’s great for running one-off tasks that need self-contained isolation, like CI jobs that just pull in information and push out some result without persisting any local state. Going outside of this sweet spot gets really hairy really fast: you start needing storage plugins, network plugins, and sidecars for even the simplest systems. It’s an amazing amount of overhead for something that’s meant to be the future of development.