1. 6

  2. 4

    We recently learned that Linux “containers” are not secure by design, which makes me unsure where this fits in.

    Is this just a “because it’s so easy, we’ll do it” thing? Given that it’s Guix, there is no problem with a package polluting the system with cruft, and no issue with running multiple versions of a package in parallel. If you’re installing and running binaries that you don’t trust, then I don’t think Linux namespaces will help you. And if you’re concerned that a binary you run might be vulnerable, doesn’t the traditional unix permission model already protect you without a whole extra, insecure-by-design, layer?

    1. 1

      “And if you’re concerned that a binary you run might be vulnerable, doesn’t the traditional unix permission model already protect you without a whole extra, insecure-by-design, layer?”

      It does until the binary talks to the kernel or a more privileged app. Also, there are no mitigations for covert storage or timing channels: it can listen for secrets and then leak them. Permissions, aka Discretionary Access Control (DAC), are the weakest security model, since so much can get around them.

      1. 2

        “It does until the binary talks to the kernel or a more privileged app”

        A container isn’t going to protect you from interacting with the kernel, though. In fact, as I understand Linux containers (which isn’t much, so someone should correct me), none of the security issues you mentioned would be mitigated by them. I wonder which case is least secure: a Linux container protecting a malicious application from getting into the host system or from getting into another container. Since the idea is to run all services in their own container, the latter seems worrying if it is an issue.
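        To make that first point concrete: namespaces change what a process *sees*, not which kernel it talks to. A minimal sketch in C (assuming a Linux system with unprivileged user namespaces enabled; it degrades gracefully if they’re not) enters fresh user and mount namespaces and shows it is still issuing syscalls to the same host kernel, with the same attack surface behind them:

        ```c
        /* Sketch: a process placed in new namespaces still talks to the one
           shared host kernel. Assumes Linux; falls back gracefully if
           unprivileged user namespaces are disabled. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/utsname.h>

        int main(void) {
            struct utsname before, after;
            uname(&before);

            /* Try to enter fresh user + mount namespaces ("container-lite"). */
            if (unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0)
                perror("unshare (may be disabled on this system)");

            uname(&after);
            /* Same kernel release either way: the syscall boundary -- and any
               kernel 0-day behind it -- is identical inside the "container". */
            printf("host kernel:      %s\n", before.release);
            printf("inside namespace: %s\n", after.release);
            printf("same kernel: %s\n",
                   strcmp(before.release, after.release) == 0 ? "yes" : "no");
            return 0;
        }
        ```

        Whatever view of mounts or UIDs the namespaced process gets, every syscall still lands in the host kernel.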

        1. 1

          “none of the security issues you mentioned would be mitigated by them”

          That’s kind of my point. It’s why high-assurance has opposed them, since all such tech got smashed during pentesting. The only ones that survived were tiny kernels virtualizing the whole OS in a lower ring or, especially, in user mode. The kernels were small, blocked covert channels where possible, and mediated inter-VM communications. They also did well in pentesting. This new stuff is band-aids that keep getting brushed off by attackers.

          “I wonder which case is least secure: a Linux container protecting a malicious application from getting into the host system or from getting into another container.”

          That depends. Either is compromised if there are 0-days in the kernel. If they can talk to each other directly, then one might compromise the other. If they can talk to the kernel, then any path that lets one affect the other through the kernel might be an attack vector. If one accesses memory the other touched, that might be an attack one way or a leak the other. If they share resources, whether temporary storage or things like cache, that can become a leak. If more than one is simultaneously compromised, then they could be used to chain an attack on the host. Which model is less or more secure depends on how IPC, memory re-use, resource sharing, and mediation are handled. I don’t know Linux containers well enough to tell you that.
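          A toy illustration of the shared-resource leak (purely a sketch; the /tmp path and present/absent encoding are made up): two parties with no sanctioned IPC can still signal each other through the mere existence of a shared file, which stands in here for any shared medium like a cache line or device buffer.

          ```c
          /* Sketch of a covert storage channel: the sender encodes a bit as
             file-present/absent in shared storage; a co-resident observer
             reads it back. Path and encoding are hypothetical. */
          #include <stdio.h>
          #include <unistd.h>
          #include <fcntl.h>

          static const char *FLAG = "/tmp/covert_flag_demo";

          static void send_bit(int bit) {
              if (bit) {
                  int fd = open(FLAG, O_CREAT | O_WRONLY, 0600);
                  if (fd >= 0) close(fd);
              } else {
                  unlink(FLAG);
              }
          }

          static int recv_bit(void) {
              return access(FLAG, F_OK) == 0 ? 1 : 0;
          }

          int main(void) {
              int secret[] = {1, 0, 1, 1, 0};
              printf("leaked:");
              for (int i = 0; i < 5; i++) {
                  send_bit(secret[i]);       /* compromised service modulates shared state */
                  printf(" %d", recv_bit()); /* observer reads the bit back */
              }
              printf("\n");
              unlink(FLAG);
              return 0;
          }
          ```

          No permission is violated at any point; the channel exists because the state is shared at all, which is why mediation and resource partitioning matter.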

          These are the common kinds of risk that people would’ve assessed in the early ’90s for secure kernels. Each had to be dealt with. This was straightforward, albeit laborious, as far as techniques go, except for the timing channels: killing them off efficiently in arbitrary situations is still an open problem. The storage channels at the register or device-buffer level can also add overhead, since those must be overwritten on every process/VM switch to prevent leaks.
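          That last cost can be sketched in a few lines (a hypothetical buffer, not a real driver): a reused buffer leaks one party’s residue to the next unless it is scrubbed on every switch, and the scrub itself is the overhead.

          ```c
          /* Sketch: shared state reused across two "VMs" becomes a storage
             channel unless wiped on every switch. */
          #include <stdio.h>
          #include <string.h>

          static char buf[16];                 /* shared buffer, reused across VMs */

          static void vm_a_runs(void) {
              memcpy(buf, "secret", 6);        /* A leaves residue behind */
          }

          static void context_switch(int scrub) {
              if (scrub)
                  memset(buf, 0, sizeof buf);  /* mandatory wipe: the overhead */
          }

          int main(void) {
              vm_a_runs();
              context_switch(0);               /* no scrubbing ... */
              printf("B sees: '%.6s'\n", buf); /* ... B reads A's leftovers */

              vm_a_runs();
              context_switch(1);               /* scrub on every switch ... */
              printf("B sees: '%.6s'\n", buf); /* ... residue is gone */
              return 0;
          }
          ```

          Multiply that wipe across every register file and device buffer on every process/VM switch and the overhead the old secure kernels paid becomes clear.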