1. 18
  1. 7

    In the same vein, I’ve seen numerous instances where applications hosted on EC2 that perform some analysis of user-submitted URLs will happily accept http://169.254.169.254/latest/user-data/ and spit back the proprietary configuration information used to boot up the instance =(
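
    To make that concrete, here’s a rough sketch of what such a URL-fetcher ends up running on the attacker’s behalf (these are the standard instance metadata paths; the second one is how IAM role credentials leak):

      # What a naive "fetch this URL for me" feature effectively does server-side.
      # 169.254.169.254 is the link-local EC2 instance metadata service.
      curl http://169.254.169.254/latest/user-data/
      curl http://169.254.169.254/latest/meta-data/iam/security-credentials/   # lists role names; append one to get temporary keys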

    1. 4

      Semi-related to this: in the limited exposure I’ve had to containers at my work, the suggested practice has been to run everything in the container as root, because you’re only going to run one thing in the container anyway, so giving it access to everything is just fine.

      Does anyone with more experience have an opinion on this?

      1. 11

        Erk. There have been several vulnerabilities which have meant that if you have root inside a container, you can break out of it and get root on the host. All of them have been patched, but I wouldn’t mind betting there will be more in time. (My guess: the next one will involve user namespaces in some way.)

        In general I recommend following the principle of least privilege.
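
        For Docker specifically, a minimal sketch of what that looks like (assuming the image already contains an unprivileged “app” user; “myimage” is a placeholder):

          # Don't start the process as root, drop all capabilities, and keep the
          # filesystem read-only unless the app genuinely needs to write somewhere.
          docker run --user app --cap-drop ALL --read-only myimage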

        1. 1

          My prediction was wrong :) Here’s the latest one I’ve become aware of, and it doesn’t even require root: https://lobste.rs/s/kg6yf1/dirty_cow_cve_2016_5195_docker_container

        2. 3

          root usually has access to more parts of the kernel, no? Increasing attack surface.

          There are also a number of scenarios in which a user outside a jail and root inside a jail can collude to become root outside.

          1. 2

            If your applications do not need to write to any files, then they do not need write access within the container, even if root does (to, say, update files, etc.). Dropping root means a remote-execution bug may degrade service but might not be able to deny it outright.

            If your application does not need to open new network connections, then it should not have the ability to do so, even if root has the ability. iptables -m owner can prevent a local vulnerability from spreading.
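
            A rough sketch of the sort of rule I mean (“appuser” is a placeholder for whatever account the service runs as):

              # Let the service talk over loopback if it needs to, and reject any
              # other connection it tries to originate; other users are unaffected.
              iptables -A OUTPUT -o lo -m owner --uid-owner appuser -j ACCEPT
              iptables -A OUTPUT -m owner --uid-owner appuser -j REJECT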

            If your application does not need to spawn new processes, then you can drop that ability with ulimit; keeping root means you can’t, since root is exempt from the process limit. This eliminates entire classes of problems.
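
            Roughly like this (ulimit -u sets RLIMIT_NPROC, which the kernel doesn’t enforce against root):

              # With the per-user process limit at zero, a compromised worker can no
              # longer fork/exec helpers like /bin/sh.
              ulimit -u 0
              /bin/true    # fork fails with something like "Resource temporarily unavailable"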

            All of my programs run with very low privileges, because while it is bad for my customers if service is denied, it is worse if someone can spin up a bunch of EC2 boxes on my credit card to run worthless CPU miners, putting me in the poorhouse.

            1. 2

              On Linux:

              • The OOM killer prefers killing user processes to root-owned ones (see the sketch below for a quick way to inspect this)
              • I’ve been told (but cannot confirm) that root processes can misbehave more dangerously, e.g. by DoS-ing the local network (would be thrilled if anyone who knows Linux networking could expand on this)
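
              On the OOM-killer point, you can see how the kernel currently ranks a given process (“myservice” is a placeholder):

                cat /proc/$(pidof myservice)/oom_score       # higher = more likely to be killed
                cat /proc/$(pidof myservice)/oom_score_adj   # tunable bias, -1000 to 1000
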
              1. 2

                The security ramifications of doing so aside (which are in my view serious), it’s just plain bad practice. Bugs in code can become far more serious when the process has unrestricted access within the container (e.g. file-system manipulation bugs), while, similarly, other issues may be masked by having superuser access (e.g. insufficient permissions). If you’re designing your software to work without root privileges, which you almost always should be, then why would you run it as root in the container just because you can? It should work fine without such privileges, and there are potential security and stability benefits to running it that way.

                Some may argue that because you’re running in a container that such bugs can be recovered from faster by just redeploying the container instance, and they’d be right, but that doesn’t make it a good mindset. That’s just using containers as a way to mask deeper problems with the quality of software.

                1. 3

                  I’m glad you pointed out that there are classes of attacks made more difficult by e.g. using an app user without write access to the FS.

                  RE the rest of the post - I’m not sure “It’s bad practice / poor quality” is a super helpful answer when the question boils down to “Why is it bad practice”.

                  Juggling users and permissions on servers is a substantial time sink that doesn’t immediately improve my users’ lives, and it’s not unreasonable to ask ‘why should I do it when things work fine anyway?’

                2. 1

                  Thank you for the responses @0, @tedu, @geocar, @danielrheath, and @ralish. What you said matches my suspicion and it’s nice to hear some confirmation.

                  1. 1

                    If you believe your container infrastructure is more secure than Linux user/root separation then that makes sense. Personally I don’t consider either of them reliable enough to constitute a security boundary; I would do least-privilege at the machine level (and within my application runtime) but assume that control of a running user process == root on that machine, and design my security model around that. At that point using user accounts becomes something to do only if it’s super cheap.

                  2. 2

                    In the case of EC2 instances seeing all of S3 from other instances: you can restrict this appropriately, but it is a pain. You need a policy for each instance role that maps to each bucket, and if a bucket needs to be shared you again need to write granular policies. I feel like this was done on purpose, as the author claimed - easy for you to deliver the bullet to your foot, but with the flexibility not to.
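
                    For the record, this is roughly the kind of per-role, per-bucket policy it means writing (role, policy, and bucket names are placeholders):

                      # Scope one instance role down to one bucket instead of s3:*
                      aws iam put-role-policy --role-name my-app-instance-role \
                        --policy-name read-my-app-bucket \
                        --policy-document '{
                          "Version": "2012-10-17",
                          "Statement": [{
                            "Effect": "Allow",
                            "Action": ["s3:GetObject", "s3:ListBucket"],
                            "Resource": ["arn:aws:s3:::my-app-bucket", "arn:aws:s3:::my-app-bucket/*"]
                          }]
                        }'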

                    1. 1

                      Excellent points. Maybe the instance metadata service is there for the benefit of Windows? Not sure how you would use XenStore from Windows. Not that that’s a good reason to make it less secure for everyone else.

                      1. 1

                        Sure, a Windows driver could be created to expose a new M: drive, no?

                        1. 1

                          Or you could use WMI/a dedicated API and expose it via a PowerShell VFS or something like that.

                      2. 1

                        It would be absolutely trivial for Amazon to place EC2 metadata, including IAM credentials, into XenStore; and almost as trivial for EC2 instances to expose XenStore as a filesystem to which standard UNIX permissions could be applied, providing IAM Role credentials with the full range of access control functionality which UNIX affords to files stored on disk. Of course, there is a lot of code out there which relies on fetching EC2 instance metadata over HTTP, and trivial or not it would still take time to write code for pushing EC2 metadata into XenStore and exposing it via a filesystem inside instances.

                        Less radical solution: expose the EC2 instance metadata server on a root-owned 0700 unix domain socket. You wouldn’t get the kind of fine-grained metadata access control that Colin imagines, but your instance could at least use unix groups and permissions to grant access to certain non-root processes.
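
                        As a sketch of that less radical option, even something as blunt as a socat proxy would get most of the way there (the socket path is made up, and there’s a small race before the chmod):

                          # Proxy the metadata service onto a root-owned unix socket,
                          # then lock it down so only root can connect to it.
                          # (Use a group plus 0770 instead to let specific non-root services in.)
                          socat UNIX-LISTEN:/run/ec2-metadata.sock,fork TCP:169.254.169.254:80 &
                          chmod 0700 /run/ec2-metadata.sock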

                        1. 1

                          You should not store credentials that you aren’t going to save as root, but you should firewall (using iptables -m owner) access to network resources so that non-root users cannot access these things.
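
                          Something along these lines (only a sketch): let root keep talking to the metadata service and drop everyone else’s packets to it.

                            iptables -A OUTPUT -d 169.254.169.254 -m owner --uid-owner 0 -j ACCEPT
                            iptables -A OUTPUT -d 169.254.169.254 -j DROP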

                          1. 1

                            Shared billing means it’s pretty easy to have multiple accounts.

                            Putting each application in its own account makes it much easier to create service boundaries because the IAM policies become much safer (e.g. s3:* is relatively fine if you only have one app in the account).

                            You can grant cross-account privileges but you have to do it on purpose.
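
                            For example, sharing a bucket across accounts takes an explicit bucket policy naming the other account’s principal (account ID, role, and bucket names are placeholders):

                              # The role in the other account still needs its own IAM allow for this bucket.
                              aws s3api put-bucket-policy --bucket shared-bucket --policy '{
                                "Version": "2012-10-17",
                                "Statement": [{
                                  "Effect": "Allow",
                                  "Principal": {"AWS": "arn:aws:iam::111111111111:role/my-app-instance-role"},
                                  "Action": "s3:GetObject",
                                  "Resource": "arn:aws:s3:::shared-bucket/*"
                                }]
                              }'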