1. 27
  1.  

  2. 11

    Using system accounts per user seems like such a good idea at first. It’s simple and obvious, and as a bonus it provides actual separation. It’s highly unlikely that files from one user will be accidentally exposed to another user by an overlooked legacy API when there are actual kernel barriers between them. Alas, web scale kernels aren’t here yet.
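
    A minimal sketch of that approach, assuming one Unix account per web user and a parent process running as root so it can drop privileges; the account name "alice" and the request handling are hypothetical:

    ```c
    /* After fork(), the worker switches to the requesting user's uid/gid,
     * so files owned by other accounts are protected by ordinary kernel
     * permission checks even if the application logic forgets to check. */
    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void handle_request_as(const char *username) {
        struct passwd *pw = getpwnam(username);
        if (pw == NULL) {
            fprintf(stderr, "no such account: %s\n", username);
            exit(1);
        }
        /* Order matters: supplementary groups first, then gid, then uid. */
        if (initgroups(pw->pw_name, pw->pw_gid) != 0 ||
            setgid(pw->pw_gid) != 0 ||
            setuid(pw->pw_uid) != 0) {
            perror("privilege drop failed");
            exit(1);
        }
        /* Placeholder for real request handling: just show who we are now. */
        execlp("id", "id", (char *)NULL);
        perror("execlp");
        exit(1);
    }

    int main(void) {
        pid_t pid = fork();
        if (pid == 0)
            handle_request_as("alice");
        else if (pid > 0)
            waitpid(pid, NULL, 0);
        return 0;
    }
    ```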

    1. 4

      I did something ‘similar’ with a friend in the past. You can read the full story on my blog, but here is a short tl;dr:

      We wanted to write a user administration panel for a shared shell host. We didn’t like the existing solutions as they all wanted to run as root & we didn’t feel confident enough to write our own authentication mechanisms.

      What we ended up with was a Perl web application that used expect to log users in via ssh & performed all actions inside that session. There was a second process running with higher privileges which read modified vhost entries from the DB and applied them to the Apache configuration.

      The benefit of that design was that our sysadmin friend could use all the unix tools he was used to in order to administer & monitor the service, without touching a line of code.

      1. 3

        Which is a shame, because I’d much rather hand off a lot of that functionality to my OS than re-solve it multiple times. Is anyone exploring a web scale kernel? What would one look like?

        1. 3

          In order to be viable, I think you’d need a 64-bit uid_t. That maybe would work for the kernel, but every filesystem is going to need adjustments to its on-disk format. Ah! And I would never feel safe that programs like tar etc. weren’t using a smaller type inside. You’d probably get to about 2 or 4 billion OK, but then nonsense would set in. I don’t usually hate on C, but this is a change it’s dramatically unsuited for.
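
          To make the truncation worry concrete, here is a small illustration; the 32-bit field stands in for whatever narrower type a hypothetical archiver might use internally, and nothing here is taken from real tar code:

          ```c
          #include <stdint.h>
          #include <stdio.h>

          int main(void) {
              uint64_t kernel_uid = 4294967296ULL + 1001; /* 2^32 + 1001: a "web scale" uid */
              uint32_t stored_uid = (uint32_t)kernel_uid; /* what a 32-bit field keeps */

              /* The archive would quietly record uid 1001 and restore files
               * to the wrong account: the point where nonsense sets in. */
              printf("kernel uid: %llu, stored uid: %u\n",
                     (unsigned long long)kernel_uid, stored_uid);
              return 0;
          }
          ```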

          I suppose one could also try using extended attributes, but again you’d need to very carefully audit all tools.
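
          Roughly what the extended-attribute route would look like on Linux, as a sketch: the attribute name "user.wide_uid" is invented for this example, and every tool that copies or archives files would have to know to preserve and honour it:

          ```c
          #include <stdint.h>
          #include <stdio.h>
          #include <sys/xattr.h>

          int main(void) {
              const char *path = "/tmp/example-file";  /* assumed to already exist */
              uint64_t wide_uid = 4294968297ULL;       /* does not fit in 32 bits */

              /* Stash the "real" owner id in a user xattr next to the ordinary
               * Unix owner; the kernel's permission checks know nothing about it. */
              if (setxattr(path, "user.wide_uid", &wide_uid, sizeof wide_uid, 0) != 0) {
                  perror("setxattr");
                  return 1;
              }

              uint64_t read_back = 0;
              if (getxattr(path, "user.wide_uid", &read_back, sizeof read_back) < 0) {
                  perror("getxattr");
                  return 1;
              }
              printf("wide uid on %s: %llu\n", path, (unsigned long long)read_back);
              return 0;
          }
          ```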

          1. 6

            What if we stepped back further and, instead of trying to modify a Unix into this, started from scratch? We want to centralize all of the common server things that every app redoes and just put them into the OS.

            1. 2

              I would argue we have: it’s called Ruby on Rails. Almost serious. A web framework basically is the new server OS (like the browser is the new client OS). They’re just somewhat lacking in the isolation department.

              Something like mirage is going in the opposite direction, too. Every request is its own VM, which you can start with only the passed user’s credentials.

            2. 1

              Similar shifts have been done before - time_t and off_t have both been upgraded to 64-bit. It can take a bit of time, but I can’t immediately see a reason why uid_t should be harder than those (other than existing filesystem data).

              I’m trying to think of ways to verify that uid_t values taken from syscalls are only ever stored in 64-bit types. gcc’s -Wconversion gives you this, but probably also a lot of noise.

              I wonder if -Wconversion plus some finer-grained tracking of ‘tainted’ values, which arise from uid_t in the syscall interface and pass through multiple conversions, would work? I guess this might even be possible to implement as a post-processing step on the -Wconversion output.
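
              For illustration, roughly the kind of hit -Wconversion reports, plus the kind that makes up the noise; the function names here are made up:

              ```c
              /* Compile with: gcc -Wconversion -c example.c */
              #include <stdint.h>
              #include <string.h>
              #include <sys/types.h>
              #include <unistd.h>

              uint16_t remember_owner(void) {
                  uid_t uid = getuid();  /* full-width uid straight from the syscall */
                  uint16_t slot = uid;   /* warning: conversion from 'uid_t' to
                                            'uint16_t' may change value */
                  return slot;
              }

              int name_len(const char *s) {
                  return strlen(s);      /* size_t to int is warned about too,
                                            though it has nothing to do with
                                            uid_t: the noise that the taint
                                            tracking would have to filter out */
              }
              ```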

            3. 2

              Being facetious is web scale.

              1. 2

                I know web scale is a troll word but it is sufficient for the legitimate discussion I am trying to have.

            4. 1

              Yeah, it seems kind of sensible on one level, and it is pretty much how you’d organize such a thing Back In The Day if you were setting up a CVS server used by multiple groups of people.

              Actually, when you think about it, it is kind of odd that 99% of web stuff has umpteen website “users” running as a single Unix “user” and a single database “user” … did anyone ever write a web server which supported Kerberos login or similar?