Running your browser as a different user is an interesting challenge. Under normal circumstances I want to save my bank statement PDFs in my home directory. I want to upload my burrito pictures to Twitter. Slicing this off to a separate user is a significant usability setback.
It does seem like a silly way to approximate better ideas in privilege separation.
Better ideas like those already implemented in Chrome, which typically uses every sandboxing mechanism a platform provides? (including seccomp syscall filtering on Linux)
Running as a shared, non-unique unprivileged user means that user can potentially access much more than was intended. If `nobody` is running both your web and database servers, a compromise of either is a compromise of both.
I think my issue with the OP is more the underlying semantics of the Unix model. A “user” is too heavyweight and coarse an abstraction for handling privilege separation, and carries along with it too much historical baggage. But nobody is doing capabilities, which are IMO the correct mechanism. One muddles along, I suppose.
Creating a unique UID per operation can be a very clean separation of privileges. How is a different UID too heavyweight? The coarseness is the point: it is an unequivocal separation between the main account and the account of the untrusted process.
Mount a FUSE interposer in the sandbox and all kinds of FS behaviors could be proxied through.
Unix users carry a lot of implicit assumptions about privilege with them. Have you ever tried to do complex access control with UID/GID permissions? It’s a nightmare.
In a world where the default model of computation involves a large number of actual humans proxied through Unix users logging into an 11/750 or a Sparcstation 20, maybe the Unix user model holds. In a world where 99.9999% of computers are single-user at all times, it’s way too heavy and ill-fitting an abstraction.
Jails, containers, or VMs. Not the lightest thing around, but you can make it work. You do incur a lot of overhead maintaining two "systems" and making sure what you need is present on the target.
It does not have to be that way. Mac App Store apps (and apps that voluntarily opt in) are sandboxed on OS X. It’s not perfect, but it isolates an application from the rest of your home directory without throwing away convenience for file management (the open/save dialogs are handled by a separate process, which links the target file into the application’s sandbox).
I use this idea to good effect where capabilities are not applicable. For single-shot applications, such as 12-factor apps, or fork-on-accept read-only network servers such as anoncvs, non-CGI HTTP, or anonftp, I highly recommend something like this.
Let’s take Apache, a forking TCP server. Assume an attacker can compromise the process handling their connection but doesn’t have kernel or filesystem privileges. Even if the attacker can’t touch the filesystem thanks to a properly implemented chroot, they can still interfere with other users’ HTTP processes via ptrace or signals, because every worker runs under the same UID.
My issues with raru are that it might randomly land on an in-use UID/GID, and that it uses only 16 bits of entropy, which is too little separation for a heavily used network service. I prefer the implementation of this idea I used in the inetd replacement for CGC: