1. 50

  2. 57

    Honestly, for the vast majority of users, the security domain model of modern *nix desktops is incorrect.

    The vast majority of machines only ever have one user on them. If something running as that user is compromised, that’s it. Even if there were no privilege escalation, so what? You can’t install device drivers…but you can get the user’s email, overwrite their .profile file, grab their password manager’s data, etc, etc.

    I think that if I were designing a desktop operating system today, I would do something along the lines of VM/CMS…a simple single-user operating system running under virtualization. The hypervisor handles segregating multiple users (if any), and the “simple” operating system handles everything else.

    (I know Qubes does something like this, but its mechanism is different from what I’m describing here.)

    In that hypothetical simple single-user operating system, every application runs with something akin to OpenBSD’s pledge baked in. Your web browser can only write to files in the Downloads directory, your text editor can’t talk to the network, etc.

    The *nix permissions model was designed to deal with a single shared machine with a lot of users and everything-is-a-file. The modern use case is a single machine with a single user and the need for semantic permissions rather than file-level permissions.
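
    The sandboxing idea above can be sketched as a tiny policy check. This is a hypothetical illustration only, not a real OS API: `Policy`, `may_write` and the example paths are all invented here.

```python
# Hypothetical sketch of "semantic" per-application permissions, in the
# spirit of OpenBSD's pledge(2). Policy, may_write and the paths below
# are invented for illustration; this is not a real OS interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    write_dirs: frozenset = frozenset()  # directories the app may write under
    network: bool = False                # may the app talk to the network?

def may_write(policy, path):
    # A write is allowed only strictly beneath an explicitly granted directory.
    return any(path.startswith(d.rstrip("/") + "/") for d in policy.write_dirs)

# "Your web browser can only write to files in the Downloads directory,
#  your text editor can't talk to the network."
browser = Policy(write_dirs=frozenset({"/home/me/Downloads"}), network=True)
editor  = Policy(write_dirs=frozenset({"/home/me/Documents"}), network=False)
```

    In a real system the kernel would enforce this at the syscall boundary, the way pledge does; the sketch is just the data model behind “semantic permissions”.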

    1. 16

      This is very insightful, and definitely has changed the way that I’m thinking about security for my OS design.

      Here’s the thought that I got while reading your comment: “The original UNIX security model was one machine, many users, with the main threat being from other users on the machine. The modern security model is (or should be) one machine, one user, but multiple applications, with the main threat being from other/malicious applications being run by that single user.”

      1. 9

        To make one small tweak to your statement, I would propose the modern model be “many machines, one user, with multiple applications…”. The idea being that, with those applications, you will be dealing with shared risk across all of the accounts you are syncing and sharing between devices. You might only be controlling the security model on one of those machines, but the overall security risk is likely not on the one you have control over, and that may make a difference. Do you let applications sync data between every device? Does that data get marked differently somehow?

        1. 3

          If you are planning a standard library/API, please also consider testability. For example, a global filesystem with a “static (in the OOP sense)” API makes it harder to mock and test than necessary. I think the always-available API surface should be minimized, to provide APIs that can be tested, secured and versioned more easily, offering more explicit interactions and failure modes than the APIs we are used to.
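
          The testability point can be sketched like this; `DictFS` and `save_report` are invented names, a minimal illustration of passing the filesystem in as a parameter rather than reaching for a global one:

```python
# Sketch: code that takes a filesystem interface as an explicit dependency
# instead of calling a global "static" API. DictFS and save_report are
# invented names for illustration.
class DictFS:
    """An in-memory stand-in for the real filesystem, usable in tests."""
    def __init__(self):
        self.files = {}
    def write(self, path, data):
        self.files[path] = data
    def read(self, path):
        return self.files[path]  # KeyError is the explicit failure mode

def save_report(fs, path, lines):
    # No hidden global state: the dependency is visible in the signature.
    fs.write(path, "\n".join(lines))

fs = DictFS()
save_report(fs, "/reports/today.txt", ["all systems", "nominal"])
```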

        2. 10

          This is the reason why Plan 9 completely removed the concept of a “root” user. It has a local account used to configure the server, which nevertheless cannot access its resources; users then connect to it and are granted permissions by a dedicated server (which could be running on the same machine). It is much cleaner for a machine that is part of a larger network, because the users are correctly segregated and simply cannot escalate their privileges: they would need access to the permission server to do that.

          1. 14

            I agree, and would like to extend it with my opinion:

            Global shared filesystem is an antipattern (it is a cause of security, maintenance, and programming problems/corner cases). Each program should have private storage capability, and sharing data between applications should be well regulated, either by (easily) pre-configured data pipelines or with interactive user consent.

            Global system services are also an antipattern, as are APIs that assume services are always available by default and treat unavailability as an edge case.

            Actually, modern mobile operating systems are gradually shifting away from these antiquated assumptions, and have the potential to be much more secure than existing desktop OSs. These ideas won’t reach the mainstream UNIX-worshipping world, though. On the desktop, Windows is moving in this direction, e.g. desktop apps packaged and distributed via the Microsoft Store each run in separate sandboxes (I had quite a hard time finding my HexChat logs), but Microsoft’s ambition to please Mac users (who think they are Linux hackers) is slowing the adoption (looking at you, winget, and the totally mismanaged Microsoft Store with its barely working search and non-scriptability).

            1. 10

              Global shared filesystem is an antipattern (it is a cause of security, maintenance, programming problems/corner cases).

              If your only goal is security, this is true. If your goal is using the computer, then getting data from one program to another is critical.

              Actually, modern mobile operating systems are gradually shifting away from these antiquated assumptions, and have the potential to be much more secure than existing desktop OSs.

              And this (along with the small screens and crappy input devices) is a big part of why I don’t do much productive work on my phone (and the stuff I do use it for tends to be able to access my data, e.g. my email client).

              1. 4

                Actually, I have seen many (mostly older) people for whom the global filesystem is a usability problem. It is littered with stuff that is uninteresting to them; when pressing the “attach picture” button on a website, they just want to see their pictures, not the programs, the music, /boot or C:\Windows, etc…

                It also creates unnecessary programming corner cases: if your program wants to create a file named foo, another process may create a directory with the same name in the same location. There are conventions to lower this risk, but it is still an unnecessary corner case.
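
                A minimal sketch of that corner case and the usual defensive convention, in Python (the directory and file names are illustrative):

```python
# The "another process grabs the name first" corner case, plus the usual
# defensive convention: create the file exclusively and treat "name already
# taken" as an explicit, handleable failure instead of assuming success.
import os
import tempfile

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "foo")

os.mkdir(target)  # simulate another process grabbing the name "foo" first

created = None
try:
    # O_CREAT | O_EXCL asks the kernel to fail if the name already exists,
    # atomically, so there is no check-then-create race window.
    created = os.open(target, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    pass  # the name was taken; the program must handle this case
```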

                Getting data from one place to another can be solved in a number of ways without a global filesystem. For example, you can create a storage location and share it with multiple applications, though this still creates the corner cases I mentioned above. Android does provide globally shared storage for this task, which is not secure, but at least access to it requires an explicit permission. You can also share data from one app to another without any filesystem at all, as with Android’s Activities and Intents.

                I think there are proven prototypes for these approaches, though I also think the everything-is-a-file approach is a dead end in itself, which further limits the need for a “filesystem”.

                Note: the best bolted-on security fix to the traditional UNIX filesystem seems to me to be the OpenBSD pledge approach; too bad OpenBSD has other challenges which limit its adoption. I also like the sandbox-based approaches, but then I’d rather go a few steps further.

                1. 2

                  Getting data from one place to another can be solved in a number of ways without a global filesystem. […] Android does provide globally shared storage for this task, which is not secure, but at least access to it requires an explicit permission.

                  That is a great example of how hard it is to find the right balance between being secure and not nagging the user.

                  In order not to bother users too much or too often, Android will ask a simple question: do you want this app to access none of your shared files (but I want this funny photo-retouch app to read and modify the three pictures I took ten minutes ago), or do you allow it to read all your shared files (and now the app can secretly upload all your photos to a blackmail/extortion gang)? Neither of these options is really good.

                  The alternative would be fine-grained access, but then the users would complain about having too many permission request dialogs.

                  In the words of an old Apple anti-Windows ad: «You are coming to a sad realization: cancel or allow?»

                  1. 5

                    Meanwhile, on iOS you can use the system image picker (analogous to setuid) to grant access to select files without needing any permission dialogs.

                    1. 1

                      This is a valid option on Android as well

              2. 6

                I disagree. Having files siloed into application-specific locations would destroy my workflow. I’m working on a project that includes text documents, images and spreadsheets. As an organization method, all these files live under a central directory for the project as a whole. My word processor can embed images. The spreadsheet can embed text files. This would be a nightmare under a siloed system.

                A computer should adapt to how I work, not the other way around.

                1. 7

                  In a properly designed siloed filesystem, this would still be perfectly possible. You’d just have to grant each of those applications access to the shared folder. The parent is not suggesting that files can’t be shared between applications:

                  sharing data between applications should be well regulated, either by (easily) pre-configured data pipelines or with interactive user consent.

                  1. 1

                    You could even create security profiles based on projects, with the same applications having different sets of shared-access patterns depending on the profile.

                    It could be paired with virtual desktops, for example, to give this feature a usable UX. When shuffling projects in my daily work, I’d be happy to have only the project-relevant stuff in my view at a time.

                2. 3

                  Global shared filesystem is an antipattern

                  I’d make that broader: A global shared namespace is an antipattern. Sharing should be via explicit delegation, not as a result of simply being able to pick the same name. This is the core principle behind memory-safe languages (you can’t just make up an integer, turn it into a pointer, and access whatever object happens to be there). It’s also the principle behind capability systems.

                  The Capsicum model retrofits this to POSIX. A process in capability mode loses access to all global namespaces: the system calls that use them stop working. You can’t open a file, but you can openat a file if you have a directory descriptor. You can’t create a socket with socket, but you can receive a new file descriptor for a socket over a socket you already hold. Capsicum also extends file descriptors with fine-grained rights, so you can, for example, delegate append-only access to a log file to a process without allowing it to read back earlier log messages or truncate the log.

                  Capsicum works well with the Power Box model for privilege elevation in GUI applications, where the Open… and Save… dialog boxes run as more privileged external processes. The process invoking the dialog box then receives a file descriptor for the file / directory to be opened or a new file to be written to.

                  It’s difficult to implement in a lot of GUI toolkits because their APIs are tightly coupled to the global namespace: for example, a save dialog returns a string representing the path, rather than an object representing the right to create a file there.
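
                  A rough sketch of the openat discipline, using Python’s `dir_fd` for brevity; note this only illustrates the shape, since unlike Capsicum’s capability mode, plain `dir_fd` on Linux does not cut off access to the global namespace:

```python
# Sketch of the openat-style discipline Capsicum enforces: hold a
# descriptor for one directory and open files only relative to it.
# (Real Capsicum additionally revokes the global namespace entirely;
# plain dir_fd does not, so this only illustrates the shape.)
import os
import tempfile

sandbox = tempfile.mkdtemp()
with open(os.path.join(sandbox, "log.txt"), "w") as f:
    f.write("hello from the log")

# The directory descriptor is the delegated "capability".
dirfd = os.open(sandbox, os.O_RDONLY)

# openat(2): the relative path is resolved against dirfd, not the CWD.
fd = os.open("log.txt", os.O_RDONLY, dir_fd=dirfd)
data = os.read(fd, 1024)
os.close(fd)
os.close(dirfd)
```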

                  1. 3

                    I think that snaps (https://snapcraft.io/) have this more granular permission model, but nobody seems to like them (partially because they’re excruciatingly slow, which is a good reason).

                    1. 2

                      Yeah, Flatpak does this too. It’s why I’m generally on board with Flatpak, even though the bundled library security problem makes me uncomfortable: yes they have problems, but I think they solve more than they create. (I think.) Don’t let perfect be the enemy of good, etc.

                    2. 3

                      Global shared filesystem is an antipattern (it is a cause of security, maintenance, and programming problems/corner cases). Each program should have private storage capability, and sharing data between applications should be well regulated, either by (easily) pre-configured data pipelines or with interactive user consent.

                      I would not classify a global shared filesystem as an antipattern. It has its uses, and for most users it is a nice metaphor. As with all problems that are not black or white, what is needed is to find the right balance between usefulness, usability and security.

                      That said, I agree on the sentiment that the current defaults are “too open”, and reminiscent of a bygone era.

                      Before asking for pre-configured data pipelines (hello selinux), or interactive user consent (hello UAC), we need to address real-world issues that users of Windows 7+ and macOS 10.15 know very well. Here are a couple of examples:

                      • UAC fatigue. People do not like being constantly asked for permission to access their own files. “It is my computer, why are you bothering me?” «Turn off Vista’s overly protective User Account Control. Those pop-ups are like having your mother hover over your shoulder while you work» (from the New York Times article “How to Wring a Bit More Speed From Vista”)
                      • Dialog counterfeits. If applications have the freedom to draw their own widgets on the screen (instead of being limited to a fixed set of UI controls), then applications will counterfeit their own “interactive user consent” panel. (Zoom was caught faking a “password needed” dialog, for example). Are we going to forbid apps from drawing arbitrary shapes or do we need a new syskey?
                      • Terminal. Do the terminal and the shell have access to everything by default, or do you need to authorize every single cd and ls?
                      • Caching. How long should authorization tokens be cached? For each possible answer there are pros and cons. Ask administrators of Kerberos clusters for war stories.
                      1. 4

                        If you insist on a global filesystem (at this imaginary OS-design meeting we are attending), I’d rather suggest two shared filesystems, much like how a Harvard architecture separates code and data: one for system files and application installs, and one for user data.

                        By pre-configured data pipelines I’d rather imagine something like Genode’s Sculpt (https://genode.org/). I’d create “workspaces” using the virtual-desktop metaphor, configurable in an overlay where apps (and/or their storage spaces, or dedicated storage spaces) appear as graph nodes that can be linked graphically via “storage access” typed links.

                        On a workspace for ProjectA and its related virtual desktop, I could give the Spreadsheets, Documents and Email apps StorageRW links to the ProjectAStorage object. Access to this folder could even be made the default in the given security context.

                        Regarding the terminal: I don’t think it is a legitimate use case to have access to everything just because you use a text-based tool. Open a terminal in the current security context.

                        Regarding the others: these things can be tuned, and once compartmentalization is made user-friendly, things get simpler. Reimagining things from a clean sheet would be beneficial, as backward compatibility costs us a lot; I’d argue maybe more than it gains.

                        With SELinux my main problem is its bolted-on nature and the lack of easy, intuitive configuration, which is worsened by the lack of permission/ACL/security-context inheritance in UNIX filesystems. Hello, relabeling after extracting a tar archive…

                        About the other points I partly agree: platforms are continuously balancing these workflows and fighting abuse (accessibility features were abused on Android, leading to some features being disabled to prevent fake popups, if I recall correctly).

                    3. 4

                      A while ago I dreamed up – but never really got around to trying to build (although I do have a few hundred lines of very bad Verilog for one of the parts) – an interesting sort of machine, which kind of takes this idea to its logical conclusion, only in hardware. Okay, I didn’t exactly dream it up, the idea is very old, but I keep wondering what a modern attempt at it would look like.

                      Imagine something like this:

                      • A stack of 64 or so small SBCs, akin to the RPi Zero, each of them running a bare-metal system – essentially, MS-DOS 9.0 + exactly one application :)…
                      • …with a high-speed interconnect so that they can pass messages to/from each other …
                      • …and another high-speed interconnect + a central video blitter that enables a single display to show windows from all of these machines. Sort of like a Wayland compositor, but in hardware.

                      Now obviously the part about high-speed interconnect is where this becomes science fiction :) but the interesting parts that result from such a model are pretty fun to fantasize about:

                      • Each application has its own board. Want to run Quake V? You just pick the Quake V cartridge – which is actually a tiny computer! – and plug it in the stack. No need to administer anything, ever, really.
                      • All machines are physically segregated – good luck getting access to shared resources, ‘cause there aren’t any (in principle – in my alternate reality people haven’t quite figured out how to write message-passing code that doesn’t suffer from buffer overflows, I guess, and where a buffer can be overflown, anything can happen given enough determination).
                      • Each machine can come with its own tiny bit of fancy hardware. High-resolution, hi-fi DAC for the MP3/FLAC player board, RGB ambient LEDs for the radio player, whatever.
                      • Each machine can make its own choices in terms of all hardware, for that matter, as long as it plays nice on the interconnect(s). The “Arduino Embedded Development Kit” board, the one that runs the IDE? It also sports a bunch of serial ports (real serial ports, none of that FTDI stuff), four SPI ports, eight I2C ports, and there’s a tiny logic analyzer on it, too. The Quake V board is mostly a CPU hanging off two SLI graphics cards, probably.

                      I mean, with present-day tech, this would definitely be horrible, but I sometimes wonder if my grandkids aren’t going to play with one of these one day.

                      Lots and lots and lots of things in the history of computing essentially happened because there was no way to give everyone their own dedicated computer for each task, even though that’s the simplest model, and the one that we use to think about machines all the time, too (even in the age of multicore and big.LITTLE and whatnot). And lots of problems we have today would go away if we could, in fact, have a (nearly infinite) spool of computers on which to run each computational task.

                      1. 3

                        I would, 100%, buy such a machine.

                        I seem to recall someone posted something on lobste.rs about a “CP/M machine of the future” a while back: a box with 16 or 32 Z80s, each running CP/M and multiplexing the common hardware like the screen. Sounds similar in spirit to what you’re describing, maybe.

                        1. 3

                          This reminds me of GreenArrays, even if there are major differences.

                          1. 2

                            The EOMA68 would probably have benefited from this idea. They were working on a compute engine in CardBus format that could be exchanged…

                            1. 1

                              What could we call the high-speed interconnect?

                              Well, it’s an Express Interconnect, and it’s for Peripheral Components, so I guess PCIE would be a good name.

                              It could implement hot-swapping, I/O virtualization, etc. for the “cartridges” (that’s a long word, let’s call them “PCIE cards”).

                              1. 1

                                I think I initially wanted to call it Infiniband but I was going through a bit of an Infiniband phase back when I first concocted this :).

                            2. 2

                              Sounds to me like an object capabilities system with extra segregation of users. Would that be a fair assessment?

                              1. 3

                                I think, in my mental model, it would be a subset or a particular instance of an object capabilities system.

                              2. 2

                                (I know Qubes does something like this, but its mechanism is different from what I’m describing here.)

                                Can you elaborate on how it’s different? What you’re describing sounds exactly like Qubes.

                                1. 2

                                  Let me preface this with “I may be completely wrong about Qubes.”

                                  From what I understand, Qubes is a single-user operating system with multiple security domains, implemented as different virtual machines (or containers? I don’t remember).

                                  In my idea different users, if any, run in different virtual machines under a hypervisor. The individual users run a little single-user operating system. That single user system has a single security domain under which all applications run, but applications are granted certain capabilities at runtime and are killed if they violate those capabilities. So all the applications are in the same security domain but accorded different capabilities (I had used OpenBSD’s pledge as an example, which isn’t quite like a classic capability system but definitely in the same vein).

                                    In my mind, it’s basically a Xen hypervisor running an instance of HaikuOS per user, with a little sandboxing mechanism per app. There are no per-file permissions or ownership, but rather application-specific limitations as expressed by the sandbox program associated with them.

                                  The inspiration was VM/CMS, in its original incarnation where CMS was still capable of running on the bare metal; if your machine doesn’t have multiple users you can just run the little internal single-user OS directly on your hardware. Only on physically shared machines would you need to run the hypervisor.

                                2. 2

                                    It’s obviously a different approach, but very fine-grained permissions are a feature of recent macOS releases.

                                3. 3

                                    Sudo provides users of a UNIX system with an auditable mechanism for controlled access to protected resources on the command line. It should be viewed as one part of a comprehensive defense-in-depth strategy.

                                  1. 1

                                      Wouldn’t an easy fix for sudo phishing be to include some token in the password request message?

                                    1. 2

                                        I could easily make a program that spawns a sudo "$@"-style child process; then, any time my process reads a byte from stdin, it forwards that byte to sudo and also stores it in a buffer. When I receive a newline byte, I send my buffer to a server. My program would behave identically to sudo in every way, because the user sees all of sudo’s output (sudo would inherit my program’s stdout and stderr), and sudo sees everything the user is typing.

                                      1. 1

                                        How would you present it to the user without also presenting it to an attacker?