Maybe this is a hot take, but I suspect that unless we start using radically different physical hardware, UNIX is going to stay a pretty good API and I’m fine with it looking pretty similar in the year 2100.
Maybe this comment will age very poorly and people will dunk on me in the future. If so, sorry for climate change :(
The hardware has changed quite a bit from the systems where UNIX was developed:
Multicore is the norm, increasingly with asymmetric multiprocessing (big.LITTLE and so on).
There are multiple heterogeneous compute units (GPUs, NPUs, and so on).
There is almost always at least one fast network.
Local storage latency is very low and seek times are also very low.
Remote storage capacity is effectively unbounded.
Some changes were already present by the ’90s:
There’s usually a display capable of graphics.
There’s usually a pointing device.
RAM is a lot slower than the CPU(s), so you can do a lot of compute per memory access.
At the same time, user needs have changed a lot:
Most computers have a single user.
Most users have multiple computers and need to synchronise data between them.
Software comes from untrusted sources.
Security models need to protect the user’s data from malicious or compromised applications, not from other users.
Users perform a very large number of different tasks on a computer.
I think UNIX can adapt to the changes to the hardware. I’m far less confident that it will adapt well to the changes to uses. In particular, the UNIX security model is a very poor fit for modern computers (though things like Capsicum can paper over some of this). Fuchsia provides a more Mach-like abstraction without a single global namespace (as did Plan 9), which makes it easier to run applications in isolated environments.
Thanks for writing this out a lot better than I could have! I really like your distinction between adapting to hardware vs. uses. Re: the security model, one observation from the OP that I liked was:
I think that if you took a Unix user from the early 1990s and dropped them into a 2022 Unix system via SSH, they wouldn’t find much that was majorly different in the experience. Admittedly, a system administrator would have a different experience; practices and tools have shifted drastically (for the better).
Do you think that even the API exposed to UNIX users/programs might have to change to accommodate new security models?
I could believe that. ACLs (setfacl) are already quite different from traditional UNIX permissions, and apparently they never made it into POSIX despite being widespread. Although maybe that’s also a case of a change to UNIX that system admins have to care about but users normally don’t.
EDIT: To be clear, I don’t want to move the goalposts–I definitely think something like setfacl that fails to be standardized as part of POSIX is an example of its limits, and a counterpoint to my claim that we’ll be using UNIX in 2100.
And to try and answer my own question, it looks like Fuchsia might not plan on being POSIX compliant? So that’d also be a counterpoint:
From: https://fuchsia.googlesource.com/docs/+/refs/heads/sandbox/jschein/default/libc.md
On Fuchsia the story is a bit different from Posix systems. First, the Zircon kernel (Fuchsia’s microkernel) does not provide a typical Posix system call interface. So a Posix function like open can’t call a Zircon open syscall. Secondly, Fuchsia implements some parts of Posix, but omits large parts of the Posix model. Most conspicuously absent are signals, fork, and exec.
Things like POSIX ACLs don’t really change the model; they just add detail. They’re still about protecting files (where, in UNIX, ‘file’ just means ‘something that exists in some namespace outside of the program’s memory’) from users. In practice, there is a single real user, and the security goal should be to protect that user’s files from programs, or even parts of programs.
Capsicum is a lot better in this regard. Once a program enters capability mode, it lacks all access to any global namespaces and requires some more privileged entity to pass it new file descriptors to be able to access anything new. This works well with the power box model, where things like open and save dialog boxes are separate programs that return file descriptors to the invoking program. It’s still difficult to compartmentalise an application with Capsicum though, because you really want a secure synchronous RPC mechanism, like Spring Doors (also on Solaris).
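To make that concrete, here’s a minimal, untested sketch of the Capsicum flow on FreeBSD (the path and the CAP_READ right are just illustrative): descriptors acquired before entering capability mode keep working within whatever rights they were granted, while anything that names a global namespace is refused.

```c
/* Minimal Capsicum sketch (FreeBSD); untested, /etc/passwd is just an
 * example of a resource acquired up front. */
#include <sys/capsicum.h>

#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* Acquire resources before entering capability mode. */
	int fd = open("/etc/passwd", O_RDONLY);
	if (fd < 0)
		err(1, "open");

	/* Optionally restrict what this descriptor itself may do. */
	cap_rights_t rights;
	cap_rights_init(&rights, CAP_READ);
	if (cap_rights_limit(fd, &rights) < 0)
		err(1, "cap_rights_limit");

	/* From here on there is no access to global namespaces. */
	if (cap_enter() < 0)
		err(1, "cap_enter");

	char buf[128];
	ssize_t n = read(fd, buf, sizeof(buf)); /* still allowed: existing fd */
	printf("read %zd bytes from the pre-opened descriptor\n", n);

	/* Naming a path now fails; a more privileged broker (e.g. a power
	 * box dialog) would have to hand over a new descriptor instead. */
	if (open("/etc/passwd", O_RDONLY) < 0 && errno == ECAPMODE)
		printf("open() by path is refused with ECAPMODE\n");

	return 0;
}
```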
Server things are increasingly moving towards PaaS models where you write code against some high-level API with things like HTTP endpoints as the IPC model. Currently, these are built on Linux, but I don’t see that continuing for the next 10-20 years, because 90% of the Linux kernel is irrelevant in such a context. There’s no need for most IPC primitives, because the only communication allowed between components is over the network (which helps a lot with scalability: if everything is an async network request and you write your code like this, then it’s easy to deploy each component on a separate machine). There’s no need for a local file system. There’s very little need for a complex scheduler. There’s no need for multiple users even; you just want a simple way of isolating a network stack, a TLS/QUIC stack, a protocol parser, and some customer code, ideally in a single address space with different components having views of different subsets of it. In addition, you want strong attestation over the whole thing, so that the user has an audit trail that ensures that you are not injecting malicious code into their system. You probably don’t even want threading, because you want customers to scale up by deploying more components rather than by making individual components do more.
Yeah, Unix was supposed to be a minimal portable layer on top of hardware, which you can build other systems on top of, like the web or the JVM or every language ecosystem.
So stability is a feature, not a bug.
Ironically, the minimal portable layer is also better than some of the higher layers for getting work done in a variety of domains.
How would you rate io_uring as a change to the Unix model? Big deal or not a big deal?
Medium deal? It didn’t change the model, but it improved the economics a lot.
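To illustrate the economics point, here’s a rough liburing sketch (untested; the path is arbitrary, error handling is minimal, and it links with -luring): requests are queued in a shared submission ring, and a single submit call hands the whole batch to the kernel, instead of paying one syscall per read().

```c
/* Rough io_uring sketch using liburing: queue a read, submit the batch
 * with one call, then reap the completion. */
#include <liburing.h>

#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct io_uring ring;
    int ret = io_uring_queue_init(8, &ring, 0);   /* set up the SQ/CQ rings */
    if (ret < 0) {
        fprintf(stderr, "io_uring_queue_init: %s\n", strerror(-ret));
        return 1;
    }

    int fd = open("/etc/hostname", O_RDONLY);     /* arbitrary example file */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[256];
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring); /* queue a read: no syscall yet */
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);

    io_uring_submit(&ring);                       /* one syscall submits everything queued */

    struct io_uring_cqe *cqe;
    if (io_uring_wait_cqe(&ring, &cqe) == 0) {    /* wait for and reap the completion */
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    return 0;
}
```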
I mostly agree with the sentiment, but it’s funny that he mentioned swap as an example, because swap has changed quite a bit. First, do you want it at all? Second, in modern Linux a swap file is just fine; no need for a special partition. You can resize and add swap easily. And if you’re feeling exotic, there are options like zram or zswap. None of these are a radical change, but if you apply a 1992 understanding to modern Linux swap you’re missing out on some improvements.
He wrote a post the day before about swap history (https://utcc.utoronto.ca/~cks/space/blog/unix/SwapSetupWasSimple). Linux supported swap files in ’92.
Interestingly, swapping to a file goes a long way back in Linux; it’s supported in 0.96c (from 1992), according to tuhs.org’s copy of mm/swap.c. However, Linux 0.96c only supported a single swap area (whether it was a file or a device), and it doesn’t look like you could turn swapping off once you turned it on.
He just posts 6 days per week, so people don’t link them all to this forum.
This seems wrong. From the 1990s, dpkg and apt come to mind, changing software packaging and distribution. From the 2000s, udev and systemd, implementing hotplug and changing system initialization. How are these not significant changes to Unix? Yes, dpkg/apt/udev/systemd are Linux-specific, but they came into being due to changing needs, so other Unix systems also adopted something similar, like brew and launchd for macOS.
Solaris 10 had SMF in 2005 and, I think, OS X 10.0 had launchd in 2001. These are just about out of the ‘90s, but only just. I don’t really see udev as a big change. Originally UNIX had userspace code that created dev nodes, then Linux moved it into the kernel, then they moved it out and created an event notification model. FreeBSD 5.0 added something similar around 2003, but it also didn’t fundamentally change anything.
FUSE and CUSE are probably bigger changes, but they’re really applying ‘80s microkernel concepts to UNIX.
I strongly agree that package managers like dpkg have been one of the biggest changes to how you use the OS day to day.
If you took a Unix user from the early ’90s and dropped them into ssh in 2022, the first thing they’d notice is that ssh exists. Then they might notice pretty prompts, up-arrow command history, tab completion, lots of color, and other things that weren’t uniformly available on commercial Unix in the early ’90s.
That’s to say nothing of desktop environments. Or the changes in APIs, since now we have things like threads.
Elsewhere in the article it says the pace of change slowed (which I’d agree with), but the early ’90s wasn’t the point where the UX solidified. That was much, much later.
POSIX ’97 introduced pthreads. SSH was introduced in 1995. When I started university in 2000, it was a tool that the computer society and department had been using for years. Pretty prompts, command history, and coloured output on GNU coreutils were all present in the first Linux distro that I used (RedHat 4.something) around 1997.
So, maybe not the early ’90s, but these things haven’t changed significantly in over 20 years. Part of that is due to widespread use: you need a stable platform for adoption, and it’s hard to introduce changes that require software changes to be useful when the software ecosystem you’d need to change runs to billions of lines of code.
If you look at the proceedings of OSDI or SOSP from the ‘90s, you’ll often see entirely new kernels and definitely see brand new kernel abstractions in a bunch of places. Since the early 2000s, that’s been a lot less common. There have been a few things, like Singularity, Barrelfish, and MirageOS, but a lot more papers are some small incremental tweak to Linux (often not even something that could work on other UNIX systems, but something that is necessary because of specific design decisions in Linux).
Generally agree. Note that the article refers to Unix, and a lot of the command-line improvements happened on Linux first and migrated to commercial Unix much later; desktop environments hadn’t really solidified until the mid-2000s. In the last 15 years, I’d agree that the pace of change has dramatically slowed.
It’ll be interesting to see what happens with Fuchsia/Zircon. Right now we seem to have a permissions model in the kernel that doesn’t really map well to the permissions model used on mobile devices.
PS. I also started university in 2000, and in that year they introduced a firewall to block unencrypted Telnet, so I got to migrate to/deploy SSH. At the time SSH software wasn’t included in commercial Unix, and I think it was pre-OpenSSH, SSH1 only, using TeraTerm Pro as a client. Base TeraTerm didn’t include SSH support, which was provided via the TTSSH add-on. So maybe what changed is “everything just works now” :)
History? Old? NO WAY! :p