This is from 1997 (specifically SOSP 1997), not 1995, despite the spurious copyright notice at the bottom. See the URL, and the fact that it cites papers from 1996 and 1997.
As a meta-quibble, while I find exokernels interesting to discuss, I’m not sure we need two different discussions for what amounts to the same topic, so I’d merge this one here or vice-versa.
I admit I just skimmed this article so maybe these points are all moot, but I got bored.
Abstractions are powerful. They can let a complex idea be simplified or “trimmed” to the basics, allowing quicker development, or even allowing something to be built at all. They also enable cross-platform software and even drivers (I think I read somewhere that nVidia and AMD maintain a cross-platform core where most of the features work everywhere, and then just wrap it in OS-dependent driver code to build an actual driver for each platform). Abstractions scale better than writing everything once per piece of hardware. I know that to a certain extent in an OS you want abstractions but also individually tailored code for each piece of hardware, so there has to be a happy medium; but from an application’s point of view it is great to just target Win32, POSIX, or Carbon/Cocoa rather than needing separate paths in the application for Win32.USB.RazorRat9 and Win32.PS2.SomeOldMouse because the OS didn’t abstract that out.
The authors aren’t against abstractions, just against putting all the abstractions in the kernel, where they end up (in their view) non-optional and inflexible. The exokernel design is to have the kernel only mediate access to the hardware, and then userspace applications can use the hardware however they want, including by layering abstractions over it if they’d like. For example, if you don’t want to program the NIC directly (and you probably don’t), you could just use a userspace networking library to abstract away the NIC. But different applications could choose different abstractions as suitable. If you really find an existing OS provides exactly the abstractions you want, you can even just use the entirety of its abstractions as a library, a concept they call a “library OS”.
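To make the library-OS idea concrete, here is a toy sketch (not the paper's actual interface; all the names like `RawDisk`, `FlatFileLib`, and `AppendLogLib` are invented for illustration). The "exokernel" piece only hands out raw numbered blocks; two different applications then link in two different userspace abstractions over the very same resource:

```python
class RawDisk:
    """Stands in for the exokernel's job: secure access to raw blocks,
    with no files, directories, or any other abstraction."""
    def __init__(self, nblocks=16, blksize=64):
        self.blocks = [bytes(blksize)] * nblocks
        self.blksize = blksize

    def read(self, n):
        return self.blocks[n]

    def write(self, n, data):
        assert len(data) <= self.blksize
        self.blocks[n] = data.ljust(self.blksize, b"\0")


class FlatFileLib:
    """One userspace 'library OS': a trivial one-file-per-block filesystem."""
    def __init__(self, disk):
        self.disk, self.names = disk, {}

    def create(self, name, data):
        n = len(self.names)  # naive allocation: next unused block
        self.names[name] = n
        self.disk.write(n, data)

    def read(self, name):
        return self.disk.read(self.names[name]).rstrip(b"\0")


class AppendLogLib:
    """A different application's abstraction over the SAME raw interface:
    an append-only log instead of named files."""
    def __init__(self, disk, start):
        self.disk, self.next = disk, start

    def append(self, record):
        self.disk.write(self.next, record)
        self.next += 1


disk = RawDisk()
fs = FlatFileLib(disk)
fs.create("greeting", b"hello")
log = AppendLogLib(disk, start=8)  # this app carves out blocks 8+ for itself
log.append(b"event-1")
print(fs.read("greeting"))
print(disk.read(8).rstrip(b"\0"))
```

The point is that neither abstraction lives in the "kernel" (`RawDisk`): each application picks the one that suits it, which is the flexibility the authors are after. (In a real exokernel the hard part, glossed over here, is safely multiplexing the blocks between untrusting applications.)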
In a modern context, I think this idea has come back via hypervisors, which can be seen as exokernels: a bare-minimum “operating system” that does little besides mediate hardware access. On top of that, you can run anything from a bare-metal application that does everything itself, to one that links in some carefully chosen abstractions, all the way to a whole guest operating system if you’d like. The whole-guest-operating-system approach has dominated (and for a while was seen as the only one, with hypervisors merely a tool for multiplexing OSs), but something more like the exokernel vision is becoming possible with tooling around unikernels, rump kernels, etc.