Abstract: “The security benefits of keeping a system’s trusted computing base (TCB) small has long been accepted as a truism, as has the use of internal protection boundaries for limiting the damage caused by exploits. Applied to the operating system, this argues for a small microkernel as the core of the TCB, with OS services separated into mutually-protected components (servers) – in contrast to “monolithic” designs such as Linux, Windows or MacOS. While intuitive, the benefits of the small TCB have not been quantified to date. We address this by a study of critical Linux CVEs, where we examine whether they would be prevented or mitigated by a microkernel-based design. We find that almost all exploits are at least mitigated to less than critical severity, and 40% completely eliminated by an OS design based on a verified microkernel, such as seL4.”
GNU Hurd it is
Well, so one of my Berlin Rust Hack & Learn regulars is porting rustc to GNU Hurd. I can switch soon; year of the desktop is 2109.
The fact that I can’t tell if this is a joke or a typo makes it a better joke.
Both. I made the typo and decided it's too good to be fixed.
If I remember correctly, Haiku also has a microkernel.
I thought that BeOS was microkernel-based, going by what so many people said. waddlespash of Haiku countered me, saying it wasn't. That discussion is here.
Haiku has a hybrid kernel, like Mac OS X or Windows NT.
QNX, Minix 3, or Genode get you more mileage. At least two have desktop environments, too. I’m not sure about Minix 3 but did find this picture.
Don’t MacOS and iOS both use variants of the Mach microkernel?
They’re what’s called hybrid kernels. They have too much running in kernel space to really qualify as microkernels. Using Mach was probably a mistake: it’s the microkernel whose inefficient design created the misconceptions we’ve been countering for a long time. Plus, if you have that much in the kernel, you might as well just use a well-organized, monolithic design.
That’s what I thought for a long time. CompSci work on both hardware and software has created many new methods that might have implications for hybrid designs. Micro vs. something in between vs. monolithic is worth rethinking hard these days.
That narrative makes it sound like they took Mach and added BSD back in until it was ready. In reality, Mach started as an object-oriented kernel with an in-kernel BSD personality, and that was the kernel NeXT took, along with CMU developer and Mach lead Avie Tevanian.
That was Mach 2.5. Mach 3.0 was the first microkernel version of Mach, and that’s the one GNU Mach is based on. Some code changes were backported to the XNU and OSFMK kernels from Mach 3.0, but they were always designed and implemented as full BSD kernels with object-oriented IPC, virtual memory management and multithreading.
Yeah, I didn’t study the development of Mach. Thanks for filling in those details. That they tried to trim a bigger OS down into a microkernel makes its failure even less surprising.
I don’t follow the reasoning; what failed? They didn’t fail to make a microkernel BSD, as Mach 3 is that. They didn’t fail to get adoption, and indeed it’s easier when you’re compatible with an existing system.
They failed in many ways:

- Little adoption. XNU is not Mach but incorporates it, whereas the Windows, Linux, and BSD kernels are used directly by large install bases.
- So slow as a microkernel that people wanting microkernels went with other designs.
- Less reliable than some alternatives under fault conditions.
- Less maintainable (e.g., no easy swapping of modules) than L4- and KeyKOS-based systems.
- Due to its complexity, every attempt to secure it failed. Reading about Trusted Mach, DTMach, DTOS, etc. is when I first saw it. All they did was talk trash about the problems they had analyzing and verifying it vs. other systems of the time like STOP, GEMSOS, and LOCK.
So, it was objectively worse than competing designs, then and later, in many attributes. It was too complex, too slow, and not as reliable as competitors like QNX. It couldn’t be secured to high assurance, either at all or at least not for a long time. So it was a failure compared to them. It was a success if the goal was to generate research papers and funding, give people ideas, and produce code someone might randomly mix with other code to create a commercial product.
It all depends on your viewpoint on, or requirements for, the OS you’re selecting. It failed mine. Microkernels + isolated applications + user-mode Linux are currently the best fit for my combined requirements. OKL4, INTEGRITY-178B, LynxSecure, and GenodeOS are examples implementing that model.
Yes, but with most of a BSD kernel stuck on and running in the same address space. https://en.wikipedia.org/wiki/XNU