“based on existing work done on the MACH microkernel at Carnegie Mellon.”
“There was already significant evidence that building an operating system based on the MACH kernel was worthwhile—after all, it was what NeXTSTEP, the operating system that would later form the basis of the modern MacOS”
Even without the title, I’d have been able to predict what happened. Almost all trash talk about microkernels goes back to people’s experiences with MACH. It was an overly complex kernel with terrible performance. Both performance- and security-oriented projects built on it missed some or all of their goals. Jobs used it but kept other components in kernel mode. Even that wasn’t a success for MACH, given its goal was running everything in user mode at good speed; it was a success for the hybrid kernel called XNU. QNX and the L4 family are examples of pulling off the separation-versus-speed balance effectively.
“Linux wins heavily on points of being available now.”
Worse is Better and First Mover Advantage both teach us this lesson. Hindsight says a microkernel architecture that plugged into GNU, with the ability to port monolithic components piece by piece, might have taken off. The first thing available was monolithic, the combo was useful, so it took off. People pushing “better” things should keep this in mind: favor quick execution of stuff with immediate utility.
“One, putting it together and having it meet all these competing needs was incredibly complex, something reflected by the fact it spent billions of dollars on the project;”
This is a common, incorrect assumption. Refute it any time you see it. The amount of effort or money a group spends on a problem says as much about the group as about the problem. CompSci teams and small businesses with five- to low-six-figure budgets regularly pull off innovations that big firms, including IBM, fail to achieve despite billions in R&D. So, the above quote might just prove IBM’s execution was ineffective and hugely expensive. You can’t tell from the dollars alone.
“Taligent was one of several pie-in-the-sky fiascos that left Apple in such desperate straits that they had to buy NeXT,”
Nah, they had two options on the table: NeXT and BeOS. Macs and iPhones would be blazing fast had they bought BeOS. They wanted Steve Jobs back, though. Previously kicked out of Apple, Jobs had gotten his act together and built a great platform; he was (if acquired) the right guy to turn Apple back around, and they’d get his platform in the bargain. So, they bought NeXT.
“ while the embedded QNX excels at the stability part of the microkernel equation.”
And speed, the reason MACH tanked. The BlackBerry PlayBook was based on QNX. In (possibly staged) demos I saw, it smoked the iPad at multitasking, with app switches that appeared to have no lag. That responsiveness probably comes from QNX’s architecture being designed to guarantee it (when properly applied).
Also, don’t forget Minix 3 with its self-healing capabilities. Between the reliability and the licensing, Intel decided to use it in the Management Engine. So, it’s now one of the most widely deployed microkernels in existence.
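The self-healing idea is easy to illustrate. This is a toy sketch of my own (not Minix 3’s actual reincarnation server, and the names are made up): a supervisor keeps restarting a “driver” when it crashes, so a single failure doesn’t take the system down.

```python
# Toy sketch of the restart-on-failure idea behind Minix 3's self-healing.
# Not real Minix code; supervise() and flaky_driver() are invented for
# illustration.

def supervise(start_driver, max_restarts=5):
    """Run start_driver(); if it crashes (raises), restart it,
    up to max_restarts times."""
    restarts = 0
    while True:
        try:
            return start_driver()
        except RuntimeError:
            restarts += 1
            if restarts > max_restarts:
                raise  # give up: the driver is persistently broken
            # In Minix 3, the reincarnation server would respawn the
            # driver *process* here; we just call the function again.

attempts = {"n": 0}

def flaky_driver():
    """Simulated driver that crashes on its first two starts."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("driver crashed")
    return "driver healthy"

result = supervise(flaky_driver)  # survives two crashes, then succeeds
```

The point is that the recovery policy lives outside the component being recovered, which is only practical when drivers are isolated user-mode processes rather than kernel code.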
Even that wasn’t a success for MACH, given its goal was running everything in user mode at good speed; it was a success for the hybrid kernel called XNU.
That wasn’t a goal for Mach. From “Programming Under Mach”:
The Mach project at CMU was originally called the “Supercomputer Workbench Project”. That name came from the title of the original DARPA proposal I wrote in 1983 that had as its goal the development of an operating system for experimental multiprocessors…
and
The goal of that research was to develop a new operating system that would allow computer programmers to exploit modern hardware architectures emerging from vendors, universities, and research laboratories.
It is true that Mach versions prior to 3 were an object-oriented system that had an in-kernel BSD personality. It is true that Mach versions 3 and later are true microkernels. It is true that macOS uses XNU (though NeXTSTEP did not). XNU contains sources from NeXT’s Mach 2.5 port, Mach 3, and OSF’s osfmk port. It is not clear how this is a failure of Mach.
I could’ve been misinformed. I did read those papers thinking they were building a usable, general-purpose system to compete with monoliths. Given my memory could be faulty, I decided to go back to an early introduction to check. I’m also checking in case they backpedaled from earlier claims after a failure; that happens, too. So, here’s the first one (Feb 1989) I found that straight-up summarizes the project instead of describing pieces of its design: Mach: A System Software Kernel (in PostScript).
Here are some excerpts that seem to confirm my earlier view that performance was a goal:
“…These facilities allow the efficient implementation (my emphasis) of system functions outside the operating system kernel and support for binary compatibility with existing operating system environments.” (the first part says performance is a goal; the second indicates they wanted to make it useful, though that could be just prototyping)
“…modern memory management techniques (such as copy on write) are employed whenever large amounts of data are sent in a message from one program to another. This allows the transmission of megabytes of data at very low cost (my emphasis) with no actual data copying.” (they were optimizing IPC for performance)
“4. Current Status.” I’ll let you read that one. It indicates they were using it daily, with some companies selling it. Performance would be a natural goal to optimize for in that setting.
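The copy-on-write trick in the second excerpt can be sketched in a few lines. This is my own toy model (not Mach’s actual VM code; `CowBuffer` and its methods are invented): a “message send” hands the receiver a reference to the sender’s pages, and the real copy happens only if and when someone writes.

```python
# Toy sketch of copy-on-write message transfer, as described in the Mach
# excerpt. Not real Mach code; the CowBuffer class is invented for
# illustration.

class CowBuffer:
    """Pages shared by reference until the first write forces a copy."""

    def __init__(self, pages):
        self._pages = pages    # shared list standing in for memory pages
        self._shared = True    # True while someone else may see our pages

    def share(self):
        """'Send' the buffer to a receiver: O(1), no data copied."""
        return CowBuffer(self._pages)  # new buffer aliases the same pages

    def read(self, i):
        return self._pages[i]

    def write(self, i, value):
        if self._shared:
            # First write after sharing: copy the pages *now*, privately.
            self._pages = list(self._pages)
            self._shared = False
        self._pages[i] = value

sender = CowBuffer(["page0", "page1"])
receiver = sender.share()        # "megabytes transferred" with zero copying
receiver.write(0, "modified")    # only this triggers an actual copy
```

In real Mach the MMU does this at page granularity with hardware write faults, so a message that is only ever read costs almost nothing regardless of size.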
It wasn’t good enough to run as much in user mode through IPC as some later designs did. Doing that efficiently was a goal per the paper. So, they failed that goal after some otherwise really interesting work.
I still thank you for the counter, given I misremembered or didn’t know that BSD code was already in the early versions, with 3.0 being the microkernel. I came in late, after everyone had dumped Mach, just skimming descriptions of the microkernel part of it. I probably saw version 3.0 and assumed NeXT added the BSD code later. I might re-read the earlier papers in depth in the near future just to give it a fresh look. I was unfair to it in the past since I was only security-focused rather than looking at its other qualities that might deserve respect.
Modern memory management techniques did indeed make Mach more efficient. 4.4BSD adopted Mach’s virtual memory management because it was faster than BSD’s. All those Free/Open/Net/Midnight/Dragonfly systems have VMMs descended from Mach.
As for the “efficient implementation” line, I think “our goal is performance” is an overly specific reading that isn’t warranted, given the academic context of the paper. This could also be read as “implementing services outside the kernel is easy”, “we think this is probably efficient, graphs to follow”, or “please keep funding our group”.
4.4BSD adopted Mach’s virtual memory management because it was faster than BSD’s. All those Free/Open/Net/Midnight/Dragonfly systems have VMMs descended from Mach.
I didn’t know that. It’s a hell of a contribution indeed.