Exokernels are not theory; they just got rebranded. We call them hypervisors now.
I’m not sure that I see a distinction between an app taking a structure containing pointers to syscall entries in its entry point versus an app taking, via its ELF auxiliary arguments vector, a pointer to a VDSO page that exports functions for invoking system calls. In both cases, the kernel is injecting some code into the app process and handing you metadata to access it. The difference with exposing it to Rust as a static type is just that you are now baking in the set of exported functions in a way that means adding a system call is an ABI (and possibly API) breaking change.
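To make the comparison concrete, here’s a minimal sketch of the “structure of syscall pointers handed to the entry point” shape - all names here are made up for illustration, not the project’s actual API - and the ABI problem falls straight out of the struct layout:

```rust
// Hypothetical sketch only; SyscallTable and its fields are invented names.
#[repr(C)]
pub struct SyscallTable {
    /// An ABI version lets a newer kernel hand older apps the layout they expect.
    pub abi_version: u32,
    pub write: unsafe extern "C" fn(fd: i32, buf: *const u8, len: usize) -> isize,
    pub exit: unsafe extern "C" fn(code: i32) -> !,
    // Adding another field here is exactly the ABI break being discussed.
}

#[no_mangle]
pub extern "C" fn _start(sys: &'static SyscallTable) -> ! {
    let msg = b"hello\n";
    unsafe {
        (sys.write)(1, msg.as_ptr(), msg.len());
        (sys.exit)(0)
    }
}
```

The VDSO-plus-auxv approach hands you essentially the same thing, just discovered at runtime by walking the auxiliary vector instead of being frozen into a static type.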
It looks as if the concurrency model is largely the same as Contiki: cooperative stackless coroutines. That’s nice for embedded systems, less nice for complex applications.
Maybe it’s fine if adding a system call is an ABI breaking change, as long as the kernel knows what version of the ABI a given app was built against and can make sure it passes the right structure to the program at startup, so a newer kernel can continue to run individual programs built against an older kernel version. Certainly it’s easier for the application programmer if every piece of functionality they could possibly use from the OS is accessible in one place and documented in a uniform way - it makes the life of the OS developers harder, of course, but there are many more application developers than kernel developers.
I agree that cooperative multitasking isn’t all that useful for a complicated application. I think that’s the least interesting aspect of this project - I’d personally rather see more effort put into making experimental OSs that make as few assumptions about the cooperativeness of the software they run as possible.
> Maybe it’s fine if adding a system call is an ABI breaking change, as long as the kernel knows what version of the ABI a given app was built against and can make sure it passes the right structure to the program at startup, so a newer kernel can continue to run individual programs built against an older kernel version.
The downside of this is that all of the compat code lives in the kernel. FreeBSD does this to a large degree, and I’d really like to move the syscall interface into a VDSO and have a compat launcher that provides a userspace shim DSO that’s effectively LD_PRELOADed into older binaries, but it’s a bit late for that now.
The system call interfaces of Linux and FreeBSD are documented, but that typically isn’t what a programmer wants. They want the higher-level features exposed by libc (such as buffered I/O, APIs that handle retry in the presence of signal delivery, and so on). These APIs are also documented (in one place: the system manual).
I agree that trying new ideas is good, but all of the ideas I’ve seen in the README are ones I’ve seen in other places and there’s no mention of the other places that implemented them (Contiki, Singularity/Midori, ReduxOS, and so on) or of why those ideas worked or didn’t work in those contexts.
Could be the author never heard of those projects.
Unless I’m missing something, this process model seems to be a naive reimplementation of dynamic libraries (passing in a reference to a data structure containing all the imports) and coroutines (the process yields the CPU by saving state and returning, then picks up that state on the next call).
That’s what I got as well. I’m in favour of people trying new ideas. I’m even in favour of old ideas that didn’t work because of reasons that don’t really apply anymore (my new OS is basically MULTICS on microcontrollers), but these look like old ideas that we don’t use because they don’t work.
Did I understand correctly that multitasking requires a return from _start? That is, an app, in order to yield control, has to preserve its state in the context, unwind its stack, and return. Then it (the app’s _start) expects to be called again, and it has to restore state from the context and call the needed function to do whatever’s next (roughly as in the sketch at the end of this comment). Did I get that right?
On one hand it fits server-type apps fairly well: handle one request, yield, repeat. Most one-shot CLI apps probably don’t need to yield at all.
But I’m a little bit sceptical about whether this is a good model for highly stateful apps (large GUI apps) or performance-sensitive ones (like games).
It’s also unclear whether threads are supported or how multi-core CPUs are used. Likewise, how does the kernel handle uncooperative apps?
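If I’ve read the model correctly, an app would look something like this sketch - every name here (Context, the step constants) is my own invention to show the shape, not the project’s actual API:

```rust
// Illustrative sketch of the "yield by returning from _start" model as I read it.
const STEP_ACCEPT: u32 = 0;
const STEP_RESPOND: u32 = 1;

#[repr(C)]
pub struct Context {
    /// Everything the app wants to keep across a yield has to live here,
    /// because the stack is unwound every time it returns.
    pub step: u32,
    pub requests_handled: u64,
}

/// Called by the kernel each time the app is scheduled. Returning is the
/// yield: the kernel regains the CPU and re-enters here later.
#[no_mangle]
pub extern "C" fn _start(ctx: &mut Context) {
    match ctx.step {
        STEP_ACCEPT => {
            // ... accept and parse one request ...
            ctx.step = STEP_RESPOND; // remember where to resume
        }
        STEP_RESPOND => {
            // ... send the response ...
            ctx.requests_handled += 1;
            ctx.step = STEP_ACCEPT;
        }
        _ => {}
    }
    // Falling off the end returns control to the kernel.
}
```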
It’s the same model as Contiki. It works really well for tiny embedded systems. It’s also more or less what most Windows 3.0 apps looked like: read a message from the OS, process it, return control to the OS, in a loop. A few things would explicitly yield in the middle but most didn’t. It’s a big part of the reason everyone hated Windows 3.x: a single app refusing to return control froze the entire system (except the mouse cursor, which was updated from an interrupt handler).