1. 86
    1. 28

      “Humility” is a great name for a debugger. The only better name might be “Frustration”.

      1. 9

        I like to affectionately name my debugger “ahhhhhhhh”

        1. 2

          How about naming the debugger “printf”?

          1. 3
            #define ahhhhhhhh printf
            
            ahhhhhhhh("fsck this");
            
    2. 10

      This seemed like a wacky proposition before I understood that this is not an OS for servers, it’s an OS for server subcomponents.

      From the gut, I like the primitives that Hubris chose, and I pick up more than a whiff of Erlang’s influence. It will be interesting to see where else Hubris spreads in the coming years.

      1. 8

        I pick up more than a whiff of Erlang’s influence

        I think that our industry could do worse than to look at Erlang for ideas. There are probably a lot of problems in micro-service architectures that the Erlang folks figured out 25 years ago.

    3. 8

      I’ve been following Oxide, and wishing them well for a while. I too want a re-think of the computers surrounding the CPU, but I want it on my desk, not in the cloud. Hopefully once they’ve landed the datacenter contracts they’re obviously aiming at, they’ll also start selling boards and/or boxen to regular folks. A well-documented and open yet secure workstation sounds just bloody tops to my ears.

    4. 2

      it employed a strictly synchronous task model

      Not entirely certain how to interpret that – does that mean it’s cooperatively (as opposed to preemptively) scheduled?

      Edit: no, I see this page explicitly mentions preemptive multitasking. (So, still unsure what “strictly synchronous task model” means.)

      1. 8

        This seems to be a reference to Hubris’s IPC mechanism and the general execution model for tasks, which is discussed in more detail in the linked docs.

        Tasks have a single thread of execution, and cannot do anything asynchronous: if they send an RPC to another task, they’re suspended by the kernel until that other task responds (or the kernel synthesizes a response if that task crashes). They only receive asynchronous notifications from other tasks or hardware interrupts when they explicitly perform a receive (which suspends the task until something noteworthy happens).

        You can still do preemptive execution in this model - arguably more easily, because there are very few surprises for the kernel to deal with: a task is either runnable, or it took one of a small number of actions that are explicitly documented to suspend it, until some other small set of documented actions resumes it.

        This makes for a very nice programming model: a task is single-threaded, runs an explicit event loop if it exposes an API to other tasks, and everything it does is synchronous and executes exactly like it says in the code. Even interaction with the rest of the OS looks like normal function calls that just execute in a roundabout way.
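        None of this is Hubris’s actual API, but the shape of the model can be sketched in ordinary Rust, simulating tasks with threads and the kernel’s blocking send/receive with channels (all names here are made up for illustration):

        ```rust
        use std::sync::mpsc;
        use std::thread;

        // A "server" task: a single thread of execution running an explicit
        // event loop. It is blocked in recv() until a request arrives, handles
        // it, and replies before looking at the next one.
        pub fn spawn_server() -> mpsc::Sender<(i32, mpsc::Sender<i32>)> {
            let (tx, rx) = mpsc::channel::<(i32, mpsc::Sender<i32>)>();
            thread::spawn(move || {
                for (msg, reply_tx) in rx {
                    let _ = reply_tx.send(msg * 2); // "handle" the request
                }
            });
            tx
        }

        // The client side of a send: the caller blocks until the reply comes
        // back, so the whole round trip reads like an ordinary function call.
        pub fn send(server: &mpsc::Sender<(i32, mpsc::Sender<i32>)>, msg: i32) -> i32 {
            let (reply_tx, reply_rx) = mpsc::channel();
            server.send((msg, reply_tx)).expect("server task died");
            reply_rx.recv().expect("no reply") // suspended here, as the kernel would suspend us
        }
        ```

        In the real system the kernel does the suspending and can synthesize a reply if the callee crashes; the threads and channels above just stand in for that.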

        1. 8

          Just to add to that, this is not just a very useful model, but it can also be tuned for surprisingly good performance. Just a few days ago, in another comment here, I mentioned QNX as one of the microkernels that figured out very early that building a fast message-passing system involves hooking the message-passing part to the scheduling part. One of the tricks it employed was that (roughly – the terminology isn’t exact and there were exceptions) if a process sent a message to another process and expected an answer, then that other process would be immediately scheduled to run. In a strictly synchronous task model, even a simple scheduler can get surprisingly good performance by leveraging the fact that tasks inherently “know” what they need and when.

          It’s also worth pointing out that this makes the whole system a lot easier to debug. I haven’t used Hubris so I don’t know how far they’ve taken it but one of my pet peeves with most async message-based systems is that literally 90% of the debugging is “why is this task in this state”, as in “who sent that stupid message and why?” If the execution model is strictly synchronous that’s very easy to figure out: you just look at the tasks that are in suspended state and see which one’s trying to talk to yours, and if you look on their stack, you also figure out why.

          It’s probably also worth pointing out that all these things – synchronous execution, tasks defined at compile-time, and (not used by Hubris, but alluded to in another comment) cooperative multitasking are very much common in embedded systems. I’ve worked on systems that were similar in this regard (strictly synchronous, all tasks defined at compile-time) twice so far. It doesn’t map so well to general-purpose execution but this isn’t a general-purpose system :-D.

      2. 2

        I’m not sure why they wrote “preemptive multitasking” there. I’ve read the documentation and briefly looked at the code — tasks are only switched on syscalls and interrupts.

        1. 8

          Isn’t an interrupt-triggered task switch (e.g. on a timer interrupt, say) kind of the definition of preemptive multitasking? If the interrupted task gets stopped and another task starts running on the CPU, the first task has been preempted, no?

          1. 1

            By “interrupts” I meant hardware interrupts that the tasks subscribe to. I guess it still is preemptive but I don’t think they use a timer specifically for preemption.

        2. 4

          “Preemptive” is a term that hasn’t been used in a long time because the thing it replaced, “cooperative multitasking”, is no longer in use. In cooperative multitasking, each program needed to explicitly call some OS-provided function to yield time to other programs. For example, on pre-OS X Macs it was WaitNextEvent().

          1. 10

            Strangely, cooperative multitasking is in use again. Just at the next level up in the stack. We’ve just renamed it to things like “green threads” or “async” and so on, and it’s multi-tasking at the “task inside a process” level instead of the “process inside the OS” level.

            1. 2

              I was just thinking that. As I understand it, JS sagas implemented using generator function*s and yield are basically doing cooperative multitasking within a JS single-threaded execution context, right? And isn’t this similar with generators in other languages?

              1. 1

                I’m not familiar with exactly what you mean when you say “saga”, but probably. Async in JavaScript is cooperative multitasking, so assuming they make use of that, yes.

                Generators, in a way, but they’re mostly an even simpler form of control flow than co-operative multi-tasking, in that there is no scheduler, just “multiple stacks” (though usually the extra stacks are emulated with compiler magic) which you explicitly switch between. Generator support at the language level is enough to make some pretty ergonomic co-operative multi-tasking libraries, though. Rust, for example, builds all its async machinery as syntactic sugar on top of generators, with ordinary libraries handling the scheduling and no special language support for that part.
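                A toy sketch of the “explicit switch, no preemption” idea in Rust (no real generators here, since those are still unstable in Rust - each task is just a closure that does a slice of work and returns control, which is the yield):

                ```rust
                // A minimal cooperative scheduler: each "task" is a closure that
                // does one slice of work and returns true while it has more to do.
                // Returning is the explicit yield; nothing preempts a task mid-slice.
                fn run_round_robin(mut tasks: Vec<Box<dyn FnMut() -> bool>>) {
                    while !tasks.is_empty() {
                        // Give each remaining task one slice, dropping finished ones.
                        tasks.retain_mut(|task| task());
                    }
                }

                // Two tasks that each log two slices of work; the scheduler
                // interleaves them because each one voluntarily yields.
                fn interleaved_log() -> Vec<String> {
                    use std::cell::RefCell;
                    use std::rc::Rc;
                    let log = Rc::new(RefCell::new(Vec::new()));
                    let mut tasks: Vec<Box<dyn FnMut() -> bool>> = Vec::new();
                    for name in ["a", "b"] {
                        let log = Rc::clone(&log);
                        let mut step = 0;
                        tasks.push(Box::new(move || {
                            log.borrow_mut().push(format!("{name}{step}"));
                            step += 1;
                            step < 2 // true = "more work left"; scheduler calls again
                        }));
                    }
                    run_round_robin(tasks);
                    Rc::try_unwrap(log).unwrap().into_inner()
                }
                ```

                The interleaving (a0, b0, a1, b1) happens only because each closure voluntarily returns; a task that looped forever would starve the others, which is exactly the cooperative trade-off.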

            2. 1

              Huh. I guess that’s true. Fascinating!