1. 52
  1. 14

    Why can’t Unikernels be SMP?

    1. 9

      IncludeOS has SMP support.

      MirageOS doesn’t, but that’s more about OCaml’s threading model. HalVM does (or probably could, anyway) thanks to Haskell’s SMP support, but it’s got some other problems. Then there’s LING, which uses the Erlang VM on Xen. Not sure about their SMP story, but I’d bet it’s not intrinsically difficult, due to Erlang’s design. But then again, if you have a fine-grained message passing architecture and can host lots of little lightweight kernels, maybe the kernels themselves don’t need SMP at all.

      1. 3

        No idea. Maybe they’re using some approach where the bare minimum of the kernel is linked in, but somehow no process management (implied by flat address space reference). I think you’re right though.

        1. 2

          No reason I can think of…..

          …except the stuff that does that in the Linux kernel is the hardest and most mindbending to get right, is the product of man-decades of work, and usually depends on the fine fine fine print of the actual CPU and memory architecture spec.

          Every programmer I know can write a multi threaded program that will work.

          Most of the time.

          Very very few programmers I know can write a significant multithreaded system that will be right 99.99999% of the time.

          I don’t think I have met one who gets them right all the time….

          Multi-processor shared memory systems with multiple layers of caching are much harder to get right.

          1. 2

            I assume because the natural use case falls towards kilobytes instead of megabytes, and nobody needed to fit in kilobytes when they had multiple cores lying around.

            1. 1

              There’s no reason they can’t be. But the definition of unikernel they use in the post may preclude it.

            2. 10

              Just like the author, I thought unikernels would fit a specific space of security and performance.

              I would argue the reason they didn’t take off had nothing to do with the limitations of the tech. It took me a long time to learn that technology doesn’t get investment because it’s good, it gets good because of investment. The investment into unikernels just wasn’t there. I disagree with the author; I still think it’s superior to embedded Linux. But that doesn’t matter: VC money went to Docker instead of rump kernels. Money just isn’t that smart, and ultimately investors are…. simple creatures.

              1. 11

                Don’t forget backward compatibility with existing APIs and stuff. Lots of tech that takes off does that, where clean-slate designs usually don’t.

                1. 2

                  I don’t really understand the Docker/unikernel dichotomy. Can unikernels be used as containers, like what Docker enables?

                  It just seems like very different technology

                2. 6

                  Unikernels are cool on bare metal, but in a hypervisor… well, the hypervisor becomes just like a normal OS, but with the ABI being “MBR/EFI bootable bare metal images plus virtio hypercalls plus some emulated 90s hardware”.

                  The marketing benefit of “It’s Hardware Isolation™” doesn’t seem to actually be a huge security benefit, since the hypervisor might have holes in hypercalls just like a regular OS can have holes in syscalls. Process isolation hasn’t been the most pwned part of modern kernels anyway. I think Xen has had more isolation vulnerabilities than FreeBSD or Linux.

                  The ABI of the future shouldn’t be unrestricted unix processes. Shouldn’t be bare metal OS images with virtio either. It should be CloudABI.

                  1. 3

                    Although I agree conceptually, it can still be a benefit if the hypervisor is simpler, with less code than an OS. This is especially true when the hypervisor is a focused, recent design whereas the OS was trying to do everything, with layers of cruft built up over time.

                  2. 4

                    Regarding the security of unikernels, I believe that there is a reasonable rebuttal from Bryan Cantrill: https://www.joyent.com/blog/unikernels-are-unfit-for-production

                    1. 1

                      Nice read, but doesn’t sound like Cantrill is unbiased here.

                      1. 4

                        He might be biased (joyent/containers vs unikernels) but @bcantrill’s opinions are usually more driven by wanting stronger engineering practices than marketing.

                    2. 4

                      Commenting specifically on unikernels and not the general lightweight VM vs. serverless & container approach: my thought on using unikernels is that they should give better latency, since you don’t have to switch between user and kernel space. But from a security perspective, unless some of the changes discussed in https://lobste.rs/s/jsjkfn/making_c_less_dangerous happen, you are probably better off with processes and their own memory space.

                      I haven’t lately figured out exactly how far you can cut down the Linux kernel in terms of memory footprint and actual lines of code in use, but it used to be possible to run Linux in 4MiB of RAM. Yocto/openembedded can be quite complicated to use; I’d consider first whether https://buildroot.org/ meets your needs if you’re going to start from scratch.

                      1. 4

                        Why would processes be more secure than unikernels? The attack surface of the kernel is much, much larger than that of a hypervisor.

                        1. 3

                          Unless your hypervisor is providing more services than hypervisors traditionally do, in theory you should be able to strip down a traditional kernel almost as far as a unikernel, except for the syscall and process management portions.

                          I’ve worked on embedded devices where not even the network stack was run on-chip, it was just bytes over a serial port. I think that’s the level of offloading where a unikernel would start to look significantly different than a traditional paravirtualized kernel.

                          1. 1

                            Like Oberon, Solo (about 16K for core), NuttX (highly tunable), and embOS (has some numbers). Of course, you probably already know about the stuff on the right being in embedded.

                            I bet you embedded folk cringe at the waste and unpredictability in mainstream OS’s and software even more than most of us, knowing it could be way better because you work with better. Well, it depends on what you build on, like with hard RTOS’s and such.

                            “I’ve worked on embedded devices where not even the network stack was run on-chip, it was just bytes over a serial port.”

                            I actually posted in Jack Ganssle’s Embedded Muse here that real-time products should ditch traditional interrupts wherever possible in favor of I/O coprocessors or asynchronous circuitry. I gave examples there, with the concept most proven in mainframes, which handle massive CPU utilization and I/O throughput simultaneously. What do you think about that? Oh yeah, Jack emailed me that there was at least one product doing that, with a big ARM core for the main workload but a tiny, cheap one for I/O.

                            And I just noticed right under my reply was one from “John Carter.” We have a @johncarter here that does embedded. You a muse reader, too, John? Or a different guy?

                            1. 3

                              We have a @johncarter here that does embedded. You a muse reader, too, John? Or a different guy?

                              Guilty as charged, that’s me.

                              I bet you embedded folk cringe at the waste and unpredictability in mainstream OS’s and software even more than most of us, knowing it could be way better because you work with better.

                              It’s always a trade-off. I have tiny, low-cost, low-powered embedded Linux systems on my desk…

                              With OpenEmbedded I can pull in tens of thousands of packages to address any need, way faster and better than I could write them myself in a decade….

                              And depending on whether we keep our brains switched on, it will be more robust security-wise than any traditional micro RTOS I could use… since the real-world security testing of the Linux kernel exceeds just about anything else on the planet.

                              I also have systems on my desk where I’m constantly being pushed to reduce bill of materials (BoM) cost, manufacturing cost, power consumption, shelf life, physical size, weight, … ie. the business side of the company will never stop pushing that, for sound business reasons.

                              There, we’re currently using a traditional embedded RTOS, and we’re even considering the idea of swapping in what is effectively a unikernel that my colleague and I wrote. (We run our off-target test suite using it, since I can cycle-perfect emulate the scheduling behaviour of the target RTOS whilst playing nice with valgrind and gdb and gcov… on the desktop.)

                              Interrupts, yup, they are a pain. I always feel there is something wrong with any design that has a guideline, “You mustn’t spend too long in a “blah” routine.” “Ok, so how long is too long?” “Dunno, shorter is better”.

                              That to me just stinks.

                              Coprocessors? Hmm. We’re sort of lucky… we have an FPGA to play with. So it’s our coprocessor. And a DSP.

                              Ideally a coprocessor design should be autonomous. ie. The big CPU can go to sleep while nothing is happening; the I/O coprocessor ticks along handling noise and bounces and… coughs an event up into the fifo and wakes big brother if and when something needs thinking about.

                              And somebody somewhere has an event-handling resource budget that guarantees that fifo never overflows, and somebody designed up front a strategy for throttling / providing back pressure on input rates. (Hint for any newbies reading this: Growing that fifo sounds like a solution, right? Give me a couple of sound reasons why it isn’t…)
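
                              To make that concrete, here’s a minimal sketch of such a bounded, single-producer/single-consumer event fifo with explicit back pressure. All the names are hypothetical rather than from any particular RTOS, and it hand-waves the memory-ordering details a real implementation would need:

                              /* Bounded SPSC event fifo: the coprocessor pushes, the big CPU pops. */
                              #include <stdbool.h>
                              #include <stdint.h>

                              #define EVENT_FIFO_SIZE 64u   /* sized from the event-rate budget, a power of two */

                              struct event { uint8_t kind; uint32_t payload; };

                              struct event_fifo {
                                  struct event slots[EVENT_FIFO_SIZE];
                                  volatile uint32_t head;   /* written only by the coprocessor (producer) */
                                  volatile uint32_t tail;   /* written only by the big CPU (consumer) */
                              };

                              /* Producer side: refuse the event rather than overflow, so the coprocessor
                                 throttles or coalesces its input instead of silently dropping data. */
                              bool fifo_push(struct event_fifo *f, struct event e)
                              {
                                  uint32_t next = (f->head + 1u) % EVENT_FIFO_SIZE;
                                  if (next == f->tail)
                                      return false;         /* full: apply back pressure upstream */
                                  f->slots[f->head] = e;
                                  f->head = next;           /* also the point to wake big brother */
                                  return true;
                              }

                              /* Consumer side: the big CPU drains everything queued when it wakes up. */
                              bool fifo_pop(struct event_fifo *f, struct event *out)
                              {
                                  if (f->tail == f->head)
                                      return false;         /* empty: go back to sleep */
                                  *out = f->slots[f->tail];
                                  f->tail = (f->tail + 1u) % EVENT_FIFO_SIZE;
                                  return true;
                              }

                              Returning false from the push is the back-pressure point: the producer has to slow down or coalesce, because growing the fifo only hides a mismatch between the input rate and the big CPU’s drain rate, and adds latency while it does so.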

                              In a world full of micromanagers, designers tend to design systems and protocols that micromanage the coprocessors and destroy the value of having coprocessors.

                              ie. Your coprocessor idea is good, sound, but dammit it’s hard to get a herd of cats to design and implement that properly! (Sometimes the micromanagement is even part of the international standard you might be implementing!)

                              1. 1

                                Although I upvoted it, I forgot to say thanks for the reply. Might be able to market something like that if designed in reverse: the coprocessor is the main processor with the library set up to run reactive routines on the “general-purpose” processor. Kind of like how that one SoC on budget boards is really a graphics chip with a general-purpose ARM on the side. The marketing and technical materials would present it the way it’s intended to be used. Everything else about it is whatever is standard.

                                The recent discussion of FSM’s had me looking for an old comment of yours to jokingly counter @mempko’s “FSM’s are your friend” with “Long as you’re not John Carter.” ;) Then, I noticed the comment about showing complexity.

                                Interestingly enough, I was advocating on HN recently for using Abstract State Machines for complexity measurement. I don’t know if it’s been done. My idea was looking at the ranges of values, the transitions, and their combinatorial explosion. The higher those numbers, the higher the complexity. That ASM’s are a fundamental kind of thing you can model hardware and software in makes it generally applicable. That they’re a minimalist thing, like Turing Machines operating on structures, means there’s either very little or no incidental complexity clouding the measurements. What do you think of that, as a guy who’d like to see more state machines in your industry?

                                1. 2

                                  FSM’s are your friend

                                  Look at it this way.

                                  If someone tells you he has an FSM with 3 possible input events and 4 states, you say, “Cool, all I have to do is review and test 4*3 = 12 things. Easy Peasy, FSM’s are my friend.

                                  I really really don’t care what the state of the rest of the system is, I really really don’t care in what order the events arrive, I just have to think about 12 things to be certain this thing does The Right Thing!”

                                  Now you start reviewing and you note one of the events is actually a message bearing a uint32_t payload; whoops, that’s 2^32 events you really need to think about. Another is bearing a pointer into the guts of a bear…

                                  And on the state side, you note in one of the transitions it does something vaguely like…

                                  void State1Event2()
                                  {
                                      /* the next state depends on whatever hidden state someThingHorrid() consults */
                                      if (someThingHorrid()) gotoState(State1);
                                      else gotoState(State2);
                                  }
                                  

                                  Oh shit! How much state is lurking in the bowels of someThingHorrid(), oh dear, there is nothing Easy Peasy about this nightmare!

                                  I think @vyodaiken in his post https://lobste.rs/s/j0nsoo/mathematical_basis_for_understanding has the tail of something important, but I don’t think he has stated it clearly enough.

                                  So let me clear up my own thinking by chewing on the tail of what I think Vyodaiken is saying.

                                  Now assume nobody has done anything unhappy making, and things really really are as Easy and as Peasy as reviewing 12 things.

                                  So let’s say we have another subsystem also done as a FSM, with 2 events and 5 states and we really only have to contemplate 2*5=10 things.

                                  If our system has just these two FSM’s, life is superbly simple: review and test 12 + 10 = 22 things and we’re provably PERFECT!

                                  Of course, the two FSM’s will be interacting, but that’s OK, the design holds.

                                  If, for example, the interaction is in the form of a hierarchical state machine, with the second FSM completely embedded within a single state of the other. Great. Perfect.

                                  But usually, under time pressure from a fantasy deadline, somebody ties the two machines together inappropriately, and suddenly it’s not 12+10=22 things you need to review and test, it’s 12*10=120 things to review and test.
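
                                  To put that arithmetic into code, here’s a sketch of the two machines as plain transition tables (names and transitions purely illustrative); kept independent, each is a pure function of (state, event) and reviewable entry by entry:

                                  /* Machine A: 4 states, 3 events => 4*3 = 12 entries to review. */
                                  enum a_state { A_S1, A_S2, A_S3, A_S4, A_NSTATES };
                                  enum a_event { A_E1, A_E2, A_E3, A_NEVENTS };

                                  const enum a_state a_next[A_NSTATES][A_NEVENTS] = {
                                      /*          A_E1   A_E2   A_E3 */
                                      /* A_S1 */ { A_S2,  A_S1,  A_S3 },
                                      /* A_S2 */ { A_S2,  A_S3,  A_S1 },
                                      /* A_S3 */ { A_S4,  A_S3,  A_S1 },
                                      /* A_S4 */ { A_S4,  A_S1,  A_S1 },
                                  };

                                  /* Machine B: 5 states, 2 events => 5*2 = 10 entries. Total so far: 12 + 10 = 22. */
                                  enum b_state { B_S1, B_S2, B_S3, B_S4, B_S5, B_NSTATES };
                                  enum b_event { B_E1, B_E2, B_NEVENTS };

                                  const enum b_state b_next[B_NSTATES][B_NEVENTS] = {
                                      /*          B_E1   B_E2 */
                                      /* B_S1 */ { B_S2,  B_S1 },
                                      /* B_S2 */ { B_S3,  B_S1 },
                                      /* B_S3 */ { B_S4,  B_S2 },
                                      /* B_S4 */ { B_S5,  B_S3 },
                                      /* B_S5 */ { B_S5,  B_S1 },
                                  };

                                  /* The moment a_next is allowed to peek at machine B's state (or vice versa),
                                     every one of A's 12 entries has to be checked against every one of B's 10,
                                     and you are back to reviewing the 12*10 combinations above. */

                                  That’s also why the hierarchical embedding above is the happy case: machine B only lives inside one state of machine A, so the two tables never have to be multiplied together.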

                          2. 1

                            You can use separation kernels to reduce attack surface, like commercial products were doing as far back as 2005. Those also ran Linux in user mode. Trimmed and/or memory-safe Linux running on a tiny kernel with secure IPC like Cap’n Proto.

                            1. 1

                              That doesn’t sound like Docker.

                              1. 1

                                I said as much to a person claiming to secure it once.

                        2. 5

                          This is an interesting take, thanks for sharing it.

                          I have no experience with unikernels but have always been sceptical on exactly this basis - that a lot of that extra stuff you’re cutting out is actually useful and might be missed. I’d be interested to hear if any lobsters have (or know somebody who has) used unikernels in anger, and whether they feel the same way.

                          1. 7

                            Haven’t used unikernels, but I have been using Docker containers for several months, and even there the comment in the article about “debuggability” rings true - I’ve been annoyed plenty of times because a Docker container that had some kind of problem I was trying to debug didn’t have a software tool I needed installed (run docker exec -it <container> /bin/bash to get a shell - but none of the tools you would normally use to debug anything are installed in the image!). I imagine this would be strictly worse in a unikernel environment.

                            1. 2

                              I think the idea is, you just don’t debug a live unikernel. Once deployed, it’s basically a black box. If it’s acting up, you kill it and replace it with a fresh one. So you rely much more on application monitoring and external diagnostics elsewhere in your system, and on tests or other means of assurance during development. Livestock vs pets and all that.

                            2. 4

                              I’ve always wondered why this isn’t done with microkernels instead. That way you could build a whole ecosystem on systems designed in a modular fashion, remove everything you don’t need, and still have compatible modules that could help you debug, etc., instead of serverless computing that’s cheaper until you need to pay for the privilege of looking at logs when something goes wrong.

                            3. 2

                              A unikernel tied to an application is one thing. A unikernel tied to a Java, Erlang, or JavaScript VM however, could be very useful.

                              1. 2

                                There have been Java OS’s. One of them might be useful for whatever you were thinking. What specifically did you mean by “tied to a Java… VM?”

                                1. 1

                                  True, but AFAIK none of them made it much past proof-of-concept, except for 10-15 years ago. But if you’re going to run a Java app, there are probably gains to be made by using some sort of JVM-as-kernel instead of JVM-as-app-atop-Linux.

                                  1. 2

                                    I think I see what you’re saying. The JX Project was a mostly-Java OS, the J-Kernel built a capability-based OS on the JVM, and some projects (JOP, aJile) built JVM’s in hardware. Azul Systems even had a Java processor for enterprise systems. Their switch to marketing an x86 VM more than the Java CPU indicates the market didn’t want it enough.

                                    A Java OS and unikernel running fast, compatible with current applications? Who knows. There’s a chance. So far, Java/JVM on the metal has had more success in embedded. Those [few] products keep getting updated.

                              2. 0

                                This might be another case of our intuitive aesthetic preferences being given weight over practicalities. Static typing is another example. As I recall, the few studies that exist show that static typing has negligible productivity impact.