1. 23
  1.  

  2. 4

    Interesting, but I’m not convinced.

    Size: Each unikernel image contains a kernel specially compiled for the application. A properly optimized container image contains just the application (the kernel is shared by all containers). It’s possible to create Docker containers under 10 MB: see here and here (and the sketch at the end of this comment).

    Security: Reusing a well known and widely deployed Linux kernel (without all the surrounding pieces provided by the “larger” OS) sounds at least as secure as compiling your own specialized unikernel.

    Compatibility: The compatibility story sounds better for containers, because your app will work by default without any extra work.
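
    For what it’s worth, the usual trick behind those tiny images is a statically linked binary on an empty base image (FROM scratch). Here is a minimal sketch in C; the exact build flags and the musl suggestion are just one way to get there:

    ```c
    /* tiny_service.c - the whole "application" that goes into the image.
     *
     * Built statically, e.g.
     *     gcc -static -Os -o tiny_service tiny_service.c
     * (or with musl-gcc for an even smaller binary), the result has no
     * shared-library dependencies, so the container image can be just this
     * one file copied onto an empty ("scratch") base.
     */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            /* stand-in for real work; logs go to stdout, no shell or
             * filesystem layout needed inside the image */
            printf("still alive\n");
            fflush(stdout);
            sleep(60);
        }
        return 0;
    }
    ```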

    1. 5

      I think the real benefit is going to be in high performance applications. With the advent of netmap/VALE I think we are going to start seeing more in-app userspace networking in the high performance realm. I think rump kernels/unikernels could make a strong play here with userspace drivers for things like custom, application-specific code for network cards (see the sketch at the end of this comment).

      Imagine a software defined switch (e.g. openswitch) supporting rump kernels on-the-metal with userspace drivers and network stacks. I could see that being pretty sweet for the trend of software defined infrastructure.
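
      To make the userspace networking idea concrete, here is a minimal receive-side sketch against netmap’s classic helper API (it assumes the netmap headers are installed and an interface, eth0 here as a placeholder, that supports netmap mode; error handling is mostly omitted):

      ```c
      /* netmap_rx.c - poll an interface in netmap mode and consume packets
       * entirely in userspace, with no per-packet syscalls and no copies
       * through the kernel network stack. */
      #define NETMAP_WITH_LIBS
      #include <net/netmap_user.h>
      #include <poll.h>
      #include <stdio.h>

      int main(void)
      {
          struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
          if (d == NULL) {
              fprintf(stderr, "nm_open failed\n");
              return 1;
          }

          struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
          for (;;) {
              if (poll(&pfd, 1, 1000) <= 0)
                  continue;

              struct nm_pkthdr hdr;
              unsigned char *buf;
              /* drain everything currently sitting in the RX rings;
               * application-specific processing of buf would go here */
              while ((buf = nm_nextpkt(d, &hdr)) != NULL)
                  printf("rx %u bytes, first byte 0x%02x\n", hdr.len, buf[0]);
          }

          nm_close(d); /* never reached in this sketch */
          return 0;
      }
      ```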

      1. 2

        Where I’ve worked, userspace networking drivers were already used in high performance networking products built on a Linux platform. One was a terabit router, but I wasn’t very close to the networking implementation. Another was more recent and was using VFIO.

        I think we’re already at what you describe, it’s just mostly in proprietary code. The SDN direction you’re talking about is SUPER exciting :)

      2. 2

        Whether size matters much depends, I think, on the extent to which microservices take off, especially the limiting case where you have a ton of them. In traditional services, the size of a kernel isn’t really that noticeable (especially a unikernel, rather than a typical desktop kernel with a bazillion drivers and modules), but it would be more noticeable if it were duplicated a lot of times.

        1. [Comment removed by author]

          1. 1

            For some things, a unikernel approach might allow a more efficient implementation of a high level language. For example, Azul style GC techniques when you can play directly with the page tables… But once you’re at that point, you’re in a pretty high performance high level language :)
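
            In userspace, that trick is usually approximated with page protection plus a fault handler; the point above is that a unikernel could skip the SIGSEGV round trip and poke the page tables directly. A toy sketch of the protection-based barrier (Linux-specific, nothing Azul-specific, and calling mprotect inside a signal handler is not something you’d ship):

            ```c
            /* barrier.c - toy GC-style barrier built on page protection:
             * protect a page, let the first access fault, fix things up in
             * the handler, then let the access retry. */
            #define _GNU_SOURCE
            #include <signal.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>
            #include <unistd.h>

            static char *page;
            static size_t page_size;

            static void on_fault(int sig, siginfo_t *info, void *ctx)
            {
                (void)sig; (void)ctx;
                char *addr = (char *)info->si_addr;
                if (addr >= page && addr < page + page_size) {
                    /* a real collector would relocate or fix up objects here */
                    mprotect(page, page_size, PROT_READ | PROT_WRITE);
                } else {
                    _exit(1); /* genuine crash, not our barrier */
                }
            }

            int main(void)
            {
                page_size = (size_t)sysconf(_SC_PAGESIZE);
                page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                struct sigaction sa = {0};
                sa.sa_sigaction = on_fault;
                sa.sa_flags = SA_SIGINFO;
                sigaction(SIGSEGV, &sa, NULL);

                mprotect(page, page_size, PROT_NONE); /* arm the barrier */
                strcpy(page, "hello");                /* faults once, retries */
                printf("%s\n", page);
                return 0;
            }
            ```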

        2. 2

          Size: Each unikernel image contains a kernel specially compiled for the application.

          Usually this is a kernel per runtime, not per application. For example, a V8 engine like runtime.js, or a JVM with a stripped-down API (which does not exist yet as far as I know). The key insight here is that if you’re already running inside KVM or Xen (like on EC2), you don’t really need a full kernel with all the drivers. You just need some shim drivers that talk to the fake hardware at the highest abstraction level they can manage.

          Systems like docker get people comfortable with an environment where there aren’t things like shells, or filesystems for logs. Once you’re okay with that, unikernels let you scrape away a few now-unnecessary layers.

          1. 3

            Don’t forget we’ll also need a kernel to manage our kernels.

            1. 1

              With containers, you just need one kernel (i.e. Linux) per machine, common to all services. If you are on bare metal, the service talks to the kernel that talks to the hardware. That’s 3 layers.

              With unikernels, you need one hypervisor (i.e. KVM or Xen) per machine, plus one unikernel for each service. The service (which includes the unikernel) talks to the hypervisor that talks to the hardware. That’s 3 layers too.

              Moreover, hypervisors are probably less complex than a kernel, but they are still quite complex. And it should be noted that KVM relies on Linux, so the “traditional” kernel is not really removed here.

              If you run your services in the cloud, the kernel can be slimmed down to only keep the drivers that talk to the “fake hardware” provided by the hypervisor.

              I’m convinced that unikernels can be useful in some specific situations (for example a networking service with very strict performance requirements), but I’m having a hard time understanding what they have to offer to more mainstream application development.

          2. 1

            I’m not sure I understand how unikernels work. Does this mean every application will have to be compiled for the hardware it will be running on, since there will be no OS to take care of this task?