
  1. 2

    Reading this clarified some notions for me, and made me wonder: is it really virtualization that we need?

    I have come to think that the large demand for virtual machines is really a demand for a stable, known, dedicated environment whose resources can evolve, rather than for hardware emulating hardware.

    Instead of solving this by adding virtualization, maybe there are other ways of managing hardware that would allow resources to be allocated arbitrarily to some reproducible environment, without running another instance of an operating system.

    For instance, distributed file systems make it possible to give an “account” an arbitrary amount of storage. That is indirection rather than virtualization.

    Does anyone know of work toward alternative ways of managing hardware resources, and how it relates to virtualization?

    1. 4

      Containerisation is probably what you’re talking about.

      Still one kernel, one set of device drivers, but individual environments for software to run in.

      LXC/LXD provide this as “system containers” - each container operates like a lightweight VM (including an init). Docker and the like provide “app containers” - each container is designed to run a single process/application.
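
      Roughly, the mechanism underneath both flavours is the same: kernel namespaces. Here is a minimal sketch of the idea (my own toy example, assuming Linux and root privileges - not LXC or Docker code) that uses clone(2) to give a child its own hostname and PID space while still sharing the one kernel:

      ```c
      /* Toy sketch of the namespace mechanism behind containers:
       * one kernel, but the child gets private UTS and PID namespaces.
       * Assumes Linux and root (CAP_SYS_ADMIN). */
      #define _GNU_SOURCE
      #include <sched.h>
      #include <signal.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/wait.h>

      static char child_stack[1024 * 1024];

      static int child(void *arg)
      {
          char name[64];
          (void)arg;
          /* Only visible inside our private UTS namespace. */
          sethostname("container", strlen("container"));
          gethostname(name, sizeof(name));
          /* In a fresh PID namespace this process is PID 1, like the
           * init of an LXC-style system container. */
          printf("inside: hostname=%s pid=%d\n", name, (int)getpid());
          return 0;
      }

      int main(void)
      {
          pid_t pid = clone(child, child_stack + sizeof(child_stack),
                            CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
          if (pid < 0) { perror("clone"); return 1; }
          waitpid(pid, NULL, 0);  /* the host's hostname is unchanged */
          return 0;
      }
      ```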

      1. 3

        Also interesting in this context is gVisor, which further sandboxes containers by implementing Linux system calls in user space, acting as a guest kernel:

        https://github.com/google/gvisor
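
        A toy illustration of the interception idea (not gVisor’s actual code - its ptrace platform is far more involved - and assuming x86-64 Linux): a tracer stops the child at every syscall boundary, which is exactly where a user-space kernel could service the call itself.

        ```c
        /* Stop a child at each syscall boundary and inspect the syscall
         * number; a guest kernel would emulate the call here instead of
         * letting the host kernel handle it. Note each call produces two
         * stops (entry and exit). x86-64 Linux only. */
        #include <stdio.h>
        #include <sys/ptrace.h>
        #include <sys/types.h>
        #include <sys/user.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();
            if (pid == 0) {
                ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                execlp("true", "true", (char *)NULL);
                return 1;
            }
            waitpid(pid, NULL, 0);  /* initial stop after execve */
            for (;;) {
                int status;
                struct user_regs_struct regs;
                /* Resume until the next syscall entry or exit. */
                ptrace(PTRACE_SYSCALL, pid, NULL, NULL);
                waitpid(pid, &status, 0);
                if (WIFEXITED(status))
                    break;
                ptrace(PTRACE_GETREGS, pid, NULL, &regs);
                printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
            }
            return 0;
        }
        ```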

        1. 1

          Somewhat related are rump kernels: kernels (or kernel components, like drivers) running in user space.

      2. 4

        I believe Sun Microsystems’ Solaris zones were one of the first examples of this.

        1. 1

          Of course it was first invented by IBM as WPARs.

          1. 3

            Apparently WPARs were only introduced with AIX 6.1, or were you referring to something else? Solaris zones were, of course, extensively based upon FreeBSD jails.

            1. 2

              Hm, maybe I was thinking about LPARs, but they’re more like full VM. Thanks for the clarification.

              1. 1

                Yeah, LPARs have been around for a while (they predate AIX I believe) but, like you say, they’re not as lightweight as a zone/jail.

        2. 4

          It’s a good point. One advantage of VMs over containers that comes to mind is the ability to run a different OS. Also, as a seller of platform as a service (or whatever it’s called when we rent computers from Linode, Digital Ocean, AWS, etc.), VMs provide the level of sandboxing that is required but can’t be fulfilled by containers.

          1. 3

            > One advantage of VMs over containers that comes to mind is the ability to run a different OS.

            Unless you implement a loader for the binary format and system calls:

            https://protocolsyntax.wordpress.com/2017/06/09/debian-7-wheezy-installation-in-freebsd-10-jail/

            (Of course, whether you’d call this running a different OS depends on whether you define the OS to just be the kernel.)
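
            The core of such a compatibility layer is a per-ABI dispatch table mapping foreign syscall numbers onto native implementations. A toy user-space sketch of that idea (my own illustration - nothing like FreeBSD’s actual Linuxulator code):

            ```c
            /* Toy sketch of syscall translation: a per-ABI table maps
             * foreign syscall numbers onto native implementations, so a
             * foreign binary can run on the native kernel. */
            #include <stdio.h>

            typedef long (*syscall_fn)(long, long, long);

            /* Hypothetical native implementations backing two Linux calls. */
            static long native_write(long fd, long buf, long len)
            {
                return (long)fwrite((const void *)buf, 1, (size_t)len,
                                    fd == 1 ? stdout : stderr);
            }

            static long native_exit(long code, long a1, long a2)
            {
                (void)a1; (void)a2;
                printf("[guest exited with %ld]\n", code);
                return 0;
            }

            /* Linux x86-64 syscall numbers: 1 = write, 60 = exit. */
            static syscall_fn linux_table[64] = {
                [1]  = native_write,
                [60] = native_exit,
            };

            /* What the trap handler would call for a Linux-branded process. */
            static long linux_dispatch(long nr, long a0, long a1, long a2)
            {
                if (nr >= 0 && nr < 64 && linux_table[nr])
                    return linux_table[nr](a0, a1, a2);
                return -38; /* -ENOSYS: this foreign call is not emulated yet */
            }

            int main(void)
            {
                const char msg[] = "hello via a 'foreign' write syscall\n";
                linux_dispatch(1, 1, (long)msg, (long)(sizeof msg - 1));
                linux_dispatch(60, 0, 0, 0);
                return 0;
            }
            ```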

            1. 1

              This is very interesting, actually. A binary format loader might not be that complicated, but implementing the system calls would be a fairly big ask - second only to implementing the OS itself in terms of complexity. Imagine running Windows in the user space of FreeBSD. It may also not be as performant, or as faithful an implementation, as a VM.

            2. 3

              The term you’re looking for is Infrastructure-as-a-Service. You wouldn’t be selecting and maintaining your own operating system as part of PaaS. PaaS is about deploying an application; Heroku is one example of PaaS.

              1. 1

                I see, thanks for clarifying!

              2. 2

                A bit like how switching between processes needs no particular support from each and every binary, since the operating system sets it all up.

                Yes, switching OS. It’s enjoyable when discovering new systems.

              3. 3

                It’s not required to run only complete OS images - unikernels, for instance.

                What virtualization gives you is another level of protection hierarchy. A VM is just a “process”, except that, unlike a thread, it can create memory-protected processes within itself.

                This is maybe possible at the kernel level, but it would require a system call for everything, which offers little flexibility. A VMM lets the code it runs organize itself.
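
                You can see the “VM as a process” idea directly in Linux’s KVM API: the VM and its vCPU are just file descriptors held by an ordinary host process. A minimal sketch (assuming Linux with /dev/kvm on an x86-64 host; error handling omitted):

                ```c
                /* A host process creates a VM, gives it one page of guest
                 * memory holding a single hlt instruction, and runs a vCPU
                 * until the guest halts. */
                #include <fcntl.h>
                #include <linux/kvm.h>
                #include <stdio.h>
                #include <sys/ioctl.h>
                #include <sys/mman.h>

                int main(void)
                {
                    int kvm = open("/dev/kvm", O_RDWR);
                    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

                    /* One page of guest "RAM", mapped into this host process. */
                    unsigned char *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                    mem[0] = 0xf4; /* hlt */

                    struct kvm_userspace_memory_region region = {
                        .slot = 0, .guest_phys_addr = 0x1000,
                        .memory_size = 0x1000, .userspace_addr = (unsigned long)mem,
                    };
                    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

                    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
                    int map_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
                    struct kvm_run *run = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                                               MAP_SHARED, vcpu, 0);

                    /* Point the vCPU at our code: real mode, cs:ip = 0:0x1000. */
                    struct kvm_sregs sregs;
                    ioctl(vcpu, KVM_GET_SREGS, &sregs);
                    sregs.cs.base = 0;
                    sregs.cs.selector = 0;
                    ioctl(vcpu, KVM_SET_SREGS, &sregs);
                    struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
                    ioctl(vcpu, KVM_SET_REGS, &regs);

                    ioctl(vcpu, KVM_RUN, 0); /* enter the guest */
                    printf("exit reason %d (KVM_EXIT_HLT is %d)\n",
                           run->exit_reason, KVM_EXIT_HLT);
                    return 0;
                }
                ```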

                1. 1

                  I had never noticed that there is no generic way to nest memory-protected execution environments besides VMs.

                2. 1

                  Maybe paravirtualization is the first step in a process of merging the host OS with the guest OS.