Reading this clarified some notions for me, and made me wonder: is it really virtualization that we need?
I've come to think that the large demand for virtual machines is really a demand for a stable, known, dedicated environment whose resources can evolve, rather than a demand for hardware emulating hardware.
Instead of solving this by adding virtualization, maybe there are other ways of managing hardware that would allow resources to be allocated arbitrarily to some reproducible environment, without running another instance of an operating system.
For instance, distributed file systems make it possible to give an "account" an arbitrary amount of storage. That is indirection rather than virtualization.
Does anyone know of work toward alternative ways of managing hardware resources, and how it relates to virtualization?
Containerisation is probably what you’re talking about.
Still one kernel, one set of device drivers, but individual environments for software to run in.
LXC/LXD provides this as “System containers” - each container operates like a lightweight VM (including an init).
Docker/etc provides this as “app containers” - each container is designed to run a single process/application.
Also interesting in this context is gVisor, which further sandboxes containers by implementing Linux system calls in user space, acting as a guest kernel.
Somewhat related are rump kernels: kernels (or kernel components, like drivers) running in user space.
I believe Sun Microsystems’ Solaris zones were one of the first examples of this.
Of course it was first invented by IBM as WPARs.
Apparently WPARs were only introduced with AIX 6.1, or were you referring to something else? Solaris zones were, of course, extensively based upon FreeBSD jails.
Hm, maybe I was thinking of LPARs, but they're more like full VMs. Thanks for the clarification.
Yeah, LPARs have been around for a while (they predate AIX I believe) but, like you say, they’re not as lightweight as a zone/jail.
It’s a good point. One advantage of VMs over containers that comes to mind is the ability to run a different OS. Also, as a seller of platform as a service (or whatever it’s called when we rent computers from Linode, DigitalOcean, AWS, etc.), VMs provide a level of sandboxing that is required and can’t be fulfilled by containers.
One advantage of VMs over containers that comes to mind is the ability to run a different OS.
Unless you implement a loader for the binary format and its system calls.
(Of course, whether you’d call this running a different OS depends on whether you define the OS to just be the kernel.)
This is very interesting, actually. A binary-format loader might not be that complicated, but implementing the system calls would be a fairly big ask: second only to implementing the OS itself in terms of complexity. Imagine running Windows binaries in the user space of FreeBSD. It may also not be as performant, or as faithful an implementation, as a VM.
The term you’re looking for is Infrastructure-as-a-Service. You wouldn’t be selecting and maintaining your own operating system as part of PaaS. PaaS is about deploying an application; Heroku is one example of PaaS.
I see, thanks for clarifying!
A bit like how switching between processes needs no particular support from each and every binary; it is set up by the operating system.
Yes, switching OS. It’s enjoyable when discovering new systems.
You’re not required to run only complete OS images; unikernels, for instance.
What virtualization gives you is another level of protection hierarchy. A VM is just a “process”, except it can create memory-protected processes within itself, unlike threads.
This is maybe possible at the kernel level, but it would require a system call for everything, which offers little flexibility. A VMM allows the code it’s running to organize itself.
I had never noticed that there is no generic way to nest memory-protected execution environments besides VMs.
Maybe paravirtualization is the first step in a gradual merging of the host OS with the guest OS.