1. 19
  1.  

  2. 1

    Could someone with more experience with tools like vmm explain the approximate overhead of running a server inside a virtual machine like this? Is the cost purely RAM, or is there some percentage performance drop for IO or CPU usage in the virtualized machines?

    1. 10

      Depends. In general, your process in the guest does a syscall to read some file. That triggers an “exit” back to the VM monitor, which in this case just sends you into the guest kernel. That runs some kernel code, sets up the IO, and does a hypercall to the host. The host kernel part, vmm, sends the request to the userland daemon, vmd, which reads from the disk image file; that read is another syscall back into the host kernel, which queues IO for the disk controller, etc. So it sounds like a billion steps, but really it’s only about twice as many as in the non-virtualized case. And it’s pretty optimized, even at the hardware level. The fast path is supposed to be fast. And all of this overhead is in proportion to how long it takes to actually read the data from disk, so ideally it stays fairly low unless you’re trying to blast gigabytes of data constantly. (There’s a rough C sketch of the userland half of this path below the thread.)

      Currently, though, vmd isn’t great at disk IO because it’s single threaded and the IO is done synchronously, which I looked at changing but didn’t complete. But computers are pretty darn fast these days, so there’s often a lot one can get away with. (The second sketch below shows the general idea of handing those reads off to a worker thread.)

    2. 1

      What’s a typical use case for a setup like this?
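
Below is a minimal C sketch of the userland half of the disk-read path described in the answer above: a guest request has already caused a vmexit and been handed to the monitor process, which pread()s from the raw image file and copies the data into guest memory. The structure and function names (`struct blk_req`, `handle_blk_read`) are invented for illustration; this is not vmd's actual code, and a real monitor would also post a completion status back to the guest's virtio queue.

```c
#include <sys/types.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define SECTOR_SIZE 512

struct blk_req {
    uint64_t  sector;     /* starting sector requested by the guest */
    size_t    nsectors;   /* number of sectors to read */
    uint8_t  *guest_buf;  /* guest memory, assumed already mapped */
};

/*
 * Service one guest read against a raw disk image.  The pread() here is
 * the "another syscall back into the host kernel" step: the host kernel
 * queues IO for the real disk controller on our behalf.
 */
static int
handle_blk_read(int img_fd, struct blk_req *req)
{
    size_t len = req->nsectors * SECTOR_SIZE;
    off_t off = (off_t)req->sector * SECTOR_SIZE;
    uint8_t tmp[64 * 1024];
    size_t done = 0;

    while (done < len) {
        size_t chunk = len - done > sizeof(tmp) ? sizeof(tmp) : len - done;
        ssize_t n = pread(img_fd, tmp, chunk, off + done);

        if (n <= 0)
            return (-1);   /* EOF or error; report failure to the guest */
        memcpy(req->guest_buf + done, tmp, (size_t)n);
        done += (size_t)n;
    }
    return (0);
}
```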
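The follow-up about synchronous, single-threaded IO suggests the obvious alternative: have the event loop hand each request to a worker thread so the monitor isn't stalled for the duration of the pread(). This is again purely a sketch under the same invented names (it reuses `struct blk_req` and `handle_blk_read` from the previous example), not how vmd is actually structured.

```c
#include <pthread.h>
#include <stdlib.h>

struct blk_job {
    int             img_fd;
    struct blk_req  req;
    void          (*done)(struct blk_req *, int status);  /* completion callback */
};

static void *
blk_worker(void *arg)
{
    struct blk_job *job = arg;
    int status = handle_blk_read(job->img_fd, &job->req);

    /* Signal the main event loop (e.g. over a pipe) that the IO finished. */
    job->done(&job->req, status);
    free(job);
    return (NULL);
}

/*
 * Queue a read without blocking the caller.  A fire-and-forget thread per
 * request keeps the sketch short; a real design would use a small pool.
 */
static int
submit_blk_read(struct blk_job *job)
{
    pthread_t t;

    if (pthread_create(&t, NULL, blk_worker, job) != 0)
        return (-1);
    pthread_detach(t);
    return (0);
}
```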