1.

Abstract: “Commonly used user space network drivers such as DPDK or Snabb currently have effectively full access to main memory via the unrestricted Direct Memory Access (DMA) capabilities of the PCI Express (PCIe) device they are controlling. This can be a security issue, as the driver can use the PCIe device’s DMA access to read and/or write main memory. But malicious hardware, or hardware with malicious or corrupted firmware, can be an even bigger security risk, as most devices and their firmware are closed source. Attacks with malicious NICs have been demonstrated as proofs of concept in the past. All modern CPUs feature an I/O Memory Management Unit (IOMMU) as part of their virtualization capabilities: it is required to pass PCIe devices through to virtual machines and is currently used almost exclusively for that. But it can also be used to restrict the memory access of DMA devices, thus reducing the risk posed by malicious or simply badly implemented devices and code.

In this thesis, support for using the IOMMU via the vfio-pci driver from the Linux kernel was implemented in C and Rust for the user space network driver ixy, and the IOMMU and its impact on the drivers were investigated. In the course of this, a model of the IOMMU on the investigated servers was developed to enable its use in further work and other drivers, as well as to minimize the performance impact of using it. Relevant specifications of the IOMMU that are not widely known or documented, such as the number of page entries in the IOMMU’s IOTLB or its insufficiently documented capability of using huge pages for memory management, were found and used. Additionally, the C library libixy-vfio was developed to make it easy to use the IOMMU in any driver. When properly implemented, i.e., using 2 MiB hugepages and putting all NICs in the same IOMMU container, using the IOMMU has no significant impact on the performance of the ixy driver in C or Rust and does not introduce any latency for most packets, while effectively isolating the NICs and restricting their access to memory. Since the performance impact is negligible and the security risk when not using the IOMMU is high, using it should always be a priority for researchers and developers writing all kinds of drivers. By using the library libixy-vfio or following the patches for ixy or ixy.rs, implementing usage of the IOMMU is simple, safe and secure for user space network drivers.”

  1.  

  2.

    Many features that used to be enterprise-only are starting to trickle down. Since CPU clock speeds are basically plateauing, consumer computers are essentially servers now. They brought up a good point: VFIO and the IOMMU led to some cool new tech like looking-glass, which allows people to use VMs without all the latency/virtualization issues. And this trend will likely continue, with people just skipping the virtualization layer and going from userspace directly to hardware.

    1.

      It’s true. Yet there’s also a counter-trend where enterprise features add extra risk or problems that people might want to avoid. Intel’s Management Engine was one example. There’s a market for non-obvious backdoors in CPUs. All these complex features in CPUs are also leading to side channels and the like. Some will go for less complex CPUs or side-channel-free coprocessors.

      Although I mostly agree with what you said, I think there will also be a steady, smaller flow away from enterprise-style features. Some users will want different tradeoffs.

    2.

      I wonder how this work holds up given the buggy/insecure IOMMU implementations out there.

      1.

        The assumption behind all X-based protections is that X works as advertised. Same for hardware. One of the first things in any hardware paper enumerating the TCB is the assumption that the hardware works. That’s also why high-assurance CompSci was working on verified CPUs, with one academic design (VAMP, DLX-style) and one commercial design (AAMP7G, stack-oriented) delivered.