1. 28

  2. 16

    This allows DirectX usage on Linux. The catch is that it only works for WSL specifically, running in Hyper-V on a Windows host: it forwards DirectX calls to the Windows kernel through paravirtualization.

    1. 2

      I’m pretty sure WSL 2 doesn’t run on Hyper-V anymore.

      Edit: correction, looks like it sort of does. So, never mind.

      1. 2

        The article says:

        The projected abstraction of the GPU follows closely the WDDM GPU abstraction model, allowing API and drivers built against that abstraction to be easily ported for use in a Linux environment.

        I think that means that it would be easier for GPU manufacturers to port their existing Windows drivers to support this new userspace API in Linux, than to port their existing Windows drivers to support existing Linux userspace APIs like the Direct Rendering Infrastructure. Linux already has a slight splintering of GPU APIs, with most drivers built on DRI, Intel drivers built on DRI with a different memory allocation scheme (DRI-GEM?), and NVidia doing completely their own thing.

        If Microsoft contributes a Mesa backend built on this WDDM API, and GPU manufacturers all publish drivers for it (unlike the bickering around Linux APIs, they all support Windows APIs happily enough), that would go a long way towards making graphics on Linux Just Work.

        On the other hand, it would mean ceding control of a huge part of the Linux user experience to closed-source, proprietary companies that traditionally have not had users’ long-term interests at heart.

      2. 13

        Embrace, extend, and extinguish, still the same company.

        1. 5

          Microsoft loves Linux so much they made DirectX run on Windows.

          1. 2

            GPU-PV is now a foundational part of Windows and is used in scenarios like Windows Defender Application Guard, the Windows Sandbox

            Last time I tested GPU support in Windows Sandbox, it was not a great experience. I could load up my GPU with a WebGL demo, so no problem with utilization, but the frame pacing / sync / etc in their “remote”-desktop-ish window thing was really bad.

            We have recently announced work on mapping layers that will bring hardware acceleration for OpenCL and OpenGL on top of DX12. We will be using these layers to provide hardware accelerated OpenGL and OpenCL to WSL through the Mesa library.

            Now that Mesa work makes sense even more. But kinda unfortunate that this relies on a proprietary lib in the Linux VM! Could’ve done the GL-to-D3D translation on the host and exposed virtio gpu (virgl) to the VM :P

            1. 1

              I think all these improvements that MS is making to WSL are a bit pointless because of how slow their filesystem is. Any development I do, whether on the front end or the back end, is many times slower than on macOS (and probably Linux), to the point where it’s almost unusable, even with a recent SSD.

              So it feels like they should put more effort into fixing NTFS or replacing it with something more modern and performant; without that, it doesn’t matter how good Windows is becoming for development.

              1. 6

                A couple of things:

                • This is WSL 2, not WSL. WSL 2 is a minimal Linux kernel in a paravirtualised Hyper-V VM, with tight integration between Windows and the host side of the PV interfaces. It has a native Linux filesystem with a block-device back end, which is now pretty fast. It is very slow to access the Windows filesystem (or to access the Linux filesystem from Windows) because it’s currently using 9P over VMBus. I’m hoping they’ll move to FUSE over VMBus now that the FUSE over VirtIO spec is pretty solid.
                • NTFS has very little to do with how slow filesystem access is on WSL 1. That’s largely to do with the fact that the driver stack includes a lot of interposing layers to handle the difference between NT and POSIX semantics.

                To put the slowness in perspective, here are a few informal benchmarks from WSL 1 on my laptop:

                $ dd if=bigfile of=bigfile2 bs=8192
                12870+1 records in
                12870+1 records out
                105436692 bytes (105 MB, 101 MiB) copied, 0.204951 s, 514 MB/s
                $ dd if=bigfile of=bigfile2
                205931+1 records in
                205931+1 records out
                105436692 bytes (105 MB, 101 MiB) copied, 2.4206 s, 43.6 MB/s
                $ dd if=bigfile of=bigfile2 bs=64
                1647448+1 records in
                1647448+1 records out
                105436692 bytes (105 MB, 101 MiB) copied, 17.6317 s, 6.0 MB/s
                

                To be honest, 6MiB/s when doing two system calls for every 64 bytes and 500MiB/s when doing two every 8KiB is fine for anything I use WSL for. Compiling LLVM in WSL is faster than doing a Windows build on the same machine (process creation time is a lot lower for WSL picoprocesses than for Win32 processes). I’m curious as to the workloads where it’s ‘almost unusable’ for you.
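                Rough arithmetic on the 64-byte run above (a quick sketch using only the byte count and elapsed time from the dd output, and assuming syscall overhead dominates at that block size) puts the cost of each read+write pair at roughly 10.7 µs:

```python
# Back-of-envelope from the dd output above: implied time per
# read+write syscall pair in the bs=64 run.
bytes_copied = 105_436_692
elapsed = 17.6317                    # seconds, from the 64-byte run
pairs = bytes_copied / 64            # ~1.65 million read+write pairs
per_pair_us = elapsed / pairs * 1e6  # microseconds per pair
print(f"{per_pair_us:.1f} us per read+write pair")  # ~10.7
```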

                1. 6

                  Hi, I’m a former NTFS developer who also worked briefly on the code that became WSL 1.

                  The problem with WSL 1’s file system performance isn’t the data operations that you’re showing here, but the namespace operations. The canonical example is something like “git status” which ends up visiting all of the files in a hierarchy despite performing no real data operations on them. Part of the reason, which you pointed out, is the layers between the application and the file system. These layers are typically far more interested in things like opens and closes than data operations. Each filter is inspecting each name to see if it has work to do, which is expensive. Contrast this with Linux, where there’s effectively a VFS cache that’s very close to the application, so metadata lookups end up going through a syscall and navigating a tree, but that’s about it.

                  This effect is visible on a lot of Linux software running on Windows, because it’s optimized for a different environment. It applies whether the program is running through WSL or is shipped as a native Win32 binary. Taking the above example, note that Git for Windows ended up with a usermode file system cache to mitigate the effect of slow namespace accesses.

                  A huge part of the reason that such caching is effective is that Windows is designed to return a lot of metadata during directory enumeration that traditional UNIX did not. Software designed for Win32 can call FindFirstFile and have file sizes and timestamps, for instance. Traditional UNIX required readdir + stat, which implies visiting each file. That’s about the slowest possible thing to do on a Windows file system.
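                  The two access patterns are easy to see from Python (a sketch, not anything from WSL itself; os.scandir surfaces the metadata the OS hands back while enumerating, so on Windows DirEntry.stat() can usually avoid touching each file again, while the listdir + stat version is the readdir + stat pattern described above):

```python
import os

def sizes_scandir(root):
    # One enumeration per directory; on Windows, DirEntry.stat() is
    # typically served from the enumeration data (FindFirstFile-style),
    # so no per-file metadata lookup is needed.
    return {e.name: e.stat().st_size for e in os.scandir(root) if e.is_file()}

def sizes_listdir(root):
    # readdir + stat: an extra metadata lookup for every single name --
    # the slow path on a Windows file system.
    return {name: os.stat(os.path.join(root, name)).st_size
            for name in os.listdir(root)
            if os.path.isfile(os.path.join(root, name))}
```

                  Both return the same mapping; the difference is how many times the file system is asked about each name.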

                  One other thing that’s subtle but frustrating is Windows (and Mac) file systems are optimized for case insensitive operations. That means NTFS directories are sorted case insensitively. When a case sensitive operation occurs, it has to convert the name into a case-neutral form via a giant lookup table, search the directory, then on match re-compare for a case sensitive match, potentially traversing the tree left and right along case insensitive matches. A system that is optimized for case sensitive behavior can skip the relatively expensive case neutral conversion and the more expensive tree navigation that results from it.
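                  A toy model of that lookup (pure illustration; upcase() here stands in for NTFS’s case-neutral conversion table, and a dict stands in for the case-insensitively sorted directory):

```python
def upcase(name):
    # Stand-in for the giant case-neutral lookup table
    return name.upper()

# Directory index keyed case-insensitively: case-neutral name -> stored names
index = {}
for stored in ["Makefile", "makefile", "README"]:
    index.setdefault(upcase(stored), []).append(stored)

def lookup_case_sensitive(name):
    # Upcase the probe, fetch every case-insensitive match, then
    # re-compare each candidate exactly -- the extra work described above.
    return [s for s in index.get(upcase(name), []) if s == name]

print(lookup_case_sensitive("makefile"))  # ['makefile'], after scanning both candidates
```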

                  1. 1

                    Just to be clear, I wasn’t talking about WSL. WSL is beyond slow for web development (Node and PHP in particular), so I work from Command Prompt, but even that, with direct access to NTFS, is very slow.

                2. 1

                  Skype’s child Teams distributed as .deb and .rpm, DirectX drivers powering the Linux subsystem… It looks like Microsoft is starting to give its system a full Linux-capable API.

                  I suppose the aim is to make use of a widely available API for its existing system. I wonder to what extent Windows will blend with Linux internals… Will WSL accept Linux drivers at some point?

                  1. 2

                    This will have the good side effect that people can now target the Linux API in order to target the Windows operating system.