1. 22
    1. 4

      So if I’m reading this article correctly, the point of this is to remove more potential sources of nondeterminism from Nix. Have there been any demonstrated benefits so far, or is this still all theoretical/robustness/WIP?

      1. 14

        It’s mostly about running nix-built OpenGL/CUDA binaries on a foreign distribution (Ubuntu, Fedora, Debian…). You need a way to inject some sort of GPU driver into the Nix closure; without that, a nix-built OpenGL program won’t run on a foreign distribution.

        NixGLHost is an alternative* approach to do this.

        * Alternative to NixGL. NixGLHost is in a very very alpha stage.

        1. 3

          One of my gripes with nixgl is that I have to run all my nix applications via nixgl. If I run a non-nix binary with nixgl it usually doesn’t go well, so I can’t run my whole user session with nixgl and have it propagate to child processes. Is there anything, for example a NIX_LD_PRELOAD, that could be set system-wide but would be ignored by non-nix binaries?

          1. 2

            To be honest, that’s not a use case I had in mind when exploring this problem space. I’d need more than five minutes to think it through properly, so take what I’m about to say with a grain of salt.

            My gut instinct is that we probably don’t want to globally mix the GPU Nix closure with the host one. An easy non-solution would be to sidestep the problem altogether by provisioning the Nix binaries through a Nix shell. Inside that shell, you could safely discard the host library paths and inject the nix-specific GPU libraries directly through LD_LIBRARY_PATH (via nixGL or nix-gl-host).
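            A minimal sketch of the Nix-shell approach, assuming the nix-community nixGL flake and its nixGLIntel wrapper (the package and attribute names here are illustrative, adapt them to your GPU):

            ```shell
            # Enter a shell containing both the GL application and the nixGL wrapper,
            # then run the app through the wrapper so it injects the matching GPU
            # libraries via LD_LIBRARY_PATH, scoped to this process only.
            nix shell nixpkgs#mesa-demos github:nix-community/nixGL#nixGLIntel \
              --command nixGLIntel glxgears
            ```

            The key point is that the LD_LIBRARY_PATH mangling stays confined to the wrapped process tree instead of leaking into your whole session.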

            Now, if you think about it more, the use case you’re describing seems valid UX-wise. I’m not sure what the best way to tackle it would be. The main danger is getting your libraries mixed up. NIX_LD_PRELOAD could be a nice trick, but it’s a shotgun approach: you’d end up preloading your shim for each and every Nix program, regardless of whether it depends on OpenGL or not.

            As long as you don’t plan to use CUDA, I think the best approach would be to inject the GPU DSOs through libglvnd. Everything you need to point the loader at your EGL DSOs is already there via the __EGL_VENDOR_LIBRARY_DIRS environment variable. There’s no handy way to do that for GLX, but I wrote a small patch you could reuse to do so.
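            For EGL, a sketch of how that redirection works (a config fragment; the directory and the vendor JSON contents are illustrative, the ICD JSON format itself is libglvnd’s):

            ```shell
            # libglvnd's EGL loader discovers vendor drivers via small ICD JSON files.
            # Write one that points at the host driver's EGL DSO...
            mkdir -p "$HOME/.local/share/glvnd/egl_vendor.d"
            cat > "$HOME/.local/share/glvnd/egl_vendor.d/10_host.json" <<'EOF'
            {
                "file_format_version" : "1.0.0",
                "ICD" : { "library_path" : "libEGL_nvidia.so.0" }
            }
            EOF

            # ...then tell libglvnd to scan that directory instead of the defaults,
            # so a nix-built EGL program picks up the host GPU driver.
            export __EGL_VENDOR_LIBRARY_DIRS="$HOME/.local/share/glvnd/egl_vendor.d"
            ```

            Since this only steers libglvnd’s vendor discovery, it doesn’t touch LD_LIBRARY_PATH at all, which is what makes it safer to set session-wide.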

            I’ll try to think more about it. Cool use case, thanks for the feedback.