1. 43
  1. 19

    I think there’s a bit of a cultural clash between “hobbyist” and “non-hobbyist” development (I don’t want to say “open-source” and “closed-source” because there are plenty of examples of this clash in either category).

    When I write a game, or some other cool program, and I don’t write it just for my own amusement, I want as many people out there to be able to use it – either because I’m selling it (so the more users, the more $$$), or because I really think it’s cool and useful and I want as many people to have access to it.

    In this context, an environment like the Steam runtime, or Win32 under Proton, is not the lowest common denominator. If what you want is as many people as possible to be able to have fun with your game, then it’s literally the best denominator of any kind. I want people to have fun with the things I make, not fiddle with Wayland and NVidia drivers and sound latency options and compositor settings and whatever until they can play my game, probably in windowed mode only because full-screen mode crashes or screws up their desktop. That super elegant and secure setup may be better in some objective technical terms (although, more often than not, it’s better in terms of “philosophy”), but if it’s functionally less capable, then I’m going to stay away from it. I understand why the author calls it the lowest common denominator, and I agree with this assessment on some level; what I want to underscore is that it’s not the only level, and for many people it’s not an important one, either.

    It’s a difficult frame of mind to imagine because a lot of us in the open source world are, to some degree, platform zealots, so we’re used to advancing the state of our favourite platform, or at least probing its limits, being a tiny part of anything we do with it. But that only works for hobby projects. I totally get it, I love it, almost everything I’ve written in the last 15 years is Linux-related somehow, but that’s just not how all programming projects operate.

    1. 1

      When I write a game, or some other cool program, and I don’t write it just for my own amusement, I want as many people out there to be able to use it – either because I’m selling it (so the more users, the more $$$), or because I really think it’s cool and useful and I want as many people to have access to it.

      Yeah, this is exactly why developer distribution happens. Distributions deciding to package stuff feels like a chicken-and-egg problem to people who aren’t used to the distribution process.

      In this context, an environment like the Steam runtime, or Win32 under Proton, is not the lowest common denominator. If what you want is as many people as possible to be able to have fun with your game, then it’s literally the best denominator of any kind. I want people to have fun with the things I make, not fiddle with Wayland and NVidia drivers and sound latency options and compositor settings and whatever until they can play my game, probably in windowed mode only because full-screen mode crashes or screws up their desktop. That super elegant and secure setup may be better in some objective technical terms (although, more often than not, it’s better in terms of “philosophy”), but if it’s functionally less capable, then I’m going to stay away from it. I understand why the author calls it the lowest common denominator, and I agree with this assessment on some level; what I want to underscore is that it’s not the only level, and for many people it’s not an important one, either.

      I mean, it’s less the Wayland security architecture (macOS went in the same direction, though more gradually), more the Linux desktop treadmill. But that’s a deep story worth another post…

      It’s a difficult frame of mind to imagine because a lot of us in the open source world are, to some degree, platform zealots, so we’re used to advancing the state of our favourite platform, or at least probing its limits, being a tiny part of anything we do with it. But that only works for hobby projects. I totally get it, I love it, almost everything I’ve written in the last 15 years is Linux-related somehow, but that’s just not how all programming projects operate.

      I mean, I use it, but I don’t consider it my favourite platform. Of course, I’m a lot more jaded about Linux than most Linux enthusiasts…

      1. 3

        I mean, I use it, but I don’t consider it my favourite platform. Of course, I’m a lot more jaded about Linux than most Linux enthusiasts…

        I know what you mean, both here and above, when you mentioned the Linux desktop treadmill. I wasn’t really poking at Wayland, either; it’s just that “secure” is a little easier to assess objectively than “elegant”, so it seemed like a good metric. Linux is not really my favourite platform, either; I guess it’s, at best, my… least un-favourite? But things were a lot different when I was younger :-P.

        1. 10

          I think everyone had a teenage Linux zealot phase.

          1. 4

            Not those of us who were already old when we first encountered Slackware …

            1. 2

              Ah, so you had an adult Linux zealot phase? :-)

              1. 2

                It wasn’t nearly as useful as Irix at the time so I sort of ignored it. More fool me.

                1. 1

                  I was a Linux zealot in my mid-to-late 20s…

            2. 3

              But things were a lot different when I was younger :-P.

              I’m not even that old, but this feels very real to me. When I was younger, as in, in school, I cared a lot more about “principles” and “philosophy”. Since then, I’ve done nothing but slide down an “I have crap to get done” spiral. I dropped Linux for macOS, Vim for various IDEs, etc. Am I upset? Honestly, not really! It’s just a different mindset.

              1. 1

                When I was younger, as in, in school, I cared a lot more about “principles” and “philosophy”

                At the risk of going off-topic a bit, I’m genuinely curious: what makes these things attractive? I grew up with no STEM people around me, so I had a brief period in my teens when I went on Freenode and parroted what everyone said about minimalist editors and using Linux because that’s what I thought it meant to be a real programmer. The moment I realized that many, many programmers get by with whatever tool they were productive in, I dropped my foolishness like a hot rock.

                To date, principles and philosophy have always been the least interesting aspect of computing to me. I’ve always loved the ability to just open up a Linux/BSD OS and start tinkering (and start breaking things 😅), which is something that OSes like Windows encourage to a much smaller degree, but the philosophical aspect has always seemed incredibly arbitrary and closer to the silliness my friends had with sports teams than anything I found productive to tinker with. Licenses and such have always, to me, been a necessary evil, never one I relish thinking about.

                One of my very favorite aspects of academia when I was there, the upside of publish-or-perish, was the deep focus everyone had on getting to the bottom of their subject, whether that was by pen and paper, Emacs and Org-Mode, or Sublime Text; it was so much more refreshingly rigorous than the silliness around editor wars or programming philosophy. Getting work done has consistently been my favorite part of computing. The ability to combine abstract thinking with low iteration time has always driven my joy in software.

                So I’m curious, what’s the appeal? The staying power of editor wars and philosophy wars has always befuddled me, as these have always been the least interesting parts of computing to me.

                1. 4

                  So I’m curious, what’s the appeal? The staying power of editor wars and philosophy wars has always befuddled me, as these have always been the least interesting parts of computing to me.

                  From my perspective, the exciting thing about computers is that they’re machines that can be made to do an unbounded number of different things. One of the things that you can do with them is influence the shape of society, and so I would like to use them to influence society into a shape that I want to live in. That means avoiding centralisation of control. Where you need centralisation for efficiency and economies of scale, decouple governance from hosting. Free Software fits into this because you invariably give some measure of control to the owners of the components that you depend on, and F/OSS lets you (at least in theory) separate ownership from control.

                  1. 3

                    For me, it was politics. I’ve always been interested in politics, in fact, I was a poli sci major, so it was a natural extension. I still care about that stuff in the abstract, but I also have work to get done, and I just don’t have the mental and emotional energy to care about all of it, all the time. I think the stakes also seem a lot lower to me now than they once did. My “philosophy and principles” energy now mostly goes to things like lowering my carbon footprint and trying not to patronize companies that abuse human rights.

                    1. 1

                      Thanks, that makes a lot of sense.

          2. 15

            I think the headline is somewhat misleading. The problem isn’t the lack of a stable Linux userland ABI. Linux has a stable system call interface. Glibc has a stable ABI with aggressive symbol versioning. The problem is that open source desktop environments (which are typically not specific to Linux, though most of the development tends to be there) don’t provide stable ABIs.
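
            To make the first point concrete, here’s a minimal sketch in D (DMD-style inline asm, details from memory, names mine) that bypasses libc entirely; the resulting binary depends only on the kernel’s x86-64 syscall ABI, which Linux goes to great lengths not to break:

            void main() {
                string msg = "talking straight to the kernel\n";
                auto ptr = msg.ptr;
                auto len = msg.length;
                // x86-64 Linux syscall convention: number in RAX,
                // arguments in RDI, RSI, RDX.
                asm {
                    mov RAX, 1;   // SYS_write
                    mov RDI, 2;   // fd 2 = stderr
                    mov RSI, ptr;
                    mov RDX, len;
                    syscall;
                }
            }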

            X11 has provided a stable core protocol for decades, but the core protocol isn’t really sufficient for anything. It took a long time for the Damage, Composite, and Render extensions to be supported. The DRI / Mesa driver stack is large and can cause problems on updates. KDE and GNOME core frameworks change their APIs quite significantly every few years.

            Generally, the people who make money from Linux and *BSD do so in the server space. Stable ABIs are common there because paid engineers lose time and their employers lose money every time that they change.

            1. 6

              I think what Linux has as a stable ABI (system calls, maybe glibc) is adequate for embedded and server contexts - Docker is arguably close to being the server stable ABI anyways.
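
              As a sketch of what “Docker as the stable ABI” looks like in practice (the binary name here is made up for illustration): the image freezes the entire userland, so the only contract with the host is the kernel’s syscall interface.

              # Hypothetical example; 'myserver' is a made-up binary.
              # Everything above the syscall layer ships inside the image.
              FROM debian:12
              COPY myserver /opt/myserver/myserver
              CMD ["/opt/myserver/myserver"]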

              I think your second paragraph is where I’m at for desktop - the “base platform” on Windows includes a stable ABI for not just the “here’s how to put a frame in a window” that libX11/a Wayland client gives you, but basic GUI constructs, 3D rendering, audio, shell integration like file dialogs, etc. In the fd.o world, this may not be consistent or stable. That’s, again, a story for another day though…

              Generally, the people who make money from Linux and *BSD do so in the server space. Stable ABIs are common there because paid engineers lose time and their employers lose money every time that they change.

              I think a lot about how proprietary software can be more practically extensible than something open source due to providing stable interfaces rather than “have you kept your patches up to date :)”.

              1. 7

                I can’t say much about the server space, but on the embedded end, up until a year or so ago, when I last used it, it was a very hot dumpster fire. There are consulting companies out there with teams of hundreds of people whose jobs consist of literally nothing other than matching various package versions, cherry-picking patches and diffs to assemble Frankenstein versions of the Linux kernel, drivers, and major userspace pieces (PulseAudio, various AV codecs, various compositors) in order to match drivers and/or kernel changes in vendor-supplied BSPs. That’s why so many embedded Linux gadgets run five-year-old, unpatched kernels and userspace components.

                With some exceptions, at least on price-sensitive hardware (not only consumer, but also e.g. automotive hardware that isn’t safety-critical), as soon as you’re out of kernel land, you’re exactly as screwed as you are on a desktop. That includes a huge range of hardware, from IoT and industrial machinery to medical and infotainment systems.

                1. 2

                  Some of us working in embedded are trying to change this. Debian works very well as a base, and then you just also need to pick a tool to build the images. Debian is very well tested, integrated, and maintainable, much more so than some ragtag BSP with Linux 3.14 (which is actually what the vendor supplied for a board I’m working with now).
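
                  For the curious, the rough shape of that flow (a sketch only; exact tools and package names vary by project, and cross-arch builds need the usual debootstrap --foreign/qemu second stage):

                  # Build a minimal Debian rootfs for the target.
                  sudo debootstrap --arch=arm64 bookworm ./rootfs http://deb.debian.org/debian
                  # Add whatever the product needs inside the chroot.
                  sudo chroot ./rootfs apt-get install -y openssh-server
                  # Then pack ./rootfs into a flashable image with the
                  # tool of your choice (genimage, mkosi, plain dd, ...).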

                  1. 4

                    I know it does, and it’s actually what I used for my own tinkering for a while. That being said, in 10+ years of this stuff, I don’t think I’ve ever seen it used in commercial projects. At this point, barring occasional RPi-based devices, it’s Yocto virtually everywhere, with buildroot now and then.

                    Man, just writing Yocto makes me nauseous. brb I gotta go puke :(.

                    1. 2

                      I can’t upvote this enough. Yocto makes me want to curl up into a ball.

                      1. 1

                        I feel you, but Yocto and Buildroot are so much better than what we used to have. I used to customize Fedora to do the same thing as Yocto in a more half-assed way back when it was called “Fedora Core” and the version numbers were in the single digits. And that was a massive upgrade over the crap we used to suffer through to ship some vertical stuff atop a Zaurus PDA in 2002 or 2003. (Thankfully, we did not try to ship it atop a general purpose Zaurus; we bought the PDAs and sold them configured with our images and a custom CF card as part of the price of our services.)

                        But there are things worse than Yocto, and I’ve had to live with more than one of them. Just pointing that out makes me nauseous…

                    2. 1

                      This creates some perverse economic incentives. This kind of thing is far less of a problem for the folks building embedded systems using FreeBSD. As a result, these companies need to hire fewer engineers. As a result, there are fewer experienced FreeBSD appliance developers on the job market than experienced Linux appliance developers. As a result, you’re better off choosing Linux for your appliance because you can more easily hire experienced people to work on it. The consulting companies that I know of that support both are very happy when people choose Linux because it means more work for them.

                      1. 2

                        Yeah, this is one of the many case studies that have led me to the conclusion that “the next big thing” in any field is usually not the best thing available, but the mediocre solution that does not have any immediately obvious shortcomings. These are the technologies that proliferate: they are good and efficient enough that they can solve actual problems, especially in early development stages, but they are tedious enough that they pull in so many people that they build momentum. Or, to put it another way, they are the technologies that self-inflict the most problems while still being good enough at solving them to save face.

                        1. 2

                          I think there’s a related trend: technologies are often built to solve the shortcomings of the things in the layer below. Docker addresses shortcomings that are fairly specific to Linux distros. GitHub is successful in large part because git has such a terrible UI that there’s a big incentive to use extra tooling on top of it. Disruptive technologies are less likely to emerge on top of good platforms.

                      2. 1

                        Realistically, I don’t think the fd.o world is common in embedded anymore. Android (and other things) have been increasingly displacing it. I loved my N900, but….

                        1. 1

                          AFAIK there’s still a very non-trivial number of low-cost(-ish) devices running QML apps, or a web app, on top of Linux, for a whole variety of reasons that I really don’t understand in detail, because I haven’t written Android code in, like, forever, and things that have significantly more I/O than a handful of LEDs weren’t really my thing. Probably fewer and fewer, though.

                          Ironically enough, these things solved the stable platform problem the same way: they all ran on top of Qt, or a time-frozen JS stack on top of WebKit or whatever. But both of these have a lot of non-trivial dependencies. The simple act of assembling all of them into a working image is a difficult enough task that it’s a full-time job for so many people that it’s not even funny.

                  2. 8

                    I take a bit of offense at the article contrasting Wine with “native” applications. Let me ask you: what makes Wine any less native than GTK or Qt? Both are actually just one step above the low-level foundations. You* don’t actually call XCreateWindow; you call QWindow or gtk_window or SDL_whatever or Wine’s CreateWindow, each of which calls XCreateWindow.

                    You often don’t even call socket() and read(). Nah, there’s QTcpSocket and g_socket_new and SDL_net. So why is Winsock any different?

                    I get that reading the .exe file format instead of the ELF file format feels different. But… is that any different than a.out vs ELF? It is still running the same machine code after doing the same kind of dynamic linking etc. (yes, I know it isn’t identical dynamic linking, but is it any less native than ld.so?)

                    I just tried something too:

                    import core.sys.windows.windows;
                    
                    string hello = "hello\n";
                    
                    void main() {
                        auto szptr = hello.ptr;
                        // Linux native syscall (32-bit int 0x80 convention)
                        asm {
                            mov EAX, 4;     // SYS_write
                            mov EBX, 2;     // fd 2 = stderr
                            mov ECX, szptr; // buffer
                            mov EDX, 6;     // length of "hello\n"
                            int 0x80;
                        }
                        // Windows API call
                        MessageBoxA(null, "omg", "i wrote to stderr", 0);
                    }
                    

                    Compiled for the Windows target and ran in Wine on Linux. Guess what happened? It worked. Not really a surprise - the machine code is still running on Linux! The Windows API is just another toolkit library in a Linux application, so it’s no surprise that you can, in fact, mix and match if you can rig up the build. Wine is not an emulator. Wine is a native environment.

                    (BTW, I used 32-bit just because I remember the 32-bit syscall asm off the top of my head, but I’m sure it works in 64-bit too. And if you can do this, you can do other pure Linux things too.)

                    * As the author of an independent X toolkit, I do very much call XCreateWindow, and it annoys me when people presume people like me don’t exist. But let’s be real, we’re very much the minority.
                    1. 3

                      Until very recently, the mismatch in locking primitives between Windows and Linux had very real compatibility and performance impacts for programs relying on Wine that native programs weren’t subject to.

                      “Native” does have a specific meaning here that also applies. It is tied to the origin of a program, similar to how the term is used for plant species. A Linux-native program began life as a program intended to be run on Linux. Your argument seems akin to saying an invasive plant species can’t be considered non-native if it thrives in its new ecosystem.

                    2. 4

                      A couple of years ago, I wanted to try out some old web browsers just for fun. I quickly discovered that absolutely no binaries from 5+ years ago run on current Linux. But I was easily able to run old browsers dating to the 1990s under WINE.

                      There is a solution out there: the “enterprise” distributions like RHEL and SuSE provide a stable ABI so you can reliably run other enterprise software like CAD, graphics, databases, ERP, storage, etc. I used to work on one of these enterprise products and we simply tested and documented on those specific distributions, and the vendors took great pains to make sure they were binary compatible for around 10 years at a time. Of course requiring consumers to buy a $200 RHEL license to use a consumer product is a nonstarter.

                      1. 4

                        There is a solution out there: the “enterprise” distributions like RHEL and SuSE provide a stable ABI

                        But they do it by not updating packages. This means that it’s painful if you need to build something using the latest C++ standard, for example. They also don’t do security backports for everything, so you may find that you’re running vulnerable bits of software. The big projects have their own LTS releases that things like RHEL can use, or have Red Hat employees working for them who understand the codebases well enough to do the backports, but various smaller things don’t and, worse, don’t issue security advisories, so Red Hat may not know that something is security-critical. This is also a problem for the Linux kernel, where security bugs aren’t always correctly classified, and so you may miss a backport to the LTS branch because the person who fixed it in Linus’ branch didn’t realise that it was exploitable.

                        1. 1

                          In my experience, even “the latest C++ standard” is a really generous way to put it. About a year and a half ago I ran into all sorts of weird corner cases related to C++17 support on RHEL builds. It’s insane, and it’s a cultural bridge that’s really hard to cross with people who mostly have a FOSS-based technical background, and for whom “backwards compatibility” is synonymous with “ancient library versions”.

                      2. 2

                        As a Linux user since ~2005, I can say this post 100% matches my experience (and desires) - from my POV, if I can get a single exe file that runs on WINE, I’m much happier doing that than relying on an app that relies on all the dynamically linked libs and in particular needs whatever specific flavor of GTK/Qt/wxWidgets/Tk/etc. Fedora is shipping for the next 6 months - or is only distributed as a .deb and won’t work on a non-Debian system, etc.

                        After 17 years of Linux GUI churn I just want a stable GUI app that never breaks, distributed in a single file format that is supported/documented everywhere. I wish more GUI apps would seriously consider targeting Win32/WINE.

                        1. 1

                          I think when it’s a single exe file, it’s always fine. IMO the headache happens when it’s an exe file, several DLLs, a config file, and configuration expectations from the registry. That kind of stack leans on the least fun part of WINE. That’s not to say it’s unreasonable to target that, just that it doesn’t automatically eat as much of the complexity as, say, using alien to retarget a .deb to Fedora.

                          (I have had good success using the system package manager to set up a WINE executable and handle all the dependency/path crap for that for a GUI application. Not sure it was better, but it was reasonable.)

                          1. 1

                            I used Wineskin Winery on macOS and it appears to also be what GOG.com uses for their packaged Linux things. It creates a WINE root, runs an installer, and then gives you a .app bundle that runs the installed program in WINE. I’ve found it very easy to run things that come with a Windows installer on macOS.

                        2. 2

                          Just use FreeBSD instead. It’s the stable userland.

                          1. 15

                            FreeBSD solves nothing outside of libc-level compat, which is often the least of your problems.

                            1. 2

                              FreeBSD has kernel options like COMPAT_FREEBSD13 which, when enabled, keep kernel ABI backwards compatibility at the syscall level, all the way back to the original BSD from Berkeley, IIRC.
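
                              Roughly, from a kernel config (a sketch from memory; GENERIC enables a set of these by default, so check sys/amd64/conf/GENERIC for the authoritative list):

                              options COMPAT_FREEBSD32   # run 32-bit binaries on a 64-bit kernel
                              options COMPAT_FREEBSD11   # FreeBSD 11.x syscall ABI
                              options COMPAT_FREEBSD12   # FreeBSD 12.x syscall ABI
                              options COMPAT_FREEBSD13   # FreeBSD 13.x syscall ABI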

                              1. 16

                                You’re thinking in the wrong direction. FreeBSD provides a kernel, libc, and a set of core utilities that have a guaranteed stable ABI over a major release series (around five years). This is often enough for a lot of server applications. It is not sufficient for a GUI application. X.org is in the ports tree. Because the DRI drivers are ported from Linux and Linux’s attitude to stable ABIs is ‘oh, it’s Tuesday, let’s have a new one’, the GPU drivers and all of the OpenGL / Vulkan / whatever bits also live in the ports tree and have rolling updates. GUI libraries such as Qt and GTK are in the ports tree. The stable branch for the ports tree gets three months of best-effort security backports (i.e. don’t use it, because it doesn’t get all of the security backports and whoever made it the default should have their commit bit revoked).

                                If the DRI drivers were stable then you’d at least be able to provide a jail with all of the other userland bits, but updating Mesa is a pretty painful activity. This is more or less what Xbox does. Each game comes with a Windows VM that is run in Hyper-V on the host with GPU device pass-through. It doesn’t matter how the host OS is updated because the game bundles all of its dependencies from the kernel on upwards.

                                Personally, I’d love to see something like KDE or GNOME build a FreeBSD ports tree that contained a five-year-guaranteed-ABI-stable set of GUI libraries. That’s the missing part. It’s possible that something like helloSystem will do this.

                                1. 2

                                  ugh, I can see how that’s a problem :(

                            2. 1

                              not the package manager or packaging policies (*cough bash dependencies cough*)

                            3. 1

                              Title should have been “Win32 is the stable Linux userland ABI for games”. Binary compatibility is irrelevant for practically everything else. If you’re going to run proprietary software on Linux, it’s probably something like Oracle, where you might as well let the choice of application software also dictate your choice of Linux version.

                              And for proprietary games, why do you care if they run on Linux without a Win32 shim?

                              1. 6

                                Or firmware flashing tools, or VTuber capture software, or… There’s a lot of proprietary software out there, and software whose authors just didn’t want to have to deal with the constantly moving target that is Linux.

                                And this matters because it affects any kind of proprietary desktop software, and to a lesser extent even open source software; sure, you can patch it to keep up with the dance, but will you?

                                1. 4

                                  And this matters because it affects any kind of proprietary desktop software, and to a lesser extent even open source software; sure, you can patch it to keep up with the dance, but will you?

                                  I think a lot about all the software that didn’t get ported when GTK and Qt churned. Meanwhile, old Win32 binaries actually get a few improvements from running on a newer OS (e.g. upgrades to the new file dialogs if they didn’t do any hook-based changes).

                                  1. 3

                                    A lot of distro decisions can be explained as having their roots in having the entire world ship on 7 CDs in 1999, and treating it as one continuous OS you’re carving like a ham.

                                    This line in particular made me think a lot. Package managers for your core OS software are great, and if someone wants to distribute their software via package managers then that’s also great. But I think trying to force everything into that mold to the point that desktop Linux is only now developing any kind of support for applications packaging their own dependencies in a coherent way was a mistake.

                                    1. 1

                                      Right. It’s not like there wasn’t proprietary-vendored software at the time either; people bent over backwards so they could run Netscape in 1998. Plus a plethora of other things from Mathematica to Corel’s attempt at an office suite.

                                      I think back then the a.out/ELF/libc.so.6 transitions were happening too and also causing problems, but it’s hard to elucidate how they affected that world.