1. 44

  2. 18

    This is one of the things I wanted to write in response to https://lobste.rs/s/ezqjv5/i_m_not_sure_unix_won but haven’t really been able to come up with a coherent response. Anyone who believes that “in the good old days” UNIX was a monolithic system where programs could be easily run on different UNIXes wasn’t there. Hell, even if you stuck with one vendor (Sun), you would have a hell of a time upgrading from SunOS to Solaris, not to mention HP-UX, AIX, SCO UNIX (eugh), IRIX, and many others. Each had their “quirks” and required a massive porting effort.

    1. 3

      Hi, author of that original post. You’re definitely not wrong, unfortunately. My concern with that original post was that Linux was heading in the same direction of doing its own thing, rather than following POSIX or the Unix-way. We had a chance to do it better, with hindsight this time.

      (Whether the Unix-way ever truly existed is another point I’m willing to concede!)

      Having had time to think about it more, Linux does deserve more credit than I gave it. By and large, porting Linux stuff to BSD now is easier than it was from some of the late commercial Unixen (yes, I was there, if only for a few years). But it does feel like we’re slowly going backwards.

      1. 6

        As a flip side to that, I think that getting away from POSIX and “The UNIX way” (whatever that means) is actually moving forwards. “The UNIX way” was conceived in the days when the standard interface was a jumped-up printer, and 640KB of RAM was “enough for anyone”. Computers have exploded in capability since then, and “The UNIX way” seemed outdated even 30 years ago (The UNIX-HATERS mailing list started in 1987). If you told Dennis Ritchie and Ken Thompson in the 70s that their OS would power a computer orders of magnitude more powerful than the PDP-11, and then told them it would fit in your pocket… Well, I dunno, Ken Thompson is still alive, ask him.

        Anyways… My point is that the philosophical underpinnings of “The UNIX Way” have been stretched to the breaking point for a long time now. Arguably, for computer users rather than developers, it broke long ago, and they went to Windows or Mac. It’s useful as a metaphor for the KISS principle, but it just doesn’t match how people interface with operating systems today.

        1. 2

          The Bell Labs people did do ‘Unix mark II’ in the late 1980s and early 1990s in the form of Plan 9. It was rather different from Unix while retaining the spirit (in many people’s view) and its C programming environment definitely didn’t attempt to stick to POSIX (although it did carry a number of elements forward). This isn’t the same as what they might do today, of course, but you can view it as some signposts.

          1. 1

            My apologies, I thought the Unix way/Unix philosophy/etc. were widely understood. Probably the most famous statement of it is Doug McIlroy’s “Make each program do one thing well.” Even if we’re building orders of magnitude more complexity today, I think there are still lessons in that approach.

            I agree we have to move with the times, but thus far the reinventions have looked like what Henry Spencer warned about: reinventing UNIX, poorly.

            1. 1

              “Make each program do one thing well.”

              The precept is violated by a program like ls. Why does it have different options for sorting by size, ctime, etc.? Isn’t it more flexible to simply pipe its output through sort?

              sort itself has a -u option, unneeded as you can just filter it through uniq. Yet it’s a feature in both GNU and (Open)BSD versions.
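
              To make that concrete, here is a rough sketch of the two styles (the file names are hypothetical, and treating size as column 5 of ls -l output is an assumption that holds on most systems):

                ls -lS                 # built-in: ls sorts by size itself
                ls -ltc                # built-in: ls sorts by ctime itself
                ls -l | sort -n -k5    # pipeline: hand the sorting to sort instead
                sort names.txt | uniq  # pipeline: same result as sort -u names.txt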

              1. 1

                Are we at the splitting hairs or yak shaving stage now? I guess yaks can have Split Enz, like a Leaky Boat.

                My original post was that it was disingenuous to say “Unix won” when Linux did. @mattrose disagreed, saying that the past wasn’t a cross-platform utopia either (true, alongside his quote from famed Unix-fan Bill Gates). I opined that we had the opportunity to do better this time, but Linux is making the same mistakes, to the detriment of OSs like BSD. Heck, even macOS. Also, that those Unix guys had good ideas which I assert have stood the test of time, despite the latest in a long line of attempts to replace them. The machine I’m writing this on now is proof.

                Se a vida é. Wait, that was the Pet Shop Boys, not Split Enz.

                1. 2

                  Proponents of “the Unix way” espouse a strange dichotomy: they propose that the philosophy is superior to all competitors, and decry that the competitors are trouncing it in the market[1].

                  Something has to give. Perhaps the penchant for it is an aesthetic preference, nothing more.

                  [1] both in the economic one, and the marketplace of ideas.

                  1. 2

                    I totally understand the penchant for “the UNIX way”, and actually share it. It makes everything really simple. UNIX, at its base, is sending streams of text from one file-type object to another. That makes “do one thing well” really easy, because you can combine the output of one program with the input of another, and so you can write a program that just slots into that kind of pipeline. Even with non-text streams you can build that kind of pipeline, like gstreamer does, but…

                    From a user perspective, it’s a nightmare. Instead of having to know one program, you have to know 10 or more to do the same thing, and there is no discoverability. With Excel or its equivalent, I can easily select a column of numbers and get the sum of that column. The easiest “UNIX way” of doing the equivalent is something like cat file.csv | awk -F"," '{ s+=$3 } END {print s}', and it took me a while to figure out how to invoke awk to do that, and that required me to know that

                    1. awk exists
                    2. awk is good at splitting text into columns
                    3. awk takes an input field-delimiter option

                    And that is completely outside of all the awk syntax that I needed to actually write the command.
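
                    Spelled out with comments, that invocation is roughly the following (assuming the numbers to sum sit in the third comma-separated column; file.csv is just a stand-in name):

                      awk -F',' '          # -F sets the field separator to a comma
                        { s += $3 }        # for every input line, add field 3 to a running total
                        END { print s }    # after the last line, print the sum
                      ' file.csv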

                    This is why “the UNIX way” is being trounced in the market. When there’s a more complex but more user-friendly option, the conceptually simpler option is crowded out.

                    The trick is to try and keep in mind the Einstein aphorism: “Everything should be as simple as it can be, but not simpler.”

          2. 2

            My personal experience is with the shared GPU drivers; one pile of C code used to work for both Linux and BSD. The main change was the move into the kernel. As GPUs required more kernel-side scaffolding to boot, kernel-side memory management for GPU buffers, etc., the code had to start specializing towards kernel interfaces.

            In general, a lot of code has moved down into kernels. Audio and video codec awareness, block-device checksumming, encryption services, and more.

        2. 11

          I think another point that needs to be acknowledged here is that easily, say, 97% of open-source participation has been of the “works on my box” variety. In other words: the vast majority of code that is written in the open, or even has an open source license attached to it, is offered with a spirit along the lines of, “I wrote this code to solve my problems, in my environment. I don’t have a computer that runs your OS (heck, I might even be relying on things specific to my distribution, let alone OS), and I don’t have very much time or ability to make the code any more complex in order to solve use-cases that aren’t mine. But maybe you’ll find something interesting or useful here, or maybe you want to fork it yourself.”

          If anything, this attitude was probably more common in the good old days than it is now, since “open source” as a concept was more politically divisive, and since the origins of the movement were centered around individuals tailoring their copies of a particular program.

          POSIX, and the ideal of portability, is a wonderful thing. But it should be tempered by realism about people’s motivations and interests. In particular, if the developers on a particular project want to advance that project according to their own interests, portability will almost certainly not be a very compelling argument against doing so.

          1. 4

            I think there’s a divide between two small variations of this mindset:

            • It works for me, if it doesn’t work for you then it sucks to be you.
            • It works for me, I’m happy to take patches to make it work for you.

            The latter is becoming more common, whereas I rarely encountered it 10-20 years ago. It’s especially bad that Google has adopted the former mindset with things like Chromium, which have become core parts of the ecosystem. It’s increasingly difficult for a non-Google-blessed platform to gain any traction on the desktop because it is always lagging behind Google-blessed ones in support for the world’s most popular web browser and anything Electron-based. I really hope that this is something that the antitrust investigations into Google consider. It’s a massive market distortion and it’s not justified by installed base (they’re happy to take Fuchsia patches to Chromium because Fuchsia is a Google project).

            POSIX may not have been widely supported, but a lot of old software had a de-facto POSIX baked in: the absolute minimum set of *NIX APIs that it needed, with everything else being abstracted away in platform-specific code.

            1. 2

              Right, most of my OSS contributions are honestly just “btw here’s a big-endian fix”. Distributing as source was a necessity when you didn’t have several Unix boxes, each with trivial yet critical differences, to build for.

              In the Wintel world we got, everyone could just run the exact same binary regardless of system, so there was a lot less need for source.

            2. 8

              Oh man, GNU autoconf. Is that still used in new projects? I feel like there’s a whole book to be written about all the weird things it tries to detect. It’s been 15 years since some of the operating systems it supports were last even booted other than as a curiosity. But if you still need BeOS compatibility, why, it’s got you covered.

              1. 9

                I’ve thankfully avoided autoconf for a long time, but I did encounter one project that didn’t include all of the default autoconf things and so had a simple autoconf script that checked about a dozen things and worked on all surviving systems around 2008.

                I’ve used CMake for all new projects for a decade or so and it does a lot of the checks. Almost 20 years ago, phk pointed out that most of the autoconf checks boiled down to ‘is this Linux?’ or ‘is this *BSD?’ or ‘is this Windows?’ and CMake has done a pretty good job of memoising a lot of that and, unlike autoconf, works out of the box on Windows. I don’t know if it supports BeOS, but it supports Haiku out of the box. The memory allocator that I work on uses CMake and supports Haiku (allegedly - I’ve never tried it).
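
                As a caricature of phk’s point, much of what those checks establish amounts to a shell sketch like this (the HAVE_* names are made up for illustration):

                  case "$(uname -s)" in          # 'which OS is this?' answers most of what configure asks
                    Linux)                  HAVE_EPOLL=1 ;;
                    FreeBSD|NetBSD|OpenBSD) HAVE_KQUEUE=1 ;;
                    Darwin)                 HAVE_KQUEUE=1 ;;
                    *)                      echo "here be dragons" ;;
                  esac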

                For all of the hate autoconf gets (and that libtool deserves: it is completely useless on all platforms except AIX, and not supporting AIX is a feature, not a bug), it is nowhere near as painful as imake. Moving from imake to autoconf was one of the motivations in the X.org fork from XFree86 and it was a big improvement.

                1. 2

                  It’s tragically sad that phk’s post is nearly 20 years old.

                2. 9

                  I tend to prefer autoconf over CMake for some reasons:

                  • It’s easier to debug when it goes wrong, which is often (CMake debugging is excruciating unless you’re at Kitware)
                  • It is actually smaller: autoconf is a non-euclidean amount of M4, whereas CMake is a gigantic >200 MB C++ blob, though I doubt the difference matters much in practice
                  • libtool and friends support AIX conventions better (it’s wacky and has fat libraries, weird exports, etc.), which is better for my dayjob (sorry David)

                  But other than those things, it is pretty gross and hideously slow on anything that can’t hide the realities of how much fork and stat suck. CMake has a better separation of concerns (i.e. you can actually target not-Unix and make IDE projects from it).

                  1. 4

                    Making IDE projects is really what sold me on CMake. While debugging the build system itself can be excruciating with CMake, debugging the resulting program on a new platform without IDE support tends to be similarly excruciating. Especially if you want to target non-Unix also. And autotools on non-Unix is as godawful to debug as CMake for build issues, IME. Plus, my debugging skills are better on Unix-y things…

                    CMake debugging has improved enough that it sucks a lot less than it used to, and I don’t have to touch AIX lately, so it usually carries the day for me.

                    With all that said, if I regularly needed my programs to run on AIX, HP-UX, or Solaris, and needed to build them with non-GNU or non-clang toolchains, I’d still choose autotools even now. CMake with GNU/Clang works pretty well for me now that I rarely need to worry about anything other than recent-ish Linux/BSD/Mac/Windows.

                    1. 2

                      Making IDE projects is really what sold me on CMake.

                      This is the only reason I use CMake: it supports Visual Studio and CLion well.

                    2. 3

                      CMake debugging is excruciating unless you’re at Kitware

                      This, so much this. The last time I needed to cross-compile, paths were getting mangled: /usr/bin became /path/to/cross/toolchain/usr/bin/usr/bin (note the doubled /usr/bin); I found myself running strace to figure out which of the wrong files it was including, and then setting breakpoints in the debugger in a custom debug build of the cmake binary to find out which of the cmake libraries was causing that include to get run.

                      It took a week to find the right variables to set to make that house of cards work.

                      1. 7

                        Every time, I have to google -DCMAKE_BULLSHIT_FLAG, and for the project-specific ones, hope they documented them! With most autotools projects, I can at least run ./configure --help. I think CMake is easier for developers, but autotools is easier for integrators.
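
                        The integrator-side contrast in a nutshell (assuming a project already configured into a build/ directory; cmake -LAH only works against an existing cache):

                          ./configure --help    # autotools: every user-facing knob, documented in one place, before building anything
                          cmake -LAH build      # CMake: dump cached variables with their help text, but only after a first configure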

                      2. 1

                        The debugging thing is probably subjective. My experience debugging autotools is universally negative. I can usually find the place where it’s doing the stupid thing in the generated output, but trying to map that back to the input, and to whether it’s the autoconf, automake, or libtool input, is hard. Perhaps more familiarity with the tooling makes it easier. With CMake, there’s a single layer that does the translation and there are helpers to dump the properties for any object in the system, so I can do that and easily see where the input is coming from. Again, that’s partly due to familiarity with CMake.

                        The only time that I’ve had a problem with debugging something in CMake was when I got the Objective-C runtime to build on Windows. I was slightly abusing CMake to say that Objective-C[++] files were C[++] so that I could reuse all of its built-in machinery. Unfortunately, when invoking cl.exe, CMake helpfully hard-coded the /TC or /TP (compile as C/C++) flags depending on the source language. When invoking clang-cl.exe, this forced it to try to compile Objective-C as C and Objective-C++ as C++, which then broke. I can’t really count that against CMake in a CMake vs autotools comparison, because autotools can’t target a Visual-Studio flavoured compiler or linker.

                        I’m not sure where the >200MB number comes from. On FreeBSD, the entire cmake 3.21 package is 34 MiB, of which 3.9 MiB is the C++ bit and the rest is all CMake script. On Windows, where the official binaries include all dependencies, the total install size is 98 MiB for CMake 3.21.2. Note that the size isn’t an apples-to-apples comparison with autotools because CMake does not depend on bash (which doesn’t matter on *NIX platforms, but is incredibly important on Windows, where autotools requires something like mingw) and includes quite a few (large and useful) things that autotools doesn’t, for example:

                        • A testing framework (CTest) that provides a simple way of running tests and integrates well with most CI systems.
                        • A package-building system that can generate tarballs, RPMs, DEBs, FreeBSD packages, Windows installers, NuGet packages, and a bunch of other things.
                        • Infrastructure for exporting and importing targets so that you can distribute modular components and import them easily (I think libtool was supposed to do this, but I’ve never seen it work for anything other than very tightly coupled projects).

                        I’ve been fortunate enough never to have had AIX inflicted on me, so I can’t really speak to how well CMake works on AIX versus autotools, but apparently there are 3.18 packages. I have had to use Windows and autotools really suffers there:

                        • It requires a UNIX-like shell and set of command-line tools, so you end up needing to install mingw or similar.
                        • It can’t drive a Visual Studio toolchain, so you have a hard dependency on clang or gcc for your build system, even if your code would happily build with MSVC.
                        • It can only generate GNU Make output, so now you have another tool that doesn’t really support Windows well in your dependency chain.

                        For me, the biggest reasons to prefer CMake are:

                        • It generates compile_commands.json automatically (I stick CMAKE_EXPORT_COMPILE_COMMANDS=true in my environment, otherwise you have to opt into this), so all of the non-build tooling works well.
                        • It generates ninja files that are significantly faster. For LLVM, the autotools build system took 30 seconds to run ‘make’ on a tree with no changes. The CMake-based one takes a tiny fraction of a second. Ninja also does parallel builds better.
                        • It can target Visual Studio (and use Ninja to build with the VS compiler and linker, which is typically much faster than a native VS build), which means that we can test our code with more compilers (we have clang, gcc, and MSVC in our CI matrix and each one provides warnings that the others don’t).
                        • It can generate Visual Studio and XCode projects, for when I want to use an IDE’s debugger. I typically live in vim, but for some debugging tasks an environment where the debugger and editor are integrated is really nice.
                        • CTest is really easy to use and we can trivially connect the output to CI reporting.
                        • It integrates well with things like pkg-config and other mechanisms for finding other code. I’m increasingly using vcpkg for dependencies and it has fantastic CMake integration (it’s largely implemented in CMake) and it will fetch and compile all of my dependencies for when I want something that statically links everything.
                        • The UI for users is much nicer. ccmake (*NIX) or cmake-gui.exe (Windows) give me a nice UI for exploring all of the options and let me expose typed options (e.g. booleans, simple enumerations where the user must pick one) and dependent options (some are hidden if they’re not required).
                        • The packaging support works even for header-only C++ libraries. I’ve just modernised the snmalloc build system and now you can do a build of the library in the header-only configuration and it will generate CMake config files that you can import into another project and get all of the compile flags necessary to build.
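
                        For what it’s worth, the day-to-day workflow the first few bullets describe boils down to something like this (a sketch assuming CMake 3.20+ and Ninja on the PATH, run from a source tree with a top-level CMakeLists.txt; the build/ directory name is arbitrary):

                          cmake -G Ninja -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON   # configure; also writes build/compile_commands.json
                          cmake --build build                                          # drive Ninja for the actual build
                          ctest --test-dir build                                       # run whatever tests the project registered with CTest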
                        1. 2

                          I’ve been fortunate enough never to have had AIX inflicted on me, so I can’t really speak to how well CMake works on AIX versus autotools, but apparently there are 3.18 packages

                          This is the point where I say I oversimplified things: what I actually target is an AIX syscall emulator, and that environment tries to do away with the worst excesses of AIX. For example, it tries to match the normal soname version convention (libfoo.so.69) instead of the hellish AIX one that requires dynamically using ar (libfoo.a(libfoo.so.69)), which GNU also came up with because IBM kinda shipped unversioned libraries (there’s an autoconf switch to enable one or both conventions; most of this damage is inflicted on me by GNU), and it only has 64-bit packages, so no need for fat libraries, etc. - though I still need libtool because the .so files STILL have to be archives so ld won’t fuck up exports (primarily so it links with libfoo.so.69 instead of libfoo). I’m sure this can be taught to CMake instead of using libtool, but it still sucks, man! Said RPM for 3.16 is 233 MB without any of the GUI stuff; it seems that’s because CMake’s binaries are statically linked, likely due to bad dumb AIX linker stuff. Don’t you just love how diverse POSIX can be?

                      3. 9

                        I got a cool horror story about this. It’s probably mostly off-topic, unless we’re talking about how weird autotools are and how nobody has wanted to learn them for years, which is somewhat tangential. But I mostly want to tell it for the laughs ’cause I know people here will probably enjoy it.

                        One of the “coolest” things I can say I’ve crossed off of a bucket list I never wanted in the first place during my experience in the corporate world is, uh, writing configure scripts by hand. I wish I were kidding but nope. For quite some time, I literally fixed dozens of bugs in a configure script, which I edited, by hand. Here’s what happened.

                        We had this huge hunk of a codebase that was split in maybe 60 or so separate programs, all of them managed through a big blob of autotools magic. It was definitely a non-trivial thing. One day, way before my time, back when they were more like 15 programs or so, someone who did not want to learn autoconf & friends had to add another program to that list, and being entirely unsure how this whole thing even worked, they just copy-pasted the relevant bits of the configure script and replaced all occurrences of $whateverprogram with $theirprogram, and did more or less the same for config.h, Makefile.in and so on.

                        Of course that wouldn’t quite make it past the mandated code review, so this person dutifully did what was asked and added the “right” autotools incantations in Makefile.am & co. They could never make it work, though, so they did the only logical thing: they committed everything that autotools coughed out – including the half-generated, half-handcrafted configure script, config.h, Makefile.in, everything – and changed the build script to skip the autogen step and go straight to running configure.

                        Since the automated integration system was happy – it produced successful builds, after all! – and all the checkboxes raised during the review process had been ticked, there was no reason to delay this important piece of functionality anymore so the whole thing was merged.

                        Fast forward like ten years and of course the whole autotools scaffolding was basically useless. Because, for reasons I am not at liberty to discuss, this whole thing had absolutely no documentation whatsoever, nobody quite knew exactly what boilerplate was and wasn’t required – they just replicated that guy’s ten-year-old diff, including the incorrect incantations in Makefile.am & friends. None of it was relevant, of course. The build system didn’t use it to generate anything, and if it tried, it wouldn’t work anymore, anyway: the original commit didn’t work, and it had ten years’ worth of replicated junk on top of that. Only the configure script – most of which had become hand-rolled by then – mattered.

                        By the time I got there I was more or less the only person on the team who’d been around Unices back when autotools were really common, so I was eventually asked for an estimate on fixing it. Upon pointing out that I don’t really know autotools either, that more importantly I’m not too familiar with the system, and that it has like ten years of brain damage, so it’d probably take me a few weeks at best, it quickly got chalked up under “shit we’ll ask interns to do”, because we’re not going to throw full-time senior engineering money at that. But various build errors did eventually find their way to me, and I’d usually trace them to some bash copy-pasta that had been lifted off StackOverflow and plastered onto configure, which I’d fix as if this was a real thing.

                        Needless to say, the damn thing never got fixed. By the time this was happening, most of the people who showed up for internship interviews hadn’t even heard of autotools, and learning all that, plus all the ins and outs of the system that got built, well enough to actually fix the whole thing, would realistically take way longer than the internship period.

                        Edit: I don’t think that gets used for new projects much, no. New projects don’t need it, they all run under Ubuntu LTS in Docker containers anyway :-P.

                        1. 1

                          Edit: I don’t think that gets used for new projects much, no. New projects don’t need it, they all run under Ubuntu LTS in Docker containers anyway :-P.

                          You might be kidding. But the world thinks that’s the right choice of build system.

                          1. 1

                            I’m kidding somewhat :-).

                      4. 5

                        Yeah, I definitely remember the pre-autoconf days (early 90s) on proprietary Unixes. For free software projects, if you were lucky, someone had already done the hard work, and all you had to do was pick the appropriate Makefile, or set OS-specific definitions in the shared Makefile. If you were less lucky, you had to know the specific peculiarities of your system, the kind of thing autoconf tests for (do I need termios.h? time.h, or sys/time.h? Or both?). Read the Configure script for trn if you want to see the kind of hoops you needed to jump through.

                        1. 7

                          It was very, very, very bad. There are legitimate arguments to be had about the Linux monoculture, but pretending (or wishing) that there was a prelapsarian past of Unix Purity™ is just not an accurate representation of the state of software BITD.