1. 10
  1.  

  2. 5

    I think it’s worth remembering that this was all written in 2003. Lots of things have changed since then, and I think that ESR’s… rather tribal, I guess, characterization of the “hacker culture” stands on somewhat shaky ground.

    A lot of the things that ESR was willing to ascribe to “culture” are, I think, consequences of very objective, technical realities. For example, I think the importance of tools that play well with other tools (hence things like the rule of silence) lies not so much in some mysterious collective ethos of hackerdom as in the fact that such tools – and environments built out of them – lend themselves more easily to being built by independent developers working in their spare time, or at least in very different environments and organizations, with different interests, from universities to ISPs.

    The FOSS community certainly tried to write plenty of software that appeals to “Aunt Marge” – although it’s worth pointing out that the “Aunt Marge” of 2003 is very much unlike the “Aunt Marge” of our day. Back in 2003, most of these attempts were kinda superficial and somewhat tone-deaf (ESR had a very angsty essay about his attempt at configuring a printer via a GUI). 17 years later, it’s a very different story – a lack of design effort, or a culture of “built by programmers, for programmers”, is certainly not one of the reasons why Gnome, for example, is still unappealing to Aunt Marge.

    1. 2

      Hearing about the ESR printer story again reminds me of when Gruber wrote about it – how the same factors that create that environment of tools make it adapt poorly when you try to put a GUI around it, because you’re working against the grain of the rest of the system. Almost like Conway’s law in reverse.

      1. 3

        I’d say that Gruber argues more about the disrespect that a lot of open source advocates and developers have for UX/UI designers. There are many hard jobs – software development isn’t the only hard domain – and devaluing other people, like technical writers, leads to obvious situations like “gee, this open source documentation is crap!”

        1. 4

          This is also something that should be put in context. Back in 2003 – and there’s no shortage of this in 2020, either – a lot of users would show up on the mailing list with a more or less detailed critique of some release, including items like “these buttons are not in the right order” or “the logo doesn’t look friendly enough”. (Around 2013 or so the more elaborate ones were called “usability studies”.) Of course, some (sometimes most, sometimes all) of these “issues” would be easily resolved by just reading the docs – and bear in mind that this was 2003, when computers were still foreign enough that reading a book in order to learn how to use a program was considered normal. More to the point, none of these users were actually willing to implement any of the changes they proposed, and if someone else did implement a change, you could rarely reach them afterwards to ask them to test the patch. Someone in the LUG I was in was burnt by this enough times that he jokingly referred to not showing up at a meeting as “making a UI improvement” or “trying to make the meeting more user-friendly”.

          Plus – remember, this was 2003 – most of these programs had far more serious problems than buttons not being in the right order. This was back when Kmail would corrupt mailboxes and Epiphany would segfault if you pointed it at the wrong website. (Edit: if your first thought was “hey, I could swear Kmail corrupted my mailbox, but it was like three years ago”, see the last couple of paragraphs :-) ).

          This led, indeed, to a feeling of disdain for this sort of “bug report”, and it was certainly aided by the fact that Linux did have a very capable CLI, which most users could use pretty well – because, again, this was 2002 or 2003, when it was still common to use CLI-only installations in various environments; there were few web-based dashboards for server and/or router administration back then, for example, and most of the ones that existed were pretty bad.

          This changed a lot starting around 2006-2007, when all the major Linux desktop suites became stable enough that you could, indeed, productively work on UX improvements for them, and a lot of work in this field did happen. At the risk of kinda making Gruber’s point, I think that the effect this had in the long run was, overall, detrimental, although I don’t think “UX designers” are at fault for that. (Edit: or perhaps it’s more aptly said that the approach most projects took was detrimental in the long run.) A lot of good software was abandoned or rewritten in the name of UX improvement, and all the hard non-UX work that went into it – most of it, as with most software, in the form of bugfixes – was thrown down the drain. Unfortunately, as the users of KDE 4.x (and Plasma 5) or Gnome 3 discovered, no matter how well you understand a problem after working on version n, version n+1 is still going to come out bug-ridden at first. And the more complex and ambitious version n+1 is, the longer it’s going to take to get it stable.

          Consequently, while the Linux desktop experience was generally better in 2010 than it was in 2005, it was generally worse in 2015 than it was in 2010. (There is one important variable there that lies mostly outside the realm of desktop and userspace development, namely driver quality – barring the short period between ca. 2010 and ca. 2012 when PulseAudio went out of alpha, it took a lot less fiddling to get your hardware working.) Now it’s definitely better than what we had back in 2010, but the guys in Redmond and Cupertino haven’t been standing still in that time – they both did a bunch of interesting things while the FOSS community was busy trying to fix regressions related to desktop icon placement (or removing icons altogether), after that component had been rewritten two or three times in ten years. As a consequence, FOSS desktops are playing catch-up with their commercial counterparts just like they did in 2003, except this time it’s pretty much self-inflicted.

          FWIW, I don’t think this was some mysterious cultural problem, either. What happened was that there was only so much interest you could garner for gradual UX improvement – the more radical visionaries didn’t have the patience to work with old codebases (and sometimes the codebases were, legitimately, unsalvageable for major UI rewrites), and many of the people who’d written these tools were either uninterested in a radical makeover, or wanted to work on other things after having worked on an app for eight years and taken it to the point where it did everything they needed it to do. This had a lot of slowly-evolving but significant effects: it changed a lot of groups – alienating some developers and attracting others – it changed the preferences of seasoned users (just look at how many KDE desktops vs. wmii/ratpoison/ion you saw at a LUG in 2005, compared to how many KDE desktops vs. i3/xmonad/awesome you see in 2020), and so on.