I genuinely look forward to this article every year.
It helps me reflect on things that I might just be accepting as “the way things are” in a Stockholm-syndrome manner, as well as reflect on how a few things have actually improved.
Neat article, I wonder if they’re accepting PRs. I’d like to add an item for handicapped accessibility.
(For instance font sizing on most Linux desktops is a nightmare.)
This article is well-intentioned, but there are a couple problems with it as a summary of what we should fix in the open-source ecosystem:
First, the factual inaccuracies:
Linux drivers are usually much worse (they require a lot of tinkering, i.e. manual configuration) than Windows/Mac OS drivers in regard to support of non-standard display resolutions, very high (a.k.a. HiDPI) display resolutions or custom refresh rates.
I’m not able to discern what real phenomenon this is describing. Drivers have never been where users configure display modes, and Xorg.conf is a thing of the past. For any case where a display doesn’t work right with Linux (and it is the kernel’s responsibility since kernel modesetting was adopted the better part of a decade ago), a bug should definitely be filed on kernel.org (and linked in this article).
No reliable sound system, no reliable unified software audio mixing (implemented in all modern OSes except Linux), many old or/and proprietary applications still open audio output exclusively causing major user problems and headache.
ALSA has software mixing (dmix), and all other Linux audio systems live atop ALSA. We can’t fix proprietary applications, but you can virtualize their access to the sound hardware.
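For reference, a minimal sketch of what enabling dmix by hand can look like; the device numbers and sample rate below are placeholder assumptions, not universal values:

    # ~/.asoundrc -- route the default PCM through dmix so that several
    # programs can play sound at once, mixed in software.
    pcm.!default {
        type plug
        slave.pcm "mixed"
    }
    pcm.mixed {
        type dmix
        ipc_key 1024        # any unique integer
        slave {
            pcm "hw:0,0"    # first device on the first card (assumption)
            rate 48000
        }
    }

On most modern cards without hardware mixing, ALSA already sets up dmix for the default device, which is why multiple ALSA applications can usually share audio out of the box.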
Wayland works through rasterization of pixels which brings about two very bad critical problems which will never be solved:
Firstly, forget about performance/bandwidth efficient RDP protocol (it’s already implemented but it works by sending the updates of large chunks of the screen, i.e. a lot like old highly inefficient VNC), forget about OpenGL pass through, forget about raw compressed video passthrough. In case you’re interested all these features work in Microsoft’s RDP.
Secondly, forget about proper output rotation/scaling/ratio change.
So does X11, and so do the Windows and Android display systems. I don’t know the precise interactions between macOS applications and the display server, but usage of “Display PostScript” might imply OS X does pass vector images to the display server. However, all of this is immaterial: Wayland is an IPC protocol, and can easily be amended to support vector graphics if there ever appears a reason to do so: new buffer types can be negotiated for contents like vector graphics or compressed video. For now, no clients would use it and it would not improve performance or appearance. All graphically intensive X11 programs operate under a raster paradigm and do not use X11 drawing primitives.
OpenGL passthrough is a very specialized use-case, and much like remote Wayland in general, can be done as long as there’s coordination on both sides. Saying that Wayland makes this impossible is disingenuous. Pixel-based remoting works well and is still what happens in RDP for much of the screen contents (e.g. Firefox or Chrome over RDP); in fact, the reference Wayland compositor can be accessed over a free-software implementation of Microsoft’s RDP.
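To make that last point concrete: the reference compositor, Weston, can run as an RDP server when built with its RDP backend (which uses FreeRDP). A sketch, assuming such a build; the exact options for supplying a TLS certificate/key vary by Weston version:

    # Serve the Weston session over RDP; on real setups certificate/key
    # options must be supplied as well.
    weston --backend=rdp-backend.so

An RDP client such as xfreerdp can then connect to the session.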
Then there’s the fact that many of these are coordination problems. We can’t fix “there are too many distributions” or “[insert proprietary software] does not play nice” or “no unified interface to [thing]”. We can and should fix specific problems with a given distribution or a given interface to functionality. But fundamentally in open-source work you can only do your best to provide working code; it isn’t possible to forcibly remove the broken stuff in other people’s repositories and distributions. “Different programs do things differently” is not a problem we can fix. On the other hand, “there is no way for programs X and Y to share code” is a problem we can and should fix, and it will alleviate many of the problems described here. We should be finding ways to share more code between GNOME and KDE; GTK, EFL, and Qt; et cetera.
Most of the criticisms of the Linux kernel are very valid and point to failings in the design of UNIX: resources should be revocable, and kernel subsystems should have some isolation. That said, many of the problems listed here are also complaints about work that simply hasn’t been done due to lack of hardware documentation or developer power.
In general this article spends way too much time and effort complaining that the Linux ecosystem is not a product, and as such fails badly at things products do such as provide customer service and backwards compatibility guarantees. A multitude of outstanding bugs is worse than having all those bugs closed, but ultimately fixing bugs requires lots of work, and not many people can afford to work on Linux. It isn’t productive to respond to the work of those who can by saying, as quoted, “Fuck it! This ‘OS’ is a fucking joke.”
many of the problems listed here are also complaints about work that simply hasn’t been done due to lack of hardware documentation or developer power
The fact that there are good reasons for the failure does not solve the issue.
It might suggest that “building a free desktop OS” is not a successful model and we might as well give up. However, I would also ask if “building a desktop OS” in general is a profitable model. Apple does not make its profit by selling OSX or iOS. Even Microsoft does not make much profit from Windows anymore.
I switched away from Linux a couple of months ago (happy with FreeBSD now).
However I do maintain a number of Linux systems for people I know. I didn’t force any of those people to use Linux, but some of them had just never used a computer before. It actually used to be extremely low maintenance. Things just worked. They were browsing the web on their netbooks, playing a small game, or doing other things.
Then, rather suddenly, the adoption of three big changes on Linux caused a lot of problems across the distributions:
Network Manager being used everywhere
Systemd being used everywhere
Pulseaudio being used everywhere
On a couple of issues that the systems ran into I thought “well, it’s early stage. It will make things easier, eventually”.
So I kept updating. And a lot of those updates were like “things get fixed, but that now needs some kind of workaround”. I trusted it would get better, especially because a lot of distributions had adopted the stuff, so lots of people were working on it.
I started to make jokes every once in a while about things becoming worse, but I wasn’t serious. Those were rather big changes, so surely it would settle.
NetworkManager was easy. You could just turn it off and set things up using dhclient, dhcpcd or even a static configuration, whatever fit best. If you needed more you just scripted it or used an alternative. But I tried to stick close to what the distro goes for, because those were machines for computer novices; people who didn’t get tabs, etc.
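To illustrate the kind of hands-on replacement I mean, here is a Debian-style example; the interface name and addresses are placeholders for whatever the machine actually has:

    # /etc/network/interfaces -- plain ifupdown instead of NetworkManager.
    auto eth0
    iface eth0 inet dhcp

    # Or fully static, for machines that should never surprise you:
    # iface eth0 inet static
    #     address 192.168.1.50
    #     netmask 255.255.255.0
    #     gateway 192.168.1.1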
Yes, I also offered Windows and so on, but those were even more complex for them (they mostly wanted to browse the web, and virus and firewall warnings were more than they could deal with). Also, I couldn’t really help them on Windows.
So that left systemd and pulseaudio. Here systemd was the smaller issue. While the distros and I ran into some bugs, those were actually fixed. And at that time systemd was more “just an init system” with nicer/different syntax. There was some journald work, because I had some scripts parsing log files that I changed over to the journald export format. What mostly bugs me there is that their compatibility promises contain outright lies. They claim to not have changed the format since a certain version. That claim is simply wrong. They did change a couple of things since then, and older versions of Red Hat distributions show that, but whatever.
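For anyone making the same migration: the export format (what journalctl -o export emits) is simple enough to parse by hand. Text fields arrive as NAME=value lines, binary-safe fields as a name line followed by a 64-bit little-endian length and the raw payload, and entries are separated by a blank line. A minimal sketch in Python:

    import subprocess

    def parse_export(stream):
        """Yield journal entries (dicts) from a journald export-format stream."""
        entry = {}
        while True:
            line = stream.readline()
            if not line:                 # end of stream
                if entry:
                    yield entry
                return
            line = line.rstrip(b"\n")
            if not line:                 # blank line ends the current entry
                if entry:
                    yield entry
                    entry = {}
            elif b"=" in line:           # ordinary text field
                name, _, value = line.partition(b"=")
                entry[name.decode()] = value.decode("utf-8", "replace")
            else:                        # binary field: length-prefixed payload
                size = int.from_bytes(stream.read(8), "little")
                data = stream.read(size)
                stream.read(1)           # consume the trailing newline
                entry[line.decode()] = data.decode("utf-8", "replace")

    # Example use: print unit and message for the last few entries.
    proc = subprocess.Popen(["journalctl", "-o", "export", "-n", "10"],
                            stdout=subprocess.PIPE)
    for ent in parse_export(proc.stdout):
        print(ent.get("_SYSTEMD_UNIT", "?"), ent.get("MESSAGE", ""))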
At some point it seemed like most stuff was fixed, which was great. Faith restored. They still had frequent updates, but everything was working and I didn’t care so much.
Until they started adding all that pseudo-optional stuff that interfered and broke things again. There was quite a lot of that in the last year.
Pulseaudio was a similar story. There was so much breakage. However, nothing had a really hard requirement on PulseAudio back then; Skype was the first one I remember. So I was able to pretty much disable it. The next evolution was a script that would simply kill pulseaudio before an application started. That fixed pretty much every problem. The people were fine with that, because they barely used multiple applications at the same time, so nothing would suddenly stop working. Either they surfed or they played video games; applications killing each other’s audio wasn’t a problem, and so that hack was kind of okay. Besides, the bugs were going to be solved soon anyway.
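The hack was roughly the following (a reconstruction, not the original script). Note it assumes autospawn = no is set in ~/.config/pulse/client.conf; otherwise PulseAudio simply respawns behind your back:

    #!/bin/sh
    # no-pulse: stop the PulseAudio daemon, then run the given program
    # against plain ALSA.  Usage: no-pulse somegame --fullscreen
    pulseaudio --kill 2>/dev/null
    exec "$@"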
Just to be clear: on all of those there were bug reports. I’m not talking about misconfigurations here. I had misconfigurations, but those were fixed. Sometimes the documentation did not exist or was outdated (yes, the official one) or similar, but eventually those things were fixed.
But then there were a number of kernel and pulseaudio updates that completely ruined pretty much everything again. This was a major annoyance, especially because the resource usage of pulseaudio kept growing and audio got worse and worse. I had apps crashing with the PA default configuration, etc. That was when I decided I’d switch away from Linux. I wanna get things done, not deal with pulseaudio all day.
First I looked at distros that didn’t ship Pulseaudio and the other tools, but then I came back to FreeBSD, which I had used ages ago and remembered as the most pleasant experience, and I was able to simply disable pulseaudio in the ports and be done.
Later I learned that they also have sndio from some of the OpenBSD folks, and that it evidently works on Linux too. To make it short: sndio is effectively Pulseaudio for Linux and the BSDs, just working. Completely hassle-free, no big resource requirements. All good. So if you are on Linux and hate Pulseaudio, you might wanna give that a try. I don’t know if the support is official or just patched in, but recent Firefox also works with it.
Anyways, this is why those three products seem to be the “Main Linux problems on the desktop” in 2017.
systemd actually seems to be becoming more sane. Well, maybe not sane, but it has stopped constantly breaking stuff. I think this is because there are sane and experienced people sending in patches these days. I still think OpenRC, used by Alpine, Gentoo and others, is the way to go, but honestly, I don’t think one should really care about the init system as long as it is working. systemd seems to have gotten into a state where it is working most of the time.
NetworkManager… I don’t get why, but that one seems to constantly mess up networking. For me it’s the easiest thing to replace and then have things just work, so I don’t care enough to dig in, but every now and then I see others who rely on it struggle with it, and it’s just really odd to me. I really don’t wanna blame the devs here though; this is the software I know least about. I just got into the habit of dumping it and living without it, instead keeping a setup that works perfectly for me. I really don’t think badly of it. It doesn’t fit me, but maybe it suits other people.
It still seems that people run into these “new” issues a lot. None of these issues existed when I first installed those systems; all I did was follow the updates of those three pieces of software. So I’d say those are the biggest problems right now.
I think these are possible solutions:
sndio
Give the Open Sound System another try
OpenRC
I really can’t see the games problem. I just don’t see it. Nearly all the games I want exist natively on Linux: Payday, Life Is Strange, Rust, The Witcher 2 (bigger games according to Steam), War Thunder (free to play), and pretty much all the indie games work perfectly and natively.
All the older Windows games work perfectly in Wine, and a big part of newer Windows games do too. I never had Wine run badly in the last five or so years. Oh wait, I did, but that was when PulseAudio decided to occupy a full core; switching Wine over to ALSA made it work perfectly. I even use stuff like Pipelight to have recent Flash in my browser.
In fact I was most surprised that a tool that modifies a game’s window, the title bar and so on by patching memory, worked under Wine (if you enable the Windows-style title bar, of course).
Wine is probably one of the best pieces of software, because with it I get more Windows applications working than on a recent version of Windows.
About the big desktop environments: I agree. The only bigger desktop environment that appears to work really well is XFCE. However, I only tried KDE, Gnome and Unity for comparison.
Also I think it’s very hard to measure security. Simply counting all the vulnerabilities detected by researchers seems like a really bad approach.
I also think that the main reason many systems get infected is, at my guess, that many projects are done without a system administrator/devops/platform engineer/systems engineer/… or, if someone does have that title, without real experience or understanding of the system and the technologies in use. This happens way less in the Windows world, indeed. Also, libraries often don’t get updated, even when vulnerabilities are known.
But those are all just guesses based on what I see in the real world. So I hope it’s clear that these views are subjective.
Also mostly agree with the article, otherwise.
I do maintain a number of Linux systems for people I know
How does that mix with the following?
I had some scripts parsing log files
I put my wife on Ubuntu LTS and had a very low maintenance system. She made the switch to systemd without noticing, because the system was never modified at that level. The only reason for sudo was updates. There certainly were no scripts parsing log files or anything.
Honestly I wish I understood NetworkManager and why it exists. It never cooperates, which is very much un-Linux.
In my daily life I don’t really encounter many of these. Granted, I use Linux on a desktop computer, and OS X on a laptop. I think the year of “Linux on the desktop (for non-gamers)” was around 2010 or so, but Linux on the laptop is still not really a reality, though I haven’t tried the new Dell XPS machines. So that has left me with OS X on laptops.
But on a desktop, as all I do is fool around with a browser or editor it doesn’t really matter. Of course, just last year I had to hop over to Ubuntu since running “dnf update” (without any third-party repositories etc.) installed a kernel that didn’t boot.
Some steam games work nicely (all Source engine based games), which is fun.
I would like to point out a sort of… “meta” problem:
If you have to dedicate a huge chunk of your constructive criticism to curbing people who would somehow find said work offensive, or take it personally, then your biggest issue, to me, is the community!
The problem is that the Linux-using community is huge and fragmented into groups of people with varying motives and levels of social skills. You’ve got everyone from reasonable devs who use Linux because they understand it and its limitations, to staunch and angry supporters who yell that Linux is the One True OS. Because Linux is opt-in (unlike Windows and Mac), the people who run it will be more likely to care about minutiae and articles like the OP, so all sorts of people will end up reading the article and reacting to it unless the author can firewall off some of those who will derail the discussion.
So then the question for me becomes: is this issue something inherent to large, fragmented social groups, or is it something entirely unique to the Linux community?
I think it’s present in any sort of large group but more so in the Linux community because running Linux is sort of reserved for nerds (like me) and we have a problem with social skills and emotional intelligence.
All of the links provided about how terrible ALSA is are from at least 3 or 4 years ago. That’s weak evidence for ALSA being terrible in 2017.
One of the biggest problems with desktop Linux (and free-software in general) is the naming problem.
To wit, I set my father up on a Linux box:
“I looked at the help and it said that it needs GNOME. I thought I was running Linux.”
“That’s the desktop environment, which is distinct from the kernel, which is Linux.”
“But this says it’s for the Unity desktop, I thought I was using GNOME.”
“Okay, well, yes, Unity is a desktop environment for Ubuntu that uses a lot of the GNOME stuff, but it’s not really the same thing…”
“What, Ubuntu? I thought it was Linux!”
This is compounded by the names used by various pieces of software. Sometimes it’s just opaque (my father seeing “ImageMagick (display Q16)” doesn’t help anyone).
More problematic are names that are just stupid: my nephew can’t help but think of Pulp Fiction any time someone mentions GIMP, and I had an uncomfortable moment when I had to dodge explaining what a gigolo was (“it’s called gigolo because it mounts anything! haw haw!”). Gigolo was for a while installed by default and visible under that name on Xubuntu.
Lubuntu is/was problematic too with just opaque names. “I need to change my screen resolution.” “Sure, just go to LXRandR”… yeah, for someone who doesn’t know about the X Resize and Rotate protocol extension, that isn’t going to tell them anything.
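The irony is that the thing hiding behind the name is simple; LXRandR is just a front-end for the xrandr command (the output name below is machine-specific):

    xrandr                                   # list outputs and available modes
    xrandr --output HDMI-1 --mode 1280x1024  # set the named output's resolution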
Most of these things are important, but when it comes to adoption, I can’t help seeing all of these cheap Android phones with serious OS issues and wondering: “maybe it has everything to do with distribution and nothing to do with how good the OS actually is”.
One of the more annoying things I noticed when setting up and supporting Linux (CentOS in this case) for some family members is the problem with automatic updates. It doesn’t work. It seems the user has to explicitly check the box to install updates at shutdown, no matter how much I try with yum-cron. I noticed the same with Ubuntu: enabling ‘auto updates’ there just doesn’t work. Also, them turning off the laptop while updates are being installed is a recipe for disaster. There is no clear warning that the computer should NOT be turned off.
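For anyone wanting to compare notes, these are the knobs I mean; the paths are for CentOS 7 and Ubuntu, and you should verify them against your own versions:

    # /etc/yum/yum-cron.conf (CentOS 7) -- downloading alone is not enough,
    # apply_updates has to be switched on as well:
    [commands]
    download_updates = yes
    apply_updates = yes

    # /etc/apt/apt.conf.d/20auto-upgrades (Ubuntu) -- both lines are needed,
    # and the unattended-upgrades package must be installed:
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";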
there are areas where Linux has excelled other OSes: excellent package management
Third parties have to package software for a massively fragmented set of package managers, and almost all distro package managers don’t allow you to install multiple versions of the same package; that is by no means an excellent solution. I don’t understand why this claim is made in the first paragraph when further down the former problem is acknowledged. See also Linus’ rant on this topic.