1. 45
    1. 17

      Genuinely I don’t understand the point of this article.

      I would pick even gnome or kde over windows’s awful GUI (really any of the recent ones, but certainly windows 10) even if I use i3. Using windows is just… annoying… frustrating… painful… I have a top-of-the-line laptop from dell with an nvidia iGPU, 32GiB of RAM and a top-of-the-line (at the time) intel mobile-class CPU. But the machine still finds a reason to bluescreen, randomly shut down without safely powering down my VMs, break, or god knows what else, all the time. And when such a thing happens there are no options to debug it, there’s no good documentation, no idea of where to even start. I’m glad windows works for some people, but it doesn’t work for me. What wakeup call? What do I need to wake up to? I use linux among other things, it’s not perfect but for me it’s the best option.

      1. 10

        (NB: I’m the author of the article, although not the one who submitted it)

        Genuinely I don’t understand the point of this article.

        The fact that it’s tagged “rant” should sort of give it away :P. (I.e. it’s entirely pointless!)

        There is a bit of context to it that is probably missing, besides the part that @crazyloglad pointed out here. There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

        And I think one of the reasons for that is the constant software churn in the desktop space. Lots of things, including various GTK/Gnome or KDE components, ritually get torn down, burnt and rebuilt every 6-8 years or so, and at some point you just get perpetual beta fatigue. I’m not sure what else to call it. Much of it, in the last decade, has been in the name of “better” or “more modern” UX, and yet we’re not in a much better position than ten years ago in terms of userbase. Meanwhile, Microsoft swoops in and, on their second attempt, comes up with a pretty convincing Linux desktop, with a small crew and very little original thought around it, just by focusing on things that actually make a difference.

        1. 15

          I suspect that Microsoft is accidentally the cause of a lot of the problems with the Linux desktop. Mac OS, even back in the days when it didn’t have protected memory and barely qualified as an operating system, had a clear and coherent set of human interface guidelines. Nothing on the system was particularly flashy[1] and so it was hard to really understand the value of this consistency unless you used it for a few months. Small things, like the fact that you bring up preferences in every application in exactly the same way (same menu location, same keyboard shortcut), that text-field navigation with the mouse (e.g. selecting whole words) or with shortcut keys is exactly the same, and that button order is consistent in every dialog box. A lot of Windows apps brought their own widget set, in part because ‘90s Microsoft didn’t want to give away the competitive edge of Office and so didn’t provide things in the system widget set that would have made writing an Office competitor too easy.

          In contrast, the UI situation on Windows has always been a mess. Most dialog boxes put the buttons the wrong way around[2], but even that isn’t consistent and some put them the right way around. The ones that do get the order right still just put ‘okay’ and ‘cancel’ on the buttons instead of verbs (for example, on a Mac if you close a window without saving the buttons are ‘delete’, ‘cancel’, ‘save’).

          Macs are expensive. Most of the people working on *NIX desktop environments come from Windows. If they’ve used a Mac, it’s only for a short period, not long enough to learn the value of a consistent UI[3]. People always copy the systems that they’re familiar with and when you’re trying to copy a system that’s a bit of a mess, it’s really hard to come up with something better. The systems that have tried to copy the Mac UI have typically managed the superficial bits (Aqua themes) and not got any of the parts that actually make the Mac productive to use.

          [1] When OS X came out, Apple discovered that showing people the Genie animations for minimising in shops increased sales by a measurable amount. Flashiness can get the first sale, but it isn’t the thing that keeps people on the platform. Spinning cubes get old after a week of use.

          [2] Until the ‘90s, it was believed that this should be a locale-dependent thing. In left-to-right reading order, the button implying go back should be on the left and the one implying go forwards should be on the right. In right-to-left reading order locales, it should be the converse. More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think left-to-right is forwards motion, people don’t believe left-to-right is forwards because that’s the order that they’re taught to read. Getting this wrong is really glaring now that web browsers are dominant applications because they all have a pair of arrows where <- means ‘go back’ and -> means ‘go forwards’, and yet will still pop up dialogs with the buttons ordered as [proceed] [go back] as if a human might find that intuitive.

          [3] Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

          1. 13

            IMHO the fact that, despite how messy it is, the Windows UI is so successful points at something that a lot of us don’t really want to admit, namely that consistency just isn’t that important. It’s not pointless, as the original Macintosh very convincingly demonstrated, especially with users who aren’t into computers as a hobby. But it’s not the holy grail, either.

            Lots of people sneer at CAD apps (or medical apps, I have some experience with that), for example, because their UIs are old and clunky, and they’re happy to ascribe it to the fact that the megacorps behind them just don’t know how to design user interfaces for human users.

            But if they were, in fact, to make a significant facelift (flat, large buttons, hamburger menus and all), their existing users, who rely on these apps for 8 hours/day to make those mind-bogglingly complex PCBs and ICs, and who (individually or via their employers) pay those eye-watering licenses, would hate them and would demand their money back and a downgrade. A facelift that modernized the interface and made it more “intuitive”, “cleaner” and “more discoverable” would be – justifiably! – treated as a (hopefully, but not necessarily) temporary productivity killer that’s entirely uncalled for: they already know how to use it, so there’s no point in making it more intuitive or more discoverable. Plus, these are CAD apps, not TikTok clones. The stakes are higher and you’re not going to rely on gut feeling and interface discoverability; if you’re in doubt, you’re going to read the manual.

            If you make applications designed to offer a quick distraction, or to hook people and show them ads or whatever, it is important to get these things right, because it takes just two seconds of frustration for them to close that stupid app and move on – after all it’s not like they get anything out of it. Professional users obviously don’t want bad interfaces, either, but functionality is far more important to get right. If your task for the day is to get characteristic impedance figures for the bus lines on your design, and you have to choose between the ugly app that can do it automatically and the beautiful, distraction-free, modern-looking app that doesn’t, you’re gonna go with the ugly one, because you don’t get paid for staring at a beautiful app. And once you’ve learned how to do it, if the interface gets changed and you have to spend another hour figuring out how to do it, you’re going to hate it, because that’s one hour spent learning how to do something you already knew how to do, and which is not substantially different from before – in other words, it’s just wasted time.

            Lots of FOSS applications get this wrong (and I blame ESR and his stupid Aunt Tilly essay for that): they ascribe the success of some competitors to the beautiful UIs, rather than functionality. Then beautiful UIs do get done, sometimes after a long time of hard work and often at the price of tearing down old functionality and ending up with a less capable version, and still nobody wants these things. They’re still a footnote of the computer industry.

            I’ve also slowly become convinced of something else. Elegant though they may be, grand, over-arching theories of human-computer interactions are just not very useful. The devil is in the details, and accounting for the quirky details of quirky real-life processes often just results in quirky interfaces. Thing is, if you don’t understand the real life process (IC design, neurosurgery procedures, operation scheduling, whatever), you look at the GUIs and you think they’re overcomplicated and intimidating, and you want to make them simpler. If you do understand the process, they actually make a lot of sense, and the simpler interfaces are actually hard to use, because they make you work harder to get all the details right.

            That’s why academic papers on HCI are such incredible snoozefests to read compared to designer blogs, and so often leave you with questions and doubts. They make reserved, modest claims about limited scenarios, instead of grand, categorical statements about everyone and everything. But they do survive contact with the real world, and since they’re falsifiable, incorrect theories (like localised directionality) get abandoned. Whereas the grand esoteric theories of UX design can quickly weasel their way around counter-examples by claiming all sorts of exceptions or, if all else fails, by simply decreeing that users don’t know what they want, and that if a design isn’t as efficient as it’s supposed to be, they’re just holding it wrong. But because grand theories make for attractive explanations, they catch on more easily.

            (Edit: for shits and giggles, a few years ago, I did a quick test. Fitts’ Law gets thrown around a lot as a reason for making widgets bigger, because they’re easier to hit. Never mind that’s not really what Fitts measured 50 years ago – but if you bother to run the numbers, it turns out that a lot of these “easier to hit” UIs actually have worse difficulty figures, because while the targets get bigger, the extra padding from so many targets adds up and travel distances increase enough that the difficulty index is, at best, only marginally improved. I don’t remember what I tried to run numbers on, I think it was some dialogs in the new GTK3 release of Evolution and some KDE apps when the larger Oxygen theme landed – in some cases they were worse by 15%)
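
            (For anyone curious what “running the numbers” looks like: here is a minimal sketch using the Shannon formulation of the index of difficulty, ID = log2(D/W + 1). The distances and widths are made-up illustrative values, not measurements from Evolution or any KDE dialog.)

            ```python
            from math import log2

            def fitts_id(distance, width):
                """Shannon formulation of Fitts' index of difficulty, in bits."""
                return log2(distance / width + 1)

            # Hypothetical compact dialog: small buttons, short travel distances (px).
            compact = [(180, 24), (220, 24), (260, 24)]   # (distance, target width)
            # Hypothetical "modernised" dialog: bigger targets, but the extra padding
            # pushes everything further apart.
            roomy = [(320, 40), (400, 40), (480, 40)]

            for name, layout in (("compact", compact), ("roomy", roomy)):
                ids = [fitts_id(d, w) for d, w in layout]
                print(f"{name}: average ID = {sum(ids) / len(ids):.2f} bits")
            ```

            With these made-up numbers the big-button layout comes out slightly worse on average, which is the effect described above; whether that holds for any real dialog depends entirely on the measured distances and widths.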

            Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

            This isn’t limited to Apple, though, it’s been a general regression everywhere, including FOSS. I’m pretty sure you can use Planet Gnome to test hypertension meds at this point, some of the UX posts there are beyond enraging.

            1. 1

              AutoCAD did make a significant facelift, cloning the Office 2007 “ribbon” interface, also a significant facelift.

              1. 1

                AutoCAD is in a somewhat “privileged” position, in that it has an excellent command interface that most old-time users are using (I haven’t used AutoCAD in years, but back when I did, I barely knew what was in the menus). But even in their case, the update took a while to trickle down, it was not very well received, and they shipped the “classic workspace” option for years along with the ribbon interface (I’m not sure if they still do but I wouldn’t be surprised if they did).

          2. 4

            More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think left-to-right is forwards motion, people don’t believe left-to-right is forwards because that’s the order that they’re taught to read.

            Do you have a good source for this? Arabic and Hebrew are prominent (and old!) right-to-left languages; it would seem more likely (to me) that a toss of the coin decided which direction a civilization wrote rather than “left-to-right is more natural and a huge chunk of civilization got it backwards.”

        2. 2

          There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

          I think that chasing the shiny object is to blame for a lot of that. Sometimes the shiny object really is better (systemd, for all its multitude of flaws, failures, misfeatures and malfeasances really is an improvement on the state of things before), sometimes it might be (Wayland might be worth it, in another decade, maybe), and sometimes it was not, is not and never shall be (here I think of the removal of screensavers from GNOME, of secure password sync from Firefox[0] and of extensions from mobile Firefox).

          I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

          But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

          [0] Yes, Firefox still offers password sync, but it is now possible for Mozilla to steal your decryption key by delivering malicious JavaScript on a Firefox Account login. The old protocol really was secure.

          1. 3

            I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

            They are, but it’s also really disappointing. The fact that tiling a bunch of VT-220s on a monitor is substantially better than, or at least a sufficiently good alternative to, GUIs developed 40 years after the Xerox Star for so many people really says a lot about the quality of said GUIs.

            But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

            This obviously varies a lot, I don’t wanna claim that what I know is anything more than anecdata. But e.g. everyone in what used to be my local LUG has a Mac now. Some of them use Windows with Cygwin or WSL, mostly because they still use some old tools they wrote or their fingers are very much used to things like bc. I still run Linux and OpenBSD on most of my machines, just not the one I generally work on, that’s a Mac, and I don’t like it, I just dislike it the least.

        3. 1

          That churn is extremely superficial, though. I can work comfortably on anything from twm to latest ubuntu.

      2. 9

        I do have a linux machine for my work stuff running KDE. And I love the amount of stuff I can customize, hotkeys that can be changed out of the box, updates I can control etc.

        But if you get windows to run in a stable manner (look out for updates, disable fast start/stop, disable some annoying services, get a professional version so it allows you to do that, get some additions for a tabbed explorer, remove all them ugly tiles in the start menu, disable anything that has “cortana” in its name and forget windows search), then you will have a better experience on windows. You’ll not have to deal with broken GPU drivers, you’ll not have to deal with broken multi-display multi-DPI stuff, which includes no option to scale differently, display switching crashing your desktop, laptops going back to sleep because you were too fast in closing its lid on bootup when you connected an external display. You’ll not have to deal with your pricey GPU not getting used for video encoding and decoding. Browsers not using hardware acceleration and rendering 90% on the CPU. Games being broken or not using the GPU fully. Sleep mode sometimes not waking up some PCIE device, leading to a complete hangup of the laptop. So the moment you actually want to use your hardware fully, maybe even game on that and do anything that is more than a 1 display system with a CPU, you’ll be pleased to use windows. And let’s not talk about driver problems because of some random changes in linux that breaks running a printer+scanner via USB. That is the sad truth.

        Maybe Wayland will change at least the display problems, but that doesn’t fix anything regarding broken GPU support. And no matter whose fault it is, I don’t buy a PC for 1200€ just so I can watch it trying to render my desktop in 4k on the CPU, with tearing in videos and random flickering when doing stuff with blender. I’m not up to tinkering with that, I want to tinker with software I built, not with some bizarre GPU driver and 9k Stackoverflow/Askubuntu/Serverfault entries from people who all can’t do anything, because proprietary GPU problems are simply a black box. I haven’t had any bluescreen in the last 5 years except one, and that was my fault for overloading the VRAM in windows.

        And at that point WSL2 might actually be a threat, because it might allow me to just ditch linux on my box entirely and get the good stuff in WSL2 but remove the driver pain (while the reverse isn’t possible). Why bother with dual boot or two machines if you can use everything with a WSL2 setup? It might even fix the hardware acceleration problem in linux, because windows can just hand over a virtualized GPU that uses the real one underneath using the official drivers. I won’t have to tell people to try linux on the desktop, they can just use WSL2 for the stuff that requires it and leave the whole linux desktop on the side, along with all the knowledge of installing it or actually trying out a full linux desktop. (I haven’t used it at this point.) What this will do is remove momentum and eventually interest from people to get a good linux desktop up and running, maybe even cripple the linux kernel in terms of hardware support. Because why bother with all those devices if you’re reduced to running on servers and in a virtualized environment of windows, where all you need are the generic drivers?

        I can definitely see that coming. I used linux primarily pre-corona, and now that I’m home most of the time I dread starting my linux box.

        1. 1

          look out for updates

          What do you mean by this? Are you saying I should manually review and read about every update?

          disable fast start/stop

          Done

          disable some annoying services

          I’m curious which ones but I think I disabled most of them.

          get a professional version so it allows you to do that

          Windows 10 enterprise.

          get some additions for a tabbed explorer

          Can you recommend some?

          remove all them ugly tiles in the start menu, disable anything that has “cortana” in its name and forget windows search)

          Done and done and done

          broken GPU drivers

          I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or the dock stopped working after a windows update, even though they worked before it, and I had to manually update the drivers afterwards to get them working again.

          multi-display multi-DPI

          I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform and so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up being. You would need a new model for how you describe visuals on a screen which would be vector graphics oriented.

          display switching crashing your desktop, laptops going back to sleep because you were too fast in closing its lid on bootup when you connected an external display

          I have had the first one happen a few times on windows; the second issue is something I don’t run into since I don’t currently run my laptop with the lid closed while using external displays, but it’s a setup I’ve planned to move to. I’ve been procrastinating moving to it because of the number of times I’ve seen it break for coworkers (running the same hardware and software configuration). I’ve never had a display switch crash anything on linux; I have had games cause X to crash, but at least I had a debug log to work from at that point and could see whether I could do something about it.

          Games being broken or not using the GPU fully.

          Gaming on linux, if you don’t mind doing an odd bit of tinkering, has certainly been a lot less stressful than gaming on windows, which works fine until something breaks and then there’s absolutely zero information available to fix it. It’s not ideal, but I play VR games on linux, I take advantage of my hardware, and it’s a very viable platform, especially when I don’t want to deal with the constant shitty mess of windows. I’ve never heard of a game not using the GPU fully (when it works).

          So the moment you actually want to use your hardware fully, maybe even game on that and do anything that is more than a 1 display system with a CPU, you’ll be pleased to use windows.

          I use windows and linux on a daily basis. I’m pleased to use linux, I sometimes want to change jobs because of having to use windows.

          And let’s not talk about driver problems because of some random changes in linux that breaks running a printer+scanner via USB.

          Or when you update windows and your printer+scanner no longer works. My printing experience on linux has generally been more pleasant than on windows, because printers don’t suddenly become bricks just because microsoft decides to force you to update to a new version of windows overnight.

          Printers still suck (and so do scanners) but I’ve mitigated most problems by sticking to supported models (of which there are plenty of good online databases).

          1. 1

            I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform and so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up being. You would need a new model for how you describe visuals on a screen which would be vector graphics oriented.

            I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

            Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing). And obviously it’s not capable of different scaling between the HighDPI and the 1080p display. Thus it’s a blurry 2k res on a 4k display. And after logging into the 2k screen, my whole plasmashell is crashed, which is why I’ve got a bash command hotkey to restart it.

            I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or the dock stopped working after a windows update, even though they worked before it, and I had to manually update the drivers afterwards to get them working again.

            I never had any broken devices after an update or a malfunctioning system. Only one BSOD directly after an upgrade, which fixed itself with a restart.

            I’ve never heard of a game not using the GPU fully

            Nouveau is notorious for not being able to control clock speeds, and its drivers can’t use the card’s full capacity. Fixing a bad GPU driver on linux had me reinstalling the whole OS multiple times.

            1. 2

              I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

              Same experience here. I tried using a Linux + Windows laptop for 7 months or so. Windows’ mixed-DPI support is generally good, including fractional scaling (which is what you really want on a 14” 1080p screen). The exceptions are some older applications, which have blurry fonts. Mixed DPI on macOS is nearly flawless.

              On Linux + GNOME it is ok if you use Wayland and all your screens use integer scaling. It all breaks down once you use fractional scaling. X11 applications are blurry (even on integer-scaled screens) because they are scaled up. Plus rendering becomes much slower with fractional scaling.

              Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing).

              I did get it to work, both on AMD and NVIDIA (proprietary drivers). But it pretty much only works in applications that have good support for VA-API (e.g. mpv) or NVDEC, and to some extent with Firefox (you have to enable experimental options, force h.264 on e.g. youtube, and it crashes more often). With a lot of applications, like Zoom, Skype, or Chrome, rendering happens on the CPU and it blows away your battery life and you have constantly spinning fans.
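
              (For what it’s worth, the basic sanity check for the VA-API path looks roughly like this. A minimal sketch, assuming vainfo from libva-utils and mpv are installed; the file name is just a placeholder and the NVDEC path isn’t covered at all.)

              ```python
              #!/usr/bin/env python3
              """Rough check that VA-API hardware decoding is usable before playing a file."""
              import shutil
              import subprocess
              import sys

              def vaapi_usable():
                  """True if a VA-API driver loads and reports at least one profile."""
                  if shutil.which("vainfo") is None:
                      return False
                  out = subprocess.run(["vainfo"], capture_output=True, text=True)
                  return out.returncode == 0 and "VAProfile" in out.stdout

              if __name__ == "__main__":
                  video = sys.argv[1] if len(sys.argv) > 1 else "sample.mkv"  # placeholder
                  if vaapi_usable():
                      # Ask mpv to prefer VA-API; it falls back to software decoding
                      # on its own if the driver doesn't support the codec.
                      subprocess.run(["mpv", "--hwdec=vaapi", video])
                  else:
                      print("No working VA-API driver found; decoding will land on the CPU.")
              ```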

              1. 1

                Yeah, the battery stuff is really annoying. I really hope wayland will finally take over everything and we’ll have at least some good scaling. Playback in VLC works, but I actually don’t want to have to download everything just to play it smoothly, so firefox would have to work with that first. (And for movie streaming you can’t download stuff.)

            2. 1

              I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

              If you completely move a window between two displays, the problem is easy-ish to solve with some hacks; it’s easier to solve if one DPI is a multiple of the other. The issues especially occur when windows straddle the screen boundary. Try running a game across two displays on a multi-dpi setup: you will either end up with half the game getting downscaled from 4k (which is a waste of resources, and your gpu probably can’t handle that at 60fps) or you end up with a blurry mess on the other screen. When I did use multi-dpi on windows, as recently as windows 10, there were still plenty of windows core components which would not render correctly when you did this. You would either get blurriness or just text rasterization which looked off.

              But like I said, this problem is easily solved by not having a multi-dpi setup. No modern software fully supports this properly, and no solution is fully seamless, just because YOU can’t personally spot all the problems doesn’t mean that they don’t exist. Some people’s standards for “working” are different or involve different workloads.

              Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing).

              Sounds like issues with your configuration. I run 4k videos at 60Hz with HDR from a single board computer running linux, it would run at 10fps if it had to rely solely on the CPU. It’s a solved problem. If you’re complaining because it doesn’t work in your web browser, I can sympathise there, but that’s not because there’s no support for it, it’s just that by default it’s disabled (at least on firefox) for some reason. You can enable it by following a short guide in 5 minutes and never have to worry about it again. A small price to pay for an operating system that actually does what you ask it to.

              And obviously it’s not capable of different scaling between the HighDPI and the 1080p display. Thus it’s a blurry 2k res on a 4k display.

              Wayland does support this (I think), but like I said, there is no real solution to this which wouldn’t involve completely redesigning everything including core graphics libraries and everyone’s mental model of how screens work.

              Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

              And after logging into the 2k screen, my whole plasmashell is crashed, which is why I’ve got a bash command hotkey to restart it.

              Then don’t use plasma.

              At least on linux you get the choice not to use plasma. When windows explorer has its regular weekly breakage the only option I have is rebooting windows. I can’t even replace it.

              Heck, if you are still hung up on wanting to use KDE then fix the bug. At least with linux you have the facilities to do this. When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed. I don’t keep track but I’ve regularly encountered dozens of different bugs in windows over the course of using it for the past 15 years.

              I never had any broken devices after an update or a malfunctioning system. Only one BSOD directly after an upgrade, which fixed itself with a restart.

              Good for you. My point is that your experience is not universal and that there are people for whom linux breaks a lot less than windows. You insisting this isn’t the case won’t make it so.

              Nouveau is notorious for not being able to control clock speeds, and its drivers can’t use the card’s full capacity.

              Which matters why?

              If someone wrote a third party open source nvidia driver for windows would you claim that windows can’t take full advantage of hardware? What kind of argument is this?

              Nouveau is one option, it’s not supported by nvidia, and it’s no wonder it doesn’t work as well when it’s based on reverse-engineering efforts. However, this would only be a valid criticism if there were no nvidia-supported proprietary gpu drivers for linux that work just fine. If you want a better experience with open source drivers then pick hardware which has proper linux support, like intel or amd gpus. I’ve run both, and although I now refuse to buy nvidia on the principle that they just refuse to try to cooperate with anyone, it actually worked fine for over 5 years of linux gaming.

              1. 5

                I agree with a lot of your post, so I’m not going to repeat that (other than adding a strong +1 to avoiding nvidia on that principle), but I want to call out this:

                Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

                It may not be a concern to you, but that doesn’t mean it doesn’t affect others. There are many cases where you’d have displays with different densities, and two different-density monitors is just one. Two examples that I personally have:

                1. My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple do the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.
                2. I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                There are likely many other scenarios where it’s not “simply” a case of upgrading a single monitor, but also, the “Just buy [potentially very expensive thing]” argument is incredibly weak and dismissive in its own right.

                1. 1

                  My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple do the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.

                  I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

                  Really, the problem I have with this kind of criticism is that although valid, I would rather have some DPI problems and a slightly ugly UI because I had to display 1080p on a 4k display than have all the annoying problems I have with windows, especially when I have actual work to do. It’s incredibly stressful to have the hardware and software I am required by my company to use cause hours of downtime or work lost per week. With linux, there is a lot less stress, I just have to be cognizant of making the right hardware buying decisions.

                  I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                  I think you should try wayland. It can do scaling and I think I have even seen it work (about as well as multi-dpi solutions can work given the state of things).

                  If you are absolutely stuck on X there are a couple of workarounds; one is launching your drawing application at a higher DPI (see the sketch below). It won’t change if you move it to a different screen, but it is not actually that big of a hack and will probably solve your particular problem. I even found a reddit post for it: https://old.reddit.com/r/archlinux/comments/5x2syg/multiple_monitors_with_different_dpis/
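
                  A minimal sketch of that per-application trick, assuming an X11 session and a GTK 3 or Qt 5 app that honours the standard scaling environment variables; the 2x factor and the application names are just examples:

                  ```python
                  #!/usr/bin/env python3
                  """Launch a single application at its own UI scale, leaving the rest
                  of the desktop alone (the per-application workaround mentioned above)."""
                  import os
                  import subprocess
                  import sys

                  def launch_scaled(cmd, scale=2):
                      env = dict(os.environ)
                      env["GDK_SCALE"] = str(scale)            # GTK 3: integer UI scale factor
                      env["GDK_DPI_SCALE"] = str(1.0 / scale)  # keep fonts from being scaled twice
                      env["QT_SCALE_FACTOR"] = str(scale)      # Qt 5 equivalent
                      return subprocess.call(cmd, env=env)

                  if __name__ == "__main__":
                      # e.g. ./scaled.py krita -- whatever application runs on the pen display
                      sys.exit(launch_scaled(sys.argv[1:] or ["xterm"]))
                  ```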

                  The other hack is to run 2 X servers but that’s really unpleasant to work with. But since you are using a specific application on that display this may work too.

                  potentially very expensive thing

                  If you’re dealing with a work mac, get your workplace to pay for it.

                  Enterprise Windows 10 licenses cost money too, not as much as good monitors, but they’re not an order of magnitude more expensive (although I guess it depends on if you buy them from apple).

                  1. 2

                    I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

                    Once again, the “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

                    I think you should try wayland. It can do scaling

                    Wayland may be suitable in my particular case (it’s not), but it’s also not near a general solution yet.

                    If you’re dealing with a work mac, get your workplace to pay for it.

                    I was using it as an example - forget I used the word “work” and it holds just as true. My current setup is “fine” for me, but I’m not the only person in the world with a macbook, a monitor, and a desire to plug the two together.


                    The entire point of my comment wasn’t to ask for solutions to two very specific problems I personally have; it was to point out that you’re being dismissive of issues that you yourself don’t have, while also pointing out that someone else’s issues are not everyone’s. To use your own words, “My point is that your experience is not universal”.

                    1. 0

                      Once again, the “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

                      No, actually, let’s bring this thread back to its core.

                      Some strangers on the internet (not you) are telling me that windows is so great and that it will solve all my problems, or that linux has massive irredeemable problems, and then proceed to list “completely fucking insignificant” (in my opinion) UI and scaling issues, compared to my burnout-inducing endless hell of windows issues. Regarding the problems they claim windows solves: either they don’t exist on linux (so there is nothing to solve), or they are not things that windows solves to my satisfaction, or they are not things I consider problems at all (and in multiple cases, I don’t think that’s just me, I think the person is just misled as to what counts as a linux problem or just had a uniquely bad experience).

                      What’s insulting is the rest of this thread (not you): people who keep telling me how wrong I am about my consistent negative experience with windows and positive experience with linux, and how amazing windows is because you can play games with intrusive kernel-mode anti-cheat, as if not being able to run literal malware is one of the biggest problems I should, according to them, be having with linux.

                      My needs are unconventional, they are not met in an acceptable manner by windows. I started off by saying “I’m glad windows works for some people, but it doesn’t work for me.” I wish people actually read that part before they started listing off how windows can solve all my problems. I use windows on a daily basis and I hate it.

                      So really, what is “an incredibly dismissive and weak argument” is people insisting that the solutions that work for me are somehow not acceptable when I’m the only one who has to accept them.

                      I am not surprised you got turned around and started thinking that I was trying to dismiss other people’s experiences with windows and linux, because that’s what it would look like if you read this thread as me defending linux as a viable tool for everyone. It is not; I am simply defending linux as a viable tool for me.

              2. 3

                I don’t want to use things on multiple screens at the same time, I want them to be able to move across different displays while changing their scaling accordingly. And that is already something I want when connecting one display to one laptop: you don’t want your 1080p laptop screen scaled like your 1080p external display. And I certainly like writing on higher-res displays for work.

                When I did use multi-dpi on windows, as recently as windows 10, there were still plenty of windows core components which would not render correctly when you did this. You would either get blurriness or just text rasterization which looked off.

                None of which are among my daily drivers: not browsers, explorer, taskmanager, telegram, discord, steam, VLC, VS, VSCode…

                Then don’t use plasma

                And then what? i3? gnome? I could just use apple, they have a unix that works at least. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

                When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed.

                And on linux you’ll have to pray somebody hears you in the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you just start being the expert yourself, which is possible if you’ve got nothing to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem nor give you the knowledge of how to do so.

                You insisting this isn’t the case won’t make it so.

                And that’s where I’m not sure it’s worth discussing any further. Because you’re clearly down-sizing linux GPU problems to “just tinker with it/just use wayland even if it breaks many programs” while complaining about the same on windows. My experience may be different to yours, but the comments and votes here, plus my circle of friends (and many students at my department) are speaking for my experience. One where people complain about windows and hate its update policy. But love it for simply working with games(*), scaling where linux falls flat on its face, and other features. You seem to simply ignore everyone that doesn’t want to tinker around with their GPU setup. No your firefox won’t be able to do playback on 4k screen out of the box, it’ll do that on your CPU by default. We even had submissions here about how broken those interfaces are, so firefox and chrome disabled their GPU acceleration support on linux and only turned it back on for some cards after some time. Seems to be very stable..

                I like linux, but I really dread its shortcomings for everything that is consumer-facing, as opposed to servers I can hack on and forget about UIs. And I know for certain how bad windows can be. I’ve set up my whole family on linux, so it can definitely work. I only have to explain to them, again, why blender on linux may just crash randomly.

                (*) Yes, all of them, including anti-cheats, which either won’t work on linux or leave you gambling on when they will ban you. I know some friends running Hyper-V emulation in KVM to get them to run on rainbow.

                1. 1

                  taskmanager

                  The fact that taskmanager is one of your daily driver applications is quite funny.

                  … VS, VSCode

                  I certainly use more obscure applications than these, so it explains why I have more obscure problems.

                  And then what? i3? gnome? I could just use apple, they have a unix that works at least. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

                  KDE has never been the most stable option, it has usually been the prettiest though. I’m sorry about the issues you’re having but really at least you have options unlike on windows.

                  And on linux you’ll have to pray somebody hears you in the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you just start being the expert yourself, which is possible if you’ve got nothing to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem nor give you the knowledge of how to do so.

                  You have to pray someone hears you regardless. The point is that on linux you can actually fix it yourself, or switch the component out for something else. On windows you don’t have either option.

                  And then have fun bringing that fix upstream.

                  Usually much easier than trying to get someone else to fix it. Funnily enough projects love bug fixes.

                  It’s not that simple.

                  I’ll gladly take not simple over impossible any day.

                  And that’s where I’m not sure it’s worth discussing any further. Because you’re clearly down-sizing linux GPU problems to “just tinker with it/just use wayland even if it breaks many programs” while complaining about the same on windows.

                  I genuinely have not had this mythical gpu worst case disaster scenario you keep describing. So I’m not “down-sizing” anything, I am just suggesting that maybe it’s your own fault. Really, I’ve used a very diverse set of hardware over the past few years. The point I’ve been making repeatedly is that “tinkering” to get something to work on linux is far easier than “copy pasting random commands from blog posts which went dead 10 years ago until something works” on windows. When things break on linux it’s a night and day difference in debugging experience compared to windows, and you do need to know a little bit about how things work, but I’ve used windows for longer than I have used linux and I know less about how it works despite my best efforts to learn.

                  Your GPU problems seem to stem from the fact that you are using nouveau. Stop using nouveau. It won’t break anything, it will just mean you can stop complaining about everything being broken. It might even fix your plasma crashes when you connect a second monitor.

                  My experience may be different to yours, but the comments and votes here, plus my circle of friends (and many students at my department) are speaking for my experience.

                  I could also pull out a large suite of anecdotes but really that won’t make an argument, so maybe let’s not go there?

                  But love it for simply working with games(*),

                  Some games not working on linux is not a linux problem. Despite absolute best efforts by linux users to make it their problem. Catastrophically anti-consumer and anti-privacy anti-cheat solutions are not something you can easily make work on linux for sure, but I’m not certain I want it to work.

                  scaling where linux falls flat on its face

                  I’ll take some scaling issues and being able to actually use my computer and get it to do what I want over work lost, time lost and incredible stress.

                  No your firefox won’t be able to do playback on 4k screen out of the box, it’ll do that on your CPU by default.

                  Good to know you read the bit of my comment where I already addressed this.

                  Seems to be very stable..

                  Okay, at this point you’re close to just being insulting. Let me spell it out for you:

                  Needing to configure firefox to use hardware acceleration, not having a hacky automatic solution for multi-DPI on X, not being able to play games which employ anti-cheat solutions that Orwell couldn’t imagine, some UI inconsistencies, having to tinker sometimes: these are all insignificant problems compared to the issues I have with windows on a regular basis. You said it yourself: you use a web browser, two web-browser-based programs, 3 programs developed by microsoft to work on windows (although that’s never stopped them from being broken for me), a media player which statically links mplayer libraries that were not developed for windows, and a chat client. Your usecase is vanilla.

                  My daily driver for work is running VMWare Workstation with on average about 3 VMs, firefox, emacs, teams, outlook, openvpn, onenote. I sometimes also have to run a gpu-accelerated password cracker. For everything else I use a linux VM running arch and i3 because it’s so much faster to actually get shit done. Honestly my usecase isn’t that much more exciting either. I have daily issues with teams, outlook, and onenote (but those are not windows issues, it’s just that microsoft can’t for the life of them write anything that works). The windows UI regularly stops working after updates (I think this is due to the strict policies applied on the computer to harden it; these were done via group policy). The windows UI regularly crashes when connecting and disconnecting a thunderbolt dock. I have suspend and resume issues all the time, including issues where the machine will bluescreen coming out of suspend when multiple VMs are running. VM hardware passthrough has a tendency to be regularly broken, requiring a reboot.

                  To top it off, the windows firewall experience is crazy; even if it has application-level control, I still can’t understand why you would want something so confusing to configure.

                  And I know for certain how bad windows can be.

                  And I think you’re used to it, to the point that you don’t notice it. The fact that linux is bad in different ways doesn’t necessarily mean it’s as bad.

                  or you’ll gamble when they will bann you

                  Seems illegal. Maybe don’t give those companies money?

            3. 1

              All that obviously comes with the typical Microsoft problems. Like your license being bound to your account, where 2FA may even make it harder to get your account back, because apparently not using your license account primarily on windows is weird, and 2FA prevents them from “unlocking” your account again.

              The same goes for all the tracking, weird “Trophies” that are now present and stuff like that. But not having to tinker with GPU stuff (and getting a system that has no desktop anymore at 3AM) is very appealing.

              Can you recommend some?

              http://qttabbar.sourceforge.net/ works ok.
              I installed it in 2012 on windows 7, haven’t reinstalled my windows since, and the program still works except for 1-2 quirks.

    2. 7

      The fact that a substantial part of the FOSS community seriously prefers using what is effectively the Windows 1.01 interface with a few more features and anti-aliasing instead of any of the results of nearly a decade of UX-focused work in KDE, Gnome, or Cinnamon is a pretty convincing hint

      I’m not sure this person understands the motivation for using i3. I would argue it’s got nothing to do with the topic presented. This also happened on the other side while it was still viable: https://sourceforge.net/projects/blueboxshell/ for windows 2k was my barebones alternative shell for a long time.

      trying to emulate the best parts of Windows’ GUI for about twenty years now

      Grass is greener and all that… Which Windows GUI? Win32, which doesn’t fit in the modern world but is everywhere; WinForms, which is kind of the same thing but not; XAML, which “everyone” says is dead now; UWP, which is the same thing but different; WinUI, which will maybe win now? Windows has lots of its own GUI toolkit identity crisis situations.

      1. 10

        Windows has lots of its own GUI toolkit identity crisis situations.

        It does, but backwards compatibility makes a huge difference, and is a big part of the reason why there are so many apps for everything on Windows, and so few of them on Linux. Linux has had a lot of applications over the years; it’s just that they (or their dependencies) get ritually burned and abandoned every 8-10 years, so there’s a perpetual lack of availability.

        Lots of Windows applications are actually pretty old. I use IrfanView on my one Windows machine because that’s what I used 20+ years ago when I was a Windows user. It uses ye ole’ Win32 toolkit, which is maybe not the prettiest (that’s obviously subjective though, I actually like it a lot, it’s fast, responsive and efficient) but it’s there, and it has been for a long time, and it has thirty years’ worth of bugfixes at this point.

        Few Linux applications get to be 25 years old though – it takes a lot of effort just to keep up with the changes and deprecations in the GUI toolkits. Back in 2009 I wrote a small GTK app that folks continued to use at the lab where I worked at the time until last year or so. It has not received a single new feature since then and it got maybe four or five bugfixes. But I’ve pushed almost 100 commits (give or take, I’m just grepping for gtk in the commit log) just to keep it working through the GTK 3 saga. Some of these are workarounds for the more brain-damaged “features” like active window dimming, but most of them just do impedance matching.

        (Edit: this isn’t really GTK-specific. Things are a little better on the Qt side but GUI toolkits are just the tip of the iceberg).

        IrfanView has had one release a year for the longest time now. I hear things have calmed down a bit nowadays, but back in 2013, if this had been a generally-available application, and not just something a bunch of nerds in a lab use, I’d have had to make 2 or 3 maintenance releases a year just to keep the damn thing compiling.

        This has a lot of far-reaching consequences. Say, if you compile a 15 year-old codebase against the latest Windows SDK, it’s still the same 15 year-old app and it still uses ye ole’ Win32 API, but with all the fixes up to yesterday. If you want to run XMMS because that’s your thing, you can, but you’re going to compile it against unmaintained, Unicode-unaware GTK 1.9x from 15 years ago, with everything that entails.

        1. 1

          I also wrote a GTK+ program a decade ago, and GTK+ 3 mostly brought unfixable problems, offering nothing but perhaps the option to achieve height-for-width size negotiation without a (reliable) hack.

    3. 6

      This article seems to have attracted quite some controversy (and a bunch of votes for some reason!). I haven’t submitted it, but I did author it, so I’d like to add some things in order to clarify it:

      1. This is tagged rant and it is a rant. I don’t want to claim it’s anything more than that, and it’s mostly rooted in my own experience. I routinely poke fun at asking four interns to perform some tasks and rate how hard it was and calling it a usability study. This is effectively the result of one grumpy developer performing some tasks and rating how hard it was – it’s even worse, and deserves even less serious attention than that.
      2. It’s also a completely personal rant. I used FreeBSD, then Linux, for almost twenty years. I liked them. I still do (I run Linux and OpenBSD on a bunch of machines) and I want them to succeed. I still contribute to FOSS software when I have time to spare and I genuinely think keeping source code open helps write better software.
      3. The Reddit post I’m linking to (it’s written by a Gnome developer) was disappointing for me to read, primarily because I’d never refer to another programmer as a shitty anything, no matter what they write. But – and I want to point this out because Gnome gets a lot of hate – I actually think what they do is useful and important. It’s not my cup of tea. I think their UI design is bad and misguided, but that’s obviously subjective, and I think the way some of its developers treat other developers and their users is terrible. On the other hand there’s a lot of original effort going into it, and I think that’s important even if I personally don’t like the results, and the outreach programs that the Gnome foundation runs are hugely beneficial.
      1. 3

        But – and I want to point this out because Gnome gets a lot of hate – I actually think what they do is useful and important. It’s not my cup of tea. I think their UI design is bad and misguided, but that’s obviously subjective, and I think the way some of its developers treat other developers and their users is terrible. On the other hand there’s a lot of original effort going into it, and I think that’s important even if I personally don’t like the results, and the outreach programs that the Gnome foundation runs are hugely beneficial.

        This is what frustrates me too. I find Gnome interesting when they go off on their own path; it could be doing something more interesting than just the typical “idk clone Windows”. At their closest, though, they’re only on the cusp of being interesting, and at worst it feels like “what are you doing?” I still follow Gnome because they’re at least trying, good or bad. (I also wrote about this; I’m a lot more sympathetic to Gnome than most here, but I’m not blind towards the flaws either.)

    4. 4

      At least half of the article is missing – the part that would explain what there is to wake up to, or what the proper response should be after waking up. “We” are just starting to pay the price for the lack of architectural cohesion and the virtual petting zoo of half-assed IPC solutions that emerged to absorb and mask the hacks.

      The general observations are correct. “Slap a VM around it” for compatibility and letting the browser be the be-all user layer on top of the operating system is what is happening all around. The tradition of the user-facing user space and the hardware-facing kernel has been worked around by virtualization strategies from both ends, and the actual “machine” that is running things is a very different beast.

      If there’s nothing to wake up to, one might as well stay dreaming the retro-computing dream.

      1. 4

        I actually half-drafted a follow-up to this post back when it was written. I never got around to posting it. Half the time I regret posting this one, too, since it’s quite ranty, but the Internet never forgets :-).

        But the follow-up did articulate a number of ideas that I think I can distil into some basic principles. Sorry for weird/possibly pretentious presentation, these aren’t some grand claims I’m making, it’s just how I take notes.

        1. While an elegant technical solution is generally desirable for anything, in the long run, it is better to maintain and improve less-than-ideal technology that works and allows you to build good applications, than to perpetually re-invent basic technologies and rotate user-facing applications as exhausted maintainers give up.

        Case in point: the clunky Win32 API, and the plethora of alternatives that the guys in Redmond keep launching and then putting on life support, nonetheless power an extraordinarily large array of applications, whereas SourceForge hosts thousands of useful applications that could work fine today, too, but don’t even compile anymore. The app scarcity of open-source Unices is in good part self-inflicted.

        Furthermore, these technologies have a lot of quirks and are probably less than ideal in many regards. But because they have been perpetually improved, rather than repeatedly abandoned and rewritten, they are a pretty solid base. Some of the things the folks in Redmond are doing in terms of application security are amazing and more than a decade ahead of what we have in *nix land (and I remember a time when they were the laughing stock of 1337 Linux users), and that’s partly because they tried to consolidate and build on these foundations instead of trying out new ones every couple of years.

        2. It’s impossible to “empower” users without giving them stable interfaces, and simple, stable means to interconnect them, at every level.

        Interfaces that keep shuffling around aren’t just annoying to learn and re-learn (and I don’t mean just GUIs, I mean all sorts of system interfaces, too, from config files to shells). They’re also difficult to automate and script, even for experienced users, not because it’s hard to do it per se, but because you have to keep doing it. For example, half the reason why Automator scripts on macOS are so useful is that you write one once and you’re good to go for ten years. If you had to go back, debug and fix half of your scripts every couple of months, everyone would give up on them.

        (Edit: the other reason, of course, is the simplicity of the interface, quirky and occasionally inflexible though it may be – Automator isn’t really meant as a programming tool for super-experienced users after all – but the point is, if you spend more time getting things to talk to each other than teaching them what to say to each other, you’re never going to have time to achieve much.)

        3. While an attractive interface gets people interested, and a good one keeps them happy, the ultimate driver of adoption in a professional environment is functionality, not design.

        Beyond their interfaces, macOS and Windows are both so successful because, quirks aside, they are extremely capable environments, partly because they get point 2 right.

        Apps designed to be distractions need an attractive, fashionable, perpetually fresh interface that doesn’t frustrate people, because otherwise they’re just going to use some other app to look at cat pics while you show them ads. Obviously, professional [1] users need good interfaces, too, but the deciding factor is ultimately whether an application can do what you need, in the professional sense, or not. Lots of successful FOSS applications (e.g. KiCAD) don’t have particularly good UX, but they’re powerful and functional. Meanwhile, there are lots of FOSS applications with pretty UIs that nobody wants.

        Inflammatory corollary: improving the quality of an application’s interface will obviously make it better, but removing or shuffling functionality in the name of UX improvement will generally make it worse.

        4. You can’t build and maintain productive interfaces for professional [1] users using design principles meant to grab and hold the attention of novice users.

        While some general design guidelines obviously apply everywhere, you can’t just adopt design guidelines meant for chat apps on phones and hope to build a good desktop IDE by applying them.

        Two slightly inflammatory corollaries:

        4a) An interface inevitably mimics the process it interfaces with, at least to some degree.

        4b) You cannot design a good interface for a process you don’t understand.

        Quirky processes (business processes, surgery logistics and scheduling, etc.) inevitably result in interfaces that look quirky and intimidating, but adequately serve the people who understand those quirks. Attempting to simplify them so that they look more approachable to those who don’t understand them actually makes them worse for the target audience.

        [1] By “professional”, I mean people who get paid to do some particular work using that application. I don’t mean this as an implied derogatory term for the “other” people who are somehow unprofessional. It’s only meant to stress that using a program, in this case, is not a matter of fun, but of putting bread on the table, that it often carries out a responsibility to others (e.g. a physician isn’t just putting bread on his table, but also treating his patients!), and that “engagement” is inherent and doesn’t need to be driven (i.e. it’s generally expected that, if you’re an engineer, you don’t need to be convinced to use a CAD tool instead of looking at cat pics, because one of them is your job and the other one isn’t).

        Edit: I’m not entirely sure what a proper response would be, or what the right avenues for further development would look like. I’ve got some ideas sketched out, but those are in even worse shape than the ones above. Some of them are strictly on the human/general-practice side of things (e.g. a more scientific approach to design, rooted in falsifiability and detailed understanding rather than general “philosophical” principles). Others are more tech-driven (e.g. a closer mapping of interfaces to abstract reasoning processes rather than real-life metaphors like bookshelves and desks, which is far more feasible now that we have cheap-ish access to VR, haptic input devices and so on).

        1. 2

          I actually half-drafted a follow-up to this post back when it was written. I never got around to posting it. Half the time I regret posting this one, too, since it’s quite ranty, but the Internet never forgets :-).

          Worse, the web as a society latches onto rants. It’s a format that lends itself very well to “me too!” chiming in with the anger or indignation, and the noise floor rises considerably for everyone – then it upvotes that while ignoring things with more substance, lessening the motivation to put effort into the latter, and over time we get incredibly cranky.

          But the follow-up did articulate a number of ideas that I think I can distil into some basic principles. Sorry for weird/possibly pretentious presentation, these aren’t some grand claims I’m making, it’s just how I take notes.

          It is an iterative process. Raise the stakes by claiming it’s grand so you’ll try harder to defend the observations ;-)

          Abstracting over your argument(s) a bit so as not to cascade into walls of text, I bin it into building for “established legacy”, “current trends” and “future intents” – FOSS HCI in general fails at all three. The legacy is neglected and volatile. Current trends are only adopted and applied after they are already the established default elsewhere and the trend-setters are moving on. Future intents are not articulated because few have “a plan”. GNOME, to their credit, has the appearance of one. Too bad it is uninspired hogwash.

          Legacy costs to maintain and interferes with “the future”; adapting to trends requires much more robust yet flexible primitives (not qualities I’d ascribe to, say, GTK or Wayland). Future plans tend to be tied to other organisational plans and incentives – which are virtually nonexistent.

          FOSS does tend to follow and define technical trends, but not actually deliver products that blend with them. “Rewrite in the marketed trendy framework/language as a learning experience” (CV-padding) gets conflated with engineering, and now there’s yet another implementation competing for attention – with attention as its main driver, even. There are exceptions of course, Firefox and Blender for instance, though I’d put one of the two more in marble-madness-meets-Katamari-Damacy territory.

          On UI volatility: while there is a sunk-cost fallacy, there is also very much a sunk-effort value (“experience”). Messing with UIs to follow UX trends (or just to change things to reignite interest) reduces that value while banking on the sunk cost to retain users. Sadly, it tends to work, and the web of perpetual change has trained us to shut up and take it – but at least there is an opportunity: at the right time, an “Office Classic” would probably sell quite well; too bad the data is so strongly coupled to the implementation (premature composition).

          An example of us failing professionals:

          My dentist took a panoramic scan of my jaw and a high-res 3D scan of my teeth into a tool that predicts tooth movement and generates sets of 3D-printed inserts that realign teeth to counter future biomechanical stress and decay, then presented the data and the plan as an animated 3D model for inspection and discussion – and in the middle of the visit, the “Windows 7 for professionals” installation rebooted. The “oh, that happens a few times a day, no worries, the data is safe” did not inspire confidence. The scan was viewed on an off-the-shelf HP monitor that sure as hell wasn’t calibrated after leaving the factory a decade ago, and it didn’t even show the proper window contents, as someone had messed up sRGB <-> linear-RGB somewhere in the chain – I hope the precision that was lost didn’t mask something important. They had apparently tried to update, but the software refused to run, citing an “unsupported OS version”. Curiously enough, this machine was also reachable from the guest Wifi. So one thing in this story is awesome, and the rest completely shameful.
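
          (As an aside, the sRGB <-> linear mix-up above refers to the standard transfer function; here’s a minimal sketch of my own – not anything from that vendor’s software – of the two directions. Apply it twice, or skip it, and the dark tones where subtle detail lives are what suffer most:)

          ```c
          /* Standard sRGB transfer function (illustrative sketch).
             Applying it twice, or skipping it, distorts dark tones the most. */
          #include <math.h>

          double srgb_to_linear(double s)  /* s in [0, 1] */
          {
              return (s <= 0.04045) ? s / 12.92
                                    : pow((s + 0.055) / 1.055, 2.4);
          }

          double linear_to_srgb(double l)  /* l in [0, 1] */
          {
              return (l <= 0.0031308) ? l * 12.92
                                      : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
          }
          ```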

          Others are more tech-driven (e.g. a closer mapping of interfaces to abstract reasoning processes rather than real-life metaphors like bookshelves and desks, which is far more feasible now that we have cheap-ish access to VR, haptic input devices and so on).

          VR is really interesting in the professional space – you are immediately punished for creative outbursts, as there are stronger real-world assumptions that it is punishing to mess with (“you” become part of the experience, “you” are the input device). Lifting a model (an iconic representation) of something in order to inspect or expand what it represents won’t be tolerated if it pops up a browser – that’s a jump scare.

    5. 4

      The tone of this article seems to imply that if we could only get GNOME to be stable in both form and function, and not suck, then Windows users might move. And there may be a handful of power users who like that kind of paradigm for whom this is true?

      But for most users the situation is the same now as it has been for the majority of my life: the users don’t choose Windows. They know that Macs do not run Windows, and that all other computers run Windows. Their computers at school (except the Mac lab) and at work (except in certain Mac-heavy industries) run Windows. If they still own a home computer (which they do less and less), it also came with Windows. Windows is part of the computer. No amount of software development, software excellence, UX work, or stability can reach these users, and they are most of the users.

      1. 2

        No amount of software development, software excellence, UX work, or stability can reach these users, and they are most of the users.

        Not true. You are imagining “everyone”. Nobody like that exists. Make it good for a specific group and they will switch. Often due to word of mouth.

        Just stop caring about your grandma’s PC already. Care about your colleague, who could have an easier job with Blender and ffmpeg on Linux. Or your startup, with KiCad and FreeCAD doing the heavy lifting.

        The way away from MS Office is not LibreOffice; it’s a wiki, CodiMD and a good ERP solution.

        1. 3

          My grandma’s PC is easy to switch; that’s not the hard part ;)

          The point is that you can’t target improvements at a specific group if that group is made up of people who have no idea you exist, or that the OS exists, or that it could ever be replaced. Like I said, people don’t choose Windows; it’s just there.

          1. 1

            Right, but they come to you, because you are the expert. That’s the moment you can make them aware. ;-)

            1. 1

              For friends and family, yes, sure. Getting the ones who rely on me to switch has never been much of an issue.

    6. 4

      Linux does not operate, in either development or consumption, in a consumer market. It never has. It has always competed in the deeply technical, power-user, business back-office and web-stack market. There are people and businesses that market to and understand that market and use Linux to do so. Those same people have a voice and authority in the development of Linux.

      But Linux doesn’t understand the consumer market. It doesn’t compete successfully there and likely never will. It takes an enormous amount of resources to compete in the consumer PC market. There are too many variables, and creating a good user experience requires a lot of study and market research as well as a lot of marketing and training. Apple and Microsoft can afford this expense, but the various consumer-focused Linux companies cannot. You can’t “free as in beer” your way out of this one; it costs money, and no one is investing the amounts needed to make it happen. Linux is never going to be a consumer PC competitor unless MS or Apple suddenly decide they want to build a usable desktop shell for it, or some company comes along with an investor willing to put enough money into it to make it work.

      1. 9

        Linux has always been the box of parts to build a consumer product out of (i.e. Android, ChromeOS). It will never be a product in and of itself because people don’t install operating systems – except hopeless dorks like us.

    7. 3

      Second, that you see more and more laptops running things like i3 and dwm than back in 2010 – and these tools haven’t gotten any better in these ten years. The fact that a substantial part of the FOSS community seriously prefers using what is effectively the Windows 1.01 interface with a few more features and anti-aliasing instead of any of the results of nearly a decade of UX-focused work in KDE, Gnome, or Cinnamon is a pretty convincing hint that Linux’ whopping marketshare in the desktop space isn’t so whopping just because of evil Microsoft’s monopoly.

      I don’t use i3, but I do use dwm, and it doesn’t need to get any “better” because it already does exactly what I want. I am not any more effective using KDE, GNOME, or Cinnamon; quite the opposite, actually. You can discuss window managers all day, but it’s just like music taste: whatever floats your boat, and your opinion on it matters exactly nothing to me. Either way, I fail to see how the software that I run on my personal computer, used only by me, has any bearing on Linux’s or Microsoft’s market share.

      The FOSS community has been trying to emulate the best parts of Windows’ GUI for about twenty years now. It’s still pretty far away from catching up – in fact, I think that now, in 2020, it’s farther than it was in 2010.

      I used Unity for a bit a number of years ago, after my laptop broke and I was using my girlfriend’s MacBook from a bootable USB drive. I thought it was okay, and comparable to Windows 7 (albeit different, but not really “worse”). I never used Windows 8 or 10, so I can’t speak to those, but I hear a lot of people complaining about them (Windows 8 especially). “Linux on the desktop” has been fine for quite a while; even Gnome 2 was perfectly usable back in the day: my very non-technical brother would often use my Gnome 2 computer when we were still living with our parents, and he had no problems with it. That was 20 years ago. Sure, you can point at this bug or some missing feature or the like, but it’s not like Windows or macOS are perfect either. I don’t think this is where the issue is at.

      1. 3

        even Gnome 2 was perfectly usable back in the day: my very non-technical brother would often use my Gnome 2 computer when we were still living with our parents, and he had no problems with it.

        Now, if they had just left GNOME 2 alone. I think that’s a major point of the original rant. But I guess the iPad made both Microsoft and the GNOME project leaders feel, incorrectly, that if they didn’t change with the times, they’d be doomed to irrelevance.

    8. 3

      I ditched standalone Linux for WSL2 sometime last year. I can’t remember the exact reason I switched, but I knew I’d had enough of the various problems I kept encountering, and at that point I hadn’t used Windows in 5 years. Maybe I was just looking for an excuse to try something new, but WSL paved the way and I don’t see myself turning back soon. Windows has its own annoyances, but once you set it up it should be (relatively) smooth sailing.

    9. [Comment removed by author]

    10. 1

      For me, the reason I use Windows over Linux nowadays is better game support and better GPU drivers.