1. 32
    1. 16

      Things are never perfect, but even with some of the issues I ran into I’m very happy I switched. It’s hard to describe, but things feel more solid.

      Holy crap. You have to restart whatever App Store clone is fancy this season in order to use it more than once, and one of the most widely-used password managers crashes (let me guess, the crash involves Gnome’s flavour of Wayland, GTK, or both?) and it feels more solid? Are you sure you weren’t using Windows Me with a weird WindowBlinds theme before!?

      I made the switch the other way round (Linux -> macOS) two years ago. Did I already develop Apple Stockholm syndrome? Am I crazy? Is that kind of stuff normal?

      Edit: I mean please don’t get me started on macOS Ventura. I’m not trying to scoff at Linux, I’m asking if we are doomed!

      2+ years later I’m still SSH-ing into Linux boxes for a lot of development. Is this going to be my next ten years, choosing between a) using the latest breakthrough in silicon design as a glorified VT-220 strapped to an iPad or b) perpetually reliving 1999 desktop nightmares, except without Quake III Arena to make it all worth it?

      1. 13

        Sometimes I’m beginning to wonder if we neckbeards just never run into these problems because we set our ways 20 years ago and never changed. On my Ubuntu work machine some sort of graphical apt pops up from time to time (couldn’t be bothered to investigate how to turn it off), but I run my updates regularly via apt-get on the CLI. There’s no regularly crashing app besides Zoom, and I don’t hold that against any Linux distro.
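
        (For the curious, the routine in question is just something like this, Debian/Ubuntu flavoured, run as root or via sudo:)

        ```shell
        # refresh package metadata, then apply pending upgrades
        apt-get update
        apt-get upgrade
        # clean out dependencies that nothing needs any more
        apt-get autoremove
        ```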

        1. 5

          That’s kind of what I’m leaning towards, too. Gnome Software isn’t the first attempt to bolt a GUI on top of a package manager and/or an upstream software source; people have been trying to do that since the early 00s (possibly earlier, too, but I wasn’t running Linux then). At some point, after enough thrashed installs, I just gave up on them.

      2. 5

        I’ve always avoided Gnome (and PulseAudio and software like that) like the plague and it’s been the Year of Linux on the Desktop for 20+ years now and it’s generally been rock solid for me.

        At the moment I’m running Guix on an MSI Gaming laptop from two years ago with an RTX3080 and I love it. Running Steam, Lutris (Diablo 4 beta last weekend), Stable Diffusion and no crashes. 50+ day uptimes only to reboot because Guix kinda expects it.

        And of course an ideal dev machine. No Docker shenanigans like on Windows and OSX.

      3. 3

        Your response captures my own feelings on reading this.

        They jumped from the frying pan into the fire, and they’re happy about the change of scenery. They point out that some extremities are on fire, but hey, it’s different.

        It’s very odd indeed, but it’s probably part of what life is like if all this stuff is just a mystery to you and you use the built-in tools without question.

      4. 3

        I haven’t had to restart a Linux system to fix the package manager in a couple of decades, across 3 distros on which I regularly administer 15-50 systems (depending on the year). This includes systems updated daily/weekly and left running for over a year. Lately most systems get rebooted whenever a new kernel package comes along, but never any other time. Maybe the problem here is you shouldn’t be using the “whatever fancy GUI prototype is in vogue this season” and just use the default system CLI package manager.

        1. 1

          Why in the world does everyone think I’m talking about myself here and not about the original post!?

          Edit: AH! I think I get it. The “you” there is not the generic “you”: the link to that blog post was posted by the post’s author. I’m not using Gnome Software, I’m not even using a Linux desktop anymore. They are :-).

      5. 3

        You have to restart whatever App Store clone is fancy this season

        Why do you use the App Store at all? Which Linux distro are you talking about exactly? Does it not provide a CLI package manager like apt-get or yum or something?

        1. 3

          I don’t use the software app, but I might if it worked. Package names are often undiscoverable, and for whatever reason, I forget if it’s dnf list, dnf search or some other command—the GUI has a search window—nice and discoverable.

          Beyond that, if it’s crap, why do they ship the damn thing? So many Linux users proudly explain that they know better than to stand near the spike pit. I just want software that doesn’t have the spike pit.
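
          For what it’s worth, the half-remembered incantations are all real dnf subcommands, just different verbs; roughly (dnf 4 syntax, as a sketch):

          ```shell
          dnf search keepassxc       # keyword search over package names and summaries
          dnf list installed         # everything currently on the box
          dnf info keepassxc         # description, version and repo for one package
          dnf provides '*/keepassxc' # which package ships a given file
          ```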

        2. 3

          Why do you use the App Store at all?

          I don’t! In my experience, the only App Store-like thing that ever came close to working on Linux was Synaptic!

      6. 2

        Especially since KeePassXC is one of the most robust applications for me, across 3 machines and 2 operating systems. I don’t have any problems of that sort using Kubuntu as my Linux daily driver. Then again, they don’t go all-in on Wayland, and it’s not Gnome Wayland. Even though KDE has its own issues.

        1. 1

          That’s kind of what I’m surprised at, too. I’ve used it everywhere – I used it on Linux, I now use it on both macOS and Windows. It’s one of the applications I’ve never seen crash. I haven’t used it under a Wayland compositor, mind you, mostly because those tend to crash before I need to log in anywhere, hence my suspicion this is Gnome-related somehow…

          1. 2

            Randomly looked at their issues again. And snap seems to be doing its job (TM).

            1. 1

              Oh, wow, okay. I’m sorry I blamed Gnome Shell or GTK for that – they caused me the most headaches way back but I should’ve obviously realised there are worse things out there.

              I’m not even sure Snap is the worst thing here? I’ve heard – but the emphasis is on “heard”, I haven’t had to know in a while and I’m just ecstatic about it – that some KDE-related software can be hard to package due to the different lifecycles of KDE Frameworks, Apps, and Plasma. It might be a case of the folks doing the frameworks packaging getting stuck between a rock (non-KDE applications that nonetheless use kf5 & friends) and a hard place (KDE apps and Plasma).

              KDE 3.2 nostalgia intensifies

      7. 1

        I ran Fedora for 6 months and experienced this level of problems, so I switched to Mint, and it has been much better. I previously tried OpenSUSE Tumbleweed as well, didn’t like it, and concluded that running a Linux with extremely fresh packages is not for me, I want stability and “it just works”. Mint is stable and boring.

      8. 1

        You have to restart whatever App Store clone is fancy this season

        gnome-software is over 10 years old at this point, which is probably also why it has so many issues. It’s not the norm, no. The standard for most package-management GUIs has been fairly responsive: batch installs and uninstalls, etc. (e.g. synaptic for apt).

        KPXC doesn’t touch GTK at all, and runs stable under Wayland and under Gnome, at least in my case. (Fedora Silverblue with Flatpaks).

        1. 13

          It’s not the norm, no.

          I have to disagree here.

          I worked for Red Hat. I was an insider. This kind of PITA is completely 100% normal for RH OSes, but people who live in that world consider it normal and just part of life.

          I recently wrote an article about the experience – as a Linux and Mac person – of using an Arm laptop under Windows:


          I commented, at length, on the horrors of updating Windows, and said that habitual Windows users wouldn’t notice this stuff.

          Sure enough, one commenter goes “well it’s not like this on Intel Windows! It’s just you! Or it’s just Arm! It’s not like that!”

          It is EXACTLY like that but if you don’t know anything else, it’s normal.

          You say “GNOME software is over 10 years old” like that’s an excuse. It is not an excuse. It is the opposite of an excuse. At ten days old this sort of thing should not happen.

          But because GNOME 3.x is a raging dumpster fire of an environment, lashed together in Javascript, and built on a central design principle of “look how others do this and do it differently”, GNOME users have forgotten what a stable solid reliable desktop even feels like, and feel that something a decade old will naturally barely work any more because the foundations have been ripped out and rebuilt half a dozen times since then, the UI guidelines replaced totally 3 times, the API changed twice a year as if that is normal.

          It is not normal. This is not right. This is not OK.

          Graphical desktops are simple, old, settled tech, designed in the 1970s, productised in the 1980s, evolved and polished to excellence by the 1990s. There is quite simply no legitimate excuse for this stuff not being perfect by now, implemented in something rock-solid, running lightning fast in native code, with any bugs discovered and fixed decades ago.

          Cross-platform packaging was solved in the 1980s. Cross platform native binaries were a thing by a third of a century ago. “Oh but this is a new field and we are learning as we go” is not an excuse.

          As Douglas Adams put it:

          “Well, you’re obviously being totally naive of course,” said the girl, “When you’ve been in marketing as long as I have you’ll know that before any new product can be developed it has to be properly researched. We’ve got to find out what people want from fire, how they relate to it, what sort of image it has for them.”

          The crowd were tense. They were expecting something wonderful from Ford.

          “Stick it up your nose,” he said.

          “Which is precisely the sort of thing we need to know,” insisted the girl, “Do people want fire that can be applied nasally?”

          This is, in a word, such an utterly bogus and ludicrous response that anyone should be ashamed to offer it.

          “It’s nearly a decade old so of course it doesn’t work” is risible.

          The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.

          1. 6

            Graphical desktops are simple, old, settled tech, designed in the 1970s, productised in the 1980s, evolved and polished to excellence by the 1990s

            I’m sorry, but modern requirements have changed this. Some of those changes are so hard to put into the old codebases that people started rewriting them. HighDPI, multi DPI (fractional scaling), HDR support, screen readers, touch and its UI change requirements, security (hello X11, admin popups..), direct rendering vs “throw some buttons on there”, screen recording. Sure, it’s no excuse to have a buggy mess, but it’s not like you could just throw Windows 2000 (or similar) on a current system and call it a day. You’ll have a hard time getting any of the modern requirements I mentioned integrated.

            1. 4

              I don’t really see how that invalidates any part of my comment, TBH.

              Desktops are not unique to Linux. Apple macOS has a “desktop”. They call it the “Finder,” because in around 2000 the NeXTstep desktop was rewritten to resemble the classic MacOS desktop which was actually called the Finder.

              But the NeXTstep desktop, which used to be called Workspace IIRC, has been around since 1989.

              I am using it right now. I have two 27” monitors. One’s a built-in Retina display, which at 5120x2880 is quite HighDPI, and the other one is an older Thunderbolt display, which at 2560x1440 is higher DPI than most of my other screens. Everything looks identical on both my displays, they are both smooth and crisp, and if I drag a window from one to the other, both halves of the window are the same size as I move it even while it’s straddling the displays.

              This is 34 year old code. Over a third of a century. 35 if you count the first NeXT public demo of version 0.8 in 1988.

              Windows has a desktop, called Explorer. It is basically the same one that shipped on Windows 95. It’s 28. Again, Windows 10 and 11, both currently shipping and maintained, can both handle this with aplomb. Took ’em a while to catch up to macOS but they got there.

              If GNOME can’t do this properly and well, if this means constant rewrites and functionality being dropped and then reimplemented that means that the GNOME team are doing software development wrong. KDE is a year older than GNOME and I have tried it on a HiDPI display, this month, and it worked fine.

              1. 6

                I don’t think it’s fair to include pre-OpenStep versions of NeXTSTEP, because the addition of the Foundation Kit was a pretty fundamental rewrite. Most of the NX GUI classes took raw C strings in a bunch of places. So most of this code is really only 28 years old.

                To @proctrap’s point, there have been some fundamental changes. OpenStep had resolution independence through its PostScript roots and adding screen reader support was a fairly incremental change (just flagging some info that was already there), but CoreAnimation was a moderately large shift in rendering model and is essential for a modern GUI to efficiently use the GPU. OPENSTEP tried very hard to avoid redrawing. When you scrolled, it would copy pixels around and then redraw. It traded this a lot against memory overhead. It used expose events to draw only the area that had been exposed, so nothing needed to keep copies of bits of windows that were hidden. When you dragged a window, you got a bunch of events to draw the new bits (it actually asked for a bit more to be drawn than was exposed so that you didn’t get one event per pixel). With CoreAnimation’s layer model, each view can render to a texture and these live on the GPU. GPUs have a practically infinite amount of RAM in comparison to the rather modest requirements of a 2D UI (remember, OPENSTEP ran on machines with 8 MiB of RAM, including any buffering for display) and so you avoid any redraw events for expose; you only need to redraw views whose contents have changed or which have been resized. For things with simple animation cycles (progress indicators, glowing buttons, whatever), the images are just cycled on the GPU by uploading different textures.

                Text rendering is where this has the biggest impact. On OPENSTEP, each glyph was rasterised on the CPU directly every time it was drawn. On OS X (since around 10.3ish), each glyph in a font that’s used is rendered once to a cache on the GPU and composited there. This resulted in a massive drop in CPU consumption (it’s why you could smooth scroll on a 300 MHz Mac), which translated to lower power consumption on mobile (compositing on the GPU is very cheap, it’s designed to composite hundreds of millions of triangles, the thousands that you need for the GUI barely wake it up).

                That said, Apple demonstrated that you can retrofit most of these to existing APIs without problems. A lot of software written for OpenStep can be built against Cocoa with some deprecation warnings but no changes. Updating it is usually fairly painless (the biggest problem is that the nib format changed and so UIs need redrawing; Xcode can’t import NeXT-era ones).

                If GNUstep had gained the traction that GTK and Qt managed, the *NIX desktop would have been a much more pleasant place.

                1. 1

                  I defer on the details here, inasmuch as I am confident you’ve forgotten more about NeXTstep and its kin than I ever knew in my life.

                  But as you say: old stuff still works. Yes, it’s been rewritten and extended substantially but it still works, as you say better than ever, while every 6 months or so there are breaking changes in GNOME and KDE, as per the messages about KeePassXC upthread from here.

                  It is not OK that they still can’t get this stuff right.

                  I don’t know where to point the finger. Whenever I even try, big names spring out of the woodwork to deny everything and then disappear again.

                  I said on the Reg that WSL is a remote cousin of the NT POSIX personality. Some senior MS exec appears out of nowhere to post that, no, WSL is a side-offshoot of Android app support. They’re adamant and angry.

                  I request citations. (It’s my job.)

                  Suddenly an even more senior MS exec appears with tons of links that aren’t Googleable anywhere to show that WSL1 is the Android runtime with the Android stuff switched out.

                  What this really said to me: “we don’t have any engineers who understand the POSIX stuff enough to touch it any more, so we wrote a new one. But it wasn’t good enough, so now, we just use a VM.”

                  It is documented history that MS threatened to sue Red Hat, SUSE, Canonical and others over Linux desktops infringing MS patents on Win95. They did. MS invented Win95 out of whole cloth. I watched, I ran the betas, I was there. It’s true.

                  So SUSE signed and the KDE juggernaut trundled along without substantial changes.

                  RH and Canonical said no, then a total rewrite of GNOME followed. Again, historical record. Canonical tried to get involved; GNOME told them to take a hike. Recorded history. Shuttleworth blogged about it. So GNOME did GNOME 3, with no Start menu, no system tray, no taskbar, and they’re still frantically trying to banish status icons over a decade later.

                  Canonical, banished, does Unity. There’s a plan: run it on phones and tablets. It’s a good plan. It’s a good desktop. I still use it.

                  I idly blog about this, someone sticks it on HN and suddenly Miguel de Icaza pops up to deny everything. Some former head of desktops at Canonical no one’s ever heard of pops up to deny everything. No citations, no links, no evidence, everyone accepts it because EVERYONE knows that MS <3 LINUX!

                  It’s Wheeler’s “We can solve any problem by introducing an extra level of indirection,” only now, we can solve any accusation of fundamental incompetence by introducing an extra level of lies, FUD and BS.

                  1. 2

                    It is not OK that they still can’t get this stuff right.

                    Completely agreed.

                    Suddenly even more senior MS exec appears with tons of links that isn’t Googleable anywhere to show that WSL1 is Android runtime with the Android stuff switched out.

                    The latest version of the Windows Kernel Internals book has more details on this. The short version is that the POSIX and OS/2 personalities, like the Win32 one, share a lot of code for things like loading PE/COFF binaries and interface with the kernel via very similar mechanisms. WSL1 used a hook that was originally added for Drawbridge called ‘picoprocesses’. The various personalities are all independent layers that provide different APIs to the same underlying functionality, but they’re also completely isolated. One of the reasons that the original NT POSIX personality was so useless was that there was no way of talking to the GUI and very limited IPC, so you couldn’t usefully run POSIX things on Windows unless you ran only POSIX things.

                    In contrast, picoprocesses provided a single hook that allowed you to create a(n almost) empty process and give it a custom system call table. This is closer to the FreeBSD ABI layer than the NT personality layer, but with the weird limitation that you can have only one. The goal for WSL wasn’t POSIX compatibility, it was Linux binary compatibility. This meant that it had to implement exactly the system call numbers of Linux and exactly the Linux flavour of the various APIs. This was quite a different motivation. The POSIX personality existed because the US government required POSIX support as a feature checkbox item, but no one was ever expected to use it. The support in WSL originally existed to allow Windows Phone to run Android apps and was shipped on the desktop because Linux (specifically, not POSIX, *BSD, or *NIX) had basically won as the server OS and Microsoft wanted people to deploy Linux things in Azure, and that’s an easier sell if they’re running Windows on the client. Unfortunately, 100% Linux compatibility is almost impossible for anything that isn’t Linux and so WSL set expectations too high and people complained when things didn’t work (especially Docker, which depends on some truly horrific things on Linux).

                    They’re surprisingly different in technology. The Win32 layer has more code in common with the POSIX personality than WSL does.

                    What this really said to me: “we don’t have any engineers who understand the POSIX stuff enough to touch it any more, so we wrote a new one. But it wasn’t good enough, so now, we just use a VM.”

                    Modifying the old POSIX code into a Linux ABI layer would have been very hard. Remember, this was a POSIX layer that still used PE/COFF binaries, used DLLs injected by the kernel for exposing a system-call interface, and so on. It also hadn’t been updated for recent versions of Windows and depended on a lot of things that had been refactored or removed.

                    The thing that made me sad was that they didn’t just embed a FreeBSD kernel in NT and use the FreeBSD Linux ABI layer. The license would have permitted it and they’d have benefitted from starting with something that was about as far along as WSL ever got and had other contributors.

                    RH and Canonical said no, then a total rewrite of GNOME followed. Again, historical record. Canonical tried to get involved; GNOME told them to take a hike. Recorded history. Shuttleworth blogged about it. So GNOME did GNOME 3, with no Start menu, no system tray, no taskbar, and they’re still frantically trying to banish status icons over a decade later.

                    I only vaguely paid attention to that drama, but from the perspective of someone trying to create a GNUstep-based DE at the time, it looked more like Mac-envy than MS-fear: GNOME 3 and Unity both seemed like people trying to copy OS X without understanding what it was that made OS X pleasurable to use and without any of the underlying technology necessary to be able to implement it.

                    I idly blog about this, someone sticks it on HN and suddenly Miguel de Icaza pops up to deny everything.

                    I was really surprised at the internal reactions when MdI joined Microsoft. The attitude inside the company was that he’s a great leader in the Linux desktop world and it’s fantastic that he’s now helping Microsoft make the best Linux environments and it shows how perception of Microsoft has changed. My recollection of his perception from the F/OSS desktop community (before I gave up, ran OS X, and stopped caring) was that he was the guy that never met a bad MS technology that he didn’t like and tried to force GNOME to copy everything MS did, no matter how much of a bad idea it was. The rumour was that he’d applied to MS and been rejected and so made it his mission to create his own MS-like ecosystem that he could work on.

                    EVERYONE knows that MS <3 LINUX!

                    Pragmatically, MS knows that Linux brings in huge amounts of money to Azure, and that Linux (Android) brings in a huge amount of money to the Office division. And MS (like any other trillion-dollar company) loves revenue. Unfortunately, in spite of being one of the largest contributors to open source, only a few people in the company actually understand open source. They think of open source as being an ecosystem of products rather than a source of disruptive technologies.

                    P.S. When are you going to write an article about CHERIoT for El Reg?

          2. 4

            ‘The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.’

            This. I’m sad to say that there are still some bugs in XFCE, but none that I encounter on a daily basis and generally fewer in each release. I haven’t understood why people think GNOME is a good idea since their 2.x releases.

            I’ve been waiting for Wayland to mature and I’m still not really seeing signs of it.

            Every Debian upgrade from stable to new stable is smoother than the last one, modulo specific breaking changes which are (a) usually well documented, (b) aren’t automatable because they require policy choices, and (c) don’t apply to new installs at all, which are also smoother and faster than they used to be.
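
            For reference, the stable-to-stable procedure being praised here is roughly the one from the Debian release notes; a sketch using the bullseye-to-bookworm names as an example (check the actual release notes for your versions before running anything):

            ```shell
            # fully update the current release first
            apt update && apt upgrade
            # point the sources at the new release
            sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
            apt update
            # minimal upgrade pass first, as the release notes recommend
            apt upgrade --without-new-pkgs
            # then the full upgrade, and reboot into the new kernel
            apt full-upgrade
            reboot
            ```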

            1. 2

              why people think GNOME is a good idea

              I would actually recommend it for some people, since it’s looking pretty good (unlike XFCE), has some good defaults and doesn’t come with the amount of options that KDE has. (And I haven’t had any breakage on LTS Ubuntu with Gnome desktops.) I prefer KDE, but I wish I had recommended Gnome to some people in my family. (I gave them KDE back then, as it more closely resembles the Windows 7 start menu.) But you don’t change the desktop of someone who is over 80 years old. Even if their KDE usage ends up spawning 4 virtual desktops, with 10 Firefox windows, 2 taskbars and 2 start menus. Apparently they like it that way.

              1. 3

                GNOME is pretty. Its graphics design is second-to-none in the Linux world, and it pretty much always has been, since the Red Hat Linux era.

                It’s therefore even more of a shame that, to me, it’s an unusable nightmare of a desktop environment.

                KDE, which is boldly redefining “clunky” and “overcomplicated”, is at least minimally usable, but it is, IMHO, fugly and it has been since KDE 2.0.0. And I wrote an article on how to download, compile and install KDE 2.0.0. Can’t remember for whom now; long time ago.

                (When RH applied the RHL 9 Bluecurve theme to KDE, I have never ever seen KDE look so pretty, before or since.)

                Xfce is plain, but it’s not ugly. You can theme it up the wazoo if you want. I don’t want. I leave it alone. But that pales into utter insignificance because it works.

            2. 2

              Thank you!

              Sometimes I feel like it’s just me. I really do appreciate this feedback.

          3. 1

            It’s not the norm, no.

            I have to disagree here. […] This kind of PITA is completely 100% normal for RH OSes […] It is not normal. […]

            Confusing structure.

            You wouldn’t use synaptic, as I mentioned as an example of something more normal, on a RH OS.

            The correct answer is “it is nearly a decade old, so now it is tiny, blisteringly fast, and has absolutely no known bugs”.

            It clearly wouldn’t be the correct answer because that contains a lie?

            1. 4

              I do not think that you understood what I was saying here. I am making extensive use of irony and sarcasm in order to try to make a point.

              Confusing structure.

              I am saying that problems like those described are normal for RH products and people using the RH software ecosystem.

              Then I continue to say that these things are not normal for the rest of the Linux world.

              In other words, my point is that these things are normal for RH, and they are not normal for Linux as a whole.

              In my direct personal experience as a former RH employee, a lot of RH people are not aware of the greater Linux world and that other distros and other communities are not the same, and that often, things are better in the wider Linux world.

              I am sorry that this was not clear. It seemed clear to me when I wrote it.

              It clearly wouldn’t be the correct answer because that contains a lie?

              Again, you are missing the point here.

              I am saying “the correct answer,” as in, this is how things should be.

              In other words, I am saying that in a more normal, sane, healthy software ecosystem, the correct answer ought to be that after over a decade of biannual releases, which means over 20 major versions, something should have improved and be better than it ever was.

              In a normal healthy project, after 12 years and 44 versions, a component should be completely debugged, totally stable, and then have had 5-10 years to do fine-tuning and performance optimisation.

              (I will also note that 2 major releases per year for a decade = 20 major releases. For a healthy software project, you do not need to obfuscate this by, in this example, redefining the minor version as the major version at version 3.40, so that version 3.40 is now called version 40 and from then on everyone pretends that minor versions are major versions.)

              (BTW “obfuscate” is a more polite way of saying “tell a lie about”.)

              I am not saying “GNOME Software is written in native code, is bug free and performance optimised”.

              I am saying “GNOME Software OUGHT TO BE native code, bug free and performance optimised by now.”

              Is that clearer now?

              1. 1

                Then I continue to say that these things are not normal for the rest of the Linux world.

                Which is what I already said, with an example from the rest of the Linux world, so I don’t understand why you say you disagree with me on that topic. Hence my confusion.



                1. 1

                  So, from your quoted reply, you are saying that:

                  10 years of development against moving targets (app store trends, flavor-of-the-year GTK api, plugin based package management, abstraction based package management, back to plugin based package management)

                  … justifies it hanging? That this is understandable and acceptable given the difficult environment?

                  1. [Comment removed by author]

        2. 7

          gnome-software is over 10 years old at this point, which is probably also why it has so many issues.

          I have clearly developed Stockholm syndrome because IMHO ten year-old software should not have so many issues :-D. Software that’s been maintained for ten years usually gets better with time, not worse. This isn’t some random third-party util that’s been abandoned for six years, Gnome Software is one of the Core Apps.

          1. 3

            To elaborate further: that’s 10 years of development against moving targets (app store trends, flavor-of-the-year GTK api, plugin based package management, abstraction based package management, back to plugin based package management)

            Similarly, Servo easily hitting speed achievements that Firefox struggles to achieve.

            1. 1

              Right, I can see why it reads that way, but I didn’t mean that as a jab specifically at the code in Gnome Software. Its developers are trying to solve a very complicated problem and I am well aware of the fact that the tech churn in the Linux desktop space is half the reason why the year of Linux on the desktop is a meme.

              I mean that, regardless of the reason why (and I’m certainly inclined to believe the churn is the reason), the fact that ten years of constant and apt maintenance are insufficient to make an otherwise barebones piece of software work is troubling. This is not a good base to build a desktop on.

    2. 4

      Is Firefox no longer offered via apt in Ubuntu? Literally all of the problems the author describes with Firefox are due to snap’s attempt to make the system more secure by isolating Firefox. If Firefox is isolated from the system then some integrations will have difficulties. Unfortunately we’re not yet at the point where we can pop up a permission dialog at runtime like on Android. Moving from snap (trying to isolate things securely) to flatpak (no extra security mechanisms) is a comparison that is valid only if Ubuntu is forcing users to use the snap versions of the packages (is this the case?).

      1. 10

        Is Firefox no longer offered via apt in ubuntu?

        Oh, you’re gonna love this. On Ubuntu 22.10, apt install firefox installs a transitional package that installs the snap package. I know this because, unrelated to its other shenanigans, Canonical mysteriously manages to ship Multipass, the only macOS virtualization tool that lets me run Linux and doesn’t make me want to tear my eyes out in rage, so when I’m not SSH-ing into a remote box, I run Ubuntu. apt install firefox takes at least 30 minutes to complete. I don’t even know why; I never bothered to troubleshoot it. I saw the word “snap” on the screen, hit Ctrl-C after I lost patience, re-created the VM instance because that rendered it unbootable, and installed Firefox from Mozilla’s PPA.

        1. 3

          TIL Multipass. This looks way easier than clunking around with Parallels when I just need to quickly run a build. Thanks!

          1. 4

            You’re welcome! I have no idea why Canonical isn’t making a bigger deal out of it, it’s so smooth it’s amazing. I ran into it completely by accident, back when the M1 was still pretty early and I just needed a Linux box to cross-compile some ARM stuff.

            All the magic I ever needed to cast on it was some pf voodoo because I use a VPN on one of my machines and it insisted on bridging to my Wi-Fi adapter, with predictably poor results. It’s still more magic than I ever want to do but it’s a one-time hurdle which I don’t mind that much.

        2. 3

          It gets worse. A while ago, I posted instructions on how to remove snap Firefox and install Firefox properly from the Mozilla PPA [1]. You wouldn’t believe what happened next: somehow, some Ubuntu auto-updates ran, and they reinstalled the snap Firefox over my apt Firefox, and I was suddenly on an older Firefox version and my Firefox profiles wouldn’t open [2]! Imagine my surprise when I suddenly had an empty, old Firefox one day when I used my computer.

          I was so furious, I removed snap entirely from my Linux installation and will make sure that I never use anything that has to do with snap ever again.

          [1]: https://lobste.rs/s/2gzopp/ubuntu_bungled_firefox_snap_package#c_imi1af

          [2]: Profiles can never be opened by an older Firefox than they were created / modified with

    3. 4

      I love that the author thinks macOS software hit rock bottom in 2015.

      1. 15

        I do enjoy the subgenre of article that’s “yeah, I switched away from Mac, it’s not as good as it used to be” and then proceeds to talk about how many issues (some papercuts, some just very obviously not good) they run into with desktop Linux, but continues to stick with it anyways despite the glaring flaws.

        1. 9

          I think I’m one of these people, having switched from Linux -> macOS -> Linux again. To me, there’s one thing that makes this worth it. If something’s broken and it really annoys me, I can fix it myself on Linux, but on macOS I don’t really have any other choice but to wait and hope.

          1. 10

            This is my concern. I have a lot less patience for troubleshooting and fixing things myself than I once did. But it still feels better than just sitting around hoping Apple will do the things I want them to. Especially when the things I want them to do don’t support their business model.

            I know that a lot of things are better today: near ubiquitous internet connectivity, all-day (and longer!) batteries, beautiful screens, tremendous power and capacity. But for all of the advances in hardware and connectivity, we’re on a constant treadmill of rewrites and do-overs that prevents software from ever reaching a reasonable state of reliability and stability.

            Modern computering makes me sad.

          2. 5

            I don’t see it as being very different on Linux. Even understanding the code of GNOME or GTK sufficiently to be able to find an issue is hard. Once that’s done, upstreaming a fix so that the fix isn’t broken in the next minor update is hard for an outsider. And even then, the thing that you fixed is likely to be thrown away and replaced with something differently annoying in the next major release. ‘You can fix it’ is true if you have the resources of Red Hat or Canonical, but not so much otherwise.

    4. 3

      The solution is to kill the gnome-software process if you want to use ‘Software’ more than once per session. Apparently this has been an issue for at least 12 months.

      Feel like this was an issue for me back in 2018 and stuck with me until I left fedora.
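        For what it’s worth, the workaround described in the quoted text amounts to something like this sketch (gnome-software keeps a background service process running after you close its window, which is why a plain relaunch isn’t enough):

        ```shell
        # Workaround sketch: kill the stuck background gnome-software process so
        # the Software app can be launched again this session. "|| true" keeps
        # the command from failing when the process isn't actually running.
        killall gnome-software || true
        ```

        This only papers over the bug for the current session; the process comes back the next time the app (or its background service) starts.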

    5. 3

      I think Linux is in a pretty good place when it comes to distros these days. We have the luxury of being extremely fickle in our distro choice, there are a ton of viable options. Even if distros are 99% the same software through a different package manager or release cycle we might nitpick on a technical issue that matters to us (every distro has something to pick on) or just go on the vibe of the company producing it or the community. There’s so much competition, it’s great.

      I’m on Ubuntu and I’ve disabled snap and use X11. I’ll probably do Pop!_OS next, just because it doesn’t have snap and has some more keyboard shortcuts by default. That’s literally it, it saved me a couple of clicks. I used Fedora in the past, but setting up a bluetooth headset was annoying because of codecs. That’s all it took to try something else. It’s still a great distro.

      The distros are great. Don’t get me started on the Wayland transition though! ;)

      1. 4

        I’ll probably do Pop!_OS next,

        Look, I don’t want to be mean here, but Pop OS – I refuse to use their ludicrous styling – is the same OS.

        GNOME is GNOME. Slap a skin on it, theme it a bit, bung in some extensions, it’s still GNOME.

        Ubuntu is Ubuntu. Every Ubuntu downstream is still Ubuntu.

        Ubuntu is somewhat different from Debian to the extent that something different based on Debian is slightly different from something based on Debian.

        Switching from one GNOME-on-Ubuntu remix to another GNOME-on-Ubuntu remix isn’t changing distros. It’s the same distro with its hair tied back and a different colour of eyeshadow.

        It’s not even as radical as a haircut.

        If you want a change of distro, something different, then move to Alpine with Xfce or Arch with KDE or something. Then learn to tweak it so it’ll work how you like. That’s changing distro.

        More to the point, try working on FreeBSD for a year and then you will appreciate why I say that one GNOMEbuntu is the same as every GNOMEbuntu.

        And you’ll come away going “wow, this is so refreshing, I didn’t know that I could have something that was like half the size and twice the speed and it’s easy!”

        1. 1

          Pop isn’t super different from Ubuntu, but from my personal user perspective a lot more things just “work out of the box.” It has the gnome plugins I would normally install. Nvidia graphics cards work. Snap packages are not the default. Pop feels as different from Ubuntu as Ubuntu does from Fedora for me (I currently run Pop and Fedora on my machines because I always have trouble with snap packages).

          1. 1

            Remarkable. This is so very different from my own experience.

            What sort of problems do you experience with Snaps? I often hear this, but I don’t experience any myself, and usually people can’t point to blog posts or bug reports of specific issues.

            1. 1

              They don’t have the same permissions as other apps. So customizing Gnome via Firefox wasn’t possible when I tried (this might have been fixed since then). Other users ran into other issues as well https://evertpot.com/firefox-ubuntu-snap/

              What really rubbed me the wrong way about this was that even trying to install Firefox with apt would install the snap version, not the .deb version. So customizing Gnome the way Gnome recommends became a small headache of debugging the issue, trying to circumvent snap, and then having to debug why apt was STILL installing a snap package.

              1. 1

                That’s a very well-known issue. It’s not a permissions issue. Snaps are sandboxed, by design.

                I thought this, and how to solve it, were really well-known. You surprise me by raising one of the oldest and simplest-to-workaround issues.

                The “official” way: install the GNOME extensions manager. apt install gnome-shell-extension-manager

                Unofficial: download Chrome; dpkg -i google-chrome-$version; use it.

                Better: install native Firefox. https://www.omgubuntu.co.uk/2022/04/how-to-install-firefox-deb-apt-ubuntu-22-04
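                The linked article’s approach boils down to an apt pin so the PPA’s deb outranks the snap transitional stub. A sketch, assuming the mozillateam PPA (verify the origin name and priority against the article for your release; the file is written to the current directory here for illustration only):

                ```shell
                # Sketch: prefer the Mozilla team PPA's firefox deb over Ubuntu's
                # snap transitional package. In a real setup you'd first run
                # "sudo add-apt-repository ppa:mozillateam/ppa" and put this file
                # in /etc/apt/preferences.d/ instead of the working directory.
                cat > mozilla-firefox.pref <<'EOF'
                Package: firefox*
                Pin: release o=LP-PPA-mozillateam
                Pin-Priority: 1001
                EOF
                ```

                With the pin in place, `sudo apt install firefox` resolves to the PPA build instead of pulling in snapd.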

      2. 3

        I switched to Pop!_OS from Ubuntu about 3 years ago and haven’t looked back. The hardware support is great, and I am excited to see what they’re building in Rust for the new COSMIC Desktop. I hope you find it as productive and enjoyable as I do!

    6. 2

      What about servers? Would people here prefer Ubuntu over Fedora for their VPS servers?

      1. 19

        Just use Debian

        1. 1

          Not long-term enough. My “disposable” servers, aka web/irc/random, are usually Debian, but I hate redoing mail servers, so there Ubuntu LTS comes in handy. Only doing that dance every 5 years is great.

          1. 12

            Debian supports every release in LTS for about 5 years: https://wiki.debian.org/LTS

            Buster was initially released in July 2019, and will leave LTS in June 2024.

          2. 5

            Yeah, no.

            If your installation isn’t a few decades old, that’s not long-term.


          3. 3

            Set up your config in Nix and then you can literally make the ISO with the whole system preconfigured (assuming you don’t need to deal with partitioning). If ‘redoing’ is difficult, then make it reproducible so it’s not difficult.

            1. 1

              You’re missing the point, but it’s my fault for not making it clear. It can only be reproducible if you stick to the same software stack, or sometimes the same major version.

              I choose LTS for mail servers because I decide on something and then only apply security updates for a few years; I am not switching to newer versions, which is why I don’t see how Nix would bring any benefit. Yes, I admit that this is a pet and not cattle, but I’m not an enterprise; I have a mail server for 4 people. This is how I want to run it. What you proposed would work for what I said about the “disposable” servers, e.g. web. I can probably stomach an Apache 2.2 -> 2.4 upgrade every decade, but it’s a completely different problem.

              Also, this is more of a feeling, but I don’t have high confidence that the Nix demographic would backport security fixes for whatever-smtpd 2.x from 2023 until 2028. I’m kinda confident I would be able to get 3.x and 4.x during those years, and pretty fast.

              1. 1

                Nix expects you to stay mostly on track. There is certainly backporting for security, but they mostly want you on the latest version. The way the modules are set up, though, they really help mitigate the entropy issues of a typical Linux setup where configs grow stale and the machine falls over. The modules will either abstract away and migrate for you, or be explicit about deprecation, so you will not be able to upgrade without putting your config in a working state again to move from Apache 2.2 to 2.4 if the underlying configuration changed. Also, if a config file does break, you can reboot into an older working version. If you keep the state in a VCS, then it’s trivial to spin up that exact state on a different machine too (minus the stateful parts like the actual mail).

                Personally I would prefer doing regular upgrades here and there and making sure I’m kept up-to-date than languishing on a stale version (I understand it involves more maintenance).

      2. 9

        OpenBSD all the way. It just works!

        1. 1

          Unless it connects over Bluetooth. Then it won’t, probably ever, because having support for the one industry-wide wireless-peripheral standard is apparently less important than being secure so they removed the entire subsystem.

          I am not mocking. This really happened.

          I do not build distros, and I am very happy to say that I don’t build or run servers any more, but given what OpenBSD is and does, how come projects like m0n0wall, pfSense, or OPNsense don’t use it? Isn’t this exactly the sort of thing it should be ideal for?

          1. 1

              Why would you use Bluetooth on a server?

              They make opinionated choices, and security is their highest priority. I like that.

            1. 1

              Well, as an example, before I deployed an OS on a server, I’d want to be very familiar with it. Run it for a while first. Try it in a VM, maybe run it on a desktop for a while.

              I have machines that only have a Bluetooth mouse. Frankly, if an OS says “we support machine X” – as for example, OpenBSD says it supports M1 Macs – then I’d expect that everything on the machine worked. So for instance if I had an M1 Macbook Air with a Magic Mouse and Magic Keyboard (I don’t, but I’ve owned both and didn’t like them and sold them), I’d expect them to work.

              But they won’t.

              For clarity, I have reviewed OpenBSD:



              This is the sort of question I got asked, on the Reg and on Twitter and so on.

              It is a real issue and it really does affect people.

              If you had, say, only got a Bluetooth keyboard and mouse on a Mac, which many many Mac owners do, then it won’t work, even as a server.

              So, yes, I think this kind of thing does matter, and matter a lot.

      3. 6

        Just use OpenBSD.

        1. 1

          See my reply to @frign above.

      4. 1

        Seems like an inconveniently short release cycle for a server. I’m running openSUSE leap/tumbleweed on all my (Linux) VPS servers now and I have no complaints. Ubuntu is “fine” but snapper was not really a selling point for me.

      5. 1

        UI is out of the picture for a server, so it boils down to the Linux distro one is comfortable with. Package manager familiarity and related plumbing scripts also play a role in the overall comfort aspect. Other than that, all Linux distros are created equal, IMO.

    7. 2

      I had a similar feeling with out-of-the-box Ubuntu a few years ago and made the switch to Pop!_OS - so far, so happy! I got the batteries-included feeling that I got from Ubuntu back in the day, which I valued over a lot of other distros. It hasn’t been all plain sailing - a few upgrades broke some drivers along the way - but the defaults have been sensible and Wayland has been all good in the hood for me.

    8. 1

      I have been a long time *buntu (and Debian) user and thought about making the switch over to Fedora for my personal laptop. The thing is, when you live inside an ecosystem for so long, it’s hard to really feel the benefits of change. My tooling would be the same, my GNOME desktop configuration and shell extensions would be the same. My tmux/bash/nvim configuration would be the same. So really, what is the point of making the move?

      I get the sense from this post that it’s like “I wanted to move, and I did, and it was fine.” Which is great, but I don’t get a sense of any actual benefit of the move. No more Snap, I guess?

    9. 1

      I think in the vast majority of cases Debian over Ubuntu is a better choice for servers and desktops: LTS for servers, the Testing branch for desktops.

      Knowing a bit about Shuttleworth and his organization, it’s hard not to see him as anything other than a grifter.

    10. 1

      For the ffmpeg issue and the stability issues, I’d recommend having a look at the flatpak versions available (and throw in flatseal to manage permissions, if you want).

      Though I can definitely understand an aversion to them if you had to deal with snaps prior.
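      For context, Flatseal is essentially a GUI over flatpak’s per-app override keyfiles; the same permission tweaks can be made from the CLI with `flatpak override`. A sketch with an assumed app ID (the keyfile is written to the current directory here purely for illustration; flatpak itself keeps them under ~/.local/share/flatpak/overrides/):

      ```shell
      # Equivalent CLI for the kind of change Flatseal makes, e.g.:
      #   flatpak override --user --filesystem=~/Videos org.mozilla.firefox
      # The resulting per-app override keyfile looks like this (app ID assumed):
      cat > org.mozilla.firefox <<'EOF'
      [Context]
      filesystems=~/Videos;
      EOF
      ```

      Flatseal just makes these keyfiles browsable and editable with toggles instead of flags.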

      Different experience set since I’m on Silverblue, but yes, gnome-software is… definitely something. I found it helpful to get multiple coffees during its first sync after installing the OS.

      Most Gnome apps are getting a more individual look in 44 (with some cute third-party apps too, like Amberol and Clapper).

    11. 1

      As someone who last used desktop Ubuntu in 2012, the comments about Ubuntu and Canonical were interesting to read – although I can’t help thinking they sound quite similar to the things we were all saying about Ubuntu back then. Interfering with upstream inevitably causes some problems, but it’s also central to Canonical’s whole philosophy: Ubuntu is designed as a product, and the software it distributes should fit into that goal. That’s both the reason I started using it in 2010 when I installed Linux for the first time, and the reason I got rid of it in 2012 when Unity annoyed me and I realised I could do what I wanted more easily with other distributions which don’t “refine” the experience quite so much.

      The section about Fedora, however, seems to me like it could apply to any number of Linux distributions which ship a decent set of GNOME packages – it’s more of an Ubuntu+GNOME vs. upstream GNOME comparison than anything else. Not that that’s a bad thing, I just think it says far more about Ubuntu than it does about Fedora or any other distribution.