I really need to revisit Plasma. I moved to i3 and then sway something like 10 years ago, and once in a while I feel nostalgia for non-fullscreen windows. :)
Why would one want non-fullscreen windows in Plasma o_O?
Best workflow ever :-)
Depends a lot on the machine and, I guess, the programmer :-P.
When I’m writing (code or otherwise) I sometimes do it on a small-ish laptop screen, 13”/15” depending on the laptop. Full-screen windows are cool for that. But if I’m also working on or with a schematic, or referencing multiple documents, I plug in an external monitor.
That external monitor is 27”. At that size, full-screen windows suck for anything other than reading large schematics.
I really like how well KDE handles multi-window workflows (although I really miss the window tabs KDE had way back :-( ).
I have definitely had a very good experience with KDE over the past year. It does not look as nice as GNOME, but I can do the stuff I need to do with it, and the Plasma desktop environment mostly behaves in ways that make sense. (The screenshot tool in KDE is quite nice and functional, if pretty ugly. GNOME’s looks way nicer and improves with every release, of course.)
We had a very unfortunate thing happen with desktop environments, with both major DEs going through very painful transitions at the same time. GNOME has done a lot of good work improving things, but so has KDE.
We had a very unfortunate thing happen with desktop environments,
It wasn’t just the DEs, either. “Desktop Linux” just generally seemed to get worse for a long time after 2010-ish. And by “worse” I’m not talking about nerdy philosophical holy wars (systemd, etc), but I just mean getting things to work quasi-correctly. Some of it was “growing pains”, like when I had to do arcane rituals to simultaneously appease PulseAudio apps and ALSA apps, but some of it was just too much stuff all breaking in a short timespan. We had NetworkManager always doing something funny, Xorg changes + Wayland stuff, libinput with weird acceleration, polkit, udev, PAM, AppArmor, etc, etc, all changing or being rewritten or replacing each other.
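(For the curious, the PulseAudio/ALSA ritual usually boiled down to something like the following in `~/.asoundrc`, routing every plain-ALSA client through PulseAudio via the ALSA pulse plugin – assuming you had the plugin package installed and had found the right incantation that week:)

```
# ~/.asoundrc -- send ALSA clients to PulseAudio instead of the raw hardware
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```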
Desktop Linux was nuts for a while. And, honestly, with the whole Snap vs Flatpak vs whatever ecosystems, I feel like we’re still doomed. Mostly because all of the app-containerization formats absolutely suck. Badly. They all take up way more space, eat up way more resources when running, take longer to launch, etc. I know and understand that maintaining a massive repository of all supported software for an OS is kind of crazy and seems unsustainable, but these technologies are just not the answer. When will software devs learn that “computers are fast” is actually a lie used to justify lazy programming? </pedestal>
It wasn’t just the DEs, either. “Desktop Linux” just generally seemed to get worse for a long time after 2010-ish. And by “worse” I’m not talking about nerdy philosophical holy wars (systemd, etc), but I just mean getting things to work quasi-correctly.
Some two years ago, when I was still going through my inner “fuck it, I’m getting a Mac – holy fuck I’m not spending that kind of money on a computer!!!! – but I should really… – LOOK AT THE PRICE TAG MAN!!” debates, a friend of mine pointed out that, you know, we all look at the past through rose-tinted glasses, lots of things broke all the time way back, too.
A few days afterwards we were digging through my shovelware CDs and, just for shits and giggles, I produced a Slackware 10 CD, which we proceeded to install on an old x86 PC I keep for nostalgia reasons. Slackware 10 shipped with KDE 3.2.3, which was still pretty buggy and not quite up to the “golden” 3.5 standard yet.
Man, it’s not all rose-tinted glasses; that thing was pretty solid. Two years ago I could still break the Plasma desktop just by staring at it menacingly – like, you could fiddle with the widgets on the panel for a bit and have it crash or resize them incorrectly, drag a network-mounted folder to the panel to iconify it and then get it to freeze at login by unplugging the network cable, or get System Settings and/or KWin to grind to a halt or outright crash just by installing a handful of window decoration themes.
Then again, the tech stack underneath all that has grown tremendously since then. Plasma 5 has the same goals on paper but it takes a lot more work to achieve them than it took back in 2004 or whatever.
I love this anecdote, and I’ve had similar experiences.
I’m a software dev these days, myself, and I’ve always been a free software fan/advocate, so I don’t want to shit on anyone’s hard work–especially when they are mostly doing it for free and releasing it to the world for free. But, I do wonder where things went wrong in the Desktop Linux world.
Is it that the “modern” underlying technologies (Wayland, libinput, systemd, auth/security systems, etc.) are harder to work with than the older stuff?
Is it that modern hardware is harder to work with (different sleep levels, proprietary driver APIs, etc)?
Is it just that there’s so much MORE of both of the above to support, and therefore the maintenance burden increases monotonically over time?
Or is it just the age-old software problem of trying to include the kitchen sink while never breaking backwards compatibility so that everyone is happy (which usually ends up with nobody happy)?
Again, I appreciate the work the KDE devs do, and I’m really glad that KDE and Plasma exist and that many people use their stuff and are happy with it. But I will state my uninformed speculation as a fellow software dev: I suspect that the vast majority of bugs in Plasma today are a direct result of trying to make the desktop too modular and too configurable.

The truth is that the desktop pieces generally need to know about each other, so that the desktop can avoid being configured into a bad state, and so that widgets can adapt themselves when something else changes, e.g., the containing panel resizes, the screen size changes, etc. Obviously Plasma does have mechanisms in place for these things, and I don’t know what those mechanisms are (other than that it probably uses D-Bus to publish event messages), so this is just speculation, but I imagine that the system for coordinating changes and alerting all of the different desktop parts is simultaneously more complex and more limited than it would be if the whole desktop were more tightly integrated.

I strongly suspect that Plasma architected itself with a kind of traditional, Alan Kay-ish, “object oriented” philosophy: everything is an independent actor that communicates via asynchronous messages and can be added and removed dynamically at runtime. I’m sure that the idea was to maximize flexibility and extensibility, but I also think that the cost of that approach is more complexity, and that it’s basically impossible to figure out what will actually happen in response to a change. Not to mention that most of this stuff is (or was the last time I checked, at least) written in C++, which is not the easiest language to do dynamic stuff in.
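For what it’s worth, the failure mode I’m imagining can be sketched with a toy pub/sub bus. To be clear, all the names here are invented and this is not Plasma’s actual architecture – it’s just the general pattern: each widget only sees its own events, one handler’s reaction can queue further events, and nothing checks the global result.

```python
# Toy "independent actors + async messages" bus, NOT Plasma's real design.
# Handlers for the same event run in subscription order, and a handler can
# publish follow-up events, so the overall outcome of one change is hard
# to predict from any single component's point of view.
from collections import defaultdict, deque

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)   # topic -> list of callbacks
        self.queue = deque()            # pending (topic, payload) events

    def subscribe(self, topic, cb):
        self.subs[topic].append(cb)

    def publish(self, topic, payload):
        self.queue.append((topic, payload))

    def run(self):                      # deliver events until the queue drains
        log = []
        while self.queue:
            topic, payload = self.queue.popleft()
            for cb in list(self.subs[topic]):
                log.append(cb(payload))
        return log

bus = Bus()

def clock(size):                        # shrinks its font to fit the panel
    return f"clock: font -> {size // 4}"

def tray(size):                         # reacts by publishing ANOTHER event
    bus.publish("icons-reflow", size // 2)
    return f"tray: reflow requested at {size}"

bus.subscribe("panel-resized", clock)
bus.subscribe("panel-resized", tray)
bus.subscribe("icons-reflow", lambda n: f"icons: {n}px cells")

bus.publish("panel-resized", 48)
result = bus.run()
print(result)
```

Scale that up to dozens of widget types, third-party plugins, and runtime add/remove, and “what happens when the panel resizes” stops having a single answer.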
I suspect that the vast majority of bugs in Plasma today are a direct result of trying to make the desktop too modular and too configurable.
I hear this a lot but, looking back, I really don’t think it’s the case. KDE 3.x-era was surprisingly close to modern Plasma and KDE Applications releases in terms of features and configurability – not on the same level but also not barebones at all, and was developed by fewer people over a much shorter period of time. A lot of it got rewritten from the ground up – there was a lot of architecture astronautics in the 4.x series, so a couple of Plasma components actually lost some features on the way. And this was all happening back when the whole KDE codebase was a big unhappy bunch of naked C++ – it happened way before Qt Quick & co.
IMHO it’s just a symptom of too few eyes looking over code that uses technology developed primarily for other purposes. Back in the early ‘00s there was money to be made in the desktop space, so all the cool kids were writing window managers and whatnot, and there was substantial (by FOSS standards of the age) commercial backing for the development of commercially-viable solutions, paying customers and all. This is no longer the case. Most developers in the current generations are interested in other things, and even the big players in the desktop space are mostly looking elsewhere. Much of the modern Linux tech stack has been developed for things other than desktops, too, so there’s a lot of effort to be duplicated at the desktop end (eh, Wayland?), and modern hardware is itself a lot more complex, so it just takes a lot more effort to do the same things well.
Some of the loss in quality is just inherent to looking the wrong way for inspiration – people in FOSS love to sneer at closed platforms, but they seek to emulate them without much discrimination, including the bad parts (app stores, ineffective UX).
But I think most of it is just the result of too few smart people having to do too much work. FOSS platforms were deliberately written without any care for backwards compatibility, so we can’t even reap the benefits of 20+ years of maintenance and application development the way Windows (and, to some extent, macOS) can.
I hear this a lot but, looking back, I really don’t think it’s the case. KDE 3.x-era was surprisingly close to modern Plasma and KDE Applications releases in terms of features and configurability
It was very configurable, yes. But, I was speaking less from the lens of the user of the product, and more from the software architecture (as I came to understand it from blog posts, etc). I don’t know what the KDE 3.x code was like, but my impression for KDE/Plasma 4+ was that the code architecture was totally reorganized for maximum modularity.
Here’s a small example of what I mean from some KDE 4 documentation page: https://techbase.kde.org/Development/Architecture/KDE4/KParts. This idea of writing the terminal, text editor, etc as modular components that could be embedded into other stuff is an example of that kind of thinking, IMO. It sounds awesome, but there’s always something that ends up either constraining the component’s functionality in order to stay embeddable, or causing the component to not work quite right when embedded into something the author didn’t expect to be embedded in.
Back in the early ‘00s there was money to be made in the desktop space, so all the cool kids were writing window managers and whatnot, and there was substantial (by FOSS standards of the age) commercial backing for the development of commercially-viable solutions, paying customers and all. This is no longer the case.
Is that correct? My understanding was that a good chunk of the GNOME leadership were employed by Red Hat. Is that no longer the case? I don’t know the history of KDE and its stewardship, but if Novell or SUSE were contributing financially to it and now no longer are, I could see how that would hurt the person-power of the project.
Some of the loss in quality is just inherent to looking the wrong way for inspiration – people in FOSS love to sneer at closed platforms, but they seek to emulate them without much discrimination, including the bad parts (app stores, ineffective UX).
I definitely agree with this. That’s actually one reason why I tune out the GNOME Shell haters. It’s not that I don’t have some of my own criticisms about the UI/UX of it, but I really appreciate that they tried something different. Aside: And as someone who has worked on Macs for 10 years, it blows my mind when people say that GNOME Shell is at all mac-like; the workflow and UX has almost nothing in common with macOS except for the app-oriented super-tab switcher.
Here’s a small example of what I mean from some KDE 4 documentation page: https://techbase.kde.org/Development/Architecture/KDE4/KParts. This idea of writing the terminal, text editor, etc as modular components that could be embedded into other stuff is an example of that kind of thinking, IMO.
Uhh… it’s been a while so I don’t remember the details very well but KDE 3 was definitely very modular as well. In fact KParts dates from the 3.x series, not 4.x: https://techbase.kde.org/Development/Architecture/KDE3/KParts. KDE 4.x introduced a whole bunch of new things that, uh, didn’t work out well for a while, like Nepomuk, and changed the desktop shell model pretty radically (IIRC that’s when (what would eventually become) Plasma Shell came up). Some frameworks and applications probably went through some rewrites, some were abandoned, and things like DCOP were buried, but the overall approach to designing reusable frameworks definitely stayed.
Is that correct? My understanding was that a good chunk of the GNOME leadership were employed by Red Hat. Is that no longer the case? I don’t know the history of KDE and its stewardship, but if Novell or SUSE were contributing financially to it and now no longer are, I could see how that would hurt the person-power of the project.
I think Red Hat still employs some GNOME developers. But Canonical no longer has a desktop team IIRC, Ximian is pretty much gone, Nokia isn’t pouring money into desktop/mobile Linux technologies, etc. Pretty much all the big Linux players are mostly working on server-side technologies or embedded deployments.
I definitely agree with this. That’s actually one reason why I tune out the GNOME Shell haters.
I don’t really mind Gnome Shell, Linux always had all sorts of whacky “desktop shell” thingies. However, I really started to hate my Linux boxes starting with GTK3.
I dropped most of the GTK3 applications I was using and took a trip down memory lane compiling Emacs with the Lucid toolkit. But it wasn’t really avoidable on account of Firefox. That meant I had to deal with its asinine file picker dialog, the touch-sized widgets on a non-touch screen, and that awful font rendering on a daily basis. Not having to deal with that more than justifies the money I spent on my Mac; hell, I’d pay twice that money just to never see those barely-readable Adwaita-themed windows again. *sigh*
Uhh… it’s been a while so I don’t remember the details very well but KDE 3 was definitely very modular as well.
Fair enough. I definitely used KDE 3 a bit back in the day, but I don’t remember knowing anything about the development side of it. I could very well be mistaken about KDE 4 being a significant push toward modularity.
I was a desktop linux user from about 2003 until 2010 or so, going through a variety of distros (Slackware, Gentoo, Arch) and sticking with Ubuntu since 2006ish.
At one point I got tired of how much worse things were getting, especially for laptop users, and switched to a Mac. I’ve used Macs as my main working computers since then and mostly only used Linux on servers, RPis, etc.
About 3 years back, before the new ARM-based Macs came out, I was a bit fed up with my work computer at the time, an Intel-based Mac, being so sluggish, so I decided to try out desktop Linux again (with whatever the current Ubuntu was at the time) on my Windows desktop PC, which is mostly just a gaming PC.
I could replicate my usual workflow, especially because I never depended too much on Mac-specific apps, but even on a desktop machine with two screens, the overall experience was just… not great.
The number one thing that irked me was dealing with my two screens, which have different resolutions and different DPIs. The desktop UI for it just literally didn’t work, so I had to deal with xrandr commands that ran on desktop start, and apparently this is “normal” and everyone accepted it as being OK. And even then I could never get it exactly right, and sometimes it would mess up to the point that the whole display server needed a restart.
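For anyone who hasn’t lived this: the workaround was a startup script along these lines. The output names and modes below are made up for illustration – run `xrandr --query` to find your real ones – with `--scale` used to bring the external monitor to roughly the same effective DPI as a HiDPI laptop panel:

```shell
#!/bin/sh
# Hypothetical xrandr-at-login script for a mixed-DPI dual-monitor X11 setup.
# eDP-1 / HDMI-1 and the modes are assumptions; check `xrandr --query`.
INTERNAL=eDP-1     # 4K laptop panel, kept at native resolution
EXTERNAL=HDMI-1    # 27" 1440p monitor, upscaled 1.5x (2560x1440 -> 3840x2160)
                   # to roughly match the laptop's effective DPI
CMD="xrandr --output $INTERNAL --mode 3840x2160 --pos 0x0 --output $EXTERNAL --mode 2560x1440 --scale 1.5x1.5 --pos 3840x0"
# DRY_RUN (the default here) just prints the command instead of touching
# the display; set DRY_RUN=0 to actually run it.
if [ "${DRY_RUN:-1}" -eq 1 ]; then
    echo "$CMD"
else
    $CMD
fi
```

And of course, getting the `--pos` offsets and scale factors to line up exactly (so the mouse doesn’t hit an invisible wall between screens) was its own adventure.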
Other than that, many of these modern web-based desktop apps just have all sorts of issues with different DPIs and font rendering.
I thought: how were all of these things still such a massive issue? Especially the whole screen thing, with laptops being the norm over the last 15 years and people often using external screens that probably have a different DPI from their laptop anyway?
Last year I decided to acquire a personal laptop again (for many years I only had work laptops) and I thought I’d have a go at a Framework laptop, and this time I thought I’d start with Kubuntu and KDE, as I’d also briefly tried modern KDE on an Asahi Linux installation and loved it.
KDE seems to handle the whole multiple display/DPI thing a lot better, but still not perfectly. The web-based desktop app and font rendering issues were somehow still there, but not as bad (I did read about some Electron bugs that got fixed in the meantime).
And then I dove into the whole Snap/Flatpak thing, which I was kind of unfamiliar with, not having used desktop Linux for so many years. And what a mess! At multiple points I had multiple instances of Firefox running and it took me a while to understand why. Some apps would open the system Firefox, others would go for the containerized one.
I get why these containerized app ecosystems exist, but in my limited experience with it the interoperability between these apps seems terrible and it makes for a terrible user experience. It feels like a major step back for all the improvements and ease of use Desktop Linux made over the years.
I did also briefly try the latest Ubuntu with GNOME, and the whole dual-screen DPI situation was just as bad as before, I’m guessing related to the whole fractional scaling thing. Running stuff at 100% was fine but too small, 200% fine but too big, 150%? A blurry mess. KDE deals fine with all those in-between scales.
My other spicy opinion on breakage is that Ubuntu not doing rolling releases holds back everything, because bug fixes take too long to get in front of users.
“I don’t want updates to break things.” OK, well, now every bug fix takes at least 6 months to get released. And is bundled with 10 other ones.
I understand the trade-offs being made, but imagine a world in which bug fixes show up “immediately”.
And it still is, depending on the distro (obviously, you can manually compile/manage packages with any distro, but some distros make that an officially supported approach).
I agree that Ubuntu’s release philosophy isn’t great, but in its defense, bug fixes are not blocked from being pushed out as regular updates in between major releases.
What I think the big problem with Ubuntu’s releases is, is that there used to be no real distinction between “system” stuff and “user applications”. It’s one thing to say “Ubuntu version X.Y has bash version A.B, so write your scripts targeting that version.” It’s another to say “Ubuntu version X.Y has Firefox version ABC.” Why the hell wouldn’t “apps” always just be rolling-release style? I do understand that the line between a “system thing” and a “user-space thing” is blurry and somewhat arbitrary, but that doesn’t mean that giving up is the right call.
To be fair, I guess that Ubuntu’s push toward “snap” packages for these things does kind of solve the issue, since I think snaps can update independently.
It wasn’t just the DEs, either. “Desktop Linux” just generally seemed to get worse for a long time after 2010-ish.
That’s part of why I landed on StumpWM and stayed. It’s small, simple, works well for my use cases, and hasn’t experienced the sort of churn and CADT rewrites that have plagued others.
Moving to FreeBSD as my daily driver also helped there, because it allowed me to nope out of a lot of the general desktop Linux churn.
I switched to Plasma from GNOME because I was tired of my customizations getting obliterated all the time. I also like the fact I can mess with the key combinations in many more apps, since my muscle memory uses Command, not Control. Combined with a couple add-ons and global menu, I’ve never looked back.
I have to admit that I’m a little bit of a Plasma “hater”. I haven’t tried it in three or four years at this point (I remember when I thought keeping the same desktop settings for six months was a long time!).
But, every time I try Plasma, I end up disappointed at how many paper cuts there always are. There were always too many configuration options–and I don’t say that as a “user”, I say that as a developer. They obviously cannot do good QA on all of the different combinations of settings, so there was always something that didn’t work right the minute you deviated from the default configuration. I remember trying to do a vertical panel, but some of the (official, first-party) panel widgets didn’t resize/reorient correctly. I remember the main panel disappearing fairly unpredictably when I’d un/plug my laptop from my two monitors, which would also screw up the hacky super-key-to-open-main-menu feature. Sometimes the KRunner pop up would be in the wrong place if I moved the panel away from its default place at the bottom of the screen. And there were always countless other things. Every time I tried it, I hoped that the experience would be more polished, but it never was over the course of years (I tried every version from 4.0 to 5.10-ish).
Again, the last time I tried was several years ago now, so I guess it’s time to try again with an open mind. But, it’s really hard to not be skeptical/cynical.
One other thing that always bothered the hell out of me was how much it barfed config files all over our XDG_CONFIG_HOME directory. I get that the KDE applications are their own things, but why does every single Plasma desktop component also need its own top-level config directory? I hope they changed that, but I doubt it.
I remember the main panel disappearing fairly unpredictably when I’d un/plug my laptop from my two monitors,
This one got fixed, I definitely used to get that a while back, but now the panel stays put.
One other thing that always bothered the hell out of me was how much it barfed config files all over our XDG_CONFIG_HOME directory.
I think that’s still the case; there’s a hodge-podge of unintuitively named config files. What’s more, the configs are still a mix of actual configuration and volatile state, so they aren’t easily stored in a git repo. Hopefully that changes someday.
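For anyone who still wants to keep their configs in a dotfiles repo, here’s a sketch of the workaround I end up wishing were unnecessary: filter state-ish keys out of an rc file before committing. The key names below are invented for illustration – real KDE rc files have their own keys (and nested `[Group][Subgroup]` headers), so you’d build your own denylist:

```python
# Split an INI-ish rc file into durable settings vs. volatile state, so
# only the settings half goes into a dotfiles repo. VOLATILE_KEYS is a
# made-up denylist; inspect your own rc files to build a real one.
VOLATILE_KEYS = {"Width", "Height", "State", "Timestamp", "RecentFiles"}

def split_rc(text):
    keep, state = [], []
    for line in text.splitlines():
        key = line.split("=", 1)[0].strip()
        # Section headers and comments have no "=", so they always stay.
        target = state if "=" in line and key in VOLATILE_KEYS else keep
        target.append(line)
    return "\n".join(keep), "\n".join(state)

sample = """[General]
Theme=Breeze
Width=1280
Timestamp=1699999999
[Shortcuts]
Overview=Meta+W
"""
settings, state = split_rc(sample)
print(settings)
```

Of course, this is exactly the kind of thing the files’ own format should make unnecessary by keeping state in a separate directory in the first place.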
There were always too many configuration options–and I don’t say that as a “user”, I say that as a developer.
Entirely agree. The customisations are only what some KDE dev somewhere wants, and the UI on them is terrible.
I want real deep customisation, and I want it by direct interaction not dialog boxes.
If I want to move my panel, and I do, I want to just drag it, not pick an option or try to click on a little target.
I want deep customisation. I want lots of vertical space, because everyone has widescreens now. That means a vertical panel, but with horizontal contents. That means setting the size of the start button. That means vertical window title bars, like wm2. KDE either can’t do this or buries it half a dozen levels deep in dialog boxes.
KDE copied the design of Win98, badly.
I want a simple copy of Win95, without all the Active Desktop garbage.
I really need to revisit Plasma, I’ve moved to i3 then sway something like 10 years ago, and once in a while I feel nostalgia for not fullscreen windows. :)
Why would one want non fullscreen windows in plasma o_O?
Best workflow ever :-)
Depends a lot on the machine and, I guess, the programmer :-P.
When I’m writing (code or otherwise) I sometimes do it on a small-ish laptop screen, 13”/15” depending on the laptop. Full-screen windows are cool for that. But if I’m also working on or with a schematic, or referencing multiple documents, I plug in an external monitor.
That external monitor is 27”. At that size, full-screen windows suck for anything other than reading large schematics.
I really like how well KDE handles multi-window workflows (although I really miss the window tabs KDE had way back :-( ).
I have definitely had a very good experience with KDE over the past year. It definitely does not look as nice as GNOME, but I can do the stuff I need to do with it, and the plasma desktop environment is mostly things that make sense (screenshot tool in KDE is quite nice and functional, if pretty ugly. GNOME’s one looks way nicer and improves on every release, of course)
We had a very unfortunate thing happen with desktop environments, with both major DE going through very painful transitions at the same time. Gnome is doing a lot of good work improving things, but so has KDE.
It wasn’t just the DEs, either. “Desktop Linux” just generally seemed to get worse for a long time after 2010-ish. And by “worse” I’m not talking about nerdy philosophical holy wars (systemd, etc), but I just mean getting things to work quasi-correctly. Some of it was “growing pains”, like when I had to do arcane rituals to simultaneously appease PulseAudio apps and ALSA apps, but some of it was just too much stuff all breaking in a short timespan. We had NetworkManager always doing something funny, Xorg changes + Wayland stuff, libinput with weird acceleration, polkit, udev, PAM, AppArmor, etc, etc, all changing or being rewritten or replacing each other.
Desktop Linux was nuts for a while. And, honestly, with the whole Snap vs Flatpack vs whatever ecosystems, I feel like we’re still doomed. Mostly because all of the app-containerizations absolutely suck. Badly. They all take up way more space, eat up way more resources when running, take longer to launch, etc. I know and understand that maintaining a massive repository for all supported software for an OS is kind of crazy and seems unsustainable, but these technologies are just not the answer. When will software devs learn that “computers are fast” is actually a lie used to justify lazy programmers? </pedestal>
Some two years ago, when I was still going through my inner “fuck it, I’m getting a Mac – holy fuck I’m not spending that kind of money on a computer!!!! – but I should really… – LOOK AT THE PRICE TAG MAN!!” debates, a friend of mine pointed out that, you know, we all look at the past through rose-tinted glasses, lots of things broke all the time way back, too.
A few days afterwards we were digging through my shovelware CDs and, just for shits and giggles, I produced a Slackware 10 CD, which we proceeded to install on an old x86 PC I keep for nostalgia reasons. Slackware 10 shipped with KDE 3.2.3, which was still pretty buggy and not quite the “golden” 3.5 standards yet.
Man, it’s not all rose-tinted glasses, that thing was pretty solid. Two years ago I could still break Plasma desktop just by staring at it mencingly – like, you could fiddle with the widgets on the panel for a bit and have it crash or resize them incorrectly, drag a network-mounted folder to the panel to iconify it and then get it to freeze at login by unplugging the network cable, or get System Settings and/or Kwin to grind to a halt or outright crash just by installing a handful of window decoration themes.
Then again, the tech stack underneath all that has grown tremendously since then. Plasma 5 has the same goals on paper but it takes a lot more work to achieve them than it took back in 2004 or whatever.
I love this anecdote, and I’ve had similar experiences.
I’m a software dev these days, myself, and I’ve always been a free software fan/advocate, so I don’t want to shit on anyone’s hard work–especially when they are mostly doing it for free and releasing it to the world for free. But, I do wonder where things went wrong in the Desktop Linux world.
Is it that the “modern” underlying technologies (wayland, libinput, systemd, auth/security systems, etc) are harder to work with than the older stuff?
Is it that modern hardware is harder to work with (different sleep levels, proprietary driver APIs, etc)?
Is it just that there’s so much MORE of both of the above to support, and therefore the maintenance burden increases monotonically over time?
Or is it just the age-old software problem of trying to include the kitchen sink while never breaking backwards compatibility so that everyone is happy (which usually ends up with nobody happy)?
Again, I appreciate the work the KDE devs do, and I’m really glad that KDE and Plasma exist and that many people use their stuff and are happy with it… But…, I will state my uninformed speculation as a fellow software dev: I suspect that the vast majority of bugs in Plasma today are a direct result of trying to make the desktop too modular and too configurable. The truth is that the desktop pieces generally need to know about each other, so that the desktop can avoid being configured into a bad state, or so that widgets can adapt themselves when something else changes, e.g., containing panel resizes, screen size changes, etc. Obviously Plasma does have mechanisms in place for these things, and I don’t know what those mechanisms are (other than that it probably uses DBUS to publish event messages), so this is just speculation, but I imagine that the system for coordinating changes and alerting all of the different desktop parts is simultaneously more complex and more limited than it would be if the whole desktop were more tightly integrated. I strongly suspect that Plasma architected itself with a kind of traditional, Alan Kay-ish, “object oriented” philosophy: everything is an independent actor that communicate via asynchronous messages, and can be added and removed dynamically at runtime. I’m sure that the idea was to maximize flexibility and extensibility, but I also think that the cost to that approach is more complexity and that it’s basically impossible to figure out what will actually happen in response to a change. Not to mention that most of this stuff is (or was the last time I checked, at least) written in C++, which is not the easiest language to do dynamic stuff in.
I hear this a lot but, looking back, I really don’t think it’s the case. KDE 3.x-era was surprisingly close to modern Plasma and KDE Applications releases in terms of features and configurability – not on the same level but also not barebones at all, and was developed by fewer people over a much shorter period of time. A lot of it got rewritten from the ground up – there was a lot of architecture astronautics in the 4.x series, so a couple of Plasma components actually lost some featyres on the way. And this was all happening back when the whole KDE series was a big unhappy bunch of naked C++ – it happened way before QtQuick & co..
IMHO it’s just a symptom of too few eyes looking over code that uses technology developed primarily for other purposes. Back in the early ‘00s there was money to be made in the desktop space, so all the cool kids were writing window managers and whatnot, and there was substantial (by FOSS standards of the age) commercial backing for the development of commercially-viable solutions, paying customers and all. This is no longer the case. Most developers in the current generations are interested in other things, and even the big players in the desktop space are mostly looking elsewhere. Much of the modern Linux tech stack has been developed for things other than desktops, too, so there’s a lot of effort to be duplicated at the desktop end (eh, Wayland?), and modern hardware is itself a lot more complex, so it just takes a lot more effort to do the same things well.
Some of the loss in quality is just inherent to looking the wrong way for inspiration – people in FOSS love to sneer at closed platforms, but they seek to emulate them without much discrimination, including the bad parts (app stores, ineffective UX).
But I think most of it is just the result of too few smart people having to do too much work. FOSS platforms were deliberately written without any care for backwards compatibility, so we can’t even reap the benefits of 20+ years of maintenance and application development the way Windows (and, to some extent, macOS) can.
It was very configurable, yes. But, I was speaking less from the lens of the user of the product, and more from the software architecture (as I came to understand it from blog posts, etc). I don’t know what the KDE 3.x code was like, but my impression for KDE/Plasma 4+ was that the code architecture was totally reorganized for maximum modularity.
Here’s a small example of what I mean from some KDE 4 documentation page: https://techbase.kde.org/Development/Architecture/KDE4/KParts. This idea of writing the terminal, text editor, etc as modular components that can be embedded into other stuff is an example of that kind of thinking, IMO. It sounds awesome, but there’s always something that ends up either constraining the component’s functionality in order to stay embeddable, or causing the component to not work quite right when embedded into something the author didn’t anticipate.
Is that correct? My understanding was that a good chunk of the GNOME leadership were employed by Red Hat. Is that no longer the case? I don’t know the history of KDE and its stewardship, but if Novell or SUSE were contributing financially to it and now no longer are, I could see how that would hurt the person-power of the project.
I definitely agree with this. That’s actually one reason why I tune out the GNOME Shell haters. It’s not that I don’t have some of my own criticisms about the UI/UX of it, but I really appreciate that they tried something different. Aside: And as someone who has worked on Macs for 10 years, it blows my mind when people say that GNOME Shell is at all mac-like; the workflow and UX has almost nothing in common with macOS except for the app-oriented super-tab switcher.
Uhh… it’s been a while so I don’t remember the details very well but KDE 3 was definitely very modular as well. In fact KParts dates from the 3.x series, not 4.x: https://techbase.kde.org/Development/Architecture/KDE3/KParts . KDE 4.x introduced a whole bunch of new things that, uh, didn’t work out well for a while, like Nepomuk, and changed the desktop shell model pretty radically (IIRC that’s when (what would eventually become) Plasma Shell came up). Some frameworks and applications probably went through some rewrites, some were abandoned, and things like DCOP were buried, but the overall approach to designing reusable frameworks definitely stayed.
I think Red Hat still employs some Gnome developers. But Canonical no longer has a desktop team IIRC, Ximian is pretty much gone, Nokia isn’t pouring money into desktop/mobile Linux technologies, etc. Pretty much all the big Linux players are mostly working on server-side technologies or embedded deployments.
I don’t really mind Gnome Shell, Linux always had all sorts of wacky “desktop shell” thingies. However, I really started to hate my Linux boxes starting with GTK3.
I dropped most of the GTK3 applications I was using and got a trip down memory lane compiling Emacs with the Lucid toolkit. But GTK3 wasn’t really avoidable on account of Firefox. That meant I had to deal with its asinine file picker dialog, the touch-sized widgets on a non-touch screen, and that awful font rendering on a daily basis. Not having to deal with that more than justifies the money I spent on my Mac; hell, I’d pay twice that money just to never see those barely-readable Adwaita-themed windows again *sigh*.
Fair enough. I definitely used KDE 3 a bit back in the day, but I don’t remember knowing anything about the development side of it. I could very well be mistaken about KDE 4 being a significant push toward modularity.
Oof, I echo all of this so much.
I was a desktop Linux user from about 2003 until 2010 or so, going through a variety of distros (Slackware, Gentoo, Arch) and sticking with Ubuntu from 2006ish.
At one point I got tired of how much worse things were getting, especially for laptop users, and switched to a Mac. I’ve used Macs for my main working computers since then and mostly only used Linux on servers/RPis, etc.
About 3 years back, before the new ARM-based Macs came out, I was a bit fed up with my work computer at the time, an Intel-based Mac, being so sluggish, so I decided to try out desktop Linux again (with whatever the current Ubuntu was at the time) on my Windows desktop PC, which is mostly just a gaming PC.
I could replicate my usual workflow, especially because I never depended too much on Mac-specific apps, but even on a desktop machine with two screens, the overall experience was just… not great.
The number one thing that irked me was dealing with my two screens, which have different resolutions and different DPIs. The desktop UI for it just literally didn’t work, and I had to deal with xrandr commands that ran on desktop start – and apparently this is “normal” and everyone accepts it as OK. Even then I could never get it exactly right, and sometimes it would mess up to the point that the whole display server needed a restart.
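For anyone who never had to deal with this: on X11 the usual workaround for mixed-DPI dual monitors was an xrandr incantation in an autostart script, upscaling the lower-DPI display so UI elements end up roughly the same physical size on both. A hedged sketch – the output names and scale factor here are illustrative, not from the original post; list your own with `xrandr --query` first:

```shell
# Run at session start (e.g. from an autostart script).
# Assumed setup: DP-1 is a high-DPI panel left at native scale,
# HDMI-1 is a 1080p panel upscaled 1.5x to roughly match.
xrandr --output DP-1 --auto --pos 0x0 \
       --output HDMI-1 --auto --scale 1.5x1.5 --right-of DP-1
```

The catch the comment alludes to: this is per-output raster scaling bolted on after the fact, so getting the factors “exactly right” is fiddly, and a bad combination can leave the X server in a state only a restart fixes.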
Beyond that, many of these modern web-based desktop apps just have all sorts of issues with different DPIs and font rendering.
I thought: how are all of these things still such a massive issue? Especially the whole screen thing, with laptops being the norm over the last 15 years and people often using external screens that probably have a different DPI from their laptop anyway.
Last year I decided to acquire a personal laptop again (for many years I only had work laptops) and thought I’d have a go at a Framework laptop. This time I started with Kubuntu and KDE, as I’d also briefly tried modern KDE on an Asahi Linux installation and loved it.
KDE seems to handle the whole multiple display/DPI thing a lot better, but still not perfectly. The web-based desktop app and font rendering issues were somehow still there, but not as bad (I did read about some Electron bugs that got fixed in the meantime).
And then I dove into the whole Snap/Flatpak thing, which I was kind of unfamiliar with, having not used desktop Linux for so many years. And what a mess! At multiple points I had multiple instances of Firefox running and it took me a while to understand why. Some apps would open the system Firefox, others would go for the containerized one.
I get why these containerized app ecosystems exist, but in my limited experience with it the interoperability between these apps seems terrible and it makes for a terrible user experience. It feels like a major step back for all the improvements and ease of use Desktop Linux made over the years.
I did also briefly try the latest Ubuntu with Gnome, and the whole dual-screen DPI situation was just as bad as before, I’m guessing related to the whole fractional scaling thing. Running stuff at 100% was fine but too small; 200% was fine but too big; 150%? A blurry mess. KDE deals fine with all those in-between scales.
My other spicy opinion on breakage is that Ubuntu not doing rolling releases holds back everything, because bug fixes take too long to get in front of users.
“I don’t want updates to break things” OK well now every bug fix takes at least 6 months to get released. And is bundled with 10 other ones.
I understand the trade offs being made but imagine a world in which bug fixes show up “immediately”
… that was the world back in the day.
“Oh, $SHIT is broken. I see the patch landed last night. I’ll just grab the source and rebuild.”
And it still is, depending on the distro (obviously, you can manually compile/manage packages with any distro, but some distros make that an officially supported approach).
I agree that Ubuntu’s release philosophy isn’t great, but in its defense, bug fixes are not blocked from being pushed out as regular updates in between major releases.
What I do think is the big problem with Ubuntu’s releases is that there used to be no real distinction between “system” stuff and “user applications”. It’s one thing to say “Ubuntu version X.Y has bash version A.B, so write your scripts targeting that version.” It’s another to say “Ubuntu version X.Y has Firefox version ABC.” Why the hell wouldn’t “apps” always just be rolling release style? I do understand that the line between a “system thing” and a “user-space thing” is blurry and somewhat arbitrary, but that doesn’t mean that giving up is the right call.
To be fair, I guess that Ubuntu’s push toward “snap” packages for these things does kind of solve the issue, since I think snaps can update independently.
That’s part of why I landed on StumpWM and stayed. It’s small, simple, works well for my use cases, and hasn’t experienced the sort of churn and CADT rewrites that have plagued others.
Moving to FreeBSD as my daily driver also helped there, because it allowed me to nope out of a lot of the general desktop Linux churn.
Why do people rewrite everything all the time?
I switched to Plasma from GNOME because I was tired of my customizations getting obliterated all the time. I also like the fact I can mess with the key combinations in many more apps, since my muscle memory uses Command, not Control. Combined with a couple add-ons and global menu, I’ve never looked back.
The reasons were entirely clear, but people’s memories are short, and there is a tonne of politics.
Microsoft threatened to sue. Red Hat and Ubuntu (2 of the bigger GNOME backers) refused to cooperate (including with each other) and built new desktops.
SUSE and Linspire (2 of the biggest KDE backers) cooperated.
I detailed it all here.
https://liam-on-linux.dreamwidth.org/85359.html
Someone stuck it on HN and senior GNOME folk denied everything. I don’t believe them. Of course they deny it, but it was no accident.
This is all a matter of historical record.
I have to admit that I’m a little bit of a Plasma “hater”. I haven’t tried it in three or four years at this point (I remember when I thought keeping the same desktop settings for six months was a long time!).
But, every time I try Plasma, I end up disappointed at how many paper cuts there always are. There were always too many configuration options – and I don’t say that as a “user”, I say that as a developer. They obviously cannot do good QA on all of the different combinations of settings, so there was always something that didn’t work right the minute you deviated from the default configuration. I remember trying to do a vertical panel, but some of the (official, first-party) panel widgets didn’t resize/reorient correctly. I remember the main panel disappearing fairly unpredictably when I’d un/plug my laptop from my two monitors, which would also screw up the hacky super-key-to-open-main-menu feature. Sometimes the KRunner popup would be in the wrong place if I moved the panel away from its default place at the bottom of the screen. And there were always countless other things. Every time I tried it, I hoped that the experience would be more polished, but it never was over the course of years (I tried every version from 4.0 to 5.10-ish).
Again, the last time I tried was several years ago now, so I guess it’s time to try again with an open mind. But, it’s really hard to not be skeptical/cynical.
One other thing that always bothered the hell out of me was how much it barfed config files all over my XDG_CONFIG_HOME directory. I get that the KDE applications are their own things, but why does every single Plasma desktop component also need its own top-level config directory? I hope they changed that, but I doubt it.
This one got fixed, I definitely used to get that a while back, but now the panel stays put.
I think that’s still the case; there’s a hodge-podge of unintuitively named config files. What’s more, configs are still a mix of actual configuration and volatile state, so they aren’t easily stored in a git repo. I hope
https://bugs.kde.org/show_bug.cgi?id=444974
is addressed at some point (and I’m saddened that this seemingly didn’t make the Plasma 6 roadmap).
Entirely agree. The customisations are only what some KDE dev somewhere wants, and the UI on them is terrible.
I want real deep customisation, and I want it by direct interaction not dialog boxes.
If I want to move my panel, and I do, I want to just drag it, not pick an option or try to click on a little target.
I want deep customisation. I want lots of vertical space, because everyone has widescreens now. That means a vertical panel but with horizontal contents. That means setting the size of the start button. That means vertical window title bars, like wm2. KDE either can’t do this or buries it down half a dozen levels of dialog boxes.
KDE copied the design of Win98, badly.
I want a simple copy of Win95, without all the Active Desktop garbage.