I totally get what the OP is saying. I want technology that is stable,
sustainable, liberating, and which respects my autonomy. But the only
way we get there is free software. Not corporate-driven open source,
and certainly not proprietary software.
I wonder if Unix-likes are really the best foundation for building that
future; I suspect they might not be.
If you care about old hardware and alternative architectures, Apple
isn’t your best bet. They only target whatever Apple is selling.
They’re a hardware company; software is their side-hustle. Well, hardware
and now rent-seeking in their gated community, by means of the app store.
They’re also a company built on planned obsolescence and
“technology as fashion statement”.
As for stability, most of the big players don’t care, because the incentives
aren’t present. The tech field has inadvertently discovered the
secret of zero-point energy: how to power the unchecked growth of an industry worth trillions of dollars on little more than bullshit, a commodity whose infinite supply is guaranteed.
Surely, this discovery warrants a Nobel Prize in physics.
As long as the alchemists of Silicon Valley can turn bullshit into
investments from venture capitalists, there will be no stability.
For what it’s worth, I use an iPhone. It’s a hand-me-down; I
wouldn’t have bought it new. It’s an excellent appliance, but I know
that when I use it, Apple is in the driver’s seat, not me. And I resent them for it.
But the only way we get there is free software. Not corporate-driven open source, and certainly not proprietary software.
I was a big believer in that for a very long time, too, but I’m not too convinced that software being free is the secret sauce here. Plenty of free software projects treat users, at best, like a nuisance, and are actively or derisively hostile to other projects, sometimes in plain sight (eh, Gnome?). There’s a lot of GPL code that only has major commercial backers behind it, working on major commercial schedules and producing major commercial codebases, to the point where even technically-inclined users are, in practice, largely unable to make contributions, or to meaningfully maintain community forks, even if the licensing allows it (see e.g. Chrome).
I’m starting to believe that free licensing is mostly an enabler, not a guarantee of any kind. No amount of licensing will fix problems that arise due to ego, or irresponsibility, or unkindness. Commercial pragmatism sometimes manages to keep some of these things in check at the product development level (which, presumably, is one of the reasons why the quality of Linux desktop development has steadily declined as less and less money got poured into it, but I’m open to the possibility that I’m just being bitter about these things now…)
Plenty of free software projects treat users, at best, like a nuisance, and are actively or derisively hostile to other projects, sometimes in plain sight
I can personally attest to this. I know it all too well.
largely unable to make contributions, or meaningfully maintain community forks, even if the licensing allows it (see e.g. Chrome).
I feel this is an argument for having slower, surer process as well. Write a spec before writing and landing something. Have test plans. Implement human usability studies for new UI features. A tree that moves slower is inherently more stable (because new bugs can’t find their way in, and old bugs have longer to be fixed before it’s rewritten) and gives more opportunity for community involvement. But I know this is a controversial opinion.
I want technology that is stable, sustainable, liberating, and which respects my autonomy. But the only way we get there is free software.
The only way we get there is with a society that is stable, sustainable, and respects your autonomy. And we haven’t had that since at least the Industrial Revolution (for certain, specific quantities of stability, sustainability, and respect; those have never been absolute). The switch from craftsmanship to mass production made it kind of a lost cause.
I strongly suspect that’s not really true. We have to admit that the vast majority of open-source contributors do not have the work ethic (and why would they? They’re working on open source in their spare time because it’s fun) to push a project from “it fits my needs” to “everyone else can use it”. The sad situation is that most projects are eternally stuck in the minimum viable product stage, and nobody is willing to put in the extra “unfun” 80% of work to polish them and make them easy to use and stable.
I know this is a problem I’m having, and I doubt I’m the only one.
Nobody wants to do usability studies, even informal ones. Hell, I just gather a group of friends plus my Mum, have them sit at my laptop, and have them tell me what they like and don’t like about the FOSS I’m writing, and from what I’ve gathered my UIs are miles better than most others’.
Nobody wants to write a spec or a roadmap. Personally, I grew up loving discussion and learning, and I view requirements gathering as an extension of both of those activities, so it’s really enjoyable for me.
I run my FOSS projects kind of like how most corps used to run corp dev (somewhere between waterfall and agile). I feel that the quality is higher than most others’, though I admit it’s subjective. And you always like what you make, because you made it, so it works the way you expect.
But in my opinion, the lack of process really is the problem. Yet if process were required, most FOSS wouldn’t exist, because nobody would really want to follow that sort of process in their free time. (Present company excluded.)
I really want to see more UX designers get interested and involved in FOSS. It’s very clear that FOSS is driven primarily by sysadmins and secondarily by programmers. If we want a usable system then UX designers need to be involved in the conversation early and often.
… and, at the risk of being catty, when there are UX designers, they are content to copy mainstream interfaces instead of innovating and trying to produce something original.
Every once in a while I’ll get up the gumption to get organized, or maybe clean the house. Then I’ll use that energy to try to put a system in place to raise that baseline for good. Later on, when I don’t have the same zeal, whatever system I invented almost certainly fails.
The only systems that seem to survive are those that are dead simple. For instance, when I must remember to bring something with me, I leave it leaning against the front door. That habit stuck.
So when it comes to development process, do you think there’s some absolute minimum process that could be advocated and popularized to help FOSS projects select the tasks that matter and make progress? …Lazy agile?
I leave it leaning against the front door. That habit stuck.
That was a particularly vivid flashback of my teenage years you just gave me. Wow.
do you think there’s some absolute minimum process that could be advocated and popularized to help FOSS projects select the tasks that matter and make progress? …Lazy agile?
I’m going to have to think very hard on this one. I’m not sure what that would look like. Definitely a thought train worth boarding.
So when it comes to development process, do you think there’s some absolute minimum process that could be advocated and popularized to help FOSS projects select the tasks that matter and make progress? …Lazy agile?
This is why, if I intend to work on FOSS that I feel might be “large” (e.g. something I can see myself working on for many months or years), I set up an issue tracker very early. For me, dumping requirements, ideas, and code shortcuts I’m taking into an issue tracker means that if I’m feeling sufficiently motivated I can power through big refactors or features that take a lot of work to add, but if I’m feeling less ambitious, I can find some silly bug, like where I decremented the same count twice, that takes only an hour or so of my time to fix and results in an actual, tangible win. That helps me keep forward momentum going. This is what helps me, at least.
What are the problem areas for you? I’m genuinely curious, as I have been using free and open-source software as a daily driver for a few years now, and for a while the worst part was the lack of gaming support. There are enough Linux-specific titles on Steam right now to occupy my time that I haven’t even had to faff about with Proton yet.
I’ve been in teiresias’s camp for a while now; free software is the future.
I am fine. I love to tinker with things even if it prevents me from “doing the work”.
As a very basic example: if my network stops working after a package update, I lose the 30 minutes to an hour it takes to find what’s wrong and fix it. However, there are people in the world for whom this kind of productivity loss is unacceptable.
I’m afraid that the open source community mostly develops for people like me (and you, from what you’re saying).
The only way we get there is with a society that is stable, sustainable, and respects your autonomy. And we haven’t had that since at least the Industrial Revolution
Wait, what?
Since the Industrial Revolution we’ve had pretty much constantly increasing wealth, human rights, health, autonomy, throughout almost all the world.
Sub-Saharan Africa was and continues to be a basket-case, but I don’t think anyone is blaming that on the Industrial Revolution. (Actually, leaving aside the racists … what are people blaming that on? Why is it that the rest of the world is dragging itself out of poverty, but Sub-Saharan Africa isn’t?)
“Since the Industrial Revolution we’ve had pretty much constantly increasing wealth, human rights, health, autonomy, throughout almost all the world”
Be really careful about interpreting data-driven arguments (Pinker, Rosling, ourworldindata, etc.) as truth - they’re popular in the tech world, rationalist/progress circles, 80,000 Hours, etc. I find myself unlearning parts of these narratives and trying to be more open-minded to less rigorous arguments these days. The argument I’ve heard is that you can have someone earning more money (which gets recorded on paper) who may actually be undernourished compared to smallholder subsistence living (which the data may record as living in poverty). You also have to factor in the changes in ecological function and land use that come with more people living in an urban niche.
As far as how all this relates back to free software, I don’t know enough - I see problems and interesting ideas both in free software movements and non-free software 🤷♂️
As far as how all this relates back to free software, I don’t know enough - I see problems and interesting ideas both in free software movements and non-free software 🤷♂️
It was just me chiming in, as usual, whenever anyone blames capitalism, or industry, or so on for all the world’s ills[1].
Capitalism and industry have been responsible for lifting billions out of the default state of humanity: miserable poverty, disease, and tribal warfare.
And in first-world countries, we’ve gone in one generation from “one expensive, fragile, and basically toy-like 8-bit microcomputer” to “everyone in the family owns multiple computers, including hand-held, battery-powered supercomputers with always-on high-speed Internet connections”. 90% of Australians, for example, are regular Internet users. 90%!
Meanwhile the proposed alternatives have been responsible for millions of deaths in the last century alone.
The good ideas present in socialism (like caring for those who, through no fault of their own, are incapable of caring for themselves) are in no way incompatible with pure[1] capitalism, and are also far from unique to socialism.
All that socialism implies is that people are forced to fund that care, as opposed to doing it voluntarily (through charity, mutual societies, unions, religions, etc.).
To put it in hacking terms: socialism is a brute-force kluge ;)
That type of socialism is like the GPL, in enforcing behavior that one is afraid might not happen voluntarily, even given that the capitalist resources to do so would be available.
A country with mandatory private health insurance uses exactly the same maths to figure out costs, yet people see that as a huge problem because the USA fscked things up. I can attest that universal health care isn’t systematically universal: the lines are long, with a heavy emphasis on anything that’s not pre-emptive, and treatment is sometimes even denied due to cost.
That worries me, since competing companies are incentivized to keep their customers. Is that closed-source software? Maybe.
But often FOSS seems to behave like this monopoly superorganism that can do whatever, like the new Gnome UI stuff. Good thing there’s at least some competition.
That type of socialism is like the GPL, in enforcing behavior that one is afraid might not happen voluntarily
Except that it’s unlike the GPL in that if you don’t want to use GPL software, you’re free to choose something else. If you don’t want to license your software under the GPL, you’re free to choose a different license.
Socialism doesn’t give those subject to it any choice in the matter.
(Edited to clarify: as currently implemented by mainstream politics. Voluntary communes and the like are just fine by me. Not how I’d choose to live personally, but a perfectly valid choice. And, note, completely compatible with laissez-faire capitalism.)
That’s a fair extension to my analogy, sure. This does certainly start to break down if people compare BSD-licensed contributions and voluntary societal ones. Sadly that often degrades quite quickly into rich people buying a clean conscience without actually giving a crap, which is a nice parallel for Google’s FOSS effort.
I do agree with you and personally don’t really care if good charity came from a bad person/party, unless there are nasty strings attached.
Edit: maybe “bad” was poor wording. Nasty strings are terms and conditions, but also: you can’t buy yourself clean with money from child trafficking. These terms are too vague and subjective.
The common explanation I’ve heard in left-leaning circles is that because the countries are dirt poor, they have to take loans from institutions like the IMF, and those loans have incredibly shitty agreements which basically guarantee that the country remains poor because all of the value generated in that country is shipped over to the rich parts of the world. Many of them, for example, have enough fertile land and water to keep the population fed, but that land and water are instead being used to grow cash crops for the richer countries, which is part of the reason we enjoy cheap T-shirts and coffee. There are also a lot of other ways the current economic world order kind of screws over the poorer countries; a lot of it is described in the Wikipedia article on neocolonialism.
Some people go as far as to claim that capitalism requires an underclass, so in social democracies which try to achieve some degree of equality within the nation, the underclass has to be out-sourced to places like Africa or China. (That certainly seems to be what’s happening, but whether it’s required by the economic system or just a flaw in the current implementation of it is up for debate.)
Personally, I find those explanations fairly convincing, and I haven’t heard any good refutations. I’m far from an expert on the topic though, so there may be other, good explanations. My personal guess would be that the reason this topic isn’t discussed that much (at least in non-racist circles) is that we basically have to conclude that the rich parts of the world are responsible for perpetuating the problem, and that acknowledging this and fixing it would be really fucking expensive.
The book The Dictator’s Handbook (summarized in Rules for Rulers) offers another explanation. Foreign aid is a quid pro quo for policy changes. Aid recipients accept the loans and use it to enrich their government’s supporters.
If I’m using my PinePhone and there’s a problem, it’s usually something I can fix. Even if it means running the onboard diagnostics and ordering a new motherboard (yeah, my WiFi just failed), that’s an intended use case. Sure there’s a binary blob or two involved, and I can’t personally repair surface-mount boards … but to a far greater extent than either an iPhone or an Android phone, it’s my device.
Contrast that with, say, an old Samsung phone. Want to upgrade the OS? You’re SOL if Samsung and/or your carrier has stopped shipping updates. Want to root the device, or swap OS? Expect a bunch of software to stop working (think Google Play, and games with overzealous anti-cheat for starters). Want to repair the device? Go buy some specialist tools and cross your fingers … but probably don’t bother, because OS updates aren’t a thing any more anyhow.
but to a far greater extent than either an iPhone or an Android phone, it’s my device.
It is your device if you understand and enjoy technology to that extent, and I think this is an important point to drive home. Imagine you have a friend Foo. Foo uses a Mac, but is getting real tired of their Mac constantly telling them they can’t install a piece of software or that some application of theirs can’t read from a directory. Foo hears that all their cool tech friends are on Linux, so maybe Foo should be too. Foo installs a distro, and then tries to plug in two monitors with different DPIs. Big mistake; nothing is scaled properly. Foo searches online and sees references to font scaling, HiDPI support, this thing called Gnome, and other stuff. Foo hops into an online chatroom to ask a question then gets asked what their current Window Manager is. What?? Someone in the chat tells Foo that this is why they never use HiDPI displays, because it’s too much work to configure. What in the world, they just don’t use something because Linux doesn’t support it??
Half of my own knowledge of Linux comes from having gotten things to work on Linux. I remember in the mid-2000s when I had to run wpa_supplicant by hand on my wireless adapter and then add in some custom IP routes to make it play well with my router. I learned about ALSA by trying to figure out why my audio didn’t work on startup (turns out the device changed device IDs on boot, and configs are based on the device ID, how fun). I learned about X11 and Xorg when troubleshooting issues with resolution, compiling display drivers, setting refresh rates, HiDPI, you name it. I learned LPR and CUPS by trying to get my printers to work. For me, this stuff is fun (to an extent; I don’t exactly enjoy having to whip out xrandr when trying to get my laptop to display slides for a presentation). But to the average user that is somewhat interested in freedom or configurability, “owning your device” shouldn’t mean having deep expertise in computing to troubleshoot an issue.
It is your device if you understand and enjoy technology to that extent, and I think this is an important point to drive home.
Sure, absolutely. I was merely answering the original question from my own perspective, as requested by @kevinc. (Well, to be fair, he didn’t request it from me, but I’m presumptuous like that ;-P ).
What in the world, they just don’t use something because Linux doesn’t support it??
The irony! I’m posting this from a 1080p external monitor that I bought, at the time, because setting up display scaling on FreeBSD was on my TODO list.
Samsung phones are a really bad example to use, since whatever-replaced-Cyanogen is still supporting the S3 last I checked (which is 11 years old at this point). Since the thing has a replaceable battery, you could reasonably expect to use it as a basic phone for years to come (even if the memory is anaemic by modern Android standards).
You might have slightly better luck using Apple in your example, but they’re on a 7-8 year support cycle with OS updates too. Wait 8 years and see if you can still replace your PinePhone’s motherboard. I’d be moderately surprised if Pine64 was still making the board by then. (I know they have an LTS A64, but I don’t know what, if any, commitments they’ve made re the phone.)
Samsung phones are a really bad example to use, since whatever-replaced-Cyanogen is still supporting the S3 last I checked (which is 11 years old at this point)
Yeah, but Samsung isn’t. And plenty of vendor software that “supports Android” flat out refuses to run on phones with ROMs other than those approved by the manufacturer and carrier.
That some enterprising open-source developers have managed to hack part of the way around the problems posed by this awful ecosystem is great, but it doesn’t diminish the problems, or most of the feelings of helplessness.
Sure. Sacrificing autonomy begets dependence. Dependence begets learned helplessness, which in turn begets dependence, in a vicious cycle. Sometimes there are perfectly good reasons to sacrifice personal autonomy, such as when the needs of the many are in conflict with the needs of the one. A perfect example of that situation is Covid-19 and lockdowns + mask mandates, but that discussion isn’t relevant here. Needless to say, when I feel that some company is constraining my power to make decisions, I turn resentful. When I use an iProduct from Apple, terms and conditions apply. Terms and conditions are those things that the conquering army dictates to a surrendering foe.
Thanks for elaborating! If I understand, part of the problem is the popular norm of accepting the terms and conditions rather than thinking critically about them. That would leave those who do think critically, and opt out, relatively isolated in an uphill battle. I for one am unhappy with the QWERTY keyboard standard; not that there’s anything nefarious about it, it’s just something people don’t think critically about or consider alternatives to. We could have better, but we let inertia win. I don’t really have an entity to be resentful of, but I might if a corporation were behind it.
There’s a very real sense in which Ubuntu is a flaming rubbish fire of a desktop, but honestly it means I don’t have to deal with Mac OS X or Windows, and that’s enough for me. I used Windows a great deal when I was younger, and I tried using OS X for five years after moving to the US with only the MacBooks that would fit in my suitcase; I really can’t stand using either one.
I just like the way X11 and tiling window managers feel (i3 now, formerly dwm) and I like having a vaguely UNIX-like OS underneath. For better or worse, Ubuntu is able to drive my desktops and my ThinkPad and runs the applications I want – with Proton it even runs many Windows games. So, while it’s a rubbish fire with all of the problems the article describes, it’s my preferred rubbish fire for right now. I’m not sure the grass is really greener anywhere else.
This comment got longer than I planned. TL;DR: Mac good, then Mac outrageously bad, now Mac good again.
I got my first Mac in 2013, when MacBooks and MacOS were peaking. Within a few years, the steady decline of MacOS began, to my great disappointment. All the slowdowns and extremely blatant UI bugs made my $3k+ work MacBook Pro feel like a cheap piece of junk.
When my 2013 MacBook Pro replaced my ThinkPad T61 (from 2007ish?), I remember being absolutely blown away that the MacOS lock screen rendered before I could even open the lid enough to type my password. Not a particularly spectacular benchmark, I know, but it illustrates how far Macs fell: just a few years later my work MacBook Pro would take 10-15 seconds to become responsive when waking up.
And the bugs. My god, the bugs. I tried for a good 30 minutes to connect Apple Music to my bigger B&W speakers using shairport-sync. It never worked, even though my iPhone works fine. It did, however, give me lots of confusing feedback (checkmarks checking and unchecking themselves, spinners) and completely disrupted the rest of my system audio. Eventually I rebooted to end the pain.
But I heard the new M1 Macs fixed everything, so I took a chance and got an M1 MacBook Air. If it still sucked, I was ready to return it and switch back to Linux after all this time.
I have to say, it dramatically exceeded my expectations.
My new M1 MacBook Air has that old snappy responsive feel I loved about my first MacBook Pro. Passes my lame open-the-lid-from-sleep test with flying colors. And connecting to my speakers works instantly and flawlessly—even allowing me to send only Apple Music to my big speakers, and keep using my desk speakers for the rest of the system.
The M1 does literally everything better. I could probably come up with dozens more examples, but for whatever reason those two problems infuriated me the most.
Anyway, I’m not trying to sell you or anyone else on Macs. I’m just ecstatic my preferred OS doesn’t suck anymore. I don’t know when you moved to the US, maybe your 5 years of MacOS happened smack in the middle of the dark days? Or maybe you just don’t like MacOS. Obviously, that’s your prerogative. Even though you call your setup a rubbish fire, I’m glad you have something that works for you!
Story time: this morning I grabbed my Linux laptop running KDE Plasma to pull a few files off of it before heading in to work. (Also noticed my article got posted here, hey!) When I opened the lid, the screen contents were stale.
I can’t fathom how people are willing to sacrifice so much freedom just because the screen contents may be stale. Somehow it never hurts my productivity; if I open the lid, I know I am opening the lid. I won’t be in the flow while opening the lid, and I will already know what time it is when I open it. Is it condescending of me to call it vanity?
This is exactly the perspective that keeps non-technical users from using FOSS. Their concerns are relegated to “vanity” by most. What is “vanity” to you may be essential to others. So when those others ask why this behavior occurs and receive the above response about “vanity not freedom”, they just politely drop FOSS and move on.
Hard agree. Also, I’ve wasted about ten seconds per day on this (~3-5 seconds * 2-3 lid events per day).
I’ve been running Linux on this laptop since 2015.
21,900 seconds, or about six hours, of my life has been wasted on this “vanity” bug in KDE’s lock screen.
Tiny performance losses add up to huge amounts of waste at this kind of scale, especially on a daily-driven workstation used for multiple years. Think of the board games I could have played with my Mum, the meme videos I could have watched with my friends, and the snuggles I could have given my cat in that time. (Let alone the open source code I could have been writing.)
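For anyone who wants to check my figure, the arithmetic is easy to sanity-check (a quick sketch; the per-day figure and year count are the rough estimates from above, not measurements):

```python
# Back-of-the-envelope check of the "six hours wasted" figure.
# Assumed inputs (rough estimates, not measurements):
SECONDS_PER_DAY = 10   # ~3-5 s per lid-open, 2-3 lid events per day
YEARS = 6              # running Linux on this laptop since 2015

total_seconds = SECONDS_PER_DAY * 365 * YEARS
total_hours = total_seconds / 3600

print(total_seconds)          # 21900
print(round(total_hours, 1))  # 6.1
```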
This was some time ago, so I don’t perfectly remember all of my reasoning. Quality, most definitely, though in my opinion popularity played a huge role in software quality for developers.
As a long time Linux user I wanted to continue using tools I was familiar with, which meant the shell, vim, package management, and the numerous command line tools I had grown accustomed to. Thus the quality, maturity, popularity, and package availability of Homebrew played a significant role in my evaluation.
The late 2013 model I got was the first with a Haswell processor, and, if I recall correctly, NVMe SSD. I don’t remember why, but Haswell processors were supposed to have been a big leap over the previous generation. I think for battery life? And NVMe bandwidth was mind-blowing compared to SATA III.
Lastly, it was still hard to beat my ThinkPad T61’s 1680x1050 display at the time, and I flat out refused to compromise on screen real estate, so the Retina display scaling to 1920x1200 was a big plus. I was in college at the time, so I didn’t usually have an external monitor. Whereas today I use my M1 MacBook Air with a 32” 2560x1440 monitor for serious work.
In the technology adoption life cycle I’m squarely in the early majority, by no means an early adopter. I started pondering a switch to Macs after continued evidence from developers around me that OS X did everything I wanted it to do. The hardware leap of the late 2013 model sealed the deal for me and I ordered one as soon as the first OS X Mavericks patch release was announced.
Yeah. I remember twenty years ago. I was all hyped up on the enormous potential of the Linux desktop and I was really looking forward to the day Linux would make Microsoft go and hide in shame.
I’m really sad that the Linux community fumbled the ball on the desktop. Instead of better systems we just got more of them - everyone was making their own. Instead of pulling together, we just got bitching and bickering.
For the last four years my desktop has been macOS, and I love it.
I generally use all 3 major operating systems (sorry BSD proper) on the desktop and I’ve found that I can’t get a setup I like with anything other than Linux desktop. The programmability of the interface I use (sway) is so far ahead of anything I can achieve on MacOS or Windows. Additionally, the ease at which I can modify even low level parts of the operating system or see what’s going on when something breaks makes Linux far and away the best option for me on the desktop.
Mac is second, but it’s not a close race. Apple has extraordinarily sane defaults and a snappy interface, but doing something that takes a few minutes of shell scripting on my Linux system, like changing my wallpaper via keyboard macro, is an annoying experience to set up on the Mac. It feels like Apple wants you to use the system as they’ve designed it, and if you want to do anything else, you are wrong. I don’t like that, and I don’t like that my setup hinges on built-in programs that haven’t been updated in years and don’t include obvious features that should be there.
The worst is Windows. No sane defaults. No real configuration. The only reason I run windows these days is to play games. I have no idea why things break that worked perfectly fine before updates. The updates don’t work, and take forever. The worst part is the constant advertising and pushing of their crappy corporate products. I have a pro or whatever license, and yet I constantly get bombarded with their crappy games, stupid bing integrations, and one-drive crap. I get mad every time I load up the machine.
doing something that takes a few minutes of shell scripting on my Linux system, like changing my wallpaper via keyboard macro, is an annoying experience to set up on the Mac.
That’s an odd example, since setting the desktop wallpaper is exposed via AppleScript (actually, OSA scripting, so you can use JavaScript if you prefer). Triggering things from the keyboard requires either registering a global shortcut or setting it up as a text service. I have never done this, but I’m pretty confident I could do it in half an hour on macOS, including time spent looking things up in the docs; on Linux, I’d probably spend an hour or more just working out which tools to try.
I typed “trigger desktop wallpaper change applescript” into Google. The second result looked like it had something promising. Found this at the bottom of the comments:
tell application "System Events"
    tell current desktop
        set initInterval to get change interval -- get the current display interval
        set change interval to -1 -- force a change to happen right now
        set change interval to initInterval -- change it back to the original display interval
    end tell
end tell
I haven’t really used AppleScript or Automator, but it took 30 seconds or so to find the “Run AppleScript” ‘action’ (?). Pasted that in there. Pressed play. My background changed. It took less than two minutes all up.
I can appreciate and respect this, but I have never once wanted to change my wallpaper via keyboard macro. I think I’ve had the same wallpaper since 2013 even, haha.
This really does come off as the stereotype that Linux is made by hackers, for hackers. I have other interests: a family, a photo collection that makes digiKam choke, faxes I have to send to banks (and I have to fight LibreOffice to make non-ugly cover sheets), etc. I may be a hacker underneath, but I value my time and other interests. And I think that’s ok too. I just wish there was an option on Linux for people who want to just get things done.
And one of those other interests is older hardware, which the community/ecosystem is now actively trying to crap on, too.
OP wants to use an S3 Trio64 with Wayland, yet migrates to a much newer Mac. Apple’s hardware support is in general pretty long, but not “runs the newest version of macOS on hardware from 1995”.
Not to mention that macOS has been terrible with regard to stability, axing 32-bit support altogether in Catalina.
OP wants to use an S3 Trio64 with Wayland, yet migrates to a much newer Mac.
If I have to give up older hardware support anyway, I’m going to use something that doesn’t make me tear my hair out every time I try to do something with it.
The point is that Wayland is making it so Linux has higher system requirements than Catalina. So, why not just run Catalina then? Linux doesn’t meet any of my requirements any more as noted in TFA (not just hw compat but all the other points).
Not to mention that macOS has been terrible wrt to stability
This isn’t an article in support of ABI stability – though you could argue ABI stability would be a side-effect of most of the improvements I desire, and it is something I do find desirable.
This is an article in favour of stability in the sense of “does not crash” and “has predictable releases”.
Also, a properly written CarbonLib application can run on any system from Mac OS 8.1 to macOS 10.14 Mojave. How do I write an application for Linux that can run on Red Hat 5 (not RHEL 5) and Ubuntu 18.04, and still appear reasonably integrated with the system across all versions?
CarbonLib applications look native, in the way that Qt 5 apps look in Plasma, or in the way Gtk apps look in GNOME. The way Qt 4 apps don’t look in Plasma any more. The way wxGtk apps don’t look in Mutter/Wayland.
It could be possible, with a good widget engine, to get Qt 3 apps to look nice in Plasma. With TDE you could even build Qt 3 on modern versions of Linux. Then you could, maybe, get an app that looked native on Red Hat 5 and Ubuntu 18.04.
I would love to see this just for the giggles, tbh, but I don’t feel like investing the time in trying it myself.
That is highly unlikely for anything other than really small command-line applications. There is a pretty wide range of software version upgrades between those releases. Chances are the kernel, which does its best not to break userspace and to stay backwards compatible, is the only thing that won’t have changed the interfaces you need to consume. Almost everything else your software depends on has likely had breaking releases in the time between those two distro releases. Unless you are statically compiling, something the distros frown on, you are probably going to have to do a significant amount of work to make this succeed in both worlds simultaneously.
The Wayland / X11 dilemma is nightmare fodder still, after 13 years since Wayland’s first release. That’s like a death clock of our own devising… I’m kinda more scared of the day X11 gets irrevocably deprecated (if Wayland hasn’t become ready by then) than of the 2038 bug.
Ugh, another one bites the dust — without giving the BSDs a chance. Try OpenBSD; the out-of-the-box experience is the most pleasant one I’ve had yet, more so than macOS even, because it does exactly what I want it to.
I have to use a MacBook for work, and while the UI is pretty and all, I find its tiny quirks and overall rigidity very annoying. I can’t do things the way I’d like to.
I gave FreeBSD a serious chance for months. I hacked a little on i915 trying to fix some backlight bugs. I contributed to what became the implementation of the utimensat syscall for the Linuxulator. I was even featured on an episode of the original BSDNow in 2015.
I suppose you could argue some value of stability and release engineering still exist in FreeBSD, but they’re even more hostile to older hardware and the desktop experience is even worse as more and more software assumes Linux.
One reason I like my OpenBSD desktop is that it is still running great on my 2012 Toshiba Portege - I’ve upgraded the RAM to 16 GB and the HDD to an SSD - the only drawback is that getting decent batteries for portability is now becoming an issue. But I have heard that OpenBSD may be coming to the M1 in the near future :~)
macppc supports all New World models, including the G3.
I think NetBSD is probably the best of the BSDs for running on PowerPC, if we’re talking about it; in addition to macppc they have ofppc, amigappc, bebox, etc.
The main points are stability, portability, and obsolescence, and how each is a struggle.
But then the author moves to the latest macOS? Where is the stability? Apple is famous for breaking compatibility and biting the bullet whenever they can push a new proprietary API to ensnare devs (Metal?). Where is the portability? (Apple only cares about the hardware they sell, of course.) And where is the lack of planned obsolescence? This is the whole long-term strategy of Apple: tech as fashion and short hardware update cycles.
So this is why the author leaves the Linux desktop? He could run a recent-ish notebook with ARM or x86 cores, and Linux would be perfectly fine. None of those issues would be valid then.
But then the author moves to the latest MacOS? Where is the stability?
On the user side of things :). A few months ago I got one of them fancy M1 MBPs, too, after not having used a Mac since back when OS X was on Tiger. Everything that I used back in 2007 still worked without major gripes or bugs. With a few exceptions (e.g. mutt) the only Linux programs that I used back in 2005 and still worked fine were the ones that were effectively abandoned at some point during this period.
Finder, for example, is still more or less a dumpster fire with various quirks, but they’re the same quirks. Nautilus, uh, I mean, Files, and Konque… uh, Dolphin, have a new set of quirks every six months. At some point you eventually want to get off the designer hobby project train.
In this sense, a lot of Linux software isn’t really being developed, as in, it doesn’t acquire new capabilities. It doesn’t solve new problems, it just solves the old problems again (supposedly in a more “usable” way, yeah right). It’s cool, I don’t want to shit on someone’s hobby project, but let’s not hold that against the people who don’t want to partake.
(Edit: to be clear, Big Sur’s design is hot garbage and macOS is all kinds of annoying and I generally hate it, but I wouldn’t go back to dealing with Gnome and GTK and Wayland and D-Bus and all that stuff for the life of me, I’ve wasted enough time fiddling with all that.)
Unlike Apple, you have some options with open source. Don’t like the latest Gnome craze? Get MATE, which is basically Gnome 2. There are lots of people who keep old window managers and desktop environments alive and working. The Ubuntu download page lists a couple, but many more can be installed with a few commands.
I think I have been running the same setup for six or seven years now, no problem at all.
If you try to grab a Gnome 2 box, you’ll find that Mate is pretty different even if the default screen looks about the same. Not because of Mate but because of GTK3 general craziness. Sure, the panels look about the same, but as soon as you open an application you hit the same huge widgets, the same dysfunctional open file dialog and so on. It’s “basically the same” in screenshots but once you start clicking around it feels pretty different.
If all you want is a bunch of xterms and a browser, you’ve got a lot of options, but a bunch of xterms and a browser is what I used back in 2001, too, and they were already obsolete back then. The world of computing has long moved on. A bunch of xterms and a browser is what many, if not most, experienced Linux users still use, simply because it’s either that or the perpetual usability circlejerk of the Linux desktop. I enjoy the smug feeling of green text on a black background as much as anyone, but at some point I kinda wanted to stop living in the computing world of the early 00s.
I used the same WindowMaker-based setup for more than 10 years, until 2014 or so, I think. After that I could technically have kept using it, but it was mostly an exercise in avoiding things. I don’t find that either fun or productive. I kept at it for 6+ years (basically until last year), but I hated it.
(Edit: imho, the options are really still the same that they were 15 years ago: Gnome apps, KDE apps, or console apps and an assortment of xthis and xthat from the early/mid-90s – which lately mostly boils down to “apps built for phones” and “apps built for the computers of the Hackers age”. Whether you run them under Gnome, KDE, or whatever everyone’s favourite TWM replacement is this year doesn’t make much of a difference. Lots of options, but not much of a choice.)
As one of the people who wrote the offending Mesa drivers, I agree that there are many problems with the Mesa codebase which are endemic and probably can’t be fixed without massive rewrites. Big-endianness is a constant hassle because bit-twiddling in C is a chore and abstraction is tantalizingly difficult. Performance is difficult to understand and largely boils down to whether or not operations are GPU-accelerated. I used to fantasize about rewriting Mesa in Haskell; these days I suppose I’d pick OCaml for that fantasy.
Many of these comments are accurate, but I’ll point out that Wayland continues to improve. Red Hat has announced that NVIDIA GPUs will work this summer with a new driver release from NVIDIA, and remote desktop is finally supported (in Fedora 34). They say that in Fedora 34, Wayland is now ready for general use; before that it was in beta mode. X11 is better for supporting old hardware, but one of my motivations for switching to Wayland is better support for new hardware: e.g., trackpad gesture support like pinch-to-zoom and three-finger swipe. There is a dwindling number of people willing to work on X11 (is it down to one guy yet?), so X11 will continue to get worse on new hardware, while Wayland will continue to get better.
Also, I’m migrating my laptop from Mac to Linux/Wayland this year, because even though the M1 CPU is hot right now, and the new Magic Keyboard is marginally acceptable instead of a crime against humanity, macOS software keeps getting worse as a development environment for free software. bash hasn’t been updated since 2007, gdb stopped working a few years ago, OpenGL is deprecated and will likely disappear in a few years, etc.
A major issue that gets in the way of having stable open source software is the rapid evolution in the hardware space, plus the hardware is closed source and can’t be manufactured by artisans (unlike the software). Laptops only last so long, then you must upgrade to the latest hardware, and open source is on a treadmill keeping up with the hardware.
They say that in Fedora 34, Wayland is now ready for general use
Let me know when I can have WebEx meetings on it. (Seriously. Not trying to be passive-aggressive, I’ll give it a whirl.)
macOS software keeps getting worse as a development environment for free software.
I guess that depends on your view. My C++14 project that uses CMake and Boost needed no changes to build and run on Sierra and Mojave.*
Okay, it needed a header because of the differences between libstdc++ and libc++, but that was really on me.
bash hasn’t been updated since 2007,
I use zsh. I’m not sure why the bash version matters. Maybe autotools, but if you’re building things with autotools you’re likely either using Homebrew or building your own stuff in /opt anyway, so I’m not sure why it would matter then.
gdb stopped working a few years ago,
lldb seems to work fine for me.
OpenGL is deprecated and will likely disappear in a few years, etc.
On Linux, too. Vulkan is the New Way. At least that’s what all the Linux 3D people tell me.
Laptops only last so long, then you must upgrade to the latest hardware
Well, technically they do, but the “so long” can be much longer than most people seem to realize. I am typing this on a six-year-old laptop and it does not show any sign of aging, other than that the buttons on the trackpad are becoming scratched. I got a brand new kickass laptop for work last summer and I really don’t notice much difference between the two. The screen on the new one is better, but this one is fine too.
You don’t have to get the latest shiny icons to be productive. People say they know that, but they generally don’t act like it. Computers don’t get slower over time. The clock speed stays the same, as does the amount of RAM. If you experience slowness, just remove some cruft and you’ll be happy again without spending hundreds of dollars.
Consumers are as responsible for keeping this treadmill running as the manufacturers are.
I’m writing this on a 10 year old MacBook Pro. The “s” key is a tiny bit flaky, but otherwise it’s still running fine. I expect it to last me another year or two until a second-gen Arm-powered MBP exists.
I agree with many of the points people have made in this discussion. I can work in a Linux desktop environment. But I develop software for a living so I wouldn’t expect a typical business user or grandparent to feel the same way. I quite like my M1 MacBook but I have concerns with obsolescence and privacy. I’m anxious about going outside the walled garden.
The idea of the Linux desktop is fertile ground for opinionated discussion. It sounds like many people want a free-as-in-freedom alternative to the Mac platform. Something that is stable, but not necessarily supported (3rd party support services would be an option). Something that is very simple to use out of the box but is open to customizations. Something that developers know once they deploy to it they can be confident that any users who stay within certain use guidelines or policies will be able to run their code for at least 10 years.
Superficially it appears like the Linux ecosystem and technology stack could be a good candidate for this hypothetical end user platform. But the more I think about it the less confident I am in that idea. Who decides what’s in and out? Who lays out design guidelines? Where are apps found? What is the payment system for commercial apps? Who decides when the ABI breaks? When the hardware changes?
Personally, I’d pay for an open source, freedom respecting platform but I appear to be in the minority.
Who decides what’s in and out?
Whoever makes the system. (Gee, like Apple with macOS and MS with Windows, huh?)
Who lays out design guidelines?
Whoever makes the system. (Seeing a pattern?)
Where are apps found?
The maker of the system makes a repository of software available. (Distros already do that, to be fair.)
What is the payment system for commercial apps?
If this is a libre system, then more than likely it uses something like Liberapay.
Who decides when the ABI breaks? When the hardware changes?
Tying ABI to hardware is what did early Solaris (and, to a lesser degree, IRIX) in.
I would say that RHEL might be the best example of “what can a Linux that cares about backwards-compat look like without radically changing the ecosystem”, which can be either an endorsement or an indictment depending on your opinion of RHEL.
Superficially it appears like the Linux ecosystem and technology stack could be a good candidate for this hypothetical end user platform. But the more I think about it the less confident I am in that idea. Who decides what’s in and out? Who lays out design guidelines? Where are apps found? What is the payment system for commercial apps? Who decides when the ABI breaks? When the hardware changes?
I think elementary might be the closest one to realizing this kind of thing. They’re building a platform, with release cycles, prescribing the libraries used, a storefront with payment for developers, HIG, etc. Basically, the disparate pieces distro and desktop devs have, but as a single unified package with some kind of vision behind it.
The linked Gnome dev blog post is worth reading and reflecting on; it has cathedral-and-bazaar overtones, and they’re pro-cathedral. Given my experience, I suspect they’re probably right.
The author dislikes that Google does not want to accept portability patches, but I kind of understand. If you accept those patches, now you have to maintain them:
- you need people who understand those architectures
- you need hardware with such architectures to test on
- in general, every extra line of code has a cost, because it makes refactoring/reworking/changing the code harder and slows down development
This is extra effort. Sometimes it is worth it, and sometimes it is not.
The phrasing in the article, like “the Talos user community offered a PowerPC port”, sounds like all the extra work is done by not-Google, but that’s not how it will be long term.
The company that produces the Talos offered Google free CI hardware.
I realise that there would be a non-zero cost to Google, but pardon me if I don’t cry if one of the largest companies on earth - if not the largest - might have to spend a few minutes per release cycle scrolling past a PowerPC block of code.
What annoys me so much about the Wayland vs X thing is people claim that X could not be fixed, but this is an obvious lie - in fact, many of the things people claim to be unfixable already are fixed! The wayland FAQ doesn’t even agree they couldn’t fix X - it says explicitly they could, they just chose not to because they saw an opportunity to do something else more fun. So they’re breaking everything because they want to goof off instead of doing valuable work.
And I’m sure they are going to try to shove it down my throat eventually, just like that awful PulseAudio disaster (which btw is STILL broken, while ALSA is finally actually pretty OK). And I think at that point… I’ll have no real choice but to fork it myself. I can’t stand Mac and Windows continues to annoy more and more with each passing year too. Competition is supposed to make things better, but instead it is a race to the bottom. (Though at least most my old programs still work on Windows!)
There are already enough things that go wrong with our software stacks, but I would say that release cycles getting longer would compound a lot of the issues mentioned when discussing these topics.
It leads to stuff like having to maintain N package repos (I can’t install gettext 0.21 on ubuntu 20.04 from the repos despite it being extremely likely that it would “just work”). The whole “huge release of software at once” thing means you get to deal with about 20 problems interacting with each other at once. And you don’t get all the fun stuff quickly!
Macs have this problem, where the once-a-year feature release leads to nobody knowing why anything is broken, but it’s all broken and basically impossible as a user to fix it. Meanwhile Chrome ships out its constant releases. When there’s an issue, it’s like… one issue, and there are many people hitting it. So there’s extremely quick triangulation on potential fixes!
There is of course the “I don’t want my UI to change out from under me every 3 months” criticism of rolling releases. But I think that’s more of an issue with the underlying software development strategy (and if anything, I think rolling releases have made software better at maintaining user preferences, as it’s harder to ask people to just reconfigure everything at once). You can ship bugfixes, new features, etc. on a continuous basis. And without big “N.0” release pressure, there’s less of an “oh, here’s the big new feature for you now” feeling.
I understand OP. Before the M1 I was thinking Apple was pushing me away and I’d have to bite the bullet and get going on the Linux desktop. However, my main stumbling blocks are always input handling (the touchpad specifically), HiDPI, and keyboard bindings.
Input in Linux sort-of-kind-of-works, but it’s just so far behind Apple. Consistent, smooth, inertial scrolling with just the right amount of acceleration. Linux has plenty of knobs for it, but I suspect there are deeper issues preventing it from getting the exact precision feel of macOS.
HiDPI support is maybe why X11 is at the end of the road. Again it sort-of-kind-of-works, until you happen to launch some rarely used app and it’s illegible. Inconsistent font sizes, multi-monitor setup issues. Again, there are plenty of knobs to fiddle with, but it’s a boring job.
Keyboard bindings for basic functions like copy-paste. Many apps let you remap, but I always find apps where you can’t, and it becomes this one app where you can’t use muscle memory. Like… Ctrl-C is something else entirely in a terminal window, and it creates this dilemma: remap all the apps to Some Other Key, or deal with the inconsistency?
Command Is Not Control is something that took me a while to learn when I originally switched from WinNT and Solaris to Mac OS back in the day, but is such a powerful paradigm. Thank you for reminding me of that. It was something I wasn’t even conscious I was doing when I went back to Catalina!
I totally get what the OP is saying. I want technology that is stable, sustainable, liberating, and which respects my autonomy. But the only way we get there is free software. Not corporate-driven open source, and certainly not proprietary software. I wonder if Unix-likes are really the best foundation for building that future; I suspect they might not be.
If you care about old hardware and alternative architectures, Apple isn’t your best bet. They only target whatever Apple is selling. They’re a hardware company; software is their side-hustle. Well, hardware and now rent-seeking in their gated community, by means of the app store. They’re also a company built on planned obsolescence and “technology as fashion statement”.
As for stability, most of the big players don’t care, because the incentives aren’t present. The tech field has inadvertently discovered the secret of zero-point energy: how to power the unchecked growth of an industry worth trillions of dollars on little more than bullshit, a commodity whose infinite supply is guaranteed. Surely, this discovery warrants a Nobel Prize in physics. As long as the alchemists of Silicon Valley can turn bullshit into investments from venture capitalists, there will be no stability.
For what it’s worth, I use an iPhone. It’s a hand-me-down; I wouldn’t have bought it new. It’s an excellent appliance, but I know that when I use it, Apple is in the driver’s seat, not me. And I resent them for it.
But the only way we get there is free software. Not corporate-driven open source, and certainly not proprietary software.
I was a big believer in that for a very long time, too, but I’m not too convinced that software being free is the secret sauce here. Plenty of free software projects treat users, at best, like a nuisance, and are actively or derisively hostile to other projects, sometimes in plain sight (eh, Gnome?). There’s a lot of GPL code that only has major commercial backers behind it, working on major commercial schedules and producing major commercial codebases, to the point where even technically-inclined users are, in practice, largely unable to make contributions, or meaningfully maintain community forks, even if the licensing allows it (see e.g. Chrome).
I’m starting to believe that free licensing is mostly an enabler, not a guarantee of any kind. No amount of licensing will fix problems that arise due to ego, or irresponsibility, or unkindness. Commercial pragmatism sometimes manages to keep some of these things in check at the product development level (which, presumably, is one of the reasons why the quality of Linux desktop development has steadily declined as less and less money got poured into it, but I’m open to the possibility that I’m just being bitter about these things now…)
I can personally attest to this. I know it all too well.
I feel this is an argument for having slower, surer process as well. Write a spec before writing and landing something. Have test plans. Implement human usability studies for new UI features. A tree that moves slower is inherently more stable (because new bugs can’t find their way in, and old bugs have longer to be fixed before it’s rewritten) and gives more opportunity for community involvement. But I know this is a controversial opinion.
The only way we get there is with a society that is stable, sustainable, and respects your autonomy. And we haven’t had that since at least the Industrial Revolution (for certain, specific quantities of stability, sustainability, and respect; those have never been absolute). The switch from craftsmanship to mass production made it kinda a lost cause.
I strongly suspect that’s not really true. We have to admit that the vast majority of open-source contributors do not have the work ethic (and why would they, they’re working on open source in their spare time because it’s fun) to push a project from “it fits my needs” to “everyone else can use it”. The sad situation is that most projects are eternally stuck in the minimal viable product stage, and nobody is willing to put in the extra “unfun” 80% of work to polish them and make them easy to use and stable.
I know this is a problem I’m having, and I doubt I’m the only one.
This really is the problem.
Nobody wants to do usability studies, even informal ones. Hell, I just gather a group of friends + my Mum and have them sit at my laptop and tell me what they like and don’t like about the FOSS I’m writing, and from what I’ve gathered my UIs are miles better than most others.
Nobody wants to write a spec or a roadmap. Personally, I grew up loving discussing things and learning, and I view requirements gathering as an extension of both of those activities, so it’s really enjoyable for me.
I run my FOSS projects kind of like how most corps used to run corp dev (somewhere between waterfall and agile). I feel that the quality is higher than most others, though I admit it’s subjective. And, you always like what you make because you made it so it works the way you expect.
But in my opinion, the lack of process really is the problem. Yet if process were required, most FOSS wouldn’t exist, because nobody would really want to follow that sort of process in their free time. (Present company excluded.)
I really want to see more UX designers get interested and involved in FOSS. It’s very clear that FOSS is driven primarily by sysadmins and secondarily by programmers. If we want a usable system then UX designers need to be involved in the conversation early and often.
… and, with the risk of being catty, when there are UX designers, they are content in copying mainstream interfaces instead of innovating and trying to produce something original.
In the case of Linux, they’re often content with copying the obviously bad examples, too.
Every once in a while I’ll get up the gumption to get organized, or maybe clean the house. Then I’ll use that energy, and I’ll try to put a system in place to raise that baseline for good. Later on, when I don’t have the same zeal, whatever system I invented almost certainly fails.
The only systems that seem to survive are those that are dead simple. For instance, when I must remember to bring something with me, I leave it leaning against the front door. That habit stuck.
So when it comes to development process, do you think there’s some absolute minimum process that could be advocated and popularized to help FOSS projects select the tasks that matter and make progress? …Lazy agile?
That was a particularly vivid flashback of my teenage years you just gave me. Wow.
I’m going to have to think very hard on this one. I’m not sure what that would look like. Definitely a thought train worth boarding.
This is why if I intend to work on FOSS that I feel might be “large” (e.g. something I can see myself working on for many months or years), I setup an issue tracker very early. For me, dumping requirements, ideas, and code shortcuts I’m taking into an issue tracker means that if I’m feeling sufficiently motivated I can power through big refactors or features that take a lot of work to add, but if I’m feeling less ambitious, I have some silly bug where I decremented the same count twice that I can fix that takes only an hour or so of my time and results in an actual, tangible win. That helps me keep forward momentum going. This is what helps me, at least.
Interesting perspective. Thanks for chiming in!
What are the problem areas for you? I’m genuinely curious, as I have been using free open source as a daily driver for a few years now, and for a while the worst part was the lack of gaming support. There are enough Linux-specific titles on Steam right now to occupy my time that I haven’t even had to faff about with Proton yet.
I’ve been in teiresias’ camp now for a while: free software is the future.
I am fine. I love to tinker with things even if it prevents me from “doing the work”. As a very basic example: if my network stops working after a package update, I lose the 30m–1h it takes to find what’s wrong and fix it. However, there are people in the world for whom this kind of productivity loss is unacceptable.
I’m afraid that the open source community mostly develops for people like me (and you, from what you’re saying).
Wait, what?
Since the Industrial Revolution we’ve had pretty much constantly increasing wealth, human rights, health, autonomy, throughout almost all the world:
https://ourworldindata.org/uploads/2019/11/Extreme-Poverty-projection-by-the-World-Bank-to-2030-786x550.png
Sub-Saharan Africa was and continues to be a basket-case, but I don’t think anyone is blaming that on the Industrial Revolution. (Actually, leaving aside the racists … what are people blaming that on? Why is it that the rest of the world is dragging itself out of poverty, but Sub-Saharan Africa isn’t?)
“Sub-Saharan Africa was and continues to be a basket-case, but I don’t think anyone is blaming that on the Industrial Revolution.”
You’re not accounting for the possibility of a link between the shutdown of the annual monsoon of sub-Saharan Africa in the 1960s and coal-burning in Europe releasing sulphur dust, which rose during the Industrial Revolution. Obviously there were also politically-driven things out in the area back then too. See one argument here: https://extranewsfeed.com/the-climate-doomsday-is-already-here-556a0763c11d , referencing http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.652.3232&rep=rep1&type=pdf and http://centaur.reading.ac.uk/37053/1/Dong_etal_revised2.pdf .
“Since the Industrial Revolution we’ve had pretty much constantly increasing wealth, human rights, health, autonomy, throughout almost all the world”
Be really careful with interpreting data-driven arguments (Pinker, Rosling, ourworldindata, etc.) as truth - they’re popular in the tech world, rationalist/progress circles, 80,000 Hours, etc. I find myself unlearning parts of these narratives and trying to be more open-minded to less rigorous arguments these days. The argument I’ve heard is that you can have someone earning more money (which gets recorded on paper) who is actually undernourished compared to smallholder subsistence living (which the data may translate as living in poverty). You also have to factor in the changes in ecological function and land use that come with more people living in an urban niche.
As far as how all this relates back to free software, I don’t know enough - I see problems and interesting ideas both in free software movements and non-free software 🤷♂️
It was just me chiming in as usual whenever anyone blames capitalism, or industry, or so on for all the world’s ills[1].
Capitalism and industry have been responsible for lifting billions out of the default state of humanity: miserable poverty, disease, and tribal warfare.
And in first-world countries, we’ve gone in one generation from “one expensive, fragile, and basically toy-like 8-bit microcomputer” to “everyone in the family owns multiple computers, including hand-held battery powered supercomputers with always-on high-speed Internet connections”. 90% of Australians, for example, are regular Internet users. 90%!
Meanwhile the proposed alternatives have been responsible for millions of deaths in the last century alone.
[1] Hyperbole, but not far off the mark.
There needs to be a middle-ground between “pure” capitalism and “pure” socialism.
Both of them scare the living crap out of me.
But both of them also have very good, very useful ideas that the world needs to utilise.
The good ideas present in socialism (like caring for those who, through no fault of their own, are incapable of caring for themselves) are in no way incompatible with pure[1] capitalism, and are also far from unique to socialism.
All that socialism implies is that people are forced to fund that care, as opposed to doing it voluntarily (through charity, mutual societies, unions, religions, etc.).
To put it in hacking terms: socialism is a brute-force kluge ;)
[1] By which I assume you mean laissez-faire.
That type of socialism is like the GPL: it enforces behavior that one fears might not happen voluntarily, even though under capitalism the resources to do so would be available.
A country with mandatory private health insurance uses exactly the same maths to figure out costs, yet people see that as a huge problem because the USA fscked things up. I can attest to universal health care not being universal, not systematically, as the lines are long, with a heavy emphasis on anything that’s not pre-emptive, even denying treatment due to cost.
That worries me, since competing companies are incentivized to keep their customers. Is that closed-source software? Maybe.
But often FOSS seems to behave like this monopoly superorganism that can do whatever, like the new Gnome UI stuff. Good thing there’s at least some competition.
Except that it’s unlike the GPL in that if you don’t want to use GPL software, you’re free to choose something else. If you don’t want to license your software under the GPL, you’re free to choose a different license.
Socialism doesn’t give those subject to it any choice in the matter.
(Edited to clarify: as currently implemented by mainstream politics. Voluntary communes and the like are just fine by me. Not how I’d choose to live personally, but a perfectly valid choice. And, note, completely compatible with laissez-faire capitalism.)
That’s a fair extension to my analogy, sure. This does certainly start to break down if people compare BSD-licensed contributions and voluntary societal ones. Sadly that often degrades quite quickly into rich people buying a clean conscience without actually giving a crap, which is a nice parallel for Google’s FOSS effort.
I do agree with you and personally don’t really care if good charity came from a bad person/party, unless there are nasty strings attached.
Edit: bad wording maybe for “bad”. Nasty strings are t&c but also you can’t buy yourself clean with money from child-trafficking. These terms are too vague and subjective.
The common explanation I’ve heard in left-leaning circles is that because the countries are dirt poor, they have to take loans from institutions like the IMF, and those loans have incredibly shitty agreements which basically guarantee that the country remains poor because all of the value generated in that country is shipped over to the rich parts of the world. Many of them, for example, have enough fertile land and water to keep the population fed, but that land and water is instead being used to grow cash crops for the richer countries, which is part of the reason we enjoy cheap T-shirts and coffee. There are also a lot of other ways the current economic world order kind of screws over the poorer countries; a lot of it is described in the Wikipedia article on neocolonialism.
Some people go as far as to claim that capitalism requires an underclass, so in social democracies which try to achieve some degree of equality within the nation, the underclass has to be out-sourced to places like Africa or China. (That certainly seems to be what’s happening, but whether it’s required by the economic system or just a flaw in the current implementation of it is up for debate.)
Personally, I find those explanations fairly convincing, and I haven’t heard any good refutations. I’m far from an expert on the topic though, so there may be other, good explanations. My personal guess would be that the reason this topic isn’t discussed that much (at least in non-racist circles) is that we basically have to conclude that the rich parts of the world are responsible for perpetuating the problem, and that acknowledging this and fixing it would be really fucking expensive.
The book The Dictator’s Handbook (summarized in Rules for Rulers) offers another explanation. Foreign aid is a quid pro quo for policy changes. Aid recipients accept the loans and use it to enrich their government’s supporters.
Can you say more about the origin of that resentment? I’ve seen versions of this perspective often and I’d like to understand where it comes from.
For me, it’s a feeling of learned helplessness.
If I’m using my PinePhone and there’s a problem, it’s usually something I can fix. Even if it means running the onboard diagnostics and ordering a new motherboard (yeah, my WiFi just failed), that’s an intended use case. Sure there’s a binary blob or two involved, and I can’t personally repair surface-mount boards … but to a far greater extent than either an iPhone or an Android phone, it’s my device.
Contrast that with, say, an old Samsung phone. Want to upgrade the OS? You’re SOL if Samsung and/or your carrier has stopped shipping updates. Want to root the device, or swap OS? Expect a bunch of software to stop working (think Google Play, and games with overzealous anti-cheat for starters). Want to repair the device? Go buy some specialist tools and cross your fingers … but probably don’t bother, because OS updates aren’t a thing any more anyhow.
It is your device if you understand and enjoy technology to that extent, and I think this is an important point to drive home. Imagine you have a friend Foo. Foo uses a Mac, but is getting real tired of their Mac constantly telling them they can’t install a piece of software or that some application of theirs can’t read from a directory. Foo hears that all their cool tech friends are on Linux, so maybe Foo should be too. Foo installs a distro, and then tries to plug in two monitors with different DPIs. Big mistake; nothing is scaled properly. Foo searches online and sees references to font scaling, HiDPI support, this thing called Gnome, and other stuff. Foo hops into an online chatroom to ask a question then gets asked what their current Window Manager is. What?? Someone in the chat tells Foo that this is why they never use HiDPI displays, because it’s too much work to configure. What in the world, they just don’t use something because Linux doesn’t support it??
Half of my own knowledge of Linux comes from having gotten things to work for Linux. I remember in the mid-2000s when I had to run wpa_supplicant by hand on my wireless adapter and then add in some custom IP routes to make it play well with my router. I learned about ALSA by trying to figure out why my audio doesn’t work on startup (turns out the device changes device IDs on boot, and configs are based on the device ID, how fun). I learned about X11 and Xorg when troubleshooting issues with resolution, compiling display drivers, setting refresh rates, HiDPI, you name it. I learned LPR and CUPS by trying to get my printers to work. For me, this stuff is fun (to an extent, I don’t exactly enjoy having to whip out xrandr when trying to get my laptop to display slides to give a presentation.) But to the average user that is somewhat interested in freedom or configurability, “owning your device” shouldn’t mean having deep expertise in computing to troubleshoot an issue.
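For anyone who never had the pleasure, the mid-2000s ritual looked roughly like this. This is a hedged sketch, not the exact commands from back then: the interface name (wlan0), SSID, passphrase, and gateway address are all placeholders, and the networking commands need root and real hardware, so they’re left commented out.

```shell
# Sketch of the manual Wi-Fi bring-up described above.
# wlan0, MyNetwork, the passphrase, and 192.168.1.x are placeholders.
cat > /tmp/wpa.conf <<'EOF'
network={
    ssid="MyNetwork"
    psk="correct horse battery staple"
}
EOF
# wpa_supplicant -B -i wlan0 -c /tmp/wpa.conf   # associate in the background
# ip addr add 192.168.1.23/24 dev wlan0         # static address by hand...
# ip route add default via 192.168.1.1 dev wlan0  # ...plus the custom route
grep -q 'ssid="MyNetwork"' /tmp/wpa.conf && echo "config written"
```

These days NetworkManager or iwd does all of this behind the scenes, which is exactly the kind of knowledge-by-breakage the comment is describing.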
Sure, absolutely. I was merely answering the original question from my own perspective, as requested by @kevinc. (Well, to be fair, he didn’t request it from me, but I’m presumptuous like that ;-P ).
The irony! I’m posting this from a 1080p external monitor that I bought, at the time, because setting up display scaling on FreeBSD was on my TODO list.
I did appreciate the bonus perspective. :)
Samsung phones are a really bad example to use, since whatever-replaced-Cyanogen is still supporting the S3 last I checked (which is 11 years old at this point). Since the thing has a replaceable battery, you could reasonably expect to use it as a basic phone for years to come (even if the memory is anaemic by modern Android standards).
You might have slightly better luck with using Apple in your example, but they’re on a 7-8 year support cycle with OS updates too. Wait 8 years and see if you can still replace your PinePhone’s motherboard. I’d be moderately surprised if Pine64 was still making the board in that time. (I know they have a LTS A64 but I don’t know what, if any, commitments they’ve made re the phone.)
Yeah but Samsung isn’t. And a number of vendors whose software “supports Android” flat out refuses to run on phones with ROMs other than those approved by the manufacturer and carrier.
That some enterprising open-source developers have managed to hack part of the way around the problems posed by this awful ecosystem is great, but it doesn’t diminish the problems, or most of the feelings of helplessness.
Sure. Sacrificing autonomy begets dependence. Dependence begets learned helplessness, which in turn begets dependence, in a vicious cycle. Sometimes there are perfectly good reasons to sacrifice personal autonomy, such as when the needs of the many are in conflict with the needs of the one. A perfect example of that situation is Covid 19 and lockdowns + mask mandates, but that discussion isn’t relevant here. Needless to say, when I feel that some company is constraining my power to make decisions, I turn resentful. When I use an iProduct from Apple, terms and conditions apply. Terms and conditions are those things that the conquering army dictates to a surrendering foe.
Thanks for elaborating! If I understand, part of the problem is the popular norm of accepting the terms and conditions rather than thinking critically about them. That would lead those who do think critically and opt out to be relatively isolated in an uphill battle. I for one am unhappy with the QWERTY keyboard standard, not that there’s anything nefarious about it — it’s just something people don’t think critically about and consider alternatives to. We could have better, but we let inertia win. I don’t really have an entity to be resentful of, but I might if a corporation were behind it.
There’s a very real sense in which Ubuntu is a flaming rubbish fire of a desktop, but honestly it means I don’t have to deal with Mac OS X or Windows and that’s enough for me. I used Windows a great deal when I was younger, and I tried using OS X for five years after moving to the US with only the Macbooks that would fit in my suitcase; I really can’t stand using either one.
I just like the way X11 and tiling window managers feel (i3 now, formerly dwm) and I like having a vaguely UNIX-like OS underneath. For better or worse, Ubuntu is able to drive my desktops and my ThinkPad and runs the applications I want – with Proton it even runs many Windows games. So, while it’s a rubbish fire with all of the problems the article describes, it’s my preferred rubbish fire for right now. I’m not sure the grass is really greener anywhere else.
This comment got longer than I planned. TL;DR: Mac good, then Mac outrageously bad, now Mac good again.
I got my first Mac in 2013, when MacBooks and MacOS were peaking. Within a few years, the steady decline of MacOS began, to my great disappointment. All the slowdowns and extremely blatant UI bugs made my $3k+ work MacBook Pro feel like a cheap piece of junk.
When my 2013 MacBook Pro replaced my ThinkPad T61 (from 2007ish?), I remember being absolutely blown away that the MacOS lock screen rendered before I could even open the lid enough to type my password. Not a particularly spectacular benchmark, I know, but it illustrates how far Macs fell: just a few years later my work MacBook Pro would take 10-15 seconds to become responsive when waking up.
And the bugs. My god, the bugs. I tried for a good 30 minutes to connect Apple Music to my bigger B&W speakers using shairport-sync. Never worked, even though my iPhone works fine. It did, however, give me lots of confusing feedback with checkmarks checking and unchecking themselves, spinners, and completely disrupting the rest of my system audio. Eventually I rebooted to end the pain.
But I heard the new M1 Macs fixed everything, so I took a chance and got an M1 MacBook Air. If it still sucked, I was ready to return it and switch back to Linux after all this time.
I have to say, it dramatically exceeded my expectations.
My new M1 MacBook Air has that old snappy responsive feel I loved about my first MacBook Pro. Passes my lame open-the-lid-from-sleep test with flying colors. And connecting to my speakers works instantly and flawlessly—even allowing me to send only Apple Music to my big speakers, and keep using my desk speakers for the rest of the system.
The M1 does literally everything better. I could probably come up with dozens more examples, but for whatever reason those two problems infuriated me the most.
Anyway, I’m not trying to sell you or anyone else on Macs. I’m just ecstatic my preferred OS doesn’t suck anymore. I don’t know when you moved to the US, maybe your 5 years of MacOS happened smack in the middle of the dark days? Or maybe you just don’t like MacOS. Obviously, that’s your prerogative. Even though you call your setup a rubbish fire, I’m glad you have something that works for you!
This isn’t lame, this is human usability.
Story time: this morning I grabbed my Linux laptop running KDE Plasma for a few files off of it before I head in to work. (Also noticed my article got posted here, hey!) When I opened the lid:
black screen
“22:40 Sunday”
redraw
“07:04 Monday”
And yeah, it pissed me off.
Then I grabbed my Mac. When I opened the lid:
I can’t fathom how people are willing to sacrifice so much freedom because the screen contents may be stale. Somehow it never hurts my productivity: if I open the lid, I know I am opening the lid. I won’t be in the flow when opening the lid, and I will already know what time it is when I open it. Is it condescending of me to call it vanity?
This is exactly the perspective that keeps non-technical users from using FOSS. Their concerns are relegated to “vanity” by most. What is “vanity” to you may be essential to others. So when those others ask why this behavior occurs and receive the above response about “vanity not freedom”, they just politely drop FOSS and move on.
Hard agree. Also, I’ve wasted about ten seconds per day on this (~3-5 seconds * 2-3 lid events per day).
I’ve been running Linux on this laptop since 2015.
21,900 seconds, or 6 hours, of my life has been wasted on this “vanity” bug regarding KDE’s lock screen.
Tiny performance losses can add up to huge amounts of waste when multiplied at scale like this, especially on a daily-driven workstation used for multiple years. Think of the board games I could have played with my Mum, the meme videos I could have watched with my friends, and the snuggles I could have given my cat in that time. (Let alone the open source code I could have been writing.)
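The back-of-the-envelope arithmetic above checks out, for what it’s worth (assuming the stated ~10 seconds per day and six years of use):

```shell
# ~10 wasted seconds/day, every day, for ~6 years of daily driving
seconds=$((10 * 365 * 6))
hours=$((seconds / 3600))
echo "$seconds seconds is about $hours hours"
# prints: 21900 seconds is about 6 hours
```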
And this vanity was a concern already with the original Macintosh. https://www.folklore.org/StoryView.py?project=Macintosh&story=Saving_Lives.txt&sortOrder=Sort+by+Date
Do you mean in quality (in your opinion) or popularity?
This was some time ago so I don’t perfectly remember all of my reasoning. In quality most definitely, though in my opinion popularity played a huge role in software quality for developers.
As a long time Linux user I wanted to continue using tools I was familiar with, which meant the shell, vim, package management, and the numerous command line tools I had grown accustomed to. Thus the quality, maturity, popularity, and package availability of Homebrew played a significant role in my evaluation.
The late 2013 model I got was the first with a Haswell processor, and, if I recall correctly, NVMe SSD. I don’t remember why, but Haswell processors were supposed to have been a big leap over the previous generation. I think for battery life? And NVMe bandwidth was mind-blowing compared to SATA III.
Lastly, it was still hard to beat my ThinkPad T61’s 1680x1050 display at the time, and I flat out refused to compromise on screen real estate, so the Retina display scaling to 1920x1200 was a big plus. I was in college at the time, so I didn’t usually have an external monitor. Whereas today I use my M1 MacBook Air with a 32” 2560x1440 monitor for serious work.
In the technology adoption life cycle I’m squarely in the early majority, by no means an early adopter. I started pondering a switch to Macs after continued evidence from developers around me that OS X did everything I wanted it to do. The hardware leap of the late 2013 model sealed the deal for me and I ordered one as soon as the first OS X Mavericks patch release was announced.
Yeah. I remember twenty years ago. I was all hyped up on the enormous potential of the Linux desktop and I was really looking forward to the day Linux would make Microsoft go and hide in shame.
I’m really sad that the Linux community fumbled the ball regarding the desktop. Instead of better systems we just got more of them - everyone was making their own. Instead of pulling together, we just got bitching and bickering.
For the last four years my desktop has been macOS and I love it.
I generally use all 3 major operating systems (sorry BSD proper) on the desktop and I’ve found that I can’t get a setup I like with anything other than Linux desktop. The programmability of the interface I use (sway) is so far ahead of anything I can achieve on MacOS or Windows. Additionally, the ease at which I can modify even low level parts of the operating system or see what’s going on when something breaks makes Linux far and away the best option for me on the desktop.
Mac is second, but it’s not a close race. Apple has extraordinarily sane defaults and a snappy interface, but doing something that takes a few minutes of shell scripting in my Linux system like changing my wallpaper via keyboard macro is an annoying experience to set up on the mac. It feels like apple wants you to use the system as they’ve designed it, and if you want to do anything else, you are wrong. I don’t like that, and I don’t like that my setup hinges on built in programs that haven’t been updated in years and don’t include obvious features that should be there.
The worst is Windows. No sane defaults. No real configuration. The only reason I run windows these days is to play games. I have no idea why things break that worked perfectly fine before updates. The updates don’t work, and take forever. The worst part is the constant advertising and pushing of their crappy corporate products. I have a pro or whatever license, and yet I constantly get bombarded with their crappy games, stupid bing integrations, and one-drive crap. I get mad every time I load up the machine.
That’s an odd example, since setting the desktop wallpaper is exposed via AppleScript (actually, OSA Scripting, so you can use JavaScript if you prefer). Triggering things from the keyboard at any point requires either registering a global shortcut or setting it up as a text service. I have never done this, but I’m pretty confident I could do it in half an hour - including time spent looking up things in docs - on macOS; on Linux I’d probably spend an hour or more working out what the right tools to try are.
I typed “trigger desktop wallpaper change applescript” into Google. The second result looked like it had something promising. Found this at the bottom of the comments:
I haven’t really used AppleScript or Automator, but it took 30 seconds or so to find the “Run AppleScript” ‘action’ (?). Pasted that in there. Pressed play. My background changed. It took less than two minutes all up.
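For reference, the snippet in question is presumably some variant of the well-known Finder one-liner. A minimal sketch follows; the image path is a placeholder, and actually running it requires macOS (osascript doesn’t exist elsewhere), so the invocation is commented out:

```shell
# Sketch: change the macOS desktop wallpaper via AppleScript from the shell.
# /path/to/image.jpg is a placeholder.
cat > /tmp/set-wallpaper.scpt <<'EOF'
tell application "Finder"
    set desktop picture to POSIX file "/path/to/image.jpg"
end tell
EOF
# osascript /tmp/set-wallpaper.scpt   # macOS only
echo "script written to /tmp/set-wallpaper.scpt"
```

Binding it to a key is then a matter of whatever global-shortcut mechanism you prefer (Automator quick action, a launcher, etc.), as discussed above.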
I can appreciate and respect this, but I have never once wanted to change my wallpaper via keyboard macro. I think I’ve had the same wallpaper since 2013 even, haha.
This really does come off as the stereotype that Linux is made by hackers, for hackers. I have other interests, a family, a photo collection that makes digiKam choke, faxes I have to send to banks and I have to fight LO to make non-ugly cover sheets, etc. I may be a hacker underneath, but I value my time and other interests. And I think that’s ok too. I just wish there was an option on Linux for people who want to just get things done.
And one of those other interests is older hardware, which the community/ecosystem is now actively trying to crap on, too.
OP wants to use an S3 Trio64 with Wayland, yet migrates to a much newer Mac. Apple’s hardware support is in general pretty long, but not “runs the newest version of macOS on hardware from 1995”.
Not to mention that macOS has been terrible wrt stability, axing 32-bit support altogether in Catalina.
If I have to give up older hardware support anyway, I’m going to use something that doesn’t make me tear my hair out every time I try to do something with it.
The point is that Wayland is making it so Linux has higher system requirements than Catalina. So, why not just run Catalina then? Linux doesn’t meet any of my requirements any more as noted in TFA (not just hw compat but all the other points).
This isn’t an article in support of ABI stability – though you could argue ABI stability would be a side-effect of most of the improvements I desire, and it is something I do find desirable.
This is an article in favour of stability in the sense of does not crash and has predictable releases.
Also, a properly written CarbonLib application can run on any system from Mac OS 8.1 to macOS 10.14 Mojave. How do I write an application for Linux that can run on Red Hat 5 (not RHEL 5) and Ubuntu 18.04, and still appear reasonably integrated with the system across all versions?
That would mean that it uses autotools, I think it is fairly likely that it would work on both those systems ;)
By “appear” I mean UX-wise.
CarbonLib applications look native, in the way that Qt 5 apps look in Plasma, or in the way Gtk apps look in GNOME. The way Qt 4 apps don’t look in Plasma any more. The way wxGtk apps don’t look in Mutter/Wayland.
It could be possible, with a good widget engine, to get Qt 3 apps to look nice in Plasma. With TDE you could even build Qt 3 on modern versions of Linux. Then you could, maybe, get an app that looked native on Red Hat 5 and Ubuntu 18.04.
I would love to see this just for the giggles, tbh, but I don’t feel like investing the time in trying it myself.
That is highly unlikely for anything other than really small command-line applications. There is a pretty wide range of software version upgrades between those two releases. Chances are the kernel, which does its best to not break userspace and stay backwards compatible, is the only thing that won’t have changed the interfaces you need to consume. Almost everything else your software depends on has likely had breaking releases in the time between those two distro releases. Unless you are statically compiling (something the distros frown on), you are probably going to have to do a significant amount of work to make this succeed in both worlds simultaneously.
The Wayland / X11 dilemma is nightmare fodder still, after 13 years since Wayland’s first release. That’s like a death clock of our own devising… I’m kinda more scared of the day X11 gets irrevocably deprecated (if Wayland hasn’t become ready by then) than of the 2038 bug.
Fully agreed.
Ugh, another one bites the dust — without giving the BSDs a chance. Try OpenBSD, the out of the box experience is the most pleasant one I’ve had yet, more so than macOS even, because it does exactly what I want it to.
I have to use a MacBook for work, and while the UI is pretty and all, I find its tiny quirks and overall rigidity very annoying. I can’t do things the way I’d like to.
I gave FreeBSD a serious chance for months. I hacked a little on i915 trying to fix some backlight bugs. I contributed to what became the implementation of the utimensat syscall for the Linuxulator. I was even featured on an episode of the original BSDNow in 2015.
I suppose you could argue some value of stability and release engineering still exists in FreeBSD, but they’re even more hostile to older hardware, and the desktop experience is even worse as more and more software assumes Linux.
The entire time reading this article I was thinking that NetBSD would be a much better fit for you than Linux.
One reason I like my OpenBSD desktop is that it is still running great on my 2012 Toshiba Portege - I’ve upgraded the RAM to 16GB and the HDD to an SSD - the only drawback is that getting decent batteries for portability is now becoming an issue. But I have heard that OpenBSD may be coming to the M1 in the near future :~)
did we read the same article
The article complains Go’s PowerPC baseline is POWER8. OpenBSD’s PowerPC baseline is POWER9…
There are two power ports: powerpc64 (POWER9) and macppc (G4/G5 and maybe G3).
Browser support in macppc was pretty abysmal last I checked though.
macppc supports all New World models, including the G3.
I think NetBSD is probably the best of the BSDs for running on PowerPC, if we’re talking about it; in addition to macppc they have ofppc, amigappc, bebox, etc.
As a Mac user who mostly likes the way it works on the surface, I wish it were as straightforward beneath the surface as a BSD.
The main points are: stability, portability and obsolescence and how they are a struggle.
But then the author moves to the latest macOS? Where is the stability? Apple is famous for breaking compatibility whenever they can push a new proprietary API to ensnare devs (Metal?). Where is the portability? (Apple only cares about the hardware they sell, of course.) And where is the (lack of) planned obsolescence? This is the whole long-term strategy of Apple: tech as fashion and short hardware update cycles.
So this is why the author leaves the Linux desktop? He could run a recentish notebook, with ARM or x86 cores, and Linux would be perfectly fine. None of those issues would be valid then.
This is a weird take.
On the user side of things :). A few months ago I got one of them fancy M1 MBPs, too, after not having used a Mac since back when OS X was on Tiger. Everything that I used back in 2007 still worked without major gripes or bugs. With a few exceptions (e.g. mutt) the only Linux programs that I used back in 2005 and still worked fine were the ones that were effectively abandoned at some point during this period.
Finder, for example, is still more or less a dumpster fire with various quirks, but they’re the same quirks. Nautilus, uh, I mean, Files, and Konque… uh, Dolphin, have a new set of quirks every six months. At some point you eventually want to get off the designer hobby project train.
In this sense, a lot of Linux software isn’t really being developed, as in, it doesn’t acquire new capabilities. It doesn’t solve new problems, it just solves the old problems again (supposedly in a more “usable” way, yeah right). It’s cool, I don’t want to shit on someone’s hobby project, but let’s not hold that against the people who don’t want to partake.
(Edit: to be clear, Big Sur’s design is hot garbage and macOS is all kinds of annoying and I generally hate it, but I wouldn’t go back to dealing with Gnome and GTK and Wayland and D-Bus and all that stuff for the life of me, I’ve wasted enough time fiddling with all that.)
THIS, so much.
Well, just step off then?
Unlike Apple, you have some options with open source. Don’t like the latest Gnome craze? Get MATE, which is basically Gnome 2. There are lots of people who keep old window managers and desktop environments alive and working. The Ubuntu download page lists a couple, but many more can be installed with a few commands.
I think I have been running the same setup for six or seven years now, no problem at all.
If you try to grab a Gnome 2 box, you’ll find that Mate is pretty different even if the default screen looks about the same. Not because of Mate but because of GTK3 general craziness. Sure, the panels look about the same, but as soon as you open an application you hit the same huge widgets, the same dysfunctional open file dialog and so on. It’s “basically the same” in screenshots but once you start clicking around it feels pretty different.
If all you want is a bunch of xterms and a browser, you got a lot of options, but a bunch of xterms and a browser is what I used back in 2001, too, and they were already obsolete back then. The world of computing has long moved on. A bunch of xterms and a browser is what many, if not most experienced Linux users still use simply because it’s either that or the perpetual usability circlejerk of the Linux desktop. I enjoy the smug feeling of green text on black background as much as anyone but at some point I kindda wanted to stop living in the computing world of the early 00s.
I’ve used the same WindowMaker-based setup for more than 10 years, until 2014 or so, I think. After that I could technically keep using it, but it was mostly an exercise in avoiding things. I don’t find that either fun or productive. I kept at it for 6+ years (basically until last year) but I hated it.
(Edit: imho, the options are really still the same as they were 15 years ago: Gnome apps, KDE apps, or console apps and an assortment of xthis and xthat from the early/mid-90s – which lately mostly boils down to “apps built for phones” and “apps built for the computers of the Hackers age”. Whether you run them under Gnome, KDE, or whatever everyone’s favourite TWM replacement is this year doesn’t make much of a difference. Lots of options, but not much of a choice.)
As one of the people who wrote the offending Mesa drivers, I agree that there are many problems with the Mesa codebase which are endemic and probably can’t be fixed without massive rewrites. Big-endianness is a constant hassle because bit-twiddling in C is a chore and abstraction is tantalizingly difficult. Performance is difficult to understand and largely boils down to whether or not operations are GPU-accelerated. I used to fantasize about rewriting Mesa in Haskell; these days I suppose I’d pick OCaml for that fantasy.
Do you have a high-level sketch of a Mesa rewrite in OCaml? That sounds fascinating.
Many of these comments are accurate, but I’ll point out that Wayland continues to improve. Red Hat has announced that NVIDIA GPUs will work this summer with a new driver release from NVIDIA, and remote desktop is finally supported (in Fedora 34). They say that in Fedora 34, Wayland is now ready for general use; before that it was in beta mode. X11 is better for supporting old hardware, but one of my motivations for switching to Wayland is better support for new hardware: e.g., trackpad gesture support like pinch-to-zoom and 3-finger swipe. There is a dwindling number of people willing to work on X11 (is it down to one guy yet?), so X11 will continue to get worse on new hardware, while Wayland will continue to get better.
Also, I’m migrating my laptop from Mac to Linux/Wayland this year, because even though the M1 CPU is hot right now, and the new magic keyboard is marginally acceptable instead of a crime against humanity, MacOS software keeps getting worse as a development environment for free software. bash hasn’t been updated since 2007, gdb stopped working a few years ago, OpenGL is deprecated and will likely disappear in a few years, etc.
A major issue that gets in the way of having stable open source software is the rapid evolution in the hardware space, plus the hardware is closed source and can’t be manufactured by artisans (unlike the software). Laptops only last so long, then you must upgrade to the latest hardware, and open source is on a treadmill keeping up with the hardware.
Let me know when I can have WebEx meetings on it. (Seriously. Not trying to be passive-aggressive, I’ll give it a whirl.)
I guess that depends on your view. My C++14 project that uses CMake and Boost needed no changes to build and run on Sierra and Mojave.
I use zsh, so I’m not sure why the bash version matters. Maybe autotools, but if you’re building things with autotools you’re likely either using Homebrew or building your own stuff in /opt anyway, so I’m not sure why it would matter then. lldb seems to work fine for me.

On Linux, too. Vulkan is the New Way. At least that’s what all the Linux 3D people tell me.
Well, technically they do, but the “so long” can be much longer than most people seem to realize. I am typing this on a six-year-old laptop and it does not show any sign of aging other than that the buttons on the trackpad are becoming scratched. I got a brand-new kickass laptop for work last summer and I really don’t notice much difference between the two. The screen on the new one is better, but this one is fine too.
You don’t have to get the latest shiny icons to be productive. People say they know that, but they generally don’t act like it. Computers don’t get slower over time: the clock speed stays the same, as does the amount of RAM. If you experience slowness, just remove some cruft and you’ll be happy again without spending hundreds of dollars.
Consumers are as responsible for keeping this treadmill running as the manufacturers are.
I missed that line entirely.
I’m writing this on a 10 year old MacBook Pro. The “s” key is a tiny bit flaky, but otherwise it’s still running fine. I expect it to last me another year or two until a second-gen Arm-powered MBP exists.
This is an insightful post and interesting commentary. I haven’t finished reading it all and there’s plenty to digest but I’m reminded of another blog entry that was posted on this forum: https://blogs.gnome.org/tbernard/2019/12/04/there-is-no-linux-platform-1/.
I agree with many of the points people have made in this discussion. I can work in a Linux desktop environment. But I develop software for a living so I wouldn’t expect a typical business user or grandparent to feel the same way. I quite like my M1 MacBook but I have concerns with obsolescence and privacy. I’m anxious about going outside the walled garden.
The idea of the Linux desktop is fertile ground for opinionated discussion. It sounds like many people want a free-as-in-freedom alternative to the Mac platform. Something that is stable, but not necessarily supported (3rd party support services would be an option). Something that is very simple to use out of the box but is open to customizations. Something that developers know once they deploy to it they can be confident that any users who stay within certain use guidelines or policies will be able to run their code for at least 10 years.
Superficially it appears like the Linux ecosystem and technology stack could be a good candidate for this hypothetical end user platform. But the more I think about it the less confident I am in that idea. Who decides what’s in and out? Who lays out design guidelines? Where are apps found? What is the payment system for commercial apps? Who decides when the ABI breaks? When the hardware changes?
Personally, I’d pay for an open source, freedom respecting platform but I appear to be in the minority.
Whoever makes the system. (Gee, like Apple with macOS and MS with Windows, huh?)
Whoever makes the system. (Seeing a pattern?)
The maker of the system makes a repository of software available. (Distros already do that, to be fair.)
If this is a libre system, then more than likely it uses something like Liberapay.
Tying ABI to hardware is what did early Solaris (and, to a lesser degree, IRIX) in.
I would say that RHEL might be the best example of “what can a Linux that cares about backwards-compat look like without radically changing the ecosystem”, which can be either an endorsement or an indictment depending on your opinion of RHEL.
RHEL is a good example. So basically we need something like a Red Hat for hardware and cloud services to complete the alternative stack.
I think elementary might be the closest one to realizing this kind of thing. They’re building a platform, with release cycles, prescribing the libraries used, a storefront with payment for developers, HIG, etc. Basically, the disparate pieces distro and desktop devs have, but as a single unified package with some kind of vision behind it.
The linked Gnome dev blog post is worth reading and reflecting on; it has cathedral-and-bazaar overtones, and they’re pro-cathedral. Given my experience, I suspect they’re probably right.
the author dislikes that google does not want to accept portability-patches, but i kind of understand. if you accept those patches, now you have to maintain them:
this is extra effort. sometimes it is worth it, and sometimes it is not.
the phrasing in the article like “the Talos user community offered a PowerPC port” sounds like all the extra work is done by not-google, but that’s not how it will be long term.
The company that produces the Talos offered Google free CI hardware.
I realise that there would be a non-zero cost to Google, but pardon me if I don’t cry if one of the largest companies on earth (if not the largest) might have to spend a few minutes per release cycle scrolling past a PowerPC block of code.
What annoys me so much about the Wayland vs X thing is people claim that X could not be fixed, but this is an obvious lie - in fact, many of the things people claim to be unfixable already are fixed! The wayland FAQ doesn’t even agree they couldn’t fix X - it says explicitly they could, they just chose not to because they saw an opportunity to do something else more fun. So they’re breaking everything because they want to goof off instead of doing valuable work.
And I’m sure they are going to try to shove it down my throat eventually, just like that awful PulseAudio disaster (which btw is STILL broken, while ALSA is finally actually pretty OK). And I think at that point… I’ll have no real choice but to fork it myself. I can’t stand Mac, and Windows continues to annoy me more and more with each passing year too. Competition is supposed to make things better, but instead it is a race to the bottom. (Though at least most of my old programs still work on Windows!)
There are already enough things that go wrong with our software stacks, but I would say that longer release cycles would compound a lot of the issues mentioned when discussing these topics.
It leads to stuff like having to maintain N package repos (I can’t install gettext 0.21 on Ubuntu 20.04 from the repos despite it being extremely likely that it would “just work”). The whole “huge release of software at once” thing means you get to deal with about 20 problems interacting with each other at once. And you don’t get all the fun stuff quickly!

Macs have this problem, where the once-a-year feature release leads to nobody knowing why anything is broken, but it’s all broken and basically impossible as a user to fix. Meanwhile Chrome ships its constant releases. When there’s an issue, it’s like… one issue, and there are many people hitting it. So there’s extremely quick triangulation on potential fixes!
There is of course the “I don’t want my UI to change out from under me every 3 months” criticism of rolling releases. But I think that’s more of an issue with the underlying software development strategy (and if anything, I think rolling releases have made software better at maintaining user preferences, since it’s harder to ask people to reconfigure everything at once). You can ship bugfixes, new features, etc. on a continuous basis. And without big “N.0” release pressure, there’s less of an “oh, here’s the big new feature for you now” feeling.
I understand OP. Before the M1 I was thinking Apple was pushing me away and I’d have to bite the bullet and get going on the Linux desktop. However, my main stumbling blocks are always input handling (the touchpad specifically), HiDPI, and keyboard bindings.
Input on Linux sort-of-kind-of works, but it’s just so far behind Apple: consistent, smooth, inertial scrolling with just the right amount of acceleration. Linux has plenty of knobs for it, but I suspect there are deeper issues preventing it from getting the exact precision feel of macOS.
HiDPI support is maybe why X11 is at the end of the road. Again, it sort-of-kind-of works, until you happen to launch some rarely used app and it’s illegible. Inconsistent font sizes, multi-monitor setup issues. Again, there are plenty of knobs to fiddle with, but it’s a boring job.
Keyboard bindings for basic functions like copy-paste. In many apps you can remap, but I always find apps where you can’t, and that becomes the one app where muscle memory doesn’t work. Like… Ctrl-C means something else entirely in a terminal window, and it creates this dilemma: remap all the other apps to Some Other Key, or live with the inconsistency?
Command Is Not Control is something that took me a while to learn when I originally switched from WinNT and Solaris to Mac OS back in the day, but is such a powerful paradigm. Thank you for reminding me of that. It was something I wasn’t even conscious I was doing when I went back to Catalina!
finally, something BeOS got right too!