most linux distributions on the desktop suck because instead of focusing on the strengths (composable tools which have a tight scope), they try to imitate windows and macos, failing spectacularly and mostly replicating the weaknesses. i for one can’t name 50% of the things in an ubuntu install which are doing magick in the background. if they break, the most sensible option is to just reinstall.
Most people just want to browse the web, check their email, type a document, etc. What they don’t want to do is learn about all those composable tools or otherwise spend a whole lot of time figuring out details of how their system works, which is perfectly reasonable. Are there trade-offs? Sure; but this doesn’t “suck”, it’s just different people with different requirements, priorities, and usage patterns.
(Edit: um, this turned out more ranty than I thought it would, so just to be clear – I absolutely agree with you here, and my rant below is pretty much orthogonal to your comment. Also, I don’t think imitating Windows or macOS is a problem – but I do think that this imitation effort is driven by, and anchored in, a bunch of arrogant misconceptions about “non-expert” usage habits. That’s why it ends up “mostly replicating the weaknesses”, as @rbn nicely put it: because it’s guided by a sort of caricature image of the “novice user”, and we end up with software whose “novice-friendly” features are exaggerated to unrealistic proportions, so they end up being difficult to use for everyone, including novice users).
I hear this a lot and every time I hear it, I have the same thing to say: spend a few weeks watching the people who “just” want to browse the web, check their email, type a document, and you’ll find that these things are not at all the simple things that the FOSS UX community insists on believing they are for some reason. Lots of these people don’t, and can’t, do them on Linux, not because it’s too complicated, but because it lacks the functionality they need. (Ironically, in large part because lots of “novice-friendly” software is written by people who have never stopped to ask a novice what they need, and have this silly picture in mind of “Aunt Tilly” who just wants to resize a photo every once in a while and is confused by all those buttons).
My informal experience with teachers and seniors (mom’s a teacher and, of course, has a lot of teacher friends, and I used to help a volunteer-backed organization for seniors) has cured me of this sort of programmer’s arrogance:
“Just type a document” can involve sifting through a collection of about 20,000 pieces of clip art, pictures and cartoonish things, and about 4,000 Word documents, all of them covering four years of elementary school. Most file managers and desktops are comically inadequate for that (the default is “huge icon mode” and switching to a normal mode for all folders involves sifting through menus and configuration windows like crazy, “symbolic icons” are just anonymous blobs of black and white when they’re small, the file manager chokes trying to thumbnail these, etc.). It’s also kind of hard to do that with an “open file” dialog that lacks a thumbnail option. Also, populating the GTK “Open file” dialog with about 30 bookmarks (4 years of school x 5-6 really important subjects + a few folders for school-related paperwork) makes it painful to watch.
LibreOffice is in fact surprisingly adequate for this, until you need to print something.
Smooth/inertial scrolling with a mouse looks smooth for like 30 seconds and that’s great if you need to open one PDF every once in a blue moon. If you do it for more than five minutes it makes you dizzy. If your job consists primarily of browsing PDFs and DOC files, there’s a good chance you won’t make it through the day without throwing up. That’s why it’s not the default in most commercial PDF readers, and also why virtually all of them at least allow you to disable it.
Lots of senior people don’t exactly have an IT genius around to help them when something breaks. Their children and grandchildren are away and aren’t really keen on helping their aging parents or grandparents with their computers. “Just browsing the web” or “checking their email” isn’t too easy when, every six months, you discover that a feature has disappeared (‘cause “no one was using it”) or that it’s been re-designed so it’s “more usable”.
“Just checking their email” sometimes involves sending your friends pictures of your grandchildren, or sending your grandchildren a picture from a cruise or whatever. Back to sifting through a few thousand pictures! (Edit: thankfully, this is becoming less and less of a problem with cloud-stored pics – which is another can of worms in and of itself of course, but oh well. Thing is, sifting through thousands of pictures isn’t some power user task, not in 2020, when you can store a few thousand of ’em on a phone and people really do return with hundreds of them from each vacation).
All of that routinely happens on entry-level laptops. With large fonts, Breeze and Adwaita are impossible to use on 1366x768 displays. Lots of dialog windows simply don’t fit on the screen, and there’s often hardly any room left for application content. The huge widgets are supposed to help with accessibility but – not that it’s a surprise – if you have poor eyesight, it’s bigger fonts that help, not bigger buttons per se. (inb4 “but touch-enabled laptops are the future”: try buying one of those on a senior’s income, and then try using it when you’re 70 and just holding your arm up for five minutes is uncomfortable).
Unsurprisingly, desktops developed for “most people” (e.g. Gnome) don’t fare much better than those that aren’t. In fact, Gnome’s most successful “incarnation” – Ubuntu’s – is a pretty significant departure from upstream’s focus on visual simplicity and straightforwardness. And the biggest trouble we’ve had with either of them isn’t in esoteric things like whether the icons are the right size or not, it’s been primarily in the fact that support for desktop icons is finicky, followed closely by poor contrast that makes window content hard to read (Windows 10 also does some dimming for inactive windows, but only in menu items – at least for “legacy” Win32 apps. For some reason, it’s now fashionable to dim everything you can because novice users don’t use more than one application at a time and easily lose their focus. Hah.).
everyone “old” has used computers for much of their lives now, and they are the generation which may even have touched a dos shell. the problem is that current ux trends, regardless of where, are constantly throwing away the experience of their users. this isn’t limited to “old” people. i’ve not used android for some years and am always lost for half a minute if i try to change some settings in it. with linux, this isn’t limited to gui applications, but extends to the complete system, as everyone seems to love reinventing wheels, only in a slightly non-round fashion.
It’s my impression as well. Technology is now pervasive enough that “non-expert” users actually do use computers a lot, not because they want to become experts or because they’re the members of the computer-using vanguard of their generation, but because, well, what else are you gonna use?
The teachers at the school where my mother teaches have all been using computers to do all their school-related work for at least 15 years now. They all check every box in the “not an expert” list, from “will click Yes in every dialog box” to “has no idea what’s inside a computer”. But that doesn’t mean they only use basic word processor functions or have no more than twenty or thirty files lying around. They do – they’re required to do – every school-related thing on a computer now, and have twenty years’ worth of silly children’s songs, cartoons, arts & crafts crap and stuff lying around. Sometimes it’s neatly sorted in a bunch of folders under Documents, sometimes – as you might expect – in a bunch of folders on the desktop, because that’s where they’re most easily accessible.
The same goes for pics. Between school trips, birthday celebrations and whatnot, my mom must have tens of thousands of pictures on her hard drive. Every time there’s one of these, all the children, and sometimes all the parents, send her all the pictures they took. Watching someone try to make sense out of all that using Eye of Gnome’s super-decluttered interface (which displays only a row of thumbnails at the bottom of the screen) is almost painful.
This isn’t some “power user” usage habit. This is 2020, when everyone has like 10,000 pictures on their phone and takes twenty selfies every time they go out. The idea that “simple users” have “rudimentary requirements” dates from back when computers really were something you used only when good ol’ pen and paper weren’t good enough, or when you were forced to. That doesn’t happen anymore.
This is a big qualm I have with encouraging “non technical” users to switch to desktop Linux. Because the UI changes all the time, you have to throw away all the skills and conventions you learned for older versions. GNOME 2 was fine. Sure, GNOME 3 might have some improvements (I would actually argue that it does not, but that’s another debate), but now existing users don’t know their way around any more. As a non-GNOME user looking in, it seems like GNOME 3 has continued to frequently make significant changes to the UI/UX that would throw me off if I did use it.
I think changing the UI out from under the user is very disrespectful of their time investment in the platform. If GNOME was marketed under the notion of “we’re just making what we want to use personally, and if you don’t like it bug off” I could buy the argument “but it’s a FOSS project, they can do what they want”. However, if you want to market your project as being friendly for “non technical users”, I think there is a certain expectation you will provide more in that area than just lip service.
In my experience with “old” or “non technical” users, they are normally people who have to use computers as a means to the end of getting their real work done. They are usually sick of UIs constantly changing much more than they ever were of some drawbacks in one UI or another. In my anecdotal experience, these people couldn’t care less if the icons are skeuomorphic or flat, or whether the menus slide in or fade in, or what the drop shadow radius is. They just want to be able to learn how to use it once and then get on with what they actually want to use the computer for without having to learn how to use it all over again.
MATE is still fine. It’s well-maintained and better than GNOME2 without being worse in any regard. It’s not breaking the UX between releases in any way either. It just works.
Agreed, but does that ‘most people’ apply to ‘most people who use the Linux desktop’ or ‘most people who use desktop computers’? If we replace ‘Linux’ with ‘house’ or ‘car’ we’d clearly be talking about a mass-market consumer commodity, which I am not sure Linux really is. Maybe it would be better to focus less on the mythical ‘normies’ and accept that desktop Linux is a niche created by and for its own niche users?
That being said, I do think having Ubuntu for example as a go-to normie friendly distro is wonderful. Just seems few projects can do that effectively.
This is very true. I was able to show my family how to install their linux box, because the installer UI is self-explanatory and because the desktop after this “just works”*. No “but you have to know XY to start wifi/use the GPU/set up email/printer/bluetooth” or such bullshit.
All the people who want that amount of customization can still go and use their i3 and be happy. Just pick bare metal in the installer and be done. You won’t notice anything different.
*Yes I’m aware of the warts you can stumble upon, I’m using linux daily, just right now..
Most people just want to browse the web, check their email, type a document, etc. What they don’t want to do is learn about all those composable tools or otherwise spend a whole lot of time figuring out details of how their system works, which is perfectly reasonable.
which has worked the same for at least 15 years. i can still use claws-mail, browsers are all the same now, abiword still exists. all those programs used to run everywhere. this isn’t about the application programs but the decisions in plumbing. which then in turn makes applications more complicated and less portable than before.
for example, all i know is that somehow it is considered to be fine that systemctl directly asks my password if required. just typing my password into some program which thinks it needs root feels completely wrong, and i don’t even know how these things work. some polkit-dbus-magick? maybe?
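my fuzzy understanding, which may well be wrong: systemctl itself never sees root’s password. it asks polkit over d-bus whether your session may perform an action like org.freedesktop.systemd1.manage-units, and the password prompt comes from a polkit authentication agent (on a tty i think systemctl spawns pkttyagent for this). a hedged python sketch for poking at how that action is configured, assuming polkit’s pkaction tool is installed:

```python
#!/usr/bin/env python3
# rough sketch: ask polkit (via the pkaction CLI) how the action guarding
# "systemctl restart <unit>" is configured. the action id below is the one
# systemd ships in its polkit .policy file, if i remember right.
import subprocess

ACTION = "org.freedesktop.systemd1.manage-units"

def describe_action(action_id):
    """Return pkaction's verbose description of a polkit action."""
    result = subprocess.run(
        ["pkaction", "--action-id", action_id, "--verbose"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # the "implicit active/inactive/any" lines decide whether you're waved
    # through, asked for your own password, or asked for root's.
    print(describe_action(ACTION))
```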
This is why I vehemently refuse to run derivative distributions.
There’s quite the difference between the likes of Gentoo, Arch or Debian, and their derivatives.
The derivatives are, majorly, made by typically technically inept conmen for use by laymen, whom I’ll refer to as suckers from here on, in the spirit of the speech referenced at the top of this thread.
To attract as many suckers as possible, effort is put into a ton of marketing (easier to do if you’re Mike Rocketworth and have some equity) and into ensuring the interface appeals to the lowest common denominator.
Thus it has to be really dumb, to the point of being an insult to one’s intelligence, and unbearable to use for the technically inclined. This is deliberate, as we don’t make good suckers for this specific type of con. Good suckers are people who will remain blind to the fundamental problems, will install it everywhere, will promote it on social media, and will help the person orchestrating all this from behind the curtain.
At some point, the large and dumb user base will be leveraged for economic gain.
That’s how it’s always been, and I don’t see this changing anytime soon.
I personally like a lot of what Ubuntu has done, particularly on their software projects.
I’d argue that recent releases have gotten quite accessible, to the point where I have given my tech-illiterate mother a desktop with Ubuntu on it and haven’t run into any issues I couldn’t help debug over the phone. It just so happens there’s a lot that goes into making a usable GUI that non-{sysadmins,developers,geeks} can effectively use, while still being completely usable as my main driver for work. I’ve been happily running Ubuntu for years, and while I’ve mostly switched to Arch for my new installs, I don’t feel like my intelligence is being insulted or threatened because every piece of my system isn’t controlled by text files.
So what is your alternative? That everyone should become an expert? I find the amount of contempt towards non-technical/IT people in your post rather bewildering to be honest.
Not using nor recommending derivative distributions.
That everyone should become an expert?
Why would everyone have to become an expert?
I find the amount of contempt towards non-technical/IT people in your post rather bewildering to be honest.
There’s none of that. It might be a language thing. Maybe the word “suckers” did rub you the wrong way? Con victims are traditionally referred to as suckers.
As to being bewildered over some website discussion post, I recommend against. It is, generally speaking, not worth it.
The derivatives are, majorly, made by typically technically inept conmen for use by laymen, whom I’ll refer to as suckers from here on, in the spirit of the speech referenced at the top of this thread.
Let’s hypothetically say someone was more focused on curing a disease than figuring out how to use Linux. The time they saved on stuff like that gave them more time to find the cure. They save all kinds of lives, including yours. Your verdict: they’re suckers and dumb for curing disease with easy to use products vs not doing it to master harder-to-use products that they spent a ton of time customizing. Don’t seem right to me…
Also, this isn’t hypothetical: it’s a situation that plays out regularly in many non-OS things you depend on or enjoy in life. You’re likely depending on the very people you curse for what you enjoy in life.
Let’s hypothetically say someone was more focused on curing a disease than figuring out how to use Linux. The time they saved on stuff like that gave them more time to find the cure. They save all kinds of lives, including yours. Your verdict: they’re suckers and dumb for curing disease with easy to use products vs not doing it to master harder-to-use products that they spent a ton of time customizing. Don’t seem right to me…
i’m not in the discussion club, but this feels like a bad argument, suddenly including “THEY ARE SAVING LIVES!!1” here. in any case: it is expected from professionals to know their tools OR to hire someone who does.
Also, this isn’t hypothetical: it’s a situation that plays out regularly in many non-OS things you depend on or enjoy in life. You’re likely depending on the very people you curse for what you enjoy in life.
It’s a technique I use when someone uses an argument that says everyone or everything is or isn’t such and such. From there, I pick the counterexample that most justifies not having that position. Both EMTs and hospital workers have griped to me about how their tech UIs, among other things, cause them problems. So, I went right with that example.
Normally I’d consider it cheap, since the person using it is often going for rhetoric. In my case, a large subset of the people in the original claim really do fight with tech to save lives. I’ll leave that for your consideration.
medical appliances are a whole different beast. i hope that they aren’t running desktop linux, but something without a touchscreen and with good hardware buttons :)
Well, I was including desktops and appliances. That said, you might find this interesting or worrying.
The FDA part is half BS. The vendors of safe/secure RTOS’s, like INTEGRITY-178B and LynxOS-178B, have had offerings for this for a long time. At least one straight-up advertised a medical platform. The suppliers appear to just be using insecure tech deliberately to soak up more profit.
For falling for a con. It literally is the name used for a con victim in the world of conmen.
the large and dumb user base
I realize this is where most of the negative impressions my post got are likely coming from. I should be more careful with language. Dumb there meant “relatively technically illiterate”, and I should have used this full form.
You’re likely depending on the very people you curse
No cursing going on. I do not hate the victims.
At the end, I mostly take issue with:
The conmen, due to their intentions.
The shills, who cooperate with the conmen. Typically community influencers who happen to be acquaintances of the conmen, and vouch for them on request despite a personal lack of belief in these projects.
I try not to be a shill, and not to promote projects I do not believe in. I often qualify my recommendations; this is why I do it. Experience has made me skeptical and somewhat cynical.
By cursing, I meant talking down about them unnecessarily. The word is overloaded in my country. I apologize for any confusion.
Re conmen, I have two replies. The first: one category of users refutes it simply because they are forced to use the tech at school and work. To get the benefits of those, they must use the tech regardless of their belief in it. The tech also still brings those benefits even if it’s unworthy.
The second reminds me of when a founder of high-assurance security, Dr. Roger Schell, met the Black Forest Group of CIOs/execs to convince them to buy software. Many of them were folks who think they’re dumb at tech (I did). They said they’d love to buy software at higher assurance but wouldn’t be allowed to. When pressed, they said they believed software developers all left bugs in on purpose to later sell them the fixes mixed with more buggy features. Since those were the products with the features they needed, they figured they’d never be able to buy them at high quality anyway.
The same is true today. Users want to get something done. Microsoft’s monopoly position made just about all the apps, hardware, etc. work with Windows. They were also probably brought up on it. So, the option that gets them the most benefit at the lowest cost… money plus effort (more important)… is Windows. For a while, it had fewer surprises than alternatives, too.
So, there’s no con, or at least most know it’s garbage. It’s simple economics and psychology. They benefit more by using it than not using it. There are benefits to things like shell and programming if they invest the time. They can’t see it or have no need where they’re at.
What we can do is show them by building better tools that help them and integrate with their workflow. When they’re amazed, get them interested in the how. It’s what I’ve been doing at work.
What the hell makes you think people will care for your technical opinions if you call people “technically inept conmen” and “suckers” for having a hobby? Ease off on the condescension.
The overall tone of my comment is a reflection of my hatred of seeing people being taken advantage of, salted with me being too tired to comment properly.
i’m not convinced that every derivative distribution is a con job, but some trends in linux sometimes sure have the feeling of one (booting in less seconds!11).
an example from my experience: while i prefer window managers, kde is working better/saner when packaged by a single person for slackware than with kubuntu.
i’m not convinced that every derivative distribution is a con job
Beware that the claim was qualified with “majorly”. I know you’re not going for a strawman, but I feel it’s necessary to prevent an accidental one.
kde is working better/saner when packaged by a single person for slackware than with kubuntu.
I am not surprised; distributions with “flavors” for DEs are a strong marker of poor design decisions in the packaging that prevent adequate support for multiple DEs.
Some of the arguments apply to both desktop and server. One is considered lost, the other won. Clearly, these arguments cannot be useful.
If you only care about server the whole issue with marketing people not using the OS on desktop disappears.
“Linux people are dumb” is a really pointless way of looking at things. It cannot even be made constructive.
Putting Linux and “open source in general” in the same bucket is also not very useful. Projects have different technical and organizational problems and this doesn’t help us identify them. I may have missed any points that apply in general.
Here are some more plausible reasons why Linux on desktop sucks:
It doesn’t. The competition is just better these days. But Ubuntu is probably as usable as Windows 98. (Except for the package manager point below).
Linux is fragmented. Lots of work is done to achieve similar things instead of making one thing work really well. Linux makes us customize things; it’s part of the fun for many people. This is terrible for making things work well out of the box since we are optimizing for something else.
Few people consider QA to be fun work. One of the main points about free and open source is “free”. People don’t do work for free if they don’t like it. The same applies to the previous point.
Linux has package managers. Even if we only had one, this would still mean that a Linux distribution requires a huge amount of packaging work that does not exist for Windows or Mac (at least if you ignore the “stores”). This work could be done in a distributed way by the people who make the software if we had something like AppImage that works and which Linux people would find acceptable.
Inertia is a huge problem. Say you have a solution A to some specific problem (e.g. Xorg) and somebody proposes a new solution B to a very similar problem (e.g. Wayland). Even if solution B is better in almost every respect, a lot of people will want to stick with A because in some respects A was better. And this is just on top of the usual problems of rewriting software. So you end up with both A and B, and we are back to the fragmentation.
Linux is fragmented. Lots of work is done to achieve similar things instead of making one thing work really well.
This sounded like a good argument in 2003, but 17 years later something just doesn’t add up. KDE and Gnome, to pick just the two biggest players, are both 20+ years old at this point. Fragmentation does decrease the amount of work that gets put into any one project but it’s been twenty years.
IMHO the fact that neither of them quite cuts it has less to do with fragmentation, and more to do with the fact that, once you start picking them apart, most of their components aren’t actually 20 years old. So they don’t have twenty years’ worth of functionality, not to mention bugfixes and stability.
My favourite KDE bug is a good example, I guess: NT 3.51 had a pretty similar bug, more than 22 years ago, and I think a big part of the reason why Microsoft is still ahead in this game is that their bugfix is probably still around in explorer.exe’s source code, whereas KDE has gone through at least three completely different modules for that thing, in the same timeframe (“modules” is a bit hand-wavy here, I hesitate to call it “shell”, although I guess it wouldn’t be completely inaccurate as far as Plasma 5 is concerned).
(BTW, I call it “my favourite KDE bug” because I actually like KDE and I’d love to use it more, so I spent a few weekends trying to fix that thing, but alas my QML-fu is basically zero)
I mean, yes, if you were to put Gnome and KDE together, you’d get one project instead of two, so “50% less fragmentation”. But between the two of them they’ve basically written five or six desktop environments already, and I’m just counting the major shifts here, where few applications/technologies were retained, not the major releases (albeit there would be some merit in that, too). That is the real source of “fragmentation”, and just hypothetically merging projects isn’t going to help, not when a good portion of the community sees nothing wrong with a “well, they’ll have to decide if they’re a Gnome app, an XFCE app, or a GTK app” sort of approach.
Putting Linux and “open source in general” in the same bucket is also not very useful.
It is a very deliberate tactic used by influencers to equate Linux and Open Source/Free Software in people’s minds, thus closing the door to other, often better designed, open source operating systems.
This allows them to control the discourse. For a real world example, you’ll see a lot of “Linux communities” (forums, irc, slack/telegram/discord/whateverpopularcrap) where almost all of the discussion going on isn’t about Linux (be it the kernel or linux-specific userspace), but about third party open source applications.
But the people in charge of these are often Linux fanatics, and will lock threads and ban people when they mention other open source OSs. Often enough, questioning their Idols’ (Linus and the core team) perfection will warrant such a response.
Linux has package managers. Even if we only had one, this would still mean that a Linux distribution requires a huge amount of packaging work that does not exist for Windows or Mac (at least if you ignore the “stores”). This work could be done in a distributed way by the people who make the software if we had something like AppImage that works and which Linux people would find acceptable.
This is one of the primary reasons I use linux. Package management makes installing software much better than any other paradigm.
Package management makes installing software much better than any other paradigm.
Which is faint praise indeed. Package managers work well with a single centralised repository. When you’re using Debian, Ubuntu, Fedora, FreeBSD, or whatever and the thing you want is in the default repositories for the version of the OS that you’re using, it works really well. So well that you forget the amount of effort that it takes to maintain those repositories, ensure that everything that depends on libWhatever works with the same version of libWhatever or, if not, that you can install libWhatever v42 and libWhatever v43 in different directories and point the respective dependencies at different versions.
It starts to break down when you start having multiple sources. For example, I have an Ubuntu system that has three different package repos configured that can all provide cmake. Which one do I get? Whichever is newer. That’s fine, as long as they’re all using the upstream versioning. In this case they are, so that’s fine. It’s also fine because CMake has very strong backwards compatibility guarantees and so nothing breaks when I install a newer version than the one older things were expecting.
Now try substituting something like ICU for CMake. ICU is not backwards compatible and it requires things to be recompiled when you install a new version. If an external package repo is shipping a program that depends on a newer version of ICU, they will provide a newer version. If two external package repos are shipping programs that depend on newer versions of ICU than the default repos, they may both provide different newer versions of ICU. They may make sure that their versions can be installed in parallel with the main repo’s version, but they are probably unaware of each other and so won’t test for compatibility with them.
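To make the failure mode concrete, here’s a toy sketch of “whichever repo has the newest version wins” (all repo contents, version numbers, and the resolution rule here are made up for illustration; real apt/dnf solvers are far more careful than this):

```python
# Toy model: three repos publish packages; the resolver naively picks the
# highest version it can find. That's fine for cmake (strong backwards
# compatibility) and painful for ICU (ABI breaks on major version bumps).
REPOS = {
    "default": {"cmake": "3.16", "icu": "66"},
    "extra-a": {"cmake": "3.19", "icu": "67"},  # ships an app built against ICU 67
    "extra-b": {"icu": "68"},                   # ships an app built against ICU 68
}

def naive_pick(package):
    """Return (repo, version) for the highest version of a package anywhere."""
    candidates = [(repo, pkgs[package]) for repo, pkgs in REPOS.items() if package in pkgs]
    return max(candidates, key=lambda rv: [int(part) for part in rv[1].split(".")])

if __name__ == "__main__":
    print("cmake comes from", naive_pick("cmake"))  # newer is still compatible
    print("icu comes from", naive_pick("icu"))      # newer silently breaks the app
                                                    # that wanted ICU 67, unless 67
                                                    # and 68 can be co-installed
```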
This is the kind of thing that PC-BSD’s PBI, Ubuntu’s Snap, Docker containers, and GNOME’s thing-that-I-can’t-remember-the-name-of are intended to solve. Unfortunately, they do it by bringing in a lot of the problems of the distribution models that are common on Windows and Mac: everything comes with a complete set of its own dependencies and now you can’t easily do central updates to fix a security vulnerability in one of the libraries that many things depend on.
It’s 2020 and we still haven’t solved the software distribution problem. I find that sad, especially the fact that I have no concrete suggestion of how we could do it better.
It starts to break down when you start having multiple sources.
This is a major reason why I switched to Arch. On Fedora (or whatever) if something isn’t in the repos, you have a few options. If you are lucky, upstream will have an rpm on their website. If not, you could compile it yourself (if you can figure out how to compile it). Of course, you will never be able to uninstall it if you ever type make install. You could use flatpak/snap/docker/whatever. However, those packages tend to be huge, and their dependencies can get stuck on really old versions. None of these solutions tie into any of the normal update mechanisms. So you can be stuck with outdated/insecure software without even knowing it. If you use a third-party repo you run into all the problems you mentioned. So I use arch where I just need one repo (actually three, but who’s counting) and the AUR. Of course, building everything from scratch is a bit of a pain, and I end up not updating most AUR packages anyway. But it’s by far the most painless solution in my experience.
It’s 2020 and we still haven’t solved the software distribution problem. I find that sad, especially the fact that I have no concrete suggestion of how we could do it better.
Has anyone solved it? Perhaps android? Though I think they have a lot of problems that snaps, et al. have.
I tried an AppImage program recently and was blown away by how easy it was to run it! It was like Go’s single static binary idea, but for GUI apps! Very nice.
Yeah the way he quickly brushes over support for things like games and video editing applications seemed weird to me. While the situation has improved slowly over the past decade or two, the lack of access to industry standard video and audio editing software seems like a compelling argument for why Linux on the desktop is infeasible for so many people. Likewise, the set of games available on Linux is a strict subset of those on Windows, and said games typically run better and are better supported on Windows. If either of these categories of applications are important to you, Linux is a poor choice.
There’s a lot of discussion of games being the factor for desktop Linux, but I don’t see it; at least as anything more than a value add. You can live without games, but you can’t live without the tools for work, whatever it might be. (Programmers like us have a luckier break here.) I think a lot of that discussion is because of how big the overlap between sites like Reddit and people who eat, live, and sleep PC gaming are.
You can live without games, but you can’t live without the tools for work, whatever it might be.
The home desktop computer is a dying breed. Its use case is slowly being usurped by laptops (which are still mostly desktops), tablets, and phones. However, one use case which is not going away is gaming. Desktop computers offer one of the best gaming experiences out there. Many users stick to windows primarily because their favorite game only runs there.
lack of access to industry standard video and audio editing software seems like a compelling argument for why Linux on the desktop is infeasible for so many people
Do many people use this kind of software? I would imagine it’s fairly specialized?
(Lack of) games are probably more important, especially because there’s a social component to that as well: if your friends are playing a game then you’d like to join in on that specific game. When the whole corona thing started, some of my friends were playing Soldat, but I couldn’t get it running on my Linux machine, so I missed out 🙁 (wine errored out; it was open sourced yesterday though, with Linux support, so I need to look again).
I have helped around a dozen or so ‘regular people’ over the years who did not want to pay the Apple tax and whose Windows laptops had become totally unstable move over to Linux desktops. This is the list of apps they care about:
Firefox / Chrome
LibreOffice
One time I had to help someone get some weird Java app for their schoolwork installed. Most people are content consumers, not creators.
The games situation is pretty good, in no small way thanks to Valve. Nearly half of my large steam library is linux-native, whereas most of the remaining games work with proton, without hassle.
However, the situation with Video and Audio is more of a joke. All the open video editors are terrible. At least we can play videos now; I remember the pre-mplayer era, and now we have mpv which is fantastic.
As for audio, the laymen suffer from Pulseaudio, which is still irritatingly bad. The production side of audio is much better thanks to jack, but only if running under Linux-rt: when for whatever reason I boot mainline, jack gets xruns after a while, even with a 20 ms buffer.
It depends somewhat what you want to do; OBS studio is pretty nice for its use case, but I wouldn’t want to produce a film.
As for audio
The lack of reliable timeslices is pretty terrible on mainline linux. Doesn’t affect me often, as I have 24 cores, but if I’m running something intensive I’ll sometimes get audio skipping in 2020 (which literally never happened back in 2005 on windows).
if I’m running something intensive I’ll sometimes get audio skipping in 2020
The Linux kernel likes to thread into long, dark and narrow corridors, not yielding the CPU to SCHED_FIFO/RR tasks until much later than the time they become runnable.
I did boot into mainline recently and saw some of the usual pathological behaviour. Then I ran cyclictest -S -p99 and spotted a 10000 µs peak within seconds. Appalling.
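If you want a rough feel for this without installing rt-tests, here’s a crude user-space probe in the same spirit (nowhere near as rigorous as cyclictest, and under the default SCHED_OTHER policy it mostly measures ordinary scheduler and timer-slack jitter rather than true worst-case latency):

```python
# Crude wakeup-latency probe: request a short sleep and record how late the
# wakeup actually arrives. Run it while the machine is busy to see the spikes.
import time

INTERVAL = 0.001   # ask for a 1 ms sleep
SAMPLES = 5000

worst = 0.0
for _ in range(SAMPLES):
    start = time.monotonic()
    time.sleep(INTERVAL)
    lateness = (time.monotonic() - start) - INTERVAL  # seconds past the deadline
    worst = max(worst, lateness)

print(f"worst wakeup lateness: {worst * 1e6:.0f} µs over {SAMPLES} samples")
```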
(which literally never happened back in 2005 on windows).
The Linux kernel likes to thread into long, dark and narrow corridors, not yielding the CPU to SCHED_FIFO/RR tasks until much later than the time they become runnable.
Are there open-source kernels that don’t do this and support a variety of mainstream hardware? Genuinely curious.
If lives depend on it, seL4 is, to my knowledge, the only protected-mode kernel with formal proofs of response time (WCET) and correctness.
But if your use case is audio, you’ll probably be fine by simply booting into linux-rt (Linux with the realtime patchset) and generally avoiding pathologically bad (pulseaudio) software in your audio chain, using straight alsa or a pro-audio capable audio server (jackd, hopefully also pipewire in the future).
You should also ensure the relevant software does not complain about permissions not allowing execution as SCHED_FIFO or SCHED_RR. In my opinion, such software should outright refuse to run in this situation (unless explicitly forced to SCHED_OTHER with a parameter) rather than run in a degraded manner, but that’s a separate issue, another of the many in the ecosystem.
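As a minimal sketch of that “refuse instead of degrading” behaviour (Python for brevity, assuming Linux; real audio software does the equivalent in C via pthread_setschedparam, and the priority value here is an arbitrary example):

```python
import os
import sys

RT_PRIORITY = 70  # arbitrary example; real applications choose this carefully

def claim_realtime_or_bail():
    """Switch this process to SCHED_FIFO, or exit rather than run degraded."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(RT_PRIORITY))
    except PermissionError:
        # Usually means the user lacks rtprio limits (no realtime group or
        # limits.conf entry), so xruns are likely; stopping is more honest
        # than silently falling back to SCHED_OTHER.
        sys.exit("no permission for SCHED_FIFO; fix RLIMIT_RTPRIO first")

if __name__ == "__main__":
    claim_realtime_or_bail()
    print("running with scheduling policy", os.sched_getscheduler(0))  # 1 == SCHED_FIFO on Linux
```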
Where the Amiga is almost cheating is by using a 68k CPU. Those have very fast interrupt response, further helped by interrupts being vectored. x86 is a sloth by comparison.
Wait, really? That’s earlier than I’d realized (the computer I was using in 1990 could hibernate a process but switching back was 2-3 seconds wait and IIRC the process had to ask to be switched away from).
Yes, AmigaOS had preemptive multitasking with priorities from day 0.
Furthermore, it also had a message-passing system called ports, which was used for IPC, including a good part of user tasks talking to the OS, which was a multi-server OS (internally split as services, running as tasks, with exec.library being the closest thing to a kernel we had, as it did handle task switching, IPC, memory management and early hw initialization).
AmigaOS supports shared libraries, and the OS itself looks like a bunch of shared libraries to application programmers.
Early AmigaOS (1985-1990) is <V37. Functions that claim to be V37+ are from the more modern AmigaOS days (AmigaOS 2.0+, 1991+), so you can easily see what was made available when.
The famous “Every OS sucks” song special-cases AmigaOS as the exception… for good reason. Overall, I still consider its design to be progress in many ways relative to UNIX. If you ever see me rant about UNIX-like systems, now you know where I am coming from.
I knew it a few seconds in, but I listened at 1.75x in the background… then he finally gets there: the folks who got rms to resign were complaining about stuff apparently not worth mentioning by name, which was in fact part of a national scandal.
Centering in part on MIT, a WWII-era center of American thought, the scandal over Epstein, over dehumanizing women’s lived experience, wasn’t a joke scandal. The acts of RMS for years were bugging people out, in ways that, if he’d done them to fellow men, fellow young men, would probably have gotten him booted years ago. But here in this video it doesn’t even merit mention by name, only vague reference.
Being excellent to each other means replacing missing stairs.
There was no mob reaction, let alone a ‘linux users mob’. That’s really the crux of it. The video assumes Linux users had some majority agency in removing him, when in fact he was booted because no organization can have someone like that associated with them. It’s common decency, not Linux community decency, that forced him out. No mob of any sort needed.
If I’m ever in the mood for reading toxic rhetoric (which never happens), I go read RMS. While the things you listed are important, I never really knew about them until the mob happened, because I chose to ignore him.
Anyway, toxic rhetoric is exactly the thing that polarizes our world and turns everyone into abominable villains. Toxic rhetoric starts wars and tears communities apart. So in my book, toxic rhetoric alone is enough to get anyone fired.
Yeah I don’t get why some people insist on defending Richard Stallman after his:
pedophilia support
untoward behaviors towards women
utter lack of humility bordering on parody
It often makes me think the RMS defenders really think that low of women and that their only code is the bro code. As a woman in technology, it makes me feel somewhat jaded. Just because someone does good things doesn’t excuse them from acting like a human being. :-/ It feels more like people like me are perfectly fine to sacrifice as long as some figurehead gets his adoration. And that bothers me a lot.
RMS never supported pedophilia. I actually read the supposed evidence. They are thoughts/questions, admittedly naive, on the subject of unintended consequences of laws (evidence against pedosexual activity being itself illegal) and whether non-coercive, mutually beneficial, pedosexual activity, could, in principle, be possible.
Also, please don’t use the term ‘pedophilia’ here. Pedosexual activity is child abuse and that is what is wrong, not ‘pedophilia’. We should encourage people to come forward as pedophiles to counselors and therapists so they can learn to live with a -philia that they must never follow up on. Shaming them for their feelings or even calling them evil merely for their feelings only makes the risk greater.
When I initially read about it, I was quite skeptical, particularly as I had recently seen many cases of mob justice gone wrong.
Later it blew out of control and I did some digging. It turned out to be nothing else than the usual character assassination some collectives favor. Due to his personality and lack of awareness of current trends, Stallman proved an easy victim.
Hard agree. Unfortunately, this is happening in tech far too often. The free and open source software movements are getting caught in the cross-fire of US politics.
I disagree. RMS was a seminal contributor to the movement, but there is no reason to pretend that his behavior - which might have been acceptable back in the day when computer science was a boys’ club and movies like Revenge of the Nerds were considered funny even though they depict non-consensual sex as a ‘prank’ - is compatible with today’s world.
Epstein’s case is not subject to ‘politics’: the guy was a known pedophile and sex trafficker. There’s not even a point in arguing that. Minsky, whom Stallman defended, was well aware of Epstein’s circumstances and willingly took money from him and sexual favors from one of his victims. One could argue that Stallman was trying to make a ‘philosophical’ argument or playing devil’s advocate, but you’d have to ignore the kind of message that would send to any young woman or victim of sexual assault in that mailing list: welp, it’s a shame Minsky got caught doing something really bad, let’s just ignore this other victim so we avoid rocking the boat!
Epstein’s case is not subject to ‘politics’: the guy was a known pedophile and sex trafficker. There’s not even a point in arguing that. Minsky, whom Stallman defended, was well aware of Epstein’s circumstances and willingly took money from him and sexual favors from one of his victims. One could argue that Stallman was trying to make a ‘philosophical’ argument or playing devil’s advocate, but you’d have to ignore the kind of message that would send to any young woman or victim of sexual assault in that mailing list: welp, it’s a shame Minsky got caught doing something really bad, let’s just ignore this other victim so we avoid rocking the boat!
It is insane to me that RMS’s opponents would denounce a person for making an argument that a personal friend of theirs is not guilty of a crime, on the grounds that making this argument “sends a message” to people who might see it who are members of a demographic they assume is likely to be a victim of that crime. I’m deliberately not addressing the question of whether Stallman’s argument is correct or not, in the context of the actual alleged crime. Maybe he’s wrong and Minsky really was guilty in a legal or moral sense of having illicit sex. I’m not sure what I think about Stallman’s argument in context, although I agree with him that something seems morally wrong about charging a person with the crime of statutory rape who was unaware that the person they had sex with was under the age of consent.
I’m not particularly interested in litigating the details of a media-reported crime I have no special information about, and it doesn’t matter in any event. Young women as a demographic, or even actual victims of sexual assault, have no particular right to never see someone argue that a specific sort of sexual encounter wasn’t actually a sexual assault. I refuse to be complicit in condemning RMS for doing so.
Do you even understand how society works? Are you arguing that people - in particular people in a position of power in a learning institution - should be able to say whatever comes to their minds, disregarding how other people are going to take what they say?
That’s the kind of behavior that leads to the normalization of behaviors like Minsky’s. The fact that people like RMS are comfortable thinking this is some philosophical riddle we are able to discuss, instead of clearly gross behavior that would creep the fuck out of any young person in the lab, is the problem. This is not someone pondering whether a bear shits in the woods, this is someone defending a 74 year man having sex with people in the age range of his students in front of his students.
Now, if that’s perfectly normal behavior for you, then I don’t know what to tell you. Maybe a consultation with a therapist would be a good start (and no, I’m not being flippant about it).
Are you arguing that people - in particular people in a position of power in a learning institution - should be able to say whatever comes to their minds,
Yes? I believe that anyone should be able to say almost anything. Of course, there are the traditional exceptions for slander and specific incitement of a crime.
disregarding how other people are going to take what they say?
Lacking foresight is no reason to deny someone’s voice.
But, on the other hand, it isn’t patronizing at all to assume how everyone should behave around people who say things that make them feel unsafe?
Yes? I believe that anyone should be able to say almost anything. Of course, there are the traditional exceptions for slander and specific incitement of a crime.
Sure, and I believe people should be able to fire a co-worker they disagree with or find generally disagreeable.
Lacking foresight is no reason to deny someone’s voice.
‘Lacking foresight’ is hardly the problem, when there’s an extensive email thread where RMS kept digging deeper and deeper. I could see him lacking foresight before the first email, but by the third reply you’d assume he’d have some hindsight.
Do you even understand how society works? Are you arguing that people - in particular people in a position of power in a learning institution - should be able to say whatever comes to their minds, disregarding how other people are going to take what they say?
Yes. In fact, providing a space for people to say things that (some) other people take to be offensive is an important function of universities as an institution. This is the purpose of tenure systems, for instance.
That’s the kind of behavior that leads to the normalization of behaviors like Minsky’s. The fact that people like RMS are comfortable thinking this is some philosophical riddle we are able to discuss, instead of clearly gross behavior that would creep the fuck out of any young person in the lab, is the problem.
This isn’t (only) a question over whether some kind of sexual behavior is gross on an abstract philosophical level, it’s a question about whether something a friend of his did in fact or should have constituted a serious felony under law. Discussing questions of law is absolutely the rightful concern of any citizen. I completely reject the idea that the standard of whether a behavior is moral or not should be based on whether some people claim it makes young people in a lab feel grossed out or not.
This is not someone pondering whether a bear shits in the woods, this is someone defending a 74 year man having sex with people in the age range of his students in front of his students.
I defend this. I explicitly believe that it is possible for a 74 year old man to have sex with someone of the traditional age to go to college (18-22 or so - that is, legal adults!) without either party doing something immoral. In fact, I believed this when I myself was within the ages of 18-22! Again, I refuse to be complicit in condemning someone else for making this kind of argument.
Yes. In fact, providing a space for people to say things that (some) other people take to be offensive is an important function of universities as an institution. This is the purpose of tenure systems, for instance.
RMS, as a non-tenured member of MIT, should’ve known that didn’t apply to him.
This isn’t (only) a question over whether some kind of sexual behavior is gross on an abstract philosophical level, it’s a question about whether something a friend of his did in fact or should have constituted a serious felony under law.
‘Gross’ vs. ‘legal’ isn’t abstract in the context he was discussing though. Let’s think of a different example: let’s say someone in an academic context talks about his experiences with prostitutes in a country where that’s legal. Would that be acceptable?
Just because something is legal, it doesn’t mean discussing it or defending it is appropriate in every context.
I defend this. I explicitly believe that it is possible for a 74 year old man to have sex with someone of the traditional age to go to college (18-22 or so - that is, legal adults!) without either party doing something immoral. In fact, I believed this when I myself was within the ages of 18-22! Again, I refuse to be complicit in condemning someone else for making this kind of argument.
Well, we agree to disagree on that. Personally, I feel like there are so many questions about power imbalance embedded in that statement, that it could lead to a loooooong conversation I’m not willing to have seeing as people have been flagging my replies because apparently not defending RMS is a sin or something.
Yes. In fact, providing a space for people to say things that (some) other people take to be offensive is an important function of universities as an institution. This is the purpose of tenure systems, for instance.
There is a time and place for this - for example, invited speakers, seminars, lectures. A free-form mailing list for students and faculty would fall outside of this in most contexts - i.e. if some idiot starts spouting Nazi propaganda for trolling purposes, they can be banned from the conversation.
Dr. Stallman did not have tenure at MIT. In fact, he was not even part of the staff. His office and access to the mailing list was provided as a courtesy.
This isn’t (only) a question over whether some kind of sexual behavior is gross on an abstract philosophical level, it’s a question about whether something a friend of his did in fact or should have constituted a serious felony under law.
The sad part of this is before this happened, I had no idea that Marvin Minsky was mentioned in the Guiffre deposition[1]. Had Dr. Stallman not gone out on the field and broken a lance for him, I would not have to contend with the plausible possibility of him availing himself of sexual favors provided through Epstein.
I refuse to be complicit in condemning someone else for making this kind of argument.
One can simultaneously agree that Dr. Stallman has and did have a right to make this argument, and also agree with the right of MIT to terminate his unofficial occupancy of an office, and the right of the FSF to remove him from a leadership position[2].
Free speech is the right of an individual not to be gagged by the state, not an obligation that private parties have to host that speech.
______
[1] a deposition isn’t a statement of fact under the law, it’s a document submitted by one party in an ongoing lawsuit.
[2] as an advocacy group, the FSF is reliant on persuading people to their ideals (and usually soliciting financial donations). A public view (no matter how legally absurd) that their primary spokesperson is a defender of pedophilia is counterproductive to the mission of the FSF.
Free speech is a principle of good society. Yes it has legal protection in some states but this constant appeal to ‘free speech is just a law stopping the STATE from censoring you’ is pathetic. Should we condone attacks on free speech in other states because it’s not protected by law in China or North Korea? Freedom of expression existed as a principle of a decent society far before it was ever enshrined in legislation. In New Zealand it isn’t even supreme law, essentially just a rule of administrative law and of legal interpretation (interpret ambiguity in favour of rights).
Nobody is talking about whether MIT had the right to terminate his privileges. That’s not in question, anywhere in this thread. The discussion is around whether it was right to do so.
Nobody is talking about whether MIT had the right to terminate his privileges. That’s not in question, anywhere in this thread. The discussion is around whether it was right to do so.
In the narrow circumstances of Epstein’s alleged contributions to Harvard (he also had access to an office there as a private citizen, I believe), a matter which is currently tearing Harvard apart, it was absolutely correct of MIT to defensively cut off Dr. Stallman from access to official MIT facilities and mailing lists. Not doing so would only have hurt MIT’s image (and possible future endowments).
Note that if Dr. Stallman had been part of the faculty or student body, I would probably not accept MIT’s behavior.
What is your opinion on the FSF removing him from a leadership position?
Do you even understand how society works? Are you arguing that people - in particular people in a position of power in a learning institution - should be able to say whatever comes to their minds, disregarding how other people are going to take what they say?
I think that people should not be expected to self-censor on the basis that people might get offended on behalf of others.
This is not someone pondering whether a bear shits in the woods, this is someone defending a 74 year man having sex with people in the age range of his students in front of his students.
Society decided a long time ago - and has not changed its decision since then - that once you’re over the age of consent there’s nothing wrong with relationships with anyone of any age also above the age of consent.
You can advocate for change to that or that you think that’s wrong, but given that the primary basis for LGB rights advocacy I’ve seen is ‘consenting adults in private should be able to do what they like’ I think you should think carefully about what you’re implying.
I think that people should not be expected to self-censor on the basis that people might get offended on behalf of others.
So, is there any situation at all where you think people should self-censor? Say, for example, is sexual harassment appropriate? After all, sexual harassment is just one person being offended about how someone else treats them.
Society decided a long time ago - and has not changed its decision since then - that once you’re over the age of consent there’s nothing wrong with relationships with anyone of any age also above the age of consent.
This is definitely not true. Society frowns upon all kinds of relationships where the age disparity is incongruous with the situation. For example, the terms ‘gold digger’, ‘cradle robber’ and ‘cougar’ come to mind. Legality doesn’t equal acceptance.
You can advocate for change to that or that you think that’s wrong, but given that the primary basis for LGB rights advocacy I’ve seen is ‘consenting adults in private should be able to do what they like’ I think you should think carefully about what you’re implying.
If you can’t see the difference between two adults in a loving relationship wanting to be accepted by society vs. someone abusing a power imbalance to take advantage of people, then I don’t know what I can do to explain it to you.
Young women as a demographic, or even actual victims of sexual assault, have no particular right to never see someone argue that a specific sort of sexual encounter wasn’t actually a sexual assault.
Conversely, Stallman has no particular right to an office provided as a courtesy by a private university, nor does he have a particular right to a leadership position in a privately-held non-profit advocacy group.
Imagine someone who pretends to be very nice and morally virtuous to a crowd that's obsessed with this (which can easily be any crowd when carefully herded the right way, since most people will agree with superficial statements that sound "morally good") and gains influence in this crowd.
Then, using this leverage (the belief that this person is definitely a good person) and some character-assassination material about someone (hereafter "the subject") - an article, tweets, whatever, claiming the subject is terrible; truth here is irrelevant, and the holding of controversial opinions at any point in time, even the distant past, is often enough - written by themselves or some convenient third party, they call on the mob to take actions to try and destroy the subject's life. Those actions include online bullying and organized harassment of the subject's employer, family and friends, and that isn't an exhaustive list.
There's a name for a person who does this: Sociopath, or as it used to be called, Psychopath. They are the actual monsters, whereas the subject is nothing but a victim. If you still have doubts, digging a little into the perpetrator will typically reveal they have had other targets. Yes, they do it, enjoy it, realize they can get away with it, and then do it again.
It helps when there are other monsters in the mob who enjoy doing this. They willingly help the mob leader, since in exchange they get help with their own targets. There are literally entire communities built around doing this.
This is getting out of control and it needs to stop. Awareness of how these monsters operate helps. At some point, however, instigators will hopefully have to start answering to Justice - the official sort, with trials, evidence, presumption of innocence and all the steps and safeguards that separate Justice from Mob Justice.
Imagine someone who pretends to be very nice and morally virtuous to a crowd that's obsessed with this (which can easily be any crowd when carefully herded the right way, since most people will agree with superficial statements that sound "morally good") and gains influence in this crowd.
No, it is not. My comment is about a dark pattern I have noticed in recent years, nothing else than that. The intended audience is pretty much everybody reading the thread. The intended effect is to raise awareness of this dark pattern, and to promote critical thought (there’s never enough of this).
The poster I was replying to isn't being targeted by me in any way other than theirs being the post that incited my reply, and they are absolutely not being pinpointed as the instigator. Thus, I am not turning them into some strawman.
Instead, they are kindly and indirectly being nudged into considering the possibility that they might be participating in such a scenario, and into reflecting on whether what they're doing is positive.
Can you cite an example of that ‘dark pattern’ you’ve noticed? Can you cite two examples? Can you cite examples where both sides of the political spectrum used that dark pattern to their advantage?
Here’s an example: there is a transgender YouTuber whose channel is called ‘ContraPoints’. Her name is Natalie Wynn. She makes videos about a variety of different topics. She’s clearly left-wing and has stated openly and frequently that she is not a transmedicalist (essentially someone with a very narrow view of what constitutes a ‘valid’ transgender person).
She was essentially ‘cancelled’ on Twitter, and left Twitter as a result, because she made a video where she used a particular transgender activist as a voice actor for all of 6 seconds in an hour long video. What this activist actually said had nothing to do with transmedicalism, he was there to be the voiceover for a particular quote.
However, because said activist is alleged (without any basis that I’ve seen) to have transmedicalist views, not only did ContraPoints get ostracised from Twitter and harassed so badly she deleted her account and left the platform, but anyone that expressed any support for her (her friends, etc.) were harassed, even if they didn’t actually say anything beyond ‘she’s my friend’.
So to be clear, people get harassed (death threats, other violent threats, spammed with abusive imagery, told to kill themselves, etc.) not just for being a transmedicalist, not just for allegedly being a transmedicalist, not just for collaborating in an unrelated way with someone that they did not know allegedly is a transmedicalist, inhales but for being friends with someone that collaborated with someone that they did not know allegedly is a transmedicalist.
But no you’re right I’m sure that cancel culture isn’t a problem.
Can you cite an example of that ‘dark pattern’ you’ve noticed? Can you cite two examples? Can you cite examples where both sides of the political spectrum used that dark pattern to their advantage?
The answer to all your questions is: I don’t need to.
I’ll be happy to discuss them.
I do not have the time nor the inclination to humor you any further than I have.
The answer to all your questions is: I don’t need to.
So… it was a straw man. You were just pushing the whole ‘virtue signaling’/‘conservative oppression’ talking point on a conversation that had literally nothing to do with that.
I do not have the time nor the inclination to humor you any further than I have.
I have a feeling that you are one of those people who thinks he’s right even when proven wrong, and has been proven wrong enough times he’s learned not to push the envelope when things aren’t going his way. Can’t say I’m surprised.
The upside to this whole debacle is that RMS will probably have more time to work on the GNU project. IMO the role of president of the FSF was never the best fit for him – even if I disagree with the way they amputated him. I've been following the Emacs mailing list in more detail recently, and maybe I have a wrong impression, but I see him taking part in the discussions more than at least over the last few years.
This is known as ad-hominem. The author’s personal views (or what kind of person they are) are irrelevant to the validity of arguments presented.
The linked tweet is a good reminder of why I avoid twitter. It is a community full of hate and destructive energy, not one of reasoning and respect for differences of opinion.
If someone cannot tolerate the existence of human beings who hold opinions different than theirs, then they're toxic. Twitter is toxic, as it's full of this sort of people, to the point it hosts mobs that attack people they disagree with, with the full intent of destroying their lives. This is called mob justice (I believe those involved tend to use euphemisms for this), as opposed to justice. Basically a mob, typically herded by a sociopath, playing judge and executioner. It isn't just in any way.
Twitter tolerates this behaviour and thrives on it. Twitter is a platform for organized hate. It is literally the platform where most of this is conducted. If Twitter went away overnight, the world would be better for it.
It's not really a stretch to say that the age of consent at 16 is too old. There are clearly kids below that age having consensual sex that shouldn't be illegal, but not much below it. 'Romeo and Juliet' laws for anyone under 18 are probably a much more reasonable system.
Hm, that article doesn’t do a great job of proving Stallman’s supposed innocence.
His argument that Minsky having sex with Virginia Giuffre is not a crime even though she was a minor, because she was coerced by someone else, is ludicrous. By that argument, having sex with a victim of sexual trafficking is acceptable. Minsky was a grown-ass man who should be responsible - and accountable - for his decisions, including deciding to have sex with a minor in very weird and strange circumstances.
Besides the potential legality based on jurisdiction, the very obvious lack of morality of the act should make anyone take a step back. One can't equate a 17-year-old having sex with a partner of similar age as part of a normal love relationship with a full-grown adult taking advantage of someone barely able to make a decision about their sexuality… and yet, the author of that article seems to think that because Stallman has somehow been consistent about that misrepresentation, he must have been wronged by someone pointing out it's wrong.
Did you read the mail thread linked in that article? The whole point of the thread is pondering whether they should be calling this sexual assault or not, because to Minsky's knowledge she could've just been a really keen very young woman. For context, they are talking about a 74-year-old thinking that a teenager is coming on to him.
The structure of your post throws around some ideas, but doesn’t construct any arguments. It reads as an appeal to emotions.
The whole point of the thread is pondering whether they should be calling this sexual assault or not, because to Minsky's knowledge she could've just been a really keen very young woman. For context, they are talking about a 74-year-old thinking that a teenager is coming on to him.
Your point being? Be very specific, because through your roundabout strategy, you come out to me as pushing the idea that some topics should never be discussed, that some ideas should be never expressed, and that people who dare do so should be executed by mob. Or that it is alright if this is what happens.
Please correct me if I am wrong. By all means, please tell me this isn’t what you’re trying to push.
Being born in 1983, she couldn’t have been a minor in 2001 when she alleged this trafficking took place. Assuming that it happened, that Minsky was involved, and that Minsky had sex with her, the crime would not be having sex with a minor.
His argument that Minsky having sex with Virginia Giuffre is not a crime even though she was a minor, because she was coerced by someone else, is ludicrous. By that argument, having sex with a victim of sexual trafficking is acceptable.
If you don’t know that someone is a victim of sexual trafficking then it isn’t wrong. Obviously.
That depends on the definition of 'minor'. In most places that means 'under 18', and last time I checked, if she was born in - say - September 1983 and the sexual encounter happened in January 2001, that'd make her a minor. In fact, given that both of them are American, and considering that Americans aren't exempt from crimes committed against other Americans abroad, the statute is even less clear.
If you don’t know that someone is a victim of sexual trafficking then it isn’t wrong. Obviously.
Millions of Johns that got thrown in jail would like to disagree with you.
As time passes, I think the biggest mistake is to assume that Linux is a platform. A distribution + desktop environment would be much closer to what people talk about when mentioning "platforms". My "glass half full" approach is: Linux is just the base that happens to ensure a lot of knowledge and effort is transferable between these platforms; it's great that Fedora and Ubuntu use the same C standard library and don't develop their own things. It's structurally impossible for the "Linux world" to work as one platform, since the groups working on what's falsely described as the "Linux platform" are independent and don't act as one.
It’s hard to develop or promote software that you refuse to use.
I wasn’t convinced by these arguments. To my knowledge, most “Open Source Linux Companies” are focused on cloud services and related tools, as was mentioned for servers and the like. People developing on these platforms might never have to see the machine or have to install anything. Microsoft, Google, Toyota aren’t investing in the Linux foundation because they want to have a good desktop experience.
Something that's subjective but puzzles me a lot: around 1998-2002, improving the Linux desktop seemed like a huge focus. Since then there's been a big growth in corporate sponsorship, which has been server focused. But the desktop thing was never corporate; it was just enthusiasts trying to show they could make a better desktop environment and better desktop applications. Somehow that seems to have faded.
It's interesting that there are a lot of enthusiast projects on github now, where people try to show they can build a better version of some tool, often in Go or Rust. But desktop environments or even GUI programs in general don't seem like a focus, and I honestly don't understand why. And it makes sense that without that, people end up exposed to Linux ideas over SSH, or WSL, or through the (not Linux) Mac terminal.
Some of it. The rest was killed by Android. Android works on phones, tablets, laptops, and desktops and has a well supported set of GUI toolkits and a large ecosystem and it is Linux. Today, Android is probably the most widely deployed client OS. You can get Microsoft Office for Android. You can get Spotify and Netflix clients for Android. You can get an insane number of games for Android. You can also install a load of open source software via F-Droid or other mechanisms.
The thing that hasn’t taken off as much is the GTK/Qt + X11 (/ Wayland / Mir / this week’s fashionable X11 replacement) stack on the client. If anything, Linux has killed the other open source *NIX systems on the client because it’s the only one that can run the Android stack (Linux-only things like cgroups are tightly woven through the Android runtime with no abstraction layer).
Some really important work on free-software desktops has been contributed by corporations in the past couple of decades. Case in point: GNOME accessibility, which was mostly implemented by Sun.
But desktop environments or even GUI programs in general don’t seem like a focus, and I honestly don’t understand why.
Partly because of the complexity, I believe. It is one thing to write a better grep in Go or Rust and a totally different one to write a GUI application, considering the complexities with cross-platform GUI toolkits, HiDPI support, packaging & distribution etc.
It's certainly more complex, but why did open source culture change? Honestly, the people who started gimp, gnumeric, abiword, koffice, konqueror, wine etc. had to be mildly crazy people seeking out a very complex series of challenges. Today it seems like all of the complex parts are either presently corporate sponsored (Chrome, Mozilla, kernel, VirtualBox) or have been largely built with corporate sponsorship and left for the community to maintain (LibreOffice, Eclipse). The "we don't need companies for the complex stuff" culture seems to have really faded.
I see a huge, huge amount of talk about improving the Linux desktop. It seems to be all that people are interested in these days: desktop desktop desktop. Dumbing down the user experience in the hopes it’ll be usable by the developers’ parents, I guess.
Maybe it’s really just a disconnect. I mean, you’re right that Gnome/KDE development continues, and real developers have just learned to ignore them. My desktop is fvwm and has barely changed in two decades.
I think the open source desktop environments made a big mistake betting so heavily on Linux. If the Linux kernel doesn’t provide features that KDE or GNOME wants, they need to convince people who are largely paid by companies that sell server products to help them upstream changes. This is difficult. If they’d retained the focus on portability that these projects had at their start, they’d have found it easier to get the features into Free/Net/OpenBSD (who would all love to have some differentiating features for desktop users). At that point, it’s easy to convince the Linux community to accept the features. Instead, they’ve made everything that’s not Linux such a second-class citizen that the idea that GNOME / KDE developers would switch to a different platform is an empty threat.
most linux distributions on the desktop suck because instead of focusing on the strengths (composable tools which have a thight scope), they try to imitate windows and macos failing spectacularly, mostly replicating the weaknesses. i for one can’t name 50% of the things in an unbuntu which are doing magick in the background. if they break, the most sensible option is to just reinstall.
Most people just want to browse the web, check their email, type a document, etc. What they don’t want to do is learn about all those composable tools or otherwise spend a whole lot of time figuring out details of how their system works, which is perfectly reasonable. Are there trade-offs? Sure; but this doesn’t “suck”, it’s just different people with different requirements, priorities, and usage patterns.
(Edit: um, this turned out more ranty than I thought it would, so just to be clear – I absolutely agree with you here, and my rant below is pretty much orthogonal to
yoursyour comment. Also, I don’t think imitating Windows or macOS is a problem – but I do think that this imitation effort is driven by, and anchored in, a bunch of arrogant misconceptions about “non-expert” usage habits. That’s why it ends up “mostly replicating the weaknesses”, as @rbn nicely put it: because it’s guided by a sort of caricature image of the “novice user”, and we end up with software whose “novice-friendly” features are exaggerated to unrealistic proportions, so they end up being difficult to use for everyone, including novice users).I hear this a lot and every time I hear it, I have the same thing to say: spend a few weeks watching the people who “just” want to browse the web, check their email, type a document, and you’ll find that these things are not at all the simple things that the FOSS UX community insists on believing they are for some reason. Lots of these people don’t, and can’t do them on Linux not because it’s too complicated, but because it lacks the functionality they need. (Ironically, in large part because lots of “novice-friendly” software is written by people who have never stopped to ask a novice what they need, and have this silly picture in mind of “Aunt Tilly” who just wants to resize a photo every once in a while and is confused by all those buttons).
My informal experience with teachers and seniors (mom’s a teacher and, of course, has a lot of teachers friends, and I used to help a volunteer-backed organization for seniors) has cured me of this sort of programmer’s arrogance:
Unsurprisingly, desktops developed for “most people” (e.g. Gnome) don’t fare much better than those that aren’t. In fact, Gnome’s most successful “incarnation” – Ubuntu’s – is a pretty significant departure from upstream’s focus on visual simplicity and straightforwardness. And the biggest trouble we’ve had with either of them isn’t in esoteric things like whether the icons are the right size or not, it’s been primarily in the fact that support for desktop icons is finicky, followed closely by poor contrast that makes window content hard to read (Windows 10 also does some dimming for inactive windows, but only in menu items – at least for “legacy” Win32 apps. For some reason, it’s now fashionable to dim everything you can because novice users don’t use more than one application at a time and easily lose their focus. Hah.).
everyone “old” has used computers for much of their lives now, and they are the generation which may even have touched a dos shell. the problem is that current ux trends, regardless where, are constantly throwing away the experience of their users. this isn’t limited to “old” people. i’ve not used android for some years and am always lost for half a minute if i try to change some settings in it. with linux, this isn’t limited to gui applications, but for the complete system as everyone seems to love reinventing wheels, only in a slightly non-round fashion.
It’s my impression as well. Technology is now pervasive enough that “non-expert” users actually do use computers a lot, not because they want to become experts or because they’re the members of the computer-using vanguard of their generation, but because, well, what else are you gonna use?
The teachers at the school where my mother teaches have all been using computers to do all their school-related work for at least 15 years now. They all check every box in the "not an expert" list, from "will click Yes in every dialog box" to "has no idea what's inside a computer". But that doesn't mean they only use basic word processor functions or have no more than twenty or thirty files lying around. They do – they're required to do – every school-related thing on a computer now, and have twenty years' worth of silly children's songs, cartoons, arts & crafts crap and stuff lying around. Sometimes it's neatly sorted in a bunch of folders under Documents, sometimes – as you might expect – in a bunch of folders on the desktop, because that's where they're most easily accessible.
The same goes for pics. Between school trips, birthday celebrations and whatnot, my mom must have tens of thousands of pictures on her hard drive. Every time there's one of these, all the children, and sometimes all the parents, send her all the pictures they took. Watching someone try to make sense out of all that using Eye of Gnome's super-decluttered interface (which displays only a row of thumbnails at the bottom of the screen) is almost painful.
This isn’t some “power user” usage habit. This is 2020, when everyone has like 10,000 pictures on their phone and takes twenty selfies every time they go out. The idea that “simple users” have “rudimentary requirements” dates from back when computers really were something you used only when good ol’ pen and paper weren’t good enough, or when you were forced to. That doesn’t happen anymore.
This is a big qualm I have with encouraging “non technical” users to switch to desktop Linux. Because the UI changes all the time, you have to throw away all the skills and conventions you learned for older versions. GNOME 2 was fine. Sure, GNOME 3 might have some improvements (I would actually argue that it does not, but that’s another debate), but now existing users don’t know their way around any more. As a non-GNOME user looking in, it seems like GNOME 3 has continued to frequently make significant changes to the UI/UX that would throw me off if I did use it.
I think changing the UI out from under the user is very disrespectful of their time investment in the platform. If GNOME was marketed under the notion of “we’re just making what we want to use personally, and if you don’t like it bug off” I could buy the argument “but it’s a FOSS project, they can do what they want”. However, if you want to market your project as being friendly for “non technical users”, I think there is a certain expectation you will provide more in that area than just lip service.
In my experience with "old" or "non technical" users, they are normally people who have to use computers as a means to the end of getting their real work done. They are usually sick of UIs constantly changing, much more than they ever were of some drawbacks in one UI or another. In my anecdotal experience, these people couldn't care less if the icons are skeuomorphic or flat, or whether the menus slide in or fade in, or what the drop shadow radius is. They just want to be able to learn how to use it once and then get on with what they actually want to use the computer for without having to learn how to use it all over again.
MATE is still fine. It’s well-maintained and better than GNOME2 without being worse in any regard. It’s not breaking the UX between releases in any way either. It just works.
Agreed, but does that 'most people' apply to 'most people who use the Linux desktop' or 'most people who use desktop computers'? If we replace 'Linux' with 'house' or 'car' we'd be clearly talking about a mass-market consumer commodity, which I am not sure Linux really is. Maybe it would be better to focus less on the mythical 'normies' and accept that desktop Linux is a niche created by and for its own niche users?
That being said, I do think having Ubuntu for example as a go-to normie friendly distro is wonderful. Just seems few projects can do that effectively.
This is very true. I was able to show my family how to install their linux box, because it has a self-explanatory installer UI, and because the desktop after this "just works"*. No "but you have to know XY to start wifi/use the GPU/setup email/printer/bluetooth" or such bullshit.
All the people who want that amount of customization can still go and use their i3 and be happy. Just pick bare metal in the installer and be done. You won’t notice anything different.
*Yes I’m aware of the warts you can stumble upon, I’m using linux daily, just right now..
which has worked the same for at least 15 years. i can still use claws-mail, browsers are all the same now, abiword still exists. all those programs used to run everywhere. this isn’t about the application programs but the decisions in plumbing. which then in turn makes applications more complicated and less portable than before.
for example, all i know is that somehow it is considered to be fine that systemctl directly asks my password if required. just typing my password into some program which thinks it needs root feels completely wrong, and i don’t even know how these things work. some polkit-dbus-magick? maybe?
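Roughly (and this is just a sketch of my understanding, with the common systemd action name used as an example rather than something verified on every distro): systemctl itself never gets root; it talks to systemd over D-Bus, systemd asks polkit whether you're allowed, and it's the polkit agent that pops up the password prompt. You can poke at the policy from a shell:

# show the polkit policy that governs starting/stopping/restarting units
pkaction --verbose --action-id org.freedesktop.systemd1.manage-units

# ask polkit directly whether the current shell process would be authorized
pkcheck --action-id org.freedesktop.systemd1.manage-units --process $$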
I find that “modern” Linux distributions have a lot of the problems that I had with Windows in the mid 2000s.
This is why I vehemently refuse to run derivative distributions.
There’s quite the difference between the likes of Gentoo, Arch or Debian, and their derivatives.
The derivatives are, majorly, made by typically technically inept conmen for use by laymen, whom I'll refer to as suckers from here on, in the spirit of the speech referenced at the top of this thread.
To attract as many suckers as possible, effort is put - besides doing a ton of marketing, which is easier if you're Mike Rocketworth and have some equity - into ensuring the interface appeals to the lowest common denominator.
Thus it has to be really dumb, to the point of insulting to intelligence, and unbearable to use by the technically inclined. This is deliberate, as we don’t make good suckers for this specific type of con. Good suckers are people who will remain blind to the fundamental problems, will install it everywhere, will promote it in social media, will help the person orchestrating all this behind the curtain.
At some point, the large and dumb user base will be leveraged for economic gain.
That’s how it’s always been, and I don’t see this changing anytime soon.
I personally like a lot of what Ubuntu has done, particularly on their software projects.
I’d argue that recent releases have gotten quite accessible, to the point where I have given my tech-illiterate mother a desktop with Ubuntu on it and haven’t run into any issues I couldn’t help debug over the phone. It just so happens there’s a lot that goes into making a usable GUI that non-{sysadmins,developers,geeks} can effectively use, while still being completely usable as my main driver for work. I’ve been happily running Ubuntu for years, and while I’ve mostly switched to Arch for my new installs, I don’t feel like my intelligence is being insulted or threatened because every piece of my system isn’t controlled by text files.
So what is your alternative? That everyone should become an expert? I find the amount of contempt towards non-technical/IT people in your post rather bewildering to be honest.
Not using nor recommending derivative distributions.
Why would everyone have to become an expert?
There’s none of that. It might be a language thing. Maybe the word “suckers” did rub you the wrong way? Con victims are traditionally referred to as suckers.
As to being bewildered over some website discussion post, I recommend against. It is, generally speaking, not worth it.
Let's hypothetically say someone was more focused on curing a disease than on figuring out how to use Linux. The time they saved on stuff like that gave them more time to find the cure. They save all kinds of lives, including yours. Your verdict: they're suckers, and dumb, for curing disease with easy-to-use products instead of setting that aside to master harder-to-use products they'd have spent a ton of time customizing. Doesn't seem right to me…
Also, this isn’t hypothetical: it’s a situation that plays out regularly in many non-OS things you depend on or enjoy in life. You’re likely depending on the very people you curse for what you enjoy in life.
i’m not in the discussion club, but this feels like a bad argument, suddenly including “THEY ARE SAVING LIVES!!1” here. in any case: it is expected from professionals to know their tools OR to hire someone who does.
still it is valid to have criticism of things.
It's a technique I use when someone makes an argument that says everyone or everything is or isn't such and such. From there, I pick the counterexample that most justifies not holding that position. Both EMTs and hospital workers have griped to me about how their tech UIs, among other things, cause them problems. So, I went right with that example.
Normally I'd consider it cheap, since the person using it is often going for rhetoric. In my case, a large subset of the people in the original claim do fight with tech to save lives. I'll leave that for your consideration.
medical appliances are a whole different beast. i hope that they aren’t running desktop linux, but something without touchscreen but good hardware buttons :)
Well, I was including desktops and appliances. That said, you might find this interesting or worrying.
The FDA part is half BS. The vendors of safe/secure RTOSes, like INTEGRITY-178B and LynxOS-178B, have had offerings for this for a long time. At least one straight-up advertised a medical platform. The suppliers appear to just be using insecure tech deliberately to soak up more profit.
For falling for a con. It literally is the name used for a con victim in the conmen world.
I realize this is where most of the negative impressions my post got are likely coming from. I should be more careful with language. Dumb there meant “relatively technically illiterate”, and I should have used this full form.
No cursing going on. I do not hate the victims.
At the end, I mostly take issue with:
I try not to be a shill, and not to promote projects I do not believe in. I often qualify my recommendations; this is why I do it. Experience has made me skeptical and somewhat cynical.
By cursing, I meant talking down about them unnecessarily. The word is overloaded in my country. I apologize for any confusion.
Re conmen: I have two replies to this. One category of users refutes it, since they're simply being forced to use the tech at school and work. To get the benefits of those, they must use the tech regardless of their belief in it. The tech also still brings those benefits even if unworthy.
The second reminds me of when a founder of high-assurance security, Dr. Roger Schell, met the Black Forest Group of CIOs/execs to convince them to buy software. Many of them are folks who think they're dumb at tech (I did). They said they'd love to buy software at higher assurance but wouldn't be allowed to. Pressing them, they said they believed software developers all left bugs in on purpose to later sell them the fixes mixed with more buggy features. With the features they needed, they'd never be able to buy it high quality anyway.
The same is true today. Users want to get something done. Microsoft's monopoly position made just about all the apps, hardware, etc. work with Windows. They were also probably brought up on it. So, the option that gets them the most benefit at the lowest cost… money plus effort (more important)… is Windows. For a while, it had fewer surprises than the alternatives, too.
So, there’s no con or at least most know it’s garbage. It’s simple economics and psychology. They benefit more by using it than not using it. There’s benefits to things like shell and programming if they invest the time. They can’t see it or have no need where they’re at.
What we can do is show them, by building better tools that help them and integrate with their workflow. When they're amazed, get them interested in the how. It's what I've been doing at work.
What the hell makes you think people will care for your technical opinions if you call people “technically inept conmen” and “suckers” for having a hobby? Ease off on the condescension.
I covered the choices for language in this reply.
The overall tone of my comment is a reflection of my hatred of seeing people being taken advantage of, salted with me being too tired to comment properly.
i’m not convinced that every derivative distribution is a conjob, but some trends in linux sometimes sure have the feeling of one (booting in less seconds!11).
an example from my experience: while i prefer window managers, kde is working better/saner when packaged by a single person for slackware than with kubuntu.
Beware: the claim was qualified with "majorly". I know you're not going for a strawman, but I feel it's necessary to prevent an accidental one.
I am not surprised; distributions with "flavors" for DEs are a strong marker of poor design decisions in the packaging preventing adequate support for multiple DEs.
sorry, didn’t quite read that majorly in your first post. it was kind of late :)
Here are some more plausible reasons why Linux on desktop sucks:
This sounded like a good argument in 2003, but 17 years later something just doesn’t add up. KDE and Gnome, to pick just the two biggest players, are both 20+ years old at this point. Fragmentation does decrease the amount of work that gets put into any one project but it’s been twenty years.
IMHO the fact that neither of them quite cuts it has less to do with fragmentation, and more to do with the fact that, once you start picking them apart, most of their components aren’t actually 20 years old. So they don’t have twenty years’ worth of functionality, not to mention bugfixes and stability.
My favourite KDE bug is a good example, I guess: NT 3.51 had a pretty similar bug, more than 22 years ago, and I think a big part of the reason why Microsoft is still ahead in this game is that their bugfix is probably still around in explorer.exe’s source code, whereas KDE has gone through at least three completely different modules for that thing, in the same timeframe (“modules” is a bit hand-wavy here, I hesitate to call it “shell”, although I guess it wouldn’t be completely inaccurate as far as Plasma 5 is concerned).
(BTW, I call it “my favourite KDE bug” because I actually like KDE and I’d love to use it more, so I spent a few weekends trying to fix that thing, but alas my QML-fu is basically zero)
I mean, yes, if you were to put Gnome and KDE together, you'd get one project instead of two, so "50% less fragmentation". But between the two of them they've basically written five or six desktop environments already, and I'm just counting the major shifts here, where few applications/technologies were retained, not the major releases (albeit there would be some merit in that, too). That is the real source of "fragmentation", and just hypothetically merging projects isn't going to help, not when a good portion of the community sees nothing wrong with a "well, they'll have to decide if they're a Gnome app, an XFCE app, or a GTK app" sort of approach.
It is a very deliberate tactic used by influencers to equate Linux and Open Source/Free Software in people’s minds, thus closing the door to other, often better designed, open source operating systems.
This allows them to control the discourse. For a real world example, you’ll see a lot of “Linux communities” (forums, irc, slack/telegram/discord/whateverpopularcrap) where almost all of the discussion going on isn’t about Linux (be it the kernel or linux-specific userspace), but about third party open source applications.
But the people in charge of these are often Linux fanatics, and will lock threads and ban people when they mention other open source OSs. Often enough, questioning their Idols’ (Linus and the core team) perfection will warrant such a response.
This is, for instance, a major component of why so many Linux users believe the Tanenbaum-Torvalds Debate was somehow won by Linus. It couldn’t be farther from the truth. Echo chambers breed brainwashing.
This is one of the primary reasons I use linux. Package management makes installing software much better than any other paradigm.
Which is faint praise indeed. Package managers work well with a single centralised repository. When you’re using Debian, Ubuntu, Fedora, FreeBSD, or whatever and the thing you want is in the default repositories for the version of the OS that you’re using, it works really well. So well that you forget the amount of effort that it takes to maintain those repositories, ensure that everything that depends on libWhatever works with the same version of libWhatever or, if not, that you can install libWhatever v42 and libWhatever v43 in different directories and point the respective dependencies at different versions.
It starts to break down when you start having multiple sources. For example, I have an Ubuntu system that has three different package repos configured that can all provide cmake. Which one do I get? Whichever is newer. That’s fine, as long as they’re all using the upstream versioning. In this case they are, so that’s fine. It’s also fine because CMake has very strong backwards compatibility guarantees and so nothing breaks when I install a newer version than the one older things were expecting.
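To make the "whichever is newer" part concrete: on an apt-based system you can at least see, and steer, which repo wins. A rough sketch (the Kitware origin is just an example of a third-party cmake repo, not necessarily what's configured here):

# list every candidate version of cmake and the repo each one comes from
apt-cache policy cmake

# optionally pin one origin so it always wins, e.g. in /etc/apt/preferences.d/cmake:
#   Package: cmake
#   Pin: origin "apt.kitware.com"
#   Pin-Priority: 700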
Now try substituting something like ICU for CMake. ICU is not backwards compatible and it requires things to be recompiled when you install a new version. If an external package repo is shipping a program that depends on a newer version of ICU, they will provide a newer version. If two external package repos are shipping programs that depend on newer versions of ICU than the default repos, they may both provide different newer versions of ICU. They may make sure that their versions can be installed in parallel with the main repo's version, but they are probably unaware of each other and so won't test for compatibility with them.
This is the kind of thing that PC-BSD's PBI, Ubuntu's Snap, Docker containers, and GNOME's thing-that-I-can't-remember-the-name-of are intended to solve. Unfortunately, they do it by bringing in a lot of the problems of the distribution models that are common on Windows and Mac: everything comes with a complete set of its own dependencies, and now you can't easily do central updates for a security vulnerability in one of the libraries that many things depend on.
It’s 2020 and we still haven’t solved the software distribution problem. I find that sad, especially the fact that I have no concrete suggestion of how we could do it better.
This is a major reason why I switched to Arch. On Fedora (or whatever) if something isn’t in the repos, you have a few options. If you are lucky, upstream will have an rpm on their website. If not, you could compile it yourself (if you can figure out how to compile it). Of course, you will never be able to uninstall it if you ever type
make install
. You could use flatpak/snap/docker/whatever. However, those packages tend to be huge, and their dependencies can get stuck on really old versions. None of these solutions tie into any of the normal update mechanisms. So you can be stuck with outdated/insecure software without even knowing it. If you use a third-party repo you run into all the problems you mentioned. So I use arch where I just need one repo (actually three, but who's counting) and the AUR. Of course, building everything from scratch is a bit of a pain, and I end up not updating most AUR packages anyway. But it's by far the most painless solution in my experience.
Has anyone solved it? Perhaps android? Though I think they have a lot of problems that snaps, et al. have.
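For the "make install you can never uninstall" case specifically, the usual escape hatch is to route the build through the package manager so it stays removable. A rough sketch (the package name is just a placeholder):

# Arch: build from a PKGBUILD (e.g. from the AUR) and install it via pacman,
# so it can later be removed with `pacman -R somepkg`
git clone https://aur.archlinux.org/somepkg.git
cd somepkg
makepkg -si

# Debian/Fedora equivalent idea: run checkinstall instead of a bare `make install`,
# so the result is a tracked .deb/.rpm instead of untracked files
sudo checkinstall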
I tried an AppImage program recently and was blown away by how easy it was to run it! It was like Go’s single static binary idea, but for GUI apps! Very nice.
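For anyone who hasn't tried one: the whole "install" really is just marking the downloaded file executable and running it (the filename below is obviously a placeholder):

chmod +x Some-App-x86_64.AppImage   # make the downloaded image executable
./Some-App-x86_64.AppImage          # run it; its dependencies are bundled inside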
Do watch with a critical mind.
Spoiler: This is Linux propaganda.
Yeah the way he quickly brushes over support for things like games and video editing applications seemed weird to me. While the situation has improved slowly over the past decade or two, the lack of access to industry standard video and audio editing software seems like a compelling argument for why Linux on the desktop is infeasible for so many people. Likewise, the set of games available on Linux is a strict subset of those on Windows, and said games typically run better and are better supported on Windows. If either of these categories of applications are important to you, Linux is a poor choice.
There’s a lot of discussion of games being the factor for desktop Linux, but I don’t see it; at least as anything more than a value add. You can live without games, but you can’t live without the tools for work, whatever it might be. (Programmers like us have a luckier break here.) I think a lot of that discussion is because of how big the overlap between sites like Reddit and people who eat, live, and sleep PC gaming are.
The home desktop computer is a dying breed. Its use case is slowly being usurped by laptops (which are still mostly desktops), tablets, and phones. However, one use case which is not going away is gaming. Desktop computers offer one of the best gaming experiences out there. Many users stick to windows primarily because their favorite game only runs there.
Do many people use this kind of software? I would imagine it’s fairly specialized?
(Lack of) games is probably more important, especially because there's a social component to that as well: if your friends are playing a game, then you'd like to join in on that specific game. When the whole corona thing started, some of my friends were playing Soldat, but I couldn't get it running on my Linux machine so I missed out 🙁 (wine errored out; it was open sourced yesterday though, with Linux support, so I need to look at it again).
I have helped around a dozen or so ‘regular people’ over the years who did not want to pay the Apple tax and whose Windows laptops had become totally unstable move over to Linux desktops. This is the list of apps they care about:
One time I had to help someone get some weird Java app for their schoolwork installed. Most people are content consumers, not creators.
The games situation is pretty good, in no small way thanks to Valve. Nearly half of my large steam library is linux-native, whereas most of the remaining games work with proton, without hassle.
However, the situation with Video and Audio is more of a joke. All the open video editors are terrible. At least we can play videos now; I remember the pre-mplayer era, and now we have mpv which is fantastic.
As for audio, the laymen suffer from Pulseaudio, which is still irritatingly bad. The production side of audio is much better thanks to jack, but only if running under Linux-rt, as when for whatever reason I boot mainline, jack gets xruns after a while, even with 20ms buffer.
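For reference, a fairly typical low-latency jack invocation looks something like the sketch below; the device name and buffer settings are just examples (256 frames at 48 kHz is a bit over 5 ms per period):

# start jack with realtime scheduling on the first ALSA device:
# 48 kHz sample rate, 256-frame buffer, 2 periods
jackd -R -d alsa -d hw:0 -r 48000 -p 256 -n 2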
It depends somewhat what you want to do; OBS studio is pretty nice for its use case, but I wouldn’t want to produce a film.
The lack of reliable timeslices is pretty terrible on mainline linux. Doesn’t affect me often, as I have 24 cores, but if I’m running something intensive I’ll sometimes get audio skipping in 2020 (which literally never happened back in 2005 on windows).
The Linux kernel likes to tread into long, dark and narrow corridors, not yielding the CPU to SCHED_FIFO/RR tasks until much later than the time they become runnable.
I did boot into mainline recently and saw some of the usual pathological behaviour. Then I ran
cyclictest -S -p99
and spotted a 10000µs peak within seconds. Appalling.
Or 1985 on AmigaOS.
Are there open-source kernels that don’t do this and support a variety of mainstream hardware? Genuinely curious.
Most RTOSs do try hard to handle this reasonably.
If lives depend on it, seL4 is (to my knowledge) the only protected-mode kernel with formal proofs of response time (WCET) and correctness.
But if your use case is audio, you’ll probably be fine by simply booting into linux-rt (Linux with the realtime patchset) and generally avoiding pathologically bad (pulseaudio) software in your audio chain, using straight alsa or a pro-audio capable audio server (jackd, hopefully also pipewire in the future).
You should also ensure the relevant software does not complain about permissions not allowing execution as SCHED_FIFO or SCHED_RR. In my opinion, they should outright refuse to run in this situation (except perhaps by forcing them to SCHED_OTHER with a parameter) rather than run in a degraded manner, but it is a separate issue, another of the many issues in the ecosystem.
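For the permissions side, the usual setup on a stock distro is roughly the following (a sketch; the group name and limits are just the conventional ones):

# let members of the audio group request realtime scheduling and locked memory,
# e.g. in /etc/security/limits.d/99-audio.conf:
#   @audio - rtprio 95
#   @audio - memlock unlimited

# quick checks after logging back in:
ulimit -r          # maximum realtime priority this shell may request
chrt -f 70 true    # should succeed without "Operation not permitted"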
I’m more impressed by the 2005 result because it had a task scheduler. Of course the Amiga didn’t leave jobs paused - it didn’t pause them!
What do you specifically mean by task scheduler?
AmigaOS’s “kernel” (exec.library) provides preemptive multitasking with priorities.
This is what a task looks like: http://amigadev.elowar.com/read/ADCD_2.1/Libraries_Manual_guide/node02BB.html
And this for reference on the bitmap flags: http://amigadev.elowar.com/read/ADCD_2.1/Includes_and_Autodocs_2._guide/node008E.html
Essentially the run/ready/wait we’re used to.
Where the Amiga is almost cheating is by using a 68k CPU. They have very fast interrupt response, further helped by being vectored. x86 is a sloth on this.
Wait, really? That’s earlier than I’d realized (the computer I was using in 1990 could hibernate a process but switching back was 2-3 seconds wait and IIRC the process had to ask to be switched away from).
Yes, AmigaOS had preemptive multitasking with priorities from day0.
Furthermore, it also had a message-passing system called ports, which was used for IPC, including a good part of user tasks talking to the OS, which was a multi-server OS (internally split as services, running as tasks, with exec.library being the closest thing to a kernel we had, as it did handle task switching, IPC, memory management and early hw initialization).
AmigaOS supports shared libraries, and the OS itself looks like a bunch of shared libraries to application programmers.
Your Amiga curiosity can further be satisfied by the documents available at: http://amigadev.elowar.com/
Early AmigaOS (1985-1990) is <V37. Functions that claim to be V37+ are from the more modern AmigaOS days (AmigaOS 2.0+, 1991+), so you can easily see what was made available when.
The famous "Every OS sucks" song special-cases AmigaOS as the exception… for good reason. Overall, I still consider its design to be an advance over UNIX in many ways. If you ever see me rant about UNIX-like systems, now you know where I am coming from.
I knew it a few seconds in, but I listened at 1.75x in the background… and then he finally gets there: the folks who got rms to resign were, in his telling, complaining about stuff not worth mentioning by name - stuff which was in fact part of a national scandal.
Centering in part on MIT, a WWII-era center of American thought, the scandal over Epstein - over dehumanizing women's lived experience - wasn't a joke scandal. The acts of RMS had been bugging people out for years, in ways that, had he done them to fellow men, fellow young men, would probably have gotten him booted years ago. But here in this video it doesn't even merit mention by name, only vague reference.
Being excellent to each other means replacing missing stairs.
https://en.wikipedia.org/wiki/Missing_stair
I think there are two separate issues here: what RMS said and did, and the mob reaction that forced him out. Maybe both are wrong.
There was no mob reaction, let alone a 'linux users mob'. That's really the crux of it. The video assumes Linux users had some majority agency in removing him, when in fact he was booted because no organization can have someone like that associated with them. It's common decency, not Linux community decency, that forced him out. No mob of any sort needed.
If I’m ever in the mood for reading toxic rhetoric (which never happens), I go read RMS. While the things you listed are important, I never really knew about them until the mob happened, because I chose to ignore him.
Anyway, toxic rhetoric is exactly the thing that polarizes our world and turns everyone into abominable villains. Toxic rhetoric starts wars and tears communities apart. So in my book, toxic rhetoric alone is enough to get anyone fired.
Yeah I don’t get why some people insist on defending Richard Stallman after his:
It often makes me think the RMS defenders really think that low of women and that their only code is the bro code. As a woman in technology, it makes me feel somewhat jaded. Just because someone does good things doesn’t excuse them from acting like a human being. :-/ It feels more that people like me are perfectly fine to sacrifice as long as some figure head gets his adoration. And that bothers me a lot.
RMS never supported pedophilia. I actually read the supposed evidence. They are thoughts/questions, admittedly naive, on the subject of unintended consequences of laws (evidence against pedosexual activity being itself illegal) and whether non-coercive, mutually beneficial, pedosexual activity, could, in principle, be possible.
Also, please don’t use the term ‘pedophilia’ here. Pedosexual activity is child abuse and that is what is wrong, not ‘pedophilia’. We should encourage people to come forward as pedophiles to counselors and therapists so they can learn to live with a -philia that they must never follow up on. Shaming them for their feelings or even calling them evil merely for their feelings only makes the risk greater.
It was a single piece of philosophical blog banter that he later retracted. Calling it support is a far, far stretch.
Am I missing something, or was all he did literally asking women out on dates? Is that bad?
Stallman is weird in many ways but to consider him to be a malicious monster is ridiculous.
He’s an awkward, clearly aspergers guy that asked some women out on dates quite awkwardly. That’s ‘untoward behaviour towards women’ today.
When I initially read about it, I was quite skeptical, particularly as I had recently seen many cases of mob justice gone wrong.
Later it blew out of control and I did some digging. It turned out to be nothing other than the usual character assassination some collectives favor. Due to his personality and lack of awareness of current trends, Stallman proved an easy victim.
Hard agree. Unfortunately, this is happening in tech far too often. The free and open source software movements are getting caught in the cross-fire of US politics.
I disagree. RMS was a seminal contributor to the movement, but there is no reason to pretend that his behavior - which might have been acceptable back in the day when computer science was a boys' club and movies like Revenge of the Nerds were considered funny even though they depict non-consensual sex as a 'prank' - is compatible with today's world.
Epstein’s case is not subject to ‘politics’: the guy was a known pedophile and sex trafficker. There’s not even a point in arguing that. Minsky, who Stallman defended, was well-aware of Epstein’s circumstances and willingly took money from him and sexual favors from one of his victims. One could argue that Stallman was trying to make a ‘philosophical’ argument or playing devil’s advocate, but you’d have to ignore the kind of message that would be sending to any young women or victim of sexual assault in that mailing list: welp, it’s a shame Minsky got caught doing something really bad, let’s just ignore this other victim so we avoid rocking the boat!
It is insane to me that RMS's opponents would denounce a person for making an argument that a personal friend of theirs is not guilty of a crime, on the grounds that making this argument "sends a message" to people who might see it who are members of a demographic they assume is likely to be a victim of that crime. I'm deliberately not addressing the question of whether or not Stallman's argument is correct, in the context of the actual alleged crime. Maybe he's wrong and Minsky really was guilty, in a legal or moral sense, of having illicit sex. I'm not sure what I think about Stallman's argument in context, although I agree with him that something seems morally wrong about charging a person with the crime of statutory rape who was unaware that the person they had sex with was under the age of consent.
I’m not particularly interested in litigating the details of a media-reported crime I have no special information about, and it doesn’t matter in any event. Young women as a demographic, or even actual victims of sexual assault, have no particular right to never see someone argue that a specific sort of sexual encounter wasn’t actually a sexual assault. I refuse to be complicit in condemning RMS for doing so.
Do you even understand how society works? Are you arguing that people - in particular people in a position of power in a learning institution - should be able to say whatever comes to their minds, disregarding how other people are going to take what they say?
That's the kind of behavior that leads to the normalization of behaviors like Minsky's. The fact that people like RMS are comfortable treating this as some philosophical riddle we are able to discuss, instead of clearly gross behavior that would creep the fuck out of any young person in the lab, is the problem. This is not someone pondering whether a bear shits in the woods, this is someone defending a 74-year-old man having sex with people in the age range of his students, in front of his students.
Now, if that's perfectly normal behavior for you, then I don't know what to tell you. Maybe a consultation with a therapist would be a good start (and no, I'm not being flippant about it).
I believe this is a bit patronizing.
Yes? I believe that anyone should be able to say almost anything. Of course, there are the traditional exceptions for slander and specific incitement of a crime.
Lacking foresight is no reason to deny someone’s voice.
Good argument against arresting someone. None of this is illegal, nor should it be.
Bad argument for leaving someone in charge of the FSF. Figureheads have resigned for less.
Being cast out from society is, like it or not, a serious effect. It’s more serious, in many cases, than legal censorship.
Not being the head of the FSF any more is not the same thing as being banished.
Being ostracised by the community and accused of all manner of wrongthink and wrongdoing based on at best wilful misinterpretation is being banished.
If it works out anything like it worked out for Brian Eich, I’m sure Starman would do fine.
Brian Eich? Starman?
Come on if you’re going to participate in the discussion you could make a good faith effort to at least get the names right.
I agree, but it’s Brendan Eich and Stallman. Starman is someone else entirely.
But, on the other hand, it isn’t patronizing at all to assume how everyone should behave around people who say things that make them feel unsafe?
Sure, and I believe people should be able to fire a co-worker they disagree with or find generally disagreeable.
‘Lacking foresight’ is hardly the problem, when there’s an extensive email thread where RMS kept digging deeper and deeper. I could see him lacking foresight before the first email, but by the third reply you’d assume he’d have some hindsight.
Dr. Stallman’s free speech rights have not been infringed in any way.
Yes. In fact, providing a space for people to say things that (some) other people take to be offensive is an important function of universities as an institution. This is the purpose of tenure systems, for instance.
This isn't (only) a question of whether some kind of sexual behavior is gross on an abstract philosophical level, it's a question of whether something a friend of his did in fact constituted, or should have constituted, a serious felony under law. Discussing questions of law is absolutely the rightful concern of any citizen. I completely reject the idea that the standard of whether a behavior is moral or not should be based on whether some people claim it makes young people in a lab feel grossed out or not.
I defend this. I explicitly believe that it is possible for a 74 year old man to have sex with someone of the traditional age to go to college (18-22 or so - that is, legal adults!) without either party doing something immoral. In fact, I believed this when I myself was within the ages of 18-22! Again, I refuse to be complicit in condemning someone else for making this kind of argument.
RMS, as a non-tenured member of MIT, should’ve known that didn’t apply to him.
‘Gross’ vs. ‘legal’ isn’t abstract in the context he was discussing though. Let’s think of a different example: let’s say someone in an academic context talks about his experiences with prostitutes in a country where that’s legal. Would that be acceptable?
Just because something is legal, it doesn’t mean discussing it or defending it is appropriate in every context.
Well, we agree to disagree on that. Personally, I feel like there are so many questions about power imbalance embedded in that statement, that it could lead to a loooooong conversation I’m not willing to have seeing as people have been flagging my replies because apparently not defending RMS is a sin or something.
There is a time and place for this - for example, invited speakers, seminars, lectures. A free-form mailing list for students and faculty would fall outside of this in most contexts - i.e. if some idiot starts spouting Nazi propaganda for trolling purposes, they can be banned from the conversation.
Dr. Stallman did not have tenure at MIT. In fact, he was not even part of the staff. His office and access to the mailing list was provided as a courtesy.
The sad part of this is that before this happened, I had no idea that Marvin Minsky was mentioned in the Giuffre deposition[1]. Had Dr. Stallman not gone out on the field and broken a lance for him, I would not have to contend with the plausible possibility of him availing himself of sexual favors provided through Epstein.
One can simultaneously agree that Dr. Stallman has and did have a right to make this argument, and also agree with the right of MIT to terminate his unofficial occupancy of an office, and the right of the FSF to remove him from a leadership position[2].
Free speech is the right of an individual not to be gagged by the state, not an obligation that private parties have to host that speech.
[1] a deposition isn’t a statement of fact under the law, it’s a document submitted by one party in an ongoing lawsuit.
[2] as an advocacy group, the FSF is reliant on persuading people to its ideals (and usually on soliciting financial donations). A public view (no matter how legally absurd) that its primary spokesperson is a defender of pedophilia is counterproductive to the mission of the FSF.
Free speech is a principle of good society. Yes it has legal protection in some states but this constant appeal to ‘free speech is just a law stopping the STATE from censoring you’ is pathetic. Should we condone attacks on free speech in other states because it’s not protected by law in China or North Korea? Freedom of expression existed as a principle of a decent society far before it was ever enshrined in legislation. In New Zealand it isn’t even supreme law, essentially just a rule of administrative law and of legal interpretation (interpret ambiguity in favour of rights).
Nobody is talking about whether MIT had the right to terminate his privileges. That’s not in question, anywhere in this thread. The discussion is around whether it was right to do so.
In the narrow circumstances of Epstein’s alleged contributions to Harvard (he also had access to an office there as a private citizen, I believe) which is currently tearing Harvard apart, it was absolutely correct of MIT to defensively cut off Dr. Stallman from access to official MIT facilities and mailing lists. Not doing so would only have hurt MIT’s image (and possible future endowments).
Note that if Dr. Stallman had been part of the faculty or student body, I would probably not accept MIT’s behavior.
What is your opinion on the FSF removing him from a leadership position?
I think that people should not be expected to self-censor on the basis that people might get offended on behalf of others.
Society decided a long time ago - and has not changed its decision since then - that once you’re over the age of consent there’s nothing wrong with relationships with anyone of any age also above the age of consent.
You can advocate for change to that or that you think that’s wrong, but given that the primary basis for LGB rights advocacy I’ve seen is ‘consenting adults in private should be able to do what they like’ I think you should think carefully about what you’re implying.
So, is there any situation at all where you think people should self-censor? Say, for example, is sexual harassment appropriate? After all sexual harassment is just one person being offended about how someone else treats them.
This is definitely not true. Society frowns upon all kinds of relationships where the age disparity is incongruous with the situation. For example, the terms ‘gold digger’, ‘cradle robber’ and ‘cougar’ come to mind. Legality doesn’t equal acceptance.
If you can’t see the difference between two adults in a loving relationship wanting to be accepted by society vs. someone abusing a power imbalance to take advantage of people, then I don’t know what I can do to explain it to you.
Conversely, Stallman has no particular right to an office provided as a courtesy by a private university, nor does he have a particular right to a leadership position in a privately-held non-profit advocacy group.
Imagine someone who pretends to be very nice and morally virtuous to a crowd that’s obsessed with this - which can easily be any crowd when carefully herded the right way (most people will agree with superficial statements that sound “morally good”) - and gains influence in this crowd.
Then, using this leverage (the belief that this person is definitely a good person) and some character assassination material about someone (hereafter, the subject) - an article, tweets, whatever, claiming the person is terrible; truth here is irrelevant, and the holding of controversial opinions at any point in time, even in the distant past, is often used as material - written by themselves or by some convenient third party, this person calls on the mob to take actions to try and destroy the subject’s life. Those actions include online bullying and organized harassment of the subject’s employer, family and friends, and that isn’t an exhaustive list.
There’s a name for a person who does this: sociopath, or as it used to be called, psychopath. They are the actual monsters, whereas the subject is nothing but a victim. If you still have doubts, digging a little into the perpetrator will typically reveal they have had other targets. Yes, they do it, enjoy it, realize they can get away with it, and then do it again.
It helps when there are other monsters in the mob who enjoy doing this. They willingly help the mob leader, since in exchange they get the leader’s help with their own targets. There are literally entire communities built around doing this.
This is getting out of control and it needs to stop. Awareness of how these monsters operate helps. At some point, however, instigators will hopefully have to start answering to Justice. The official sort, with trials, evidence, presumption of innocence and all these steps and safeguards which separate Justice from Mob Justice.
https://aeon.co/essays/how-the-cruel-moraliser-uses-a-halo-to-disguise-his-horns
I have just finished reading this. As I suspected, others have noticed this pattern, analyzed it and explained it much better than I could have.
Thank you for linking this excellent article on the matter.
This is a straw man.
No, it is not. My comment is about a dark pattern I have noticed in recent years, nothing else than that. The intended audience is pretty much everybody reading the thread. The intended effect is to raise awareness of this dark pattern, and to promote critical thought (there’s never enough of this).
The poster I was replying to isn’t being targeted by me in any way other than having written the post that prompted my reply, and is absolutely not being pinpointed as the instigator. Thus, I am not making them into some straw man.
Instead, they are kindly and indirectly being nudged into considering the possibility that they might be participating in such a scenario, and into reflecting on whether what they’re doing is positive.
Can you cite an example of that ‘dark pattern’ you’ve noticed? Can you cite two examples? Can you cite examples where both sides of the political spectrum used that dark pattern to their advantage?
I’ll be happy to discuss them.
Here’s an example: there is a transgender YouTuber whose channel is called ‘ContraPoints’. Her name is Natalie Wynn. She makes videos about a variety of different topics. She’s clearly left-wing and has stated openly and frequently that she is not a transmedicalist (essentially someone with a very narrow view of what constitutes a ‘valid’ transgender person).
She was essentially ‘cancelled’ on Twitter, and left Twitter as a result, because she made a video where she used a particular transgender activist as a voice actor for all of 6 seconds in an hour long video. What this activist actually said had nothing to do with transmedicalism, he was there to be the voiceover for a particular quote.
However, because said activist is alleged (without any basis that I’ve seen) to have transmedicalist views, not only did ContraPoints get ostracised from Twitter and harassed so badly she deleted her account and left the platform, but anyone that expressed any support for her (her friends, etc.) were harassed, even if they didn’t actually say anything beyond ‘she’s my friend’.
So to be clear, people get harassed (death threats, other violent threats, spammed with abusive imagery, told to kill themselves, etc.) not just for being a transmedicalist, not just for allegedly being a transmedicalist, not just for collaborating in an unrelated way with someone that they did not know allegedly is a transmedicalist, inhales but for being friends with someone that collaborated with someone that they did not know allegedly is a transmedicalist.
But no you’re right I’m sure that cancel culture isn’t a problem.
The answer to all your questions is: I don’t need to.
I do not have the time nor the inclination to humor you any further than I have.
So… it was a straw man. You were just pushing the whole ‘virtue signaling’/‘conservative oppression’ talking point on a conversation that had literally nothing to do with that.
I have a feeling that you are one of those people who thinks he’s right even when proven wrong, and has been proven wrong enough times he’s learned not to push the envelope when things aren’t going his way. Can’t say I’m surprised.
lol
I especially recommend reading the “Low grade “journalists” and internet mob attack RMS with lies.” article, perhaps more for its content than its choice of words.
The upside to this whole debacle is that RMS will probably have more time to work on the GNU project. IMO the role of president of the FSF was never the best fit for him – even if I disagree with the way they amputated him. I’ve been following the Emacs mailing list in more detail recently, and maybe I have a wrong impression, but I see him taking part in the discussions more than he has over at least the last few years.
I remember that article. It had some weird phrasings, since edited:
https://twitter.com/gerikson/status/1176211260142231552
RMS’ more ardent defenders are in general a bit outside the mainstream.
This is known as an ad hominem. The author’s personal views (or what kind of person they are) are irrelevant to the validity of the arguments presented.
The linked tweet is a good reminder of why I avoid Twitter. It is a community full of hate and destructive energy, not one of reasoning and respect for differences of opinion.
If someone cannot tolerate the existence of human beings who hold opinions different from theirs, then they’re toxic. Twitter is toxic, as it’s full of this sort of people, to the point that it hosts mobs that attack people they disagree with, with the full intent of destroying their lives. This is called mob justice (I believe those involved tend to use euphemisms for it), as opposed to justice. Basically a mob, typically herded by a sociopath, playing judge and executioner. It isn’t just in any way.
Twitter tolerates this behaviour and thrives on it. Twitter is a platform for organized hate. It is literally the platform where most of this is conducted. If Twitter went away overnight, the world would be better for it.
It’s not really a stretch to say that an age of consent of 16 is too old. There are clearly kids having consensual sex below that age - though not much below it - that shouldn’t be illegal. ‘Romeo and Juliet’ laws for anyone under 18 are probably a much more reasonable system.
Hm, that article doesn’t do a great job of proving Stallman’s supposed innocence.
His argument that Minsky having sex with Virginia Giuffre was not a crime, even though she was a minor, because she was coerced by someone else is ludicrous. By that argument, having sex with a victim of sexual trafficking is acceptable. Minsky was a grown-ass man who should be responsible - and accountable - for his decisions, including the decision to have sex with a minor in very weird and strange circumstances.
Legality aside (which depends on jurisdiction), the very obvious lack of morality of the act should make anyone take a step back. One can’t equate a 17 year old having sex with a partner of similar age as part of a normal love relationship with a full-grown adult taking advantage of someone barely able to make a decision about their sexuality… and yet, the author of that article seems to think that because Stallman has been consistent about that misrepresentation, he must have been wronged by the people pointing out that it’s wrong.
He’s not arguing that it wouldn’t be a crime. I don’t know how you read that from the very clear, incredibly specific text.
Did you read the mail thread linked in that article? The whole point of the thread is pondering if they should be calling this sexual assault or not, because to Minsky’s knowledge she could’ve just been a really keen very young woman. For context, they are talking about a 74 year old thinking that a teenager is coming on to him.
I know two women who in their teens were gerontophiles.
Ah, I see. So that makes it OK, I guess.
It makes it believable that an old man could think a teenager is coming onto him, at the least.
The structure of your post throws around some ideas, but doesn’t construct any arguments. It reads as an appeal to emotions.
Your point being? Be very specific, because through your roundabout strategy, you come across to me as pushing the idea that some topics should never be discussed, that some ideas should never be expressed, and that people who dare do so should be executed by the mob. Or that it is alright if this is what happens.
Please correct me if I am wrong. By all means, please tell me this isn’t what you’re trying to push.
Being born in 1983, she couldn’t have been a minor in 2001 when she alleged this trafficking took place. Assuming that it happened, that Minsky was involved, and that Minsky had sex with her, the crime would not be having sex with a minor.
If you don’t know that someone is a victim of sexual trafficking then it isn’t wrong. Obviously.
That depends on the definition of ‘minor’. In most places that means ‘under 18’, and last time I checked, if she was born in - say - September 1983 and the sexual encounter happened in January 2001, that’d make her a minor. In fact, given that both of them are American, and considering that Americans aren’t exempt from crimes committed against other Americans abroad, the statute is even less clear.
Millions of Johns that got thrown in jail would like to disagree with you.
As time passes, I think the biggest mistake is to assume that Linux is a platform. A distribution + desktop environment would be much closer to what people mean when they talk about “platforms”. My “glass half full” take is: Linux is just the base, which happens to ensure that a lot of knowledge and effort is transferable between these platforms; it’s great that Fedora and Ubuntu use the same C standard library and don’t develop their own things. It’s structurally impossible for the “Linux world” to work as one platform, since the groups working on what’s falsely described as the “Linux platform” are separate projects with their own goals and priorities.
I wasn’t convinced by these arguments. To my knowledge, most “Open Source Linux Companies” are focused on cloud services and related tools, as was mentioned for servers and the like. People developing on these platforms might never have to see the machine or have to install anything. Microsoft, Google, Toyota aren’t investing in the Linux foundation because they want to have a good desktop experience.
Something that is subjective but puzzles me a lot: in 1998-2002, improving the Linux desktop seemed like a huge focus. Since then there’s been a big growth in corporate sponsorship, which has been server focused. But the desktop effort was never corporate; it was just enthusiasts trying to show they could make a better desktop environment and better desktop applications. Somehow that seems to have faded.
It’s interesting that there are a lot of enthusiast projects on GitHub now, where people try to show they can build a better version of some tool, often in Go or Rust. But desktop environments, or even GUI programs in general, don’t seem to be a focus, and I honestly don’t understand why. And it makes sense that without that, people end up exposed to Linux ideas over SSH, or WSL, or through the (not Linux) Mac terminal.
I have a feeling a large part of that has to do with the shift towards web-based applications.
Mac OS X sucked the oxygen out of the Linux desktop in the early 2000s.
Some of it. The rest was killed by Android. Android works on phones, tablets, laptops, and desktops and has a well supported set of GUI toolkits and a large ecosystem and it is Linux. Today, Android is probably the most widely deployed client OS. You can get Microsoft Office for Android. You can get Spotify and Netflix clients for Android. You can get an insane number of games for Android. You can also install a load of open source software via F-Droid or other mechanisms.
The thing that hasn’t taken off as much is the GTK/Qt + X11 (/ Wayland / Mir / this week’s fashionable X11 replacement) stack on the client. If anything, Linux has killed the other open source *NIX systems on the client because it’s the only one that can run the Android stack (Linux-only things like cgroups are tightly woven through the Android runtime with no abstraction layer).
Some really important work on free-software desktops has been contributed by corporations in the past couple of decades. Case in point: GNOME accessibility, which was mostly implemented by Sun.
Partly because of the complexity, I believe. It is one thing to write a better grep in Go or Rust and a totally different one to write a GUI application, considering the complexities with cross-platform GUI toolkits, HiDPI support, packaging & distribution etc.
It’s certainly more complex, but why did open source culture change? Honestly, the people who started GIMP, Gnumeric, AbiWord, KOffice, Konqueror, Wine, etc. had to be mildly crazy people seeking out a very complex series of challenges. Today it seems like all of the complex parts are either presently corporate sponsored (Chrome, Mozilla, the kernel, VirtualBox) or were largely built with corporate sponsorship and left for the community to maintain (LibreOffice, Eclipse). The “we don’t need companies for the complex stuff” culture seems to have really faded.
I see a huge, huge amount of talk about improving the Linux desktop. It seems to be all that people are interested in these days: desktop desktop desktop. Dumbing down the user experience in the hopes it’ll be usable by the developers’ parents, I guess.
Maybe it’s really just a disconnect. I mean, you’re right that Gnome/KDE development continues, and real developers have just learned to ignore them. My desktop is fvwm and has barely changed in two decades.
I think the open source desktop environments made a big mistake betting so heavily on Linux. If the Linux kernel doesn’t provide features that KDE or GNOME wants, they need to convince people who are largely paid by companies that sell server products to help them upstream changes. This is difficult. If they’d retained the focus on portability that these projects had at their start, they’d have found it easier to get the features into Free/Net/OpenBSD (who would all love to have some differentiating features for desktop users). At that point, it’s easy to convince the Linux community to accept the features. Instead, they’ve made everything that’s not Linux such a second-class citizen that the idea that GNOME / KDE developers would switch to a different platform is an empty threat.