The comments there are frequently pretty toxic. The site seems to relentlessly keep tabs on updates to just about every project out there, but the comment section is to be avoided.
It absolutely is, I can confirm.
Unlike most online journos, I actively participate in the comments on my stories on The Register, and I have been thanked for this a lot.
A couple of years ago I joined Phoronix’s forums – I think it was to point out a mistake. A few people messaged me to welcome me, or to express surprise I wasn’t already a member.
But… OMG, it is a cesspit. I have been on the internet since 1985, and I have been on some of the nastiest sites it's had to offer, but sheesh: the Phoronix forums are nasty. The worst of the Linux and FOSS world, gathered to insult, mock and denigrate one another.
I don’t care about the name, but I do regard the “G” as a warning flag for FOSS fanaticism.
Whole-application redesign is… a big undertaking, even apart from all the overheated politics. Anyway, https://krita.org/ is pretty much all I require in terms of a replacement. I don’t get why people are so anxious to preserve crusty old codebases rather than move on to new and better applications.
I’ve carefully outlined already how LLM crawlers and the usual webcrawlers are two extremely separate things. Yet people still try to whatabout regarding search engines?
I don’t see where you outlined it. I see you distinguishing between web crawlers and LLM crawlers in that LLM crawlers are crawling at an “unprecedented rate”, but that doesn’t seem like a distinction between LLMs and search engines. I’ve also seen you mention that LLM crawlers are coming from a variety of IP addresses, which also does not seem like a meaningful distinction between LLMs and search engines. Feel free to point me to your careful outline of the differences between LLMs and search engines because I seem to have missed it.
I also think we may be talking past each other, because I don’t think I’m making a “whatabout search engines” argument–I just don’t see how crawling the web to feed LLMs is categorically different than crawling the web for other purposes.
You are in a thread about FOSS projects getting DDoSed because of LLM crawlers, and you think holding them accountable is moving the goal post?
No, “moving the goal post” was about starting from “LLMs are fundamentally bad because of crawling” to “these LLM companies are badly behaved”. I’m not advocating for AI companies, I’m advocating for understanding this as a specific case of unwelcome Internet traffic–this analysis points FOSS projects toward the tools they need to use to mitigate: auth, CDNs, rate limiting, firewall rules, etc as appropriate. Ideally everyone on the Internet would be a well-behaved citizen and we wouldn’t have to worry about unwelcome traffic, but that’s not how the Internet works.
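For concreteness, here is a minimal sketch of one of those mitigations, a per-client token-bucket rate limiter; the capacity and refill rate are made-up numbers, and in practice projects would more likely configure this at the reverse proxy or CDN layer than in application code.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow short bursts up to `capacity`, refilled at `rate` tokens per second."""
    def __init__(self, capacity=10, rate=1.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill based on elapsed time, never exceeding the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = defaultdict(TokenBucket)  # one bucket per client IP

def handle_request(client_ip):
    if not buckets[client_ip].allow():
        return 429, "Too Many Requests"
    return 200, "OK"

if __name__ == "__main__":
    # A client hammering the endpoint is throttled once its burst allowance runs out.
    for i in range(15):
        print(i, handle_request("203.0.113.7"))
```

Per-IP limits alone don't do much against crawlers that rotate through entire cloud ranges, which is part of why projects also reach for CDNs, login walls and firewall rules.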
The original maintainers had tried to sell GIMP to schools and found that they could not, repeatedly. People very much did care in institutional settings, which was exactly what Glimpse referred to and what they were trying to remedy.
Is there any information about the countries where they tried that? I would be very surprised if people in non-English speaking countries had reacted like that.
I have it on reasonably good authority, from someone who went poring through mailing lists and through the history of it, that the origin of the name GIMP was intended as an insult/an in-joke. The reference to the actual slur was intentional from the start.
I am not denying that, but it does not change my belief that this is a problem only for a subset of native English speakers.
If nobody cared that much, why was there a flamewar? Why did people get death threats and harassment and hate mail?
People are awful, and I don't think this is good behaviour. But I would not confuse small idiotic groups with the general population. I would bet that the large majority of GIMP users have no idea what mailing lists are or how to participate in one.
Why was a very strong community of disabled programmers, artists and designers subjected to such repeated harassment, to the point that many of them swore off doing any FOSS work in the future?
I was not aware of that, so I can’t comment here.
Surely, if people do not care, then changing the name is simultaneously no big deal.
We often have the discussion that OSS developers owe their users nothing and can do as they please. I think this applies here too. Maybe it would indeed be better to change the name, but they can still do or not do as they please.
The original maintainers had tried to sell GIMP to schools and found that they could not, repeatedly. People very much did care in institutional settings, which was exactly what Glimpse referred to and what they were trying to remedy. Hell, even within the small sample size of this thread, there’s already someone further down in the comments who echoes these experiences with trying to introduce it in educational settings.
I have it on reasonably good authority, from someone who went poring through mailing lists and through the history of it, that the origin of the name GIMP was intended as an insult/an in-joke. The reference to the actual slur was intentional from the start.
If nobody cared that much, why was there a flamewar? Why did people get death threats and harassment and hate mail? Why was a very strong community of disabled programmers, artists and designers subjected to such repeated harassment, to the point that many of them swore off doing any FOSS work in the future? Surely, if people do not care, then changing the name is simultaneously no big deal.
Around 2008, I remember hearing rumblings of Adobe looking to go the software-as-a-service, cloud route. I remember looking at the roadmap, seeing nondestructive editing with layer adjustments, and thinking “awesome, I bet it will be just another 2–3 years till we get this”. My estimate was very far off, but I followed the GEGL updates & saw the possibility. At that point I had moved mostly to photography over graphics, where darktable fit 90% of my needs without needing two tools, with Hugin being a great companion for stitching panoramas (in college Lightroom couldn’t get everything done & I needed to move to Photoshop for the last layer of adjustments)—so I didn’t need GIMP much. But here we finally are! I find myself needing the GIMP raster image editor about once a year for something not really best suited to the other applications, but Krita has come such a long way, with a lot of overlap that has now turned into a more interesting, friendly FOSS ‘competition’. I have been wanting to get back into pixel art, so I will definitely need to give GIMP vs. Krita a real shot for advanced, free-software workflows—which is pretty good timing to say the least.
Now if Wayland + WMs can finalize their color management situation, we can be in business… (I really need DCI-P3 + ICC corrections, where it seems X11 is the only thing that reliably runs DisplayCAL for measurements.)
I like GIMP as an editor, but it very obviously can’t be introduced into school or community environments because of its name, and the user interface is something I’m used to but everyone around me finds absolutely punishing. Every time a news item about GIMP shows up, I’m reminded of the short-lived Glimpse fork, and what could have been were the fork not gaslit out of existence by a huge group of (often seemingly frothing-at-the-mouth) FOSS fanatics who didn’t read further than “We want to fork to change the name” before loading their posting guns to talk about how “political correctness has gone mad”. People I considered friends and acquaintances at the time were suddenly getting incredibly, disproportionately irate about such a minor change, and ignoring the rest of the project’s aspirations to focus on such a tiny detail.
Don’t get me wrong, I’m a FOSS fanatic at times, but the fact that such a promising fork (that consisted of a prospective overhaul to the GIMP UI, and a name change so it could be easily pushed into the educational space) got ragebombed out of existence like that, still to this day does not sit well with me, and strikes me as one of the biggest losses in Free Software in the last ten years. The fact that the majority of people involved in Glimpse were disabled, or otherwise marginalised, really makes me wonder how many people stepped out of FOSS for good.
It’s sad that this is now my main association for the GIMP project. A mediocre tool that I have minor gripes with, and a huge posting scuffle over what was, essentially, piss in the wind.
So to be quite clear, LLMs are unethical because they are popular, and that popularity is driving an increase in the rate at which public webpages are crawled. Is archiving public websites unethical? Does it become ethical if disk prices fall and suddenly everyone decides to archive the public Internet? Similarly, if everyone decides they want to self-host a search engine, does Internet search become unethical?
I’ve carefully outlined already how LLM crawlers and the usual webcrawlers are two extremely separate things. Yet people still try to whatabout regarding search engines?
Jesus, people. Please stop. They are not the same thing. The argument is only relevant at a surface level. Please.
This in particular is a pretty egregious moving of the goalposts from LLMs to many of the companies that produce them.
You are in a thread about FOSS projects getting DDoSed because of LLM crawlers, and you think holding them accountable is moving the goal post?
I’m confused about why this is the issue. That doesn’t seem like the issue if we’re discussing the ethics of performing the act; it seems like an issue with how one would respond to the act. I feel like this is unrelated, right?
They are intentionally not following it and whataboutism does not help you here.
This is not whataboutism. Bringing up symmetric situations and asking for a symmetry breaker is perfectly reasonable. I feel like you’ve also misunderstood that I’ve already granted that intentionally bypassing a site’s desire to limit access is likely unethical.
Then go engage with the FOSS communities providing you with a free service, because we are really struggling with this bullshit.
I feel like we’re talking past each other. For one thing, none of this has anything to do with me, I am not crawling anyone. I also granted that crawling when a site says not to seems unethical. What I am asking for is clarification about the moral judgments regarding the fundamentals of LLMs. I don’t think it’s unreasonable when faced with strong assertions like “this technology is fundamentally unethical” to ask for a justification and provide helpful questions to help guide such a justification.
The issue, though, is that you can’t tell. FOSS projects are getting absolutely hammered by hundreds of IPs across entire IP ranges in Azure, GCP and Alibaba Cloud. You can’t tell who is who, who is perpetrating this, or whether they could do better. ArchWiki has had huge issues with uptime lately, and we’ve had to put all the history pages behind a login because crawlers were aggressively going through every link there.
Can one crawl in a way that is not aggressive, and if so, would that address the ethical issue? Lots of sites crawl the internet for many different purposes, it seems like we mostly just want them to be respectful as they do so, but otherwise we don’t typically care too much - at least, that’s my impression. Is a search engine unethical?
robots.txt has been a thing for 30 years. They are intentionally not following it and whataboutism does not help you here.
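For contrast, here is a minimal sketch of what actually following it looks like for a crawler, using Python’s standard robots.txt parser plus a self-imposed delay; the bot name and URL are placeholders.

```python
import time
import urllib.robotparser
from urllib.parse import urljoin
from urllib.request import Request, urlopen

USER_AGENT = "ExampleBot/0.1 (+https://example.org/bot)"  # placeholder bot identity

def polite_fetch(url):
    # Check the site's robots.txt before touching anything else.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(url, "/robots.txt"))
    rp.read()
    if not rp.can_fetch(USER_AGENT, url):
        return None  # the site asked not to be crawled here; respect that
    # Honor any Crawl-delay, with a conservative default between requests.
    time.sleep(rp.crawl_delay(USER_AGENT) or 1.0)
    req = Request(url, headers={"User-Agent": USER_AGENT})
    with urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    page = polite_fetch("https://example.org/some/page")
    print("skipped (disallowed)" if page is None else f"fetched {len(page)} bytes")
```

A crawler that identifies itself, honors Disallow rules and paces its requests is exactly what the badly behaved LLM crawlers in this thread are not doing.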
I suppose that’s all to say that I’m not really convinced.
Then go engage with the FOSS communities providing you with a free service, because we are really struggling with this bullshit.
I love this stuff. I have been following FOSS encoder development for a very long time, and I remember learning about the importance of subjective evaluation and psychovisual enhancements in the days of Theora being a potential “good enough” alternative to the patent-encumbered H.264. Then I watched x264 do things that seemed like magic alongside Google releasing VP8 as an open-source and royalty-free codec that was much closer to H.264. I literally got excited to the point of tears when the Alliance for Open Media was formed years later and again when AV1 was finalized.
I’ve been playing with video encoding for over 20 years and a fan and user of FOSS for well over 15 years, and AV1 has been the culmination of so many hopes and dreams. It has been awesome watching the work done by the community around the -PSY forks and related tools, and it gives me the best possible flashbacks of those wild days of x264 magic. Seeing so much of it get upstreamed into the official SVT-AV1 so it can reach a wider audience is chef’s kiss. Thank you for keeping this spirit of community obsession with better subjective quality alive.
It depends in part on what distribution you use. For example, I’ve been upgrading (in an officially supported way) from one Fedora release to the next since 9 or maybe 12 (it’s been a long time), and I can count the number of times I had to fix something on one hand (and if I waited to upgrade until the new release had been out a couple months, I might not have needed to manually intervene but I was too eager to upgrade).
I think I agree with your original point–FOSS projects are more likely to move on to the new than maintain the old–but in practice it hasn’t really mattered to me. Again only speaking for Fedora’s default GNOME install, the moves to PulseAudio and later to PipeWire were seamless. PipeWire is a dream, by the way. In the same period of time there have been far fewer major releases of Windows, but the changes have been drastic and unpleasant in comparison.
If you compare Fedora 12 to 42, sure, wildly different. But if you simply used them to do your work, upgrading along the way, I don’t think most people would notice. I also don’t think most people read the release notes like we do. The more things can just work the less it matters what components changed from one release to the next.
If I was building my own distro I might care more about all those underlying changes, but I can’t build my own macOS or Windows.
EDIT: My bad, PulseAudio came in Fedora 8, before I would have been upgrading from one release to the next. But that was in 2007, and the only other sound-system change in the last 18 years was PipeWire, which definitely was a seamless upgrade.
I used to contribute to Camino on OS X, and I knew that most appetite for embedding gecko in anything that’s not firefox died a while back, about the time Mozilla deprecated the embedding library, but I’d lost track of Epiphany. As an aside: I’m still sorry that Mozilla deprecated the embedding interface for gecko, and I wish I could find a way to make it practical to maintain that. Embedded Gecko was really nice to work with in its time.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around.
I strongly agree with this. I’d really like a non-blink thing to be an option for this. Not because there’s anything wrong with blink, but because that feels like a rug pull waiting to happen. I like that servo update, and hope that the momentum holds.
According to Wikipedia, Epiphany switched from Gecko to Webkit in 2008, because the Gecko API was too difficult to interface to / caused too much maintenance burden. Using Gecko as a library and wrapping your own UI around it is apparently quite different from soft forking the entire Firefox project and applying patches.
Webkit.org endorses Epiphany as the Linux browser that uses Webkit.
There used to be a QtWebKit wrapper in the Qt project, but it was abandoned in favour of QtWebEngine based on Blink. The QtWebEngine announcement in 2013 gives the rationale: https://www.qt.io/blog/2013/09/12/introducing-the-qt-webengine. At the time, the Qt project was doing all the work of making WebKit into a cross-platform API, and it was too much work. Google had recently forked Webkit to create Blink as a cross-platform library. Switching to Blink gave the Qt project better features and compatibility at a lower development cost.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around. It seems that Blink is the best implementation of such a library. WebKit is focused on macOS and iOS, and Firefox develops Gecko as an internal API for Firefox.
EDIT: I see that https://webkitgtk.org/ exists for the Gnome platform, and is reported to be easy to use.
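As a rough illustration of what “wrap your own UI around it” looks like with WebKitGTK, a minimal PyGObject browser window is only a few lines; this sketch assumes the GTK 3 and WebKit2 introspection bindings are installed, and the version strings vary by distribution.

```python
import gi
gi.require_version("Gtk", "3.0")
gi.require_version("WebKit2", "4.0")  # may be "4.1" on newer distributions
from gi.repository import Gtk, WebKit2

# The engine renders the page; the window, tabs, menus and everything else are yours.
window = Gtk.Window(title="Minimal WebKitGTK browser")
window.set_default_size(1024, 768)
window.connect("destroy", Gtk.main_quit)

webview = WebKit2.WebView()
webview.load_uri("https://webkitgtk.org/")

window.add(webview)
window.show_all()
Gtk.main()
```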
I see Servo as the future, since it is written in Rust, not C++, and since it is developed as a cross platform API, to which you must bring your own UI. There is also Ladybird, and it’s also cross-platform, but it’s written in C++, which is less popular for new projects, and its web engine is not developed as a separate project. Servo isn’t ready yet, but they project it will be ready this year: https://servo.org/blog/2025/02/19/this-month-in-servo/.
I wasn’t sure Ungoogled Chromium was fully FOSS, and I completely forgot about Debian Chromium. I tried to use Qute for a while and it was broken enough for me at the time that I assumed it was not actively developed.
When did Epiphany switch from Gecko to Webkit? Last time I was aware of what it used, it was like “Camino for Linux” and was good, but I still had it on the Gecko pile.
There are non-gecko pure FOSS browsers on Linux. Of the blink-based pure FOSS browsers, I use Ungoogled Chromium, which tracks the Chromium project and removes all binary blobs and Google services. There is also Debian Chromium; Iridium; Falkon from KDE; and Qute (keyboard driven UI with vim-style key bindings). Probably many others.
The best Webkit based browser I’m aware of on Linux is Epiphany, aka Gnome Web. It has built-in ad blocking and “experimental” support for chrome/firefox extensions. A hypothetical Orion port to Linux would presumably have non-experimental extension support. (I found some browsers based on the deprecated QtWebKit, but these should not be used due to unfixed security flaws.)
That’s totally valid, and I’d strongly prefer to use an open source UA as well!
In the context of browsers, though, where almost all traffic comes from either webkit-based browsers (chiefly if not only Safari on Mac/iPad/iPhone), blink-based browsers (chrome/edge/vivaldi/opera/other even smaller ones) or gecko-based browsers (Firefox/LibreWolf/Waterfox/IceCat/Seamonkey/Zen/other even smaller ones), two things stand out to me:
Only the gecko-based ones are mostly FOSS.
One of the 3 engines is practically Apple-exclusive.
I thought that Orion moving Webkit into a Linux browser was a promising development just from an ecosystem diversity perspective. And I thought having a browser on Linux that’s not ad-funded (because even those FOSS ones are, indirectly, ad-funded) was also a promising development.
I’d also be happier with a production-ready Ladybird. But that doesn’t diminish the notion that, in my eyes, a new option that’s not beholden to advertisers feels like a really good step.
Sure, I was assuming in the P2P scenario that the peers were untrusted because that’s the scenario presented with Dropbox - that it is untrusted. If you remove that and say “I trust my peers” sure, that’s perfectly fine.
Right, that was my point. Remove the need to trust by removing third parties from the equation. In that sense, P2P file sync has the same benefits as self-hosting a client-server file sync service, except you don’t need to operate a server.
The rest boils down to “Can you trust proprietary software running on your computer?”, where I would argue “just use FOSS”. But even so, which malicious behavior would be more likely to be spotted by a user?
Sync client that’s supposed to send encrypted files to a server you don’t control actually uses weak or no crypto for the files themselves
P2P sync software that’s supposed to only talk to your other devices suddenly sends a ton of traffic to a machine controlled by the creator of said software
I feel like that matches exactly what I said. I never mentioned anything higher level like a cryptographic key or lack thereof.
You mentioned chunking and sending files to untrusted peers. That’s too specific and explicitly not what most P2P file sync tools do. Most of what I’ve said hinges on this sentence of yours:
Having a bunch of strangers integrate into your file storage solution does not seem to solve the problem of trust, it seems to make it far more complex.
The strangers never get to see your files, so this doesn’t matter.
Okay but if you don’t trust the person developing the P2P product, why would you trust them to implement this protocol?
The behavior of software I run on my own computer can be analyzed and verified to some extent, but I can’t verify what the software on your computer does. This can be improved further by using FOSS. If the files never end up on your computer, I don’t need to trust you to not do anything shady with them. If the files are only ever on my own devices, I don’t need to encrypt them either.
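As a sketch of the kind of surface-level check that is possible for software running on your own machine (no substitute for auditing the source, but enough to catch the second scenario above), you can watch where a sync client is actually connecting; `psutil` and the process name here are assumptions for illustration.

```python
import psutil

SUSPECT_NAME = "examplesyncd"  # hypothetical name of the P2P sync client's process

# A tool that is supposed to talk only to your own devices, yet holds open
# connections to an unknown cloud host, stands out immediately in a list like this.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == SUSPECT_NAME:
        for conn in proc.connections(kind="inet"):
            if conn.raddr:
                print(f"pid {proc.pid}: {conn.raddr.ip}:{conn.raddr.port} ({conn.status})")
```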
Interesting. I don’t personally care, but I know many FOSS sorts really do…