Thanks for continuing to write things like this. Your writing over the past couple of years has helped me think about how this tool that I’m deeply skeptical of can be useful for me anyway.
I appreciate it, and I really don’t understand the flags you’re picking up. Those seem abusive to me.
I can stop my doomscrolling now. You are completely correct, and you win the internet for today. Congratulations, and enjoy your prize of “one internet.”
(Thank you for phrasing it this way. I was having a hard time explaining what “vibe coding” meant to someone who asked me, and your explanation is exactly the right way to understand it, IMO.)
Oh man I didn’t even notice the “unsubscribe at any time”. Does asking for the code automatically put you on his newsletter? That’s a dark pattern alright.
Even if half of the things I have heard about Brave are wrong, why even bother when so many other great, free alternatives exist? The first and last time I tried it was the home page ad fiasco… I uninstalled it and went back to Chrome.
These days I try to use Firefox, but escape hatch to Chrome when things don’t work. I know there are better alternatives to both Firefox and Chrome, I’ll start exploring them… maybe? It’s hard for me to care about them since most of them are just Chrome/Firefox anyway. I’ll definitely give Ladybird a go when it’s ready. On paper, at least, it sounds like the escape from Google/Mozilla that is desperately needed.
Kagi bringing Orion to Linux feels promising. It’s OK on Mac, though after using it for 6 months I switched back to Safari. It looks like they’re using WebKit for that on Linux, not Blink, which is a happy surprise IMO. That feels like a good development. (I’m also looking forward to Ladybird, though. Every so often I build myself a binary and kick the tires. Their progress feels simultaneously impossibly fast and excruciatingly slow.)
If I understand correctly, Orion is not open source. That feels like a huge step backward and not a solution to a browser being controlled by a company with user-hostile incentives. I think Ladybird is more in line with what we really need: a browser that isn’t a product but rather a public good that may be funded in part by corporations but isn’t strongly influenced by any one commercial entity.
they have stated that open sourcing is in the works
That help page has said Kagi is “working on it” since 2023-09 or earlier. Since Kagi hasn’t finished that work after 1.5 years, I don’t believe Kagi is actually working on open sourcing Orion.
Their business model is, at minimum, less user-hostile than others’, since users pay them money directly to keep them alive.
If the US DoJ has its way, Google won’t be able to fund Chrome the way it has been doing so far. That also means Apple and Firefox lose money too. So Kagi’s approach might work out long term if the breakup happens.
That’s totally valid, and I’d strongly prefer to use an open source UA as well!
In the context of browsers, though, where almost all traffic comes from either WebKit-based browsers (chiefly if not only Safari on Mac/iPad/iPhone), Blink-based browsers (Chrome/Edge/Vivaldi/Opera/other even smaller ones), or Gecko-based browsers (Firefox/LibreWolf/Waterfox/IceCat/SeaMonkey/Zen/other even smaller ones), two things stand out to me:
Only the gecko-based ones are mostly FOSS.
One of the 3 engines is practically Apple-exclusive.
I thought that Orion bringing WebKit to a Linux browser was a promising development just from an ecosystem-diversity perspective. And I thought having a browser on Linux that’s not ad-funded (because even those FOSS ones are indirectly ad-funded) was also a promising development.
I’d also be happier with a production-ready Ladybird. But that doesn’t diminish the notion that, in my eyes, a new option that’s not beholden to advertisers feels like a really good step.
Of the blink-based pure FOSS browsers, I use Ungoogled Chromium, which tracks the Chromium project and removes all binary blobs and Google services. There is also Debian Chromium; Iridium; Falkon from KDE; and Qute (keyboard driven UI with vim-style key bindings). Probably many others.
The best Webkit based browser I’m aware of on Linux is Epiphany, aka Gnome Web. It has built-in ad blocking and “experimental” support for chrome/firefox extensions. A hypothetical Orion port to Linux would presumably have non-experimental extension support. (I found some browsers based on the deprecated QtWebKit, but these should not be used due to unfixed security flaws.)
I wasn’t sure Ungoogled Chromium was fully FOSS, and I completely forgot about Debian Chromium. I tried to use Qute for a while and it was broken enough for me at the time that I assumed it was not actively developed.
When did Epiphany switch from Gecko to Webkit? Last time I was aware of what it used, it was like “Camino for Linux” and was good, but I still had it on the Gecko pile.
According to Wikipedia, Epiphany switched from Gecko to Webkit in 2008, because the Gecko API was too difficult to interface to / caused too much maintenance burden. Using Gecko as a library and wrapping your own UI around it is apparently quite different from soft forking the entire Firefox project and applying patches.
Webkit.org endorses Epiphany as the Linux browser that uses Webkit.
There used to be a QtWebKit wrapper in the Qt project, but it was abandoned in favour of QtWebEngine based on Blink. The QtWebEngine announcement in 2013 gives the rationale: https://www.qt.io/blog/2013/09/12/introducing-the-qt-webengine. At the time, the Qt project was doing all the work of making WebKit into a cross-platform API, and it was too much work. Google had recently forked Webkit to create Blink as a cross-platform library. Switching to Blink gave the Qt project better features and compatibility at a lower development cost.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around. It seems that Blink is the best implementation of such a library. WebKit is focused on macOS and iOS, and Firefox develops Gecko as an internal API for Firefox.
EDIT: I see that https://webkitgtk.org/ exists for the Gnome platform, and is reported to be easy to use.
I see Servo as the future, since it is written in Rust, not C++, and since it is developed as a cross platform API, to which you must bring your own UI. There is also Ladybird, and it’s also cross-platform, but it’s written in C++, which is less popular for new projects, and its web engine is not developed as a separate project. Servo isn’t ready yet, but they project it will be ready this year: https://servo.org/blog/2025/02/19/this-month-in-servo/.
I used to contribute to Camino on OS X, and I knew that most appetite for embedding gecko in anything that’s not firefox died a while back, about the time Mozilla deprecated the embedding library, but I’d lost track of Epiphany. As an aside: I’m still sorry that Mozilla deprecated the embedding interface for gecko, and I wish I could find a way to make it practical to maintain that. Embedded Gecko was really nice to work with in its time.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around.
I strongly agree with this. I’d really like a non-blink thing to be an option for this. Not because there’s anything wrong with blink, but because that feels like a rug pull waiting to happen. I like that servo update, and hope that the momentum holds.
Wikipedia suggests the WebKit backend was added to Epiphany in 2007 and they removed the Gecko backend in 2009. Wow, time flies! GNOME Web is one I would like to try out more, if only because I enjoy GNOME and it seems to be a decent option for mobile Linux.
I have not encountered any website that doesn’t work on Firefox (one corporate app said it required Chrome for some undisclosed reason, but I changed the user agent and had no issue at all using their simple CRUD app).
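(For reference, the tweak was just the standard about:config override; a rough sketch below, and the Chrome UA string is only an illustrative value, use whatever current Chrome sends:)

```
# about:config - create the string pref if it doesn't exist, remove it when done
general.useragent.override = Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
```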
What kind of issues do you find?
I’ve wondered the same thing in these recent discussions. I’ve used Firefox exclusively at home for over 15 years, and I’ve used it at my different jobs as much as possible. While my last two employers had maybe one thing that only worked in IE or Chrome/Edge, everything else worked fine (and often better than my coworkers’ Chrome) in Firefox. At home, the last time I remember installing Chrome was to try some demo of Web MIDI before Firefox had support. That was probably five years ago, and I uninstalled Chrome after playing with the demo for a few minutes.
I had to install Chromium a couple of times in recent years to join meetings and podcast recordings done with software that uses Chrome-only APIs.
When that happens, I bless Flatpak: I install Chromium, then permanently delete it afterward without leaving any trace on my system.
If you are a heavy user of such web apps, I guess it makes sense to use Chrome as your main browser.
I can’t get launcher.keychron.com to work on LibreWolf but that’s pretty much it. I also have chrome just in case I’m too lazy to figure out what specifically is breaking a site
Thanks, yeah, that’s it. I knew it was some specific thing that wasn’t supported I just couldn’t remember and was writing that previous comment on my phone so I was too lazy to check. But yeah, it’s literally the only site I could think of that doesn’t work on Firefox (for me).
It’s pretty rare to be fair, so much so that I don’t have an example off the top of my head. I know, classic internet comment un-cited source bullshit, sorry. It was probably awful gov or company intranet pages over the years.
Some intensive browser based games run noticeably better on Chrome too, but I know this isn’t exactly a common use case for browsers that others care about.
For some reason, trying to log in to the CRA (Canadian equivalent of the IRS) always fails for me with firefox and I need to use chrome to pay my taxes.
I run into small stuff fairly regularly. Visual glitches are common. Every once in a while, I’ll run into a site that won’t let me login. (Redirects fail, can’t solve a CAPTCHA, etc.)
Some google workspace features at least used to be annoying enough that I just devote a chrome profile to running those workspace apps. I haven’t retried them in Firefox recently because I kind of feel that it’s google’s just deserts that they get a profile on me that has nothing but their own properties, while I use other browsers for the real web.
I should start keeping a list of specific sites. Because I do care about this, but usually when it comes up I’m trying to get something done quickly and a work-around like “use chrome for that site” carries the day, then I forget to return to it and dig into why it was broken.
If you maintain a reasonably popular C++ library and it doesn’t use CMake as its build system, sooner or later someone will come and demand that you add CMake support (risking permanent brain damage in the process) to make it easier to consume your library in their CMake-based project. Happened to me multiple times.
CMake can find libraries through pkg-config or you can write your own finder. See files in /usr/share/cmake-*/Modules/ for inspiration. No need to build your library using CMake.
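For instance, a minimal sketch of consuming a non-CMake library from a CMake project via pkg-config (the foo package name, app target, and main.c are placeholders, assuming the library ships a foo.pc file):

```cmake
# Consumer-side CMakeLists.txt; the library itself needs no CMake support.
find_package(PkgConfig REQUIRED)
pkg_check_modules(FOO REQUIRED IMPORTED_TARGET foo)

add_executable(app main.c)
# PkgConfig::FOO carries the include dirs and link flags read from foo.pc.
target_link_libraries(app PRIVATE PkgConfig::FOO)
```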
I’ve sent PRs porting projects from pkgconf to CMake more than a few times, because pkgconf is at best fragile and at worst barely works at all, for instance in the MSVC universe. With CMake I know it will work and I can do my job.
Last time I tried pkg-config on Windows, it didn’t work very well. That’s been a while.
you can write your own finder. See files in /usr/share/cmake-*/Modules/ for inspiration.
You can also accept a PR for a finder… which is what I’d offer to do if I were maintaining a C++ library that doesn’t use CMake. That said, I personally find the brain damage that CMake induces very compatible with the wear and tear imposed by C++, so when I do work in C++, I usually do use CMake for my builds. It’s the worst C++ build system, to be sure, except that all the others are even worse.
It’s the worst C++ build system, to be sure, except that all the others are even worse.
The insidious thing about CMake-induced brain damage is that it attacks those parts of your brain that you need to recognize a better build system. How else could one possibly explain otherwise sane people professing love to CMake?
My above comment is of course a joke, but there is some truth to it: in CMake, people work by copying and pasting illogical, magic incantations until something sticks. There is no understanding of the underlying build model. Which is not really surprising, since whatever model one may argue CMake has is obfuscated by the meta-build-system layering and by hacking around the underlying build systems’ limitations (like having to generate source code during the configuration phase).
Then, when they try a supposedly better build system, they adopt the same approach: don’t bother trying to understand, just throw things against the wall until something sticks. I see this constantly, where smart, experienced C++ developers adopt this attitude when trying build2. But C++ builds are too complex for this approach to work for anything other than toy examples.
Yes, C++ is a complex language and some features are outright broken or don’t compose well. But you can pick a sensible subset, and there will be logic and a model, and you can go read the standard and it makes sense. CMake, IMO, is the absolute worst part of it. A new language (C++ post-C++11) lost to CMake, truly.
I’m a pretty strong CMake proponent because it’s the only thing that works at the scale I need: building one codebase with every potential toolchain and compiler, having only one set of commands for targeting everything from Windows/MSVC to iOS to FreeBSD to Emscripten to ESP32s, etc.
Alternatives never work that well. For instance, Meson is a PITA as soon as you want to use Windows, fully hermetic build systems just aren’t compatible with what making packages for Linux distros requires, etc.
The language certainly is terrible, but it solves actual problems, such as not having to download 7z.exe (or the Mac or Linux version) from who knows where in your CI scripts depending on the platform you’re running on, because CMake supports cross-platform archive extraction with one single command.
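For what it’s worth, that looks roughly like this (the paths are placeholders):

```cmake
# Same command on Windows, macOS, and Linux; no external 7z/tar needed.
# CMake 3.18+:
file(ARCHIVE_EXTRACT
     INPUT "${CMAKE_CURRENT_BINARY_DIR}/deps.tar.gz"
     DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/deps")
# Older CMake, e.g. from a CI step: cmake -E tar xzf deps.tar.gz
```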
My father worked for Guinness for about 25 years. When I was growing up we had prints of the John Ireland calendar “the gentle art of making Guinness”, a splendid series of cartoons in the tradition of Heath Robinson or Rube Goldberg. Guinness advertising art was great.
But, it’s a mass-produced factory beer. I occasionally like a stout or other dark beer, but Guinness is boring.
Guinness was relatively early in the use of statistical quality control over large scale biochemical processes – that is where Student’s t-distribution was discovered.
I visited a brewery for one of the top 5 beer producers worldwide and the effort and care going into producing a consistent, safe product is impressive. The fact that the product itself is rather bland and boring is incidental :D
I like boring beer. Incoming long defense of Guinness:
I didn’t always use to be like this; I used to like hoppy IPAs. But as I’ve gotten older, my desire to drink beers above 5% has diminished so thoroughly that I can count the number of times I drink one per year on one hand.
I certainly would like to drink more complex stouts, but there are barely any brewed in America with the same ABV. For example, I went to my favorite local brewery’s website, and every stout was prefixed with “imperial”: https://grimmales.com/menu/
I can’t drink these! They taste like syrup and instantly give me a headache.
Frankly, my go to beer these days is Asahi. I’m tired of complexity. Beer is less for me about complex flavors and more about refreshment and the desire to relax. In the cases when I want something more complex, I go for a cocktail. I make myself a Negroni or a Campari soda (depending on which side of refreshment and flavor I want).
Anyway. I love Guinness. It tastes good (that is to say, it doesn’t taste like piss water), has a low ABV, and is served in basically every bar in Manhattan and Brooklyn. It fits my needs.
Thanks for saying that. I like it fine if it’s what’s available, but it tastes watered down to me compared to other stouts I’ve grown accustomed to. I’ll take it over something lighter, but it’s fairly plain.
If you’re in the US, be aware that the Guinness product sold as Extra Stout 20 years ago is now known as Foreign Extra Stout. Today’s Extra Stout is watered down in comparison (and undoubtedly cheaper to produce).
As bait-and-switches go, this is mild compared to Newcastle Ale…
I’m with you but would add that it genuinely does taste better in Ireland! To the point that I know some Irish people who will not drink Guinness abroad.
The story goes that this is due to the water but I suspect the truth is that Guinness have a lot of control over how it’s stored and served in pubs (temperature etc). Whether that should matter is an exercise left to personal taste.
I had an Irish colleague in France who spun the theory that French Guinness is a lot less bitter, because locals don’t like it.
He refused to drink it.
My personal take on Guinness: I rarely drink, and then I only rarely drink Guinness, so I enjoy it as an easy stout that comes with an expected taste. Sometimes, that’s just what I want.
(Fun fact about me: I do, however, have a taste for alcohol; my first job was sysadmin at a vineyard.)
When I visited the Guinness Storehouse in Dublin the advertising floor was definitely the most interesting part. The rest was over the top displays or sections for social media tourism. I have to agree about it being boring, although I’ll often order it if there’s no other stout or porters served.
Also, their book of world records was endlessly entertaining when I was in primary school.
Many years after I tried Guinness (which I still occasionally enjoy, because it is boring in quite a pleasant way) I learned that the thing I really liked about it was that it was always a nitro pour. Seeking out interesting beers (mostly, but not all stouts) served on nitro taps has been fun.
I’m a big fan of nitro coffee! In the before times, I worked once or twice a week in an office that had a nitro cold brew tap in the kitchen, and that was enough to make me look forward to those office days. Come to think of it, everyone (of those who didn’t dislike coffee in general) really loved that perk.
My MacBook Pro is nagging me to upgrade to the new OS release. It lists a bunch of new features that I don’t care about. In the meantime, the following bugs (which are regressions) have gone unfixed for multiple major OS versions:
When a PDF changes, Preview reloads it. It remembers the page you were on (it shows it in the page box) but doesn’t jump there. If you enter the page in the page box, it doesn’t move there because it thinks you’re there already. This worked correctly for over a decade and then broke.
The calendar service fails to sync with a CalDAV server if you have groups in your contacts. This stopped working five or so years ago, I think.
Reconnecting an external monitor used to be reliable and move all windows that were there last time it was connected back there. Now it works occasionally.
There are a lot of others, these are the first that come to mind. My favourite OS X release was 10.6: no new user-visible features, just a load of bug fixes and infrastructure improvements (this one introduced libdispatch, for example).
It’s disheartening to see core functionality in an “abandonware” state while Apple pushes new features nobody asked for. Things that should be rock-solid, just… aren’t.
It really makes you understand why some people avoid updates entirely. Snow Leopard’s focus on refinement feels like a distant memory now.
The idea of Apple OS features as abandonware is wild, and yet here we are. The external monitor issue is actually terrible. I have two friends who work at Apple (neither in OS dev) and both have said that they experience the monitor issue themselves.
I was thinking about this not too long ago; there are macOS features (e.g. the widgets UI) that don’t seem to even exist anymore. So many examples of features I used to really like that are just abandoned.
Reconnecting an external monitor used to be reliable and move all windows that were there last time it was connected back there. Now it works occasionally.
This works flawlessly for me every single time; I use an Apple Studio Display at home and a high-end Dell at the office.
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
I can vouch for that, though not with my Apple account but with my brother’s. Coincidentally, he had fewer problems activating iMessage/FaceTime on a Hackintosh machine.
A variation on that which I’ve run into is turning the monitor off and putting the laptop to sleep, then waking it without moving or disconnecting anything.
To avoid all windows ending up stuck on the laptop display, I have to sleep the laptop, then power off the monitor. To restore, I power on the monitor, then wake the laptop. Occasionally (1 in 10 times?) it still messes up and I have to manually move windows back to the monitor display.
(This is when using dual-head mode with both the external monitor and laptop display in operation)
iCloud message sync, with “keep messages” set to forever, seems to load so much that on my last laptop it was awful to type long messages (more than one sentence) directly into the text box; I started writing messages outside the application, then copy/pasting and sending. The delay was on the order of seconds for me.
I’m really heartened by how many people agree that OS X 10.6 was the best.
Edited to add … hm - maybe you’re not saying it was the best OS version, just the best release strategy? I think it actually was the best OS version (or maybe 10.7 was, but that’s just a detail).
It was before Apple started wanting to make it more iPhone-like, slowly doing what Microsoft did with Windows 8 (which did it in a ‘big bang’) by making Windows Phone and Windows desktop almost indistinguishable. After Snow Leopard, Apple became a phone company and very iPhone-centric and just didn’t bother with the desktop - it became cartoonish and all flashy, not usable. That’s when I left macOS and haven’t looked back.
Recently, Disk Utility has started showing a permissions error when I click unmount or eject on SD cards or their partitions, if the card was inserted after Disk Utility started. You have to quit and re-open Disk Utility for it to work. It didn’t use to be like that, but it is now, on two different Macs. This is very annoying for embedded development, where you need to write to SD cards frequently to flash new images or installers. So unmounting/ejecting drives just randomly broke one day and I’m expecting it won’t get fixed.
Another forever-bug: on a higher refresh rate screen, the animation to switch workspaces takes more time. This has forced me to completely change how I use macOS to de-emphasise workspaces, because the animation is just obscenely long after I got a MacBook Pro with a 120Hz screen in 2021. Probably not a new bug, but an old bug that new hardware surfaced, and I expect it will never get fixed.
I’m also having issues with connecting to external screens only working occasionally, at least through USB-C docks.
The hardware is so damn good. I wish anyone high up at Apple cared at all about making the software good too.
Oh, there’s another one: the fstab entries to not mount partitions that match a particular UUID no longer work, and there doesn’t appear to be any replacement functionality (which is annoying when it’s a firmware partition that must not be written to except in a specific way, or it will soft-brick the device).
Oh, fun! I’ve tried to find a way to disable auto-mounting, and the only solution I’ve found is to add individual partition UUIDs to a block list in fstab, which is useless to me since I don’t just re-use the same SD card with the same partition layout all the time; I would want to disable auto-mounting completely. But it’s phenomenal to hear that they broke even that sub-par solution.
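For anyone searching later, the per-partition trick looked roughly like this (edited with vifs; the UUID and filesystem type are placeholders, the real values come from diskutil info):

```
# /etc/fstab - ask macOS not to auto-mount one specific partition
UUID=01234567-89AB-CDEF-0123-456789ABCDEF none msdos ro,noauto
```

As noted above, though, recent macOS versions appear to ignore this, and there was never a “never auto-mount anything” switch to begin with.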
Maybe, but we’re talking about roughly 1.2 seconds from the start of the gesture until keyboard input starts going to an app on the target workspace. That’s an insane amount of delay to just force the user to sit through on a regular basis… On a 60Hz screen, the delay is less than half that (which is still pretty long, but much much better)
Not a fix, but as a workaround have you tried Accessibility > Display > Reduce Motion?
I can’t stand the normal desktop switch animation even when dialed down all the way. With that setting on, there’s still a very minor fade-type effect but it’s pretty tolerable.
Sadly, that doesn’t help at all. My issue isn’t with the animation, but with the amount of time it takes from I express my intent to switch workspace until focus switches to the new workspace. “Reduce Motion” only replaces the 1.2 second sliding animation with a 1.2 second fading animation, the wait is exactly the same.
Don’t update/downgrade to Sequoia! It’s the Windows ME of macOS releases. After the Apple support person couldn’t resolve any of the issues I had, they told me to reinstall Sequoia and then gave me instructions to upgrade to Ventura/Sonoma.
I thought Big Sur was the Windows ME of (modern) Mac OS. I have had a decent experience in Sequoia. I usually have Safari, Firefox, Chrome, Mail, Ghostty, one JetBrains thing or another (usually PyCharm Pro or Clion), Excel, Bitwarden, Preview, Fluor, Rectangle, TailScale, CleanShot, Fantastical, Ice and Choosy running pretty much constantly, plus a rotating cast of other things as I need them.
Aside from Apple Intelligence being hot garbage (I just turn that off anyway), my main complaint about Sequoia is that sometimes, after a couple dozen dock/undock cycles (return to my desk, connect to my docking station with a 30” non-HiDPI monitor, document scanner, Time Machine drive, smart card reader, etc.), the windows that were on my MacBook’s high-resolution screen and moved to my 30” when docked don’t re-scale appropriately, and I have to reboot to address that. That seems to happen every two weeks or so.
Like so many others here, I miss Snow Leopard. I thought Tiger was an excellent release, Leopard was rough, and Snow Leopard smoothed off all the rough edges of Tiger and Leopard for me.
I’d call Sequoia “subpar” if Snow Leopard is your “par”. But I don’t find that to be the case compared to Windows 11, KDE or GNOME. It mostly just stays out of my way.
Apple’s bug reporting process is so opaque it feels like shouting into the void.
And, Apple isn’t some little open source project staffed by volunteers. It’s the richest company on earth. QA is a serious job that Apple should be paying people for.
Apple’s bug reporting process is so opaque it feels like shouting into the void.
Yeah. To alleviate that somewhat (for developer-type bugs) when I was making things for Macs and iDevices most of the time, I always reported my bugs to openradar as well:
which would at least net me a little bit of feedback (along the lines of “broken for everyone or just me?”) so it felt a tiny bit less like shouting into the void.
I can’t remember on these. The CalDAV one is well known. Most of the time when I’ve reported bugs to Apple, they’ve closed them as duplicates and given no way of tracking the original bug.
No. I tried being a good user in the past but it always ended up with “the feature works as expected”. I won’t do voluntary work for a company which repeatedly shits on user feedback.
10.6 “Snow Leopard” was the last Mac OS that I could honestly say I liked. I ran it on a cheap mini laptop (a Dell I think) as a student, back when “hackintoshes” were still possible.
ASN.1 has a gigantic specification. It looks like it was written in a totally different era. (I would say the “pre-GitHub” era.) It looks like it was written in an era when programming had nothing in common with fun. It looks like it was written by managers, not by programmers, and especially not by programmers who love their work. This spec scared me away completely, and I will never consider ASN.1 for my projects.
Yeah I mean it certainly was written in a different era (the 80s, I believe). I quite like it though. And DER (one of the main encodings for ASN.1 messages) is actually very simple FWIW. But sure I mean I doubt anyone’s gonna come along and pressure you to use it in your projects (though you may already be using it without realising if you’re using GSM, PKCS, Kerberos, etc). I for one find it quite satisfying to use the same serialisation/deserialisation paradigm being utilised in other parts of the stack but that’s just me haha
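To make that concrete, here’s a toy example (not from any real protocol) of how DER is just nested tag-length-value triples:

```
Toy ASN.1 definition:            DER encoding of { age 5, name "hi" }:
  Person ::= SEQUENCE {            30 07          -- SEQUENCE, 7 content bytes
      age   INTEGER,                 02 01 05     -- INTEGER, length 1, value 5
      name  UTF8String               0C 02 68 69  -- UTF8String, length 2, "hi"
  }
```

That’s the core of the encoding; most of the spec’s bulk is the schema language itself and the other encoding rules (BER, PER, XER, …).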
It was written by telecoms companies in the early 80’s, before TCP/IP created the internet and made most other networking technologies obsolete. Not just the pre-Github era, but the pre-PC and pre-internet era. Lots of stuff from the 50’s and 60’s looks like that; to some extent it wasn’t until the late 60’s and 70’s that “programmers who love their work” became a powerful technical force outside of universities. You aren’t allowed to have fun if you work for a government or giant company on machinery that costs more than your entire life does.
It’s been a long time since I’ve looked at ASN.1 outside of X.509 but you inspired me to go look. I’m pretty surprised to see that there really isn’t much that Protobufs do that ASN.1/DER does not. Varints being an easy example of something that Protobufs do but… wow are they similar!
Yeah they really do cover a lot of the same ground! But then with ASN.1 it’s so much more standardised, I feel like it’s massively underrated. I also find ASN.1’s ability to be encoded in a bunch of different ways depending on what’s appropriate for the situation-at-hand super cool, and it’s something that Protobuf doesn’t really do :)
Yeah I’m using Erlang/OTP’s built-in ASN.1 compiler, which is very nice. On the client side (a Vala/GTK app) I’m currently just deserialising directly from the DER right now, but I’ll probably switch to using a proper ASN.1 compiler at some point (or maybe I’ll make my own limited one for Vala, who doesn’t love a good yak shave). I’ve used asn1c and asn1scc before too, both have worked well for me.
Side thread: personally I don’t feel like GitHub stars are a good metric for the popularity of a project. What do you think?
I don’t have a better way to estimate project popularity; I’m just saying that GitHub stars seem not useful to me. In about 16 years of using GitHub I have starred fewer than 30 projects, but I’ve probably used ten times as many GitHub projects (probably many more). Looks like I just don’t star projects :-) .
And there might actually be a bias in the star counts, in that some projects attract users that are more likely to hand out stars.
What makes you give a star to a Github project? Do you give stars for any project that sounds interesting, or any project that you use, or any project that you feel exceptionally thankful for?
Agreed, they are pretty useless as a metric for anything. I think they mostly measure “how much reach on social media/reddit/HN/… has this ever gotten” in many cases, and that’s not informative of anything. (I personally star a lot, but really treat it as a bookmark along the lines of “looks vaguely interesting from a brief skim”; it’s not an endorsement in any way.)
I’m pretty sure I’ve never starred a project on GitHub, or at least I haven’t in the past decade, and I don’t know why anyone would! It’s an odd survival of GitHub’s launch era, when “everything has to be social” was the VC meme of the moment and “social open source” was GitHub’s play.
I don’t get why popularity is so important here. Isn’t it effectively an implementation detail of your language? Even if I’m misunderstanding that and it is not, isn’t the more important question “are there good implementations”, not “are the implementations more popular than the ones for the other thing”?
One huge use-case for my language is sending and receiving and storing programs, so yes, it’s an implementation detail, but it’s also a very important one that will be impossible to change later.
But you’re totally right – that is the main question. I’m still exploring the space of serialization candidates, and these two particularly stood out to me.
I mostly care about popularity because convincing people to trust/adopt technology is way harder than actually implementing it. Extending a trusted format seems less risky than other options
If you can FOIA a query, can you ask for ‘SHOW TABLES’ and then ‘SHOW CREATE TABLE’ or ‘DESCRIBE’ for each of them? It can’t be a file layout if it’s in the format it describes, that’s a contradiction, right..? Right??
Kinda bothered by the state supreme court interpretation, but mostly because I don’t think ANY “per-se” exception should be given to anything built with public money or to which the public owns the rights: source code, file layout, schema, whatever it is. Security and privacy should be the only exceptions given, and even then SOME form of access should be provided, unless the government can make a very strong case that the security implications outweigh the public’s right to know where the fuck their taxes are going.
Yeah, if we’re wishing for ponies, I too would like to require, once and for all, all software systems built with public money to make their source and documentation available. No idea who’s going to bankroll that lobbying effort, though. And you can imagine the entire industry of public-sector IT contractors who would fight it.
This case was fought over the interpretation of the security exception. As long as the law still thinks that security by obscurity is a reasonable and defensible practice, there’s a long way to go.
You might find this amusing… a while back, I was being paid with public money to build a thing. I was arguing for the thing I was building to be FOSS, and the manager on the government side pushed back. Their concern was that, apparently, if the source were released to the public with no restrictions, contractors could then sell things built off the software back to the government, claiming (and charging as if) they had done the full development themselves.
My team responded to that objection by suggesting that a copyleft license should address those concerns. And, shockingly, our response carried the day. We got our library released under LGPLv2.
I was happy with that because I would like to require all systems built with public money to be released to the public as FOSS. And I’m proud to report that it worked once, but I don’t want to personally work in that space anymore. It was exhausting.
Over the past 13 years, I have received notices from regulatory authorities whenever someone submitted content that triggered compliance concerns. Each time, my server would be taken offline, and I would be given a 7-day deadline to boot it into rescue mode, remove the offending content, and restore service. Missing the deadline would put my server hosting account at risk of cancellation.
@susam: I’m sorry you have to deal with content like that followed by a dance to make sure you react appropriately in order to avoid getting your account cancelled. I wish people could behave better.
Does Soar work on macOS? One of the most interesting things about Homebrew for Linux is that, if I make my software available that way, I can write instructions that are practically identical for Linux and Mac users of my software.
The screenshots in the readme look like they have a Mac skin on them, but I don’t see any indication of Mac support or lack thereof.
Thanks! The main things that made me think it might were the desire to replace homebrew on Linux and the screenshots.
Homebrew is perfect for macOS.
I’ve got to say, I’m not so attached to it. And I’ve been using it for a damn long time. I think a package manager informed by more recent tooling (tools like, say, uv) could offer a better experience on Mac and Linux.
Especially for people who are using it to help them write software.
The author replaced internet temperature control on a thing they purchased with local temperature control that worked in a way they preferred. In what universe would it ever be illegal to improve a thing that you purchased for your own use in your own home, in a way that doesn’t impact anyone else?
Heh, I don’t think that’s what he was asking about being legal. It seems to me he’s asking how the company’s insecurity is allowed.
It’s an interesting question. In medicine we have malpractice, and there’s something similar in the US for things like structural engineers, but our “engineering” isn’t really licensed, so there’s no real safety rating on things like this from a data leakage and connectivity standpoint…
The physical bed cooling/heating product would have had to pass a UL rating, I think, to be able to be sold in the US, but there’s no such rating to ensure it’s secure from a technical/data standpoint.
It’s probably too difficult to nail something like that down, especially as new exploits are being found all the time.
I completely misread you. I was cruising through the front page as I had my first coffee of the day, read an article about someone modifying their bed cooler to remove some anti-features, and absolutely interpreted the question as “How is this [modification] legal?” Sorry for misreading you! I think @aae nailed the question you were really asking, mostly.
The one thing I’d add to that answer is that, if the FTC were so inclined, they probably could exert enough pressure to at least require such an ssh backdoor to be disclosed up front, if not compel the manufacturer to remove it. I don’t believe any new hard-to-write regulation would even be required for that.
It kind of is, particularly given the very sensitive nature of the data. Some electronics companies I know also ship their devices with the option to connect to them via SSH - but in contrast to this example, SSH access is documented and off by default. They use it mostly to see what’s wrong if a customer calls in because something’s not working, and then ask the customer if they’d be willing to provide remote access for debugging purposes, which the customer then needs to enable themselves.
The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That’s why I’m wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)
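(To make the use-after-free point concrete, a trivial sketch: the analogous C compiles fine and fails at runtime, while Rust refuses to build it.)

```rust
fn main() {
    let s = String::from("kernel object");
    let r = &s;       // borrow the object
    drop(s);          // free it while the borrow is still live
    println!("{r}");  // error[E0505]: cannot move out of `s` because it is borrowed
}
```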
This is an extremely strong statement.
I think a few things are also interesting:
I think people are realizing how low quality the Linux kernel code is, how haphazard development is, how much burnout and misery is involved, etc.
I think people are realizing how insanely not in the open kernel dev is, how much is private conversations that a few are privy to, how much is politics, etc.
I think people are realizing how insanely not in the open kernel dev is, how much is private conversations that a few are privy to, how much is politics, etc.
The Hellwig/Ojeda part of the thread is just frustrating to read because it almost feels like pleading. “We went over this in private” “we discussed this already, why are you bringing it up again?” “Linus said (in private so there’s no record)”, etc., etc.
Dragging discussions out in front of an audience is a pretty decent tactic for dealing with obstinate maintainers. They don’t like to explain their shoddy reasoning in front of people, and would prefer it remain hidden. It isn’t the first tool in the toolbelt but at a certain point there is no convincing people directly.
Dragging discussions out in front of an audience is a pretty decent tactic for dealing with
With quite a few things, actually. A friend of mine is contributing to a non-profit, which until recently had a very toxic member (they even attempted a felony). That member was driven out of the non-profit very soon after members talked in a thread that was accessible to all members. Obscurity is often one key component of abuse, be it mere stubbornness or criminal behaviour. Shine light on it, and it often goes away.
IIRC Hintjens noted this quite explicitly as a tactic of bad actors in his works.
It’s amazing how quick people are to recognize folks trying to subvert an org piecemeal via one-off private conversations once everybody can compare notes. It’s equally amazing to see how much the same people beforehand will swear up and down that oh no, that’s a conspiracy theory, such things can’t happen here, until they’ve been burned at least once.
This is an active, unpatched attack vector in most communities.
I’ve found the lowest-stakes example of this is even meeting minutes at work. I’ve observed that people tend to act more collaboratively and seek the common good if there are public minutes, as opposed to trying to “privately” win people over to their desires.
I think people are realizing how low quality the Linux kernel code is, how haphazard development is, how much burnout and misery is involved, etc.
Something I’ve noticed is true in virtually everything I’ve looked deeply at is the majority of work is poor to mediocre and most people are not especially great at their jobs. So it wouldn’t surprise me if Linux is the same. (…and also wouldn’t surprise me if the wonderful Rust rewrite also ends up poor to mediocre.)
yet at the same time, another thing that astonishes me is how much stuff actually does get done and how well things manage to work anyway. And Linux also does a lot and works pretty well. Mediocre over the years can end up pretty good.
After tangentially following the kernel news, I think a lot of churning and death spiraling is happening. I would much rather have a rust-first kernel that isn’t crippled by the old guard of C developers reluctant to adopt new tech.
Take all of this energy into RedoxOS and let Linux stay in antiquity.
I’ve seen some of the R4L people talk on Mastodon, and they all seem to hate this argument.
They want to contribute to Linux because they use it, want to use it, and want to improve the lives of everyone who uses it. The fact that it’s out there and deployed and not a toy is a huge part of the reason why they want to improve it.
Hopping off into their own little projects which may or may not be useful to someone in 5-10 years’ time is not interesting to them. If it was, they’d already be working on Redox.
The most effective thing that could happen is for the Linux foundation, and Linus himself, to formally endorse and run a Rust-based kernel. They can adopt an existing one or make a concerted effort to replace large chunks of Linux’s C with Rust.
IMO the Linux project needs to figure out something pretty quickly because it seems to be bleeding maintainers and Linus isn’t getting any younger.
They (the Mastodon posters) may be misunderstanding the idea that others are not necessarily incentivized to do things just because it’s interesting to them.
Redox does carry the burden of trying to do new OS things. An ABI-compatible Rust rewrite of the Linux kernel might get further along than expected (even if it only ran in virtual contexts at first, with hardware support coming later).
Linux developers want to work on Linux, they don’t want to make a new OS. Linux is incredibly important, and companies already have Rust-only drivers for their hardware.
Basically, sure, a new OS project would be neat, but it’s really just completely off topic in the sense that it’s not a solution for Rust for Linux. Because the “Linux” part in that matters.
I read a 25+ year old article [1] from a former Netscape developer that I think applies in part:
The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?
Adopting a “rust-first” kernel is throwing the baby out with the bathwater. Linux has been beaten into submission for over 30 years for a reason. It’s the largest collaborative project in human history and over 30 million lines of code. Throwing it out and starting new would be an absolutely herculean effort that would likely take years, if it ever got off the ground.
The idea that old code is better than new code is patently absurd. Old code has stagnated. It was built using substandard, out of date methodologies. No one remembers what’s a bug and what’s a feature, and everyone is too scared to fix anything because of it. It doesn’t acquire new bugs because no one is willing to work on that weird ass bespoke shit you did with your C preprocessor. Au contraire, baby! Is software supposed to never learn? Are we never to adopt new tools? Can we never look at something we’ve built in an old way and wonder if new methodologies would produce something better?
This is what it looks like to say nothing, to beg the question. Numerous empirical claims, where is the justification?
It’s also self defeating on its face. I take an old codebase, I fix a bug, the codebase is now new. Which one is better?
Like most things in life, the truth is somewhere in the middle. There is a reason there is the concept of a “mature node” in the semiconductor industry. They accept that new is needed for each node, but also that the new thing takes time to iron out the kinks and bugs. This is the primary reason you see Apple take new nodes on first, before Nvidia, for example: Nvidia requires much larger die sizes, and therefore fewer defects per square mm.
You can see this sometimes in software, for example X11 vs Wayland, where adoption is slow but most definitely progressing, and nowadays most people can see that Wayland is now, or is going to become, the dominant tech in the space.
I don’t think this would qualify as dialectic, it lacks any internal debate and it leans heavily on appeals by analogy and intuition/ emotion. The post itself makes a ton of empirical claims without justification even beyond the quoted bit.
That means we can probably keep a lot of the old trusty Linux code around while making more of the new code safe by writing it in Rust in the first place.
I don’t think that’s a fair assessment of Spolsky’s argument or of CursedSilicon’s application of it to the Linux kernel.
Firstly, someone has already pointed out the research that suggests that existing code has fewer bugs than new code (and that the older code is, the less likely it is to be buggy).
Secondly, this discussion is mainly around entire codebases, not just existing code. Codebases usually have an entire infrastructure around them for verifying that the behaviour of the codebase has not changed. This is often made up of tests, but it’s also made up of the users who try out a release of a codebase and determine whether it’s working for them. The difference between making a change to an existing codebase and releasing a new project largely comes down to whether this verification (both in terms of automated tests and in terms of users’ ability to use the new release) works for the new code.
Given this difference, if I want to (say) write a new OS completely in Rust, I need to choose: Do I want to make it completely compatible with Linux, and therefore take on the significant challenge of making sure everything behaves truly the same? Or do I make significant breaking changes, write my own OS, and therefore force potential adopters to rebuild their entire Linux workflows in my new OS?
The point is not that either of these options are bad, it is that they represent significant risks to a project. Added to the general risk that is writing new code, this produces a total level of risk that might be considered the baseline risk of doing a rewrite. Now risk is not bad per se! If the benefits of being able to write an OS in a language like Rust outweigh the potential risks, then it still makes sense to perform the rewrite. Or maybe the existing Linux kernel is so difficult to maintain that a new codebase really would be the better option. But the point that CursedSilicon was making by linking the Spolsky piece was, I believe, that the risks for a project like the Linux kernel are very high. There is a lot of existing, old code. And there is a very large ecosystem where either breaking or maintaining compatibility would each come with significant challenges.
Unfortunately, it’s very difficult to measure the risks and benefits here in a quantitative, comparable way, so I think where you fall on the “rewrite vs continuity” spectrum will depend mostly on what sort of examples you’ve seen, and how close you think this case is to those examples. I don’t think there’s any objective way to say whether it makes more sense to have something like R4L, or something like RedoxOS.
Firstly, someone has already pointed out the research that suggests that existing code has fewer bugs than new code (and that the older code is, the less likely it is to be buggy).
I haven’t read it yet, but I haven’t made an argument about that, I just created a parody of the argument as presented. I’ll be candid: I doubt the research is going to compel me to believe that newer code is inherently buggier; it may compel me to confirm my existing belief that testing software in the field is one good method to find some classes of bugs.
Secondly, this discussion is mainly around entire codebases, not just existing code.
I guess so; it’s a bit dependent on where we say the discussion starts. Three things are relevant: RFL, which is not a wholesale rewrite; a wholesale rewrite of the Linux kernel; and Netscape. RFL is not about replacing the entire Linux kernel, although perhaps “codebase” here refers to some sort of unit, like a driver. Netscape wanted a wholesale rewrite, based on the linked post, so perhaps that’s what’s really “the single worst strategic mistake that any software company can make”, but I wonder where the boundary is? Also, the article immediately mentions that Microsoft tried to do this with Word but it failed, and that Word didn’t suffer from it because it was still actively developed - I wonder if it really “failed” just because Pyramid didn’t become the new Word? Did Microsoft have some lessons learned, or incorporate some of that code? Dunno.
I think I’m really entirely justified when I say that the post is entirely emotional/ intuitive appeals, rhetoric, and that it makes empirical claims without justification.
There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:
This is rhetoric. These are unsubstantiated empirical claims. The article is all of this. It’s fine as an interesting, thought provoking read that gets to the root of our intuitions, but I think anyone can dismiss it pretty easily since it doesn’t really provide much in the form of an argument.
It’s important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time.
Again, totally unsubstantiated. I have MANY reasons to believe that, it is simply question begging to say otherwise.
That’s all this post is: over and over again making empirical claims with no evidence and begging the question.
We can discuss the risks and benefits, I’d advocate for that. This article posted doesn’t advocate for that. It’s rhetoric.
existing code has fewer bugs than new code (and that the older code is, the less likely it is to be buggy).
This is a truism. It is survival bias. If the code was buggy, the bugs would eventually be found and fixed. So, all things being equal, newer code is riskier than old code. But it’s also been empirically shown that using Rust for new code is not “all things being equal”. Google showed that new code in Rust is as reliable as old code in C. Which is good news: you can use old C code from new Rust projects without the risk that comes from new C code.
But it’s also been empirically shown that using Rust for new code is not “all things being equal”.
Yeah, this is what I’ve been saying (not sure if you’d meant to respond to me or the parent, since we agree) - the issue isn’t “new” vs “old” it’s things like “reviewed vs unreviewed” or “released vs unreleased” or “tested well vs not tested well” or “class of bugs is trivial to express vs class of bugs is difficult to express” etc.
I don’t disagree that the rewards can outweigh the risks, and in this case I think there’s a lot of evidence that suggests that memory safety as a default is really important for all sorts of reasons. Let alone the many other PL developments that make Rust a much more suitable language to develop in than C.
It’s a Ship of Theseus: at no point can you call it a “new” codebase, but after a period of time, it could be completely different code. I have a C program I’ve been using and modifying for 25 years. At any given point, it would have been hard to say “this is now a new codebase”, yet not one line of code in the project is the same as when I started (even though it does the same thing as it always has).
I don’t see the point in your question. It’s going to depend on the codebase, and on the nature of the changes; it’s going to be nuanced, and subjective at least to some degree. But the fact that it’s prone to subjectivity doesn’t mean that you get to call an old codebase with a single fixed bug a new codebase, without some heavy qualification which was lacking.
What’s old and new is poorly defined and yet there’s an argument being made that “old” and “new” are good indicators of something. If they’re so poorly defined that we have to bring in all sorts of additional context like the nature of the changes, not just when they happened or the number of lines changed, etc, then it seems to me that we would be just as well served to throw away the “old” and “new” and focus on that context.
I feel like enough people would agree more-or-less on what was an “old” or “new” codebase (i.e. they would agree given particular context) that they remain useful terms in a discussion. The general context used here is apparent (at least to me) given by the discussion so far: an older codebase has been around for a while, has been maintained, has had kinks ironed out.
There’s a really important distinction here though. The point is to argue that new projects will be less stable than old ones, but you’re intuitively (and correctly) bringing in far more important context - maintenance, testing, battle testing, etc. If a new implementation has a higher degree of those properties then it being “new” stops being relevant.
It’s also self defeating on its face. I take an old codebase, I fix a bug, the codebase is now new. Which one is better?
My point was that this statement requires a definition of “new codebase” that nobody would agree with, at least in the context of the discussion we’re in. Maybe you are attacking the base proposition without applying the surrounding context, which might be valid if this were a formal argument and not a free-for-all discussion.
If a new implementation has a higher degree of those properties
I think that it would be considered no longer new if it had had significant battle-testing, for example.
FWIW the important thing in my view is that every new codebase is a potential old codebase (given time and care), and a rewrite necessarily involves a step backwards. The question should probably not be, which is immediately better?, but, which is better in the longer term (and by how much)? However your point that “new codebase” is not automatically worse is certainly valid. There are other factors than age and “time in the field” that determine quality.
Methodologies don’t matter for the quality of code. They could be useful for estimates, cost control, figuring out whom you shall fire, etc. But not for the quality of code.
I’ve never observed a programmer become better or worse by switching methodology. Dijkstra wouldn’t have become better if you made him do daily standups or go through code reviews.
There are ways to improve your programming by choosing a different approach, but these are very individual. Methodology is mostly a beancounting tool.
When I say “methodology” I’m speaking very broadly - simply “the approach one takes”. This isn’t necessarily saying that any methodology is better than any other. The way I approach a task today is better, I think, than the way that I would have approached that task a decade ago - my methodology has changed, the way I think has changed. Perhaps that might mean I write more tests, or I test earlier, but it may mean exactly the opposite, and my methods may only work best for me.
I’m not advocating for “process” or ubiquity, only that the approach one takes may improve over time, which I suspect we would agree on.
It’s the largest collaborative project in human history, at over 30 million lines of code.
How many of those lines are part of the core? My understanding was that the overwhelming majority was driver code. There may not be that much core subsystem code to rewrite.
For a previous project, we included a minimal Linux build. It was around 300 KLoC, which included networking and the storage stack, along with virtio drivers.
That’s around the size a single person could manage and quite easy with a motivated team.
If you started with DPDK and SPDK then you’d already have filesystems and a copy of the FreeBSD network stack to run in isolated environments.
Once many drivers share common Rust wrappers over core subsystems, you could flip it and write the subsystem in Rust, then expose a C interface for the rest.
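As a rough illustration of that “flip” (all names here are invented, not taken from any real kernel code): the subsystem logic lives in Rust, and a thin `#[no_mangle] extern "C"` layer is what the remaining C drivers would link against.

```rust
// Hypothetical sketch (all names invented): a subsystem implemented in Rust
// that exposes a plain C ABI for drivers that are still written in C.

/// Opaque to the C side; the Rust side owns the real state.
pub struct SubsysHandle {
    requests_handled: u64,
}

/// Exported with an unmangled name so C code can declare it as:
///   struct SubsysHandle *subsys_create(void);
#[no_mangle]
pub extern "C" fn subsys_create() -> *mut SubsysHandle {
    // Box the state so ownership can cross the FFI boundary as a raw pointer.
    Box::into_raw(Box::new(SubsysHandle { requests_handled: 0 }))
}

/// int subsys_submit(struct SubsysHandle *h);
#[no_mangle]
pub extern "C" fn subsys_submit(h: *mut SubsysHandle) -> i32 {
    // SAFETY: the C caller must pass a pointer obtained from subsys_create.
    let Some(handle) = (unsafe { h.as_mut() }) else {
        return -22; // roughly -EINVAL, purely for illustration
    };
    handle.requests_handled += 1;
    0
}

/// void subsys_destroy(struct SubsysHandle *h);
#[no_mangle]
pub extern "C" fn subsys_destroy(h: *mut SubsysHandle) {
    if !h.is_null() {
        // SAFETY: reclaims the Box allocated in subsys_create.
        unsafe { drop(Box::from_raw(h)) };
    }
}
```

The unported C drivers keep calling what looks to them like an ordinary C API, while the ownership and invariants live on the Rust side.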
I see that Drew proposes a new OS in that linked article, but I think a better proposal in the same vein is a fork. You get to keep Linux, but you can start porting logic to Rust unimpeded, and it’s a manageable amount of work to keep porting upstream changes.
Remember when libav forked from ffmpeg? Michael Niedermayer single-handedly ported every single libav commit back into ffmpeg, and eventually, ffmpeg won.
At first there will be an extremely high C percentage and a low Rust percentage, so porting is trivial: just git merge and there will be no conflicts. As the fork ports more and more C code to Rust, however, you start having to do porting work by inspecting the C code and determining whether the fixes apply to the corresponding Rust code. At that point, though, you should start seeing productivity gains, community gains, and feature gains from using a better language than C, and the community growth should be able to keep up with the extra porting work required. This is when distros will start sniffing around, at first offering variants of the distro that use the forked kernel, and if they like what they taste, they might even drop the original.
I genuinely think it’s a strong idea, given the momentum and the potential amount of labor the Rust community has at its disposal.
I think the competition would be great, especially in the domain of making it more contributor friendly to improve the kernel(s) that we use daily.
I certainly don’t think this is impossible, for sure. But the point ultimately still stands: Linux kernel devs don’t want a fork. They want Linux. These folks aren’t interested in competing, they’re interested in making the project they work on better. We’ll see if some others choose the fork route, but it’s still ultimately not the point of this project.
Linux developers want to work on Linux, they don’t want to make a new OS.
While I don’t personally want to make a new OS, I’m not sure I actually want to work on Linux. Most of the time I strive for portability, and so abstract myself from the OS whenever I can get away with it. And when I can’t, I have to say Linux’s API isn’t always that great, compared to what the BSDs have to offer (epoll vs kqueue comes to mind). Most annoying though is the lack of documentation for the less used APIs: I’ve recently worked with Netlink sockets, and for the proc stuff so far the best documentation I found was the freaking source code of a third party monitoring program.
I was shocked. Complete documentation of the public API is the minimum bar for a project as serious as the Linux kernel. I can live with an API I don’t like, but lack of documentation is a deal breaker.
While I don’t personally want to make a new OS, I’m not sure I actually want to work on Linux.
I think they mean that Linux kernel devs want to work on the Linux kernel. Most (all?) R4L devs are long time Linux kernel devs. Though, maybe some of the people resigning over LKML toxicity will go work on Redox or something…
Re-implementing the kernel ABI would be a ton of work for little gain if all they wanted was to upstream all the work on new hardware drivers that is already done - and then eventually start re-implementing bits that need to be revised anyway.
If the singular required Rust toolchain didn’t feel like such a ridiculous-to-bootstrap, 500-ton LLVM clown car, I would agree with this statement without reservation.
Zig is easier to implement (and I personally like it as a language) but doesn’t have the same safety guarantees and strong type system that Rust does. It’s a give and take. I actually really like Rust and would like to see a proliferation of toolchain options, such as what’s in progress in GCC land. Overall, it would just be really nice to have an easily bootstrapped toolchain that a normal person can compile from scratch locally, although I don’t think it necessarily needs to be the default, or that using LLVM generally is an issue. However, it might be possible that no matter how you architect it, Rust might just be complicated enough that any sufficiently useful toolchain for the language could just end up being a 500 ton clown car of some kind anyways.
Depends on which parts of GP’s statement you care about: LLVM or bootstrap. Zig still depends on LLVM (for now), but it is no longer bootstrappable in a limited number of steps (because they switched from a bootstrap C++ implementation of the compiler to keeping a compressed WASM build of the compiler as a blob).
Yep, although I would also add it’s unfair to judge Zig in any case on this matter now given it’s such a young project that clearly is going to evolve a lot before the dust begins to settle (Rust is also young, but not nearly as young as Zig). In ten to twenty years, so long as we’re all still typing away on our keyboards, we might have a dozen Zig 1.0 and a half dozen Zig 2.0 implementations!
Yeah, the absurdly low code quality and toxic environment make me think that Linux is ripe for disruption. Not like anyone can produce a production kernel overnight, but maybe a few years of sustained work might see a functional, production-ready Rust kernel for some niche applications and from there it could be expanded gradually. While it would have a lot of catching up to do with respect to Linux, I would expect it to mature much faster because of Rust, because of a lack of cruft/backwards-compatibility promises, and most importantly because it could avoid the pointless drama and toxicity that burn people out and prevent people from contributing in the first place.
From the thread in OP, if you expand the messages, there is wide agreement among the maintainers that all sorts of really badly designed and almost impossible to use (safely) APIs ended up in the kernel over the years because the developers were inexperienced and kind of learning kernel development as they went. In retrospect they would have designed many of the APIs very differently.
It’s based on my forays into the Linux kernel source code. I don’t doubt there’s some quality code lurking around somewhere, but the stuff I’ve come across (largely filesystem and filesystem adjacent) is baffling.
Seeing how many people are confidently incorrect about Linux maintainers only caring about their job security and keeping code bad to make it a barrier to entry, if nothing else taught me how online discussions are a huge game of Chinese whispers where most participants don’t have a clue of what they are talking about.
I doubt that maintainers are “only caring about their job security and keeping back code” but with all due respect: You’re also just taking arguments out of thin air right now. What I do believe is what we have seen: Pretty toxic responses from some people and a whole lot of issues trying to move forward.
Seeing how many people are confidently incorrect about Linux maintainers only caring about their job security and keeping code bad to make it a barrier to entry
Huh, I’m not seeing any claim to this end from the GP, or did I not look hard enough? At face value, saying that something has an “absurdly low code quality” does not imply anything about nefarious motives.
Still, in GP’s case the Chinese whispers have reduced “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” to “absurdly low quality”. To which I ask, what is more likely: 1) that 30 million lines of code contain various levels of technical debt of which maintainers are aware, and that said maintainers are worried even about code where the technical debt is real but not causing substantial issues in practice? Or 2) that a piece of software gets to run on literally billions of devices of all sizes and prices just because it’s free and in spite of its “absurdly low quality”?
Linux is not perfect, neither technically nor socially. But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face.
GP here: I probably should have said “shockingly” rather than “absurdly”. I didn’t really expect to get lawyered over that one word, but yeah, the idea was that for a software that runs on billions of devices, the code quality is shockingly low.
Of course, this is plainly subjective. If your code quality standards are a lot lower than mine then you might disagree with my assessment.
That said, I suspect adoption is a poor proxy for code quality. Internet Explorer was widely adopted and yet it’s broadly understood to have been poorly written.
But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face
I’m sure self-righteousness could get you to the same place, but in my case I arrived by way of experience. You can relax, I wasn’t attacking Linux—I like Linux—it just has a lot of opportunity for improvement.
I guess I’ve seen the internals of too much proprietary software now to be shocked by anything about Linux per se. I might even argue that the quality of Linux is surprisingly good, considering its origins and development model.
I think I’d lawyer you a tiny bit differently: some of the bugs in the kernel shock me when I consider how many devices run that code and fulfill their purposes despite those bugs.
FWIW, I was not making a dig at open source software, and yes plenty of corporate software is worse. I guess my expectations for Linux are higher because of how often it is touted as exemplary in some form or another. I don’t even dislike Linux, I think it’s the best thing out there for a huge swath of use cases—I just see some pretty big opportunities for improvement.
But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face.
Or actual benchmarks: the performance the Linux kernel leaves on the table in some cases is absurd. And sure it’s just one example, but I wouldn’t be surprised if it was representative of a good portion of the kernel.
Well not quite but still “considered broken beyond repair by many people related to life time management” - which is definitely worse than “hard to formalize” when “the way ever[y]body does it” seems to vary between each user.
I love Rust but still, we’re talking of a language which (for good reasons!) considers doubly linked lists unsafe. Take an API that gets a 4 on Rusty Russell’s API design scale (“Follow common convention and you’ll get it right”), but which was designed for a completely different programming language if not paradigm, and it’s not surprising that it can’t easily be transformed into a 9 (“The compiler/linker won’t let you get it wrong”). But at the same time there are a dozen ways in which, according to the same scale, things could actually be worse!
What I dislike is that people are seeing “awareness of complexity” and the message they spread is “absurdly low quality”.
Note that doubly linked lists are not a special case at all in Rust. All the other common data structures like Vec, HashMap etc. also need unsafe code in their implementation.
Implementing these data structures in Rust, and writing unsafe code in general, is indeed roughly a 4. But these are all already implemented in the standard library, with an API that actually is at a 9. And std::collections::LinkedList is constructive proof that you can have a safe Rust abstraction for doubly linked lists.
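For what it’s worth, here is what that safe surface looks like from the caller’s side; none of the unsafe internals leak out (a quick sketch using only the standard library):

```rust
use std::collections::LinkedList;

fn main() {
    // A safe, owning doubly linked list from the standard library.
    let mut list: LinkedList<i32> = LinkedList::new();
    list.push_front(0);
    list.push_back(1);
    list.push_back(2);

    // Splitting, iterating, and mutating never require `unsafe` in the caller.
    let tail = list.split_off(1); // list = [0], tail = [1, 2]
    assert_eq!(list.iter().copied().collect::<Vec<_>>(), vec![0]);
    assert_eq!(tail.iter().copied().collect::<Vec<_>>(), vec![1, 2]);
    println!("front of tail: {:?}", tail.front()); // Some(1)
}
```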
Yes, the implementation could have bugs, thus making the abstraction leaky. But that’s the case for literally everything, down to the hardware that your code runs on.
You’re absolutely right that you can build abstractions with enough effort.
My point is that if a doubly linked list is (again, for good reasons) hard to make into a 9, a 20-year-old API may very well be even harder. In fact, std::collections::LinkedList is safe but still not great (for example the cursor API is still unstable); and being in std, it was designed/reviewed by some of the most knowledgeable Rust developers, sort of by definition. That’s the conundrum that maintainers face and, if they realize that, it’s a good thing. I would be scared if maintainers handwaved that away.
Yes, the implementation could have bugs, thus making the abstraction leaky.
Bugs happen, but if the abstraction is downright wrong then that’s something I wouldn’t underestimate. A lot of the appeal of Rust in Linux lies exactly in documenting/formalizing these unwritten rules, and wrong documentation can be worse than no documentation (cue the negative parts of the API design scale!); even more so if your documentation is a formal model like a set of Rust types and functions.
That said, the same thing can happen in a Rust-first kernel, which will also have a lot of unsafe code. And it would be much harder to fix it in a Rust-first kernel than in Linux at a time when it’s just testing the waters.
In fact, std::collections::LinkedList is safe but still not great (for example the cursor API is still unstable); and being in std, it was designed/reviewed by some of the most knowledgeable Rust developers, sort of by definition.
At the same time, it was included almost as like, half a joke, and nobody uses it, so there’s not a lot of pressure to actually finish off the cursor API.
It’s also not the kind of linked list the kernel would use, as they’d want an intrusive one.
And yet, safe to use doubly linked lists written in Rust exist. That the implementation needs unsafe is not a real problem. That’s how we should look at wrapping C code in safe Rust abstractions.
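A minimal sketch of that pattern, with a hypothetical C function standing in for existing kernel code (the name and signature are invented): the `unsafe` is confined to one audited spot, and callers only ever see the safe API.

```rust
// A hypothetical C function, invented for illustration:
//   long device_read(unsigned char *buf, unsigned long len);
// Returns the number of bytes read, or a negative errno-style value.
extern "C" {
    fn device_read(buf: *mut u8, len: usize) -> i64;
}

/// Safe wrapper: callers never see raw pointers or write `unsafe` themselves.
pub fn read_into(buf: &mut [u8]) -> Result<usize, i64> {
    // SAFETY: we pass a valid, writable buffer together with its real length,
    // which is the documented contract of the (hypothetical) C function.
    let ret = unsafe { device_read(buf.as_mut_ptr(), buf.len()) };
    if ret < 0 { Err(ret) } else { Ok(ret as usize) }
}
```

The point of the pattern is that the safety argument is written down once, next to the single unsafe block, instead of being re-derived at every call site.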
The whole comment you replied to, after the one sentence about linked lists, is about abstractions. And abstractions are rarely going to be easy, and sometimes could be hardly possible.
That’s just a fact. Confusing this fact for something as hyperbolic as “absurdly low quality” is a stunning example of the Dunning-Kruger effect, and frankly insulting as well.
I personally would call Linux low quality because many parts of it are buggy as sin. My GPU stops working properly literally every other time I upgrade Linux.
No one is saying that Linux is low quality because it’s hard or impossible to abstract some subsystems in Rust, they’re saying it’s low quality because a lot of it barely works! I would say that your “Chinese whispers” misrepresents the situation and what people here are actually saying. “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” doesn’t apply if no one can tell you how to use an API, and everyone does it differently.
Actually, the NT kernel of all things seems to have a pretty good reputation, and I wouldn’t dismiss the BSD kernels out of hand. I don’t know which kernel is better, but it seems you do. If you could explain how you came to this conclusion that would be most helpful.
*nod* I haven’t been a Windows person since shortly after the release of Windows XP (i.e. the first online activation DRM’d Windows) but, whenever I see glimpses of what’s going on inside the NT kernel in places like Project Zero: The Definitive Guide on Win32 to NT Path Conversion, it really makes me want to know more.
The generally accepted definition of “hit piece” includes an attempt to sway public opinion by publishing false information. Leaving aside the fact that the user who linked this story did not publish it, and deferring the discussion of who may or may not pay them to post, that is a significant claim that requires significant evidence.
So, please share your evidence… what’s the false information here, and how exactly is @freddyb attempting to sway public opinion? To what end? Be very specific, please.
That’s a fair point. I should have said “false or misleading.”
So I’ll amend my question, which I doubt will get answered at any rate:
@ecksdee: So, please share your evidence… what’s the false or misleading information here, and how exactly is @freddyb attempting to sway public opinion? To what end? Be very specific, please.
If you look at the history of soatok’s blog posts on Lobsters, it is pretty obvious that sooner or later someone from this community would have posted this entry.
Now you have to show me how mozilla is related to signal in any positive or negative way. You yourself seem to have a strong feeling towards mozilla at least.
If someone is promoting a VPN service in 2025 and that service doesn’t use WireGuard as its underlying protocol, they are almost certainly LARPing at security expertise rather than offering valuable advice.
This sounds a bit strong. Cloudflare’s WARP uses QUIC, noting valid issues with Wireguard:
That being said, the protocol is not without its issues. Like many tradeoffs in technology, WireGuard’s strengths are also its drawbacks. While simple, it is also rigid: it’s not possible to extend it easily, for example, for session management, congestion control, or to recover more quickly from error-state behaviors we’re familiar with. Finally, neither the protocol nor the cryptography it uses are standards-based, making it difficult to keep up with the strongest known cryptography (post-quantum crypto, for example).
IPSec was raising my eyebrows in 2005. I think now we know that developing an IPSec profile we can all understand as secure is a really tall order, and outside of such a very well understood and narrowly specified profile, IPSec is not really actually reasonable.
Their business greatly depends on being considered secure. Why would they risk that image by adding a backdoor when everyone is already giving them their data willingly?
I definitely consider their MITM an issue with security, privacy, and infrastructure centralization, but it doesn’t mean they can’t produce good tech.
I’m still skeptical of MASQUE as one should be of anything new related to security.
Some stream of consciousness, running notes, as I read through:
“Oh, it turns out, they’re wrapping GMP.” That’s definitely not constant-time!
You would be surprised how often this particular finding crops up.
Narrator: I wouldn’t.
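For anyone who hasn’t run into the jargon: “constant-time” means the running time and branching don’t depend on secret data. A hedged sketch of the difference (not code from the post under review):

```rust
/// NOT constant-time: returns as soon as a byte differs, so the running
/// time leaks how long the matching prefix is (a classic timing oracle).
fn leaky_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    for (x, y) in a.iter().zip(b) {
        if x != y {
            return false; // early exit depends on secret data
        }
    }
    true
}

/// Constant-time for equal-length inputs: always scans every byte and
/// only inspects the accumulated difference at the very end.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    let secret = b"correct horse battery staple";
    assert!(constant_time_eq(secret, b"correct horse battery staple"));
    assert!(!constant_time_eq(secret, b"incorrect guess"));
    assert!(leaky_eq(secret, secret));
}
```

In real code you would reach for a vetted implementation (for example the `subtle` crate) rather than hand-rolling this, since an optimizing compiler can quietly undo naive attempts.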
For this blog post series about Signal, I will use Semgrep, because it’s currently free as in beer (though the prominent Series D funding announcement makes me worry about its eventual enshittification, so don’t take my hasty tool selection as any sort of endorsement).
Semgrep looks like a free trial. So “free as in the taster of beer that the bartender will pour you so you can decide whether you want to buy a whole one”. But in any case, I don’t think charging me for a product is really enshittification.
Missing sections from the technical documentation are like a flashing neon sign that says “audit me”.
I’ve done a couple crypto audits and (many) general network security audits, and this is a fundamental truth. Both for missing sections and for present-but-suspiciously-under-detailed sections. I like to map documentation sections to the source tree, then see what parts of the source tree have missing or superficial docs.
Every time I learn new ways to attack cryptosystems, I always look at Signal or Tor to see if I can break either of them, because they’re difficult targets and you win instant bragging rights if you succeed.
This is the most time I’ve spent writing about a negative result on this blog.
If you read this whole series from start to finish and feel a little disappointed that nothing fell out of my review, I want to make something clear to you in particular:
I didn’t feel that “I looked at Signal and didn’t find any vulnerabilities in it” is exactly a convincing argument, so instead, I wanted to lead you down the journey I took to review Signal; to show you the code snippets I reviewed, and what (if anything) significant I thought about them.
I like the way this person thinks. I haven’t spent enough time with the Signal source code to say whether I agree with these findings. And I still don’t like that you need a phone and a phone number to bootstrap Signal. But this was a well-written and well-explained piece about why the cryptography behind Signal’s protocol is very likely as solid as we expect it to be.
The home page definitely looks like that, but you can just pip install the thing. Some features (cross-file analysis, and some data flow stuff e.g. the more advanced taint analysis) are locked, but the static pattern matching and some of the taint analysis is available out of the box. I assume some of the current experiments might be locked to pro users if they get stabilised.
But in any case, I don’t think charging me for a product is really enshittification.
I would assume the fear is less about a paid product and more about investor / exit-driven priorities.
I would further assume they selected a free product so readers can reproduce without having to pay for a possibly expensive license.
The home page definitely looks like that, but you can just pip install the thing.
I absolutely didn’t pick up on that. I’d agree that the risk of the company changing the trial offering so that people can’t reproduce this analysis due to investor pressure is at least adjacent to enshittification. But it’s not the bait and switch that I usually associate with that.
It’s really a small point. But the way the writer phrased it made me briefly ponder the difference between a free trial + up-sell and “enshittification”.
Thanks for continuing to write things like this. Your writing over the past couple of years has helped me think about how this tool that I’m deeply skeptical of can be useful for me anyway.
I appreciate it, and I really don’t understand the flags you’re picking up. Those seem abusive to me.
“Ignore FEDERAL-SPYCAM; that is just my phone.”
My phone’s hotspot is called Warning Expired Certificate. Not sure if that would deter hackers or lure them.
My home SSID is “FBI Surveillance Van”. I’ve got the same question.
tl;dr: vibe coding is the code equivalent of writing a sentence by just tapping the next suggested autocomplete word on your phone’s keyboard
I can stop my doomscrolling now. You are completely correct, and you win the internet for today. Congratulations, and enjoy your prize of “one internet.”
(Thank you for phrasing it this way. I was having a hard time explaining what “vibe coding” meant to someone who asked me, and your explanation is exactly the right way to understand it, IMO.)
asks for your email address for the code? no thanks
I’m so tired of the constant begging to sign up for mailing lists, etc. Exhausting.
Oh man I didn’t even notice the “unsubscribe at any time”. Does asking for the code automatically put you on his newsletter? That’s a dark pattern alright.
When it got posted elsewhere recently, someone mirrored the code on github:
https://github.com/jwenjian/scroll-buddy
Having to constantly be alert to dark patterns like this is getting very tiresome, indeed.
Even if half of the things I have heard about Brave are wrong, why even bother when so many other great, free alternatives exist. The first and last time I tried it was the home page ad fiasco… uninstalled and went back to Chrome.
These days I try to use Firefox, but escape hatch to Chrome when things don’t work. I know there are better alternatives to both Firefox and Chrome, I’ll start exploring them… maybe? It’s hard for me to care about them since most of them are just Chrome/Firefox anyway. I’ll definitely give Ladybird a go when it’s ready. On paper, at least, it sounds like the escape from Google/Mozilla that is desperately needed.
Kagi bringing Orion to Linux feels promising. It’s OK on Mac, though after using it for 6 months I switched back to Safari. It looks like they’re using Webkit for that on Linux, not blink, which is a happy surprise IMO. That feels like a good development. (I’m also looking forward to Ladybird, though. Every so often I build myself a binary and kick the tires. Their progress feels simultaneously impossibly fast and excruciatingly slow.
If I understand correctly, Orion is not open source. That feels like a huge step backward and not a solution to a browser being controlled by a company with user-hostile incentives. I think Ladybird is more in line with what we really need: a browser that isn’t a product but rather a public good that may be funded in part by corporations but isn’t strongly influenced by any one commercial entity.
I believe they have stated that open sourcing is in the works.
Their business model is, at the minimum, less user hostile than others due to users paying them money directly to keep them alive.
Disclaimer: Paid Kagi user.
That help page has said Kagi is “working on it” since 2023-09 or earlier. Since Kagi hasn’t finished that work after 1.5 years, I don’t believe Kagi is actually working on open sourcing Orion.
If US DoJ has their way, google won’t be able to fund chrome any more the way it was doing so far. That also means apple and firefox lose money too. So Kagi’s stuff might work out long term if breakup happens.
That’s totally valid, and I’d strongly prefer to use an open source UA as well!
In the context of browsers, though, where almost all traffic comes from either webkit-based browsers (chiefly if not only Safari on Mac/iPad/iPhone), blink-based browsers (chrome/edge/vivaldi/opera/other even smaller ones) or gecko-based browsers (Firefox/LibreWolf/Waterfox/IceCat/Seamonkey/Zen/other even smaller ones) two things stand out to me:
I thought that Orion moving Webkit into a Linux browser was a promising development just from an ecosystem diversity perspective. And I thought having a browser that’s not ad-funded on Linux (because even those FOSS ones are, indirectly ad-funded) was also a promising development.
I’d also be happier with a production-ready Ladybird. But that doesn’t diminish the notion that, in my eyes, a new option that’s not beholden to advertisers feels like a really good step.
There are non-gecko pure FOSS browsers on Linux.
Of the blink-based pure FOSS browsers, I use Ungoogled Chromium, which tracks the Chromium project and removes all binary blobs and Google services. There is also Debian Chromium; Iridium; Falkon from KDE; and Qute (keyboard driven UI with vim-style key bindings). Probably many others.
The best Webkit based browser I’m aware of on Linux is Epiphany, aka Gnome Web. It has built-in ad blocking and “experimental” support for chrome/firefox extensions. A hypothetical Orion port to Linux would presumably have non-experimental extension support. (I found some browsers based on the deprecated QtWebKit, but these should not be used due to unfixed security flaws.)
I wasn’t sure Ungoogled Chromium was fully FOSS, and I completely forgot about Debian Chromium. I tried to use Qute for a while and it was broken enough for me at the time that I assumed it was not actively developed.
When did Epiphany switch from Gecko to Webkit? Last time I was aware of what it used, it was like “Camino for Linux” and was good, but I still had it on the Gecko pile.
According to Wikipedia, Epiphany switched from Gecko to Webkit in 2008, because the Gecko API was too difficult to interface to / caused too much maintenance burden. Using Gecko as a library and wrapping your own UI around it is apparently quite different from soft forking the entire Firefox project and applying patches.
Webkit.org endorses Epiphany as the Linux browser that uses Webkit.
There used to be a QtWebKit wrapper in the Qt project, but it was abandoned in favour of QtWebEngine based on Blink. The QtWebEngine announcement in 2013 gives the rationale: https://www.qt.io/blog/2013/09/12/introducing-the-qt-webengine. At the time, the Qt project was doing all the work of making WebKit into a cross-platform API, and it was too much work. Google had recently forked Webkit to create Blink as a cross-platform library. Switching to Blink gave the Qt project better features and compatibility at a lower development cost.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around. It seems that Blink is the best implementation of such a library. WebKit is focused on macOS and iOS, and Firefox develops Gecko as an internal API for Firefox.
EDIT: I see that https://webkitgtk.org/ exists for the Gnome platform, and is reported to be easy to use.
I see Servo as the future, since it is written in Rust, not C++, and since it is developed as a cross platform API, to which you must bring your own UI. There is also Ladybird, and it’s also cross-platform, but it’s written in C++, which is less popular for new projects, and its web engine is not developed as a separate project. Servo isn’t ready yet, but they project it will be ready this year: https://servo.org/blog/2025/02/19/this-month-in-servo/.
I used to contribute to Camino on OS X, and I knew that most appetite for embedding gecko in anything that’s not firefox died a while back, about the time Mozilla deprecated the embedding library, but I’d lost track of Epiphany. As an aside: I’m still sorry that Mozilla deprecated the embedding interface for gecko, and I wish I could find a way to make it practical to maintain that. Embedded Gecko was really nice to work with in its time.
I strongly agree with this. I’d really like a non-blink thing to be an option for this. Not because there’s anything wrong with blink, but because that feels like a rug pull waiting to happen. I like that servo update, and hope that the momentum holds.
Wikipedia suggests the WebKit backend was added to Epiphany in 2007 and they removed the Gecko backend in 2009. Wow, time flies! GNOME Web is one I would like to try out more, if only because I enjoy GNOME and it seems to be a decent option for mobile Linux.
I have not encountered any website that doesn’t work on firefox (one corporate app said it required Chrome for some undisclosed reason, but I changed the useragent and had no issue at all using their simple CRUD). What kind of issues do you find?
I’ve wondered the same thing in these recent discussions. I’ve used Firefox exclusively at home for over 15 years, and I’ve used it at my different jobs as much as possible. While my last two employers had maybe one thing that only worked in IE or Chrome/Edge, everything else worked fine (and often better than my coworkers’ Chrome) in Firefox. At home, the last time I remember installing Chrome was to try some demo of Web MIDI before Firefox had support. That was probably five years ago, and I uninstalled Chrome after playing with the demo for a few minutes.
I had to install Chromium a couple of times in the last few years to join meetings and podcast recordings that were done with software using Chrome-only APIs.
When it happens, I bless Flatpak, as I install Chromium and then permanently delete it afterward without any trace on my system.
If you are a heavy user of such web apps, I guess it makes sense to use Chrome as your main browser.
I can’t get launcher.keychron.com to work on LibreWolf, but that’s pretty much it. I also have chrome just in case I’m too lazy to figure out what specifically is breaking a site.
Firefox doesn’t support WebUSB, so that’s probably the issue.
Thanks, yeah, that’s it. I knew it was some specific thing that wasn’t supported I just couldn’t remember and was writing that previous comment on my phone so I was too lazy to check. But yeah, it’s literally the only site I could think of that doesn’t work on Firefox (for me).
It’s pretty rare to be fair, so much so that I don’t have an example off the top of my head. I know, classic internet comment un-cited source bullshit, sorry. It was probably awful gov or company intranet pages over the years.
Some intensive browser based games run noticeably better on Chrome too, but I know this isn’t exactly a common use case for browsers that others care about.
Probably not a satisfying reply, apologies.
For some reason, trying to log in to the CRA (Canadian equivalent of the IRS) always fails for me with firefox and I need to use chrome to pay my taxes.
I run into small stuff fairly regularly. Visual glitches are common. Every once in a while, I’ll run into a site that won’t let me login. (Redirects fail, can’t solve a CAPTCHA, etc.)
Some google workspace features at least used to be annoying enough that I just devote a chrome profile to running those workspace apps. I haven’t retried them in Firefox recently because I kind of feel that it’s google’s just deserts that they get a profile on me that has nothing but their own properties, while I use other browsers for the real web.
I should start keeping a list of specific sites. Because I do care about this, but usually when it comes up I’m trying to get something done quickly and a work-around like “use chrome for that site” carries the day, then I forget to return to it and dig into why it was broken.
If you maintain a reasonably popular C++ library and it doesn’t use CMake as its build system, sooner or later someone will come and demand that you add CMake support (risking permanent brain damage in the process) to make it easier to consume your library in their CMake-based project. Happened to me multiple times.
I’m more familiar with meson, which has a nice system for consuming dependencies that don’t use meson (wraps). Does CMake not have something similar?
Like everything else in CMake, it has plenty of ways to interoperate with other build systems, and they all still involve suffering.
CMake can find libraries through pkg-config, or you can write your own finder. See the files in /usr/share/cmake-*/Modules/ for inspiration. No need to build your library using CMake.
I’ve sent PRs for porting projects that use pkgconf to cmake more than a few times, because pkgconf is at best fragile and at worst barely works at all, for instance with the msvc universe. With cmake I know it will work and I can do my job.
Last time I tried pkg-config on Windows, it didn’t work very well. That’s been a while.
You can also accept a PR for a finder… which is what I’d offer to do if I were maintaining a C++ library that doesn’t use CMake. That said, I personally find the brain damage that CMake induces very compatible with the wear and tear imposed by C++, so when I do work in C++, I usually do use CMake for my builds. It’s the worst C++ build system, to be sure, except that all the others are even worse.
The insidious thing about CMake-induced brain damage is that it attacks those parts of your brain that you need to recognize a better build system. How else could one possibly explain otherwise sane people professing love to CMake?
Nobody starts out using CMake. The thing they used prior to picking up CMake is what did that job.
My above comment is of course a joke, but there is some truth to it: in CMake people work by copying and pasting illogical, magic incantations until something sticks. There is no understanding of the underlying build model. Which is not really surprising, since whatever model one may argue CMake has is obfuscated by the meta-build-system layering and the hacking around of underlying build system limitations (like having to generate source code during the configuration phase).
Then, when they try a supposedly better build system, they adopt the same approach: don’t bother trying to understand, just throw things against the wall until something sticks. I see this constantly where smart, experienced C++ developers adopt this attitude when trying build2. But C++ builds are too complex for this approach to work for anything other than toy examples.
Yes, C++ is a complex language and some features are outright broken or don’t compose well. But you can pick a sensible subset and there will be logic and a model, and you can go read the standard and it makes sense. CMake, IMO, is the absolute worst part of it. A new language (C++ post-C++11) lost to CMake, truly.
I’m a pretty strong cmake proponent because it’s the only thing that works at the scale I need: building one codebase with every potential toolchain and compiler, and having only one set of commands for targeting windows/msvc to iOS to freebsd to emscripten to ESP32s, etc. Alternatives never work that well; for instance, meson is a PITA as soon as you want to use windows, and fully hermetic build systems just aren’t compatible with what making packages for Linux distros requires. The language certainly is terrible, but it solves actual problems, such as not having to go download 7z.exe (or the Mac or Linux version) from who knows where in your CI scripts depending on the platform you’re running on, because cmake supports cross-platform archive extraction with one single command.
My father worked for Guinness for about 25 years. When I was growing up we had prints of the John Ireland calendar “the gentle art of making Guinness”, a splendid series of cartoons in the tradition of Heath Robinson or Rube Goldberg. Guinness advertising art was great.
But, it’s a mass-produced factory beer. I occasionally like a stout or other dark beer, but Guinness is boring.
Guinness was relatively early in the use of statistical quality control over large scale biochemical processes – that is where Student’s t-distribution was discovered.
I visited a brewery for one of the top 5 beer producers worldwide and the effort and care going into producing a consistent, safe product is impressive. The fact that the product itself is rather bland and boring is incidental :D
I like boring beer. Incoming long defense of Guinness:
I didn’t always use to be like this; I used to like hoppy IPAs. But as I’ve gotten older, my desire to drink beers higher than 5% has diminished so thoroughly that I can count the number of times I drink one per year on one hand.
I certainly would like to drink more complex stouts, but there are barely any brewed in America with the same ABV. For example, I went to my favorite local brewery’s website, and “Stout” was always prefixed with “imperial”: https://grimmales.com/menu/
I can’t drink these! They taste like syrup and instantly give me a headache.
Frankly, my go to beer these days is Asahi. I’m tired of complexity. Beer is less for me about complex flavors and more about refreshment and the desire to relax. In the cases when I want something more complex, I go for a cocktail. I make myself a Negroni or a Campari soda (depending on which side of refreshment and flavor I want).
Anyway. I love Guinness. It tastes good (that is to say, it doesn’t taste like piss water), has a low ABV, and is served in basically every bar in Manhattan and Brooklyn. It fits my needs.
Thanks for saying that. I like it fine if it’s what’s available, but it tastes watered down to me compared to other stouts I’ve grown accustomed to. I’ll take it over something lighter, but it’s fairly plain.
Completely agree about the watered down flavor. The Extra Stout, however, is quite tasty…
If you’re in the US, be aware that the Guinness product sold as Extra Stout 20 years ago is now known as Foreign Extra Stout. Today’s Extra Stout is watered down in comparison (and undoubtedly cheaper to produce).
As bait-and-switches go, this is mild compared to Newcastle Ale…
I still find it funny that the main UK production plant for Newcastle Brown Ale is in Sunderland.
That was a notorious bit of management fuckery https://en.wikipedia.org/wiki/Geographical_indications_and_traditional_specialities_in_the_European_Union#Within_the_European_Union
I’m with you but would add that it genuinely does taste better in Ireland! To the point that I know some Irish people who will not drink Guinness abroad.
The story goes that this is due to the water but I suspect the truth is that Guinness have a lot of control over how it’s stored and served in pubs (temperature etc). Whether that should matter is an exercise left to personal taste.
I hope to put that theory to the test one day!
I had an Irish colleague in France who spun the theory that French Guinness is a lot less bitter, because locals don’t like it.
He refused it.
My personal take on Guinness: I rarely drink, and then I only rarely drink Guinness, so I enjoy it as an easy stout that comes with an expected taste. Sometimes, that’s just what I want.
(Fun fact about me: I do, however, have a taste for alcohol; my first job was sysadmin at a vineyard)
When I visited the Guinness Storehouse in Dublin the advertising floor was definitely the most interesting part. The rest was over the top displays or sections for social media tourism. I have to agree about it being boring, although I’ll often order it if there’s no other stout or porters served.
Nothing wrong with mass-produced factory beer. Writing this from Germany and I’m always up for a good Maß of Augustiner.
Also, their book of world records was endlessly entertaining when I was in primary school.
Many years after I tried Guinness (which I still occasionally enjoy, because it is boring in quite a pleasant way) I learned that the thing I really liked about it was that it was always a nitro pour. Seeking out interesting beers (mostly, but not all stouts) served on nitro taps has been fun.
I had never made the connection between the beer and the books before. Thanks!
You should also try nitro (cold brew) coffee. It has that same silky mouthfeel and bitterness. Plus, you can drink it any time of day guilt-free :)
I’m a big fan of nitro coffee! In the before times, I worked once or twice a week in an office that had a nitro cold brew tap in the kitchen, and that was enough to make me look forward to those office days. Come to think of it, everyone (of those who didn’t dislike coffee in general) really loved that perk.
My MacBook Pro is nagging me to upgrade to the new OS release. It lists a bunch of new features that I don’t care about. In the meantime, the following bugs (which are regressions) have been unfixed for multiple major OS versions:
There are a lot of others, these are the first that come to mind. My favourite OS X release was 10.6: no new user-visible features, just a load of bug fixes and infrastructure improvements (this one introduced libdispatch, for example).
It’s disheartening to see core functionality in an “abandonware” state while Apple pushes new features nobody asked for. Things that should be rock-solid, just… aren’t.
It really makes you understand why some people avoid updates entirely. Snow Leopard’s focus on refinement feels like a distant memory now.
The idea of Apple OS features as abandonware is a wild idea, and yet here we are. The external monitor issue is actually terrible. I have two friends who work at Apple (neither in OS dev) and both have said that they experience the monitor issue themselves.
It is one thing when a company ignores bugs reported by its customers.
It is another thing when a company ignores bugs reported by its own employees that are also customer-facing.
When I worked for a FAANG, they released stuff early internally as part of dogfooding programs to seek input and bug reports before issues hit users.
Sounds good, just that “you’re not the target audience” became a meme because so many bug reports and concerns were shut down with that response.
I was thinking about this not too long ago; there are macOS features (e.g. the widgets UI) that don’t seem to even exist anymore. So many examples of features I used to really like that are just abandoned.
This works flawlessly for me every single time, I use Apple Studio Display at home and a high end Dell at the office.
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
I can attest to that, though not with my Apple account but with my brother’s. Coincidentally, he had fewer problems activating iMessage/FaceTime on a Hackintosh machine.
A variation on that which I’ve run into is turning the monitor off and putting the laptop to sleep, then waking it without moving or disconnecting anything.
To avoid all windows ending up stuck on the laptop display, I have to sleep the laptop, then power off the monitor. To restore: power on the monitor, then wake the laptop. Occasionally (1 in 10 times?) it still messes up and I have to manually move windows back to the monitor display.
(This is when using dual-head mode with both the external monitor and laptop display in operation)
iCloud message sync, with message keep set to forever, seems to load soooo much that on my last laptop it was so awful to type long messages (more than 1 sentence) directly into the text box that I started to write messages outside of the application, then copy/paste and send them. The delay was in the seconds for me.
I’m really heartened by how many people agree that OS X 10.6 was the best.
Edited to add … hm - maybe you’re not saying it was the best OS version, just the best release strategy? I think it actually was the best OS version (or maybe 10.7 was, but that’s just a detail).
Lion was hot garbage. It showed potential (if you ignored the workflow regressions) but it was awful.
10.8 fixed many of Lion’s issues and was rather good.
Snow Leopard was definitely peak macOS.
Are there people who still use 10.6? I wonder what would be missing compared to current MacOS. Can it run a current Firefox? Zoom?
It would be pretty hard to run 10.6 for anything other than novelty: the root certs are probably all expired, and you definitely can’t run any sort of modern Firefox on it; the last version of FF to support 10.6 was ESR 45, released in 2016: https://blog.mozilla.org/futurereleases/2016/04/29/update-on-firefox-support-for-os-x/
I know there are people keeping Windows 7 usable despite lack of upstream support; it would be cool if that existed for 10.6 but it sounds like no.
Maybe 10.6 could still be useful for professional video/audio/photo editing software, the type that wasn’t subscription based.
It was before Apple started wanting to make it more iPhone-like, slowly doing what Microsoft did with Windows 8 (who did it in a ‘big bang’) by making Windows Phone and Windows desktop almost indistinguishable. After Snow Leopard, Apple became a phone company and very iPhone-centric and just didn’t bother with the desktop - it became cartoonish and all flashy, not usable. That’s when I left MacOS and haven’t looked back.
Recently, Disk Utility has started showing a permissions error when I click unmount or eject on SD cards or their partitions, if the card was inserted after Disk Utility started. You have to quit and re-open Disk Utility for it to work. It didn’t use to be like that, but it is now, on two different Macs. This is very annoying for embedded development, where you need to write to SD cards frequently to flash new images or installers. So unmounting/ejecting drives just randomly broke one day, and I’m expecting it won’t get fixed.
Another forever-bug: the animation to switch workspaces takes more time on higher refresh rate screens. This has forced me to completely change how I use macOS to de-emphasise workspaces, because the animation is just obscenely long after I got a MacBook Pro with a 120Hz screen in 2021. Probably not a new bug, but an old bug that new hardware surfaced, and I expect it will never get fixed.
I’m also having issues with connecting to external screens only working occasionally, at least through USB-C docks.
The hardware is so damn good. I wish anyone high up at Apple cared at all about making the software good too.
Oh, there’s another one: the fstab entries to not mount partitions that match a particular UUID no longer work, and there doesn’t appear to be any replacement functionality (which is annoying when it’s a firmware partition that must not be written to except in a specific way, or it will soft-brick the device).
Oh, fun! I’ve tried to find a way to disable auto mount, and the only solution I’ve found is to add individual partition UUIDs to a block list in fstab, which is useless to me since I don’t just re-use the same SD card with the same partition layout all the time; I would want to disable auto mounting completely. But it’s phenomenal to hear that they broke even that sub-par solution.
Maybe it’s an intended “feature”, because 120Hz enabled iPhones and iPads have the same behavior.
Maybe, but we’re talking about roughly 1.2 seconds from the start of the gesture until keyboard input starts going to an app on the target workspace. That’s an insane amount of delay to just force the user to sit through on a regular basis… On a 60Hz screen, the delay is less than half that (which is still pretty long, but much much better)
Not a fix, but as a workaround have you tried Accessibility > Display > Reduce Motion?
I can’t stand the normal desktop switch animation even when dialed down all the way. With that setting on, there’s still a very minor fade-type effect but it’s pretty tolerable.
Sadly, that doesn’t help at all. My issue isn’t with the animation, but with the amount of time it takes from I express my intent to switch workspace until focus switches to the new workspace. “Reduce Motion” only replaces the 1.2 second sliding animation with a 1.2 second fading animation, the wait is exactly the same.
Don’t update/downgrade to Sequoia! It’s the Windows ME of macOS releases. After the Apple support person couldn’t resolve any of the issues I had, they told me to reinstall Sequoia and then gave me instructions to “upgrade” to Ventura/Sonoma.
I thought Big Sur was the Windows ME of (modern) Mac OS. I have had a decent experience in Sequoia. I usually have Safari, Firefox, Chrome, Mail, Ghostty, one JetBrains thing or another (usually PyCharm Pro or Clion), Excel, Bitwarden, Preview, Fluor, Rectangle, TailScale, CleanShot, Fantastical, Ice and Choosy running pretty much constantly, plus a rotating cast of other things as I need them.
Aside from Apple Intelligence being hot garbage, (I just turn that off anyway) my main complaint about Sequoia is that sometimes, after a couple dozen dock/undock cycles (return to my desk, connect to my docking station with a 30” non-hidpi monitor, document scanner, time machine drive, smart card reader, etc.) the windows that were on my Macbook’s high resolution screen and move to my 30” when docked don’t re-scale appropriately, and I have to reboot to address that. That seems to happen every two weeks or so.
Like so many others here, I miss Snow Leopard. I thought Tiger was an excellent release, Leopard was rough, and Snow Leopard smoothed off all the rough edges of Tiger and Leopard for me.
I’d call Sequoia “subpar” if Snow Leopard is your “par”. But I don’t find that to be the case compared to Windows 11, KDE or GNOME. It mostly just stays out of my way.
Have you ever submitted these regressions to Apple through a support form or such?
Apple’s bug reporting process is so opaque it feels like shouting into the void.
And, Apple isn’t some little open source project staffed by volunteers. It’s the richest company on earth. QA is a serious job that Apple should be paying people for.
Yeah. To alleviate that somewhat (for developer-type bugs) when I was making things for Macs and iDevices most of the time, I always reported my bugs to openradar as well:
https://openradar.appspot.com/page/1
which would at least net me a little bit of feedback (along the lines of “broken for everyone or just me?”) so it felt a tiny bit less like shouting into the void.
I can’t remember on these. The CalDAV one is well known. Most of the time when I’ve reported bugs to Apple, they’ve closed them as duplicates and given no way of tracking the original bug.
No. I tried being a good user in the past but it always ended up with “the feature works as expected”. I won’t do voluntary work for a company which repeatedly shits on user feedback.
I wonder if this means that tests have been red for years, or that there are no tests for such core functionality.
Sometimes we are the tests, and yet the radars go unread
10.6 “Snow Leopard” was the last Mac OS that I could honestly say I liked. I ran it on a cheap mini laptop (a Dell I think) as a student, back when “hackintoshes” were still possible.
I decided to use ASN.1/DER in a recent project because I was feeling old school lol
ASN.1 has a gigantic specification. It looks like it was written in a totally different era. (I would say the “pre-Github” era.) It looks like it was written in an era when programming had nothing in common with fun. It looks like it was written by managers, not by programmers, and especially not by programmers who love their work. This spec scared me away completely, and I will never consider ASN.1 for my projects.
And proprietary ASN.1 compilers are another argument against ASN.1
Yeah I mean it certainly was written in a different era (the 80s, I believe). I quite like it though. And DER (one of the main encodings for ASN.1 messages) is actually very simple FWIW. But sure I mean I doubt anyone’s gonna come along and pressure you to use it in your projects (though you may already be using it without realising if you’re using GSM, PKCS, Kerberos, etc). I for one find it quite satisfying to use the same serialisation/deserialisation paradigm being utilised in other parts of the stack but that’s just me haha
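To give a feel for how small DER’s core really is (an illustrative sketch, not a full or production encoder): every value is just a tag, a length, and the contents, so encoding an INTEGER takes only a few lines.

```rust
/// Minimal illustrative DER encoder for a non-negative INTEGER.
/// DER is tag-length-value: tag 0x02 = INTEGER, then the length,
/// then the big-endian content bytes.
fn der_encode_uint(mut n: u64) -> Vec<u8> {
    // Content: minimal big-endian bytes (zero is encoded as a single 0x00).
    let mut content = Vec::new();
    if n == 0 {
        content.push(0);
    } else {
        while n > 0 {
            content.push((n & 0xff) as u8);
            n >>= 8;
        }
        content.reverse();
    }
    // DER integers are signed: prepend 0x00 if the high bit is set.
    if content[0] & 0x80 != 0 {
        content.insert(0, 0);
    }
    let mut out = vec![0x02, content.len() as u8]; // short-form length (< 128 bytes)
    out.extend(content);
    out
}

fn main() {
    // 65537 (a common RSA exponent) encodes as 02 03 01 00 01.
    assert_eq!(der_encode_uint(65537), vec![0x02, 0x03, 0x01, 0x00, 0x01]);
    assert_eq!(der_encode_uint(0), vec![0x02, 0x01, 0x00]);
    println!("ok");
}
```

Real-world DER also needs long-form lengths (for contents over 127 bytes) and the rest of the tag zoo, but the tag-length-value shape stays exactly the same.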
It was written by telecoms companies in the early 80’s, before TCP/IP created the internet and made most other networking technologies obsolete. Not just the pre-Github era, but the pre-PC and pre-internet era. Lots of stuff from the 50’s and 60’s looks like that; to some extent it wasn’t until the late 60’s and 70’s that “programmers who love their work” became a powerful technical force outside of universities. You aren’t allowed to have fun if you work for a government or giant company on machinery that costs more than your entire life does.
It’s been a long time since I’ve looked at ASN.1 outside of X.509, but you inspired me to go look. I’m pretty surprised to see that there really isn’t much that Protobufs do that ASN.1/DER does not. Varints are an easy example of something that Protobufs do and DER doesn’t, but… wow, are they similar!
Yeah they really do cover a lot of the same ground! But then with ASN.1 it’s so much more standardised, I feel like it’s massively underrated. I also find ASN.1’s ability to be encoded in a bunch of different ways depending on what’s appropriate for the situation-at-hand super cool, and it’s something that Protobuf doesn’t really do :)
Did you use an ASN.1 compiler? Was it a commercial one you already had access to, or is there a FOSS one you like?
Yeah I’m using Erlang/OTP’s built-in ASN.1 compiler, which is very nice. On the client side (a Vala/GTK app) I’m currently just deserialising directly from the DER right now, but I’ll probably switch to using a proper ASN.1 compiler at some point (or maybe I’ll make my own limited one for Vala, who doesn’t love a good yak shave). I’ve used asn1c and asn1scc before too, both have worked well for me.
Side thread: personally, I don’t feel like Github stars are a good metric for the popularity of a project. What do you think?
I don’t have a better way to estimate project popularity; just saying that Github stars seem not useful to me. In about 16 years of using Github I have starred fewer than 30 projects, but I’ve probably used ten times as many Github projects (probably many more). Looks like I just don’t star projects :-) .
And there might actually be a bias in the star counts, in that some projects attract users that are more likely to hand out stars.
What makes you give a star to a Github project? Do you give stars for any project that sounds interesting, or any project that you use, or any project that you feel exceptionally thankful for?
Agreed, they are pretty useless as a metric for anything. I think they mostly measure “how much reach on social media/reddit/HN/… has this ever gotten” in many cases, and that’s not informative of anything. (I personally star a lot, but really treat it as a bookmark along the lines of “looks vaguely interesting from a brief skim”; it’s not an endorsement in any way.)
I’m pretty sure I’ve never starred a project on GitHub, or at least I haven’t in the past decade, and I don’t know why anyone would! It’s an odd survival of GitHub’s launch era, when “everything has to be social” was the VC meme of the moment and “social open source” was GitHub’s play.
I find it useful as a bookmark, a way to search a curated portion of GitHub later on.
I use it as a “read later” flag when I see a link to a project there but don’t have time to fully consider it in the moment.
Note that I also tried to use Google Trends, but both keywords fell under the threshold for tracking over time!
You can also compare download count from package managers like NPM, but I didn’t have an easy way to do that for so many libraries
I don’t get why popularity is so important here. Isn’t it effectively an implementation detail of your language? Even if I’m misunderstanding that and it is not, isn’t the more important question “are there good implementations”, not “are the implementations more popular than the ones for the other thing”?
One huge use-case for my language is sending and receiving and storing programs, so yes, it’s an implementation detail, but it’s also a very important one that will be impossible to change later.
But you’re totally right – that is the main question. I’m still exploring the space of serialization candidates, and these two particularly stood out to me.
I mostly care about popularity because convincing people to trust/adopt technology is way harder than actually implementing it. Extending a trusted format seems less risky than other options
If you can FOIA a query, can you ask for ‘SHOW TABLES’ and then “SHOW CREATE TABLE” or “DESCRIBE” for each of them? It can’t be a file layout if it’s in the format it describes, that’s a contradiction, right..? Right??
Indeed, that would seem like the next step to me.
Kinda bothered by the state supreme court interpretation, but mostly because I don’t think ANY “per-se” exception should be given to anything built with public money or to which the public owns the rights: source code, file layout, schema, whatever it is. Security and privacy should be the only exceptions given, and even then SOME form of access should be provided, unless the government can make a very strong case that the security implications outweigh the public’s right to know where the fuck their taxes are going.
Yeah, if we’re wishing for ponies, I too would like to require, once and for all, all software systems built with public money to make their source and documentation available. No idea who’s going to bankroll that lobbying effort, though. And you can imagine the entire industry of public-sector IT contractors who would fight it.
This case was fought over the interpretation of the security exception. As long as the law still thinks that security by obscurity is a reasonable and defensible practice, there’s a long way to go.
You might find this amusing… a while back, I was being paid with public money to build a thing. I was arguing for the thing I was building to be FOSS, and the manager on the government side pushed back. Their concern was that, apparently, if the source were released to the public with no restrictions, contractors could then sell things built off the software back to the government, claiming (and charging as though) they had done the full development themselves.
My team responded to that objection by suggesting that a copyleft license should address those concerns. And, shockingly, our response carried the day. We got our library released under LGPLv2.
I was happy with that because I would like to require all systems built with public money to be released to the public as FOSS. And I’m proud to report that it worked once, but I don’t want to personally work in that space anymore. It was exhausting.
Gotta wish for ponies to get a rescue cat with 3 legs and asthma
The real gem is at the end:
@susam: I’m sorry you have to deal with content like that followed by a dance to make sure you react appropriately in order to avoid getting your account cancelled. I wish people could behave better.
Does Soar work on macOS? One of the most interesting things about Homebrew for Linux is that, if I make my software available that way, I can write instructions that are practically identical for Linux and Mac users of my software.
The screenshots in the readme look like they have a Mac skin on them, but I don’t see any indication of Mac support or lack thereof.
No, it doesn’t. Homebrew is perfect for macOS. Someone else asked the same question: https://github.com/pkgforge/soar/issues/18
Thanks! The main things that made me think it might were the desire to replace homebrew on Linux and the screenshots.
I’ve got to say, I’m not so attached to it. And I’ve been using it for a damn long time. I think a package manager informed by more recent integrations (integrations like, say, uv) could manage a better experience on Mac and Linux.
Especially for people who are using it to help them write software.
Wow! How is this even legal?
The author replaced internet temperature control on a thing they purchased with local temperature control that worked in a way they preferred. In what universe would it ever be illegal to improve a thing that you purchased for your own use in your own home, in a way that doesn’t impact anyone else?
Heh, I don’t think that’s what he was asking about being legal. It seems to me he’s asking how the company’s insecurity is allowed.
It’s an interesting question. In medicine we have malpractice, and there’s something similar in the US for things like structural engineers, but our “engineering” isn’t really licensed, so there’s no real safety rating on things like this from a data-leakage and connectivity standpoint…
The physical bed cooling/heating product would have had to pass a UL rating, I think, to be able to be sold in the US, but there’s no such rating to ensure it’s secure from a technical/data standpoint.
It’s probably too difficult to nail something like that down, especially as new exploits are being found all the time.
Yes, I was asking about the insecurity. The idea of SSH access to all of people’s data seemed absurd
I completely misread you. I was cruising through the front page as I had my first coffee of the day, read an article about someone modifying their bed cooler to remove some anti-features, and absolutely interpreted the question as “How is this [modification] legal?” Sorry for misreading you! I think @aae nailed the question you were really asking, mostly.
The one thing I’d add to that answer is that, if the FTC were so inclined, they probably could exert enough pressure to at least require such an ssh backdoor to be disclosed up front, if not compel the manufacturer to remove it. I don’t believe any new hard-to-write regulation would even be required for that.
It kind of is, particularly given the highly sensitive nature of the data. Some electronics companies I know also ship their devices with the option to connect to them via SSH - but in contrast to this example, SSH access is documented and off by default. They use it mostly to see what’s wrong if a customer calls in because something’s not working, and then ask the customer if they’d be willing to provide remote access for debugging purposes, which the customer needs to enable themselves.
This is an extremely strong statement.
I think a few things are also interesting:
I think people are realizing how low quality the Linux kernel code is, how haphazard development is, how much burnout and misery is involved, etc.
I think people are realizing how insanely not in the open kernel dev is, how much is private conversations that a few are privy to, how much is politics, etc.
The Hellwig/Ojeda part of the thread is just frustrating to read because it almost feels like pleading. “We went over this in private” “we discussed this already, why are you bringing it up again?” “Linus said (in private so there’s no record)”, etc., etc.
Dragging discussions out in front of an audience is a pretty decent tactic for dealing with obstinate maintainers. They don’t like to explain their shoddy reasoning in front of people, and would prefer it remain hidden. It isn’t the first tool in the toolbelt but at a certain point there is no convincing people directly.
With quite a few things actually. A friend of mine contributes to a non-profit, which until recently had a very toxic member (they even attempted a felony). That member was driven out of the non-profit very soon after members talked in a thread that was accessible to all members. Obscurity is often a key component of abuse, be it mere stubbornness or criminal behaviour. Shine light on it, and it often goes away.
IIRC Hintjens noted this quite explicitly as a tactic of bad actors in his works.
It’s amazing how quick people are to recognize folks trying to subvert an org piecemeal via one-off private conversations once everybody can compare notes. It’s equally amazing to see how the same people beforehand will swear up and down, “oh no, that’s a conspiracy theory, such things can’t happen here,” until they’ve been burned at least once.
This is an active, unpatched attack vector in most communities.
I’ve found the lowest-stakes example of this is meeting minutes at work. I’ve observed that people tend to act more collaboratively and seek the common good if there are public minutes, as opposed to trying to “privately” win people over to their desires.
There is something to be said for keeping things between people with skin in the game.
It’s flipped over here, though, because more people want to contribute. The question is whether it’ll be stable long-term.
Something I’ve noticed is true in virtually everything I’ve looked deeply at is the majority of work is poor to mediocre and most people are not especially great at their jobs. So it wouldn’t surprise me if Linux is the same. (…and also wouldn’t surprise me if the wonderful Rust rewrite also ends up poor to mediocre.)
yet at the same time, another thing that astonishes me is how much stuff actually does get done and how well things manage to work anyway. And Linux also does a lot and works pretty well. Mediocre over the years can end up pretty good.
After tangentially following the kernel news, I think a lot of churning and death spiraling is happening. I would much rather have a rust-first kernel that isn’t crippled by the old guard of C developers reluctant to adopt new tech.
Take all of this energy into RedoxOS and let Linux stay in antiquity.
I’ve seen some of the R4L people talk on Mastodon, and they all seem to hate this argument.
They want to contribute to Linux because they use it, want to use it, and want to improve the lives of everyone who uses it. The fact that it’s out there and deployed and not a toy is a huge part of the reason why they want to improve it.
Hopping off into their own little projects which may or may not be useful to someone in 5-10 years’ time is not interesting to them. If it was, they’d already be working on Redox.
The most effective thing that could happen is for the Linux Foundation, and Linus himself, to formally endorse and run a Rust-based kernel. They could adopt an existing one or make a concerted effort to replace large chunks of Linux’s C with Rust.
IMO the Linux project needs to figure out something pretty quickly because it seems to be bleeding maintainers and Linus isn’t getting any younger.
They (the Mastodon posters) may be misunderstanding that others are not necessarily incentivized to do things just because those things are interesting to them.
Yep, I made a similar remark upthread. A Rust-first kernel would have a lot of benefits over Linux, assuming a competent group of maintainers.
along similar lines: https://drewdevault.com/2024/08/30/2024-08-30-Rust-in-Linux-revisited.html
Redox does have the chains of trying to do new OS things. An ABI-compatible Rust rewrite of the Linux kernel might get further along than expected, even if it only ran in virtual contexts at first, with hardware support coming later.
Linux developers want to work on Linux, they don’t want to make a new OS. Linux is incredibly important, and companies already have Rust-only drivers for their hardware.
Basically, sure, a new OS project would be neat, but it’s really just completely off topic in the sense that it’s not a solution for Rust for Linux. Because the “Linux” part in that matters.
I read a 25+ year old article [1] from a former Netscape developer that I think applies in part
The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?

Adopting a “rust-first” kernel is throwing the baby out with the bathwater. Linux has been beaten into submission for over 30 years for a reason. It’s the largest collaborative project in human history and over 30 million lines of code. Throwing it out and starting new would be an absolutely herculean effort that would likely take years, if it ever got off the ground.
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
The idea that old code is better than new code is patently absurd. Old code has stagnated. It was built using substandard, out of date methodologies. No one remembers what’s a bug and what’s a feature, and everyone is too scared to fix anything because of it. It doesn’t acquire new bugs because no one is willing to work on that weird ass bespoke shit you did with your C preprocessor. Au contraire, baby! Is software supposed to never learn? Are we never to adopt new tools? Can we never look at something we’ve built in an old way and wonder if new methodologies would produce something better?
This is what it looks like to say nothing, to beg the question. Numerous empirical claims, where is the justification?
It’s also self defeating on its face. I take an old codebase, I fix a bug, the codebase is now new. Which one is better?
Like most things in life the truth is somewhere in the middle. There is a reason there is the concept of a “mature node” in the semiconductor industry. They accept that new is needed for each node, but also that the new thing takes time to iron out the kinks and bugs. This is the primary reason why you see apple take new nodes on first before Nvidia for example, as Nvidia require much larger die sizes, and so less defects per square mm.
You can see this sometimes in software, for example X11 vs Wayland, where adoption is slow but most definitely progressing, and nowadays most people can see that Wayland is now, or is going to become, the dominant tech in the space.
The truth lies where it lies. Maybe the middle, maybe elsewhere. I just don’t think we’ll get to the truth with rhetoric.
Aren’t the arguments above more dialectic than rhetoric?
I don’t think this would qualify as dialectic; it lacks any internal debate and it leans heavily on appeals to analogy and intuition/emotion. The post itself makes a ton of empirical claims without justification, even beyond the quoted bit.
fair enough, I can see how one would make that argument.
“Good” is subjective, but there is real evidence that older code does contain fewer vulnerabilities: https://www.usenix.org/conference/usenixsecurity22/presentation/alexopoulos
That means we can probably keep a lot of the old trusty Linux code around while making more of the new code safe by writing it in Rust in the first place.
I don’t think that’s a fair assessment of Spolsky’s argument or of CursedSilicon’s application of it to the Linux kernel.
Firstly, someone has already pointed out the research that suggests that existing code has fewer bugs in than new code (and that the older code is, the less likely it is to be buggy).
Secondly, this discussion is mainly around entire codebases, not just existing code. Codebases usually have an entire infrastructure around them for verifying that the behaviour of the codebase has not changed. This is often made up of tests, but it’s also made up of the users who try out a release of a codebase and determine whether it’s working for them. The difference between making a change to an existing codebase and releasing a new project largely comes down to whether this verification (both in terms of automated tests and in terms of users’ ability to use the new release) works for the new code.
Given this difference, if I want to (say) write a new OS completely in Rust, I need to choose: Do I want to make it completely compatible with Linux, and therefore take on the significant challenge of making sure everything behaves truly the same? Or do I make significant breaking changes, write my own OS, and therefore force potential adopters to rebuild their entire Linux workflows in my new OS?
The point is not that either of these options are bad, it is that they represent significant risks to a project. Added to the general risk that is writing new code, this produces a total level of risk that might be considered the baseline risk of doing a rewrite. Now risk is not bad per se! If the benefits of being able to write an OS in a language like Rust outweigh the potential risks, then it still makes sense to perform the rewrite. Or maybe the existing Linux kernel is so difficult to maintain that a new codebase really would be the better option. But the point that CursedSilicon was making by linking the Spolsky piece was, I believe, that the risks for a project like the Linux kernel are very high. There is a lot of existing, old code. And there is a very large ecosystem where either breaking or maintaining compatibility would each come with significant challenges.
Unfortunately, it’s very difficult to measure the risks and benefits here in a quantitative, comparable way, so I think where you fall on the “rewrite vs continuity” spectrum will depend mostly on what sort of examples you’ve seen, and how close you think this case is to those examples. I don’t think there’s any objective way to say whether it makes more sense to have something like R4L, or something like RedoxOS.
I haven’t read it yet, but I haven’t made an argument about that; I just created a parody of the argument as presented. I’ll be candid: I doubt the research is going to compel me to believe that newer code is inherently buggier. It may compel me to confirm my existing belief that testing software in the field is one good method to find some classes of bugs.
I guess so, it’s a bit dependent on where we say the discussion starts - three things are relevant; RFL, which is not a wholesale rewrite, a wholesale rewrite of the Linux kernel, and Netscape. RFL is not about replacing the entire Linux kernel, although perhaps “codebase” here refers to some sort of unit, like a driver. Netscape wanted a wholesale rewrite, based on the linked post, so perhaps that’s what’s really “the single worst strategic mistake that any software company can make”, but I wonder what the boundary here is? Also, the article immediately mentions that Microsoft tried to do this with Word but it failed, but that Word didn’t suffer from this because it was still actively developed - I wonder if it really “failed” just because pyramid didn’t become the new Word? Did Microsoft have some lessons learned, or incorporate some of that code? Dunno.
I think I’m entirely justified when I say that the post is all emotional/intuitive appeals and rhetoric, and that it makes empirical claims without justification.
This is rhetoric. These are unsubstantiated empirical claims. The article is all of this. It’s fine as an interesting, thought provoking read that gets to the root of our intuitions, but I think anyone can dismiss it pretty easily since it doesn’t really provide much in the form of an argument.
Again, totally unsubstantiated. I have MANY reasons to believe that, it is simply question begging to say otherwise.
That’s all this post is: over and over again making empirical claims with no evidence, and question begging.
We can discuss the risks and benefits, I’d advocate for that. This article posted doesn’t advocate for that. It’s rhetoric.
This is a truism. It is survivorship bias: if the code was buggy, it would eventually be found and fixed. So, all things being equal, newer code is riskier than old code. But it’s also been empirically shown that using Rust for new code is not “all things being equal”. Google showed that new code in Rust is as reliable as old code in C. Which is good news: you can use old C code from new Rust projects without the risk that comes from new C code.
Yeah, this is what I’ve been saying (not sure if you’d meant to respond to me or the parent, since we agree) - the issue isn’t “new” vs “old” it’s things like “reviewed vs unreviewed” or “released vs unreleased” or “tested well vs not tested well” or “class of bugs is trivial to express vs class of bugs is difficult to express” etc.
Was restating your thesis in the hopes of making it clearer.
I don’t disagree that the rewards can outweigh the risks, and in this case I think there’s a lot of evidence that suggests that memory safety as a default is really important for all sorts of reasons. Let alone the many other PL developments that make Rust a much more suitable language to develop in than C.
That doesn’t mean the risks don’t exist, though.
Nobody would call an old codebase with a handful of fixes a new codebase, at least not in the contexts in which those terms have been used here.
How many lines then?
It’s a Ship of Theseus: at no point can you call it a “new” codebase, but after a period of time it could be completely different code. I have a C program I’ve been using and modifying for 25 years. At any given point, it would have been hard to say “this is now a new codebase”, yet not one line of code in the project is the same as when I started (even though it does the same thing it always has).
I don’t see the point in your question. It’s going to depend on the codebase, and on the nature of the changes; it’s going to be nuanced, and subjective at least to some degree. But the fact that it’s prone to subjectivity doesn’t mean that you get to call an old codebase with a single fixed bug a new codebase, without some heavy qualification which was lacking.
If it requires all of that nuance and context maybe the issue isn’t what’s “old” and what’s “new”.
I don’t follow, to me that seems like a non-sequitur.
What’s old and new is poorly defined and yet there’s an argument being made that “old” and “new” are good indicators of something. If they’re so poorly defined that we have to bring in all sorts of additional context like the nature of the changes, not just when they happened or the number of lines changed, etc, then it seems to me that we would be just as well served to throw away the “old” and “new” and focus on that context.
I feel like enough people would agree more-or-less on what was an “old” or “new” codebase (i.e. they would agree given particular context) that they remain useful terms in a discussion. The general context used here is apparent (at least to me) given by the discussion so far: an older codebase has been around for a while, has been maintained, has had kinks ironed out.
There’s a really important distinction here though. The point is to argue that new projects will be less stable than old ones, but you’re intuitively (and correctly) bringing in far more important context - maintenance, testing, battle testing, etc. If a new implementation has a higher degree of those properties then it being “new” stops being relevant.
Ok, but:
My point was that this statement requires a definition of “new codebase” that nobody would agree with, at least in the context of the discussion we’re in. Maybe you are attacking the base proposition without applying the surrounding context, which might be valid if this were a formal argument and not a free-for-all discussion.
I think that it would be considered no longer new if it had had significant battle-testing, for example.
FWIW the important thing in my view is that every new codebase is a potential old codebase (given time and care), and a rewrite necessarily involves a step backwards. The question should probably not be, which is immediately better?, but, which is better in the longer term (and by how much)? However your point that “new codebase” is not automatically worse is certainly valid. There are other factors than age and “time in the field” that determine quality.
Methodologies don’t matter for quality of code. They could be useful for estimates, cost control, figuring out whom you shall fire etc. But not for the quality of code.
You’re suggesting that the way you approach programming has no bearing on the quality of the produced program?
I’ve never observed a programmer become better or worse by switching methodology. Dijkstra would not have become better if you made him do daily standups or go through code reviews.
There are ways to improve your programming by choosing a different approach, but these are very individual. Methodology is mostly a beancounting tool.
When I say “methodology” I’m speaking very broadly - simply “the approach one takes”. This isn’t necessarily saying that any methodology is better than any other. The way I approach a task today is better, I think, than the way I would have approached that task a decade ago; my methodology has changed, the way I think has changed. Perhaps that might mean I write more tests, or I test earlier, but it may mean exactly the opposite, and my methods may only work best for me.
I’m not advocating for “process” or ubiquity, only that the approach one takes may improve over time, which I suspect we would agree on.
If you take this logic to its end, you should never create new things.
At one point in time, Linux was also the new kid on the block.
The best time to plant a tree is 30 years ago. The second best time is now.
I don’t think Joel Spolsky was ever a Netscape developer. He was a Microsoft developer who worked on Excel.
My mistake! The article contained a bit about Netscape and I misremembered it
How many of those lines are part of the core? My understanding was that the overwhelming majority was driver code. There may not be that much core subsystem code to rewrite.
For a previous project, we included a minimal Linux build. It was around 300 KLoC, which included networking and the storage stack, along with virtio drivers.
That’s around the size a single person could manage and quite easy with a motivated team.
If you started with DPDK and SPDK then you’d already have filesystems and a copy of the FreeBSD network stack to run in isolated environments.
Once many drivers share common Rust wrappers over core subsystems, you could flip it and write the subsystem itself in Rust, then expose a C interface for the rest.
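Roughly what that flip looks like, as a hedged sketch (the function name and signature are invented for illustration, not anything from the kernel tree): the subsystem logic becomes ordinary Rust, and the existing C callers keep calling a plain C symbol.

```rust
// Hypothetical example: a bit of subsystem logic implemented in Rust but
// exported with C linkage, so unconverted C code can keep calling it.
// In a real kernel build this would live behind the kernel's own bindings
// and #![no_std]; here it's just a plain library sketch.

#[no_mangle]
pub extern "C" fn subsys_sum_bytes(data: *const u8, len: usize) -> u64 {
    if data.is_null() {
        return 0;
    }
    // SAFETY: the C caller promises `data` points to `len` readable bytes.
    let bytes = unsafe { core::slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| u64::from(b)).sum()
}
```

The C side would just see a normal uint64_t subsys_sum_bytes(const uint8_t *data, size_t len) prototype, which is what makes this kind of incremental replacement plausible.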
Oh sure, that would be my plan as well. And I bet some subsystem maintainers see this coming, and resist it for reasons that aren’t entirely selfless.
That’s pretty far into the future, both from a maintainer acceptance PoV and from a rustc_codegen_gcc and/or gccrs maturity PoV.
Sure. But I doubt I’ll be running a different kernel 10y from now.
And like us, those maintainers are not getting any younger, and if they need a hand, I am confident I’ll get up to speed faster with a strict type checker.
I am also confident nobody in our office would be able to help out with C at all.
This cannot possibly be true.
It’s the largest collaborative open source OS kernel project in human history.
It’s been described as such based purely on the number of unique human contributions to it
I would expect Wikipedia should be bigger 🤔
I see that Drew proposes a new OS in that linked article, but I think a better proposal in the same vein is a fork. You get to keep Linux, but you can start porting logic to Rust unimpeded, and it’s a manageable amount of work to keep porting upstream changes.
Remember when libav forked from ffmpeg? Michael Niedermayer single-handedly ported every single libav commit back into ffmpeg, and eventually, ffmpeg won.
At first there will be extremely high C percentage, low Rust percentage, so porting is trivial, just git merge and there will be no conflicts. As the fork ports more and more C code to Rust, however, you start to have to do porting work by inspecting the C code and determining whether the fixes apply to the corresponding Rust code. However, at that point, it means you should start seeing productivity gains, community gains, and feature gains from using a better language than C. At this point the community growth should be able to keep up with the extra porting work required. And this is when distros will start sniffing around, at first offering variants of the distro that uses the forked kernel, and if they like what they taste, they might even drop the original.
I genuinely think it’s a strong idea, given the momentum and potential amount of labor Rust community has at its disposal.
I think the competition would be great, especially in the domain of making it more contributor friendly to improve the kernel(s) that we use daily.
I certainly don’t think this is impossible, for sure. But the point ultimately still stands: Linux kernel devs don’t want a fork. They want Linux. These folks aren’t interested in competing, they’re interested in making the project they work on better. We’ll see if some others choose the fork route, but it’s still ultimately not the point of this project.
While I don’t personally want to make a new OS, I’m not sure I actually want to work on Linux. Most of the time I strive for portability, and so abstract myself from the OS whenever I can get away with it. And when I can’t, I have to say Linux’s API isn’t always that great, compared to what the BSDs have to offer (epoll vs kqueue comes to mind). Most annoying though is the lack of documentation for the less used APIs: I’ve recently worked with Netlink sockets, and for the proc stuff so far the best documentation I found was the freaking source code of a third party monitoring program.
I was shocked. Complete documentation of the public API is the minimum bar for a project as serious as the Linux kernel. I can live with an API I don’t like, but lack of documentation is a deal breaker.
I think they mean that Linux kernel devs want to work on the Linux kernel. Most (all?) R4L devs are long time Linux kernel devs. Though, maybe some of the people resigning over LKML toxicity will go work on Redox or something…
That’s what I was saying, yes.
I’m talking about the people who develop the Linux kernel, not people who write userland programs for Linux.
Re-implementing the kernel ABI would be a ton of work for little gain if all they wanted was to upstream the work on new hardware drivers that is already done - and then eventually start re-implementing bits that need to be revised anyway.
If the singular required Rust toolchain didn’t feel like such a ridiculous-to-bootstrap, 500-ton LLVM clown car, I would agree with this statement without reservation.
Would zig be a better starting place?
Zig is easier to implement (and I personally like it as a language) but doesn’t have the same safety guarantees and strong type system that Rust does. It’s a give and take. I actually really like Rust and would like to see a proliferation of toolchain options, such as what’s in progress in GCC land. Overall, it would just be really nice to have an easily bootstrapped toolchain that a normal person can compile from scratch locally, although I don’t think it necessarily needs to be the default, or that using LLVM generally is an issue. However, it might be possible that no matter how you architect it, Rust might just be complicated enough that any sufficiently useful toolchain for the language could just end up being a 500 ton clown car of some kind anyways.
Depends on which parts of GP’s statement you care about: LLVM or bootstrapping. Zig is still depending on LLVM (for now), but it is no longer bootstrappable in a limited number of steps (because they switched from a bootstrap C++ implementation of the compiler to keeping a compressed WASM build of the compiler as a blob).
Yep, although I would also add it’s unfair to judge Zig in any case on this matter now given it’s such a young project that clearly is going to evolve a lot before the dust begins to settle (Rust is also young, but not nearly as young as Zig). In ten to twenty years, so long as we’re all still typing away on our keyboards, we might have a dozen Zig 1.0 and a half dozen Zig 2.0 implementations!
Yeah, the absurdly low code quality and toxic environment make me think that Linux is ripe for disruption. Not like anyone can produce a production kernel overnight, but maybe a few years of sustained work might see a functional, production-ready Rust kernel for some niche applications and from there it could be expanded gradually. While it would have a lot of catching up to do with respect to Linux, I would expect it to mature much faster because of Rust, because of a lack of cruft/backwards-compatibility promises, and most importantly because it could avoid the pointless drama and toxicity that burn people out and prevent people from contributing in the first place.
What is this, some kind of new meme? Where did you hear it first?
From the thread in OP, if you expand the messages, there is wide agreement among the maintainers that all sorts of really badly designed and almost impossible to use (safely) APIs ended up in the kernel over the years because the developers were inexperienced and kind of learning kernel development as they went. In retrospect they would have designed many of the APIs very differently.
Someone should compile everything to help future OS developers avoid those traps! There are a lot of existing non-POSIX experiments, though.
It’s based on my forays into the Linux kernel source code. I don’t doubt there’s some quality code lurking around somewhere, but the stuff I’ve come across (largely filesystem and filesystem adjacent) is baffling.
Seeing how many people are confidently incorrect about Linux maintainers only caring about their job security and keeping code bad to make it a barrier to entry has, if nothing else, taught me that online discussions are a huge game of Chinese whispers where most participants don’t have a clue what they are talking about.
I doubt that maintainers are “only caring about their job security and keeping code bad” but, with all due respect, you’re also just pulling arguments out of thin air right now. What I do believe is what we have seen: pretty toxic responses from some people and a whole lot of issues with trying to move forward.
Huh, I’m not seeing any claim to this end from the GP, or did I not look hard enough? At face value, saying that something has an “absurdly low code quality” does not imply anything about nefarious motives.
I can personally attest to having never made that specific claim.
Indeed that remark wasn’t directly referring to GP’s comment, but rather to the range of confidently incorrect comments that I read in the previous episodes, and to the “gatekeeping greybeards” theme that can be seen elsewhere on this page. First occurrence, found just by searching for “old”: Linux is apparently “crippled by the old guard of C developers reluctant to adopt new tech”, to which GP replied in agreement in fact. Another one, maintainers don’t want to “do the hard work”.
Still, in GP’s case the Chinese whispers have reduced “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” to “absurdly low quality”. To which I ask, which is more likely: 1) that 30 million lines of code contain various levels of technical debt of which maintainers are aware, and that said maintainers are worried even about code where the technical debt is real but not causing substantial issues in practice? Or 2) that a piece of software gets to run on literally billions of devices of all sizes and prices just because it’s free and in spite of its “absurdly low quality”?
Linux is not perfect, neither technically nor socially. But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face.
GP here: I probably should have said “shockingly” rather than “absurdly”. I didn’t really expect to get lawyered over that one word, but yeah, the idea was that for a software that runs on billions of devices, the code quality is shockingly low.
Of course, this is plainly subjective. If your code quality standards are a lot lower than mine then you might disagree with my assessment.
That said, I suspect adoption is a poor proxy for code quality. Internet Explorer was widely adopted and yet it’s broadly understood to have been poorly written.
I’m sure self-righteousness could get you to the same place, but in my case I arrived by way of experience. You can relax, I wasn’t attacking Linux—I like Linux—it just has a lot of opportunity for improvement.
I guess I’ve seen the internals of too much proprietary software now to be shocked by anything about Linux per se. I might even argue that the quality of Linux is surprisingly good, considering its origins and development model.
I think I’d lawyer you a tiny bit differently: some of the bugs in the kernel shock me when I consider how many devices run that code and fulfill their purposes despite those bugs.
FWIW, I was not making a dig at open source software, and yes plenty of corporate software is worse. I guess my expectations for Linux are higher because of how often it is touted as exemplary in some form or another. I don’t even dislike Linux, I think it’s the best thing out there for a huge swath of use cases—I just see some pretty big opportunities for improvement.
Or actual benchmarks: the performance the Linux kernel leaves on the table in some cases is absurd. And sure it’s just one example, but I wouldn’t be surprised if it was representative of a good portion of the kernel.
Well not quite but still “considered broken beyond repair by many people related to life time management” - which is definitely worse than “hard to formalize” when “the way ever[y]body does it” seems to vary between each user.
I love Rust but still, we’re talking of a language which (for good reasons!) considers doubly linked lists unsafe. Take an API that gets a 4 on Rusty Russell’s API design scale (“Follow common convention and you’ll get it right”), but which was designed for a completely different programming language if not paradigm, and it’s not surprising that it can’t easily be transformed into a 9 (“The compiler/linker won’t let you get it wrong”). But at the same time there are a dozen ways in which, according to the same scale, things could actually be worse!
What I dislike is that people are seeing “awareness of complexity” and the message they spread is “absurdly low quality”.
Note that doubly linked lists are not a special case at all in Rust. All the other common data structures like Vec, HashMap, etc. also need unsafe code in their implementation. Implementing these data structures in Rust, and writing unsafe code in general, is indeed roughly a 4. But these are all already implemented in the standard library, with an API that actually is at a 9. And std::collections::LinkedList is constructive proof that you can have a safe Rust abstraction for doubly linked lists.

Yes, the implementation could have bugs, thus making the abstraction leaky. But that’s the case for literally everything, down to the hardware that your code runs on.
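For what it’s worth, here’s a tiny illustration of that split (my own sketch, not kernel code): the user of std’s list never touches unsafe, even though the implementation underneath is full of it.

```rust
use std::collections::LinkedList;

fn main() {
    // Entirely safe user-facing API; all the unsafe lives inside std.
    let mut list: LinkedList<u32> = LinkedList::new();
    list.push_back(1);
    list.push_back(2);
    list.push_front(0);
    assert_eq!(list.iter().copied().collect::<Vec<_>>(), vec![0, 1, 2]);
    println!("{:?}", list);
}
```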
You’re absolutely right that you can build abstractions with enough effort.
My point is that if a doubly linked list is (again, for good reasons) hard to make into a 9, a 20-year-old API may very well be even harder. In fact, std::collections::LinkedList is safe but still not great (for example the cursor API is still unstable); and being in std, it was designed/reviewed by some of the most knowledgeable Rust developers, sort of by definition. That’s the conundrum that maintainers face and, if they realize that, it’s a good thing. I would be scared if maintainers handwaved that away.

Bugs happen, but if the abstraction is downright wrong then that’s something I wouldn’t underestimate. A lot of the appeal of Rust in Linux lies exactly in documenting/formalizing these unwritten rules, and wrong documentation can be worse than no documentation (cue the negative parts of the API design scale!); even more so if your documentation is a formal model like a set of Rust types and functions.
That said, the same thing can happen in a Rust-first kernel, which will also have a lot of unsafe code. And it would be much harder to fix it in a Rust-first kernel, than in Linux at a time when it’s just feeling the waters.
At the same time, it was included almost as like, half a joke, and nobody uses it, so there’s not a lot of pressure to actually finish off the cursor API.
It’s also not the kind of linked list the kernel would use, as they’d want an intrusive one.
And yet, safe to use doubly linked lists written in Rust exist. That the implementation needs unsafe is not a real problem. That’s how we should look at wrapping C code in safe Rust abstractions.
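As a rough sketch of that pattern (using libc’s strlen as a stand-in for whatever existing C function you’d actually wrap): the unsafe call is confined to one small function whose safe signature encodes the preconditions.

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

extern "C" {
    // A real libc function, standing in here for arbitrary existing C code.
    fn strlen(s: *const c_char) -> usize;
}

/// Safe wrapper: a &CStr is guaranteed non-null and NUL-terminated,
/// so the C function's preconditions are met by construction.
fn c_len(s: &CStr) -> usize {
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    assert_eq!(c_len(s), 5);
    println!("wrapped C call looks safe from here");
}
```

Callers of c_len can’t violate the C function’s contract without writing unsafe themselves, which is the whole point of pushing unsafe down into a small, reviewed boundary.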
The whole comment you replied to, after the one sentence about linked lists, is about abstractions. And abstractions are rarely going to be easy, and sometimes could be hardly possible.
That’s just a fact. Confusing this fact for something as hyperbolic as “absurdly low quality” is a stunning example of the Dunning-Kruger effect, and frankly insulting as well.
I personally would call Linux low quality because many parts of it are buggy as sin. My GPU stops working properly literally every other time I upgrade Linux.
No one is saying that Linux is low quality because it’s hard or impossible to abstract some subsystems in Rust, they’re saying it’s low quality because a lot of it barely works! I would say that your “Chinese whispers” misrepresents the situation and what people here are actually saying. “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” doesn’t apply if no one can tell you how to use an API, and everyone does it differently.
I agree, Linux is the worst of all kernels.
Except for all the others.
Actually, the NT kernel of all things seems to have a pretty good reputation, and I wouldn’t dismiss the BSD kernels out of hand. I don’t know which kernel is better, but it seems you do. If you could explain how you came to this conclusion that would be most helpful.
NT gets a bad rap because of the OS on top of it, not because it’s actually bad. NT itself is a very well-designed kernel.
*nod* I haven’t been a Windows person since shortly after the release of Windows XP (i.e. the first online activation DRM’d Windows) but, whenever I see glimpses of what’s going on inside the NT kernel in places like Project Zero: The Definitive Guide on Win32 to NT Path Conversion, it really makes me want to know more.
More likely a fork that gets rusted from the inside out
Somewhere else it was mentioned that most developers in the kernel could just not be bothered with checking for basic things.
Nobody is forcing any of these people to do this.
The generally accepted definition of “hit piece” includes an attempt to sway public opinion by publishing false information. Leaving aside the fact that the user who linked this story did not publish it, and deferring the discussion of who may or may not pay them to post, that is a significant claim that requires significant evidence.
So, please share your evidence… what’s the false information here, and how exactly is @freddyb attempting to sway public opinion? To what end? Be very specific, please.
I don’t think “hit piece” implies false information, just a lopsided sample of the information available.
That’s a fair point. I should have said “false or misleading.”
So I’ll amend my question, which I doubt will get answered at any rate:
@ecksdee: So, please share your evidence… what’s the false or misleading information here, and how exactly is @freddyb attempting to sway public opinion? To what end? Be very specific, please.
If you look at the history of soatok’s blog on lobsters, it is pretty obvious that sooner or later someone from this community would post this entry.
Now you have to show me how Mozilla is related to Signal in any positive or negative way. You yourself seem to have strong feelings towards Mozilla, at least.
This sounds a bit strong. Cloudflare’s WARP uses QUIC, noting valid issues with Wireguard:
https://blog.cloudflare.com/masque-building-a-new-protocol-into-cloudflare-warp/
IPSec can also be a valid basis, used to power cloud VPNs, e.g. https://aws.amazon.com/what-is/ipsec/
I stick with Wireguard because I like its simplicity and silence, but those are just tradeoffs. There are other good choices too.
In fairness, Cloudflare saying “we built our own VPN protocol” is a lot different from some random VPN company saying it.
Yes, though the comment was regarding what a VPN services uses, rather than what it has designed. A VPN provider using MASQUE or IPSec is reasonable.
IPSec is raising some eyebrows in 2025 imho. MASQUE is really neat actually :)
IPSec was raising my eyebrows in 2005. I think now we know that developing an IPSec profile we can all understand as secure is a really tall order, and outside of such a very well understood and narrowly specified profile, IPSec is not really actually reasonable.
Out of interest, are there any sources you recommend reading wrt IPSec weaknesses?
it’s worse
Cloudflare’s business model is MITMing the internet. We should be especially skeptical of any DIY encryption protocols from them.
Their business greatly depends on being considered secure. Why would they risk that image by adding a backdoor when everyone is already giving them their data willingly?
I definitely consider their MITM an issue with security, privacy, and infrastructure centralization, but it doesn’t mean they can’t produce good tech.
I’m still skeptical of MASQUE as one should be of anything new related to security.
+1 to scepticism when it comes to security.
MASQUE is going through IETF, and is built on HTTP/3.
This is a great read. Thank you for sharing it.
Some stream of consciousness, running notes, as I read through:
Narrator: I wouldn’t.
Semgrep looks like a free trial. So “free as in the taster of beer that the bartender will pour you so you can decide whether you want to buy a whole one”. But in any case, I don’t think charging me for a product is really enshittification.
I’ve done a couple crypto audits and (many) general network security audits, and this is a fundamental truth. Both for missing sections and for present-but-suspiciously-under-detailed sections. I like to map documentation sections to the source tree, then see what parts of the source tree have missing or superficial docs.
I like the way this person thinks. I haven’t spent enough time with the Signal source code to say whether I agree with these findings. And I still don’t like that you need a phone and a phone number to bootstrap Signal. But this was a well-written and well-explained piece about why the cryptography behind Signal’s protocol is very likely as solid as we expect it to be.
The home page definitely looks like that, but you can just pip install the thing. Some features (cross-file analysis, and some data flow stuff e.g. the more advanced taint analysis) are locked, but the static pattern matching and some of the taint analysis is available out of the box. I assume some of the current experiments might be locked to pro users if they get stabilised.
I would assume the fear is less about a paid product and more investor / exit-driven priorities.
I would further assume they selected a free product so readers can reproduce without having to pay for a possibly expensive license.
I absolutely didn’t pick up on that. I’d agree that the risk of the company changing the trial offering so that people can’t reproduce this analysis due to investor pressure is at least adjacent to enshittification. But it’s not the bait and switch that I usually associate with that.
It’s really a small point. But the way the writer phrased it made me briefly ponder the difference between a free trial + up-sell and “enshittification”.