This is a big change, and it shows an impressive level of courage. I love to see that the leadership at Ubuntu is unafraid to be bold and question traditional ways of doing things (even though that sometimes produces results I find disagreeable, e.g. snap packages).
I have recently had to run a vulnerability scanner against an embedded Linux system, and was surprised to find CVEs relating to memory safety bugs in recent versions of old GNU tools you’d think would have those sorts of issues ironed out by now, like GNU patch (and various Busybox tools, for that matter). From an “exploitable memory safety bug” perspective, I think I would trust even a relatively young project like uutils over old, battle-tested C software.
Yeah, that part sucks. And it’s a huge loss for GNU as a project.
But I don’t feel like GNU has been a good steward of such vital infrastructure; it shows many signs of a broken project. Projects will have critical bug fixes committed to their repositories but then fail to cut a release for a long time. GNU M4 was incompatible with GNU glibc for years: glibc changed something that caused M4 to stop compiling, a fix was committed to M4’s repo, but no new release of M4 was made.
GNU as an operating system is stuck in the 80s; the only reason GNU has seen any success in the past 30 years is that others have picked up the slack for core operating system components like kernels, init systems, graphical environments, etc.
The GNU “operating system” delivers an excellent compiler collection, an okay libc with significant issues such as incompatibility with static linking and an intentional lack of “forward compatibility”, an atrociously terrible build system, and an alright but stagnant set of command-line utilities and a shell. It’s not a project that I feel deserves a lot of fealty. Although I do think it’s a tragedy that the project turned out this way, I think it’s well past time to look at it critically, pick up the pieces with value (such as the compiler collection), and abandon the pieces without (such as the build system, arguably the coreutils, and maybe eventually the libc; a man can dream).
Sounds like we agree, I appreciate the examples :)
I’m pro replacing or improving software, especially of core utility to a system. I’m also pro having such core code copyleft.
Broken leadership. Stallman is correct about freedom but wrong about product management. He ends up being a huge distraction (his comments about Minsky in particular).
There have been multiple examples of him not allowing features to be merged because they could enable closed competition, where the GNU alternative doesn’t exist or isn’t as good. In multiple cases he has had to relent after months or years because a different competitor implemented the same feature and it’s become obvious how far behind he is.
Making it cumbersome to add extensions to GCC because that could allow non-free stuff to be built on top of it. Clang and LLVM made this trivial from the start. Eventually GCC allowed easier extensions.
Allowing Emacs to better integrate with LLVM, the Debug Adapter Protocol, and tree-sitter. He stalled all of these projects, and they were all eventually merged.
Insisting that Emacs use bzr and his own weird in-house GitHub clone. Emacs is on git after years of messing with bzr, during which most devs used a bzr-git bridge.
Stalling any integration with modern CI for Emacs.
In a lot of cases he is fighting against GPL licensed alternatives (bzr/git), that aren’t under the GNU umbrella. Sometimes he’s fighting against more permissively licensed software.
If free software isn’t made better, it will lack adoption.
No matter how many times Go people try to gaslight me, I will not accept this approach to error-handling as anything approaching good. Here’s why:
Go’s philosophy regarding error handling forces developers to incorporate errors as first class citizens of most functions they write.
[…]
Most linters or IDEs will catch that you’re ignoring an error, and it will certainly be visible to your teammates during code review.
Why must you rely on a linter or IDE to catch this mistake? Because the compiler doesn’t care if you do this.
If you care about correctness, you should want a compiler that considers handling errors part of its purview. This approach is no better than a dynamic language.
The fact that the compiler doesn’t catch it when you ignore an error return has definitely bitten me before. doTheThing() on its own looks like a perfectly innocent line of code, and the compiler won’t even warn on it, but it might be swallowing an error.
I learned that the compiler doesn’t treat unused function results as errors while debugging a bug in production; an operation which failed was treated as if it succeeded and therefore wasn’t re-tried as it should. I had been programming in Go for many years at that point, but it had never occurred to me that silently swallowing an error in Go could possibly be so easy as just calling a function in the normal way. I had always done _ = doTheThing() if I needed to ignore an error, out of the assumption that of course unused error returns is a compile error.
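To make that concrete, here’s a minimal sketch of the failure mode (doTheThing is a stand-in, not the code from the incident above):

```go
package main

import (
	"errors"
	"fmt"
)

// doTheThing is a stand-in for any operation that can fail.
func doTheThing() error {
	return errors.New("the thing failed")
}

func main() {
	// Compiles cleanly with `go build`: the returned error is dropped on the
	// floor and execution continues as if the call had succeeded.
	doTheThing()

	// Explicitly discarding the error looks the same to the compiler, but at
	// least signals intent to a human reader.
	_ = doTheThing()

	fmt.Println("carrying on as if nothing went wrong")
}
```

Both forms compile identically; only the second one tells a reviewer the discard was deliberate.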
Because errors aren’t special to the Go compiler, and Go doesn’t yell at you if you ignore any return value. It’s probably not the most ideal design decision, but in practice it’s not really a problem. Most functions return something that you have to handle, so when you see a naked function call it stands out like a sore thumb. I obviously don’t have empirical evidence, but in my decade and a half of using Go collaboratively, this has never been a real pain point whether with junior developers or otherwise. It seems like it mostly chafes people who already had strong negative feelings toward Go.
Is there a serious argument behind the sarcasm as to how this is comparable to array bounds checks? Do you have any data about the vulnerabilities that have arisen in Go due to unhandled errors?
Because the programmer made an intentional decision to ignore the error. It won’t let you call a function that returns an error without assigning it to something; that would be a compile time error. If the programmer decides to ignore it, that’s on the programmer (and so beware 3rd party code).
Now perhaps it might be a good idea for the compiler to insert code when assigned to _ that panics if the result is non-nil. Doesn’t really help at runtime, but at least it would fail loudly so they could be found.
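You can approximate that in userland today; a minimal sketch, with hypothetical must0/must helper names (this is not a compiler feature):

```go
package main

import "errors"

// must0 and must are hypothetical helpers sketching a userland approximation
// of the idea above: fail loudly instead of letting an ignored error vanish.
func must0(err error) {
	if err != nil {
		panic(err)
	}
}

func must[T any](v T, err error) T {
	must0(err)
	return v
}

func doTheThing() error { return errors.New("boom") }

func main() {
	// Instead of `_ = doTheThing()`, which silently discards the error:
	must0(doTheThing()) // panics here, so the failure is at least visible
}
```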
I’ve spent my own share of time tracking down bugs because something appeared to be working but the error/exception was swallowed somewhere without a trace.
huh… TIL. I always assumed you needed to use the result, probably because assigning only one of a multi-value return (instead of both) is a compile-time error. Thanks.
Because the programmer made an intentional decision to ignore the error.
f.Write(s)
is not an intentional decision to ignore the error. Neither is
_, err := f.Write(s)
Yet the go compiler will never flag the first one, and may not flag the second one depending on err being used elsewhere in the scope (e.g. in the unheard of case where you have two different possibly error-ing calls in the same scope and you check the other one).
Yet the go compiler will never flag the first one, and may not flag the second one depending on err being used elsewhere in the scope (e.g. in the unheard of case where you have two different possibly error-ing calls in the same scope and you check the other one).
_, err := f.Write(s) is a compiler error if err already exists (no new variables on left side of :=), and if err doesn’t already exist and you aren’t handling it, you get a different error (declared and not used: err). I think you would have to assign a new variable t, err := f.Write(s) and then take care to handle t in order to silently ignore the err, but yeah, with some work you can get Go to silently swallow it in the variable declaration case.
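A small sketch of the cases discussed in this subthread, assuming a throwaway temp file just to have something that returns (n, err):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	f, err := os.CreateTemp("", "demo")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	defer os.Remove(f.Name())

	// _, err := f.Write([]byte("x"))
	// ^ compile error: "no new variables on left side of :=", because err
	//   already exists in this scope.
	//
	// If err did not exist yet and were never read afterwards, the compiler
	// would instead complain "declared and not used: err".

	// The silent-swallow case: err is reassigned by a second call before the
	// first result is checked, so the first error is lost, and the compiler
	// is satisfied because err is "used" below.
	_, err = f.Write([]byte("a"))
	_, err = f.Write([]byte("b"))
	if err != nil {
		fmt.Println("only the second Write's error is ever seen:", err)
	}
}
```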
Because they couldn’t be arsed to add this in v0, and they can’t be arsed to work on it for cmd/vet, and there are third-party linters which do it, so it’s all good. Hopefully you don’t suffer from unknown unknowns and you know you should use one of these linters before you get bit, and they don’t get abandoned.
(TBF you need both that and errcheck, because the unused store one can’t catch ignoring return values entirely).
I don’t really care. Generally speaking, I would expect compilers to either warn or error on an implicitly swallowed error. The Go team could fix this issue by either adding warnings for this case specifically (going back on their decision to avoid warnings), or by making it a compile error, I don’t care which.
This is slightly more nuanced. The Go project ships both go build and go vet. go vet is roughly isomorphic to how Rust handles warnings (warnings apply to you, not your dependencies).
So there would be nothing wrong per se if this was caught by go vet and not go build.
What is an issue, though, is that this isn’t caught by first-party go vet, and requires third-party errcheck.
Meh plenty of code bases don’t regularly run go vet. This is a critical enough issue that it should be made apparent as part of any normal build, either as a warning or an error.
If you care about correctness, you should want a compiler that considers handling errors part of its purview. This approach is no better than a dynamic language.
I agree with you that it’s better for this to be a compiler error, but (1) I’ll never understand why this is such a big deal–I’m sure it’s caused bugs, but I don’t think I’ve ever seen one in the dozen or so years of using Go and (2) I don’t think many dynamic languages have tooling that could catch unhandled errors so I don’t really understand the “no better than a dynamic language” claim. I also suspect that the people who say good things about Go’s error handling are making a comparison to exceptions in other languages rather than to Rust’s approach to errors-as-values (which has its own flaws–no one has devised a satisfactory error handling system as far as I’m aware).
The fact that these bugs seem so rare and that the mitigation seems so trivial makes me feel like this is (yet another) big nothingburger.
The most common response to my critique of Go’s error-handling is always some variation on “this never happens”, which I also do not accept because I have seen this happen. In production. So good for you, if you have not; but I know from practice this is an issue of concern.
I don’t think many dynamic languages have tooling that could catch unhandled errors so I don’t really understand the “no better than a dynamic language” claim.
Relying on the programmer to comprehensively test inputs imperatively in a million little checks at runtime is how dynamic languages handle errors. This is how Go approached error-handling, with the added indignity of unnecessary verbosity. At least in Ruby you can write single-line guard clauses.
I don’t really follow your dismissal of Rust since you didn’t actually make an argument, but personally I consider Rust’s Option type the gold standard of error-handling so far. The type system forces you to deal with the possibility of failure in order to access the inner value. This is objectively better at preventing “trivial” errors than what Go provides.
The most common response to my critique of Go’s error-handling is always some variation on “this never happens”, which I also do not accept because I have seen this happen. In production. So good for you, if you have not; but I know from practice this is an issue of concern.
I’m sure it has happened before, even in production. I think most places run linters in CI which default to checking errors, and I suspect if someone wasn’t doing this and experienced a bug in production, they would just turn on the linter and move on with life. Something so exceedingly rare and so easily mitigated does not meet my threshold for “issue of concern”.
Relying on the programmer to comprehensively test inputs imperatively in a million little checks at runtime is how dynamic languages handle errors
That’s how all languages handle runtime errors. You can’t handle them at compile time. But your original criticism was that Go is no better than a dynamic language with respect to detecting unhandled errors, which seems untrue to me because I’m not aware of any dynamic languages with these kinds of linters. Even if such a linter exists for some dynamic language, I’m skeptical that they’re so widely used that it merits elevating the entire category of dynamic languages.
I don’t really follow your dismissal of Rust since you didn’t actually make an argument, but personally I consider Rust’s Option type the gold standard of error-handling so far. The type system forces you to deal with the possibility of failure in order to access the inner value. This is objectively better at preventing “trivial” errors than what Go provides.
I didn’t dismiss Rust, I was suggesting that you may have mistaken the article as some sort of criticism of Rust’s error handling. But I will happily register complaints with Rust’s error handling as well–while it does force you to check errors and is strictly better than Go in that regard, this is mostly a theoretical victory insofar as these sorts of bugs are exceedingly rare in Go even without strict enforcement, and Rust makes you choose between the verbosity of managing your own error types, debugging macro expansion errors from crates like thiserror, or punting altogether and doing the bare minimum to provide recoverable error information. I have plenty of criticism for Go’s approach to error handling, but pushing everything into an error interface and switching on the dynamic type gets the job done.
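For anyone unfamiliar with that idiom, here’s a minimal sketch of “switching on the dynamic type” using the standard library (the wrapping message and file name are arbitrary):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Open("does-not-exist.txt")

	// Wrapping with %w preserves the original error for later inspection.
	err = fmt.Errorf("loading config: %w", err)

	// errors.As walks the wrap chain and matches the first error of the
	// requested concrete type; errors.Is matches sentinel values.
	var pathErr *fs.PathError
	switch {
	case errors.As(err, &pathErr):
		fmt.Println("path error on", pathErr.Path, ":", pathErr.Err)
	case errors.Is(err, fs.ErrPermission):
		fmt.Println("permission denied")
	default:
		fmt.Println("unexpected error:", err)
	}
}
```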
For my money, Rust has the better theoretical approach and Go has the better practical approach, and I think both of them could be significantly improved. They’re both the best I’m aware of, and yet it’s so easy for me to imagine something better (automatic stack trace annotations, capturing and formatting relevant context variables, etc). Neither of them seems so much better in relative or absolute terms that their proponents should express superiority or derision.
Fair enough. It’s a pity things like this are so difficult to answer empirically, and we must rely on our experiences. I am very curious how many orgs are bitten by this and how frequently.
Enabling a linter is different from doing “a million little checks at runtime”. This behaviour is not standard because you can use Go for many reasons other than writing production-grade services, and you don’t want to clutter your terminal with unchecked error warnings.
I admit that it would be better if this behaviour were part of go vet rather than an external linter.
The strange behaviour here is not “Go people are trying to gaslight me”, but people like you coming and complaining about Go’s error handling when you have no interest in the language at all.
Enabling a linter is different from doing “a million little checks at runtime”.
You can’t lint your way out of this problem. The Go type system is simply not good enough to encapsulate your program’s invariants, so even if your inputs pass a type check you still must write lots of imperative checks to ensure correctness.
Needing to do this ad-hoc is strictly less safe than relying on the type system to check this for you. err checks are simply one example of this much larger weakness in the language.
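As an illustration of the kind of imperative checks I mean, a minimal sketch (Config and its fields are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// The type system accepts any int/string here, so the invariants have to
// live in runtime code rather than in the types themselves.
type Config struct {
	Host    string
	Port    int
	Retries int
}

func (c Config) Validate() error {
	if c.Host == "" {
		return errors.New("host must not be empty")
	}
	if c.Port < 1 || c.Port > 65535 {
		return fmt.Errorf("port out of range: %d", c.Port)
	}
	if c.Retries < 0 {
		return fmt.Errorf("retries must be non-negative, got %d", c.Retries)
	}
	return nil
}

func main() {
	cfg := Config{Host: "example.com", Port: 0, Retries: 3}
	if err := cfg.Validate(); err != nil {
		fmt.Println("invalid config:", err)
	}
}
```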
The strange behaviour here is not “Go people are trying to gaslight me”, but people like you coming and complaining about Go’s error handling when you have no interest in the language at all.
I have to work with it professionally, so I absolutely do have an interest in this. And I wouldn’t feel the need to develop this critique of it publicly if there weren’t a constant drip feed of stories telling me how awesome this obviously poor feature is.
Your views about how bad Go’s type system is are obviously not supported by the facts, otherwise Go programs would be full of bugs (or full of minuscule imperative checks) with respect to your_favourite_language.
I understand your point about being forced to use a tool in your $job that you don’t like, that happened to me with Java, my best advice to you is to just change $job instead of complaining under unrelated discussions.
Your views about how bad Go’s type system is are obviously not supported by the facts, otherwise Go programs would be full of bugs (or full of minuscule imperative checks)
They are full of bugs, and they are full of minuscule imperative checks. The verbosity of all the if err != nil checks is one of the first things people notice. Invoking “the facts” without bringing any isn’t meaningfully different from subjective opinion.
Your comments amount to “shut up and go away” and I refuse. To publish a blog post celebrating a language feature, and to surface it on a site of professionals, is to invite comment and critique. I am doing this, and I am being constructive by articulating specific downsides to this language decision and its impacts. This is relevant information that people use to evaluate languages and should be part of the conversation.
If if err != nil checks are the “minuscule imperative checks” you complain about, I have no problem with that.
I seriously doubt that you have “facts” showing Go programs have worse technical quality (or bug counts) than any other language; at most you have anecdotes.
And the only anecdote you’ve been able to come up with so far is that you’ve found “production bugs” caused by unchecked errors that can be fixed by a linter. Being constructive would mean indicating how the language should change to address your perceived problem, not implying that the entire language should be thrown out the window. If that’s how you feel, just avoid commenting on random Go posts.
Yeah, I have seen it happen maybe twice in eight years of using Go professionally, but I have seen it complained about in online comment sections countless times. :-)
If I were making a new language today, I wouldn’t copy Go’s error handling. It would probably look more like Zig. But I also don’t find it to be a source of bugs in practice.
Everyone who has mastered a language builds up muscle memory of how to avoid the Bad Parts. Every language has them. This is not dispositive to the question of whether a particular design is good or not.
Not seeing a problem as a bug in production doesn’t tell you much. It usually just means that the developers spent more time writing tests or doing manual testing - and this is just not visible to you. The better the compiler and type-system, the fewer tests you need for the same quality.
Not seeing a problem as a bug in production doesn’t tell you much
Agreed, but I wasn’t talking about just production–I don’t recall seeing a bug like this in any environment, at any stage.
It usually just means that the developers spent more time writing tests or doing manual testing - and this is just not visible to you.
In a lot of cases I am the developer, or I’m working closely with junior developers, so it is visible to me.
The better the compiler and type-system, the fewer tests you need for the same quality.
Of course with Go we don’t need to write tests for unhandled errors any more than with Rust, we just use a linter. And even when static analysis isn’t an option, I disagree with the logic that writing tests is always slower. Not all static analysis is equal, and in many cases it’s not cheap from a developer velocity perspective. Checking for errors is very cheap from a developer velocity perspective, but pacifying the borrow checker is not. In many cases, you can write a test or two in the time it would take to satisfy rustc and in some cases I’ve even introduced bugs precisely because my attention was so focused on the borrow checker and not on the domain problem (these were bugs in a rewrite from an existing Go application which didn’t have the bugs to begin with despite not having the hindsight benefit that the Rust rewrite enjoyed). I’m not saying Rust is worse or static analysis is bad, but that the logic that more static analysis necessarily improves quality or velocity is overly simplistic, IMHO.
Of course with Go we don’t need to write tests for unhandled errors any more than with Rust, we just use a linter.
I just want to emphasize that it’s not the same thing - as you also hint at in the next sentence.
I disagree with the logic that writing tests is always slower.
I didn’t say that writing tests is always slower or that using the compiler to catch these things is necessarily always better. I’m not a Rust developer, btw, and Rust’s error handling is absolutely not the current gold standard by my own judgement.
I just want to emphasize that it’s not the same thing - as you also hint at in the next sentence.
It kind of is the same thing: static analysis. The only difference is that the static analysis is broken out into two tools instead of one, so slightly more care needs to be taken to ensure the linter is run in CI or locally or wherever appropriate. To be clear, I think Rust is strictly better for having it in the compiler–I mostly just disagree with the implications in this thread that if the compiler isn’t doing the static analysis then the situation is no better than a dynamic language.
I didn’t say that writing tests is always slower or that using the compiler to catch these things is necessarily always better.
What did you mean when you said “It usually just means that the developers spent more time writing tests or doing manual testing … The better the compiler and type-system, the fewer tests you need for the same quality.” if not an argument about more rigorous static analysis saving development time? Are we just disagreeing about “always”?
I mostly just disagree with the implications in this thread that if the compiler isn’t doing the static analysis then the situation is no better than a dynamic language.
Ah I see - that is indeed an exaggeration that I don’t share.
Are we just disagreeing about “always”?
First that, but in general it also has other disadvantages. For instance, writing tests or doing manual tests is often easy to do. Learning how to deal with a complex type system is not. Go was specifically created to get people to contribute fast.
Just one example that shows that it’s not so easy to decide which way is more productive.
Swallowing errors is the very worst option there is. Even segfaulting is better, you know at least something is up in that case.
Dynamic languages usually just throw an exception, and exceptions have way better behavior (you can’t forget them; an empty catch is a deliberate sign to ignore an error, not an implicit one like with Go). At least some handler further up will log something, and more importantly, the local block that experienced the error won’t just continue executing as if nothing happened.
Mozilla emphasized that it doesn’t sell or buy data about its users, and that it made the changes because certain jurisdictions define the term “sell” more broadly than others, incorporating the various ways by which a consumer’s personal information changes hands with another party in exchange for monetary or other benefits.
I’m not aware of ways in which my “personal information” could possibly “change hands with another party in exchange for monetary or other benefits” that I personally wouldn’t consider selling my data. I would appreciate it if Mozilla would either bring back the promise that they don’t sell my data (and then keep that promise), or explain exactly how my data “changes hands with another party in exchange for monetary or other benefits” so that I can be the judge of whether or not I consider that acceptable.
Collecting and sharing data with partners to show ads is something which I would consider to be “selling data”, FWIW.
To me, it sounds like Mozilla has realized that it’s breaking their promise to never “sell data” (in ways that its users would consider to be “selling data”) and is trying to weasel their way out of admitting that.
To me, it sounds like Mozilla has realized that it’s breaking their promise to never “sell data” (in ways that its users would consider to be “selling data”) and is trying to weasel their way out of admitting that.
They also have a very low view of the intelligence of their users if they think we’ll actually believe their excuses.
I’m not dogmatic to a fault. I will walk back my criticism if Mozilla can point to one example where “we do X and you wouldn’t describe that as selling your data but it MIGHT possibly run afoul of the CCPA’s definition of selling your data.”
I don’t think X exists. And why should I, when the CCPA’s definition sounds extremely clear-cut to me? The onus is on Mozilla to explain to me how this is more nuanced than I realize. Just give us ONE example.
I don’t understand why Mozilla needs a license to do anything with my content. What is Mozilla’s role in this relationship? My computer is running a piece of software, I input some data into the software, I ask the software to send the data to servers of my choice (for example the lobste.rs servers, when I hit “Post” after typing this comment). What part of this process requires Mozilla to have a “nonexclusive, royalty-free, worldwide license” to that content? And why did they not need to have that “nonexclusive, royalty-free, worldwide license” to that content a week ago? I would get it if it only applied while using their VPN, but it’s for Firefox too?
Why do I not need to accept a similar ToS to use e.g Curl? My relationship with Curl is exactly the same as my relationship with Firefox: I enter some data into it (via a GUI in Firefox’s case, via command-line arguments in Curl’s case), Curl/Firefox makes a request towards the servers I asked it to with the data I entered, Curl/Firefox shows me whatever the server returned. Is it Mozilla’s view that Curl is somehow infringing on my intellectual property by not obtaining a license to the data I provide?
Basically, they are trying to have some service to sell. Go to about:preferences#privacy and scroll down to “Firefox Data Collection and Use” and every section below there is about data that Firefox collects and sends to Mozilla so they can do something nominally-useful with it. In my version there’s also “Sync” and “More From Mozilla” tabs, which are even more of the same.
Someone at Mozilla has decided that the fact you don’t want to buy the services is irrelevant, they’ll just sell all that juicy data produced as a side-effect to whoever wants it. More than they already were, anyway.
I don’t understand why Mozilla needs a license to do anything with my content. What is Mozilla’s role in this relationship? My computer is running a piece of software, I input some data into the software, I ask the software to send the data to servers of my choice (for example the lobste.rs servers, when I hit “Post” after typing this comment).
Maybe they only mean inputs into Firefox itself and not the sites that you visit with Firefox. Things like Pocket, the add-on store, the password manager, and the “report broken site” form. I’m sure they could make this clearer if it’s the case, but I’m personally willing to lean towards this.
If that’s the case, it’s seriously impressive to be 2 “clarifications” in after the original announcement and still not have made that part clear. Anything that’s left unclear at this point is surely being left unclear intentionally.
Why do I not need to accept a similar ToS to use e.g Curl?
Ha. I wish I’d thought of that question.
Arguably you do have to agree to something to use curl, but it’s very minimal and certainly supports your point. Here is curl’s licence (which is not one of the standard ones), from https://curl.se/docs/copyright.html :
COPYRIGHT AND PERMISSION NOTICE
Copyright (c) 1996 - 2025, Daniel Stenberg, daniel@haxx.se, and many contributors, see the THANKS file.
All rights reserved.
Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization of the copyright holder.
It’s unfortunate (but understandable) that this test was run on such different hardware when what we really want to compare is OSes.
So I ran it on my own M1 Pro 2021 MacBook Pro (10 cores, 8 performance / 2 efficiency) which has both macOS and Linux installed. Numbers are milliseconds running the open program, and milliseconds running open_pool in parens:
macOS Sequoia 15.3.1:
1 thread: 73.01ms (79.98ms)
2 threads: 43.96ms (48.09ms)
4 threads: 32.14ms (38.82ms)
8 threads: 78.28ms (80.77ms)
16 threads: 127.48ms (133.80ms)
Fedora Asahi Remix 41, Linux 6.12.12:
1 thread: 17.91ms (40.27ms)
2 threads: 23.79ms (31.16ms)
4 threads: 17.92ms (24.20ms)
8 threads: 15.50ms (24.13ms)
16 threads: 14.85ms (38.41ms)
Moral of the story: Linux is vastly faster at opening files than macOS on the same hardware, and Linux doesn’t really benefit from using multiple threads.
Also, the slow-down in Linux from 2 threads compared to 1 is consistent and happens every time I run it, it’s not just some outlier.
I do not like the author’s misrepresentations in this article. You can be technically not lying, but when you write things that most people who aren’t highly technical would believe to mean one thing, and that thing is clearly not the case, you don’t get points for technically not lying. You’re being disingenuous.
An example: I do not like Google, and I do not like Chrome, and I make no apologies for them, but take this sentence: “When you log into Chrome, it automatically logs you into your Google account on the web.”
Non- and less-technical people do not make a distinction between using a web browser and “log(ging) into Chrome” - if someone were to say, “log in to Chrome, please”, many, if not most people, would assume that what’s meant is for someone to simply launch Chrome. They wouldn’t think, “I’ll launch the browser, then go in to settings or whatever, then I’ll log in to my Google account inside of Chrome because I was asked to log in to Chrome”.
We (technical people) can tell others all we want that they should use another browser, but most people aren’t going to care and aren’t going to listen. But should people know that they can use Chrome and don’t need to be logged in to Google? Absolutely. Is the author implying by their choice of wording that this isn’t the case? Yes. This is deceptive, and it weakens the case the author is trying to make.
As we learned from South Park, we’re asked to choose between a giant douche and a turd sandwich. We really don’t need to trick people to make the case that they suck.
Non- and less-technical people do not make a distinction between using a web browser and “log(ging) into Chrome”
That’s only because Google made it so, and less-technical people don’t understand the amount of data that Google collects on them, via searches, maps, or Google Analytics, or what damage that can do.
I am pretty sure that the linking happened because they noticed that many people don’t really want or need to log into a Google account, so they wanted more people to share more data. It is my opinion that less-technical people aren’t stupid, and get confused by the web when their mental model is deliberately flawed, more often than not.
I strongly disagree with the example given, only because I know that when I say to log in I always mean to enter identification information (like username and password). It wouldn’t have crossed my mind before reading your comment that someone would equate logging into an application with simply opening an application. Logging in should only mean that you’ve chosen to enter identifying information in order to gain access to something. They shouldn’t need to address whether you can use Chrome without logging in if all they’re talking about is the logged-in behavior. (We also shouldn’t ignore Google’s dark patterns that make it seem like you do need to log in to use it, though the article doesn’t go into that.)
You can’t really expect every piece of technical content to pander to people who don’t understand the difference between launching a program and logging in to an account. What makes you think that this post is directed at people who are 100% technologically illiterate to that degree?
It shouldn’t be targeted at people who are 100% technically illiterate, but just as much it shouldn’t be targeted at people who are 0% technically illiterate.
A good example is when technical people talk about computer viruses and conflate them with Trojans. If you don’t know any better, you learn only from usage like this and you have no real awareness of the difference between them (meaning we’re shirking our responsibility to teach correct things). But when technical people, who really should know better, refer to a Trojan as a virus, that can cause real confusion and miscommunication. Someone tasked with cleanup after an infection can easily end up with very different work by this misuse. Additionally, there’s no good reason for a technical person to not use the correct term.
So when I, a technical person, see someone, also ostensibly a technical person, describing things to others, particularly to non-technical people, incorrectly or in ways that we know will be misunderstood, it really bugs me. There’s no good reason for it.
This is a bold assumption to make, and it’s not reasonable to expect an author to imagine every possible way a reader might be confused. Words have meaning, and it should be enough for an author of a more-technical-than-not article to use words accurately. Audience analysis matters, but I really don’t think this author expected their article to be read by someone with so little knowledge that they would confuse “log in” with “open application”. As I shared earlier I wouldn’t have even imagined that scenario until you presented it, so if I were to have written this same article I could say confidently that it was not written with the expectation of it being misunderstood.
“Think of Chrome. When you log into Chrome, it automatically logs you into your Google account on the web.”
How many non-technical people wouldn’t realize the difference between logging in to Google and simply using Chrome? How many technical people wouldn’t be certain that the author is referring specifically to logging in to Google, and would require context to be sure?
The term “log in” has been in use in computing since at least the ‘60s and in common home usage since at least the late ‘90s. Additionally, it is not unique to computing. If I told someone to log in or sign in at the bank, I would have no expectation that they would think I simply meant for them to walk into the bank. They’d be expected to sign a log book or check in with someone. If they don’t know what “log into” means, I expect them to ask the question, “What does that mean?” and look it up. I never expect my reader to simply a) make a misinformed assumption, and b) take what I said at face value without question. If what I said strikes them as odd, they should look it up. That is what we should expect of ourselves and each other. We can’t expect an author to imagine every possible way that a reader might be misinformed.
That’s what I’ve been talking about, yes. Who is this mythical “common person” and why do you feel the author needed to write with them in mind instead of another imagined audience? I have certainly known people who were that confused (not about that specifically), but I wouldn’t write a blog post aimed at catching every possible misunderstanding that type of person might have. I think it’s pretty clear that type of extra-confused user was not the audience this author had in mind, and I don’t think we can demand they reimagine their audience like that.
I want to plainly say that I don’t believe there was anything incorrect about what they said about logging into Chrome and I don’t believe that the absence of qualifying language for an unintended imagined audience means they’re being in any way disingenuous. Maybe you have another example from the article that makes your point, but I contend the example you gave does not.
license all content you transmit through Firefox to Mozilla (“When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox. -https://www.mozilla.org/en-US/about/legal/terms/firefox)
allow Mozilla to both sell your private information
If you’re already using Firefox, I can confirm that porting your profile over to Librewolf (https://librewolf.net) is relatively painless, and the only issues you’ll encounter are around having the resist fingerprinting setting turned on by default (which you can choose to just disable if you don’t like the trade-offs). I resumed using Firefox in 2016 and just switched away upon this shift in policy, and I do so sadly and begrudgingly, but you’d be crazy to allow Mozilla to cross these lines without switching away.
If you’re a macOS + Little Snitch user, I can also recommend setting Librewolf to not allow communication to any Mozilla domain other than addons.mozilla.org, just in case.
👋 I respect your opinion and LibreWolf is a fine choice; however, it shares the same problem that all “forks” have and that I thought I made clear in the article…
Developing Firefox costs half a billion per year. There’s overhead in there for sure, but you couldn’t bring that down to something more manageable, like 100 million per year, IMO, without making it completely uncompetitive with Chrome, whose estimated cost exceeds 1 billion per year. The harsh reality is that you’re still using Mozilla’s work, and if Mozilla goes under, LibreWolf simply ceases to exist because it’s essentially Firefox + settings. So you’re not really sticking it to the man as much as you’d like.
There are 3 major browser engines left (minus the experiments still in development that nobody uses). All 3 browser engines are, in fact, funded by Google’s Ads and have been for almost the past 2 decades. And any of the forks would become unviable without Apple’s, Google’s or Mozilla’s hard work, which is the reality we are in.
Not complaining much, but I did mention the recent controversy you’re referring to and would’ve preferred comments on what I wrote, on my reasoning, not on the article’s title.
I do what I can and no more, which used to mean occasionally being a Firefox advocate when I could, giving Mozilla as much benefit of the doubt as I could muster, paying for an MDN subscription, and sending some money their way when possible. Now it means temporarily switching to Librewolf, fully acknowledging how unsustainable that is, and waiting for a more sustainable option to come along.
I don’t disagree with the economic realities you mentioned and I don’t think any argument you made is bad or wrong. I’m just coming to a different conclusion: If Firefox can’t take hundreds of millions of dollars from Google every year and turn that into a privacy respecting browser that doesn’t sell my data and doesn’t prohibit me from visiting whatever website I want, then what are we even doing here? I’m sick of this barely lesser of two evils shit. Burn it to the fucking ground.
I think “barely lesser of two evils” is just way off the scale, and I can’t help but feel that it is way over-dramatized.
Also, what about the consequences of having a chrome-only web? Many websites are already “Hyrum’s lawed” to being usable only in Chrome, developers only test for Chrome, the speed of development is basically impossible to follow as is.
Firefox is basically the only thing preventing the most universal platform from becoming a Google-product.
Well there’s one other: Apple. Their hesitance to allow non-Safari browsers on iOS is a bigger bulwark against a Chrome-only web than Firefox at this point IMO.
I’m a bit afraid that the EU is in the process of breaking that down though. If proper Chrome comes over to iOS and it becomes easy to install, I’m certain that Google will start their push to move iOS users over.
I know it’s not exactly the same, but Safari is also in the WebKit family, and Safari is neither open source nor cross-platform, nor anywhere close to Firefox in many technical aspects (Firefox has by far the most functional and sane developer tools of any browser out there).
Pretty much the same here: I used to use Firefox, I have influenced some people in the past to at least give Firefox a shot, some people ended up moving to it from Chrome based on my recommendations. But Mozilla insists on breaking trust roughly every year, so when the ToS came around, there was very little goodwill left and I have permanently switched to LibreWolf.
Using a fork significantly helps my personal short-term peace of mind: whenever Mozilla makes whatever changes they’re planning to make which require them to have a license to any data I input into Firefox, I trust that I will hear about those changes before LibreWolf incorporates them, and there’s a decent chance that LibreWolf will rip them out and keep them out for a few releases as I assess the situation. If I’m using Firefox directly, there’s a decent probability that I’ll learn about those changes after Firefox updates itself to include them. Hell, for all I know, Firefox is already sending enough telemetry to Mozilla that someone there decided to make money off it and that’s why they removed the “Mozilla doesn’t and will never sell your data” FAQ item; maybe LibreWolf ripping out telemetry is protecting me against Mozilla right now, I don’t know.
Long term, what I personally do doesn’t matter. The fact that Mozilla has lost so much good-will that long-term Firefox advocates are switching away should be terrifying to Mozilla and citizens of the Web broadly, but my personal actions here have close to 0 effect on that. I could turn into a disingenuous Mozilla shill but I don’t exactly think I’d be able to convince enough people to keep using Firefox to cancel out Mozilla’s efforts to sink their own brand.
If Firefox is just one of three browsers funded by Google which don’t respect user privacy, then what’s the point of it?
People want Firefox and Mozilla to be an alternative to Google’s crap. If they’re not going to be the alternative, instead choosing to copy every terrible idea Google has, then I don’t see why Mozilla is even needed.
Well to be fair to Mozilla, they’re pushing back against some web standard ideas Google has. They’ve come out against things like WebUSB and WebHID for example.
How the heck do they spend that much? At ~20M LoC, they’re spending 25K per line of code a year. While details are hard to find, I think that puts them way above the industry norms.
I’m pretty sure that’s off by 3 orders of magnitude; OP’s figure would be half a US billion, i.e. half a milliard. That means 500M / 20M = 25 $/LOC. Not 25K.
I see your point, but by that same logic, shouldn’t we all then switch to Librewolf? If Firefox’s funding comes from Google, instead of its user base, then even if a significant portion of Firefox’s users switch, it can keep on getting funded, and users who switched can get the privacy non-exploitation they need?
TL;DR 90% of Mozilla’s revenue comes from ad partnerships (Google) and Apple received ca. 19 Bn $ per annum to keep Google as the default search engine.
Where did you get those numbers? Are you referring to the whole effort (legal, engineering, marketing, administration, etc.) or just development?
That’s an absolutely bonkers amount of money, and while I absolutely believe it, I’m also kind of curious what other software products are in a similar league.
Yeah, that seems like legal butt-covering. If someone in a criminalizing jurisdiction accesses these materials and they try to sue to the browser, Mozilla can say the user violated TOS.
This is just a lie. It’s just a lie. Firefox is gratis, and it’s FLOSS. These stupid paragraphs about legalese are just corporate crap every business of a certain size has to start qualifying so they can’t get their wallet gaped by lawyers in the future. Your first bullet point sucks - you don’t agree to the Acceptable Use Policy to use Firefox, you agree to it when using Mozilla services, i.e. Pocket or whatever. Similarly, your second bulletpoint is completely false, that paragraph doesn’t even exist:
You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.
The text was recently clarified because of the inane outrage over basic legalese. And Mozilla isn’t selling your information. That’s not something they can casually lie about and there’s no reason to lie about it unless they want to face lawsuits from zealous legal types in the future. Why constantly lie to attack Mozilla? Are you being paid to destroy Free Software?
Consciously lying should be against Lobsters rules.
Let’s really look at what’s written here, because either u/altano or u/WilhelmVonWeiner is correct, not both.
The question we want to answer: do we “agree to an acceptable use policy” when we use Firefox? Let’s look in the various terms of service agreements (Terms Of Use, Terms Of Service, Mozilla Accounts Privacy). We see that it has been changed. It originally said:
“When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.”
Note that this makes no distinction between Firefox as a browser and services offered by Mozilla. The terms did make a distinction between Firefox as distributed by Mozilla and Firefox source code, but that’s another matter. People were outraged, and rightfully so, because you were agreeing to an acceptable use policy to use Firefox, the binary from Mozilla. Period.
That changed to:
“You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.”
Are they legally equivalent, just using “nicer”, “more acceptable” language? No. The meaning is changed in important ways, and this is probably what you’re referring to when you say, “you don’t agree to the Acceptable Use Policy to use Firefox, you agree to it when using Mozilla services”
However, the current terms still say quite clearly that we agree to the AUP for Mozilla Services when we use Firefox whether or not we use Mozilla Services. The claim that “you don’t agree to the Acceptable Use Policy to use Firefox” is factually incorrect.
So is it OK for u/WilhelmVonWeiner to say that u/altano is lying, and call for censure? No. First, it’s disingenuous for u/WilhelmVonWeiner to pretend that the original wording didn’t exist. Also, the statement, “Similarly, your second bulletpoint is completely false, that paragraph doesn’t even exist:” is plainly false, because we can see that paragraph verbatim here:
So if u/WilhelmVonWeiner is calling someone out for lying, they really shouldn’t lie themselves, or they should afford others enough benefit of the doubt to distinguish between lying and being mistaken. After all, is u/WilhelmVonWeiner lying, or just mistaken here?
I’m all for people venting when someone is clearly in the wrong, but it seems that u/WilhelmVonWeiner is not only accusing others of lying, but is perhaps lying or at very least being incredibly disingenuous themselves.
Oh - and I take exception to this in particular:
“every business of a certain size has to start qualifying so they can’t get their wallet gaped by lawyers”
Being an apologist for large organizations that are behaving poorly is the kind of behavior we expect on Reddit or on the orange site, but not here. We do not want to, nor should we need to, engage with people who do not make good-faith arguments.
Consciously lying should be against Lobsters rules.
This is a pretty rude reply so I’m not going to respond to the specifics.
Mozilla has edited their acceptable use policy and terms of service to do damage control and so my exact quotes might not be up anymore, but yeah sure, assume that everyone quoting Mozilla is just a liar instead of that explanation if you want.
Sorry for being rude. It was unnecessary of me and I apologise, I was agitated. I strongly disagree with your assessment of what Mozilla is doing as “damage control” - they are doing what is necessary to legally protect the Mozilla Foundation and Corporation from legal threats by clarifying how they use user data. It is false they are selling your private information. It is false they have a nonexclusive … license to everything you do using Firefox. It is false that you have to agree to the Acceptable Use Policy to use Firefox. It’s misinformation, it’s FUD and it’s going to hurt one of the biggest FLOSS nonprofits and alternate web browsers.
It is false that you have to agree to the Acceptable Use Policy to use Firefox.
So people can judge for themselves, the relevant quote from the previous Terms of Use was:
Your use of Firefox must follow Mozilla’s Acceptable Use Policy, and you agree that you will not use Firefox to infringe anyone’s rights or violate any applicable laws or regulations.
This is a pretty incendiary comment and I would expect any accusation of outright dishonesty to come with evidence that they know they’re wrong. I am not taking a position on who has the facts straight, but I don’t see how you could prove altano is lying. Don’t attribute to malice what can be explained by…simply being incorrect.
FYI, this is a change made in response to the recent outrage; the original version of the Firefox terms included:
Your use of Firefox must follow Mozilla’s Acceptable Use Policy, and you agree that you will not use Firefox to infringe anyone’s rights or violate any applicable laws or regulations.
Your locale is forced to en-US, your timezone to UTC, and your reported system to Windows. It puts canvas access behind a prompt and randomizes some pixels such that fingerprinting based on rendering is a bit harder. It will also disable using SVG and the fonts you have installed on your system.
Btw, I don’t recommend anyone using resist fingerprinting. This is the “hard mode” that is known to break a lot of pages and has no site-specific settings. Only global on or off. A lot of people turn it on and then end up hating Firefox and switching browsers because their web experience sucks and they don’t know how to turn it off. This is why we now show a rather visible info bar in settings under privacy/security when you turn this on and that’s also why we are working on a new mode that can spoof only specific APIs and only on specific sites. More to come.
Yes, if everyone is running a custom set of spoofs you’d end up being unique again. The intent for the mechanism is for us to be able to experiment and test out a variety of sets before we know what works (in terms of webcompat). In the end, we want everyone to look as uniform as possible
It breaks automatic dark mode and sites don’t remember their zoom setting. Dates are also not always localized correctly. That’s what I’ve noticed so far at least.
My MacBook Pro is nagging me to upgrade to the new OS release. It lists a bunch of new features that I don’t care about. In the meantime, the following bugs (which are regressions) have gone unfixed for multiple major OS versions:
When a PDF changes, Preview reloads it. It remembers the page you were on (it shows it in the page box) but doesn’t jump there. If you enter the page in the page box, it doesn’t move there because it thinks you’re there already. This worked correctly for over a decade and then broke.
The calendar service fails to sync with a CalDAV server if you have groups in your contacts. This stopped working five or so years ago, I think.
Reconnecting an external monitor used to be reliable and move all windows that were there last time it was connected back there. Now it works occasionally.
There are a lot of others, these are the first that come to mind. My favourite OS X release was 10.6: no new user-visible features, just a load of bug fixes and infrastructure improvements (this one introduced libdispatch, for example).
It’s disheartening to see core functionality in an “abandonware” state while Apple pushes new features nobody asked for. Things that should be rock-solid, just… aren’t.
It really makes you understand why some people avoid updates entirely. Snow Leopard’s focus on refinement feels like a distant memory now.
The idea of Apple OS features as abandonware is a wild idea, and yet here we are. The external monitor issue is actually terrible. I have two friends who work at Apple (neither in OS dev) and both have said that they experience the monitor issue themselves.
I was thinking about this not too long ago; there are macOS features (ex the widgets UI) that don’t seem to even exist anymore. So many examples of features I used to really like that are just abandoned.
Reconnecting an external monitor used to be reliable and move all windows that were there last time it was connected back there. Now it works occasionally.
This works flawlessly for me every single time, I use Apple Studio Display at home and a high end Dell at the office.
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
I can attest to that, though not with my Apple account but with my brother’s. Coincidentally, he had fewer problems activating iMessage/FaceTime on a Hackintosh machine.
A variation on that which I’ve run in to is turning the monitor off and putting the laptop to sleep, and waking without moving or disconnecting it.
To avoid all windows ending up stuck on the laptop display, I have to sleep the laptop, then power off the monitor. To restore: power on the monitor, then wake the laptop. Occasionally (1 in 10 times?) it still messes up and I have to manually move windows back to the monitor display.
(This is when using dual-head mode with both the external monitor and laptop display in operation)
iCloud message sync with “keep messages forever” seems to load so much that, on my last laptop, typing long messages (more than one sentence) directly into the text box was so awful that I started writing messages outside of the application, then copy/pasting to send them. The delay was in the seconds for me.
I’m really heartened by how many people agree that OS X 10.6 was the best.
Edited to add … hm - maybe you’re not saying it was the best OS version, just the best release strategy? I think it actually was the best OS version (or maybe 10.7 was, but that’s just a detail).
It was before Apple started wanting to make it more iPhone-like and slowly doing what Microsoft did with Windows 8 (which did it in a ‘big bang’) by making Windows Phone and Windows desktop almost indistinguishable. After Snow Leopard, Apple became a phone company, very iPhone-centric, and just didn’t bother with the desktop - it became cartoonish and all flashy, not usable. That’s when I left macOS and haven’t looked back.
Recently, Disk Utility has started showing a permissions error when I click unmount or eject on SD cards or their partitions, if the card was inserted after Disk Utility started. You have to quit and re-open Disk Utility for it to work. It didn’t use to be like that, but it is now, on two different Macs. This is very annoying for embedded development where you need to write to SD cards frequently to flash new images or installers. So unmounting/ejecting drives just randomly broke one day and I’m expecting it won’t get fixed.
Another forever-bug: on a higher refresh rate screen, the animation to switch workspaces takes more time. This has forced me to completely change how I use macOS to de-emphasise workspaces, because the animation is just obscenely long after I got a MacBook Pro with a 120Hz screen in 2021. Probably not a new bug, but an old bug that new hardware surfaced, and I expect it will never get fixed.
I’m also having issues with connecting to external screens only working occasionally, at least through USB-C docks.
The hardware is so damn good. I wish anyone high up at Apple cared at all about making the software good too.
Oh, there’s another one: the fstab entries to not mount partitions that match a particular UUID no longer work, and there doesn’t appear to be any replacement functionality (which is annoying when it’s a firmware partition that must not be written to except in a specific way, or it will soft-brick the device).
Oh, fun! I’ve tried to find a way to disable auto mount and the only solution I’ve found is to add individual partition UUIDs to a block list in fstab, which is useless to me since I don’t just re-use the same SD card with the same partition layout all the time; I would want to disable auto mounting completely. But it’s phenomenal to hear that they broke even that sub-par solution.
Maybe, but we’re talking about roughly 1.2 seconds from the start of the gesture until keyboard input starts going to an app on the target workspace. That’s an insane amount of delay to just force the user to sit through on a regular basis… On a 60Hz screen, the delay is less than half that (which is still pretty long, but much much better)
Not a fix, but as a workaround have you tried Accessibility > Display > Reduce Motion?
I can’t stand the normal desktop switch animation even when dialed down all the way. With that setting on, there’s still a very minor fade-type effect but it’s pretty tolerable.
Sadly, that doesn’t help at all. My issue isn’t with the animation, but with the amount of time it takes from when I express my intent to switch workspace until focus switches to the new workspace. “Reduce Motion” only replaces the 1.2 second sliding animation with a 1.2 second fading animation; the wait is exactly the same.
Don’t update (or downgrade) to Sequoia! It’s the Windows ME of macOS releases. After an Apple support person couldn’t resolve any of the issues I had, they told me to reinstall Sequoia and then gave me instructions to downgrade to Ventura/Sonoma.
I thought Big Sur was the Windows ME of (modern) Mac OS. I have had a decent experience in Sequoia. I usually have Safari, Firefox, Chrome, Mail, Ghostty, one JetBrains thing or another (usually PyCharm Pro or Clion), Excel, Bitwarden, Preview, Fluor, Rectangle, TailScale, CleanShot, Fantastical, Ice and Choosy running pretty much constantly, plus a rotating cast of other things as I need them.
Aside from Apple Intelligence being hot garbage (I just turn that off anyway), my main complaint about Sequoia is that sometimes, after a couple dozen dock/undock cycles (return to my desk, connect to my docking station with a 30” non-hidpi monitor, document scanner, Time Machine drive, smart card reader, etc.), the windows that were on my MacBook’s high resolution screen and moved to my 30” when docked don’t re-scale appropriately, and I have to reboot to address that. That seems to happen every two weeks or so.
Like so many others here, I miss Snow Leopard. I thought Tiger was an excellent release, Leopard was rough, and Snow Leopard smoothed off all the rough edges of Tiger and Leopard for me.
I’d call Sequoia “subpar” if Snow Leopard is your “par”. But I don’t find that to be the case compared to Windows 11, KDE or GNOME. It mostly just stays out of my way.
Apple’s bug reporting process is so opaque it feels like shouting into the void.
And, Apple isn’t some little open source project staffed by volunteers. It’s the richest company on earth. QA is a serious job that Apple should be paying people for.
Apple’s bug reporting process is so opaque it feels like shouting into the void.
Yeah. To alleviate that somewhat (for developer-type bugs), when I was making things for Macs and iDevices most of the time, I always reported my bugs to openradar as well, which would at least net me a little bit of feedback (along the lines of “broken for everyone or just me?”) so it felt a tiny bit less like shouting into the void.
I can’t remember on these. The CalDAV one is well known. Most of the time when I’ve reported bugs to Apple, they’ve closed them as duplicates and given no way of tracking the original bug.
No. I tried being a good user in the past but it always ended up with “the feature works as expected”. I won’t do voluntary work for a company which repeatedly shits on user feedback.
10.6 “Snow Leopard” was the last Mac OS that I could honestly say I liked. I ran it on a cheap mini laptop (a Dell I think) as a student, back when “hackintoshes” were still possible.
When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.
That’s… wow. Thank you for highlighting that. I am seriously considering using something other than Firefox for the first time in… ever. Regardless of how one might choose to interpret that statement, it’s frightening that they would even write it. This is not the Mozilla I knew or want. I’d love to know what alternatives people might suggest that are more community focused and completely FOSS, ideally still non-Chromium.
Isn’t it? Most GDPR consent screens have an easy “accept everything” button and require going through multiple steps to “not accept”, and many many more steps to “object” to their “legitimate interest” in tracking for the purposes of advertising. As long as these screens remain allowed and aren’t cracked down on (which I don’t foresee happening, ever), that’s the de facto meaning of “consent” in GDPR as far as I’m concerned: something that’s assumed given unless you actively go out of your way to revoke it.
It’s not what the text of the GDPR defines it as, but the text isn’t relevant; only its effect on the real world is.
Yes, definitely. Consent in GDPR is opt-in not opt-out. If it’s opt-out, that’s not consensual. And the law is the law.
Furthermore, for interstitials, to reject everything should be at least as easy as it is to accept everything, without dark patterns. Interstitials (e.g., from IAB and co.) first tried to make it hard to reject everything, but now you usually get a clear button for rejecting everything on most websites.
As I mentioned in another comment, the DPAs are understaffed and overworked. But they do move. A real-world example of a company affected by the GDPR, and that tries testing its limits, is Meta with Facebook. For user profiling, first they tried the Terms of Service, then they tried claiming a legitimate interest, then they introduced expensive subscriptions for those that tried to decline, now they introduced a UI degradation, delaying the user scrolling, which is illegal as well.
Many complain, on one hand, that the EU is too regulated, suffocating innovation, with US tech oligarchs now sucking up to Trump to force the EU into allowing US companies to break the law. On the other hand, there are people who believe that the GDPR isn’t enforced enough. I wish people would make up their minds.
Many complain, on one hand, that the EU is too regulated, suffocating innovation, with US tech oligarchs now sucking up to Trump to force the EU into allowing US companies to break the law. On the other hand, there are people who believe that the GDPR isn’t enforced enough. I wish people would make up their minds.
Those are different people, all of whom have made up their minds.
I thought I made it reasonably clear that I don’t care that much about what the text of the law is, I care about what material impact it has on the world.
To be fair, @mort’s feeling may come from non-actually-GDPR-compliant cookie consent forms. I have certainly seen ones where I couldn’t find the “reject all” button, and felt obligated to manually click up to 15 “legitimate interest” boxes. (And dammit, could they please stop with their sliding buttons and use actual square check boxes instead?)
The facts you provided aren’t relevant. I’m talking about the de facto situation as it applies to 99% of companies, you’re talking about the text of the law and enforcement against one particular company. These are different things which don’t have much to do with each other.
You even acknowledge that DPAs are understaffed and overworked, which results in the lack of enforcement that is exactly what I’m complaining about. From what I can tell, we don’t disagree about any facts here.
I’m talking about GDPR as well, focusing on what impact it has in practice. I have been 100% consistent on that, since my first message in this sub-thread (https://lobste.rs/s/de2ab1/firefox_adds_terms_use#c_3sxqe1) which explicitly talks about what it means de facto. I don’t know where you got the impression that I’m talking about something else.
But there is enforcement, it’s just slower than we’d like. For example, screens making it harder to not opt in rather than opt in have gotten much rarer than they used to be. IME now they mostly come from American companies that don’t have much of a presence in the EU. So enforcement is causing things to move in the right direction, even if it is at a slow pace.
There is a website tracking fines against companies for GDPR violations [1] and as you can see, there are lots of fines against companies big and small every single month. “Insufficient legal basis for data processing” isn’t close to being the most common violation, but it’s pretty common, and has also been lobbed against companies big and small. It is not the case that there is only enforcement against a few high profile companies.
it’s the other way around - most of the time you have to actively revoke “legitimate interest”, consent should be off by default. Unfortunately, oftentimes “legitimate interest” is just “consent, but on by default” and they take exactly the same data for the same purpose (IIRC there are NGOs (such as NOYB, Panoptykon) fighting against IAB and other companies in those terms)
“Legitimate interest” is the GDPR loophole that ad tech companies use to spy on us without an easy opt-out option, right? I don’t know what this means in this context but I don’t trust it.
It is not; ad tech has been considered not a legitimate interest for… ever… by the European DPAs. Report the ones that abuse this to your DPA. There has been enforcement.
Every website with a consent screen has a ton of ad stuff under “legitimate interest”; most ask you to “object” to each individually. The continued existence of this pattern means it’s de facto legal under the GDPR in my book. “Legitimate interest” is a tool to continue forced ad tracking.
I don’t think you’re disagreeing with me. It’s de jure illegal but de facto legal. I don’t care much what the text of the GDPR says, I care about its material effect on the real world; and the material effect is one where websites put up consent screens where the user has to “object” individually to every ad tech company’s “legitimate interest” in tracking the user for ad targeting purposes.
I used to be optimistic about the GDPR because there’s a lot of good stuff in the text of the law, but it has been long enough that we can clearly see that most of its actual effect is pretty underwhelming. Good law without enforcement is worthless.
De facto illegal for entities at Facebook’s scale? Maybe. But it’s certainly de facto legal for everyone else. It has been 7 years since it was implemented; if it was going to have a positive effect we’d have seen it by now. My patience has run out. GDPR failed.
I just gave you a concrete example of a powerful Big Tech company, with infinite resources for political lobbying, that was blasted for their practices. They first tried hiding behind their Terms of Use, then they tried claiming a legitimate interest, then they offered the choice of a paid subscription, and now they’ve introduced delays in scrolling for people that don’t consent to being profiled, which will be deemed illegal as well.
Your patience isn’t important. This is the legal system in action. Just because, for example, tax evasion happens, that doesn’t mean that anti tax evasion laws don’t work. Similarly with data protection laws. I used to work in the adtech industry. I know for a fact that there have been companies leaving the EU because of GDPR. I also know some of the legwork that IAB tried pulling off, but it won’t last.
Just the fact that you’re getting those interstitials is a win. Microsoft’s Edge browser, for example, gives EU citizens that IAB dialog on the first run, thus informing them that they are going to share their data with the entire advertising industry. That is in itself valuable for me, because it informs me that Edge is spyware.
I agree that the “we’re spying on you” pop-ups are a win in themselves. I’m just complaining that it’s so toothless as to in practice allow websites to put up modals where each ad tech company’s “legitimate interest” in tracking me has to be individually disabled. If the goal of the GDPR was to in any way make it reasonably easy for users to opt out of tracking, it failed.
I agree that the “we’re spying on you” pop-ups are a win in themselves.
I’m not so sure. I’ve even seen this used as an argument against the GDPR: The spin they give it is “this is the law that forces us to put up annoying cookie popups”. See for example this article on the Dutch public broadcasting agency (which is typically more left-leaning and not prone to give a platform to liberals).
“Alle AI-innovaties werken hier slechter dan in de VS. En waarom moet je op elke website op cookies klikken?”, zegt Van der Voort.
Roughly translated “all innovations in AI don’t work as well here as in the US. And why do you have to click on cookies (sic) on every single website?”
I’ve even seen this used as an argument against the GDPR: The spin they give it is “this is the law that forces us to put up annoying cookie popups”.
I have seen that as well, and I think it’s bullshit. The GDPR doesn’t force anyone to make any form of pop-up; nobody is forced to track users in a way which requires consent. The GDPR only requires disclosure and an opt-out mechanism if you do decide to spy on your users, which I consider good.
The GDPR only requires disclosure and an opt-out mechanism if you do decide to spy on your users, which I consider good.
I agree, but at the same time I think the average user just sees it as a nuisance, especially because in most cases there’s no other place to go where they don’t have a cookie popup. The web development/advertising industry knowingly and willfully “complied” in the most malicious and obnoxious way possible, resulting in this shitty situation. That’s 1 for the industry, 0 for the lawgivers.
I agree that it didn’t have the desired effect (which, incidentally, I have spent a lot of this thread complaining about, hehe). I think everyone was surprised about just how far everyone is willing to go in destroying their website’s user experience in order to keep tracking people.
has to “object” individually to every ad tech company’s “legitimate interest” in tracking the user
I’m not sure if you’re deep in grumpy posting or didn’t understand the idea here, but for legitimate interest you don’t need to agree and companies normally don’t give you the option. If you’re talking about the extra options you unset manually, they’re a different thing. The “legitimate interest” part is for example validating your identity through a third party before paying out money. Things you typically can’t opt out of without also refusing to use the service.
If you get a switch for “tracking” or “ads” that you can turn off, that’s not a part of the “legitimate interest” group of data.
I’m sorry, but this isn’t true. I have encountered plenty of consent screens with two tabs, “consent” and “legitimate interest”, where the stuff under “consent” is off by default while the stuff under “legitimate interest” is on by default and must be “objected to” individually. Some have an “object to all” button to “object” to all ad tracking in the “legitimate interest” category.
Here’s one example: https://i.imgur.com/J4dnptX.png, the Financial Times is clearly of the opinion that tracking for the purpose of advertising counts as “legitimate interest”.
I’m not saying that there’s any relationship between this pattern and what’s actually required by the GDPR, my understanding of the actual text of the law reflects yours. I’m saying that this is how it works in practice.
Mozilla updated the article with a clarifying statement:
UPDATE: We’ve seen a little confusion about the language regarding licenses, so we want to clear that up. We need a license to allow us to make some of the basic functionality of Firefox possible. Without it, we couldn’t use information typed into Firefox, for example. It does NOT give us ownership of your data or a right to use it for anything other than what is described in the Privacy Notice.
The problem is it doesn’t clarify anything. “Basic functionality” is not defined. My guess is they want to be able to feed anything we type or upload to a site into an LLM as well. “Anything other than what is described” doesn’t help, because what is described is so vague as to mean anything: “help you experience and interact with online content”.
Mozilla updated the article with a clarifying statement:
UPDATE: We’ve seen a little confusion about the language regarding licenses, so we want to clear that up. We need a license to allow us to make some of the basic functionality of Firefox possible. Without it, we couldn’t use information typed into Firefox, for example. It does NOT give us ownership of your data or a right to use it for anything other than what is described in the Privacy Notice.
That is… not clarifying. And not comforting. “What is described” in the ToS is “to help you navigate, experience, and interact with online content.” That’s absurdly vague. And what is described in the Privacy Notice is absurdly broad:
To provide you with the Firefox browser
To adapt Firefox to your needs
To provide and improve search functionality
To serve relevant content and advertising on Firefox New Tab
To provide Mozilla Accounts
To provide AI Chatbots
To provide Review Checker, including serving sponsored content
To enable add-ons (addons.mozilla.org, “AMO”), including offering personalized suggestions
To maintain and improve features, performance and stability
To improve security
To understand usage of Firefox
To market our services.
To pseudonymize, de-identify, aggregate or anonymize data.
To communicate with you.
To comply with applicable laws, and identify and prevent harmful, unauthorized or illegal activity.
I’m glad we have this contextless legalese to clarify things. I wonder if there’s some kind of opt-in data collection in Firefox that Mozilla might have legal obligations to clarify their rights to? Couldn’t be that… No, let’s put a pause on critical thinking and post stupid TOS excerpts as if Mozilla are going to steal our Deviantart uploads and sell them as AI training data.
I’m glad we have this contextless legalese to clarify things. I wonder if there’s some kind of opt-in data collection in Firefox that Mozilla might have legal obligations to clarify their rights to? Couldn’t be that… No, let’s put a pause on critical thinking and post stupid TOS excerpts as if Mozilla are going to steal our Deviantart uploads and sell them as AI training data.
If they need a ToS for a particular feature, then that “contextless legalese” should be scoped to that feature, not to Firefox as a whole.
This is precisely why the same organization should not do all of these things. If they want to do non-tool stuff to continue funding their mission they should start up independently managed companies that can establish these consents for a narrow band of services. They can give the existing organization control as a majority shareholder, with dividends flowing back to the main organization. That is the way to ensure that incentives don’t become misaligned with the mission.
Having owned a Framework since April of 2022, I cannot recommend them to people who need even basic durability in their devices. Since then, I have done two mainboard replacements, two top cover replacements, a hinge replacement, a battery replacement, several glue jobs after the captive screw hubs sheared from the plastic backing…
It’s just such an absurdly fragile device with incredibly poor thermals. They sacrificed a ton of desirable features to make the laptop repairable, but ultimately have released a set of devices that, when used in real-world settings, end with you repairing the device more often than not. And these repairs are often non-trivial.
I will personally be migrating to another machine. The Framework 12’s focus on durability may be trending in the right direction, but to regain trust, I’d need to see things like drop and wear tests. A laptop that can be repaired, but needs constant upkeep/incredibly delicate handling, is ultimately not an actual consumer device, but a hobbyist device.
Maybe they’ll get better in a few years. Maybe the Framework 12 will be better. Their new focus on AI, the soldered RAM in the desktop offering, and the failure to address the flimsy plastic chassis innards, among other things, mean that they have a long way to go.
It’s definitely a “be part of the community that helps solve our product problems” sort of feeling.
I have an AMD FW13, and was trying to figure out why it loses 50+% of its battery charge overnight when I close the lid, because I don’t use this computer every single day and don’t want to have to remember to charge yet another device.
So I check the basics: I’m running their officially supported Linux distro, BIOS is current, etc. And half an hour into reading forum threads about diagnosing sleep power draw, I realize that this is not how I want to spend my time on this planet. I love that they’re trying to build repairable/upgradeable devices, but that goal doesn’t matter so much if people end up ditching your products for another option because they’re just tired of trying to fix it.
I’ll chime in with the opposite experience - I’ve owned an AMD Framework 13 since it came out, and had no durability issues with it whatsoever, and it’s been one of my 2 favorite computers I’ve ever owned. I’ve done one main board replacement that saved my butt after a bottle of gin fell over on top of it in transport.
Development and light gaming (on Linux, I very much appreciate their Linux support) have been great, and the repairability gives me peace of mind and an upgrade path, and has already saved me quite a bit of money.
I’ve owned a Framework since Batch 1. Durability has not been a problem for me. My original screen has a small chip in it from when I put it in a bag with something that had sharp edges and pressed against the screen for a whole flight. It’s slowly growing. Otherwise, it’s been solid.
Same. I have a Batch 1. There are quirks, which I expected, knowing I was supporting a startup with little experience. I have since upgraded and put my old board into a Cooler Master case. This is so amazing, and what I cared about. I am still super happy with having bought the Framework; in particular for tinkerers and people who will have a use for their old mainboards, it’s amazing.
I get harbouring resentment for a company you feel sold you a bad product. But at the same time, you bought a laptop from a very inexperienced company which was brand new at making laptops, a pretty difficult product category to get right when you’re not just re-branding someone else’s white-label hardware.
3 years have passed since then; if I were in the market for a category which Framework competes in these days, I would be inclined to look at more recent reviews and customer testimonials. I don’t think flaws in that 3 year old hardware are that relevant anymore. Not because 3 years is a particularly long time in the computer hardware business, but because it’s a really long time relative to the short life of this particular company.
I would agree that 3 years is enough time for a company to use their production lessons to improve their product. But nothing has changed in the Framework 13.
I don’t resent Framework. I think that’s putting words in my mouth. I just cannot, in good faith, recommend their products to people who need even a semi-durable machine. That’s just fact.
a very inexperienced company which was brand new at making laptops
Founded by people who had experience designing laptops already, and manufactured by a company that manufactures many laptops. Poor explanations for the problems, IMO.
I’ve had a 12th gen Intel since Sept 2022 (running NixOS btw) and I have not had any issues, I will admit it sits in one place 99% of the time. I might order the replacement hinge since mine is a bit floppy but not too big a deal.
As for the event, I was hoping for a minipc using the 395 and I got my wish. Bit pricey and not small enough for where I want to put it and I have no plans for AI work so it’s probably not the right machine for me.
I was originally interested in the HP machine coming with the same CPU (which should be small enough to fit) but I’ve been pricing an AMD 9950 and it comes out cheaper. I was also disappointed there wasn’t a SKU with the 385 Max w/64GB of RAM, which I might have ordered to keep the cost down.
For reference, the new machine is intended to replace a 10 year old Devil’s Canyon system.
I’ve also had my Framework 13 since beginning of 2022. I’ve had to do a hinge replacement, input cover replacement, and mainboard replacement. But I sort of expected that since it’s a young company and hardware is hard. And through all of it support was very responsive and helpful.
I would expect that nowadays the laptops are probably more solidly built than those early batches!
Support was definitely helpful. I just don’t have time or money to replace parts on my machine anymore.
From what I understand, the laptops aren’t any stronger. Even the Framework 16 just got some aftermarket/post-launch foam pads to put below the keyboard to alleviate the strain on the keyboard. The entire keyboard deck would flex.
The fact that these products have these flaws makes me wonder how Framework organizes its engineering priorities.
When compared to other similar laptops from brands like HP or Lenovo, how does the deck flex compare? I definitely feel sympathetic to not being better or on par with Apple - given the heaps of money Apple has for economies of scale + lots of mechanical engineers, but it would be a bit rough if mid-tier laptops in that category were far superior.
The deck flex is on par with or worse than an HP EliteBook circa 2019. The problem is that it’s incredibly easy to bend the entire frame of the machine, to the point where it interferes with the touchpad’s ability to click.
It’s really bad, bordering on inexcusable. The fact that there’s no concrete reinforcement says that they sacrificed build quality for repairability, which is equivalent to making a leaky boat with a very fast bilge pump.
I’m not sure what you’re doing to your laptop; how are you bending the entire frame of the machine?
It’s a new company that is largely doing right by open source, and especially open hardware. The quality isn’t incredible but it is worth its value, and I find these claims you’re making dubious.
It’s a fairly common flex point for the chassis, and a common support problem. The base of the mousepad, towards the front of the laptop where there’s a depression in the case, is where the majority of the flex is.
My laptop has seen nothing but daily, regular use. You can find the claims dubious, but others are having these problems too.
I’ll chime in too: I’ve had the Framework 13 AMD since it came out (mid 2023) and it has been great.
I upgraded the display after the new 2.8K panel came out, it took 2 minutes. Couple months later it developed some dead pixels, so they sent me a replacement. In the process of swapping it out, I accidentally tore the display cable. It took me a while to notice/debug it, but in the end it was just a $15 cable replacement that I’m fairly sure would have otherwise resulted in a full mainboard replacement for any other laptop. (When I had Macbooks, I lost count how many times Apple replaced the mainboard for the smallest thing.)
I haven’t been too precious with it, I toss it around like I did my Thinkpad before this. There’s some scuffs but it has been fine, perhaps the newer models are more sturdy? It’s comforting to know that if anything breaks, I’ll be able to fix it.
I also run NixOS on it, it does everything I need it to do, the battery life is great (8-10 hours of moderate use) and I’ll happily swap out the battery in a few more years once it starts losing capacity.
I spend so much of my life at the computer that feeling a sense of ownership over the components makes a lot of sense to me. I don’t want to feel like I’m living in a hotel.
It is, in fact, how I want to spend my time on this planet.
To add to the chorus, I bought a 12th gen intel framework 13 on release and it’s been flawless so far. Nixos worked out of the box. I love the 3:2 screen. I can totally believe that a small/young manufacturing company has quality control issues and some people are getting lemons, but the design itself seems solid to me.
On my old Dell laptop I snapped all the USB ports on one side (by lifting up the other side while keyboard/mouse were still connected). Since they’re connected directly to the motherboard, they weren’t repairable without buying a new CPU. If I did the same on the Framework it would only break the $12 expansion cards, and I wouldn’t even have to turn it off to replace them.
Later I dropped that same Dell about 20cm onto a couch with the screen open. The impact swung the screen open all the way and snapped the hinges. They wanted me to send it back for repairs but I couldn’t handle the downtime, so for a year I just had the hinges duct-taped together. I’ve dropped my Framework the same way, but because the screen opens the full 180 degrees it doesn’t leverage the hinges at all. And if it did break, I’d be able to ship the part and replace it myself.
Not that I support the desktop offering as anything but waste, but the soldered RAM is apparently all about throughput:
We spent months working with AMD to explore ways around this but ultimately determined that it wasn’t technically feasible to land modular memory at high throughput with the 256-bit memory bus. (source)
I wish I could name something in good faith that was comparable to a hyper-repairable x86-64 laptop. Lenovo is pivoting towards repairability with the T14 Gen 5, but I can’t recommend that either yet.
Star Labs, System76, some old ThinkPad models… there are “competitive” things, but few that pitch the things Framework does.
While I agree with some of that, I must stress that I’ve had hardware that was fine until just one thing suddenly broke and everything was unusable. I’ll try an analogy: with repairability, if all your components are 99% reliable, the whole machine is effectively at 99%, because any single failure can be fixed. Without it, even if every component is at 99.9% instead, with 10 components the whole machine is only at about 0.999^10 ≈ 99%, so you’re not in a better situation overall, and the first failure is terminal.
And I say that while I need to finish going through support for a mainboard replacement due to fried USB ports on a first-gen machine (although not an initial batch). BTW, funnily enough, I’m wondering if there’s an interaction with my YubiKey. I also wish the chassis was a bit sturdier, but that’s more of a wish.
As for thermals, while I think they could probably be better, the 11th gen Intel CPU that you have (just like I do) isn’t great at all: 13th gen ones are much better AFAIK.
I’ve experienced a full main board failure which led to me upgrading to a 12th gen on my own dime.
The thermal problems are still there, and their fans have some surprising QA problems that are exacerbated by thermal issues.
I wish I could excuse the fact that my machine feels like it’s going to explode even with power management. The fans grind after three replacements now, and I lack the energy and motivation to do a fourth.
I think 12th gen is pretty similar to 11th gen. I contemplated the upgrade for similar reasons but held off because I didn’t really need to and the gains seemed low. IIRC it’s really with 13th gen that Intel improved the CPUs. But I agree the thermals/power seem sub-par; I feel like it could definitely be better.
BTW, I just “remembered” that I use mine mostly on my desk and it’s not directly sitting on it which greatly improves its cooling (I can’t give hard numbers but I see the temps under load are better and max CPU frequency can be maintained).
Sorry to hear about the trouble with your Framework 13. To offer another data point: I have a 12th gen Framework 13 and haven’t needed to repair a thing, I’m still super happy with it. The frame-bending I’ve also not seen, it’s a super sturdy device for me.
I don’t understand the necessity for a Framework Desktop edition. It’s just more waste. Just make a better laptop or sell it headless. It would also be nice to see consistency in the offerings instead of 12 being Intel, 13 being AMD AI thingy, 16 being last-gen AMD etc.
I don’t know what “waste” means here. I also don’t understand what the significant difference is in your mind between a “headless laptop” and a “desktop with laptop components”. Are you complaining that the desktop doesn’t come with worse cooling and a built-in keyboard?
The way I see it, if Framework has the capacity to build and sell a desktop computer, and they expect it to sell well enough to cover costs, it’s not hurting anything. For a small company there’s always the risk of spreading yourself too thin, but I don’t think any of us have enough insight into their operations to tell if that’s happening here.
When I say a headless laptop I mean it literally, as in a Framework 13 with some nubs on the hinge or just a plastic block instead of a screen assembly. No keyboard either, make a keyboard-slot-shaped fan assembly for cooling or something - they’re smart people, they could figure it out. The only thing weird about it would be the form factor not being the box shape desktop users are used to. I am making the complaint you’re trying to dismiss. “Waste” is more plastic crap and manufacturing supply chain crap, from moulds to energy usage to time expenditure.
Framework 13 insides exist in both AMD and Intel variants, in various generations
Framework Desktop uses what is technically a laptop chip, but it can’t be adequately cooled in the laptop form factor (which is also the reason hardly any laptops with the chip exist). I was looking for a semi-portable small form factor PC and IMO this is actually better, because it’s smaller than any SFF PC you could build while still being much more powerful than a NUC-style PC. Its main selling point is LLMs and other GPGPU compute thanks to the vast unified memory; there’s not much competition in that specific space, and GPUs are much more expensive.
You can already have headless ones. I think there’s even a case on their shop.
edit:
As for the variety in CPUs, I think they started with Intel partly because of TB4, but then they started a partnership with AMD (and AMD CPUs are definitely in a better position now). Moreover, I’ve seen much more variety with other manufacturers (to a point that’s maddening, tbh). Finally, the “AMD AI” is AMD’s marketing name: just ignore that part of the name.
I think at this point rustls can use multiple cryptographic backends, but I could be wrong. Last time I was doing crypto stuff with it I had to explicitly choose to use ring for some stuff, IIRC.
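For anyone wondering what that explicit choice looks like in practice: as of rustls 0.23 the backend is a pluggable CryptoProvider, and if both ring and aws-lc-rs end up compiled in, you have to pick one yourself. Here’s a rough sketch, assuming the `ring` cargo feature is enabled (something like `rustls = { version = "0.23", features = ["ring"] }`; the version and feature names here are from memory, so double-check against the rustls docs):

```rust
// Sketch: pin the process-wide rustls CryptoProvider to ring.
fn main() {
    // With more than one backend compiled in, rustls won't guess which one
    // you want; installing a default provider up front makes
    // ClientConfig::builder() and friends use ring.
    let installed = rustls::crypto::ring::default_provider().install_default();
    if installed.is_err() {
        // Something else in the process already installed a provider first.
        eprintln!("a rustls CryptoProvider was already installed");
    }

    // ...build ClientConfig / connections as usual from here...
}
```

As far as I can tell, if only one provider feature is enabled at compile time, rustls picks it automatically and the explicit install isn’t needed.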
Yeah, 30kloc really isn’t very big… I regularly work with a ~14 kloc C++ code base which builds in 15 seconds on my 3.5 year old laptop, and a ~20 kloc code base (+ vendored catch2, + auto generated protobuf code) which builds in just under a minute on the same laptop. And C++ isn’t exactly known for its fast compile times either. 10 minutes is an eternity, and I imagine Apple gives its developers significantly more powerful machines than my 2021 M1 Pro.
I want to add: the auto generated protobuf code is 39kloc across 24 source files. I didn’t know it was that much code. So in total it’s 59 kloc, twice as much as the Swift code base in the article.
I can’t find build time benchmarks but both communities are of the opinion compile times are not good enough. In this project’s case I suspect the ten minutes includes pulling their dependencies and building them too, which is the norm with SwiftPM.
This causes a bit of headache for me. I doubled down on ring as the default backend for rustls when releasing ureq 3.0 just a couple of months ago. But this might mean I should switch to aws-lc-rs. Hopefully that doesn’t upset too many ureq users :/
There’s been some positive momentum on the GitHub discussion since you posted. Namely the crates.io ownership has been transferred to the rustls people and 2 of them have explicitly said they’ll maintain it. They need to make a new release to reflect the change and then will be able to resolve the advisory.
My dream for ureq is a Rust native library without C underpinnings. The very early releases of rustls made noises I interpreted to be that too, even though they never explicitly stated it being a goal (and ring certainly isn’t Rust native like that). rustls picked ring, and ureq 1.x and 2.x used rustls/ring.
As I was working on ureq 3.x, rustls advertised they were switching their default to aws-lc-rs. However the build requirements for aws-lc-rs were terrible – like requiring users on Windows to install nasm (this has since been fixed).
One of ureq’s top priorities has always been to “just work”, especially for users new to Rust. I don’t want new users to face questions about which TLS backend to choose. Hence I stuck with rustls/ring for ureq 3.x.
aws-lc-rs has improved, but it is still the case that ring has a higher chance of compiling on more platforms. RISC-V is the one I keep hearing about.
Wait, does that mean the Rust ecosystem is moving towards relying on Amazon and AWS for its cryptography? That doesn’t sound great. Not that I believe Amazon would add backdoors or anything like that, but I expect them to maintain aws-lc and aws-lc-rs to suit their own needs rather than the needs of the community. It makes me lose some confidence in Rust for these purposes, to be honest.
I expect them to maintain aws-lc and aws-lc-rs to suit their own needs rather than the needs of the community
What do you see as the conflict here, i.e. where would the needs differ for crypto libraries?
I’d expect a corporate funded crypto project to be more likely to get paid audits, do compliance work (FIPS etc), and add performance changes for the hardware they use (AWS graviton processors I guess), but none of that seems necessarily bad to me.
Things like maintaining API stability, keeping around ciphers and cryptographic primitives which AWS happens to not need, accepting contributions to add features which AWS doesn’t need or fix bugs that don’t affect AWS, and improving performance on platforms which AWS doesn’t use are all things that I wouldn’t trust Amazon for.
Most of Asahi’s Linux work is in Rust, such as their new GPU drivers for Apple Silicon. Part of that requires access to the DMA subsystem. An Asahi maintainer other than Marcan wanted to upstream Rust bindings to DMA. These would have been part of the “Rust for Linux” subsystem, which is explicitly exempted from the usual kernel stability guarantees. The maintainer of DMA in Linux objected to them upstreaming those bindings, as he felt he would then be obligated to maintain the Rust code, and he did not want to maintain Rust. This was combined with some uncharitable comments on the Rust for Linux project’s goals. There has been a lot of debate on other internet forums about whether these comments were uncharitable about Rust, Rust for Linux, or some other distinction that personally I think doesn’t really matter.
Marcan and that maintainer then exchanged words on the LKML. Unsatisfied, Marcan appealed to his Mastodon for support. Linus then stepped in with a reply heavily condemning the appeal to social media and not really addressing the complaints that led to that point. Marcan later resigned from his kernel maintainerships, and now also from the Asahi project.
I think this understates the position of the maintainer. He sent a message to LKML that states it pretty clearly: he thinks a multi-language kernel is a terrible mistake. And I can understand his position — which to me demonstrates that there is a leadership failure happening, because this kind of fundamental conflict shouldn’t be allowed to go on this long. “Disagree and Commit”, as they say at Amazon.
Every additional bit that the another language creeps in drastically reduces the maintainability of the kernel as an integrated project. The only reason Linux managed to survive so long is by not having internal boundaries, and adding another language complely breaks this. You might not like my answer, but I will do everything I can do to stop this. This is NOT because I hate Rust.
…
The common ground is that I have absolutely no interest in helping to spread a multi-language code base. I absolutely support using Rust in new codebase, but I do not at all in Linux.
Since there hasn’t been a reply so far, I assume that we’re good with
maintaining the DMA Rust abstractions separately.
Hence, the next version of this patch series will have the corresponding
maintainer entry.
Danilo
I.e. after the explicit statement that the rust people, not Hellwig, would be maintaining it separately.
(And then we reach the post you quoted, where he makes it clear that his reasons are that he doesn’t want Rust in Linux at all, not that he doesn’t want to maintain it)
Yes, I understated the understatement. :) His stated goal is to use his influence and control over the DMA subsystem to prevent Rust (edit: or any other non-C language), from being in the kernel at all. As his NAK says:
If you want to make Linux
impossible to maintain due to a cross-language codebase do that in
your driver so that you have to do it instead of spreading this cancer
to core subsystems. (where this cancer explicitly is a cross-language
codebase and not rust itself […])
Classic LKML rhetorical style here, to use a loaded word like “cancer” to make what is actually a reasonable technical point.
It is not a reasonable technical point! It demonstrates a fundamental incuriosity about technical matters. Categorical technical statements are rarely reasonable.
I disagree in general, but not in specifics. I’ve spoken to Christoph Hellwig at length (literally 6 hours) about the topic of Rust and it was not combative. He was curious. It’s hard to sum up, but his disagreements are rooted in specifics and he’s a hardliner about the single source language thing. But not for the sake of being a hardliner. His hard-line stance comes out of a different weighting of the issues at hand, not an outright dismissal.
Those people existing is to be expected and it doesn’t invalidate their opinions.
Obviously not a fan of LKML rhetoric, but yep, you get the tone you built over years.
You can disagree, but for the rest of us who haven’t had these super private, intimate exchanges, where we get to “really know” someone, what he puts out in public is our perception of him. To that end, he’s called the project a cancer and declared he’s going to do whatever he can to stop it, which is not what a curious person does. He comes across instead as a religious zealot, or “hardliner” in your terms.
That’s fair, but it’s also a categorical mistake to read a (brusque and inappropriate) email and judge a whole person based on it for days and weeks on the internet. I’m not saying you “really need to know someone”, but you also need to be aware that hundreds of hours are spent reading into what those statements may mean.
I’ve personally had to make the decision whether to allow an additional language into a large codebase, and there are some major negatives to consider that are unrelated to the technical aspects of the language. The maintainer said three times in the above excerpts that he doesn’t have a negative opinion of Rust itself.
I’ve always heard that Google allows only four languages in their codebase, and one of those is only because they invented it. :)
By “categorical”, you mean saying “no second language could ever have upsides that overcome the downsides of having two languages”? I agree that’s a pretty conservative position. It’s hard to even imagine another candidate for second language, though (I dunno, Zig maybe?), so I didn’t take it as literally categorical.
And I also do not want another maintainer. If you want to make Linux
impossible to maintain due to a cross-language codebase do that in
your driver so that you have to do it instead of spreading this cancer
to core subsystems. (where this cancer explicitly is a cross-language
codebase and not rust itself, just to escape the flameware [sic] brigade).
I think Zig sees itself as a candidate second language for C projects, but that’s probably beside the point; nobody’s trying to bring it into the kernel.
Indeed, but that call was already decided when Rust support was first merged.
The goal of the experiment is to explore those tradeoffs. You can’t do that if you don’t merge things!
I’ve always heard that Google allows only four languages in their codebase
And yet they are one of the companies pushing Rust for Linux… They understand the tradeoff. The Rust for Linux people understand the tradeoff. They just think it is worth it.
Totally. But that does seem like a valid engineering question people could disagree about.
However, this is exactly the kind of huge impactful question that should get escalated to the chief architect. And if there’s a sanctioned scouting effort to gather experience to figure out the right answer, somebody found to be actively subverting that effort should be…corrected…by the chief architect.
It’s interesting that they won’t allow it in their internal codebase but they’re happy to let the Linux maintainers take on the burden of adding it there.
R4L is officially an “experiment” with the goal of exploring tradeoffs, i.e. satisfying curiosity.
This whole debate is so tired. The “including Rust” ship has, at least temporarily, sailed, and maintainers blocking it based on not wanting Rust/a second language doesn’t make sense when the whole point is to explore that option.
Maybe all the code will get deleted at some point, but the code needs to be merged first to then decide that based on how well it works in practice.
To have a thorough experiment, it must be merged into the official tree. One must be able to see how this interacts with a diverse set of hardware and software configurations, and also how this affects the development process inside the kernel community. Some maintainers are afraid of how the introduction of Rust would affect them in the future and the maintainability of the code base. Without it being in the official tree, there wouldn’t be any conclusion on those points.
It sounds like some of the kernel maintainers at least don’t want to have to pay the cost of this large-scale experiment? Like they don’t feel it’s worth the downsides of massively complicating the kernel into a multi-language codebase? Like they understand the potential benefits of memory safety but they feel they are outweighed by the risks?
These individuals believe that they know the result of the experiment before it’s been run. But they do not, because they cannot, by definition. Of course they don’t want to pay the cost, because change is hard, and deep down they know that there’s a chance that the experiment is successful, and thus their responsibilities will either change or they will have to cede responsibility to someone more capable.
That there is benefit to Rust in theory is hardly in question. It provably solves problems that the kernel has. Whether it can solve those problems without causing larger ones is why the experiment is required. Blocking the experiment for essentially any reason is more or less indefensible, because the entire point is that they don’t know.
That sounds way too confident and assured of a statement to me. You are discounting the opinion of subject matter experts as if they have no idea what they’re talking about. If someone came to you at your workplace and said that they want to run an experiment to convert your codebase to a new language, and they want you to help maintain and support that experiment while also delivering on all your existing obligations, would you be so quick to accept that? I don’t know about you, but personally I would and have pushed back on rewrite suggestions as a bad idea, despite the perceived benefits (for example, Management thinks that rewriting in JavaScript will make it easier to hire people for the project in the future).
Would rewriting in JavaScript have possibly made me redundant? Maybe. But it would also be massively expensive, cause huge churn, and have a real possibility of failing as a project. We can’t just ignore the very real possibility of risks because of the expected benefits.
There is no interpretation in which it’s too assured. My position is that the result of the experiment is unknown, and that the potential benefits are known, both of which are hard facts.
I have run such experiments in my workplace before. Some have succeeded and some have failed. The successes have massively outweighed the impact of the failures.
Assuming that you know the outcome before you begin is the pinnacle of hubris, which we see time and again with these entrenched maintainers. They may be domain experts, but they deserve every ounce of criticism that they receive. Closed mindedness is not how progress is achieved.
My point is that there must be some threshold that you use to decide whether an experiment is going to be useful to run, or not even worth the effort because your experience and expertise tell you otherwise. Or would you accept every experiment that anyone proposed? E.g., suppose someone wanted to model your entire technical stack with TLA+ for high assurance purposes? On paper it sounds like a great idea–formally verify your concurrency properties–but don’t you see how a reasonable project lead might say ‘While this project could certainly bring great benefits, it’s not worth the tradeoff with our team size and capabilities right now. That may change in the future’?
Some threshold must exist, yes. Presumably, that threshold should be somewhere below “the progenitor of the system has decided that the experiment should be run”, which is currently the case of RFL.
Individual system maintainers should not and must not be able to usurp the direction of the project as a whole.
Your core misunderstanding is that we are coming from a tabula rasa. We are not, and if your position is that Linus’s previous endorsements shouldn’t stand, then we’re having an entirely different (and much less tenable) conversation.
“the progenitor of the system has decided that the experiment should be run”, which is currently the case of RFL.
Is it though? To my understanding the decision was only that they will try it and see if it works on both the technical and the social levels. There was never any guarantee given that Rust will be shipped in Linux at all costs. To my understanding Linus’s approach is to see if Rust is overall viable for the kernel, and that includes the opinions of the other subsystem maintainers. Because if they are overall against it, the experiment is dead in the water.
Again, that decision has already been made. There’s no point questioning it over and over. Doing so is distracting from the debates that are important now: how can leadership do better at enforcing these project wide decisions to not let the situation fester like R4L, and reviewing the Rust code without raising the same solved questions again and again (concern trolling).
If you’re genuinely curious about those questions, there’s plenty of text written about it. LWN is a good place to start.
I hear you, but it is a bit in the nature of projects that are very distributed in decision making, with high individual authority, that indeed, decisions will be questioned again and again and again at every point. Those projects need a lot of staying power and the willingness to talk about points again and again and again.
It comes with its strengths; in particular, those projects are extremely resilient and fast in other regards. You can’t have both. Pushing top-down decisions in such an environment can have other unintended effects.
You’re right that it’s in rust/, though it’s unclear to me if it’s properly in the Rust subsystem or not. You and others may be right that this patch may still go through; that seems to depend on Greg KH and Linus.
So he has the authority to block it, but not to reject it? He’s the guy referred to above as “the guy who blocked the patches that the Asahi Linux folks wanted.”
There has been a lot of debate on other internet forums about whether these comments were uncharitable about Rust, Rust for Linux, or some other distinction that personally I think doesn’t really matter.
Personally I think it does matter when a core maintainer calls a project that’s part of the kernel “cancer” and the guy who’s in charge doesn’t seem to care.
And I also do not want another maintainer. If you want to make Linux
impossible to maintain due to a cross-language codebase do that in
your driver so that you have to do it instead of spreading this cancer
to core subsystems. (where this cancer explicitly is a cross-language
codebase and not rust itself, just to escape the flameware brigade).
The cancer is not Marcan, Asahi, R4L, or Rust. The cancer is anything-but-C. The cancer is a two-language codebase.
Still a very discouraging comment from a maintainer, and it bucks Linus’ decision that trying out Rust in the kernel is OK, but the distortion of this comment has been bugging me for the last week.
I don’t have an opinion, but I think a fair and less charged way than “cancer” of stating the position is just this: using Rust would have positive benefits, but they are far outweighed by the negatives of allowing a second language into the kernel.
The pros and cons of RfL are somewhat subjective, and the final balance is hotly debated.
But I find it quite telling that Hellwig’s core argument (that a multi-language codebase requires more work) is held by people who didn’t try doing any of that work. Whereas the kernel devs who started using Rust are explicitly saying “the API compat churn isn’t that big a deal, we can do that work for you if you want”.
We mostly hear about the drama, but it seems that the overall feeling toward Rust among kernel devs (not just RfL devs) is generally positive.
I don’t think the metaphor conveys any technical meaning beyond what I said. I don’t know what “kills the organism” is supposed to relate to. Is the “organism” the kernel? The community of kernel developers? And what does “kill” equate to in reality? It would be better to have the discussion in concrete terms rather than metaphors.
It does convey the personal outrage of the maintainer better, certainly.
Well, notice that you aren’t absorbing my actual point too well, because you’re very focused on making your own.
yes, I’m ignoring the less absurd things you’ve said because I don’t know what to take seriously when you haven’t retracted the most absurd thing.
To give him the benefit of the doubt, he could have been trying to convey the technical meaning of “growth” in the sense of “if you let Rust in, we’ll just have more Rust”. However, that’s an utterly vacuous thing to say, because obviously the whole point of the exercise was to let Rust in and see what happens, and why would you do that unless you want to have more of it if the experiment succeeds?
one characteristic of cancerous growth is that it happens whether you want it to or not.
It conveys no useful information beyond “I don’t think Rust is a good idea at all, for reasons”; in other words, “the benefits don’t outweigh the negatives”. In fact it confuses matters because it implies he thinks Rust would be OK if only it didn’t grow.
it doesn’t imply that, but I do think he would be much more OK with a little bit of Rust if the scope were somehow guaranteed not to expand. like I don’t know, maybe there’s a carve-out for testing a particular class of rust drivers in linux to help find bugs, but eventually the drivers are ported to C for use in linux and the rust version is used in a separate kernel written from scratch in rust?
I’m actually trying to be polite to the guy here, because I truly believe it was an emotional outburst, not an attempt at a technical argument. His actual technical argument (stated much more clearly elsewhere) is that there should be no Rust at all in the kernel, ever, not that once there is some, there will be more. So if “cancer” was supposed to mean “growth” as a technical argument, he misstated his own argument.
so you wouldn’t say that you want to have no cancer at all in your body, ever? if it’s a non-cancerous tumor, I have much less of a problem with it being in my body.
I don’t want to dig into the minutiae of why this metaphor is inaccurate — my point is that using this metaphor at all is unprofessional and counterproductive because of the heavy emotional baggage it comes with. Just say “the introduction of Rust anywhere in the codebase will inevitably result in a growing amount of Rust in the codebase”.
okay, so do you retract your previous statement that “I don’t think the metaphor conveys any technical meaning beyond what I said”?
my intention was not to get into minutia either, but we’ve gone back and forth three times and you still haven’t retracted that statement so I don’t know where you stand.
It’s especially odd in this context, given the history of this metaphor regarding Linux.
so our enemies allow themselves to use the metaphor to efficiently communicate with each other, but we have to forswear it.
If you don’t want it there at all, it’s redundant to also complain that it’s growing. Cancer also has many other attributes besides “growing” that are irrelevant or contradictory to the position taken.
The reason he used that particular metaphor was to express disgust and horror, not to be technically accurate. If he just wanted to evoke “growth” he could have said it was like ivy. Disgust and horror are not an appropriate device to call on in this context, IMO.
Re Mr. Ballmer, the enemies were wrong that time, so don’t you think evoking that argument by using the same metaphor muddies the rhetorical water a bit?
if I can’t tell whether you stand by your statements, it feels futile to interpret them as if you believe them. that’s my issue with continuing in light of the fact that you are just leaving “I don’t think the metaphor conveys any technical meaning beyond what I said” without clarifying whether you believe it.
“complaining that it’s growing” is completely different from identifying the growth as a factor in the technical drawbacks.
Cancer has a multitude of characteristics, of which growth is only one. If anything, it has many characteristics contradictory to the point being made: it’s an undesirable outcome of natural processes, it is typically self-generated from within rather than deliberately introduced, it is hard to eliminate because it’s nourished by the same system that sustains healthy cells…
If you have to tell me exactly which part of the metaphor I’m supposed to pay attention to in order to get the technical meaning, then you aren’t conveying technical meaning, you’re muddying the water. Just say what you mean.
The additional meaning being conveyed here is not technical, it’s emotional, and I do agree emotional meaning was conveyed.
The additional meaning being conveyed here is not technical
I take this to mean that you still stand by the statement that “I don’t think the metaphor conveys any technical meaning beyond what I said.” if you treat all technical metaphors like this, I’m glad you’re not my coworker.
If you have to tell me exactly which part of the metaphor I’m supposed to pay attention to in order to get the technical meaning, then you aren’t conveying technical meaning, you’re muddying the water.
or it could mean I am conveying technical meaning but you’re being willfully obtuse. just maybe…
I could be wrong, but I get the feeling that people didn’t quite get why Hellwig used the word “cancer”. I could be wrong about this in a lot of ways, maybe I too misunderstood what Hellwig meant, maybe I misjudged how others are interpreting what he said. But allow me to add my 2 cents.
People seem to be interpreting what Hellwig said (i.e. calling R4L “cancer”) as him saying R4L is bad, evil, a terrible thing, etc. And I totally understand why many would think that, and would agree this is a terrible choice of words. But I think that Hellwig’s focus is actually on the fact that cancer spreads. A better word to use would probably be like “infectious”, or “viral”. Although I disagree with him, I think what he was saying is that, despite what Rust developers promised, Rust will spread to more corners of the kernel and increase the maintenance burden and will have a negative impact on the kernel as a whole.
I think you’re right, but that doesn’t exculpate Hellwig in the least. One of the defining characteristics of cancer is that it is malignant. It seems pretty clear that he used this metaphor because he thought that Rust would spread and that it is dangerous and harmful.
“infectious” or “viral” have different connotations. viral would be if they thought the BSDs might catch it. infectious is not as specific as cancer; cancer specifically grows from one area and consumes more and more of the things next to it. infections can do that but not necessarily.
Unrelated to the rust issue, my understanding is that such toxic language actually isn’t all that strange in the Linux kernel maintainer community. It feels like they try somewhat not to overdo it, but they aren’t that much concerned about the toxicity. More like, “I’ll try but I don’t actually care”.
It’s useful in some cases, but when the effect of it is raising the temperature of the discussion, then I would say it isn’t useful in that instance. There are other ways to convey the same meaning that wouldn’t have attracted the same amount of strong emotions. We’ve seen examples of such alternate arguments in this thread that would have been more suitable.
sure. but then don’t fixate on the word, and acknowledge that their choice to use strong language can also be an effect of the rising temperature of the discussion, for which responsibility is shared among all participants.
There’s this idea that multi-language codebases are categorically a disaster, right? What really bothers me is how small-minded it is. There are many C and C++ codebases that have introduced Rust. It’s not easy but it is possible, and it often is worth it. Some attempts have been successful and others haven’t. You can learn so much from other people’s experiences here!
Simply calling multi-language codebases a cancer closes off discussion rather than opening it. That goes against the basic curiosity that is a hallmark of engineering excellence.
Honestly everything around R4L that has been going has just cemented in me the belief that the average long time Linux maintainer is fundamentally incompetent. They’re reasonably good at One Thing, but totally incurious and scared of anything outside of that One Thing they’re good at.
Is it possible that we could take an experienced kernel maintainer’s opinion seriously instead of dismissing it immediately as incurious and ‘against’ engineering excellence? If a co-worker who specializes in a different domain came to you and kept insisting that something you know is very difficult and not worth the tradeoff is actually quite easy, would you be incurious and ‘against’ engineering excellence if you dismissed their wild suggestions without a deep discussion of the tradeoffs?
Evaluating who to take seriously and who not to is a matter of judgment cultivated over a lifetime.
If a co-worker who specializes in a different domain came to you and kept insisting that something you know is very difficult and not worth the tradeoff is actually quite easy, would you be incurious and ‘against’ engineering excellence if you dismissed their wild suggestions without a deep discussion of the tradeoffs?
Depends on many things, but in general, yes, of course, especially if they were someone I took seriously on other matters. If nothing else, it’s a chance to sharpen my arguments. If I keep having the same arguments over and over I might even step back and write an essay or two explaining what I believe and why I believe it. Here’s one that’s public that was a result of the sort of back-and-forth you’re talking about.
Reading the discussions on this elsewhere, there’s no way to describe this that wasn’t going to get accused of bias by people holding strong opinions on either marcan or on the rust for linux project.
I’m not accusing of bias (which is more or less unavoidable), but of constructing a narrative. The facts are: he exchanged words on the LKML and then wrote Mastodon posts about the exchange. The narrative you’re constructing is: he exchanged words on LKML, then because he was unsatisfied with how that discussion was going, he wrote Mastodon posts about the exchange in an attempt to gather support.
I think it would be better to leave out the guesswork about intentions, or at least clearly separate it from the factual retelling of events.
Correction: Marcan never “exchanged words” with Christoph Hellwig on the patches, which had not even been posted by him. He just replied to the thread with a tirade.
I’m one of those using an unusual window manager (sawfish) with xfce4. I’m starting to dread the inevitable day that I’ll have to change back to some mainstream system that will of course include all the pain points that led me to my weird setup in the first place. Fine, so X11 needs work, but replacing it with something as antimodular as wayland seems to be is just depressing. Something must be done (I agree!); this is something (er, yes?); therefore this must be done (no, wait!!). We’ve seen it with systemd and wayland recently, and linux audio infrastructure back in the day; I wonder what will be next :-(
First, reports of X’s death are greatly exaggerated. Like the author said in this link, it probably isn’t actually going to break for many years to come. So odds are good you can just keep doing what you’re doing and not worry too much about it.
But, even if it did die, and forking it and keeping it going proves impossible, you might be able to go a while with a fullscreen xwayland rootful window and still run your same window manager in there. If applications force Wayland on you, maybe you can nest a wayland compositor in the xwayland instance to run that program under your window manager too. I know how silly that sounds - three layers of stuff - but there’s a good chance it’d work, and patching up the holes there may be a reasonable course of action.
So I don’t think the outlook is super bleak yet either.
I’m not convinced that building something like those kinds of window managers on top of wlroots is significantly more difficult than building those kinds of window managers on top of X11. There’s a pretty healthy ecosystem of wlroots-based compositors which fill that need at least for me, and I believe that whatever is missing from the Wayland ecosystem is just missing because nobody has made it yet, not because it couldn’t be made. Therefore, I don’t think your issue is with “antimodularity”, but with the fact that there isn’t 40 years of history.
I assure you, my issue is with antimodularity. The ICCCM and EWMH are protocols, not code. I can’t think of much tighter a coupling than having to link with and operate within a 60,000-line C-language API, ABI and runtime. (From a fault isolation perspective alone, this is a disaster. If an X window manager crashes, you restart it. If the wayland compositor crashes, that’s the end of the session.) You also don’t have to use C or any kind of FFI to write an X window manager. They can be very small and simple indeed (e.g. tinywm in 42 lines of C or 29 lines of Python; a similarly-sized port to Scheme). You’d be hard pressed to achieve anything similar using wlroots; you’d be pushing your line budget before you’d finished writing the makefile.
There is nothing inherent in Wayland that would prevent decoupling the window manager part from the server part. Window managers under X are perfectly fit to be trivial plugins, and this whole topic is a bit overblown.
Also, feel free to write a shim layer to wlroots so you can program against it in anything you want.
I’m talking about modularity. Modularity is what makes it feasible to maintain a slightly-different ecosystem adjacent to another piece of software. Without modularity, it’s not feasible to do so. Obviously there’s nothing inherently preventing me from doing a bunch of programming to get back to where I started. But then there’s nothing inherently preventing me from designing and implementing a graphics server from scratch in any way I like. Or writing my own operating system. There’s a point beyond which one just stops, and learns to stop worrying and love the bomb. Does that sound overblown? Perhaps it always seems a bit overblown when software churn breaks other people’s software.
With all due respect, I feel your points are a bit demagogic without going into a specific tradeoff.
The wayland protocol is very simple, it is basically a trivial IPC over Linux-native DRM buffers and features. Even adding a plug-in on top of wlroots would still result in much less complexity than what one would have with X. Modularity is not an inherent goal we should strive for, it is a tradeoff with some positives and negatives.
Modular software by definition has more surface area, so more complexity, more code, more possibility of bugs. Whether it’s worth it in this particular case depends on how likely we consider a window manager “plugin” to crash. In my personal opinion, this is extremely rare - they have a quite small scope and it’s much more likely to have a bug in the display server part, at which point X will fail in the exact same manner as a “non-modular” Wayland server.
I’m not sure exactly what pain points led you to your current setup, but I don’t think the outlook is that bleak. There are some interestingly customizable, automatable, and nerd-driven options out there. Sway and hyprland being the best-known but there are more niche ones if you go looking.
I use labwc and while it more or less does the job, it’s years away from being anywhere near something like Sawfish (or any X11 window manager that’s made it past version 0.4 or so). Sawfish may be a special case due to how customizable and scriptable it is, but basically everything other than KWin and Gnome’s compositor is still in the TWM stage of its existence :).
Tiling compositors fare a little better because there’s more prior art on them (wlroots is what Sway is built on) and, in many ways, they’re simpler to write. Both fare pretty badly because there’s a lot of variation; and, at least the last time I tried it (but bear in mind that was like six years ago?) there was a lot more code to write, even with wlroots.
There are some interestingly customizable, automatable, and nerd-driven options out there.
Ah, sorry, I misread that via this part:
There are some interestingly customizable, automatable, and nerd-driven options out there.
…as in, there are, but none of the stacking ones are anywhere near the point where they’re on-par with window managers that went past the tinkering stage today. It’s certainly reasonable to expect it to get better; Wayland compositors are the cool thing to build now, X11 WMs are not :). labwc, in fact, is “almost” there, and it’s a fairly recent development.
Just digging into Hyprland and it’s pretty nice. The keybindings make sense, and when switching windows, the mouse moves to the focused window. Necessary? Probably not. But it’s a really nice QoL improvement for a new Linux user like myself.
I’m pretty much in the same situation as you. I’m running XFCE (not with sawfish, but with whatever default WM it ships with). I haven’t really explored using Wayland since X11 is working for me, so I’ve found no reason to switch yet. For a while it seemed like if you wanted a “lightweight” desktop environment you were stuck with tiling Wayland environments like Sway. I still prefer stacking-based desktop environments, but don’t really want to run something as heavy as GNOME or KDE. I’ll probably eventually switch to LXQt, which will get Wayland support soon.
It sounds like many things are changing for the better. But at the same time it doesn’t look like we’re already in a working state for people that don’t want to tinker around. I’d rather not have random issues and quirks while actual recording support is so-so depending on the compositor. Especially when every problem is either “you’re using the wrong distro” or “you’re using the wrong compositor, works on mine”.
I also would rather not have random issues and quirks, and honestly, that means I don’t want X11. X is the king of random issues and quirks in my experience.
I mean, if the problem is that they’ve finally got a good solid solution but the software you’re using hasn’t gotten around to implementing it yet, or the software you’re using has implemented it upstream but your distro hasn’t pulled that version in yet, what other response do you really expect? You can use a system where it already works, or you can wait for the necessary changes to make their way to your setup, or you can pitch in to make it happen faster.
On a technical level I agree with you. But on a consumer level it sounds like you’re in for a long stretch of frustration, during which you’d probably rather off-board from Linux and move to a Mac (there, I said it, you either die a hero..). After which there is even more friction for migrating back to Linux in x years, when the whole ecosystem and LTS train has arrived at a state of Wayland that is actually usable - without resorting to hacks that make you sound like someone trying to regedit Copilot out of Win11.
The language is interesting, and I have no real opinion about the syntax. I played a little with it in user space, and I just absolutely hate the cargo concept. It is like pip, and I hate having to pull down other code that I do not trust. At least with shared libraries, I can trust a third party to have done the build and all that. Also, if you have a large library it does bloat the applications that use it.
I don’t understand this attitude. The first thing I don’t understand is this about “having” to pull down code that he doesn’t trust. You don’t have to do anything, you choose what you want to depend on yourself. But then what I really don’t understand is why some random third party compiling the code and giving you a shared library suddenly makes it reasonably trustworthy.
I interpreted this as saying: In C the norm is to use fairly large shared library dependencies that come as a binary from someone you trust to have built and tested it correctly; most often it’s from a distribution with a whole CI process to manage this. The only source code you compile is your own. Whereas in Rust/Go/Python/etc. the norm is to download everybody else’s source code and compile it yourself on the fly, and the libraries are typically smaller and come from more places. Also, in the typical default setup (”^” versions) it’s easy to pull down some very recent source code without realizing it.
I can see how that would feel like you’ve thrown away a whole layer of stability and testing compared to the lib.so model.
When using my OS’s package manager to install dependencies for my program, I have some safeguards. First of all, I can see the dependency was important enough for someone to package it. There are also some checks done before the package is added, and changes are checked before they are accepted. When security updates are available, they are packaged and I can simply update, so all running software (even software not managed by the package manager) gets the security fix after reloading the dependency. And a maintainer can’t go rogue and just break my dependency. I get all of this for the dependencies of my dependencies too.
Of course this doesn’t amount to a full audit of all my dependencies and doesn’t fix all problems. It just helps manage dependencies and the trust placed in them, while with cargo/pip/… you need to do all of this on your own for every dependency and every update.
I know it’s a spectrum, but really OS packages aren’t a good benchmark. There have been some incredible bugs over the years, often due to OS packaging changing or ignoring the defaults of the upstream. For example when Debian screwed up the private key generation…
It is very, very hard to write any serious Rust program without any dependencies from an untrusted code repository such as Cargo. In C and C++, the same is trivial, because almost all dependencies you could ever want are provided by your operating system vendor. The core difference is between trusting some random developer who uploaded some code to Cargo, and trusting your operating system.
I first wanted to write a long comment and decided to just blog it.
The TL;DR is roughly that I doubt you can get away with only using OS dependencies, that making that work is even more work, and that you don’t actually gain as much security as you would like.
As a user, I don’t have much more trust in libfoo packaged by my distro (if it’s packaged) than in libfoo fetched from upstream. And as a distributor, I have much less trust in libfoo packaged by my users’ 50 different OSes than in libfoo I fetch from upstream.
It is very, very hard to write any serious Rust program without any dependencies from an untrusted code repository such as Cargo.
When you use distro provided libraries, you have to trust thousands of authors. I trust my distro maintainers to not do anything malicious, but I don’t trust them to vet 10000 C packages. Which means I have to trust the authors of the code to not be malicious.
The distro package update process is to pull down new sources, compile, run the test suite (if one is provided), only look at the code if something isn’t working, and then package it. After that, someone might notice if there is something bad going on. But it’s a roll of the die. At no point in this process is anything actually vetted. At most you get some vetting of a few important packages, if your distro is big and serious enough.
In C and C++, the same is trivial, because almost all dependencies you could ever want are provided by your operating system vendor.
Considering how often you find random bespoke implementations of stuff like hash tables in C projects, this is clearly untrue.
In C and C++, the same is trivial, because almost all dependencies you could ever want are provided by your operating system vendor.
Provided by your operating system vendor? Which operating system? If you write cross-platform code, you only get to use the dependencies that all platforms provide (which is pretty much none at all). C and C++ don’t even ship with proper UTF-8 support. It’s completely impossible to write any serious software in these languages without pulling in some external code (or reinventing the wheel). As someone who has earned a living with C/C++ development for years, I have a hard time understanding how you arrived at this conclusion.
The scope of the C/C++ standard library and the Rust standard library is very similar. Rust’s standard library has no time features or random number generators (for that, the semi-official crates chrono, time and rand can be used), but it has great UTF-8 support. Overall, you’ve got about the same power without using any dependencies.
My operating system vendor is the Fedora project on my desktop, and Canonical or the Debian project on various servers. All of those provide plenty of C and C++ libraries in their repositories.
I don’t understand why you talk about C++’s UTF-8 support or the size of C++‘s stdlib, that’s completely irrelevant to what I said.
If you use Windows (MSVC), you basically have no third party libraries available whatsoever. You need to download binaries and headers from various websites.
I don’t understand why you talk about C++’s UTF-8 support or the size of C++‘s stdlib, that’s completely irrelevant to what I said.
It’s the best example of a third party library you have to pull in for basically every project. This is not provided out of the box or by the operating system vendor.
I don’t use Windows. The person in the article doesn’t either. TFA contains a criticism of Rust from the perspective of a Linux user, not from the perspective of a Windows user.
In C and C++, [to write any serious program without any dependencies from an untrusted code repository] is trivial, because almost all dependencies you could ever want are provided by your operating system vendor.
to:
I don’t use Windows.
Yes, dependency management is trivial if you only ever use a system where all your desired dependencies are provided through official channels. However, this is not the experience most C/C++ programmers will have. It’s not even the case if you just use Linux: Recently, I used SDL3 and I had to download the tarball and compile the source myself because it was too new to hit the apt repository of my distro.
The complaints from the article are from the perspective of a Linux user. My post was also from the perspective of a Linux user. Windows is not relevant in this conversation.
Unless you specified otherwise upfront, Windows is going to be presumed relevant. Pretending to be surprised that someone thought it wasn’t is silly.
The person you are talking to then goes on to describe a Linux counterexample that is quite common in C/C++ development. If your response to this is “well, I don’t do any games/graphics/GUI development”, then cool: you have now defined an extremely narrow category of C/C++ development where it’s common to only use libraries that come already installed. It is nowhere close to the general case though, which is the point of the commenter you are responding to.
As for the SDL example: Sure, there are situations where you may need to add dependencies which aren’t in your distro in C or C++. However, this is fairly rare and you can perform a risk assessment every time (where you’d conclude that the risk associated with pulling down SDL3 outside of your distro’s repos is minimal).
Contrast this with Rust (or node.js for that matter), where you’re likely to have so many dependencies not provided by your distro that it’s completely unreasonable to vet every single one of them. For my own Rust projects, the size of my Cargo.lock file is something that worries me, just like the size of the package-lock.json file worries me for my node.js projects. I don’t know most of those dependencies.
As expected, it’s not very easy to read. The typography of raw text in web browsers is, sadly, horrible. It’s like motherfuckingwebsite.com compared to bettermotherfuckingwebsite.com, except raw text is even worse than unformatted HTML.
I always felt that bettermotherfuckingwebsite.com was an indictment of browser implementers. There’s no reason browsers can’t have good default styles. A publisher shouldn’t have to do the work that bettermotherfuckingwebsite did to make things look readable. That can be entirely on the user agent.
Oh I 100% agree. It’s a bloody shame that the web is, and always has been, completely unreadable by default. Even (or especially!) in its original form as a linked rich text document sharing network, there’s no excuse for not implementing basic typographic conventions to make the content readable.
Yet here we are, and it isn’t changing any time soon. And so, as bloggers, I feel like we have a responsibility to our readers to present them with something that’s better than the default HTML or raw text formatting.
I mostly agree. I add CSS to my HTML for a living and in my free time; this blog is an exception and I think I will keep it like this, mostly for the challenge and for the ability to add ASCII art if I want without needing an HTML pre tag :P
Backwards compatibility, something-something? Like viewport adjustment by default: it’s painful that everyone has to set it up, but a lot of stuff would likely break if browsers did that by default.
I was recently told about font-family: system-ui and adopted it.
But unfortunately, I suspect most users do not adjust their browser defaults either to make things readable to them, because likely this only has an effect on a few random websites.
I kinda think the Gemini world works better- just allow no styling and in theory users will just set up their browser.
To clarify what I said, because I was unclear: I think not allowing styling is a good idea. Semantic markup (or very simple “bold”/“italics” styling) is frequently a good idea.
(I don’t think Gemini is perfect. I think it’s mostly inspiring.)
This is a big change, and shows an impressive level of courage. I love to see how the leadership in Ubuntu can unafraid to be bold and question traditional ways of doing things (even though that sometimes results in results I find disagreeable, e.g snap packages).
I have recently had to run a vulnerability scanner against an embedded Linux system, and was surprised to find CVEs relating to memory safety bugs in recent versions of old GNU tools you’d think would have those sorts of issues ironed out by now, like GNU patch (and various Busybox tools for that matter). From an “exploitable memory safety bug” perspective, I think I would trust even a relatively young project like uutils over even old, battle-tested C software.
Slowly but surely replacing copyleft software 🙁
(with MIT here)
Yeah, that part sucks. And it’s a huge loss for GNU as a project.
But I don’t feel like GNU has been a good steward of such vital infrastructure, it shows many signs of a broken project. Software will have critical bug fixes committed to their repository but then fail to cut a release for a long time. GNU M4 was once incompatible with GNU glibc for years because glibc changed something which caused M4 to stop compiling, M4 got a fix committed to its repo, but no new release of M4 was made.
GNU as an operating system is stuck in the 80s; the only reason GNU has seen any success in the past 30 years is that others have picked up the slack for core operating system components like kernels, init systems, graphical environments, etc.
The GNU “operating system” delivers an excellent compiler collection, an okay libc with significant issues such as incompatibility with static linking and intentional lack of “forward compatibility”, an atrociously terrible build system, and an alright but stagnant set of commands line utilities and shell. It’s not a project that I feel deserves a lot of fealty. Although I do think it’s a tragedy that the project turned out this way, I think we’re well past the time where it makes sense to look at it critically and pick up the pieces with value (such as the compiler collection) and abandon the pieces without (such as the build system, arguably the coreutils, maybe eventually the libc; a man can dream).
Sounds like we agree, I appreciate the examples :)
I’m pro replacing or improving software, especially of core utility to a system. I’m also pro having such core code copyleft.
What is the reason that the GNU project is as dysfunctional as you describe (for certain projects)? Lack of fresh talent? Broken leadership? Politics?
I am curious because I have no insights into GNU but I never perceived the tools as so poorly maintained
Broken leadership. Stallman is correct about freedom but wrong about product management. He ends up being a huge distraction (comments about minsky in particular).
There have been multiple examples of him not allowing features to be merged because they could enable closed competition, where the GNU alternative doesn’t exist or isn’t as good. In multiple cases he has had to relent after months or years because a different competitor implemented the same feature and its become obvious how far behind he is.
In a lot of cases he is fighting against GPL licensed alternatives (bzr/git), that aren’t under the GNU umbrella. Sometimes he’s fighting against more permissively licensed software.
If free software isn’t made better, it will lack adoption.
Honestly I have no idea. I’m also just an outside observer.
No matter how many times Go people try to gaslight me, I will not accept this approach to error-handling as anything approaching good. Here’s why:
Why must you rely on a linter or IDE to catch this mistake? Because the compiler doesn’t care if you do this.
If you care about correctness, you should want a compiler that considers handling errors part of its purview. This approach is no better than a dynamic language.
The fact that the compiler doesn’t catch it when you ignore an error return has definitely bitten me before.
doTheThing() on its own looks like a perfectly innocent line of code, and the compiler won’t even warn on it, but it might be swallowing an error. I learned that the compiler doesn’t treat unused function results as errors while debugging a bug in production: an operation which failed was treated as if it had succeeded and therefore wasn’t re-tried as it should have been. I had been programming in Go for many years at that point, but it had never occurred to me that silently swallowing an error in Go could possibly be as easy as just calling a function in the normal way. I had always done
_ = doTheThing() if I needed to ignore an error, under the assumption that unused error returns were of course a compile error. Does anyone know the reason why the Go compiler allows ignored errors?
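To make the failure mode described above concrete, here is a minimal, self-contained sketch (doTheThing is a hypothetical stand-in, not the commenter’s actual production code). The bare call compiles without complaint and the error simply vanishes:

```go
package main

import (
	"errors"
	"fmt"
)

// doTheThing is a hypothetical stand-in for the failing operation described
// above; it always fails so the effect is easy to see.
func doTheThing() error {
	return errors.New("the thing failed")
}

func main() {
	// Compiles with no warning; the returned error is silently dropped and
	// the program carries on as if the operation succeeded.
	doTheThing()
	fmt.Println("continuing as if everything worked")

	// The explicit discard the commenter assumed was required:
	_ = doTheThing()

	// And the handled version, for contrast.
	if err := doTheThing(); err != nil {
		fmt.Println("handling it:", err)
	}
}
```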
Because errors aren’t special to the Go compiler, and Go doesn’t yell at you if you ignore any return value. It’s probably not the most ideal design decision, but in practice it’s not really a problem. Most functions return something that you have to handle, so when you see a naked function call it stands out like a sore thumb. I obviously don’t have empirical evidence, but in my decade and a half of using Go collaboratively, this has never been a real pain point whether with junior developers or otherwise. It seems like it mostly chafes people who already had strong negative feelings toward Go.
It’s similar to array bounds checks in c - not really a problem.
I hope this is sarcasm.
Yes
Is there a serious argument behind the sarcasm as to how this is comparable to array bounds checks? Do you have any data about the vulnerabilities that have arisen in Go due to unhandled errors?
Because the programmer made an intentional decision to ignore the error. It won’t let you call a function that returns an error without assigning it to something; that would be a compile-time error. If the programmer decides to ignore it, that’s on the programmer (and so beware 3rd party code).
Now perhaps it might be a good idea for the compiler to insert code when assigned to _ that panics if the result is non-nil. Doesn’t really help at runtime, but at least it would fail loudly so they could be found.
I’ve spent my own share of time tracking down bugs because something appeared to be working but the error/exception was swallowed somewhere without a trace.
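The compiler doesn’t insert any such check today, but a rough userland approximation of the “fail loudly” idea is a tiny helper along these lines (a sketch; must is a made-up name, not part of the standard library):

```go
package main

import (
	"fmt"
	"os"
)

// must is a hypothetical helper: instead of an error being silently
// discarded, it panics, so the failure is loud and comes with a stack trace.
func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	f, err := os.Open("does-not-exist.txt")
	must(err) // fails loudly here instead of continuing with a nil *os.File

	defer f.Close()
	fmt.Println("opened", f.Name())
}
```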
This is incorrect: https://go.dev/play/p/k7ErZU5QYCu
huh… til. I always assumed you needed to use the result, probably because with multiple return values you have to assign all of them or it’s a compile-time error. Thanks.
To be fair I was just as certain as you that of course Go requires using the return values, until I had to debug this production bug. No worries.
is not an intentional decision to ignore the error. Neither is
Yet the go compiler will never flag the first one, and may not flag the second one depending on
err being used elsewhere in the scope (e.g. in the unheard-of case where you have two different possibly-erroring calls in the same scope and you check the other one).
Yeah, I always thought the _ was required, I learned something today!
I do have a few places with err and err2, it does kind of suck - I should probably break up those functions.
_, err := f.Write(s) is a compiler error if err already exists (“no new variables on left side of :=”), and if err doesn’t already exist and you aren’t handling it, you get a different error (“declared and not used: err”). I think you would have to assign a new variable, t, err := f.Write(s), and then take care to handle t in order to silently ignore the err, but yeah, with some work you can get Go to silently swallow it in the variable declaration case.
Because they couldn’t be arsed to add this in v0, and they can’t be arsed to work on it for cmd/vet, and there are third-party linters which do it, so it’s all good. Hopefully you don’t suffer from unknown unknowns and you know you should use one of these linters before you get bit, and they don’t get abandoned.
(TBF you need both that and errcheck, because the unused store one can’t catch ignoring return values entirely).
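A small sketch of the cases discussed above (the file and variable names are just for illustration); the comments record which variants, as far as I know, the compiler rejects, which it accepts, and which only a third-party linter like errcheck will flag:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	f, err := os.CreateTemp("", "example-*.txt")
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	s := []byte("hello\n")

	// _, err := f.Write(s)
	//   -> compile error: "no new variables on left side of :=",
	//      because err already exists in this scope.

	// n, err2 := f.Write(s)
	//   -> also a compile error if n or err2 is never read:
	//      "declared and not used".

	// This compiles: t is new and is used below, while the reassigned err is
	// never checked for this call, so a failed write would pass silently.
	t, err := f.Write(s)
	fmt.Println("wrote", t, "bytes")

	// A bare call compiles as well; the returned error is dropped entirely.
	// The third-party errcheck linter flags this; plain go vet does not.
	f.Write(s)
	f.Close()
}
```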
Considering how much effort the Go team puts into basically everything, this language makes it very hard to take you seriously.
Yes, except for things that they decide not to be arsed about. I can confirm this as a very real experience of dealing with Go.
Which is fair enough.
Sure, but then it is equally fair to criticize them for it.
The go compiler doesn’t do warnings, only errors. Linters do warnings, and do warn about unchecked errors.
I don’t really care. Generally speaking, I would expect compilers to either warn or error on an implicitly swallowed error. The Go team could fix this issue by either adding warnings for this case specifically (going back on their decision to avoid warnings), or by making it a compile error, I don’t care which.
This is slightly more nuanced. The Go project ships both go build and go vet. go vet is isomorphic to how Rust handles warnings (warnings apply to you, not your dependencies). So there would be nothing wrong per se if this were caught by go vet and not go build. The issue, though, is that this isn’t caught by first-party go vet, and requires the third-party errcheck.
Meh, plenty of code bases don’t regularly run go vet. This is a critical enough issue that it should be made apparent as part of any normal build, either as a warning or an error.
And that’s perfectly fine given that Go is pleasant even for quick and dirty prototypes, fun side projects, and so on.
I agree with you that it’s better for this to be a compiler error, but (1) I’ll never understand why this is such a big deal–I’m sure it’s caused bugs, but I don’t think I’ve ever seen one in the dozen or so years of using Go and (2) I don’t think many dynamic languages have tooling that could catch unhandled errors so I don’t really understand the “no better than a dynamic language” claim. I also suspect that the people who say good things about Go’s error handling are making a comparison to exceptions in other languages rather than to Rust’s approach to errors-as-values (which has its own flaws–no one has devised a satisfactory error handling system as far as I’m aware).
The fact that these bugs seem so rare and that the mitigation seems so trivial makes me feel like this is (yet another) big nothingburger.
The most common response to my critique of Go’s error-handling is always some variation on “this never happens”, which I also do not accept because I have seen this happen. In production. So good for you, if you have not; but I know from practice this is an issue of concern.
Relying on the programmer to comprehensively test inputs imperatively in a million little checks at runtime is how dynamic languages handle errors. This is how Go approached error-handling, with the added indignity of unnecessary verbosity. At least in Ruby you can write single-line guard clauses.
I don’t really follow your dismissal of Rust since you didn’t actually make an argument, but personally I consider Rust’s Option type the gold standard of error-handling so far. The type system forces you to deal with the possibility of failure in order to access the inner value. This is objectively better at preventing “trivial” errors than what Go provides.
I’m sure it has happened before, even in production. I think most places run linters in CI which default to checking errors, and I suspect if someone wasn’t doing this and experienced a bug in production, they would just turn on the linter and move on with life. Something so exceedingly rare and so easily mitigated does not meet my threshold for “issue of concern”.
That’s how all languages handle runtime errors. You can’t handle them at compile time. But your original criticism was that Go is no better than a dynamic language with respect to detecting unhandled errors, which seems untrue to me because I’m not aware of any dynamic languages with these kinds of linters. Even if such a linter exists for some dynamic language, I’m skeptical that they’re so widely used that it merits elevating the entire category of dynamic languages.
I didn’t dismiss Rust, I was suggesting that you may have mistaken the article as some sort of criticism of Rust’s error handling. But I will happily register complaints with Rust’s error handling as well: while it does force you to check errors and is strictly better than Go in that regard, this is mostly a theoretical victory insofar as these sorts of bugs are exceedingly rare in Go even without strict enforcement, and Rust makes you choose between the verbosity of managing your own error types, debugging macro expansion errors from crates like thiserror, or punting altogether and doing the bare minimum to provide recoverable error information. I have plenty of criticism for Go’s approach to error handling, but pushing everything into an error interface and switching on the dynamic type gets the job done.
For my money, Rust has the better theoretical approach and Go has the better practical approach, and I think both of them could be significantly improved. They’re both the best I’m aware of, and yet it’s so easy for me to imagine something better (automatic stack trace annotations, capturing and formatting relevant context variables, etc). Neither of them seems so much better in relative or absolute terms that their proponents should express superiority or derision.
I don’t accept your unsubstantiated assertion that this is rare, so it seems we are at an impasse.
Fair enough. It’s a pity things like this are so difficult to answer empirically, and we must rely on our experiences. I am very curious how many orgs are bitten by this and how frequently.
Couldn’t agree more (honourable mention to Zig, though).
Enabling a linter is different from doing “a million little checks at runtime”. This behaviour is not standard because you can use Go for many reasons other than writing production-grade services, and you don’t want to clutter your terminal with unchecked error warnings.
I admit that it would be better if this behaviour were part of
go vet rather than an external linter.
The strange behaviour here is not “Go people are trying to gaslight me”, but people like you coming and complaining about Go’s error handling when you have no interest in the language at all.
You can’t lint your way out of this problem. The Go type system is simply not good enough to encapsulate your program’s invariants, so even if your inputs pass a type check you still must write lots of imperative checks to ensure correctness.
Needing to do this ad-hoc is strictly less safe than relying on the type system to check this for you.
err checks are simply one example of this much larger weakness in the language.
I have to work with it professionally, so I absolutely do have an interest in this. And I wouldn’t feel the need to develop this critique of it publicly if there weren’t a constant drip feed of stories telling me how awesome this obviously poor feature is.
Your views about how bad Go’s type system is are obviously not supported by the facts, otherwise Go programs would be full of bugs (or full of minuscule imperative checks) with respect to your_favourite_language.
I understand your point about being forced to use a tool in your $job that you don’t like, that happened to me with Java, my best advice to you is to just change $job instead of complaining under unrelated discussions.
They are full of bugs, and they are full of minuscule imperative checks. The verbosity of all the if err != nil checks is one of the first things people notice. Invoking “the facts” without bringing any isn’t meaningfully different from subjective opinion.
Your comments amount to “shut up and go away” and I refuse. To publish a blog post celebrating a language feature, and to surface it on a site of professionals, is to invite comment and critique. I am doing this, and I am being constructive by articulating specific downsides to this language decision and its impacts. This is relevant information that people use to evaluate languages and should be part of the conversation.
If if err != nil checks are the “minuscule imperative checks” you complain about, I have no problem with that.
That you have “facts” about Go programs having worse technical quality (and bug count) than any other language I seriously doubt; at most you have anecdotes.
And the only anecdote you’ve been able to come up with so far is that you’ve found “production bugs” caused by unchecked errors that can be fixed by a linter. Being constructive would mean indicating how the language should change to address your perceived problem, not implying that the entire language should be thrown out the window. If that’s how you feel, just avoid commenting on random Go post.
Yeah, I have seen it happen maybe twice in eight years of using Go professionally, but I have seen it complained about in online comment sections countless times. :-)
If I were making a new language today, I wouldn’t copy Go’s error handling. It would probably look more like Zig. But I also don’t find it to be a source of bugs in practice.
Everyone who has mastered a language builds up muscle memory of how to avoid the Bad Parts. Every language has them. This is not dispositive to the question of whether a particular design is good or not.
The happy people are just happily working on solving their real problems, not wasting time complaining.
Not seeing a problem as a bug in production doesn’t tell you much. It usually just means that the developers spent more time writing tests or doing manual testing - and this is just not visible to you. The better the compiler and type system, the fewer tests you need for the same quality.
Agreed, but I wasn’t talking about just production–I don’t recall seeing a bug like this in any environment, at any stage.
In a lot of cases I am the developer, or I’m working closely with junior developers, so it is visible to me.
Of course with Go we don’t need to write tests for unhandled errors any more than with Rust, we just use a linter. And even when static analysis isn’t an option, I disagree with the logic that writing tests is always slower. Not all static analysis is equal, and in many cases it’s not cheap from a developer velocity perspective. Checking for errors is very cheap from a developer velocity perspective, but pacifying the borrow checker is not. In many cases, you can write a test or two in the time it would take to satisfy rustc and in some cases I’ve even introduced bugs precisely because my attention was so focused on the borrow checker and not on the domain problem (these were bugs in a rewrite from an existing Go application which didn’t have the bugs to begin with despite not having the hindsight benefit that the Rust rewrite enjoyed). I’m not saying Rust is worse or static analysis is bad, but that the logic that more static analysis necessarily improves quality or velocity is overly simplistic, IMHO.
I just want to emphasize that it’s not the same thing, as you also hint at in the next sentence.
I didn’t say that writing tests is always slower or that using the compiler to catch these things is necessarily always better. I’m not a Rust developer btw, and Rust’s error handling is absolutely not the current gold standard by my own judgement.
It kind of is the same thing: static analysis. The only difference is that the static analysis is broken out into two tools instead of one, so slightly more care needs to be taken to ensure the linter is run in CI or locally or wherever appropriate. To be clear, I think Rust is strictly better for having it in the compiler–I mostly just disagree with the implications in this thread that if the compiler isn’t doing the static analysis then the situation is no better than a dynamic language.
What did you mean when you said “It usually just means that the developers spent more writing tests or doing manual testing … The better the compiler and type-system, the fewer tests you need for the same quality.” if not an argument about more rigorous static analysis saving development time? Are we just disagreeing about “always”?
Ah I see - that is indeed an exaggeration that I don’t share.
First that, but in general it also has other disadvantages. For instance, writing tests or doing manual tests is often easy to do. Learning how to deal with a complex type system is not. Go was specifically created to get people to contribute fast.
Just one example that shows that it’s not so easy to decide which way is more productive.
Ah, I think we’re agreed then. “always” in particular was probably a poor choice of words on my part.
Swallowing errors is the very worst option there is. Even segfaulting is better, you know at least something is up in that case.
Dynamic languages usually just throw an exception and those have way better behavior (you can’t forget, an empty catch is a deliberate sign to ignore an error, not an implicit one like with go), at least some handler further up will log something and more importantly the local block that experienced the error case won’t just continue executing as if nothing happened.
I’m not aware of ways in which my “personal information” could possibly “change hands with another party in exchange for monetary or other benefits” that I personally wouldn’t consider selling my data. I would appreciate it if Mozilla would either bring back the promise that they don’t sell my data (and then keep that promise), or explain exactly how my data “changes hands with another party in exchange for monetary or other benefits” so that I can be the judge of whether or not I consider that acceptable.
Collecting and sharing data with partners to show ads is something which I would consider to be “selling data”, FWIW.
To me, it sounds like Mozilla has realized that it’s breaking its promise to never “sell data” (in ways that its users would consider to be “selling data”) and is trying to weasel its way out of admitting that.
They also have a very low view of the intelligence of their users if they think we’ll actually believe their excuses.
Additionally, somehow Mozilla has managed to go 20-25 years without needing to update this wording, so why now?
💯 well put.
I’m not dogmatic to a fault. I will walk back my criticism if Mozilla can point to one example where “we do X and you wouldn’t describe that as selling your data but it MIGHT possibly run afoul of the CCPA’s definition of selling your data.”
I don’t think X exists. And why should I, when the CCPA’s definition sounds extremely clear-cut to me? The onus is on Mozilla to explain to me how this is more nuanced than I realize. Just give us ONE example.
I don’t understand why Mozilla needs a license to do anything with my content. What is Mozilla’s role in this relationship? My computer is running a piece of software, I input some data into the software, I ask the software to send the data to servers of my choice (for example the lobste.rs servers, when I hit “Post” after typing this comment). What part of this process requires Mozilla to have a “nonexclusive, royalty-free, worldwide license” to that content? And why did they not need to have that “nonexclusive, royalty-free, worldwide license” to that content a week ago? I would get it if it only applied while using their VPN, but it’s for Firefox too?
Why do I not need to accept a similar ToS to use e.g Curl? My relationship with Curl is exactly the same as my relationship with Firefox: I enter some data into it (via a GUI in Firefox’s case, via command-line arguments in Curl’s case), Curl/Firefox makes a request towards the servers I asked it to with the data I entered, Curl/Firefox shows me whatever the server returned. Is it Mozilla’s view that Curl is somehow infringing on my intellectual property by not obtaining a license to the data I provide?
Basically, they are trying to have some service to sell. Go to
about:preferences#privacy and scroll down to “Firefox Data Collection and Use”; every section below there is about data that Firefox collects and sends to Mozilla so they can do something nominally useful with it. In my version there are also “Sync” and “More From Mozilla” tabs, which are even more of the same.
Someone at Mozilla has decided that the fact you don’t want to buy the services is irrelevant; they’ll just sell all that juicy data produced as a side-effect to whoever wants it. More than they already were, anyway.
Maybe they only mean inputs into Firefox itself and not the sites that you visit with Firefox. Things like Pocket, the add-on store, the password manager, and the “report broken site” form. I’m sure they could make this clearer if it’s the case, but I’m personally willing to lean towards this.
If that’s the case, it’s seriously impressive to be 2 “clarifications” in after the original announcement and still not have made that part clear. Anything that’s left unclear at this point is surely being left unclear intentionally.
Ha. I wish I’d thought of that question.
Arguably you do have to agree to something to use curl, but it’s very minimal and certainly supports your point. Here is curl’s licence (which is not one of the standard ones), from https://curl.se/docs/copyright.html :
It’s unfortunate (but understandable) that this test was run on such different hardware when what we really want to compare is OSes.
So I ran it on my own M1 Pro 2021 MacBook Pro (10 cores, 8 performance / 2 efficiency), which has both macOS and Linux installed. Numbers are milliseconds running the open program, with milliseconds running open_pool in parens:
macOS Sequoia 15.3.1:
Fedora Asahi Remix 41, Linux 6.12.12:
Moral of the story: Linux is vastly faster at opening files than macOS on the same hardware, and Linux doesn’t really benefit from using multiple threads.
Also, the slow-down in Linux from 2 threads compared to 1 is consistent and happens every time I run it, it’s not just some outlier.
I do not like the author’s misrepresentations in this article. You can be technically not lying, but when you write things that most people who aren’t highly technical would believe to mean one thing, and that thing is clearly not the case, you don’t get points for technically not lying. You’re being disingenuous.
An example: I do not like Google, and I do not like Chrome, and I make no apologies for them. But consider when someone writes, “When you log into Chrome, it automatically logs you into your Google account on the web.”
Non- and less-technical people do not make a distinction between using a web browser and “log(ging) into Chrome” - if someone were to say, “log in to Chrome, please”, many, if not most people, would assume that what’s meant is for someone to simply launch Chrome. They wouldn’t think, “I’ll launch the browser, then go in to settings or whatever, then I’ll log in to my Google account inside of Chrome because I was asked to log in to Chrome”.
We (technical people) can tell others all we want that they should use another browser, but most people aren’t going to care and aren’t going to listen. But should people know that they can use Chrome and don’t need to be logged in to Google? Absolutely. Is the author implying by their choice of wording that this isn’t the case? Yes. This is deceptive, and it weakens the case the author is trying to make.
As we learned from South Park, we’re asked to choose between a giant douche and a turd sandwich. We really don’t need to trick people to make the case that they suck.
That’s only because Google made it so, and less-technical people don’t understand the amount of data that Google collects on them, via searches, maps, or Google Analytics, or what damage that can do.
I am pretty sure that the linking happened because they noticed that many people don’t really want or need to log into a Google account, so they wanted more people to share more data. It is my opinion that less-technical people aren’t stupid, and get confused by the web when their mental model is deliberately flawed, more often than not.
I strongly disagree with the example given, only because I know that when I say to log in I always mean to enter identification information (like username and password). It wouldn’t have crossed my mind before reading your comment that someone would equate logging into an application with simply opening an application. Logging in should only mean that you’ve chosen to enter identifying information in order to gain access to something. They shouldn’t need to address whether you can use Chrome without logging in if all they’re talking about is the logged-in behavior. (We also shouldn’t ignore Google’s dark patterns that make it seem like you do need to log in to use it, though the article doesn’t go into that.)
You can’t really expect every piece of technical content to pander to people who don’t understand the difference between launching a program and logging in to an account. What makes you think that this post is directed at people who are 100% technologically illiterate to that degree?
It shouldn’t be targeted at people who are 100% technically illiterate, but just as much it shouldn’t be targeted at people who are 0% technically illiterate.
A good example is when technical people talk about computer viruses and conflate them with Trojans. If you don’t know any better, you learn only from usage like this and you have no real awareness of the difference between them (meaning we’re shirking our responsibility to teach correct things). But when technical people, who really should know better, refer to a Trojan as a virus, that can cause real confusion and miscommunication. Someone tasked with cleanup after an infection can easily end up with very different work by this misuse. Additionally, there’s no good reason for a technical person to not use the correct term.
So when I, a technical person, see someone, also ostensibly a technical person, describing things to others, particularly to non-technical people, incorrectly or in ways that we know will be misunderstood, it really bugs me. There’s no good reason for it.
This is a bold assumption to make, and it’s not reasonable to expect an author to imagine every possible way a reader might be confused. Words have meaning, and it should be enough for an author of a more-technical-than-not article to use words accurately. Audience analysis matters, but I really don’t think this author expected their article to be read by someone with so little knowledge that they would confuse “log in” with “open application”. As I shared earlier I wouldn’t have even imagined that scenario until you presented it, so if I were to have written this same article I could say confidently that it was not written with the expectation of it being misunderstood.
“Think of Chrome. When you log into Chrome, it automatically logs you into your Google account on the web.”
How many non-technical people wouldn’t realize the difference between logging in to Google and simply using Chrome? How many technical people wouldn’t be certain that the author is referring specifically to logging in to Google, and would require context to be sure?
The term “log in” has been in use in computing since at least the ‘60s and in common home usage since at least the late ‘90s. Additionally, it is not unique to computing. If I told someone to log in or sign in at the bank, I would have no expectation that they would think I simply meant for them to walk into the bank. They’d be expected to sign a log book or check in with someone. If they don’t know what “log into” means, I expect them to ask the question, “What does that mean?” and look it up. I never expect my reader to simply a) make a misinformed assumption, and b) take what I said at face value without question. If what I said strikes them as odd, they should look it up. That is what we should expect of ourselves and each other. We can’t expect an author to imagine every possible way that a reader might be misinformed.
But is this what we’re talking about? What would the common person do if someone asked them to “log in to Chrome”?
That’s what I’ve been talking about, yes. Who is this mythical “common person” and why do you feel the author needed to write with them in mind instead of another imagined audience? I have certainly known people who were that confused (not about that specifically), but I wouldn’t write a blog post aimed at catching every possible misunderstanding that type of person might have. I think it’s pretty clear that type of extra-confused user was not the audience this author had in mind, and I don’t think we can demand they reimagine their audience like that.
I want to plainly say that I don’t believe there was anything incorrect about what they said about logging into Chrome and I don’t believe that the absence of qualifying language for an unintended imagined audience means they’re being in any way disingenuous. Maybe you have another example from the article that makes your point, but I contend the example you gave does not.
The ideals of this post are dead. Firefox is neither private nor free. Do not use Firefox in 2025.
Mozilla has done an about face and now demands that Firefox users:
See https://lobste.rs/s/de2ab1/firefox_adds_terms_use for more discussion.
If you’re already using Firefox, I can confirm that porting your profile over to Librewolf (https://librewolf.net) is relatively painless, and the only issues you’ll encounter are around having the resist fingerprinting setting turned on by default (which you can choose to just disable if you don’t like the trade-offs). I resumed using Firefox in 2016 and just switched away upon this shift in policy, and I do so sadly and begrudgingly, but you’d be crazy to allow Mozilla to cross these lines without switching away.
If you’re a macOS + Littlesnitch user, I can also recommend setting Librewolf to not allow communication to any Mozilla domain other than addons.mozilla.org, just in case.
👋 I respect your opinion and LibreWolf is a fine choice; however, it shares the same problem that all “forks” have and that I thought I made clear in the article…
Developing Firefox costs half a billion per year. There’s overhead in there for sure, but you couldn’t bring that down to something more manageable, like 100 million per year, IMO, without making it completely uncompetitive with Chrome, whose estimated cost exceeds 1 billion per year. The harsh reality is that you’re still using Mozilla’s work and if Mozilla goes under, LibreWolf simply ceases to exist because it’s essentially Firefox + settings. So you’re not really sticking it to the man as much as you’d like.
There are 3 major browser engines left (minus the experiments still in development that nobody uses). All 3 browser engines are, in fact, funded by Google’s Ads and have been for almost the past 2 decades. And any of the forks would become unviable without Apple’s, Google’s or Mozilla’s hard work, which is the reality we are in.
Not complaining much, but I did mention the recent controversy you’re referring to and would’ve preferred comments on what I wrote, on my reasoning, not on the article’s title.
I do what I can and no more, which used to mean occasionally being a Firefox advocate when I could, giving Mozilla as much benefit of the doubt as I could muster, paying for an MDN subscription, and sending some money their way when possible. Now it means temporarily switching to Librewolf, fully acknowledging how unsustainable that is, and waiting for a more sustainable option to come along.
I don’t disagree with the economic realities you mentioned and I don’t think any argument you made is bad or wrong. I’m just coming to a different conclusion: If Firefox can’t take hundreds of millions of dollars from Google every year and turn that into a privacy respecting browser that doesn’t sell my data and doesn’t prohibit me from visiting whatever website I want, then what are we even doing here? I’m sick of this barely lesser of two evils shit. Burn it to the fucking ground.
I think “barely lesser of two evils” is just way off the scale, and I can’t help but feel that it is way over-dramatized.
Also, what about the consequences of having a chrome-only web? Many websites are already “Hyrum’s lawed” to being usable only in Chrome, developers only test for Chrome, the speed of development is basically impossible to follow as is.
Firefox is basically the only thing preventing the most universal platform from becoming a Google-product.
Well there’s one other: Apple. Their hesitance to allow non-Safari browsers on iOS is a bigger bulwark against a Chrome-only web than Firefox at this point IMO.
I’m a bit afraid that the EU is in the process of breaking that down though. If proper Chrome comes over to iOS and it becomes easy to install, I’m certain that Google will start their push to move iOS users over.
I know it’s not exactly the same, but Safari is also in the WebKit family, and Safari is neither open source nor cross-platform, nor anywhere close to Firefox in many technical aspects (Firefox has by far the most functional and sane developer tools of any browser out there).
Pretty much the same here: I used to use Firefox, I have influenced some people in the past to at least give Firefox a shot, some people ended up moving to it from Chrome based on my recommendations. But Mozilla insists on breaking trust roughly every year, so when the ToS came around, there was very little goodwill left and I have permanently switched to LibreWolf.
Using a fork significantly helps my personal short-term peace of mind: whenever Mozilla makes whatever changes they’re planning to make which require them to have a license to any data I input into Firefox, I trust that I will hear about those changes before LibreWolf incorporates them, and there’s a decent chance that LibreWolf will rip them out and keep them out for a few releases while I assess the situation. If I’m using Firefox directly, there’s a decent probability that I’ll learn about those changes after Firefox updates itself to include them. Hell, for all I know, Firefox is already sending enough telemetry to Mozilla that someone there decided to make money off it and that’s why they removed the “Mozilla doesn’t and will never sell your data” FAQ item; maybe LibreWolf ripping out telemetry is protecting me against Mozilla right now, I don’t know.
Long term, what I personally do doesn’t matter. The fact that Mozilla has lost so much good-will that long-term Firefox advocates are switching away should be terrifying to Mozilla and citizens of the Web broadly, but my personal actions here have close to 0 effect on that. I could turn into a disingenuous Mozilla shill but I don’t exactly think I’d be able to convince enough people to keep using Firefox to cancel out Mozilla’s efforts to sink their own brand.
If Firefox is just one of three browsers funded by Google which don’t respect user privacy, then what’s the point of it?
People want Firefox and Mozilla to be an alternative to Google’s crap. If they’re not going to be the alternative, instead choosing to copy every terrible idea Google has, then I don’t see why Mozilla is even needed.
Well to be fair to Mozilla, they’re pushing back against some web standard ideas Google has. They’ve come out against things like WebUSB and WebHID for example.
How the heck do they spend that much? At ~20M LoC, they’re spending 25K per line of code a year. While details are hard to find, I think that puts them way above the industry norms.
I’m pretty sure that’s off by 3 orders of magnitude; OP’s figure would be half a US billion, i.e. half a milliard. That means 500M / 20M = 25 $/LOC. Not 25K.
I see your point, but by that same logic, shouldn’t we all then switch to Librewolf? If Firefox’s funding comes from Google, instead of its user base, then even if a significant portion of Firefox’s users switch, it can keep on getting funded, and users who switched can get the privacy non-exploitation they need?
I gathered some numbers on that here: https://untested.sonnet.io/notes/defaults-matter-dont-assume-consent/#h-dollar510000000
TL;DR 90% of Mozilla’s revenue comes from ad partnerships (Google) and Apple received ca. 19 Bn $ per annum to keep Google as the default search engine.
Where did you get those numbers? Are you referring to the whole effort (legal, engineering, marketing, administration, etc.) or just development?
That’s an absolutely bonkers amount of money, and while I absolutely believe it, I’m also kind of curious what other software products are in a similar league.
doesn’t seem like a particularly grave concern to me
That page says “Services”. Does it apply to Firefox or the VPN?
The sexuality and violence thing I suspect is so that they are covered for use in Saudi Arabia and Missouri.
Yeah, that seems like legal butt-covering. If someone in a criminalizing jurisdiction accesses these materials and they try to sue to the browser, Mozilla can say the user violated TOS.
i assume it applies mostly to Bugzilla / Mozilla Connect / Phabricator / etc
This is just a lie. It’s just a lie. Firefox is gratis, and it’s FLOSS. These stupid paragraphs about legalese are just corporate crap every business of a certain size has to start qualifying so they can’t get their wallet gaped by lawyers in the future. Your first bullet point sucks - you don’t agree to the Acceptable Use Policy to use Firefox, you agree to it when using Mozilla services, i.e. Pocket or whatever. Similarly, your second bulletpoint is completely false, that paragraph doesn’t even exist:
The text was recently clarified because of the inane outrage over basic legalese. And Mozilla isn’t selling your information. That’s not something they can casually lie about and there’s no reason to lie about it unless they want to face lawsuits from zealous legal types in the future. Why constantly lie to attack Mozilla? Are you being paid to destroy Free Software?
Consciously lying should be against Lobsters rules.
Let’s really look at what’s written here, because either u/altano or u/WilhelmVonWeiner is correct, not both.
The question we want to answer: do we “agree to an acceptable use policy” when we use Firefox? Let’s look in the various terms of service agreements (Terms Of Use, Terms Of Service, Mozilla Accounts Privacy). We see that it has been changed. It originally said:
“When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.”
Note that this makes no distinction between Firefox as a browser and services offered by Mozilla. The terms did make a distinction between Firefox as distributed by Mozilla and Firefox source code, but that’s another matter. People were outraged, and rightfully so, because you were agreeing to an acceptable use policy to use Firefox, the binary from Mozilla. Period.
That changed to:
“You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.”
Are they legally equivalent, just using “nicer”, “more acceptable” language? No. The meaning is changed in important ways, and this is probably what you’re referring to when you say, “you don’t agree to the Acceptable Use Policy to use Firefox, you agree to it when using Mozilla services”
However, the current terms still say quite clearly that we agree to the AUP for Mozilla Services when we use Firefox whether or not we use Mozilla Services. The claim that “you don’t agree to the Acceptable Use Policy to use Firefox” is factually incorrect.
So is it OK for u/WilhelmVonWeiner to say that u/altano is lying, and call for censure? No. First, it’s disingenuous for u/WilhelmVonWeiner to pretend that the original wording didn’t exist. Also, the statement, “Similarly, your second bulletpoint is completely false, that paragraph doesn’t even exist:” is plainly false, because we can see that paragraph verbatim here:
https://www.mozilla.org/en-US/about/legal/terms/firefox/
So if u/WilhelmVonWeiner is calling someone out for lying, they really shouldn’t lie themselves, or they should afford others enough benefit of the doubt to distinguish between lying and being mistaken. After all, is u/WilhelmVonWeiner lying, or just mistaken here?
I’m all for people venting when someone is clearly in the wrong, but it seems that u/WilhelmVonWeiner is not only accusing others of lying, but is perhaps lying or at very least being incredibly disingenuous themselves.
Oh - and I take exception to this in particular:
“every business of a certain size has to start qualifying so they can’t get their wallet gaped by lawyers”
Being an apologist for large organizations that are behaving poorly is the kind of behavior we expect on Reddit or on the orange site, but not here. We do not want to, nor should we need to, engage with people who do not make good-faith arguments.
This is a pretty rude reply so I’m not going to respond to the specifics.
Mozilla has edited their acceptable use policy and terms of service to do damage control and so my exact quotes might not be up anymore, but yeah sure, assume that everyone quoting Mozilla is just a liar instead of that explanation if you want.
EDIT:
https://blog.mozilla.org/en/products/firefox/update-on-terms-of-use/
Sorry for being rude. It was unnecessary of me and I apologise, I was agitated. I strongly disagree with your assessment of what Mozilla is doing as “damage control” - they are doing what is necessary to legally protect the Mozilla Foundation and Corporation from legal threats by clarifying how they use user data. It is false they are selling your private information. It is false they have a nonexclusive … license to everything you do using Firefox. It is false that you have to agree to the Acceptable Use Policy to use Firefox. It’s misinformation, it’s FUD and it’s going to hurt one of the biggest FLOSS nonprofits and alternate web browsers.
So people can judge for themselves, the relevant quote from the previous Terms of Use was:
Source: http://archive.today/btoQM
The updated terms make no mention of the Acceptable Use Policy.
This is a pretty incendiary comment and I would expect any accusation of outright dishonesty to come with evidence that they know they’re wrong. I am not taking a position on who has the facts straight, but I don’t see how you could prove altano is lying. Don’t attribute to malice what can be explained by…simply being incorrect.
that’s not binding to firefox. that’s binding to mozilla services like websites and other services. https://www.mozilla.org/en-US/about/legal/terms/mozilla/ links to the acceptable use page for instance. whereas the firefox one does not. https://www.mozilla.org/en-US/about/legal/terms/firefox/
firefox is fine. your other points are also largely incorrect.
FYI this is a change made in response to the recent outrage, the original version of the firefox terms included
Which has now been removed.
What are the trade-offs for resisting fingerprinting? Does it disable certain CSS features, or?
Your locale is forced to en-US, your timezone is UTC, and your system is reported as Windows. It puts canvas behind a prompt and randomizes some pixels so that fingerprinting based on rendering is a bit harder. It will also disable using SVG and the fonts that you have installed on your system.
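(For reference, this is the privacy.resistFingerprinting pref under the hood; if you manage prefs with a user.js, the usual line is user_pref("privacy.resistFingerprinting", true); — the pref name is the standard one, though which UI checkbox exposes it varies between versions.)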
Btw, I don’t recommend anyone use resist fingerprinting. This is the “hard mode” that is known to break a lot of pages and has no site-specific settings, only global on or off. A lot of people turn it on and then end up hating Firefox and switching browsers because their web experience sucks and they don’t know how to turn it off. This is why we now show a rather visible info bar in settings under privacy/security when you turn this on, and that’s also why we are working on a new mode that can spoof only specific APIs and only on specific sites. More to come.
Now that I know about it, I’m really looking forward to the new feature!
I’m using CanvasBlocker but its performance and UX could use some love.
This is the kind of thing Mozilla still does that sets it very far apart from the rest. Thanks!
heh, I wonder how many bits of entropy will be there in roughly “which of the spoofs are enabled”? :D
Yes, if everyone is running a custom set of spoofs you’d end up being unique again. The intent for the mechanism is for us to be able to experiment and test out a variety of sets before we know what works (in terms of webcompat). In the end, we want everyone to look as uniform as possible
It breaks automatic dark mode and sites don’t remember their zoom setting. Dates are also not always localized correctly. That’s what I’ve noticed so far at least.
My MacBook Pro is nagging me to upgrade to the new OS release. It lists a bunch of new features that I don’t care about. In the meantime, the following bugs (which are regressions) have gone unfixed for multiple major OS versions:
There are a lot of others, these are the first that come to mind. My favourite OS X release was 10.6: no new user-visible features, just a load of bug fixes and infrastructure improvements (this one introduced libdispatch, for example).
It’s disheartening to see core functionality in an “abandonware” state while Apple pushes new features nobody asked for. Things that should be rock-solid, just… aren’t.
It really makes you understand why some people avoid updates entirely. Snow Leopard’s focus on refinement feels like a distant memory now.
The idea of Apple OS features as abandonware is a wild idea, and yet here we are. The external monitor issue is actually terrible. I have two friends who work at Apple (neither in OS dev) and both have said that they experience the monitor issue themselves.
It is one thing when a company ignores bugs reported by its customers.
It is another thing when a company ignores bugs reported by its own employees that are also customer-facing.
When I worked for a FAANG, they released stuff early internally as part of dogfooding programs to seek input and bug reports before issues hit users.
Sounds good, just that “you’re not the target audience” became a meme because so many bug reports and concerns were shut down with that response.
I was thinking about this not too long ago; there are macOS features (e.g. the widgets UI) that don’t seem to even exist anymore. So many examples of features I used to really like that are just abandoned.
This works flawlessly for me every single time; I use an Apple Studio Display at home and a high-end Dell at the office.
On the other hand, activating iMessage and FaceTime on a new MacBook machine has been a huge pain for years on end…
I can attest to that, though not with my Apple account but with my brother’s. Coincidentally, he had fewer problems activating iMessage/FaceTime on a Hackintosh machine.
A variation on that which I’ve run into is turning the monitor off and putting the laptop to sleep, then waking without moving or disconnecting it.
To avoid all windows ending up stuck on the laptop display, I have to sleep the laptop, then power off the monitor. To restore, I power on the monitor, then wake the laptop. Occasionally (1 in 10 times?) it still messes up and I have to manually move windows back to the monitor display.
(This is when using dual-head mode with both the external monitor and laptop display in operation)
iCloud message sync, with “keep messages forever” set, seems to load so much that on my last laptop typing long messages (more than one sentence) directly into the text box was so awful that I started writing messages outside the application, then copy/pasting them in to send. The delay was in seconds for me.
I’m really heartened by how many people agree that OS X 10.6 was the best.
Edited to add … hm - maybe you’re not saying it was the best OS version, just the best release strategy? I think it actually was the best OS version (or maybe 10.7 was, but that’s just a detail).
Lion was hot garbage. It showed potential (if you ignored the workflow regressions) but it was awful.
10.8 fixed many of Lion’s issues and was rather good.
Snow Leopard was definitely peak macOS.
Are there people who still use 10.6? I wonder what would be missing compared to current MacOS. Can it run a current Firefox? Zoom?
It would be pretty hard to run 10.6 for something other than novelty, the root certs are probably all expired, and you definitely can’t run any sort of modern Firefox on it, the last version of FF to support 10.6 was ESR 45 released in 2016: https://blog.mozilla.org/futurereleases/2016/04/29/update-on-firefox-support-for-os-x/
I know there are people keeping Windows 7 usable despite lack of upstream support; it would be cool if that existed for 10.6 but it sounds like no.
Maybe 10.6 could still be useful for professional video/audio/photo editing software, the type that wasn’t subscription based.
It was before Apple started wanting to make it more iPhone-like, slowly doing what Microsoft did with Windows 8 (which did it in a ‘big bang’) by making Windows Phone and the Windows desktop almost indistinguishable. After Snow Leopard, Apple became a phone company, very iPhone-centric, and just didn’t bother with the desktop; it became cartoonish and all flashy, not usable. That’s when I left macOS and haven’t looked back.
Recently, Disk Utility has started showing a permissions error when I click unmount or eject on SD cards or their partitions, if the card was inserted after Disk Utility started. You have to quit and re-open Disk Utility for it to work. It didn’t use to be like that, but it is now, on two different Macs. This is very annoying for embedded development where you need to write to SD cards frequently to flash new images or installers. So unmounting/ejecting drives just randomly broke one day and I’m expecting it won’t get fixed.
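(A possible workaround sketch rather than a fix, and untested against this exact regression: the command-line path doesn’t go through Disk Utility’s stale view of the devices, so diskutil list to find the device and then diskutil unmountDisk /dev/diskN may still unmount a freshly inserted card without relaunching anything; diskN is a placeholder for whatever number the card shows up as.)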
Another forever-bug: on a higher refresh rate screen, the animation to switch workspaces takes more time. This has forced me to completely change how I use macOS to de-emphasise workspaces, because the animation is just obscenely long after I got a MacBook Pro with a 120Hz screen in 2021. Probably not a new bug, but an old bug that new hardware surfaced, and I expect it will never get fixed.
I’m also having issues with connecting to external screens only working occasionally, at least through USB-C docks.
The hardware is so damn good. I wish anyone high up at Apple cared at all about making the software good too.
Oh, there’s another one: the fstab entries to not mount partitions that match a particular UUID no longer work, and there doesn’t appear to be any replacement functionality (which is annoying when it’s a firmware partition that must not be written to except in a specific way, or it will soft-brick the device).
Oh, fun! I’ve tried to find a way to disable auto mount and the only solution I’ve found is to add individual partition UUIDs to a block list in fstab, which is useless to me since I don’t just re-use the same SD card with the same partition layout all the time; I would want to disable auto mounting completely. But it’s phenomenal to hear that they broke even that sub-par solution.
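(For anyone following along, the kind of per-UUID entry being described here, added via sudo vifs, looks roughly like this; the UUID is a placeholder, and per the parent comment it apparently no longer has any effect:

    UUID=01234567-89AB-CDEF-0123-456789ABCDEF none msdos rw,noauto

The noauto option is what suppresses auto-mounting for that one partition, which is exactly why it doesn’t help when the partition layout changes every time.)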
Maybe it’s an intended “feature”, because 120Hz enabled iPhones and iPads have the same behavior.
Maybe, but we’re talking about roughly 1.2 seconds from the start of the gesture until keyboard input starts going to an app on the target workspace. That’s an insane amount of delay to just force the user to sit through on a regular basis… On a 60Hz screen, the delay is less than half that (which is still pretty long, but much much better)
Not a fix, but as a workaround have you tried Accessibility > Display > Reduce Motion?
I can’t stand the normal desktop switch animation even when dialed down all the way. With that setting on, there’s still a very minor fade-type effect but it’s pretty tolerable.
Sadly, that doesn’t help at all. My issue isn’t with the animation, but with the amount of time it takes from I express my intent to switch workspace until focus switches to the new workspace. “Reduce Motion” only replaces the 1.2 second sliding animation with a 1.2 second fading animation, the wait is exactly the same.
Don’t update/downgrade to Sequoia! It’s the Windows ME of macOS releases. After the Apple support person couldn’t resolve any of the issues I had, they told me to reinstall Sequoia and then gave me instructions to go back to Ventura/Sonoma.
I thought Big Sur was the Windows ME of (modern) Mac OS. I have had a decent experience in Sequoia. I usually have Safari, Firefox, Chrome, Mail, Ghostty, one JetBrains thing or another (usually PyCharm Pro or Clion), Excel, Bitwarden, Preview, Fluor, Rectangle, TailScale, CleanShot, Fantastical, Ice and Choosy running pretty much constantly, plus a rotating cast of other things as I need them.
Aside from Apple Intelligence being hot garbage (I just turn that off anyway), my main complaint about Sequoia is that sometimes, after a couple dozen dock/undock cycles (return to my desk, connect to my docking station with a 30” non-hidpi monitor, document scanner, time machine drive, smart card reader, etc.), the windows that were on my MacBook’s high-resolution screen and moved to my 30” display when docked don’t re-scale appropriately, and I have to reboot to address that. That seems to happen every two weeks or so.
Like so many others here, I miss Snow Leopard. I thought Tiger was an excellent release, Leopard was rough, and Snow Leopard smoothed off all the rough edges of Tiger and Leopard for me.
I’d call Sequoia “subpar” if Snow Leopard is your “par”. But I don’t find that to be the case compared to Windows 11, KDE or GNOME. It mostly just stays out of my way.
Have you ever submitted these regressions to Apple through a support form or such?
Apple’s bug reporting process is so opaque it feels like shouting into the void.
And, Apple isn’t some little open source project staffed by volunteers. It’s the richest company on earth. QA is a serious job that Apple should be paying people for.
Yeah. To alleviate that somewhat (for developer-type bugs) when I was making things for Macs and iDevices most of the time, I always reported my bugs to openradar as well:
https://openradar.appspot.com/page/1
which would at least net me a little bit of feedback (along the lines of “broken for everyone or just me?”) so it felt a tiny bit less like shouting into the void.
I can’t remember on these. The CalDAV one is well known. Most of the time when I’ve reported bugs to Apple, they’ve closed them as duplicates and given no way of tracking the original bug.
No. I tried being a good user in the past but it always ended up with “the feature works as expected”. I won’t do voluntary work for a company which repeatedly shits on user feedback.
I wonder if this means that tests have been red for years, or that there are no tests for such core functionality.
Sometimes we are the tests, and yet the radars go unread
10.6 “Snow Leopard” was the last Mac OS that I could honestly say I liked. I ran it on a cheap mini laptop (a Dell I think) as a student, back when “hackintoshes” were still possible.
https://www.mozilla.org/en-US/about/legal/terms/firefox/
:)
That’s… wow. Thank you for highlighting that. I am seriously considering using something other than Firefox for the first time in… ever. Regardless of how one might choose to interpret that statement, it’s frightening that they would even write it. This is not the Mozilla I knew or want. I’d love to know what alternatives people might suggest that are more community focused and completely FOSS, ideally still non-Chromium.
Thankfully, the lawful base for data use is spelled out in their privacy policy:
https://www.mozilla.org/en-US/privacy/firefox/#lawful-bases
e.g. Browsing, Interaction and Search data are “Legitimate interest” and “Consent”-based.
Consent being the kind that I haven’t given, but I’m supposed to actively revoke? Until the next update?
That unfortunately seems to be the current usage of the term “consent” in the tech industry.
Fortunately, that’s not consent as the GDPR defines it
Isn’t it? Most GDPR consent screens have an easy “accept everything” button and require going through multiple steps to “not accept”, and many many more steps to “object” to their “legitimate interest” in tracking for the purposes of advertising. As long as these screens remain allowed and aren’t cracked down on (which I don’t foresee happening, ever), that’s the de facto meaning of “consent” in GDPR as far as I’m concerned: something that’s assumed given unless you actively go out of your way to revoke it.
It’s not what the text of the GDPR defines it as, but the text isn’t relevant; only its effect on the real world is.
Yes, definitely. Consent in GDPR is opt-in not opt-out. If it’s opt-out, that’s not consensual. And the law is the law.
Furthermore, for interstitials, to reject everything should be at least as easy as it is to accept everything, without dark patterns. Interstitials (e.g., from IAB and co.) first tried to make it hard to reject everything, but now you usually get a clear button for rejecting everything on most websites.
As I mentioned in another comment, the DPAs are understaffed and overworked. But they do move. A real-world example of a company affected by the GDPR, and that tries testing its limits, is Meta with Facebook. For user profiling, first they tried the Terms of Service, then they tried claiming a legitimate interest, then they introduced expensive subscriptions for those that tried to decline, now they introduced a UI degradation, delaying the user scrolling, which is illegal as well.
Many complain, on one hand, that the EU is too regulated, suffocating innovation, with the US’s tech oligarchs now sucking up to Trump to force the EU into allowing US companies to break the law. On the other hand, there are people who believe that the GDPR isn’t enforced enough. I wish people would make up their mind.
Those are different people, all who have made up their mind.
I thought I made it reasonably clear that I don’t care that much about what the text of the law is, I care about what material impact it has on the world.
I corrected you with facts, and you’re replying with your feelings. Fair enough.
To be fair, @mort’s feeling may come from non-actually-GDPR-compliant cookie consent forms. I have certainly seen where I couldn’t find the “reject all” button, and felt obligated to manually click up to 15 “legitimate interest” boxes. (And dammit could they please stop with their sliding buttons and use actual square check boxes instead?)
I think the worse case is you click “reject all”, but you don’t actually reject all, and the legitimate interests are still checked.
The facts you provided aren’t relevant. I’m talking about the de facto situation as it applies to 99% of companies, you’re talking about the text of the law and enforcement against one particular company. These are different things which don’t have much to do with each other.
You even acknowledge that DPAs are understaffed and overworked, which results in the lacking enforcement which is exactly what I’m complaining about. For what I can tell, we don’t disagree about any facts here.
Well, other people in this sub-thread are talking about GDPR. You might have switched the topic, but that isn’t alexelcu’s fault.
I’m talking about GDPR as well, focusing about what impact it has in practice. I have been 100% consistent on that, since my first message in this sub-thread (https://lobste.rs/s/de2ab1/firefox_adds_terms_use#c_3sxqe1) which explicitly talks about what it means de facto. I don’t know where you got the impression that I’m talking about something else.
But there is enforcement, it’s just slower than we’d like. For example, screens making it harder to not opt in rather than opt in have gotten much rarer than they used to be. IME now they mostly come from American companies that don’t have much of a presence in the EU. So enforcement is causing things to move in the right direction, even if it is at a slow pace.
There is a website tracking fines against companies for GDPR violations [1] and as you can see, there are lots of fines against companies big and small every single month. “Insufficient legal basis for data processing” isn’t close to being the most common violation, but it’s pretty common, and has also been lobbed against companies big and small. It is not the case that there is only enforcement against a few high profile companies.
[1] https://www.enforcementtracker.com/
Why do you lay this at the feet of GDPR?
it’s the other way around - most of the time you have to actively revoke “legitimate interest”, consent should be off by default. Unfortunately, oftentimes “legitimate interest” is just “consent, but on by default” and they take exactly the same data for the same purpose (IIRC there are NGOs (such as NOYB, Panoptykon) fighting against IAB and other companies in those terms)
“Legitimate interest” is the GDPR loophole that ad tech companies use to spy on us without an easy opt-out option, right? I don’t know what this means in this context but I don’t trust it.
It is not; ad tech has been considered not a legitimate interest for… ever… by the European DPAs. Report the ones that abuse this to your DPA. There has been enforcement.
Every website with a consent screen has a ton of ad stuff under “legitimate interest”, and most ask you to “object” to each individually. The continued existence of this pattern means it’s de facto legal under the GDPR in my book. “Legitimate interest” is a tool to continue forced ad tracking.
Yes, all of that is illegal under GDPR.
The problem has been that DPAs are understaffed and overworked.
I don’t think you’re disagreeing with me. It’s de jure illegal but de facto legal. I don’t care much what the text of the GDPR says, I care about its material effect on the real world; and the material effect is one where websites put up consent screens where the user has to “object” individually to every ad tech company’s “legitimate interest” in tracking the user for ad targeting purposes.
I used to be optimistic about the GDPR because there’s a lot of good stuff in the text of the law, but it has been long enough that we can clearly see that most of its actual effect is pretty underwhelming. Good law without enforcement is worthless.
No, it’s de facto illegal as well; law enforcement is just slower than we’d like. Ask, for example, Facebook.
De facto illegal for entities at Facebook’s scale? Maybe. But it’s certainly de facto legal for everyone else. It has been 7 years since it was implemented; if it was going to have a positive effect we’d have seen it by now. My patience has run out. GDPR failed.
I just gave you a concrete example of a powerful Big Tech company, with infinite resources for political lobbying, that was blasted for their practices. They first tried hiding behind their Terms of Use, then they tried claiming a legitimate interest, then they offered the choice of a paid subscription, and now they’ve introduced delays in scrolling for people that don’t consent to being profiled, which will be deemed illegal as well.
Your patience isn’t important. This is the legal system in action. Just because, for example, tax evasion happens, that doesn’t mean that anti tax evasion laws don’t work. Similarly with data protection laws. I used to work in the adtech industry. I know for a fact that there have been companies leaving the EU because of GDPR. I also know some of the legwork that IAB tried pulling off, but it won’t last.
Just the fact that you’re getting those interstitials is a win. Microsoft’s Edge browser, for example, gives EU citizens that IAB dialog on the first run, thus informing them that they are going to share their data with the entire advertising industry. That is in itself valuable for me, because it informs me that Edge is spyware.
I agree that the “we’re spying on you” pop-ups is a win in itself. I’m just complaining that it’s so toothless as to in practice allow websites to put up modals where each ad tech company’s “legitimate interest” in tracking me has to be individually disabled. If the goal of the GDPR was to in any way make it reasonably easy for users to opt out of tracking, it failed.
I’m not so sure. I’ve even seen this used as an argument against the GDPR: The spin they give it is “this is the law that forces us to put up annoying cookie popups”. See for example this article on the Dutch public broadcasting agency (which is typically more left-leaning and not prone to give a platform to liberals).
Roughly translated “all innovations in AI don’t work as well here as in the US. And why do you have to click on cookies (sic) on every single website?”
I have seen that as well, and I think it’s bullshit. The GDPR doesn’t force anyone to make any form of pop-up; nobody is forced to track users in a way which requires consent. The GDPR only requires disclosure and an opt-out mechanism if you do decide to spy on your users, which I consider good.
I agree, but at the same time I think the average user just sees it as a nuisance, especially because in most cases there’s no other place to go where they don’t have a cookie popup. The web development/advertising industry knowingly and willfully “complied” in the most malicious and obnoxious way possible, resulting in this shitty situation. That’s 1 for the industry, 0 for the lawgivers.
I agree that it didn’t have the desired effect (which, incidentally, I have spent a lot of this thread complaining about, hehe). I think everyone was surprised about just how far everyone is willing to go in destroying their website’s user experience in order to keep tracking people.
I’m not sure if you’re deep in grumpy posting or didn’t understand the idea here, but for legitimate interest you don’t need to agree and companies normally don’t give you the option. If you’re talking about the extra options you unset manually, they’re a different thing. The “legitimate interest” part is for example validating your identity through a third party before paying out money. Things you typically can’t opt out of without also refusing to use the service.
If you get a switch for “tracking” or “ads” that you can turn off, that’s not a part of the “legitimate interest” group of data.
I’m sorry but this isn’t true. I have encountered plenty of consent screens with two tabs, “consent” and “legitimate interest”, where the stuff under “consent” is default off while the stuff under “legitimate interest” is on by default and must be “objected to” individually. Some have an “object to all” button to “object” to all ad tracking in the “legitimate interest” category.
Here’s one example: https://i.imgur.com/J4dnptX.png, the Financial Times is clearly of the opinion that tracking for the purpose of advertising counts as “legitimate interest”.
I’m not saying that there’s any relationship between this pattern and what’s actually required by the GDPR, my understanding of the actual text of the law reflects yours. I’m saying that this is how it works in practice.
So when I login to lobste.rs (or any other important website) do I grant them the permission to use my credentials? ;-)
Pretty much
this comment remains property of the Mozilla Foundation and is presented here with their kind permission
Mozilla updated the article with a clarifying statement:
the problem is it doesn’t clarify anything. “basic functionality” is not defined. my guess is they want to be able to take anything we type or upload to a site and also feed it into an LLM. “anything other than what is described” doesn’t help because what is described is so vague as to mean anything: “help you experience and interact with online content”
That is… not clarifying. And not comforting. “What is described” in the ToS is “to help you navigate, experience, and interact with online content.” That’s absurdly vague. And what is described in the Privacy Notice is absurdly broad:
Yes. That’s the fucking point.
I’m glad we have this contextless legalese to clarify things. I wonder if there’s some kind of opt-in data collection in Firefox that Mozilla might have legal obligations to clarify their rights to? Couldn’t be that… No, let’s put a pause on critical thinking and post stupid TOS excerpts as if Mozilla are going to steal our Deviantart uploads and sell them as AI training data.
If they need a ToS for a particular feature, then that “contextless legalese” should be scoped to that feature, not to Firefox as a whole.
This is precisely why the same organization should not do all of these things. If they want to do non-tool stuff to continue funding their mission they should start up independently managed companies that can establish these consents for a narrow band of services. They can give the existing organization control as a majority shareholder, with dividends flowing back to the main organization. That is the way to ensure that incentives don’t become misaligned with the mission.
They’re future-proofing their terms of service. That’s even worse than future-proofing one’s code, though for different reasons.
That language comes off a bit … onerous
But what does it mean? To “navigate”.
That’s it I guess. Thanks for the find! Firefox is dead to me now. What’s the non-evil browser to go to nowadays?
librewolf seems to be the rage now: https://librewolf.net/
On MacOS/iOS there is the Kagi browser Orion: https://kagi.com/orion/
Having owned a Framework since April of 2022, I cannot recommend them to people who need even basic durability in their devices. Since then, I have done two mainboard replacements, two top cover replacements, a hinge replacement, a battery replacement, several glue jobs after the captive screw hubs sheared from the plastic backing…
It’s just such an absurdly fragile device with incredibly poor thermals. They sacrificed a ton of desirable features to make the laptop repairable, but ultimately have released a set of devices that, when used in real-world settings, end with you repairing the device more often than not. And these repairs are often non-trivial.
I will personally be migrating to another machine. The Framework 12’s focus on durability may be trending in the right direction, but to regain trust, I’d need to see things like drop and wear tests. A laptop that can be repaired, but needs constant upkeep/incredibly delicate handling, is ultimately not an actual consumer device, but a hobbyist device.
Maybe they’ll get better in a few years. Maybe the Framework 12 will be better. Their new focus on AI, the soldered RAM in the desktop offering, and the failure to address the flimsy plastic chassis innards, among other things, mean that they have a long way to go.
It’s definitely a “be part of the community that helps solve our product problems” sort of feeling.
I have an AMD FW13, and was trying to figure out why it loses 50+% of its battery charge overnight when I close the lid, because I don’t use this computer every single day and don’t want to have to remember to charge yet another device.
So I checked the basics: I’m running their officially supported Linux distro, the BIOS is current, etc. And half an hour into reading forum threads about diagnosing sleep power draw, I realized that this is not how I want to spend my time on this planet. I love that they’re trying to build repairable/upgradeable devices, but that goal doesn’t matter so much if people end up ditching your products for another option because they’re just tired of trying to fix them.
I’ll chime in with the opposite experience - I’ve owned an AMD Framework 13 since it came out, and had no durability issues with it whatsoever, and it’s been one of my 2 favorite computers I’ve ever owned. I’ve done one main board replacement that saved my butt after a bottle of gin fell over on top of it in transport.
Development and light gaming (on Linux, I very much appreciate their Linux support) have been great, and the repairability gives me peace of mind and an upgrade path, and has already saved me quite a bit of money.
I’ve owned a Framework since Batch 1. Durability has not been a problem for me. My original screen has a small chip in it from when I put it in a bag with something that had sharp edges pressing against the screen for a whole flight; it’s slowly growing. Otherwise, it’s been solid.
Same. I have a Batch 1. There are quirks, which I expected, knowing I was supporting a startup with little experience. I have since upgraded and put my old board into a Cooler Master case. This is so amazing, and it’s what I cared about. I am still super happy with having bought the Framework; particularly for tinkerers and people who will have a use for their old mainboards, it’s amazing.
I get harbouring resentment for a company you feel sold you a bad product. But at the same time, you bought a laptop from a very inexperienced company which was brand new at making laptops, a pretty difficult product category to get right when you’re not just re-branding someone else’s white-label hardware.
3 years have passed since then; if I were in the market for a category which Framework competes in these days, I would be inclined to look at more recent reviews and customer testimonials. I don’t think flaws in that 3-year-old hardware are that relevant anymore. Not because 3 years is a particularly long time in the computer hardware business, but because it’s a really long time relative to the short life of this particular company.
I would agree that 3 years is enough time for a company to use their production lessons to improve their product. But nothing has changed in the Framework 13.
I don’t resent Framework. I think that’s putting words in my mouth. I just cannot, in good faith, recommend their products to people who need even a semi-durable machine. That’s just fact.
Founded by people who had experience designing laptops already, and manufactured by a company that manufactures many laptops. Poor explanations for the problems, IMO.
I’ve had a 12th gen Intel since Sept 2022 (running NixOS btw) and I have not had any issues, I will admit it sits in one place 99% of the time. I might order the replacement hinge since mine is a bit floppy but not too big a deal.
As for the event, I was hoping for a minipc using the 395 and I got my wish. Bit pricey and not small enough for where I want to put it and I have no plans for AI work so it’s probably not the right machine for me.
I was originally interested in the HP machine coming with the same CPU (which should be small enough to fit) but I’ve been pricing an AMD 9950 and it comes out cheaper. I was also disappointed there wasn’t a SKU with the 385 Max w/ 64GB of RAM, which I might have ordered to keep the cost down.
For reference, the new machine is intended to replace a 10-year-old Devil’s Canyon system.
I’ve also had my Framework 13 since beginning of 2022. I’ve had to do a hinge replacement, input cover replacement, and mainboard replacement. But I sort of expected that since it’s a young company and hardware is hard. And through all of it support was very responsive and helpful.
I would expect that nowadays the laptops are probably more solidly built than those early batches!
Support was definitely helpful. I just don’t have time or money to replace parts on my machine anymore.
From what I understand, the laptops aren’t any stronger. Even the Framework 16 just got some aftermarket/post-launch foam pads to put below the keyboard to alleviate the strain on the keyboard. The entire keyboard deck would flex.
The fact that these products have these flaws makes me wonder how Framework organizes its engineering priorities.
When compared to other similar laptops from brands like HP or Lenovo, how does the deck flex compare? I definitely feel sympathetic to not being better or on par with Apple - given the heaps of money Apple has for economies of scale + lots of mechanical engineers, but it would be a bit rough if mid-tier laptops in that category were far superior.
The deck flex is on par with or worse than an HP EliteBook circa 2019. The problem is that it’s incredibly easy to bend the entire frame of the machine, to the point where it interferes with the touchpad’s ability to click.
It’s really bad, bordering on inexcusable. The fact that there’s no concrete reinforcement says that they sacrificed build quality for repairability, which is equivalent to making a leaky boat with a very fast bilge pump.
I’m not sure what you’re doing to your laptop; how are you bending the entire frame of the machine?
It’s a new company that is largely doing right by open source, and especially open hardware. The quality isn’t incredible but it is worth its value, and I find these claims you’re making dubious.
It’s a fairly common flex point for the chassis, and a common support problem. The base of the touchpad, towards the front of the laptop where there’s a depression in the case, is where the majority of the flex is.
My laptop has seen nothing but daily, regular use. You can find the claims dubious, but others are having them too.
This has been my experience with the Framework. It’s not Apple hardware, which is best in class all around, but it is on-par with my Dell XPS.
I’ll chime in too: I’ve had the Framework 13 AMD since it came out (mid 2023) and it has been great.
I upgraded the display after the new 2.8K panel came out, it took 2 minutes. Couple months later it developed some dead pixels, so they sent me a replacement. In the process of swapping it out, I accidentally tore the display cable. It took me a while to notice/debug it, but in the end it was just a $15 cable replacement that I’m fairly sure would have otherwise resulted in a full mainboard replacement for any other laptop. (When I had Macbooks, I lost count how many times Apple replaced the mainboard for the smallest thing.)
I haven’t been too precious with it, I toss it around like I did my Thinkpad before this. There’s some scuffs but it has been fine, perhaps the newer models are more sturdy? It’s comforting to know that if anything breaks, I’ll be able to fix it.
I also run NixOS on it, it does everything I need it to do, the battery life is great (8-10 hours of moderate use) and I’ll happily swap out the battery in a few more years once it starts losing capacity.
I spend so much of my life at the computer that feeling a sense of ownership over the components makes a lot of sense to me. I don’t want to feel like I’m living in a hotel.
It is, in fact, how I want to spend my time on this planet.
To add to the chorus, I bought a 12th gen intel framework 13 on release and it’s been flawless so far. Nixos worked out of the box. I love the 3:2 screen. I can totally believe that a small/young manufacturing company has quality control issues and some people are getting lemons, but the design itself seems solid to me.
On my old dell laptop I snapped all the usb ports on one side (by lifting up the other side while keyboard/mouse were still connected). Since they’re connected directly to the motherboard they weren’t repairable without buying a new cpu. If I did the same on the framework it would only break the $12 expansion cards and I wouldn’t even have to turn it off to replace them.
Later I dropped that same Dell about 20cm onto a couch with the screen open. The impact swung the screen open all the way and snapped the hinges. They wanted me to send it back for repairs but I couldn’t handle the downtime, so for a year I just had the hinges duct-taped together. I’ve dropped my Framework the same way, but because the screen opens the full 180 degrees it doesn’t leverage the hinges at all. And if it did break I’d be able to ship the part and replace it myself.
Not that I support the desktop offering as anything but waste, but the soldered RAM is apparently all about throughput:
With the focus of the desktop being “AI applications” that prioritize high throughput, I’d say they could’ve gone with an entirely different chip.
I get the engineering constraint, but the reason for the constraint is something I disagree with.
Who else is making something competitive?
I wish I could name something in good faith that was comparable to a hyper-repairable x86-64 laptop. Lenovo is pivoting towards repairability with the T14 Gen 5, but I can’t recommend that either yet.
Star Labs, System76, some old ThinkPad models… there are “competitive” things, but few that pitch what Framework does.
While I agree with some of that, I must stress that I’ve had hardware that was fine until just one thing suddenly broke and everything became unusable. I’ll try an analogy: with repairability, a machine whose components are each 99% reliable is fine overall, because any single failure can be fixed; without it, even if each of 10 components is 99.9% reliable instead, the machine as a whole is still only about 99% (0.999^10 ≈ 0.99), and the first failure writes off everything, so you’re not in a better situation overall.
And I say that while I still need to finish going through support for a mainboard replacement due to fried USB ports on a first-gen machine (although not an initial batch). Funnily enough, I’m wondering if there’s an interaction with my YubiKey. I also wish the chassis were a bit sturdier, but that’s more of a nice-to-have.
As for thermals, while I think they could probably be better, the 11th gen Intel CPU that you have (just like I do) isn’t great at all: 13th gen ones are much better AFAIK.
I’ve experienced a full main board failure which led to me upgrading to a 12th gen on my own dime.
The thermal problems are still there, and their fans have some surprising QA problems that are exacerbated by thermal issues.
I wish I could excuse the fact that my machine feels like it’s going to explode even with power management. The fans grind after three replacements now, and I lack the energy and motivation to do a fourth.
I think 12th gen is pretty similar to 11th gen. I contemplated the upgrade for similar reasons but held off because I didn’t really need it and the gains seemed low. IIRC it’s really with 13th gen that Intel improved the CPUs. But I agree the thermals/power seem sub-par; I feel like it could definitely be better.
BTW, I just “remembered” that I use mine mostly on my desk, and it’s not sitting directly on it, which greatly improves its cooling (I can’t give hard numbers, but temps under load are better and max CPU frequency can be maintained).
Sorry to hear about the trouble with your Framework 13. To offer another data point: I have a 12th gen Framework 13 and haven’t needed to repair a thing, I’m still super happy with it. The frame-bending I’ve also not seen, it’s a super sturdy device for me.
I can second that. I’ve had a 12th gen Intel system since late 2022 and no issues of the sort. Even dropping it once did nothing to it.
I don’t understand the necessity for a Framework Desktop edition. It’s just more waste. Just make a better laptop or sell it headless. It would also be nice to see consistency in the offerings instead of 12 being Intel, 13 being AMD AI thingy, 16 being last-gen AMD etc.
I don’t know what “waste” means here. I also don’t understand what the significant difference is in your mind between a “headless laptop” and a “desktop with laptop components”. Are you complaining that the desktop doesn’t come with worse cooling and a built-in keyboard?
The way I see it, if Framework has the capacity to build and sell a desktop computer, and they expect it to sell well enough to cover costs, it’s not hurting anything. For a small company there’s always the risk of spreading yourself too thin, but I don’t think any of us have enough insight into their operations to tell if that’s happening here.
When I say a headless laptop I mean it literally, as in a Framework 13 with some nubs on the hinge or just a plastic block instead of a screen assembly. No keyboard either, make a keyboard-slot-shaped fan assembly for cooling or something - they’re smart people, they could figure it out. The only thing weird about it would be the form factor not being the box shape desktop users are used to. I am making the complaint you’re trying to dismiss. “Waste” is more plastic crap and manufacturing supply chain crap, from moulds to energy usage to time expenditure.
You can already have headless ones. I think there’s even a case on their shop.
edit: As for the variety in CPUs, I think they started with intel partly because TB4 but then they started a partnership with AMD (and AMD CPUs are definitely in a better position now). Moreover, I’ve seen much more variety with other manufacturers (to a point that’s maddening tbh). Finally, the “AMD AI” is AMD’s marketing name: just ignore that part of the name.
Yeah, kind of a big problem for something so huge and important.
It does appear that a few volunteers are stepping forward to handle patching.
For something important the README is quite lacking.
It seems to be a dependency for rustls.
I think at this point rustls can use multiple cryptographic backends but I could be wrong. Last time I was doing crypto stuff with it I had to explicitly choose to use ring for some stuff iirc
In what way?
It doesn’t say what it is. It says it has code from BoringSSL but not an outright fork?
fair. The one-liner is at the top of the docs though at least: “Safe, fast, small crypto using Rust with BoringSSL’s cryptography primitives.”
10 minutes of just build time would be too much for me…
Yeah, 30 kloc really isn’t very big… I regularly work with a ~14 kloc C++ code base which builds in 15 seconds on my 3.5 year old laptop, and a ~20 kloc code base (+ vendored catch2, + auto-generated protobuf code) which builds in just under a minute on the same laptop. And C++ isn’t exactly known for its fast compile times either. 10 minutes is an eternity, and I imagine Apple gives its developers significantly more powerful machines than my 2021 M1 Pro.
Yeah, 50 lines per second.
Swift has ridiculous compile times.
60 MB binary also seems excessive for 30KLOC.
Only 4x faster than (interpreted) Python 2 also seems a bit on the slow side for a compiled language. Unless it’s all DB access or other I/O.
I want to add: the auto generated protobuf code is 39kloc across 24 source files. I didn’t know it was that much code. So in total it’s 59 kloc, twice as much as the Swift code base in the article.
That is kind of long! But I assume that’s a clean build; hopefully incremental builds are a lot quicker.
Compile times are my main complaint. If not for that it would be my clear favorite language.
Is this worse than Rust?
I’m compiling 12,000 lines in five seconds on an ancient Mac Mini. No dependencies outside the standard library though.
I can’t find build time benchmarks but both communities are of the opinion compile times are not good enough. In this project’s case I suspect the ten minutes includes pulling their dependencies and building them too, which is the norm with SwiftPM.
This causes a bit of headache for me. I doubled down on ring as the default backend for rustls when releasing ureq 3.0 just a couple of months ago. But this might mean I should switch to aws-lc-rs. Hopefully that doesn’t upset too many ureq users :/
There’s been some positive momentum on the GitHub discussion since you posted. Namely the crates.io ownership has been transferred to the rustls people and 2 of them have explicitly said they’ll maintain it. They need to make a new release to reflect the change and then will be able to resolve the advisory.
That does buy some time. It’s the same people stepping up who are writing/maintaining rustls, which makes me happy.
Out of interest, what factors did you consider when choosing between aws-lc-rs and ring?
My dream for ureq is a Rust-native library without C underpinnings. The very early releases of rustls made noises I interpreted to mean that too, even though they never explicitly stated it as a goal (and ring certainly isn’t Rust-native in that sense). rustls picked ring, and ureq 1.x and 2.x used rustls/ring.
As I was working on ureq 3.x, rustls advertised they were switching their default to aws-lc-rs. However the build requirements for aws-lc-rs were terrible – like requiring users on Windows to install nasm (this has since been fixed).
One of ureq’s top priorities has always been to “just work”, especially for users new to Rust. I don’t want new users to face questions about which TLS backend to choose. Hence I stuck with rustls/ring for ureq 3.x.
aws-lc-rs has improved, but it is still the case that ring has a better chance of compiling on more platforms. RISC-V is the one I keep hearing about.
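For what it’s worth, this is roughly what pinning a backend looks like today, going by my reading of rustls 0.23’s CryptoProvider API (the module paths and install_default() here are my understanding of the current docs, so treat this as a sketch rather than gospel):

```rust
// Sketch: explicitly install ring as the process-wide rustls crypto provider,
// assuming rustls 0.23 with the "ring" cargo feature enabled.
// Swapping to aws-lc-rs would mean calling
// rustls::crypto::aws_lc_rs::default_provider() instead (behind its own feature).
fn main() {
    rustls::crypto::ring::default_provider()
        .install_default()
        .expect("another crypto provider was already installed");

    // ...build HTTP clients as usual; anything that relies on the process-wide
    // default provider now picks up ring.
}
```

If ureq builds its rustls config from the process default provider (I haven’t checked), doing this once at startup would also let an application override whatever backend the library defaults to.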
Wait, does that mean the Rust ecosystem is moving towards relying on Amazon and AWS for its cryptography? That doesn’t sound great. Not that I believe Amazon would add backdoors or anything like that, but I expect them to maintain aws-lc and aws-lc-rs to suit their own needs rather than the needs of the community. It makes me lose some confidence in Rust for these purposes, to be honest.
What do you see as the conflict here, i.e. where would the needs differ for crypto libraries?
I’d expect a corporate funded crypto project to be more likely to get paid audits, do compliance work (FIPS etc), and add performance changes for the hardware they use (AWS graviton processors I guess), but none of that seems necessarily bad to me.
Things like maintaining API stability, keeping around ciphers and cryptographic primitives which AWS happens not to need, accepting contributions that add features AWS doesn’t need or fix bugs that don’t affect AWS, and improving performance on platforms AWS doesn’t use are all things I wouldn’t trust Amazon with.
Yeah. In AWS’s favour are quantum-resistant ciphers and FIPS. In a comment below I found there is another initiative, “graviola”, that seems promising.
There is also the RustCrypto set of projects.
There is, but AIUI AWS-LC uses assembly code and thus can provide timing-safety, whereas RustCrypto is “written in pure Rust”.
Can anyone explain what the kernel maintainer drama they’re talking about is?
Most of Asahi’s linux work is in Rust, such as their new GPU drivers for Apple Silicon. Part of that requires access to the DMA subsystem. An asahi maintainer other than Marcan wanted to upstream Rust bindings to DMA. These would have been part of the “Rust for Linux” subsystem, which explicitly is exempted from the usual kernel stability guarantees. The maintainer of DMA in Linux objected to them upstreaming those bindings, as he felt he would then be obligated to maintain the Rust code, and he did not want to maintain Rust. This was combined with some uncharitable comments on the Rust for Linux project’s goals. There has been a lot of debate on other internet forums about whether these comments were uncharitable about Rust, Rust for Linux, or some other distinction that personally I think doesn’t really matter.
Marcan and that maintainer then exchanged words on the LKML. Unsatisfied, Marcan appealed to his Mastodon for support. Linus then stepped in with a reply heavily condemning the appeal to social media and not really addressing the complaints that led to that point. Marcan later resigned from his kernel maintainerships, and now also from the Asahi project.
I think this understates the position of the maintainer. He sent a message to LKML that states it pretty clearly: he thinks a multi-language kernel is a terrible mistake. And I can understand his position — which to me demonstrates that there is a leadership failure happening, because this kind of fundamental conflict shouldn’t be allowed to go on this long. “Disagree and Commit”, as they say at Amazon.
https://lwn.net/ml/all/20250131075751.GA16720@lst.de/
I’ll go further: it misstates the maintainer’s position.
The maintainer never indicated “he felt he would then be obligated to maintain the Rust code”.
The maintainer’s “NAK” occurred in response to the following post
I.e. after the explicit statement that the rust people, not Hellwig, would be maintaining it separately.
(And then we reach the post you quoted, where he makes it clear that his reasons are that he doesn’t want Rust in Linux at all, not that he doesn’t want to maintain it)
Yes, I understated the understatement. :) His stated goal is to use his influence and control over the DMA subsystem to prevent Rust (edit: or any other non-C language), from being in the kernel at all. As his NAK says:
Classic LKML rhetorical style here, to use a loaded word like “cancer” to make what is actually a reasonable technical point.
It is not a reasonable technical point! It demonstrates a fundamental incuriosity about technical matters. Categorical technical statements are rarely reasonable.
I disagree in general, but not in specifics. I’ve spoken to Christoph Hellwig at length (literally 6 hours) about the topic of Rust, and it was not combative. He was curious. It’s hard to sum up briefly, but his disagreements are rooted in specifics and he’s a hardliner about the single-source-language thing. But not for the sake of being a hardliner. His hard-line stance comes out of a different weighting of the issues at hand, not an outright dismissal.
Those people existing is to be expected and it doesn’t invalidate their opinions.
Obviously not a fan of LKML rhetoric, but yep, you get the tone you built over years.
You can disagree, but for the rest of us who haven’t had these super private, intimate exchanges where we get to “really know” someone, what he puts out in public is our perception of him. To that end, he’s called the project a cancer and declared he’s going to do whatever he can to stop it, which is not what a curious person does. He comes across instead as a religious zealot, or “hardliner” in your terms.
That’s fair, but it’s also a categorical mistake to read a (brusque and inappropriate) email and then judge the whole person for days and weeks on the internet. I’m not saying you “really need to know someone”, but you also need to be aware that hundreds of hours are being spent reading into what those statements may mean.
Wanting to keep a large, complex, codebase in a single language is a very reasonable technical point.
Using the term “cancer” to describe the adoption of another language, though, is inflammatory and probably didn’t help matters.
A general inclination towards this is reasonable (and one I agree with). A categorical statement is not.
Again, many other C and/or C++ projects have introduced Rust into them. Some have succeeded, while others have not. It is worth learning from them.
I’ve personally had to make the decision whether to allow an additional language into a large codebase, and there are some major negatives to consider that are unrelated to the technical aspects of the language. The maintainer said three times in the above excerpts that he doesn’t have a negative opinion of Rust itself.
I’ve always heard that Google allows only four languages in their codebase, and one of those is only because they invented it. :)
Yes, I was part of the first Rust team at Meta so I’m quite aware of the upsides and downsides! That’s why I said “categorical”.
By “categorical”, you mean saying “no second language could ever have upsides that overcome the downsides of having two languages”? I agree that’s a pretty conservative position. It’s hard to even imagine another candidate for second language, though (I dunno, Zig maybe?), so I didn’t take it as literally categorical.
This is the quote:
This is quite categorical!
I think Zig sees itself as a candidate second language for C projects, but that’s probably beside the point; nobody’s trying to bring it into the kernel.
Indeed, but that call was already decided when Rust support was first merged.
The goal of the experiment is to explore those tradeoffs. You can’t do that if you don’t merge things!
And yet they are one of the companies pushing Rust for Linux… They understand the tradeoff. The rust for linux people understand the tradeoff. They just think it is worth it.
Totally. But that does seem like a valid engineering question people could disagree about.
However, this is exactly the kind of huge impactful question that should get escalated to the chief architect. And if there’s a sanctioned scouting effort to gather experience to figure out the right answer, somebody found to be actively subverting that effort should be…corrected…by the chief architect.
It’s interesting that they won’t allow it in their internal codebase but they’re happy to let the Linux maintainers take on the burden of adding it there.
What’s your source for Google not allowing Rust in their internal code base? Google publishes crate audits for google3: https://github.com/google/rust-crate-audits/blob/main/manual-sources/google3-audits.toml
I’m not sure where you get that from - I worked at Google and literally all the code I wrote was in rust
In the google3 main repo?
Is the Linux kernel a place for curiosity or for consistency?
R4L is officially an “experiment” with the goal of exploring those tradeoffs. I.e., satisfying curiosity.
This whole debate is so tired. The “including Rust” ship has, at least temporarily, sailed, and maintainers blocking it based on not wanting Rust/a second language doesn’t make sense when the whole point is to explore that option.
Maybe all the code will get deleted at some point, but the code needs to be merged first to then decide that based on how well it works in practice.
Why does it need to be merged though? If it’s officially an experiment, maintain it in a separate R4L tree, and everybody’s happy?
To have a thorough experiment, it must be merged into the official tree. One must be able to see how this interacts with a diverse set of hardware and software configurations, and also how this affects the development process inside the kernel community. Some maintainers are afraid of how the introduction of Rust would affect them in the future and the maintainability of the code base. Without it being in the official tree, there wouldn’t be any conclusion on those points.
It sounds like some of the kernel maintainers at least don’t want to have to pay the cost of this large-scale experiment? Like they don’t feel it’s worth the downsides of massively complicating the kernel into a multi-language codebase? Like they understand the potential benefits of memory safety but they feel they are outweighed by the risks?
These individuals believe that they know the result of the experiment before it’s been run. But they do not, because they cannot, by definition. Of course they don’t want to pay the cost, because change is hard, and deep down they know that there’s a chance the experiment succeeds, and thus their responsibilities will either change or they will have to cede responsibility to someone more capable.
That there is benefit to Rust in theory is in little question: it provably solves problems that the kernel has. Whether it can solve those problems without causing larger ones is why the experiment is required. Blocking the experiment for essentially any reason is more or less indefensible, because the entire point is that they don’t know.
That sounds way too confident and assured of a statement to me. You are discounting the opinion of subject matter experts as if they have no idea what they’re talking about. If someone came to you at your workplace and said that they want to run an experiment to convert your codebase to a new language, and they want you to help maintain and support that experiment while also delivering on all your existing obligations, would you be so quick to accept that? I don’t know about you, but personally I would and have pushed back on rewrite suggestions as a bad idea, despite the perceived benefits (for example, Management thinks that rewriting in JavaScript will make it easier to hire people for the project in the future).
Would rewriting in JavaScript have possibly made me redundant? Maybe. But it would also be massively expensive, cause huge churn, and have a real possibility of failing as a project. We can’t just ignore the very real possibility of risks because of the expected benefits.
There is no interpretation in which it’s too assured. My position is that the result of the experiment is unknown, and that the potential benefits are known, both of which are hard facts.
I have run such experiments in my workplace before. Some have succeeded and some have failed. The successes have massively outweighed the impact of the failures.
Assuming that you know the outcome before you begin is the pinnacle of hubris, which we see time and again with these entrenched maintainers. They may be domain experts, but they deserve every ounce of criticism that they receive. Closed mindedness is not how progress is achieved.
My point is that there must be some threshold you use to decide whether an experiment is worth running, or not even worth the effort because your experience and expertise tell you otherwise. Or would you accept every experiment that anyone proposed? E.g., suppose someone wanted to model your entire technical stack with TLA+ for high-assurance purposes. On paper it sounds like a great idea (formally verify your concurrency properties), but don’t you see how a reasonable project lead might say “While this project could certainly bring great benefits, it’s not worth the tradeoff with our team size and capabilities right now. That may change in the future”?
Some threshold must exist, yes. Presumably, that threshold should be somewhere below “the progenitor of the system has decided that the experiment should be run”, which is currently the case for RFL.
Individual system maintainers should not and must not be able to usurp the direction of the project as a whole.
Your core misunderstanding is that we are coming from a tabula rasa. We are not, and if your position is that Linus’s previous endorsements shouldn’t stand, then we’re having an entirely different (and much less tenable) conversation.
Is it though? To my understanding the decision was only that they will try it and see if it works on both the technical and the social levels. There was never any guarantee given that Rust will be shipped in Linux at all costs. To my understanding Linus’s approach is to see if Rust is overall viable for the kernel, and that includes the opinions of the other subsystem maintainers. Because if they are overall against it, the experiment is dead in the water.
where do the interests of actual linux users factor into this?
I like it when drivers work correctly and don’t break every other time I update the kernel due to weird concurrency issues.
then you should update your kernel twice as often, then it will be every fourth time instead of every other time.
Again, that decision has already been made. There’s no point questioning it over and over. Doing so distracts from the debates that are important now: how leadership can do better at enforcing these project-wide decisions so situations like R4L don’t fester, and how the Rust code can be reviewed without raising the same solved questions again and again (concern trolling).
If you’re genuinely curious about those questions, there’s plenty of text written about it. LWN is a good place to start.
I hear you, but it is a bit in the nature of projects with very distributed decision making and high individual authority that decisions will indeed be questioned again and again, at every point. Those projects need a long breath and the willingness to talk about the same points again and again.
It comes with its strengths; in particular, those projects are extremely resilient and fast in other regards. You can’t have both. Pushing top-down decisions in such an environment can have other unintended effects.
It’s not something to take up (period, but even more so) with the R4L devs.
Doing that is like berating a cashier for the price of an item.
It sounds like the decision has actually not been made, just dictated by ‘top management’ and now getting pushback from the rank-and-file?
The guy who blocked it is ‘top management’, inasmuch as that exists for Linux.
sorry, the guy who blocked what? has the decision been made?
The guy who blocked the patches that the Asahi Linux folks wanted, which I think were to allow writing some rust drivers.
The guy who blocked has the authority to reject patches in their subsystem.
It wasn’t a patch in their subsystem though, it was in rust/ and used the DMA subsystem.
You’re right that it’s in rust/, tho unclear to me if it’s properly in the rust subsystem or not. You and others may be right that this patch may still go thru; that seems to depend on Greg KH and Linus.
but did they have the authority to reject the patch?
I don’t believe so.
so he has the authority to block it, but not to reject it? him being the guy referred to above as “the guy who blocked the patches that the Asahi Linux folks wanted.”
So the original statement, “that decision has already been made,” is supposed to mean that the decision not to allow that patch has already been made?
ThinkChaos was talking about the decision to do the Rust for Linux (R4L) experiment.
I was talking about the decision to block the patch.
Sorry for the confusion.
how could the decision have been made, if the guy who blocked the patches is ‘top management’, inasmuch as that exists for Linux?
how was the decision made?
I sure hope it’s for both! If we cannot find a balance between the two, then we have failed as engineers.
Personally I think it does matter when a core maintainer calls a project that’s part of the kernel “cancer” and the guy who’s in charge doesn’t seem to care.
Marcan got a lot of mileage out of theatrically misinterpreting the “cancer” comment, and you’re carrying it here.
The actual comment:
The cancer is not Marcan, Asahi, R4L, or Rust. The cancer is anything-but-C. The cancer is a two-language codebase.
Still a very discouraging comment from a maintainer, and it bucks Linus’ decision that trying out Rust in the kernel is OK, but the distortion of this comment has been bugging me for the last week.
R4L is a project to make Linux a two-language code base. So he quite explicitly called the goal of the project cancer.
Yep, that’s what the commenter above is referring to. He thinks that project goal is unworthy. Or, well, in his words, cancer.
I don’t have an opinion, but I think a fair and less charged way than “cancer” of stating the position is just this: using Rust would have positive benefits, but they are far outweighed by the negatives of allowing a second language into the kernel.
The pros and cons of RfL are somewhat subjective, and the final balance is hotly debated.
But I find it quite telling that Hellwig’s core argument (that a multi-language codebase requires more work) is held by people who didn’t try doing any of that work. Whereas the kernel devs who started using Rust are explicitly saying “the API compat churn isn’t that big a deal, we can do that work for you if you want”.
We mostly hear about the drama, but it seems that the overall feeling toward Rust among kernel devs (not just RfL devs) is generally positive.
but then you lose the meaning conveyed by the metaphor, of something that grows beyond its intended scope until it kills the organism.
I don’t think the metaphor conveys any technical meaning beyond what I said. I don’t know what “kills the organism” is supposed to relate to. Is the “organism” the kernel? The community of kernel developers? And what does “kill” equate to in reality? It would be better to have the discussion in concrete terms rather than metaphors.
It does convey the personal outrage of the maintainer better, certainly.
thread depth limit reached; continuing here.
yes, I’m ignoring the less absurd things you’ve said because I don’t know what to take seriously when you haven’t retracted the most absurd thing.
one characteristic of cancerous growth is that it happens whether you want it to or not.
it doesn’t imply that, but I do think he would be much more OK with a little bit of Rust if the scope were somehow guaranteed not to expand. like I don’t know, maybe there’s a carve-out for testing a particular class of rust drivers in linux to help find bugs, but eventually the drivers are ported to C for use in linux and the rust version is used in a separate kernel written from scratch in rust?
so you wouldn’t say that you want to have no cancer at all in your body, ever? if it’s a non-cancerous tumor, I have much less of a problem with it being in my body.
I agree that “kill” could have multiple meanings.
I think “growing beyond its intended scope” is technical meaning that is not included in “the benefits outweigh the negatives.” you disagree?
Cancer has no intended scope, right? “Allowing into the kernel” conveys that there shouldn’t be any in the kernel.
sure. so do you think “growing” is a technically meaningful word that is not implied in “the benefits outweigh the negatives”?
I don’t want to dig into the minutiae of why this metaphor is inaccurate — my point is that using this metaphor at all is unprofessional and counterproductive because of the heavy emotional baggage it comes with. Just say “the introduction of Rust anywhere in the codebase will inevitably result in a growing amount of Rust in the codebase”.
It’s especially odd in this context, given the history of this metaphor regarding Linux.
okay, so do you retract your previous statement that “I don’t think the metaphor conveys any technical meaning beyond what I said”?
my intention was not to get into minutia either, but we’ve gone back and forth three times and you still haven’t retracted that statement so I don’t know where you stand.
so our enemies allow themselves to use the metaphor to efficiently communicate with each other, but we have to forswear it.
If you don’t want it there at all, it’s redundant to also complain that it’s growing. Cancer also has many other attributes besides “growing” that are irrelevant or contradictory to the position taken.
The reason he used that particular metaphor was to express disgust and horror, not to be technically accurate. If he just wanted to evoke “growth” he could have said it was like ivy. Disgust and horror are not an appropriate device to call on in this context, IMO.
Re Mr. Ballmer, the enemies were wrong that time, so don’t you think evoking that argument by using the same metaphor muddies the rhetorical water a bit?
if I can’t tell whether you stand by your statements, it feels futile to interpret them as if you believe them. that’s my issue with continuing in light of the fact that you are just leaving “I don’t think the metaphor conveys any technical meaning beyond what I said” without clarifying whether you believe it.
“complaining that it’s growing” is completely different from identifying the growth as a factor in the technical drawbacks.
Cancer has a multitude of characteristics, of which growth is only one. If anything, it has many characteristics contradictory to the point being made: it’s an undesirable outcome of natural processes, it is typically self-generated from within rather than deliberately introduced, it is hard to eliminate because it’s nourished by the same system that sustains healthy cells…
If you have to tell me exactly which part of the metaphor I’m supposed to pay attention to in order to get the technical meaning, then you aren’t conveying technical meaning, you’re muddying the water. Just say what you mean.
The additional meaning being conveyed here is not technical, it’s emotional, and I do agree emotional meaning was conveyed.
I take this to mean that you still stand by the statement that “I don’t think the metaphor conveys any technical meaning beyond what I said.” if you treat all technical metaphors like this, I’m glad you’re not my coworker.
or it could mean I am conveying technical meaning but you’re being willfully obtuse. just maybe…
Well, notice that you aren’t absorbing my actual point too well, because you’re very focused on making your own.
To give him the benefit of the doubt, he could have been trying to convey the technical meaning of “growth” in the sense of “if you let Rust in, we’ll just have more Rust”. However, that’s an utterly vacuous thing to say, because obviously the whole point of the exercise was to let Rust in and see what happens, and why would you do that unless you want to have more of it if the experiment succeeds? It conveys no useful information beyond “I don’t think Rust is a good idea at all, for reasons”; in other words, “the benefits don’t outweigh the negatives”. In fact it confuses matters because it implies he thinks Rust would be OK if only it didn’t grow.
I’m actually trying to be polite to the guy here, because I truly believe it was an emotional outburst, not an attempt at a technical argument. His actual technical argument (stated much more clearly elsewhere) is that there should be no Rust at all in the kernel, ever, not that once there is some, there will be more. So if “cancer” was supposed to mean “growth” as a technical argument, he misstated his own argument.
I don’t see the misinterpretation, lonjil said
Not
Or whatever phrase you would need to make your objection stand.
I could be wrong, but I get the feeling that people didn’t quite get why Hellwig used the word “cancer”. I could be wrong about this in a lot of ways; maybe I too misunderstood what Hellwig meant, maybe I misjudged how others are interpreting what he said. But allow me to add my 2 cents.
People seem to be interpreting what Hellwig said (i.e. calling R4L “cancer”) as him saying R4L is bad, evil, a terrible thing, etc. And I totally understand why many would think that, and would agree this is a terrible choice of words. But I think that Hellwig’s focus is actually on the fact that cancer spreads. A better word to use would probably be like “infectious”, or “viral”. Although I disagree with him, I think what he was saying is that, despite what Rust developers promised, Rust will spread to more corners of the kernel and increase the maintenance burden and will have a negative impact on the kernel as a whole.
I think technical leaders should focus on clarity rather than everyone else trying to do exegesis on what they write.
That’s very fair, I was not trying to defend Hellwig’s communication skills in the least here.
I think you’re right, but that doesn’t exculpate Hellwig in the least. One of the defining characteristics of cancer is that it is malignant. It seems pretty clear that he used this metaphor because he thought that Rust would spread and that it is dangerous and harmful.
yes, which is a perfectly valid technical position and to disallow a maintainer from expressing that position is completely insane.
“infectious” or “viral” have different connotations. viral would be if they thought the BSDs might catch it. infectious is not as specific as cancer; cancer specifically grows from one area and consumes more and more of the things next to it. infections can do that but not necessarily.
actively expelling the cancer probably would have drawn even more ire.
Unrelated to the rust issue, my understanding is that such toxic language actually isn’t all that strange in the Linux kernel maintainer community. It feels like they try somewhat not to overdo it, but they aren’t that much concerned about the toxicity. More like, “I’ll try but I don’t actually care”.
The only reason they’ve toned it down is due to financial pressure from major funders of LF.
The fact that many of those LF funders are also the employers of major Linux maintainers is also relevant, I feel.
I think it’s kind of crazy to label a vivid and extremely useful metaphor as “toxic.”
It’s useful in some cases, but when the effect of it is raising the temperature of the discussion, then I would say it isn’t useful in that instance. There are other ways to convey the same meaning that wouldn’t have attracted the same amount of strong emotions. We’ve seen examples of such alternate arguments in this thread that would have been more suitable.
sure. but then don’t fixate on the word, and acknowledge that their choice to use strong language can also be an effect of the rising temperature of the discussion, for which responsibility is shared among all participants.
There’s this idea that multi-language codebases are categorically a disaster, right? What really bothers me is how small-minded it is. There are many C and C++ codebases that have introduced Rust. It’s not easy but it is possible, and it often is worth it. Some attempts have been successful and others haven’t. You can learn so much from other people’s experiences here!
Simply calling multi-language codebases a cancer closes off discussion rather than opening it. That goes against the basic curiosity that is a hallmark of engineering excellence.
Honestly everything around R4L that has been going has just cemented in me the belief that the average long time Linux maintainer is fundamentally incompetent. They’re reasonably good at One Thing, but totally incurious and scared of anything outside of that One Thing they’re good at.
Do you suppose there might be some kind of incentive structure behind this pattern?
That seems likely, though I don’t think I can usefully speculate on the details.
Is it possible that we could take an experienced kernel maintainer’s opinion seriously instead of dismissing it immediately as incurious and ‘against’ engineering excellence? If a co-worker who specializes in a different domain came to you and kept insisting that something you know is very difficult and not worth the tradeoff is actually quite easy, would you be incurious and ‘against’ engineering excellence if you dismissed their wild suggestions without a deep discussion of the tradeoffs?
Evaluating who to take seriously and who not to is a matter of judgment cultivated over a lifetime.
Depends on many things, but in general, yes, of course, especially if they were someone I took seriously on other matters. If nothing else, it’s a chance to sharpen my arguments. If I keep having the same arguments over and over I might even step back and write an essay or two explaining what I believe and why I believe it. Here’s one that’s public that was a result of the sort of back-and-forth you’re talking about.
Your post strayed from a factual retelling of events towards narrativization here.
Reading the discussions on this elsewhere, there was no way to describe this that wasn’t going to get accused of bias by people holding strong opinions on either marcan or the Rust for Linux project.
I’m not accusing of bias (which is more or less unavoidable), but of constructing a narrative. The facts are: he exchanged words on the LKML and then wrote Mastodon posts about the exchange. The narrative you’re constructing is: he exchanged words on LKML, then because he was unsatisfied with how that discussion was going, he wrote Mastodon posts about the exchange in an attempt to gather support.
I think it would be better to leave out the guesswork about intentions, or at least clearly separate it from the factual retelling of events.
marcan has posted about this, so it’s hardly guesswork: https://lore.kernel.org/rust-for-linux/208e1fc3-cfc3-4a26-98c3-a48ab35bb9db@marcan.st/
Correction: Marcan never “exchanged words” with Christoph Hellwig on the patches, which had not even been posted by him. He just replied to the thread with a tirade.
I’m one of those using an unusual window manager (sawfish) with xfce4. I’m starting to dread the inevitable day that I’ll have to change back to some mainstream system that will of course include all the pain points that led me to my weird setup in the first place. Fine, so X11 needs work, but replacing it with something as antimodular as wayland seems to be is just depressing. Something must be done (I agree!); this is something (er, yes?); therefore this must be done (no, wait!!). We’ve seen it with systemd and wayland recently, and linux audio infrastructure back in the day; I wonder what will be next :-(
First, reports of X’s death are greatly exaggerated. Like the author said in this link, it probably isn’t actually going to break for many years to come. So odds are good you can just keep doing what you’re doing and not worry too much about it.
But, even if it did die, and forking it and keeping it going proves impossible, you might be able to go a while with a fullscreen xwayland rootful window and still run your same window manager in there. If applications force Wayland on you, maybe you can nest a wayland compositor in the xwayland instance to run that program under your window manager too. I know how silly that sounds - three layers of stuff - but there’s a good chance it’d work, and patching up the holes there may be a reasonable course of action.
So I don’t think the outlook is super bleak yet either.
I’m not convinced that building something like those kinds of window managers on top of wlroots is significantly more difficult than building those kinds of window managers on top of X11. There’s a pretty healthy ecosystem of wlroots-based compositors which fill that need at least for me, and I believe that whatever is missing from the Wayland ecosystem is just missing because nobody has made it yet, not because it couldn’t be made. Therefore, I don’t think your issue is with “antimodularity”, but with the fact that there isn’t 40 years of history.
I assure you, my issue is with antimodularity. The ICCCM and EWMH are protocols, not code. I can’t think of much tighter a coupling than having to link with and operate within a 60,000-line C-language API, ABI and runtime. (From a fault isolation perspective alone, this is a disaster. If an X window manager crashes, you restart it. If the wayland compositor crashes, that’s the end of the session.) You also don’t have to use C or any kind of FFI to write an X window manager. They can be very small and simple indeed (e.g. tinywm in 42 lines of C or 29 lines of Python; a similarly-sized port to Scheme). You’d be hard pressed to achieve anything similar using wlroots; you’d be pushing your line budget before you’d finished writing the makefile.
The Mir compositor seems to be closer to what you’d want, with a stable API to develop your own window manager / DE on. It still runs the code in the same process, though.
Another alternative would be Arcan, which has (what I consider, at least) a refinement of the X11 approach to window management, with high crash resilience. While it has its own native protocols, there is support for Wayland clients.
Yes, Arcan looks very interesting indeed. I’m following the project (from a bit of a distance) with interest.
There is nothing inherent in Wayland that would prevent decoupling the window manager part from the server part. Window managers under X are perfectly fit to be trivial plugins, and this whole topic is a bit overblown.
Also, feel free to write a shim layer to wlroots so you can program against it in anything you want.
I’m talking about modularity. Modularity is what makes it feasible to maintain a slightly-different ecosystem adjacent to another piece of software. Without modularity, it’s not feasible to do so. Obviously there’s nothing inherently preventing me from doing a bunch of programming to get back to where I started. But then there’s nothing inherently preventing me from designing and implementing a graphics server from scratch in any way I like. Or writing my own operating system. There’s a point beyond which one just stops, and learns to stop worrying and love the bomb. Does that sound overblown? Perhaps it always seems a bit overblown when software churn breaks other people’s software.
With all due respect, I feel your points are a bit demagogic without going into a specific tradeoff.
The Wayland protocol is very simple; it is basically trivial IPC over Linux-native DRM buffers and features. Even adding a plug-in on top of wlroots would still result in much less complexity than what one would have with X. Modularity is not an inherent goal we should strive for; it is a tradeoff with some positives and negatives.
Modular software by definition has more surface area, so more complexity, more code, more possibility of bugs. Whether it’s worth it in this particular case depends on how likely we consider a window manager “plugin” to crash. In my personal opinion, this is extremely rare - they have a quite small scope and it’s much more likely to have a bug in the display server part, at which point X will fail in the exact same manner as a “non-modular” Wayland server.
I’m not sure exactly what pain points led you to your current setup, but I don’t think the outlook is that bleak. There are some interestingly customizable, automatable, and nerd-driven options out there. Sway and hyprland being the best-known but there are more niche ones if you go looking.
I use labwc and while it’s kind of enough, it’s years away from being anywhere near something like Sawfish (or any X11 window manager that’s made it past version 0.4 or so). Sawfish may be a special case due to how customizable and scriptable it is but basically everything other than KWin and Gnome’s compositor are basically in the TWM stage of their existence :).
Tiling compositors fare a little better because there’s more prior art on them (wlroots is what Sway is built on) and, in many ways, they’re simpler to write. Both fare pretty badly because there’s a lot of variation; and, at least the last time I tried it (but bear in mind that was like six years ago?) there was a lot more code to write, even with wlroots.
I’m not saying it’s there right now, but it’s not totally dire, and I think it’s reasonable to expect it to get better.
Ah, sorry, I misread that via this part:
…as in, there are, but none of the stacking ones are anywhere near the point where they’re on-par with window managers that went past the tinkering stage today. It’s certainly reasonable to expect it to get better; Wayland compositors are the cool thing to build now, X11 WMs are not :). labwc, in fact, is “almost” there, and it’s a fairly recent development.
Just digging into Hyprland and it’s pretty nice. The keybindings make sense, and when switching windows the mouse moves to the focused window. Necessary? Probably not. But it’s a really nice QoL improvement for a new Linux user like myself.
I’m pretty much in the same situation as you. I’m running XFCE (not with sawfish, but with whatever default WM that it ships with). I didn’t really ever explore using Wayland since X11 is working for me so I found no reason to switch yet. For a while it seems like if you wanted a “lightweight” desktop environment you’re stuck with tiling Wayland environments like Sway. I still prefer stacking-based desktop environments, but don’t really want to run something as heavy as GNOME or KDE. I’ll probably eventually switch to LXQt which will get Wayland support soon.
Isn’t XFCE getting Wayland support at the moment?
It’s currently experimental. XFCE’s Wayland support isn’t packaged in distros yet, AFAIK.
It sounds like many things are changing for the better. But at the same time it doesn’t look like we’re already in a working state for people that don’t want to tinker around. I’d rather not have random issues and quirks while actual recording support is so-so depending on the compositor. Especially when every problem is either “you’re using the wrong distro” or “you’re using the wrong compositor, works on mine”.
I also would rather not have random issues and quirks, and honestly, that means I don’t want X11. X is the king of random issues and quirks in my experience.
I mean, if the problem is that they’ve finally got a good solid solution but the software you’re using hasn’t gotten around to implementing it yet, or the software you’re using has implemented it upstream but your distro hasn’t pulled that version in yet, what other response do you really expect? You can use a system where it already works, or you can wait for the necessary changes to make their way to your setup, or you can pitch in to make it happen faster.
On a technical level I agree with you. But on a consumer level it sounds like you just have a long stretch of frustration during which you’d probably rather off-board from Linux and move to a Mac (there, I said it; you either die a hero..). After which there is even more friction for migrating back to Linux in x years, once the whole ecosystem and LTS train has arrived at a state of Wayland that is actually usable - without resorting to hacks that make you sound like someone trying to regedit Copilot out of Win11.
Or to rephrase: If Linux on the desktop is your goal, then this is not the state where you can tell people that wayland is good now.
I think “Linux on the desktop” is approaching the problem from the wrong angle. Instead it should be “GNOME desktop” or “KDE Plasma desktop” or whatever.
Users don’t give a damn that the thing runs on Linux. They give a damn about clickable things. So you market the clickable things.
From one of the slides:
I don’t understand this attitude. The first thing I don’t understand is this about “having” to pull down code that he doesn’t trust. You don’t have to do anything, you choose what you want to depend on yourself. But then what I really don’t understand is why some random third party compiling the code and giving you a shared library suddenly makes it reasonably trustworthy.
I interpreted this as saying: In C the norm is to use fairly large shared library dependencies that come as a binary from someone you trust to have built and tested it correctly; most often it’s from a distribution with a whole CI process to manage this. The only source code you compile is your own. Whereas in Rust/Go/Python/etc. the norm is to download everybody else’s source code and compile it yourself on the fly, and the libraries are typically smaller and come from more places. Also, in the typical default setup (”^” versions) it’s easy to pull down some very recent source code without realizing it.
I can see how that would feel like you’ve thrown away a whole layer of stability and testing compared to the lib.so model.
It’s just pedantic really. Use vetted libraries if you care. Git submodule them and reference them locally. Cargo won’t stop you.
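Something like this, for instance (the crate names are made up for illustration; the exact-version pin and path-dependency syntax are standard Cargo):

```toml
[dependencies]
# Exact pin: no "^" range, so `cargo update` can't silently move this forward.
some-vetted-crate = "=1.2.3"

# Reference a copy you've actually reviewed, e.g. a git submodule checked out
# under vendor/, instead of whatever crates.io serves up next.
locally-reviewed = { path = "vendor/locally-reviewed" }
```

It’s more work, but nothing about cargo forces you into floating versions.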
I don’t think you get the point.
When I use my OS’s package manager to install dependencies for my program, I get some safeguards. First of all, the dependency was clearly important enough for someone to package it. There are also checks before a package is added, and changes are reviewed before they land. When security updates are available they get packaged, and I can simply update, so all running software (even software not managed by the package manager) gets the fix after reloading the dependency. And a maintainer can’t go rogue and just break my dependency. I get all of this for the dependencies of my dependencies too.
Of course this isn’t a full audit of all my dependencies and doesn’t fix every problem. It just helps with managing dependencies and the trust placed in them. With cargo/pip/… you need to do all of this on your own, for every dependency and every update.
I know it’s a spectrum, but OS packages really aren’t a great benchmark. There have been some incredible bugs over the years, often due to OS packaging overriding or ignoring upstream defaults. For example, when Debian screwed up private key generation…
It is very, very hard to write any serious Rust program without any dependencies from an untrusted code repository such as Cargo. In C and C++, the same is trivial, because almost all dependencies you could ever want are provided by your operating system vendor. The core difference is between trusting some random developer who uploaded some code to Cargo, and trusting your operating system.
I first wanted to write a long comment and decided to just blog it.
The TLDR is roughly: I doubt you can get away with only using OS dependencies, making that work is even more work, and you don’t actually gain as much security as you would like.
As a user, I don’t have much more trust in libfoo packaged by my distro (if it’s packaged) than in libfoo fetched from upstream. And as a distributor, I have much less trust in libfoo packaged by my users’ 50 different OSes than in libfoo I fetch from upstream.
this sums up the dynamic perfectly, and the erosion of trustworthiness to the user in favor of trustworthiness to the developer is glaring.
When you use distro provided libraries, you have to trust thousands of authors. I trust my distro maintainers to not do anything malicious, but I don’t trust them to vet 10000 C packages. Which means I have to trust the authors of the code to not be malicious.
The distro package update process is to pull down new sources, compile, run the test suite (if one is provided), only look at the code if something isn’t working, and then package it. After that, someone might notice if there is something bad going on. But it’s a roll of the die. At no point in this process is anything actually vetted. At most you get some vetting of a few important packages, if your distro is big and serious enough.
Considering how often you find random bespoke implementations of stuff like hash tables in C projects, this is clearly untrue.
I already implicitly trust the software provided by my distro.
Provided by your operating system vendor? Which operating system? If you write cross-platform code, you only get to use the dependencies that all platforms provide (which is pretty much none at all). C and C++ don’t even ship with proper UTF-8 support. It’s completely impossible to write any serious software in these languages without pulling in some external code (or reinventing the wheel). As someone who has earned a living with C/C++ development for years, I have a hard time understanding how you arrived at this conclusion.
The scope of the C/C++ standard library and the Rust standard library is very similar. Rust’s standard library has no time features or random number generators (for that, the semi-official crates chrono, time and rand can be used), but it has great UTF-8 support. Overall, you’ve got about the same power without using any dependencies.
My operating system vendor is the Fedora project on my desktop, and Canonical or the Debian project on various servers. All of those provide plenty of C and C++ libraries in their repositories.
I don’t understand why you talk about C++’s UTF-8 support or the size of C++‘s stdlib, that’s completely irrelevant to what I said.
If you use Windows (MSVC), you basically have no third party libraries available whatsoever. You need to download binaries and headers from various websites.
It’s the best example of a third party library you have to pull in for basically every project. This is not provided out of the box or by the operating system vendor.
I don’t use Windows. The person in the article doesn’t either. TFA contains a criticism of Rust from the perspective of a Linux user, not from the perspective of a Windows user.
Okay, we went from:
to:
Yes, dependency management is trivial if you only ever use a system where all your desired dependencies are provided through official channels. However, this is not the experience most C/C++ programmers will have. It’s not even the case if you just use Linux: Recently, I used SDL3 and I had to download the tarball and compile the source myself because it was too new to hit the apt repository of my distro.
The complaints from the article are from the perspective of a Linux user. My post was also from the perspective of a Linux user. Windows is not relevant in this conversation.
Unless you specified it upfront, Windows is going to be presumed to be relevant. Pretending to be surprised that someone thought it wasn’t is silly.
The person you are talking to then goes on to describe a Linux counterexample that is quite common for C/C++ development. If your response to this is “well, I don’t do any games/graphics/GUI development” then cool. You have now defined an extremely narrow category of C/C++ development where it’s common to only use libraries that come already installed. It is nowhere close to the general case though, which is the point of the commenter you are responding to.
As for the SDL example: Sure, there are situations where you may need to add dependencies which aren’t in your distro in C or C++. However, this is fairly rare and you can perform a risk assessment every time (where you’d conclude that the risk associated with pulling down SDL3 outside of your distro’s repos is minimal).
Contrast this with Rust (or node.js for that matter), where you’re likely to have so many dependencies not provided by your distro that it’s completely unreasonable to vet every single one of them. For my own Rust projects, the size of my Cargo.lock file is something that worries me, just like the size of the package-lock.json file worries me for my node.js projects. I don’t know most of those dependencies.
I have now specified that I (like the article) was talking in the context of Linux. Do I need to be more clear at this point?
As expected, it’s not very easy to read. The typography of raw text in web browsers is, sadly, horrible. It’s like motherfuckingwebsite.com compared to bettermotherfuckingwebsite.com, except raw text is even worse than unformatted HTML.
I always felt that bettermotherfuckingwebsite.com was an indictment of browser implementers. There’s no reason browsers can’t have good default styles. A publisher shouldn’t have to do the work that bettermotherfuckingwebsite did to make things look readable. That can be entirely on the user agent.
Oh I 100% agree. It’s a bloody shame that the web looks, and always has looked, completely unreadable by default. Even (or especially!) in its original form as a linked rich text document sharing network, there’s no excuse for not implementing basic typographic conventions to make the content readable.
Yet here we are, and it isn’t changing any time soon. And so, as bloggers, I feel like we have a responsibility to our readers to present them with something that’s better than the default HTML or raw text formatting.
The lesson: the only thing worse than engineer UI is physicist UI.
I mostly agree; I add CSS to my HTML for a living and in my free time. This blog is an exception, and I think I’ll keep it like this, mostly for the challenge and for the ability to add ASCII art whenever I want without needing an HTML pre tag :P
Backwards-compatibility something-something? Like viewport adjustment by default: it’s painful that everyone has to set it up, but a lot of stuff would likely break if browsers did that by default.
I was recently told about font-family: system-ui and adopted it.
But unfortunately, I suspect most users don’t adjust their browser defaults to make things readable for them either, because that likely only has an effect on a few random websites.
I kinda think the Gemini world works better: allow no styling at all, and in theory users will just set up their browser.
Yeah Gemini is “great” if you don’t need stuff like italic and bold.
inb4 someone says “why do you need that” - I quite like being able to write a citation correctly, thank you very much.
To clarify what I said, because I was unclear: I think not allowing styling is a good idea. Semantic markup (or very simple “bold”/“italics” styling) is frequently a good idea.
(I don’t think Gemini is perfect. I think it’s mostly inspiring.)
That’s why thebestmotherfucking.website exists. It literally takes one CSS import to make your website look 10x better