At this point I think we’re more surprised when folks actually end up doing the right thing as opposed to the behaviour seen from the software vendor here. As long as some baseline of security standards and practices are not enforced by regulation, organisations primarily incentivised by money are just going to continue on doing things like this with little to no repercussion. I suppose that’s nothing new though, it’ll probably take something catastrophic for regulators to get around to it—and even then there’s no guarantee.
What’s crazy is that security isn’t even incentivized by money in extreme cases. Breaches happen to companies who you think would be massively impacted, but nope. Okta was breached in 2023 and their customers were breached because of it, but their stock is up since then. That is… insane. This indicates that no one cares about security, to the extent that companies get breached due to a vendor getting breached and that vendor sees no financial impact.
I feel like users are perhaps fatigued to the extent that it feels pointless or idk. At least some of it is that security plays almost no role for software engineers other than being perceived as a useless pursuit that adds friction.
Our engineering minds are often blind to the non-technical “fixes” that stabilize these systems.
Consider credit card fraud. The absurdly low entropy of the standardized payment card Primary Account Number has led to a massive private bureaucracy that issues data handling regulations and regularly audits all organizations that handle these numbers. The expense is considerable. And yet, fraud is a regular occurrence, written off as a cost of doing business. If you as a consumer experience a fraudulent charge, you just contact your card issuer: they reverse the charge and issue you a new number. We don’t even perceive the friction because we have little basis for comparison: it’s always been this way.
On the other hand, computers aren’t secure. They are incomprehensible mountains of complexity, and society at large doesn’t want to walk away from all the percieved benefits of that complexity.
So the only reasonable way to maintain a measure of security is to evaluate risk and prioritize fixing the things that produce the most risk (for some definition of “risk”). That means you will always have some amount of security breaches at some level of severity. It’s not a question of “if” but rather “when.”
Which also means that recovery after a breach is often more important than preventing all possible breaches in the first place.
So in Okta’s case… yeah, vendors are going to have security issues. Everyone with a little experience knows that at purchase-time. If mishaps inside Okta become a regular pattern, then I’ll ditch Okta. But one-off security incidents are not going to make reasonable companies switch vendors, which means Okta’s probably going to be ok as long as they learn from their mistakes and prioritize transparency.
(Counter-example: LastPass. Anyone still trusting that company after all their mishaps is insane.)
As long as humanity continues to demand these mountains of complexity, this is the way things are going to be. The FreeSWITCH situation is pretty crazy, but the Okta case at least doesn’t seem too insane to me.
I’m not advocating for extreme levels of security, just meeting a bar that’s less embarrassing. I would probably reject the “not if but when” idea too fwiw but I’m onboard with accepting some risk, that’s the point of threat modeling - knowing what risks you do and don’t accept. I think we can do a lot better than the status quo and I think it stems almost entirely from security being something that no one cares about.
(Counter-example: LastPass. Anyone still trusting that company after all their mishaps is insane.)
A big difference between LastPass and Okta is that Okta has way more money to burn on PR.
Another one in the long list of JavaScript tools that ditched JavaScript as their implementation language for performance reasons. Hopefully this is more easily usable by the time I have to work on a NodeJS project again, because the performance improvement numbers look incredibly promising.
I think the reality is that most people are asking that same question and coming to that same conclusion. The theory of “frontend devs can own the API layer” hasn’t really played out as well as people had hoped and I know plenty of JS developers who are just as happy to write Go if it comes to it anyways.
Ok, question asked. Why write JS on the server when I could pick Java/PHP/Elixir/Go/Rust/Python/Ruby/C#/Zig/OCAML/Crystal/Nim/Perl/Kotlin/Scala/Lua/Haskell/Clojure?
I think some people are aiming for a single language as a stack. Because JS seems to not be going away anytime soon and there are so many backend languages, people were/are trying to aim for JS on the server. There are many backend choices but only one frontend choice. Therefore, to get one language, and end-to-end types, JS on the server. Yes, I understand JS avoidance and all the arguments against. Yes, I rolled my eyes when the server was discovered again.
If I question why I have two languages in my app then people move the goal posts and reduce app features. “I can just concat html text to the client using app.pl in /cgi-bin”. Sure, you always have had that option, that’s not what I mean. I mean for a certain size/complexity of application. I mean, just as one benefit or pro in the trade-off, if I have Go types they don’t go to my client. Or I have to / want to have some contract layer to sync the two. So I end up with two languages and some contract between them. In theory, you don’t have that with trpc/typedjson/tanstack/etc etc. Because your types are full-stack.
So when people are talking about Go replacing Typescript this is still Typescript dev. It’s a tool written in Go to write/check/build Typescript. If you wanted to avoid NodeJS, you would have to look at things like Deno or Bun.
There are many backend choices but only one frontend choice.
With so many languages taking on WASM targets, I think that’s becoming less true. And even before that, there are quite a number of compilers targeting JS. Of course there are drawbacks, and these approaches aren’t always practical for every web front-end project, but I do think “only one frontend choice” is overstating the case.
What’s your marketing budget? If you aren’t aligned with the marketing budget havers on this, how do you expect them to treat you when your goals diverge?
See also, fast expiring certificates making democratized CT logs infeasible, DNS over HTTPS consolidating formerly distributed systems on cloudflare. It’s not possible to set up a webpage in 2025 without interacting with a company that has enough money and accountability to untrustworthy governments to be a CA, and that sucks.
HTTPS is cool and all, but I wish there was a usable answer that wasn’t “just centralize the authority.”
Sigh. Lobsters won’t let me post. I must be getting rate limited? It seems a bit ridiculous, I’ve made one post in like… hours. And it just shows me “null” when I post. I need to bug report or something, this is quite a pain and this is going to need to be my last response as dealing with this bug is too frustrating.
See also, fast expiring certificates making democratized CT logs infeasible, DNS over HTTPS consolidating formerly distributed systems on cloudflare.
Can you tell me more about these? I think “infeasible” is not accurate but maybe I’m wrong. I don’t see how DoH consolidates anything as anyone can set up a DoH server.
t’s not possible to set up a webpage in 2025 without interacting with a company that has enough money and accountability to untrustworthy governments to be a CA, and that sucks.
You can definitely set up a webpage in 2025 pretty with HTTPS, especially as you can just issue your own CA certs, which your users are welcome to trust. But if your concern is that a government can exert authority within its jurisdiction I have no idea how you think HTTP is helping you with that or how HTTPS is enabling that specifically. These don’t feel like HTTPS issues, they feel like regulatory issues.
HTTPS is cool and all, but I wish there was a usable answer that wasn’t “just centralize the authority.”
There are numerous, globally distributed CAs, and you can set one up at any time.
Lobsters has been having some issues, I had the same trouble yesterday too.
The CT log thing is something i read on here iirc, basically that CT logs are already pretty enormous and difficult to maintain, if there are 5x as many cert transactions cause they expire in 1/5 the time the only people who will be able to keep them are people with big budgets
I suppose i could set up a DoH server, but the common wisdom is to use somebody else’s, usually cloudflare’s, the fact that something is technically possible doesnt matter in a world where nobody does it.
especially as you can just issue your own CA certs
Are you joking? “please install my CA cert to browse my webpage” may technically count as setting up a web page but the barrier to entry is so high I might as well not. Can iphones even do that?
There are numerous, globally distributed CAs, and you can set one up at any time.
That’s a lot more centralized than “I can do it without involving a third party at all.”
I dunno, maybe I’m just romanticizing the past but I miss being able to publish stuff on the internet without a Big Company helping me.
The CT log thing is something i read on here iirc, basically that CT logs are already pretty enormous and difficult to maintain, if there are 5x as many cert transactions cause they expire in 1/5 the time the only people who will be able to keep them are people with big budgets
Strange but I will have to learn more.
I suppose i could set up a DoH server, but the common wisdom is to use somebody else’s, usually cloudflare’s
Sure, because that’s by far the easiest option and most people don’t really care about centralizing on Cloudflare, but nothing is stopping people from using another DoH.
Are you joking? “please install my CA cert to browse my webpage” may technically count as setting up a web page but the barrier to entry is so high I might as well not. Can iphones even do that?
iPhones being able to do that isn’t really relevant to HTTPS. If you want to say that users should be admins of their own devices, that’s cool too.
As for joking, no I am not. You can create a CA, anyone can. You don’t get to decide who trusts your CA, that would require work. Some companies do that work. Most individuals aren’t interested. That’s why CAs are companies. If you’re saying you want a CA without involving any company, including non-profits that run CAs, then there is in fact an “open” solution - host your own. No one can stop you.
You can run your own internet if you want to. HTTPS is only going to come up when you take on the responsibility of publishing content to the internet that everyone else has to use. No one can stop you from running your own internet.
That’s a lot more centralized than “I can do it without involving a third party at all.”
As opposed to running an HTTP server without a third party at all? I guess technically you could go set up a server at your nearest Starbucks but I think “at all” is a bit hard to come by and always has been. Like I said, if you want to set up a server on your own local network no one is ever going to be able to stop you.
I dunno, maybe I’m just romanticizing the past but I miss being able to publish stuff on the internet without a Big Company helping me.
Which drawbacks? I ask not because I believe there are none, but I’m curious which concern you the most. I’m sympathetic to wanting things and not wanting their consequences haha that’s the tricky thing with life.
HTTPS: I want the authentication properties of HTTPS without being beholden to a semi-centralized and not necessarily trustworthy CA system. All proposed alternatives are, as far as I know, bad.
DNS: I want the convenience of globally unique host names without it depending on a centralized registry. All proposed alternatives are, as far as I know, bad.
These kind of accusations are posts that make me want to spend less on lobsters.
Who knows if it’s planned or accidental obsolescence? Many devices and services outlive their teams by much longer than anticipated. Everyone working in software for a long while has experienced situations like those.
I also find the accusation that HTTPS is leading to broken devices rather wild…
I want to offer a different view: How cool is it that the devices was fixable despite Google’s failure to extend/exchange their certificate. Go, tell your folks that the Chromecast is fixable and help them :)
For me, it’s takes like yours that irritate me. Companies that are some of the largest on the planet don’t need people like you to defend them, to make excuses for them, to try to squelch the frustration directed towards them because they’re either evil or incompetent.
By the way, there is no third option - either they’re evil and intended to force obsolescence upon these devices, or they’re incompetent and didn’t know this was going to happen because of this incompetence.
The world where we’re thinking it’s cool that these devices are fixable tidily neglects the fact that 99% of the people out there will have zero clue how to fix them. That it’s fixable means practically nothing.
For me, it’s takes like yours that irritate me. Companies that are some of the largest on the planet don’t need people like you to defend them, to make excuses for them, to try to squelch the frustration directed towards them because they’re either evil or incompetent.
Who cares? No one is defending Google. People are defending deploying HTTPS as a strategy to improve security. Who cares if it’s Google or anyone else? The person you’re responding to never defends Google, none of this has to do with Google.
By the way, there is no third option - either they’re evil and intended to force obsolescence upon these devices, or they’re incompetent and didn’t know this was going to happen because of this incompetence.
Who cares? Also, there is a very obvious 3rd option - that competent people can make a mistake.
Nothing you’ve said is relevant at all to the assertion that, quoting here:
This is the future the “HTTPS everywhere” crowd wants ;)
Even though you’re quoting me, you must be mistaken - this post is about Google, and my response was about someone who is defending Google’s actions (“Who knows if it’s planned or accidental obsolescence?”).
I haven’t a clue how you can think that a whole post about Google breaking Google devices isn’t about Google…
To the last point, “https everywhere” means things like this can keep being used as an excuse to make fully functional products in to ewaste over and over, and we’re left wondering if the companies responsible are evil or dumb (or both). People pretending to not get the connection aren’t really making a good case for Google not being shit, or for how the “https everywhere” comment is somehow a tangent.
Take what you want from my employment by said company, but I would guess absolutely no-one in private and security has any wish/intention/pressure to not renew a certificate.
I have no insider knowledge about what has happened (nor could I share it if I did! But I really don’t). But I do know that the privacy and security people take their jobs extremely seriously.
This isn’t about who you criticize, I would say the same if you picked the smallest company on earth. This is about the obvious negativity.
This is because the article isn’t “Chromecast isn’t working and the devices all need to go to the trash”.
Someone actually found out why and people replied with instructions how to fix these devices, which is rather brilliant. And all of that despite google’s announcements that it would discontinue it..
This is the future the “HTTPS everywhere” crowd wants ;)
I’m not exactly sure what you meant by that, and even the winky face doesn’t elide your intent and meaning much. I don’t think privacy and security advocates want this at all. I want usable and accessible privacy and security and investment in long term maintenance and usability of products. If that’s what you meant, it reads as a literal attack rather than sarcasm. Poe’s law and all.
Not all privacy and security advocates wanted ‘HTTPS everywhere’. Not all of the ‘HTTPS everywhere’ crowd wanted centralized control of privacy and encryption solutions. But the privacy and security discussion has been captured by corporate interests to an astonishing degree. And I think @gerikson is right to point that out.
Do you seriously think that a future law in the US forcing Let’s Encrypt (or any other CA) to revoke the certificates of any site the government finds objectionable is outside the realms of possibility?
HTTPS everywhere is handing a de facto publishing license to every site that can be revoked at will by those that control the levers of power.
I admit this is orthogonal to the issue at hand. It’s just an example I came up with when brewing some tea in the dinette.
In an https-less world the same people in power can just force ISPs to serve different content for a given domain, or force DNS providers to switch the NS to whatever they want, etc. Or worse, they can maliciously modify the content you want served, subtly.
Only being able to revoke a cert is an improvement.
Holding the threat of cutting off 99% of internet traffic over the head of media companies is a great way to enforce self-censorship. And the best part is that the victim does all the work themselves!
The original sin of HTTPS was wedding it to a centralized CA structure. But then, the drafters of the Weimar constitution also believed everything would turn out fine.
They’ve just explained to you that HTTPS changes nothing about what the government can do to enact censorship. Hostile governments can turn your internet off without any need for HTTPS. In fact, HTTPS directly attempts to mitigate what the government can do with things like CT logs, etc, and we have seen this work. And in the singular instance where HTTPS provides an attack (revoke cert) you can just trust the cert anyways.
edit: Lobsters is basically completely broken for me (anyone else just getting ‘null’ when posting?) so here is my response to the reply to this post. I’m unable to reply otherwise and I’m getting no errors to indicate why. Anyway…
Yeah, “trust the cert anyway” is going to be the fig leaf used to convince a compliant SCOTUS that revoking a certification is not a blatant violation of the 1st amendment. But at least the daily mandatory webcast from Dear Leader will be guaranteed not to be tampered with during transport!
This is getting ridiculous, frankly.
You’ve conveniently ignored everything I’ve said and focused instead of how a ridiculous attack scenario that has an obvious mitigation has 4 words that somehow you’re relating to SCOTUS and 1st amendment rights? Just glossing over that this attack makes almost no sense whatsoever, glossing over that the far easier attacks apply to HTTP at least as well (or often better) as HTTPS, glossing over the fact that even more attacks are viable against HTTP that aren’t viable against HTTPS, glossing over that we’ve seen CT logs actually demonstrate value against government attackers, etc etc etc. But uh, yeah, SCOTUS.
SCOTUS is going to somehow detect that I trusted a certificate? And… this is somehow worse under HTTPS? They can detect my device accepting a certificate but they can’t detect me accessing content over HTTP? Because somehow the government can’t attack HTTP but can attack HTTPS? This just does not make any sense and you’ve done nothing to justify your points. Users have been more than charitable in explaining this to you, even granting that an attack exists on HTTPS but helpfully explaining to you why it makes no sense.
In the near future, on the other side of an American Gleichschaltung, a law is passed requiring CAs to revoke specific certificates when ordered.
If the TLS cert for CNN.com is revoked, users will reach a scary warning page telling the user the site cannot be trusted. Depending on the status of “HTTPS Everywhere”, it might not be able to proceed past this page. But crucially, CNN.com remains up, it might be accessible via HTTP (depending on HSTS settings) and the government has done nothing to impede the publication.
But the end effect is that CNN.com is unreadable for the vast number of visitors. This will make the choice of CNN to tone down criticism of the government very easy to make.
The goal of a modern authoritarian regime is not to obsessively police speech to enforce a single worldview. It’s to make it uneconomical or inconvenient to publish content that will lead to opposition to the regime. Media will parrot government talking points or peddle harmless entertainment. There will be an opposition and it will be “protected” by free speech laws, but in practice accessing its speech online will be hard to impossible for the vast majority of people.
If the USA apparatus decides to censor CNN, revoking TLS cert wouldn’t be the way. It’ll be secret court orders (not unlike recent one British government has sent to Apple), and, should they not comply, apprehension of key staff.
And, even if such cert revocation happened, CNN would be able to get new one within seconds by contacting any other ACME CA, there are even some operating in EEA.
I think your whole argument is misguided, and not aimed at understanding failures of Google, but at lashing at only tangentially related problem space.
And my comment is not defence of Google or Cloudflare, I consider both to be malicious for plethora of reasons.
You’re still thinking like the USSR or China or any totalitarian government. The point isn’t to enforce a particular view. The point is to prevent CNN or any other media organization from publishing anything other than pablum, by threatening their ad revenue stream. They will cover government talking points, entertainment, even happily fake news. Like in Russia, “nothing is true and everything is possible”.
And, even if such cert revocation happened, CNN would be able to get new one within seconds by contacting any other ACME CA, there are even some operating in EEA.
Nothing is preventing the US from only allowing certs from US based issuers. Effectively, if you’re using a mainstream browser, the hypothetical law I have sketched out will also affect root CAs.[1]
I think your whole argument is misguided, and not aimed at understanding failures of Google, but at lashing at only tangentially related problem space.
I proposed a semi-plausible failure mode of the current CA-based certification system and suddenly I’ve gotten more flags than ever before. I find it really interesting.
[1] note that each and every one of these attempts to block access will have quite easy and trivial workarounds. That’s fine, because as stated above, having 100% control of some sort of “truth” is not the point. If nerds and really motivated people can get around a block by installing their own root store or similar, it will just keep them happy to have “cheated the system”. The point is having an atomized audience, incapable of organizing a resistance.
I proposed a semi-plausible failure mode of the current CA-based certification system and suddenly I’ve gotten more flags than ever before. I find it really interesting.
The flags are me and they’re because your posts have been overwhelmingly low quality, consisting of cherry picking, trolling, rhetoric, and failing to engage with anyone’s points. You also never proposed any such attack, other users did you the favor of explaining what attack exists.
The closest thing you’ve come to defining an attack (before others stepped in to hand you one) is this:
Holding the threat of cutting off 99% of internet traffic over the head of media companies
It’s not that interesting why you’re getting flagged. IMO flags should be required to have a reason + should be open, but that’s just me, and that’s why I virtually always add a comment when I flag a post.
This is one of the only posts where you’ve almost come close to saying what you think the actual problem is, which if I very charitably interpret and steel-man on your behalf I can take as essentially “The US will exert power over CAs in order to make it hard for news sites to publish content”. This utterly fails, to be clear (as so many people have pointed out that there are far more attacks on HTTP that would work just as well or infinitely better, and as I have pointed out that we have seen HTTPS explicitly add this threat model and try to address it WITH SUCCESS using CT Logs), but at least with enough effort I can extract a coherent point.
I have around 30 flags right now in these threads (plus some from people who took time off their busy schedule to trawl through older comments for semi-plausible ones to flag). You’re not the only one I have pissed off.[1]
(I actually appreciate you replying to my comments but to be honest I find your replies quite rambling and incoherent. I guess I can take some blame for not fully cosplaying as a Project 2025 lawyer, instead relying on vibes.)
It’s fine, though. I’ve grown disillusioned by the EFF style of encryption boosting[2]. I expect them to fold like a cheap suit if and when the gloves come off.
[1] but I’m still net positive on scores, so there are people on the other side too.
[2] they’ve been hyperfocussed on the threat of government threats to free speech, while giving corporations a free pass. They never really considered corporations taking over the government.
Hm, I see. No, I certainly have not flagged all of your posts or anything, just 2 or 3 that I felt were egregious. I think lobsters should genuinely ban more people for flag abuse, tbh, but such is the way.
It’s interesting that my posts come off as rambly. I suppose I just dislike tree-style conversations and lobsters bugs have made following up extremely annoying as my posts just disappear and show as “null”.
I’ve been getting the “null” response too. There’s nothing in the bug tracker right now, and I don’t have IRC access. Hopefully it will be looked at soon.
As to the flags, people might legitimately feel I’m getting too political.
Yeah, “trust the cert anyway” is going to be the fig leaf used to convince a compliant SCOTUS that revoking a certification is not a blatant violation of the 1st amendment. But at least the daily mandatory webcast from Dear Leader will be guaranteed not to be tampered with during transport!
The point of this hypothetical scenario would be that the threat of certificate revocation would be out in the open, to enforce self-censorship to avoid losing traffic/audience. See my comment here:
I’m not sure any of those are good examples of planned obsolescence. As far as I can tell, they’re all services that didn’t perform very well that Google didn’t want to support, tools that got subsumed into other tools, or ongoing projects that were halted.
I think it’s reasonable to still wish that some of those things were still going, or that they’d been open-sourced in some way so that people could keep them going by themselves, or even that Google themselves had managed them better. But planned obsolescence is quite specifically the idea that you should create things with a limited lifespan so that you can make money by selling their replacements. As far as I can tell, that doesn’t apply to any of those examples.
I get that it’s a tongue in cheek comment, but this is what falls out of “we want our non-https authentication certificates to chain through public roots”.
There is no reason for device authentication to be tied to PKI - it is inherently a private (as in “only relevant to the vendor” , not secret) authentication mechanism so should not be trying to chain through PKI, or PKI-like, roots.
Why is this a hyperbole? It is clear that even an enterprise the size of Google, famous for it’s leetcode-topping talent is unable to manage certificates at scale. This makes it a pretty good point against uncritical deployment of cryptographic solutions.
When Microsoft did that I wasn’t standing embarrassed in front of my family failing to cast cartoons on the TV. So it was their problem, not my problem.
Maybe. I think there are two ways to interpret it - “HTTPS Everywhere” means “literally every place” or it means “everywhere that makes sense, which is the vast majority of places”. But, to me, neither of these implies “you should deploy in a way that isn’t considered and that will completely destroy a product in the future”, it just means that you should very likely be aiming for a reliable, well supported deployment of HTTPS.
I was replying more to the “planned and enforced obsolescence” conspiracy theorizing.
It is true that managing certificates at scale is something not a lot of large organizations seem to be able to pull off, and that’s a legitimate discussion to have… but I didn’t detect any good faith arguments here, just ranting
Even if half of the things I have heard about Brave are wrong, why even bother when so many other great, free alternatives exist. The first and last time I tried it was the home page ad fiasco… uninstalled and went back to Chrome.
These days I try to use Firefox, but escape hatch to Chrome when things don’t work. I know there are better alternatives to both Firefox and Chrome, I’ll start exploring them… maybe? It’s hard for me to care about them since most of them are just Chrome/Firefox anyway. I’ll definitely give Ladybird a go when it’s ready. On paper, at least, it sounds like the escape from Google/Mozilla that is desperately needed.
Kagi bringing Orion to Linux feels promising. It’s OK on Mac, though after using it for 6 months I switched back to Safari. It looks like they’re using Webkit for that on Linux, not blink, which is a happy surprise IMO. That feels like a good development. (I’m also looking forward to Ladybird, though. Every so often I build myself a binary and kick the tires. Their progress feels simultaneously impossibly fast and excruciatingly slow.
If I understand correctly, Orion is not open source. That feels like a huge step backward and not a solution to a browser being controlled by a company with user-hostile incentives. I think Ladybird is more in line with what we really need: a browser that isn’t a product but rather a public good that may be funded in part by corporations but isn’t strongly influenced by any one commercial entity.
they have stated that open sourcing is in the works
That help page has said Kagi is “working on it” since 2023-09 or earlier. Since Kagi hasn’t finished that work after 1.5 years, I don’t believe Kagi is actually working on open sourcing Orion.
Their business model is, at the minimum, less user hostile than others due to users paying them money directly to keep them alive.
If US DoJ has their way, google won’t be able to fund chrome any more the way it was doing so far. That also means apple and firefox lose money too. So Kagi’s stuff might work out long term if breakup happens.
That’s totally valid, and I’d strongly prefer to use an open source UA as well!
In the context of browsers, though, where almost all traffic comes from either webkit-based browsers (chiefly if not only Safari on Mac/iPad/iPhone), blink-based browsers (chrome/edge/vivaldi/opera/other even smaller ones) or gecko-based browsers (Firefox/LibreWolf/Waterfox/IceCat/Seamonkey/Zen/other even smaller ones) two things stand out to me:
Only the gecko-based ones are mostly FOSS.
One of the 3 engines is practically Apple-exclusive.
I thought that Orion moving Webkit into a Linux browser was a promising development just from an ecosystem diversity perspective. And I thought having a browser that’s not ad-funded on Linux (because even those FOSS ones are, indirectly ad-funded) was also a promising development.
I’d also be happier with a production ready Ladybird. But that doesn’t diminish the notion that, in my eye, a new option that’s not beholden to advertisers feels like a really good step.
Of the blink-based pure FOSS browsers, I use Ungoogled Chromium, which tracks the Chromium project and removes all binary blobs and Google services. There is also Debian Chromium; Iridium; Falkon from KDE; and Qute (keyboard driven UI with vim-style key bindings). Probably many others.
The best Webkit based browser I’m aware of on Linux is Epiphany, aka Gnome Web. It has built-in ad blocking and “experimental” support for chrome/firefox extensions. A hypothetical Orion port to Linux would presumably have non-experimental extension support. (I found some browsers based on the deprecated QtWebKit, but these should not be used due to unfixed security flaws.)
I wasn’t sure Ungoogled Chromium was fully FOSS, and I completely forgot about Debian Chromium. I tried to use Qute for a while and it was broken enough for me at the time that I assumed it was not actively developed.
When did Epiphany switch from Gecko to Webkit? Last time I was aware of what it used, it was like “Camino for Linux” and was good, but I still had it on the Gecko pile.
According to Wikipedia, Epiphany switched from Gecko to Webkit in 2008, because the Gecko API was too difficult to interface to / caused too much maintenance burden. Using Gecko as a library and wrapping your own UI around it is apparently quite different from soft forking the entire Firefox project and applying patches.
Webkit.org endorses Epiphany as the Linux browser that uses Webkit.
There used to be a QtWebKit wrapper in the Qt project, but it was abandoned in favour of QtWebEngine based on Blink. The QtWebEngine announcement in 2013 gives the rationale: https://www.qt.io/blog/2013/09/12/introducing-the-qt-webengine. At the time, the Qt project was doing all the work of making WebKit into a cross-platform API, and it was too much work. Google had recently forked Webkit to create Blink as a cross-platform library. Switching to Blink gave the Qt project better features and compatibility at a lower development cost.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around. It seems that Blink is the best implementation of such a library. WebKit is focused on macOS and iOS, and Firefox develops Gecko as an internal API for Firefox.
EDIT: I see that https://webkitgtk.org/ exists for the Gnome platform, and is reported to be easy to use.
I see Servo as the future, since it is written in Rust, not C++, and since it is developed as a cross platform API, to which you must bring your own UI. There is also Ladybird, and it’s also cross-platform, but it’s written in C++, which is less popular for new projects, and its web engine is not developed as a separate project. Servo isn’t ready yet, but they project it will be ready this year: https://servo.org/blog/2025/02/19/this-month-in-servo/.
I used to contribute to Camino on OS X, and I knew that most appetite for embedding gecko in anything that’s not firefox died a while back, about the time Mozilla deprecated the embedding library, but I’d lost track of Epiphany. As an aside: I’m still sorry that Mozilla deprecated the embedding interface for gecko, and I wish I could find a way to make it practical to maintain that. Embedded Gecko was really nice to work with in its time.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around.
I strongly agree with this. I’d really like a non-blink thing to be an option for this. Not because there’s anything wrong with blink, but because that feels like a rug pull waiting to happen. I like that servo update, and hope that the momentum holds.
Wikipedia suggests the WebKit backend was added to Epiphany in 2007 and they removed the Gecko backend in 2009. Wow, time flies! GNOME Web is one I would like to try out more, if only because I enjoy GNOME and it seems to be a decent option for mobile Linux.
I have not encountered any website that doesn’t work on firefox (one corporate app said it required Chrome for some undisclosed reason, but I changed the useragent and had no issue at all to use their sinple CRUD).
What kind of issues do you find?
I’ve wondered the same thing in these recent discussions. I’ve used Firefox exclusively at home for over 15 years, and I’ve used it at my different jobs as much as possible. While my last two employers had maybe one thing that only worked in IE or Chrome/Edge, everything else worked fine (and often better than my coworkers’ Chrome) in Firefox. At home, the last time I remember installing Chrome was to try some demo of Web MIDI before Firefox had support. That was probably five years ago, and I uninstalled Chrome after playing with the demo for a few minutes.
I had to install Chromium a couple of times in the last years to join meetings and podcast recording that were done with software using Chrome-only API.
When it happens, I bless flattpak as I install Chromium then permanently delete it afterward without any trace on my system.
If you are an heavy user of such web apps, I guess that it makes sense to use Chrome as your main browser.
I can’t get launcher.keychron.com to work on LibreWolf but that’s pretty much it. I also have chrome just in case I’m too lazy to figure out what specifically is breaking a site
Thanks, yeah, that’s it. I knew it was some specific thing that wasn’t supported I just couldn’t remember and was writing that previous comment on my phone so I was too lazy to check. But yeah, it’s literally the only site I could think of that doesn’t work on Firefox (for me).
It’s pretty rare to be fair, so much so that I don’t have an example of the top off my head. I know, classic internet comment un-cited source bullshit, sorry. It was probably awful gov or company intranet pages over the years.
Some intensive browser based games run noticeably better on Chrome too, but I know this isn’t exactly a common use case for browsers that others care about.
For some reason, trying to log in to the CRA (Canadian equivalent of the IRS) always fails for me with firefox and I need to use chrome to pay my taxes.
I run into small stuff fairly regularly. Visual glitches are common. Every once in a while, I’ll run into a site that won’t let me login. (Redirects fail, can’t solve a CAPTCHA, etc.)
Some google workspace features at least used to be annoying enough that I just devote a chrome profile to running those workspace apps. I haven’t retried them in Firefox recently because I kind of feel that it’s google’s just deserts that they get a profile on me that has nothing but their own properties, while I use other browsers for the real web.
I should start keeping a list of specific sites. Because I do care about this, but usually when it comes up I’m trying to get something done quickly and a work-around like “use chrome for that site” carries the day, then I forget to return to it and dig into why it was broken.
. Same goes for the Google and Microsoft options. These services creepily read my documents in the name of convenience.
I can’t tell if “these services” is supposed to include Dropbox but I know when I worked at Dropbox (I left in ~2019) I don’t believe we ever examined user files. Any project that would want to access any user data (ie: email addresses, access logs, etc) was heavily scrutinized and almost always rejected and that wasn’t even direct file access, I don’t think that would have been entertained. That’s my recollection at least, and perhaps things have changed, but given the ambiguity in the wording I figured I’d just point that out.
Historically the Dropbox client app used to do all manner of suspicious things to further undermine my trust; enough suspicious things that I will never trust them. TBH I am not so excited I can be bothered going in to see if it is still happening or not; there are other alternatives. Suffice it to say, I am not keen to opt in to their nonsense. How can I get something like Dropbox without all the security holes and creepy behaviour?
I really would have liked to have heard what they’re thinking here. We took privacy of documents extremely seriously. I don’t know the details of whatever telemetry we did or did not collection but I know that there was intense scrutiny whenever such things came up and the few times (just once that I remember, actually) I recall a bizops team or whatever wanting telemetry they were flatly told it was never going to happen so I strongly believe that Dropbox only ever collected what was necessary to operate. Again, things may have changed, but the culture around protecting users and respecting users was pervasive and strongly enforced.
Peer-to-peer synchronising (i.e. no special server at all) is one robust option and I do a lot of this. I then have no 3rd party helping me, which is a plus and a minus. Plus: I can feel safer. Minus: I do not synchronise if my computers are not simultaneously online.
I don’t know why you would trust a P2P solution more. Having a bunch of strangers integrate into your file storage solution does not seem to solve the problem of trust, it seems to make it far more complex.
Basically, the premise of this seems to be that Dropbox isn’t trustworthy but it doesn’t seem well justified but, again just based on my few years at the company, doesn’t mesh at all with the culture. I was also on a security team, I was pretty privy to the sort of access that engineers etc had to users files and how at east some calls were made on topics like accessing user data. I’m sure there’s plenty I was not privy to, and certainly in terms of legal requirements to access data that’s valid, but even then we were always extremely deliberate and user-focused.
My suggestion would indeed be that if you do not trust Dropbox that you consider locally encrypting files before synchronizing them. I think when this had come up at DBX the issues we ran into were never “but what if we want to access that data”, instead it’s:
If you already don’t trust DBX, why would you trust us to encrypt your data locally? If the attack you’re concerned with is insider, that is. Obviously if it’s “what about an active attacker” that’s separate, but also we did encrypt data across multiple stages of the product (ie: in transit, storage, hardware, etc).
It’s a meaningfully difficult technical problem to store client-side-encrypted data, sync it, etc.
It creates confusion for users - certain features may or may not be possible to support, recovery becomes extremely difficult, etc.
This is based on recollection, I am sure there are tons of other reasons why this wasn’t implemented when it was requested. I don’t want to overly represent my part in these discussions, this is just recollection from what I had heard. But I’m confident that the reason was never “we want to read their files” while I worked there. Again, maybe things have changed, it’s been years since I worked there.
I’m a little bit uncomfortable speaking about past work I’ve done so hopefully I’ll be forgiven if I’m reluctant to elaborate much more. Again, it’s been years since I worked there, I worked on an internal security team, my scope and view was limited, and there’s only so much I can feel comfortable speaking to about a past employer, but that’s my impression.
Perhaps! And I respect that people will be put off by that. That’s a really good post, I’m personally against companies doing that sort of thing myself. I don’t really agree with the characterization here, for example:
But most of all, I learned that I don’t trust Dropbox at all. Unnecessary privileges and backdooring are what I call untrustworthy behaviour and a clear breach of user trust.
But I respect that someone would take that feeling away and make decisions based on that. No one has to use Dropbox, Dropbox is not owed any free passes, nothing like that, I only mean to say that when it came to accessing user data it was, at the time I was there, taken extremely seriously and file access was basically non-negotiably barred for something like a product feature.
Having cofounded a company that stored peoples’ health information, I’m familiar with the dissonance between an internal culture that is entirely well-meaning and competent, trying to navigate very real tradeoffs between usability, feasibility, and cost, and the view of a particularly (but not even unreasonably) paranoid customer.
I also remember a lot of reputational hits to Dropbox in the early days that could leave a lasting impression.
The initial Dropbox messaging oversimplified things with regard to privacy, and there was some backlash. This resulted in a blog post in 2011 where they clarified that actually, some employees can access your files, and yes, your files will be turned over to law enforcement if required.
To a very technical user this was obvious from the existence of the password recovery and preview features, so they found the original oversimplification duplicitous. Some of the less-technical users who took the initial messaging at face value felt betrayed by the clarification.
I’m a satisfied Dropbox user myself, but I am at all times aware of the limitations on the privacy of what I put in there, which are just inherent in the definition of the service, no matter how great the team is.
I think that there are, of course, bugs, flaws, mistakes, etc. I’m just saying that Dropbox is not (or at least, was not) one of those companies that monetizes your file data. FWIW, just as a fun note, every single employee who joins Dropbox learns about that password bug in training.
Then there was the NSA slide in the Snowden dump that described Dropbox as “coming soon”.
I have no idea what Snowden would have revealed that would have surprised me or anyone else. I don’t want to comment on any access that would or would not exist as I think it would be inappropriate so I’ll just leave at this - I am skeptical of the claim of a “coming soon” that never came.
Anyways, like I’ve said, people are free to feel however they like about Dropbox. I just wanted to share my experience in a limited way, I’m already closer to talking about details than I’d like to be so I’ll have to leave it there.
Thanks for the insight! (As I say, I don’t find Dropbox particularly “creepy” myself, just pulling up some ancient history to try explaining why some might say that.)
You talk a lot about trusting, but why do you need trust at all? That’s the obvious benefit of P2P. Your files are only ever on your own devices, and the transfer between them is secured with your own keys. You don’t need to trust anyone else because no one else is involved.
All I’m saying with regards to trust is that if you don’t trust a service I recommend not using it, regardless of design. I think that Signal is really well designed, if I didn’t trust the authors I wouldn’t use it. If I thought the client might be malicious, I wouldn’t use it. Distrusting the people that build, package, distribute, and design your software is a deal breaker to me without additional mitigations like encrypting the data myself - if I didn’t trust the authors I wouldn’t trust them to handle the encryption, that would have to be out of band.
That’s the obvious benefit of P2P. Your files are only ever on your own devices
That is not my understanding of P2P? Am I missing something? Either way, I have no issue with people using P2P either.
I’ve never heard of a P2P file sync software that gives you access to other users’ disk space. All of them that I know of including Syncthing work as I described. Have you seen it being done differently?
To me, P2P means that instead of downloading a file from one source, such as Dropbox, I download pieces of it from distributed peers. Similarly, when you upload a file, you upload pieces to peers. Is that not the case?
I’m talking about the barebones definition of P2P, which is simply a networking model in which peers connect (directly) to other peers. There’s nothing specific to it other than that. Anything more is just building on top of that general idea. A P2P protocol could be designed to require each peer to have a cryptographic key for identification and authentication. Now peers can communicate securely, and they can authorize peers to do different things based on their identity. Then you add a feature to the protocol by which a device allows peers to download files from it if they are authorized. You can tell which devices in the network belong to you, you grant only those devices the ability to transfer your files around, and no devices other than your own can see the files because they communicate confidentially. That’s the high-level concept of Syncthing.
What enterprise firewall vendors mean when they say “P2P” is basically just BitTorrent.
I’m talking about the barebones definition of P2P, which is simply a networking model in which peers connect (directly) to other peers. There’s nothing specific to it other than that.
I feel like that matches exactly what I said. I never mentioned anything higher level like a cryptographic key or lack thereof. I said you distribute files to peers, and it sounds like that’s exactly what “peer to peer” means from your perspective as well.
A P2P protocol could be designed to require each peer to have a cryptographic key for identification and authentication. Now peers can communicate securely, and they can authorize peers to do different things based on their identity. Then you add a feature to the protocol by which a device allows peers to download files from it if they are authorized.
Okay but if you don’t trust the person developing the P2P product, why would you trust them to implement this protocol? My suggestion with Dropbox was that if you don’t trust Dropbox as a company you shouldn’t trust their agent to encrypt data for you, you should do it yourself. I think this applies exactly the same way to P2P. If you want to remove trust from the vendor you should be encrypting using a tool you do trust and only distributing that.
You can tell which devices in the network belong to you, you grant only those devices the ability to transfer your files around, and no devices other than your own can see the files because they communicate confidentially.
Okay, that’s fine too. If you want a system where you own all of the devices involved that seems totally reasonable.
I feel like that matches exactly what I said. I never mentioned anything higher level like a cryptographic key or lack thereof.
You mentioned chunking and sending files to untrusted peers. That’s too specific and explicitly not what most P2P file sync tools do. Most of what I’ve said hinges on this sentence of yours:
Having a bunch of strangers integrate into your file storage solution does not seem to solve the problem of trust, it seems to make it far more complex.
The strangers never get to see your files, so this doesn’t matter.
Okay but if you don’t trust the person developing the P2P product, why would you trust them to implement this protocol?
The behavior of software I run on my own computer can be analyzed and verified to some extent, but I can’t verify what the software on your computer does. This can be improved further by using FOSS. If the files never end up on your computer, I don’t need to trust you to not do anything shady with them. If the files are only ever on my own devices, I don’t need to encrypt them either.
Sure, I was assuming in the P2P scenario that the peers were untrusted because that’s the scenario presented with Dropbox - that it is untrusted. If you remove that and say “I trust my peers” sure, that’s perfectly fine.
The strangers never get to see your files, so this doesn’t matter.
Why would that be the case? Do you mean because you’re encrypting them?
Perhaps I should clarify. I’m saying that in the scenario where you don’t trust the person producing your file syncing software:
You would logically not trust them to implement the cryptography used in that software
You would therefor want to encrypt the files yourself
I feel that this applies just as much to P2P as it does to Dropbox? And to me it seems like P2P just adds more parties.
If the files never end up on your computer, I don’t need to trust you to not do anything shady with them. If the files are only ever on my own devices, I don’t need to encrypt them either.
Sure, I was assuming in the P2P scenario that the peers were untrusted because that’s the scenario presented with Dropbox - that it is untrusted. If you remove that and say “I trust my peers” sure, that’s perfectly fine.
Right, that was my point. Remove the need to trust by removing third parties from the equation. In that sense, P2P file sync has the same benefits as self-hosting a client-server file sync service, except you don’t need to operate a server.
The rest boils down to “Can you trust proprietary software running on your computer?”, where I would argue “just use FOSS”. But even so, which malicious behavior would be more likely to be spotted by a user?
Sync client that’s supposed to send encrypted files to a server you don’t control actually uses weak or no crypto for the files themselves
P2P sync software that’s supposed to only talk to your other devices suddenly sends a ton of traffic to a machine controlled by the creator of said software
This feels quite biased. For example, presumably Brave has responded to these things. Even if the response is invalid, my preference would be to show it and then demonstrate it as being invalid. When I see a big list of “they did this” with no representation of the other side it makes me very skeptical and, perhaps unfairly, dismissive.
I have no stake in this but I will say that Brave seems to do two things that I see potential in.
It is one of the few cases where blockchain seems reasonable. I know people like to hate on blockchain because of cryptocurrency, and they hate on cryptocurrency because it is largely a tool for criminals, but just from a technical perspective there are potentially valid use cases where you have a distributed system with some number of participants who you do not want to trust while still ensuring that all participants are doing some kind of work (visiting a page).
It is the only browser project I have seen that actually seems like it could disrupt the singular monetization strategy of the web - advertising. Mozilla can’t exist without ad revenue, they already can’t meaningfully compete against it even if you ignore that all of their funding is ad-driven. We should probably be asking ourselves how we expect the web to continue if we reject advertising as a source. To my knowledge, Brave is the only company that’s taken an approach that even appears viable.
I don’t really have a strong stake in this but I find this list really unhelpful. It doesn’t do any of the hard work for me. I could search online for “brave controversy” and get a list of search results like this. What would be valuable is actually providing the hard information to find, to provide analysis, etc. A listicle of “I accuse them of this” is not valuable at all unless you’re either completely uninterested in basing your opinions on information or you’re already sold on the idea that Brave is bad and you want to add more urls to your evidentiary arsenal.
Am I to click each of these links, find the responses, analyze them myself, etc? I could do that, but it’s kind of a heavy lift.
I know people like to hate on blockchain because of cryptocurrency, and they hate on cryptocurrency because it is largely a tool for criminals
I haven’t seen this be the reason people criticize crypto for at least 10-15 years. People have concerns about the ecological concerns, and the rampancy of pump-and-dump and other scams. But people haven’t cared about crypto being used to buy drugs in years. At least, not to a prominent degree
I wasn’t thinking of drugs specifically, I was thinking about scams, identity theft, blackmail, ransomware, etc. But it isn’t really important to my point, I don’t think.
Bitcoin is ESG and Brave use a blockchain that is not proof of work. FIAT as USD is the most used to buy drugs and fund wars. Also Chainalisys loves open ledgers because transactions can be traced.
The way Brave disrupts advertising on the Web is by intercepting it and replacing the ads with their own ads. It is the sleaziest of all possible disruptions.
Assuming content creators (god I hate that word) actually try to make a living using only advertisement and actually give a shit while choosing partners it’s taking away their income while still showing the user advertisement.
It’s kind of like syphoning off the money the actual creator could make.
It feels to me like selling an open source product you don’t develop (without support) or selling fandubs.
I’m not really familiar with this stuff, so I don’t know much about it. My understanding was that Brave intended to pay creators in that circumstance, perhaps not though?
Afaict from all the controversies they don’t give a shit, are intransperent and come off as evil. See everything in the linked reddit list. And I don’t know either how brave is supposed to work, but after everything I read about it
https://lobste.rs/s/iopw1d/what_s_up_with_lobste_rs_blocking_brave
I don’t want to touch it with a 10 foot pole
Yeah, that’s fair. I am not really trying to defend Brave. I just wanted to point out that a big list of accusations isn’t very helpful to me personally and I think that the premise of finding new ways to monetize the web is valuable. I have no other thoughts on the matter tbh.
After engaging in that discussion, it reaffirmed to me that two reasonable people can see the exact same events/discussions and come to opposite conclusions.
two reasonable people can see the exact same events/discussions and come to opposite conclusions
It’s mostly preconceived notions/beliefs. For some reason a lot of people are predisposed to give the benefit of the doubt to Brave, and so we gets lots of rebuttals which try to see the things Brave has done in the best possible light/downplay the criticism/find a charitable interpretation/etc.
By contrast a lot of people, for whatever reason, are predisposed to never give benefit of the doubt to Mozilla, so we get lots of angry threads where people will do whatever they can to view decisions by Mozilla in a negative light.
Personally I’m in the exact opposite position – I am inclined to give Mozilla the benefit of the doubt and look for charitable interpretations of things they do (especially the recent ToS kerfuffle, which just seems like silly overreaction by the internet), and inclined to view Brave and especially Brendan Eich negatively and never give the benefit of the doubt.
We can dig further about why one wants to give the benefit of the doubt and another does not, and lay out all the arguments, and still reach opposite conclusions.
I agree if you mean to say that it’s ideological, I think that is true by definition.
It is one of the few cases where blockchain seems reasonable.
To me, blockchain is merely a public ledger that cannot (easily) be amended once written to. In that case, Git’s commit system is technically blockchain. Each commit hash depends on the previous hash and the commit’s diff, meaning it cannot be edited without throwing everyone’s clone out of sync. Pretty useful!
Based on activities listed in the OP, Brave is funded by advertising (injecting their own ads into your browser, replacing the ads of the site you are visiting), soliciting donations for open source projects they have no relation with and pocketing the funds, subscription fees from the VPN that they install without permission, scraping web sites without permission and reselling copyrighted data for AI training, and injecting URLs with affiliate codes into your web browser.
Based on Brave’s own description of their company, from their FAQ:
We generate revenue in several ways, including:
The sale of New Tab Takeovers, Brave Search Ads, and other Brave Ads (the first-party ad units that users opt into via our privacy-preserving ad platform). Note that opted-in users receive 70% of this ad revenue back in the form of BAT.
Subscriptions to our premium products: Brave Firewall + VPN, Brave Talk Premium, Brave Leo Premium, and Brave Search Premium.
A 1% fee on fiat-to-crypto transactions (through onramp partners) in Brave Wallet, and a nominal fee on creator tips and auto-contributions made via Brave Rewards.
Subscriptions to our Search API.
Partnership deals (for example with platforms integrated into the Brave browser).
Elsewhere in the FAQ, they talk about collecting user data and selling that to advertisers, for targeted advertising.
Early on I think the intent was to fund entirely via BAT. At this point, from what I understand at least, they have a number of things they’re doing, including at least some funding being advertisement based.
The main reason I worry about zed being too late is because Cursor is gaining so much traction and I feel that Zed is going to have a hard time competing, which means investing my time into using Zed (as I do) is a bit of a waste of time. Cursor has a huge advantage being based on VSCode (I see that as a disadvantage for me personally, but it doesn’t matter - practically it’s an advantage) so while Zed is trying to catch up on basic features and advanced features Cursor already has tons of stuff for free and appears ahead in the advanced stuff too (presumably because they can focus entirely on the AI stuff and reap the benefits of the VSCode extension world).
I’m unsure how it’ll play out but I think it’s probably fair to say that Zed is very behind in many areas compared to Cursor, that they are going after a similar audience, and that it’s unclear how Zed is going to compete against Cursor given this.
It just takes one feature. I switched from Atom to VSCode 8 or 9 years ago because they had better support for the Go debugger. At this point VSCode didn’t even have tabs and the maintainers were opposed to the concept. VSCode just took users one by one adding features that filled a niche until they beat Atom.
Zed is in the same situation today. It feels extremely fast compared to VSCode and that’s the reason I keep giving it a try. It can’t replace VSCode yet for my workflow but it’s getting closer. I hope they’ll move in the right direction.
Hm, I used VS Code when it launched and I distinctly remember it having tabs. The earliest screenshots I can find (one from August 2016 on this page) show tabs too. Maybe you mean tabs in the integrated terminal?
Nope, for a while the editor didn’t have tabs, just “Open editors” in the sidebar. Tabs were added around June 2016. Here’s the issue discussing adding tabs: https://github.com/microsoft/vscode/issues/224
Yeah, I think Zed has the speed advantage and Cursor won’t be able to close that gap for a long time. We’ll see if it pays off, I really hope so. I kinda hate vscode lol
an example of out-side-of-knowledge thinking –or paradigm shift– which is essentially making the progress of science.
I’m not convinced that concept of sort of spontaneous knowledge, or what sounds as almost a non-causal knowledge, is even possible but it’s interesting - maybe some sort of random selection process + experimentation would play a part here, but that seems perfectly fine to encode into a model. In reality for whatever reason I suspect that all of these people simply processed information and experiences based on biases but ultimately in a way that’s straightforwardly causal, just as any llm would be, and just as I imagine all consciousness is.
These are exactly the kinds of exams where I excelled in my field. These benchmarks test if AI models can find the right answers to a set of questions we already know the answer to.
Well they aren’t known to the AI. The point is to see if the AI can use the building blocks it has to assemble novel solutions (from the AI’s perspective) to new problems. This is distinctly different from school, as described in the article, where you learn the questions and answers at the same time by reading a book on the topic. School rarely gives you X and Y and then tells you to derive Z based on the shared properties of X and Y + some leap to new properties of Z.
If we want scientific breakthroughs, we should probably explore how we’re currently measuring the performance of AI models and move to a measure of knowledge and reasoning able to test if scientific AI models can for instance:
Is this not exactly what those math ones do? The point is that the AI should be able to derive answers to math problems that it hasn’t seen before based on properties of what it has seen before. It seems that these recommendations are largely where things are headed with regards to chain of thought and the benchmarks being used.
PS: You might be wondering what such a benchmark could look like. Evaluating it could involve testing a model on some recent discovery it should not know yet (a modern equivalent of special relativity) and explore how the model might start asking the right questions on a topic it has no exposure to the answers or conceptual framework of.
If we’re just saying “test it on things that are not a part of its training set” we do that already. If it’s “test it on things it can have no conceptual framework of” I don’t believe that humans are capable of solving those problems and I personally believe that it is literally impossible to do so. There has to be some sort of pre-knowledge from which other knowledge can be derived and likely some sort of external force that provides some kind of epistemic access, I suppose the idea here is to minimize those pre-requisites.
I guess my feelings here are that we are already doing what is being advocated for here, mostly?
I think the counterargument is basically “Poincare basically did or would have invented relativity” and such …
Or someone else would have, in a matter of time
I think that’s basically impossible to against. Nobody would say that in 2025, we wouldn’t know about relativity if Einstein died early
Also, I replace “LLM” with “search engines” again. I would say search engines probably reduced some creative thinking in the general case, but for the most creative thinkers, it probably didn’t (?). Or at least I’ve never heard anyone argue that – I’d have to think about it [1]
It’s funny that the “AI” framing confuses both the advocates and the detractors … I more agree with the framing of “pretty reliable word calculators”, and the surprising thing is that “word calculation” can produce some knowledge/insight
(On the other hand, I guess it might be surprising if the best “word calculator” could produce no insight at all!)
[1] Maybe I need to re-read this 2008 article: Is Google Making Us Stupid? What the Internet is doing to our brains
Maybe we are more stupid :) I guess by now we would have seen all the non-search-engine using, non-stupid people achieving more, not sure … My guess is that using ONLY google is a loss, but using google to find primary sources is a HUGE win.
Ooh now here’s a killer app idea: a simple, usable terminal command that you can use to sandbox a particular directory, a la chroot but actually usable for security. So you can run sandbox in a terminal and everything outside the local dir is unreachable forever to any process started within it, or you run sandbox cargo build like you’d run sudo except it has the opposite effect. Always starts from the existing local state, so you don’t have to do any setup a la Docker.
Not an ideal solution, given that many cargo commands want to touch the network or the rest of the filesystem for things like downloading cached packages, but it’s a thought. Maybe you can have a TOFU type setup where you run it and it goes through and asks “can this talk over the network to crates.io?” and “can this read ~/.cargo/registry/cache? Can this write ~/.cargo/registry/cache?”. Then, idk, it remembers the results for that directory and command?
I know all the tools are there to make something like this, no idea if it’s feasible in terms of UI though, or even whether it’d actually be useful for security. But it seems like something we should have.
Always starts from the existing local state, so you don’t have to do any setup a la Docker.
If you give this one requirement up you can do all of this today pretty easily. It’s a lot harder to do this otherwise as unprivileged sandboxing is already annoying + x-plat sandboxing is extremely painful.
I actually started work on an idea for that years ago but a mixture of “I wasn’t in a good mindstate and then 2019 made it worse” and fears of becoming more vulnerable, not less, if I was the face of a tool others were relying on for security (by going from being at risk of passive/undirected attacks to being at risk of active/directed attacks) caused it to go on de facto hiatus before I finished it.
You can see what got written here: https://github.com/ssokolow/nodo/ (defaults.toml illustrates the direction I was thinking in terms of making something like nodo cargo build Just Work™ with a useful amount of sandboxing.)
This is possible through apparmor and selinux. It’s not trivial, but doable. Unfortunately macos is on its own here with sandbox-exec being basically unsupported and wired in behaviour.
I think it would be a good idea even for things like default-allow, but preventing writes to home/SSH configuration. But ui? Nah, this is going to be a per-project mess.
I think that would be tricky because the sandbox program would need to know what files and other resources are required by the program it is supposed to execute in order to run them in a subdirectory—there’s not a great programmatic way to do this, and even if there was, it wouldn’t improve security (the command could just say “I need the contents of the user’s private keys” for instance). The alternative is to somehow tell the sandbox program what resources are required by the script which can be really difficult to do in the general case and probably isn’t a lot better than Docker or similar.
On a developer workstation, probably most critical are your home directory (could contain SSH keys, secrets to various applications, etc.), /etc, /var, and /run/user/<UID>. You could use something like bubblewrap to only make the project’s directory visible in $HOME, use a tmpfs for $HOME/.cargo, and use stubs or tmpfses for the other directories.
I did this once and it works pretty well and across projects. However, the question is if you don’t trust the build, why would you trust the application itself? So, at that point you want to run it at all or in an isolated VM anyway. So it probably makes more sense to build the project in a low-privileged environment like that as well.
IMO sandboxing is primarily interesting for applications that you trust in principle, but process untrusted data (chat clients, web browsers, etc.). So you sandbox them for when there is a zero-day vulnerability. E.g. running something like Signal, Discord, or a Mastodon client without sandboxing is pretty crazy (e.g. something like iMessage needs application sandboxing + blastdoor all the time to ensure that zero-days cannot elevate to wider access).
Every time this topic comes up I post a similar comment about how hallucinations in code really don’t matter because they reveal themselves the second you try to run that code.
That’s fascinating. I’d really enjoy hearing some more about that. Was this a team project? Were there tests? I feel like this would be really valuable as a sort of post mortem.
Lots of different teams and project. I am talking 30% of a 1k engineer department being feature frozen for months to try to dig out of the mess.
And yes there were tests. Tests do not even start to cut it. We are talking death through thousands deep cut.
This is btw not a single anecdote. My network of “we are here to fix shit” people are flooded with these cases. I expect the tech industry output to plummet starting soon.
Again, really interesting and I’ve love more details. I am at a company that has adopted code editors with AI and we have not seen anything like that at all.
That just sounds so extreme to me. Feature frozen for months is something I’ve personally never even heard of, I’ve never experienced anything like that. It feels kind of mind boggling that AI would have done that.
Nope. They had tested it. But to test, you have to be able to understand the failure cases. Which you have heuristics for based on how humans write code
These things are trained exactly to avoid this detection. This is how they get good grade. Humans supervising them is not a viable strategy.
I’d like to understand this better. Can you give an example of something a human reviewer would miss because it’s the kind of error a human code author wouldn’t make but an LLM would?
I’m with @Diana here. You test code, but testing does not guarantee the absence of bugs. Testing guarantees the absence of a specific bug that is tested for. LLM-generated code has a habit of failing in surprising ways that humans fail to account for.
I used AI primarily for generating test cases, specifically prompting for property tests to check the various properties we expect the cryptography to uphold. A test case found a bug.
“Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?”
Yes, Joel’s about to say, but Jarvis is back in spew mode. “No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it’s supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom.”
Joel’s heard this before. The punchline’s got something to do with a broken clock, if he remembers it right.
“These things teach themselves from experience, right?,” Jarvis continues. “So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns.”
“Yeah. That’s right.” Joel shakes his head. “And vandals had smashed the clock, or something.”
Hallucinated methods are such a tiny roadblock that when people complain about them I assume they’ve spent minimal time learning how to effectively use these systems—they dropped them at the first hurdle.
You imply that because one kind of hallucination is obvious, all hallucinations are so obvious that (per your next 3 paragraphs) the programmer must have been 1. trying to dismiss the tool, 2. inexperienced, or 3. irresponsible.
You describe this as a failing of the programmer that has a clear correction (and elaborate a few more paragraphs):
You have to run it yourself! Proving to yourself that the code works is your job.
It is, and I do. Even without LLMs, almost every bug I’ve ever committed to prod has made it past “run it yourself” and the test suite. The state space of programs is usually much larger than we intuit and LLM hallucinations, like my own bugs, don’t always throw exceptions on the first run or look wrong when read.
I think you missed the point of this post. It tells the story of figuring out where one hallucination comes from and claims LLMs are especially prone to producing hallucinations about niche topics. It’s about trying to understand in depth how the tool works and the failure mode that it produces hallucinations that looks plausible to inexperienced programmers; you’re responding with a moral dictum that the user is at fault for not looking at it harder. I’m strongly reminds me of @hwayne’s rebuttal of “discipline” advice (discussion).
Just because code looks good and runs without errors doesn’t mean it’s actually doing the right thing. No amount of meticulous code review—or even comprehensive automated tests—will demonstrably prove that code actually does the right thing. You have to run it yourself!
What does “running” the code prove?
Proving to yourself that the code works is your job. This is one of the many reasons I don’t think LLMs are going to put software professionals out of work.
So LLMs leave the QA to me, while automating the parts that have a degree of freedom and creativity to them.
Can you at least understand why some people are not that excited about LLM code assistants?
In a typed system, it proves that your code conforms to the properties of its input and output types, which is nice. In a tested system it proves whatever properties you believe your tests uphold.
So LLMs leave the QA to me, while automating the parts that have a degree of freedom and creativity to them.
QA was always on you. If you don’t enjoy using one, don’t? If you feel that it takes your freedom and creativity away, don’t use it. I don’t use LLMs for a ton of my work, especially the creative stuff.
In a typed system, it proves that your code conforms to the properties of its input and output types, which is nice. In a tested system it proves whatever properties you believe your tests uphold.
Which is at odds with the claim in the same sentence, that ‘comprehensive automated tests’ will not prove that code does the right thing. And yes, you can argue that the comprehensive tests might be correct, but do not evaluate the properties you expect the results to have, if you want to split hairs.
Evaluating code for correctness is the hard problem in programming. I don’t think anyone expected LLMs to make that better, but there’s a case to be made that LLMs will make it harder. Code-sharing platforms like Stack Overflow or Github at least provide some context about the fitness of the code, and facilitate feedback.
The article is supposed to disprove that, but all it does is make some vague claims about “running” the code (while simultaneously questioning the motives of people who distrust LLM-generated code). I don’t think it’s a great argument.
What did you think my article was trying to disprove?
It’s an article that’s mainly about all the ways LLMs can mislead you that aren’t as obvious as hallucinating a method that doesn’t exist. Even the title contains an implicit criticism of LLMs: “Hallucinations in code are the least dangerous form of LLM mistakes”.
If anything, this is a piece about why people should “distrust LLM-generated code” more!
Can you at least understand why some people are not that excited about LLM code assistants?
Because they don’t enjoy QA.
I don’t enjoy manual QA myself, but I’ve had to teach each myself to get good at it - not because of LLMs, but because that’s what it takes to productively ship good software.
I actually disagree a little bit here. QA’ing every bit of functionality you use is never going to scale. At some level you have to trust the ability of your fellow human beings to fish out bugs and verify correctness. And yes, it’s easy for that trust to be abused, by supply chain attacks and even more complicated “Jia Tan”-like operations.
But just like LLMs can be said to do copyright laundering, they also launder trust, because it’s impossible for them to distinguish example code from working code, let alone vulnerable code from safe code.
What I meant was something slightly different. Almost every piece of software that’s not a bootloader runs on a distributed stack of trust. I might trust a particular open source library, I might trust the stdlib, or the operating system itself. Most likely written by strangers on the internet. It’s curl | sudo bash all the way down.
The action of importing code from github, or even copy-pasting it from stack overflow, is qualitatively different from that of trusting the output of an LLM, because an LLM gives you no indication as to whether the code has been verified.
I’d go so far as to say the fact that an LLM emitted the code gives you the sure indication it has not been verified and must be tested—the same as if I wrote quicksort on a whiteboard from memory.
I think this post is more than just another “LLMs bad” post, though I did enjoy your response post as a standalone piece. The author ‘s co-worker figured out it didn’t work pretty quickly. It’s more interesting to me that the author found the source of the hallucination, and that it was a hypothetical that the author themselves had posed.
That’s why I didn’t link to the “Making o1, o3, and Sonnet 3.7 Hallucinate for Everyone” post from mine - I wasn’t attempting a rebuttal of that, I was arguing against a common theme I see in discussions any time the theme of hallucinations in code is raised.
I turned it into a full post when I found myself about to make the exact same point once again.
And that’s fair enough - in context I read your comment as a direct reply. I appreciate all the work you’ve been doing on sharing your experience, Simon!
No,. there’s a lot of policy discretion. The US government has access to any data stored in the US belonging to non-US persons without basic due process like search warrants. The data they choose to access is a policy question. The people being installed in US security agencies have strong connections to global far right movements.
In 2004 servers operated by Rackspace in the UK on behalf of Indymedia were handed over to the American authorities with no consideration of the legal situation in the jurisdiction where they were physically located.
/Any/ organisation- governmental or otherwise- that exposes themselves to that kind of risk needs to be put out of business.
I seem to remember an incident where instapaper went offline. The FBI raided a data centre and took a blade machine offline containing blade servers they had warrants for, and instapapers, which they didn’t. So accidents happen.
Yes, but in that case the server was in an American-owned datacenter physically located in America (Virginia), where it was within the jurisdiction of the FBI.
That is hardly the same as a server in an American-owned datacenter physically located in the UK, where it was not within the jurisdiction of the FBI.
Having worked for an American “multinational” I can see how that sort of thing can happen: a chain of managers unversed in the law assumes it is doing “the right thing”. Which makes it even more important that customers consider both the actual legal situation and the cost of that sort of foulup when choosing a datacenter.
The US government has access to any data stored in the US belonging to non-US persons without basic due process like search warrants.
Serious question, who’s putting data in us-west etc when there is eu data centres? And does that free rein over data extend to data in European data centres? I was under the impression that safe harbour regs protected it? But it’s been years since I had to know about this kind of stuff and it’s now foggy.
It does not matter where the data is stored. Using EU datacenters will help latency if that is where your users are, but it will not protect you from warrants. The author digs into this in this post, but unfortunately, it is in Dutch: https://berthub.eu/articles/posts/servers-in-de-eu-eigen-sleutels-helpt-het/
Serious question, who’s putting data in us-west etc when there is eu data centres?
A lot of non-EU companies. Seems like a weird question, not everyone is either US or EU. Almost every Latin American company I’ve worked for uses us-east/west, even if it has no US customers. It’s just way cheaper than LATAM data centers and has better latency than EU.
Obviously the world isn’t just US/EU, I appreciate that. This article is dealing with the trade agreements concerning EU/US data protection though so take my comment in that perspective.
I haven’t personally made up my mind on this, but one piece of evidence in the “it’s dramatically different (in a bad way)” side of things would be the usage of unvetted DOGE staffers with IRS data. That to me seems to indicate that the situation is worse than before.
Not sure what you mean—Operational Desert Storm and the Cold War weren’t initiated by the US nor were Iraq and the USSR allies in the sense that the US is allied with Western Europe, Canada, etc (yes, the US supported the USSR against Nazi Germany and Iraq against Islamist Iran, but everyone understood those alliances were temporary—the US didn’t enter into a mutual defense pact with Iraq or USSR, for example).
they absolutely 100% were initiated by the US. yes the existence of a mutual defense pact is notable, as is its continued existence despite the US “seeking to harm” its treaty partners. it sounds like our differing perceptions of whether the present moment is “dramatically different” come down to differences in historical understanding, the discussion of which would undoubtedly be pruned by pushcx.
This isn’t true, as the US has been the steward of the Internet and its administration has turned hostile towards US’s allies.
In truth, Europe already had a wake-up call with Snowden’s revelations, the US government spying on non-US citizens with impunity, by coercing private US companies to do it. And I remember the Obama administration claiming that “non-US citizens have no rights”.
But that was about privacy, whereas this time we’re talking about a far right administration that seems to be on a war path with US’s allies. The world today is not the same as it was 10 years ago.
hm, you have a good point. I was wondering why now it would be different but “privacy” has always been too vague a concept for most people to grasp/care about. But an unpredictable foreign government which is actively cutting ties with everyone and reneging on many of its promises with (former?) allies might be a bigger warning sign to companies and governments world wide.
I mean, nobody in their right mind would host stuff pertaining to EU citizens in, say, Russia or China.
I’m into it. I’m a big fan of the typestate pattern, and even though this can feel a bit repetitive for endpoints with less logic, I like that it’s so straightforward. No more worrying about the order various handlers run…
Interesting. So is the idea with regards to typestate that you’d ensure that your routes/apis do X and Y and Z steps before calling into some function F(DidZ) ?
I haven’t open sourced my codebase yet, but yeah. So like, instead of saying “this API call is guarded by a is_logged_in? handler, my “save this Foo to the database” function requires an Authorization struct. How can you get one of those? Well, the only way is to call into the authorization subsystem. And doing that requires a User. And you can only get a User by calling into the authentication subsystem. And that happens to take a request context.
You can see how a lot of these handlers have the general form “grab a nexus instance from the Dropshot context, construct a Context from the request context, then call some nexus method, passing the context in.” Same basic idea, except a little cleaner; I’m sort of in a “embrace a little more boilerplate than I’m used to” moment and so rather than the Context stuff I’m doing the same idea but a bit more “inline” in the handlers. I might remove that duplication soon but I want to sit with it a bit more before I overly refactor.
Anyway, I think that in Nexus, that these handlers have the same sort of shape but are also a bit different is the strength of this approach. I’ve worked on rails apps where the before, after, and around request middleware ended up with subtle dependencies between them, and ordering issues, and “do this 80% of the time but not 20% of the time” kinds of things. Doing stuff this way eliminates all of that; it’s just normal code that you just read in the handler. I’ve also found that this style is basically the “skinny controller” argument from back in the day, and comes with the same benefits. It’s easier to test stuff without needing Dropshot at all, since Dropshot itself is really not doing any business logic whatsoever, which middlewares can often end up doing.
I mean, things were overhyped and ridiculous, but can anyone say that the internet isn’t at the core of the economy?
1999-2006: Java
Still one of the most widely used languages, powering many multi-billion dollar companies.
2004-2007: Web 2.0
What we now just think of as “the web”.
2007-2010: The Cloud
Again, powering multi-billion dollar workloads, a major economic factor for top tech companies, massive innovation centers for new database technologies, etc.
2010-2015: Social media
Still massively important, much to everyone’s regret.
2012-2015: Internet of Things
This one is interesting. I don’t have anything myself but most people I know have a Smart TV (I don’t get it tbh, an HDMI cord and a laptop seems infinitely better)
2013-2015: Big Data
Still a thing, right? I mean, more than ever, probably.
2017-2021: Blockchain
Weirdly still a thing, but I see this as finally being relegated to what it was primarily good for (with regards to crypto) - crime.
2021-present: AI
Do we expect this “bubble” to “pop” like the others? If so, I expect AI to be a massive part of the industry in 20 years. No question, things ebb and flow, and some of that ebbing and flowing is extremely dramatic (dot com), but in all of these cases the technology has survived and in almost every case thrived.
All of these things produced real things that are still useful, more or less, but also were massively and absolutely overhyped. I’m looking at the level of the hype more than the level of the technology. Most of these things involved huge amounts of money being dumped into very dubious ventures, most of which has not been worth it,
and several of them absolutely involved a nearly-audible pop that destroyed companies.
Yeah, I was just reflecting on the terminology. I’d never really seen someone list out so many examples before and I was struck by how successful and pervasive these technologies are. It makes me think that bubble is not the right word other than perhaps in the case of dot com where there was a very dramatic, bursty implosion.
The typical S-shaped logistic curves of exponential processes seeking (and always eventually finding!) new limits. The hype is just the noise of accelerating money. If you were riding one of these up and then it sort of leveled off unexpectedly, you might experience that as a “pop”.
To me the distinguishing feature is the inflated expectations (such as NVidia’s stock price tripling within a year, despite them not really changing much as a company), followed by backlash and disillusionment (often social/cultural, such as few people wanting to associate with cryptobro’s, outside of their niche community). This is accompanied by vast amounts of investment money flooding into, and then out of, the portion of the industry in question both of which self-reinforce the swing-y tendency.
Not for everyone and also not cheap, but many projectors come with something like android on a compute stick that is just plugged into the hdmi port, so unplug and it’s dumb.
Yeah, I’ve been eying a projector myself for a while now, but my wife is concerned about we’d be able to make the space dark enough for the image to be visible.
I use a monitor with my console for video games, same with watching TV with others. I think the only reason this wouldn’t work is if people just don’t use laptops or don’t like having to plug in? Or something idk
This one is interesting. I don’t have anything myself but most people I know have a Smart TV (I don’t get it tbh, an HDMI cord and a laptop seems infinitely better)
It’s the UX. Being able to watch a video with just your very same TV remote or a mobile phone it’s much much better than plugging your laptop with an HDMI cord. The same reason why still dedicated video game consoles exist even if there are devices like smartphones or computers that are technically just better. Now almost all TVs sold are Smart TVs, but even before, many people (like me) liked to buy TV boxes and TV dongles.
And that’s taking into account that a person own a laptop, because the number of people that doesn’t use PCs outside of work is increasing.
I have a dumb TV with a smart dongle - a jailbroken FireStick running LineageOS TV. The UX is a lot better than I’d have connecting a laptop, generally. If I were sticking to my own media collection, the difference might not be that big, but e.g. Netflix limits the resolution available to browsers, especially on Linux, compared to the app.
massive innovation centers for new database technologies, etc.
Citation needed? So far I only know about then either “stealing” existing tech as a service. Usually in an inferior way to self housing, usually lagging behind.
The other thing is putting whatever in-house DB they had and making it available. That was a one time thing though and since they largely predate the cloud I think it doesn’t make sense to call it innovation.
Yet another thing is a classic strategy of the big companies which is taking small companies or university projects and turn them into products
So I’d argue that innovations do end up in the could (duh!) it’s rarely ever driving them.
Maybe the major thing being around things like virtualization and related things but even here I can’t think of a particular one. All that stuff seems to steem largely from Xen which again originated at University.
As for bubbles: one could also argue that dotcom also still exists?
But I agree that hypes is a better term.
I wonder how many of these are as large because of the (unjustified part of) hype they received. I mean promises that were never kept and expectations that were never met, but investments (learning Java, writing java, making things cloud ready, making things depend on cloud tech, building Blockchain Knowhow, investing into currencies, etc) are the reason why they are still so big.
See Java. You learn that language at almost every university. All the companies learned you can get cheap labor right from University. It’s not about the language but about the economy that built around it. The same it’s true for many other hypes.
Compressed heap pointers with offsets reminds me a lot of the early segmented memory regions offered by x86. The segmentation registers baked the offsets directly into the architecture.
Grsecurity leveraged x86 segmentation for a number of powerful mitigations like UDEREF, which was a precursor to SMAP, using (iirc) the segmentation registers to isolate kernel/ user memory. SMAP is a sort of specialized instruction for this enforcement but Grsecurity used the generalized approach to build numerous mitigations. I suppose segmentation was considered too complex to burn registers on? Not sure why they were removed.
Even if you’re not up to replacing pointers entirely, you should start considering them as handles: They should only be dereferenceable together with the engine base pointer, never without it.
This is, basically, what UDEREF did and it was a decade-ahead (or more?) mitigation from Grsecurity using this concept.
The language that dominates the future of software development may not be Rust, Rust’s borrow checker is after all currently not even strong enough to encode the features needed to do use-after-free checking of handles to garbage collected data without some manual work, but it will be a language with a borrow checker that will win the sweepstakes. The benefits of a borrow checker are simply too great, and the downsides are only a matter of finding the right way to frame your assumptions for it to check.
It seems like a bit of a turtles situation. At some point you need to implement this “offset” logic. I suppose a memory tagging system could do it at the hardware level like with grsecurity, but I don’t see why Rust wouldn’t be suitable to implement something like compressed pointers/ offsets.
I … have trouble believing the author of this really understands GC. The whole point of GC is that “ownership” of memory is moot.
External pointers to GC objects are not plain old pointers, not unless you’re using a conservative collector that scans native stacks. Generally they are always some sort of handle that has to be dereferenced through special GC code. Because (a) the objects they point to can move, and (b) the GC has to keep track of these handles because they are roots that keep objects alive.
So the problem of dangling raw pointers to a disposed GC heap is not real, not unless you’re using a conservative collector, which no JS runtime I know of does.
They seem to understand GC just fine, they even describe pointer semantics and how a GC works, as well as V8’s pointer compression. I think it’s perhaps their use of “ownership” that’s a bit contentious but they give their own definition so I think it’s fine (I think their definition is actually accurate?). They also appear to be a developer of a garbage collected language runtime, Nova. Perhaps I’m missing something myself.
So the problem of dangling raw pointers to a disposed GC heap is not real, not unless you’re using a conservative collector, which no JS runtime I know of does.
If the GC “owns” the memory and the GC is unaware of a reference, which seems both possible and trivial to express in cases like FFI from a GC language, then you run into the issues described. Another case is the implementation of a system like V8, which implements a GC that may have external references (or improperly managed/ generated references) into the memory allocated by javascript being executed (and is the focus of much of the article). This seems consistent and reasonable, and would motivate something like the specific compressed pointers in V8?
I wonder if I’m missing something because I don’t know much about GCs, but this all seems like it makes sense?
If the GC “owns” the memory and the GC is unaware of a reference, which seems both possible and trivial to express in cases like FFI from a GC language, then you run into the issues described.
But that never happens; if it does, it’s a bug. The GC has to know about every external reference to its objects, because those objects anre by definition live annd cannot be freed; they are “roots” that it traces live objects from.
A conservative collector scans thread stacks looking for anything that might be a pointer to an object. Other collectors, like V8, use special handle types that wrap pointers.
In this post, I’ll analyse CVE-2021-37975 (reported by an anonymous researcher), which is a logic bug in the implementation of the garbage collector (GC) in v8 (the JavaScript interpreter of Chrome). This bug allows reachable JavaScript objects to be collected by the garbage collector, which then leads to a use-after-free (UAF) vulnerability for arbitrary objects in v8. In this post, I’ll go through the root cause analysis and exploit development for the bug.
An optimization in EarlyOptimizationPhase that elides ChangeTaggedToInt32 -> ChangeInt31ToTaggedSigned caused a HeapNumber to be stored to an object in a field with a TaggedSigned representation. A check before EarlyOptimizationPhase in SimplifiedLoweringPhase makes the correct assumption that write barriers can be elided when storing anything to a field in an object with the TaggedSigned representation. This leads to a use-after-free if the HeapNumber being mistakenly stored to the object lives in NewSpace, and the object being stored to lives in OldSpace.
Not every object is managed by the V8 GC, and there are also bugs in that GC, it seems.
Also, at least in Ruby, I know it’s very easy to create invalid references via FFI.
My reading of the article does not support your assumption that it’s all about the presence of bugs. It’s more like “we’ve given out a raw pointer into GC and now people could deref it after the heap is disposed.”
And the big conclusion:
A handle cannot be dereferenced on its own, all usage of it to access real memory requires access to the engine’s base pointer: The engine owns the memory. A handle cannot be dereferenced on its own, if the engine shuts down then the handle becomes a useless integer, fit only for performing integer overflow tricks with.
No, actually, what happens is the handle gets dereferenced against some garbage address (where the base pointer used to be located), and now you’re at best reading a random memory location and getting garbage or crashing, and at worst writing to some random memory location and corrupting the process’s state.
Hm, well perhaps the author would have to weigh in then. Given the reference to the V8 isolated heaps, which are definitely about preventing exploitation of GC bugs, and the bits about segmented heaps, and the reference to Halvar’s talk, it does strike me as being about the sorts of issues I described.
and now you’re at best reading a random memory location and getting garbage or crashing, and at worst writing to some random memory location and corrupting the process’s state.
Assuming the offset is secret, that seems like quite a win, but I would have to reference the paper on these heaps that the V8 team put out to know, and I don’t have time this morning :\ but “random in a 64bit space” is pretty good, there’s a huge chance you access something that just instantly crashes the system.
Nice work by Soatok again.
At this point I think we’re more surprised when folks actually end up doing the right thing as opposed to the behaviour seen from the software vendor here. As long as some baseline of security standards and practices are not enforced by regulation, organisations primarily incentivised by money are just going to continue on doing things like this with little to no repercussion. I suppose that’s nothing new though, it’ll probably take something catastrophic for regulators to get around to it—and even then there’s no guarantee.
What’s crazy is that security isn’t even incentivized by money in extreme cases. Breaches happen to companies who you think would be massively impacted, but nope. Okta was breached in 2023 and their customers were breached because of it, but their stock is up since then. That is… insane. This indicates that no one cares about security, to the extent that companies get breached due to a vendor getting breached and that vendor sees no financial impact.
I feel like users are perhaps fatigued to the extent that it feels pointless or idk. At least some of it is that security plays almost no role for software engineers other than being perceived as a useless pursuit that adds friction.
Our engineering minds are often blind to the non-technical “fixes” that stabilize these systems.
Consider credit card fraud. The absurdly low entropy of the standardized payment card Primary Account Number has led to a massive private bureaucracy that issues data handling regulations and regularly audits all organizations that handle these numbers. The expense is considerable. And yet, fraud is a regular occurrence, written off as a cost of doing business. If you as a consumer experience a fraudulent charge, you just contact your card issuer: they reverse the charge and issue you a new number. We don’t even perceive the friction because we have little basis for comparison: it’s always been this way.
On the other hand, computers aren’t secure. They are incomprehensible mountains of complexity, and society at large doesn’t want to walk away from all the percieved benefits of that complexity.
So the only reasonable way to maintain a measure of security is to evaluate risk and prioritize fixing the things that produce the most risk (for some definition of “risk”). That means you will always have some amount of security breaches at some level of severity. It’s not a question of “if” but rather “when.”
Which also means that recovery after a breach is often more important than preventing all possible breaches in the first place.
So in Okta’s case… yeah, vendors are going to have security issues. Everyone with a little experience knows that at purchase-time. If mishaps inside Okta become a regular pattern, then I’ll ditch Okta. But one-off security incidents are not going to make reasonable companies switch vendors, which means Okta’s probably going to be ok as long as they learn from their mistakes and prioritize transparency.
(Counter-example: LastPass. Anyone still trusting that company after all their mishaps is insane.)
As long as humanity continues to demand these mountains of complexity, this is the way things are going to be. The FreeSWITCH situation is pretty crazy, but the Okta case at least doesn’t seem too insane to me.
I’m not advocating for extreme levels of security, just meeting a bar that’s less embarrassing. I would probably reject the “not if but when” idea too fwiw but I’m onboard with accepting some risk, that’s the point of threat modeling - knowing what risks you do and don’t accept. I think we can do a lot better than the status quo and I think it stems almost entirely from security being something that no one cares about.
A big difference between LastPass and Okta is that Okta has way more money to burn on PR.
If it “helps” at all, physical safety isn’t much better. Regulations only get written when a lot of people die in a high profile incident.
lol it does not help
Another one in the long list of JavaScript tools that ditched JavaScript as their implementation language for performance reasons. Hopefully this is more easily usable by the time I have to work on a NodeJS project again, because the performance improvement numbers look incredibly promising.
This begs the question, why write Javascript on the server if Go is there.
Watching this industry choose js in a lot of places they don’t have to (i.e. anywhere but the browser) has been strange to see.
Single language stacks are awesome to work in. That’s why I write my frontends in Rust, but I understand TS devs going the other way.
It’s such a bad language and ecosystem. Typescript barely improves anything there.
I think the reality is that most people are asking that same question and coming to that same conclusion. The theory of “frontend devs can own the API layer” hasn’t really played out as well as people had hoped and I know plenty of JS developers who are just as happy to write Go if it comes to it anyways.
Ok, question asked. Why write JS on the server when I could pick Java/PHP/Elixir/Go/Rust/Python/Ruby/C#/Zig/OCAML/Crystal/Nim/Perl/Kotlin/Scala/Lua/Haskell/Clojure?
I think some people are aiming for a single language as a stack. Because JS seems to not be going away anytime soon and there are so many backend languages, people were/are trying to aim for JS on the server. There are many backend choices but only one frontend choice. Therefore, to get one language, and end-to-end types, JS on the server. Yes, I understand JS avoidance and all the arguments against. Yes, I rolled my eyes when the server was discovered again.
If I question why I have two languages in my app then people move the goal posts and reduce app features. “I can just concat html text to the client using app.pl in /cgi-bin”. Sure, you always have had that option, that’s not what I mean. I mean for a certain size/complexity of application. I mean, just as one benefit or pro in the trade-off, if I have Go types they don’t go to my client. Or I have to / want to have some contract layer to sync the two. So I end up with two languages and some contract between them. In theory, you don’t have that with trpc/typedjson/tanstack/etc etc. Because your types are full-stack.
So when people are talking about Go replacing Typescript this is still Typescript dev. It’s a tool written in Go to write/check/build Typescript. If you wanted to avoid NodeJS, you would have to look at things like Deno or Bun.
With so many languages taking on WASM targets, I think that’s becoming less true. And even before that, there are quite a number of compilers targeting JS. Of course there are drawbacks, and these approaches aren’t always practical for every web front-end project, but I do think “only one frontend choice” is overstating the case.
Why write Go if C is there?
Why write C if hand-optimized assembly is there?
Planned and enforced obsolescence via certificates.
This is the future the “HTTPS everywhere” crowd wants ;)
It will be interesting to see if Google fixes this. On the one hand, brand value. On the other, it’s a chance to force purchase of new hardware!
Not me. I want HTTPS Everywhere and I also don’t want this.
What’s your marketing budget? If you aren’t aligned with the marketing budget havers on this, how do you expect them to treat you when your goals diverge?
See also, fast expiring certificates making democratized CT logs infeasible, DNS over HTTPS consolidating formerly distributed systems on cloudflare. It’s not possible to set up a webpage in 2025 without interacting with a company that has enough money and accountability to untrustworthy governments to be a CA, and that sucks.
HTTPS is cool and all, but I wish there was a usable answer that wasn’t “just centralize the authority.”
Sigh. Lobsters won’t let me post. I must be getting rate limited? It seems a bit ridiculous, I’ve made one post in like… hours. And it just shows me “null” when I post. I need to bug report or something, this is quite a pain and this is going to need to be my last response as dealing with this bug is too frustrating.
Can you tell me more about these? I think “infeasible” is not accurate but maybe I’m wrong. I don’t see how DoH consolidates anything as anyone can set up a DoH server.
You can definitely set up a webpage in 2025 pretty with HTTPS, especially as you can just issue your own CA certs, which your users are welcome to trust. But if your concern is that a government can exert authority within its jurisdiction I have no idea how you think HTTP is helping you with that or how HTTPS is enabling that specifically. These don’t feel like HTTPS issues, they feel like regulatory issues.
There are numerous, globally distributed CAs, and you can set one up at any time.
Lobsters has been having some issues, I had the same trouble yesterday too.
The CT log thing is something i read on here iirc, basically that CT logs are already pretty enormous and difficult to maintain, if there are 5x as many cert transactions cause they expire in 1/5 the time the only people who will be able to keep them are people with big budgets
I suppose i could set up a DoH server, but the common wisdom is to use somebody else’s, usually cloudflare’s, the fact that something is technically possible doesnt matter in a world where nobody does it.
Are you joking? “please install my CA cert to browse my webpage” may technically count as setting up a web page but the barrier to entry is so high I might as well not. Can iphones even do that?
That’s a lot more centralized than “I can do it without involving a third party at all.”
I dunno, maybe I’m just romanticizing the past but I miss being able to publish stuff on the internet without a Big Company helping me.
Strange but I will have to learn more.
Sure, because that’s by far the easiest option and most people don’t really care about centralizing on Cloudflare, but nothing is stopping people from using another DoH.
iPhones being able to do that isn’t really relevant to HTTPS. If you want to say that users should be admins of their own devices, that’s cool too.
As for joking, no I am not. You can create a CA, anyone can. You don’t get to decide who trusts your CA, that would require work. Some companies do that work. Most individuals aren’t interested. That’s why CAs are companies. If you’re saying you want a CA without involving any company, including non-profits that run CAs, then there is in fact an “open” solution - host your own. No one can stop you.
You can run your own internet if you want to. HTTPS is only going to come up when you take on the responsibility of publishing content to the internet that everyone else has to use. No one can stop you from running your own internet.
As opposed to running an HTTP server without a third party at all? I guess technically you could go set up a server at your nearest Starbucks but I think “at all” is a bit hard to come by and always has been. Like I said, if you want to set up a server on your own local network no one is ever going to be able to stop you.
What did that look like?
I want the benefits of HTTPS without the drawbacks. I also want the benefits of DNS without the drawbacks.
On the one hand, I am completely sincere about this. On the other, I feel kind of foolish for wanting things without wanting their consequences.
Which drawbacks? I ask not because I believe there are none, but I’m curious which concern you the most. I’m sympathetic to wanting things and not wanting their consequences haha that’s the tricky thing with life.
HTTPS: I want the authentication properties of HTTPS without being beholden to a semi-centralized and not necessarily trustworthy CA system. All proposed alternatives are, as far as I know, bad.
DNS: I want the convenience of globally unique host names without it depending on a centralized registry. All proposed alternatives are, as far as I know, bad.
These kind of accusations are posts that make me want to spend less on lobsters. Who knows if it’s planned or accidental obsolescence? Many devices and services outlive their teams by much longer than anticipated. Everyone working in software for a long while has experienced situations like those. I also find the accusation that HTTPS is leading to broken devices rather wild…
I want to offer a different view: How cool is it that the devices was fixable despite Google’s failure to extend/exchange their certificate. Go, tell your folks that the Chromecast is fixable and help them :)
For me, it’s takes like yours that irritate me. Companies that are some of the largest on the planet don’t need people like you to defend them, to make excuses for them, to try to squelch the frustration directed towards them because they’re either evil or incompetent.
By the way, there is no third option - either they’re evil and intended to force obsolescence upon these devices, or they’re incompetent and didn’t know this was going to happen because of this incompetence.
The world where we’re thinking it’s cool that these devices are fixable tidily neglects the fact that 99% of the people out there will have zero clue how to fix them. That it’s fixable means practically nothing.
Who cares? No one is defending Google. People are defending deploying HTTPS as a strategy to improve security. Who cares if it’s Google or anyone else? The person you’re responding to never defends Google, none of this has to do with Google.
Who cares? Also, there is a very obvious 3rd option - that competent people can make a mistake.
Nothing you’ve said is relevant at all to the assertion that, quoting here:
Even though you’re quoting me, you must be mistaken - this post is about Google, and my response was about someone who is defending Google’s actions (“Who knows if it’s planned or accidental obsolescence?”).
I haven’t a clue how you can think that a whole post about Google breaking Google devices isn’t about Google…
To the last point, “https everywhere” means things like this can keep being used as an excuse to make fully functional products in to ewaste over and over, and we’re left wondering if the companies responsible are evil or dumb (or both). People pretending to not get the connection aren’t really making a good case for Google not being shit, or for how the “https everywhere” comment is somehow a tangent.
Nope, not mistaken. I think my points all stand as-is.
Take what you want from my employment by said company, but I would guess absolutely no-one in private and security has any wish/intention/pressure to not renew a certificate.
I have no insider knowledge about what has happened (nor could I share it if I did! But I really don’t). But I do know that the privacy and security people take their jobs extremely seriously.
Google has form in these matters, and the Chromecast as a brand even has an entry here:
https://killedbygoogle.com/
But in the future I’ll be more polite in criticizing one of the world’s biggest companies so that this place is more welcoming to you.
This isn’t about who you criticize, I would say the same if you picked the smallest company on earth. This is about the obvious negativity.
This is because the article isn’t “Chromecast isn’t working and the devices all need to go to the trash”. Someone actually found out why and people replied with instructions how to fix these devices, which is rather brilliant. And all of that despite google’s announcements that it would discontinue it..
I’m not exactly sure what you meant by that, and even the winky face doesn’t elide your intent and meaning much. I don’t think privacy and security advocates want this at all. I want usable and accessible privacy and security and investment in long term maintenance and usability of products. If that’s what you meant, it reads as a literal attack rather than sarcasm. Poe’s law and all.
Not all privacy and security advocates wanted ‘HTTPS everywhere’. Not all of the ‘HTTPS everywhere’ crowd wanted centralized control of privacy and encryption solutions. But the privacy and security discussion has been captured by corporate interests to an astonishing degree. And I think @gerikson is right to point that out.
Do you seriously think that a future law in the US forcing Let’s Encrypt (or any other CA) to revoke the certificates of any site the government finds objectionable is outside the realms of possibility?
HTTPS everywhere is handing a de facto publishing license to every site that can be revoked at will by those that control the levers of power.
I admit this is orthogonal to the issue at hand. It’s just an example I came up with when brewing some tea in the dinette.
In an https-less world the same people in power can just force ISPs to serve different content for a given domain, or force DNS providers to switch the NS to whatever they want, etc. Or worse, they can maliciously modify the content you want served, subtly.
Only being able to revoke a cert is an improvement.
Am I missing something?
Holding the threat of cutting off 99% of internet traffic over the head of media companies is a great way to enforce self-censorship. And the best part is that the victim does all the work themselves!
The original sin of HTTPS was wedding it to a centralized CA structure. But then, the drafters of the Weimar constitution also believed everything would turn out fine.
They’ve just explained to you that HTTPS changes nothing about what the government can do to enact censorship. Hostile governments can turn your internet off without any need for HTTPS. In fact, HTTPS directly attempts to mitigate what the government can do with things like CT logs, etc, and we have seen this work. And in the singular instance where HTTPS provides an attack (revoke cert) you can just trust the cert anyways.
edit: Lobsters is basically completely broken for me (anyone else just getting ‘null’ when posting?) so here is my response to the reply to this post. I’m unable to reply otherwise and I’m getting no errors to indicate why. Anyway…
This is getting ridiculous, frankly.
You’ve conveniently ignored everything I’ve said and focused instead of how a ridiculous attack scenario that has an obvious mitigation has 4 words that somehow you’re relating to SCOTUS and 1st amendment rights? Just glossing over that this attack makes almost no sense whatsoever, glossing over that the far easier attacks apply to HTTP at least as well (or often better) as HTTPS, glossing over the fact that even more attacks are viable against HTTP that aren’t viable against HTTPS, glossing over that we’ve seen CT logs actually demonstrate value against government attackers, etc etc etc. But uh, yeah, SCOTUS.
SCOTUS is going to somehow detect that I trusted a certificate? And… this is somehow worse under HTTPS? They can detect my device accepting a certificate but they can’t detect me accessing content over HTTP? Because somehow the government can’t attack HTTP but can attack HTTPS? This just does not make any sense and you’ve done nothing to justify your points. Users have been more than charitable in explaining this to you, even granting that an attack exists on HTTPS but helpfully explaining to you why it makes no sense.
Going along with your broken threading
My scenario was hypothetical.
In the near future, on the other side of an American Gleichschaltung, a law is passed requiring CAs to revoke specific certificates when ordered.
If the TLS cert for CNN.com is revoked, users will reach a scary warning page telling the user the site cannot be trusted. Depending on the status of “HTTPS Everywhere”, it might not be able to proceed past this page. But crucially, CNN.com remains up, it might be accessible via HTTP (depending on HSTS settings) and the government has done nothing to impede the publication.
But the end effect is that CNN.com is unreadable for the vast number of visitors. This will make the choice of CNN to tone down criticism of the government very easy to make.
The goal of a modern authoritarian regime is not to obsessively police speech to enforce a single worldview. It’s to make it uneconomical or inconvenient to publish content that will lead to opposition to the regime. Media will parrot government talking points or peddle harmless entertainment. There will be an opposition and it will be “protected” by free speech laws, but in practice accessing its speech online will be hard to impossible for the vast majority of people.
I feel like your entire argument hinges on this and it just isn’t true.
If the USA apparatus decides to censor CNN, revoking TLS cert wouldn’t be the way. It’ll be secret court orders (not unlike recent one British government has sent to Apple), and, should they not comply, apprehension of key staff.
And, even if such cert revocation happened, CNN would be able to get new one within seconds by contacting any other ACME CA, there are even some operating in EEA.
I think your whole argument is misguided, and not aimed at understanding failures of Google, but at lashing at only tangentially related problem space.
And my comment is not defence of Google or Cloudflare, I consider both to be malicious for plethora of reasons.
You’re still thinking like the USSR or China or any totalitarian government. The point isn’t to enforce a particular view. The point is to prevent CNN or any other media organization from publishing anything other than pablum, by threatening their ad revenue stream. They will cover government talking points, entertainment, even happily fake news. Like in Russia, “nothing is true and everything is possible”.
Nothing is preventing the US from only allowing certs from US based issuers. Effectively, if you’re using a mainstream browser, the hypothetical law I have sketched out will also affect root CAs.[1]
I proposed a semi-plausible failure mode of the current CA-based certification system and suddenly I’ve gotten more flags than ever before. I find it really interesting.
[1] note that each and every one of these attempts to block access will have quite easy and trivial workarounds. That’s fine, because as stated above, having 100% control of some sort of “truth” is not the point. If nerds and really motivated people can get around a block by installing their own root store or similar, it will just keep them happy to have “cheated the system”. The point is having an atomized audience, incapable of organizing a resistance.
The flags are me and they’re because your posts have been overwhelmingly low quality, consisting of cherry picking, trolling, rhetoric, and failing to engage with anyone’s points. You also never proposed any such attack, other users did you the favor of explaining what attack exists.
The closest thing you’ve come to defining an attack (before others stepped in to hand you one) is this:
It’s not that interesting why you’re getting flagged. IMO flags should be required to have a reason + should be open, but that’s just me, and that’s why I virtually always add a comment when I flag a post.
This is one of the only posts where you’ve almost come close to saying what you think the actual problem is, which if I very charitably interpret and steel-man on your behalf I can take as essentially “The US will exert power over CAs in order to make it hard for news sites to publish content”. This utterly fails, to be clear (as so many people have pointed out that there are far more attacks on HTTP that would work just as well or infinitely better, and as I have pointed out that we have seen HTTPS explicitly add this threat model and try to address it WITH SUCCESS using CT Logs), but at least with enough effort I can extract a coherent point.
I have around 30 flags right now in these threads (plus some from people who took time off their busy schedule to trawl through older comments for semi-plausible ones to flag). You’re not the only one I have pissed off.[1]
(I actually appreciate you replying to my comments but to be honest I find your replies quite rambling and incoherent. I guess I can take some blame for not fully cosplaying as a Project 2025 lawyer, instead relying on vibes.)
It’s fine, though. I’ve grown disillusioned by the EFF style of encryption boosting[2]. I expect them to fold like a cheap suit if and when the gloves come off.
[1] but I’m still net positive on scores, so there are people on the other side too.
[2] they’ve been hyperfocussed on the threat of government threats to free speech, while giving corporations a free pass. They never really considered corporations taking over the government.
Hm, I see. No, I certainly have not flagged all of your posts or anything, just 2 or 3 that I felt were egregious. I think lobsters should genuinely ban more people for flag abuse, tbh, but such is the way.
It’s interesting that my posts come off as rambly. I suppose I just dislike tree-style conversations and lobsters bugs have made following up extremely annoying as my posts just disappear and show as “null”.
I’ve been getting the “null” response too. There’s nothing in the bug tracker right now, and I don’t have IRC access. Hopefully it will be looked at soon.
As to the flags, people might legitimately feel I’m getting too political.
Genuine question, is this aimed at me?
Nope. Unless you are a lawyer for Project 2025.
Yeah, “trust the cert anyway” is going to be the fig leaf used to convince a compliant SCOTUS that revoking a certification is not a blatant violation of the 1st amendment. But at least the daily mandatory webcast from Dear Leader will be guaranteed not to be tampered with during transport!
Wouldn’t you agree that certificate transparency does a better job detecting this kind of thing than surreptitiously redirecting DNS would?
The point of this hypothetical scenario would be that the threat of certificate revocation would be out in the open, to enforce self-censorship to avoid losing traffic/audience. See my comment here:
https://lobste.rs/s/mxy0si/chromecast_2_s_device_authentication#c_lyenlf
Flagged as trolling. I’m also extremely critical of Google’s killing of various services.
I’m not sure any of those are good examples of planned obsolescence. As far as I can tell, they’re all services that didn’t perform very well that Google didn’t want to support, tools that got subsumed into other tools, or ongoing projects that were halted.
I think it’s reasonable to still wish that some of those things were still going, or that they’d been open-sourced in some way so that people could keep them going by themselves, or even that Google themselves had managed them better. But planned obsolescence is quite specifically the idea that you should create things with a limited lifespan so that you can make money by selling their replacements. As far as I can tell, that doesn’t apply to any of those examples.
Trust Google to not even manage to do planned obsolescence right either…
Please refrain from smirky, inflammatory comments.
I get that it’s a tongue in cheek comment, but this is what falls out of “we want our non-https authentication certificates to chain through public roots”.
There is no reason for device authentication to be tied to PKI - it is inherently a private (as in “only relevant to the vendor” , not secret) authentication mechanism so should not be trying to chain through PKI, or PKI-like, roots.
Hyperbole much? Sometimes an expired certificate is just an expired certificate
Why is this a hyperbole? It is clear that even an enterprise the size of Google, famous for it’s leetcode-topping talent is unable to manage certificates at scale. This makes it a pretty good point against uncritical deployment of cryptographic solutions.
Microsoft let microsoft.com lapse that one time. Should we give up on DNS?
When Microsoft did that I wasn’t standing embarrassed in front of my family failing to cast cartoons on the TV. So it was their problem, not my problem.
(It is still bricked today btw)
No one has ever argued for “uncritical deployment” of any solution, let alone cryptographic ones.
Maybe I’m reading too much into “HTTPS everywhere” then.
Maybe. I think there are two ways to interpret it - “HTTPS Everywhere” means “literally every place” or it means “everywhere that makes sense, which is the vast majority of places”. But, to me, neither of these implies “you should deploy in a way that isn’t considered and that will completely destroy a product in the future”, it just means that you should very likely be aiming for a reliable, well supported deployment of HTTPS.
I was replying more to the “planned and enforced obsolescence” conspiracy theorizing.
It is true that managing certificates at scale is something not a lot of large organizations seem to be able to pull off, and that’s a legitimate discussion to have… but I didn’t detect any good faith arguments here, just ranting
Even if half of the things I have heard about Brave are wrong, why even bother when so many other great, free alternatives exist. The first and last time I tried it was the home page ad fiasco… uninstalled and went back to Chrome.
These days I try to use Firefox, but escape hatch to Chrome when things don’t work. I know there are better alternatives to both Firefox and Chrome, I’ll start exploring them… maybe? It’s hard for me to care about them since most of them are just Chrome/Firefox anyway. I’ll definitely give Ladybird a go when it’s ready. On paper, at least, it sounds like the escape from Google/Mozilla that is desperately needed.
Kagi bringing Orion to Linux feels promising. It’s OK on Mac, though after using it for 6 months I switched back to Safari. It looks like they’re using Webkit for that on Linux, not blink, which is a happy surprise IMO. That feels like a good development. (I’m also looking forward to Ladybird, though. Every so often I build myself a binary and kick the tires. Their progress feels simultaneously impossibly fast and excruciatingly slow.
If I understand correctly, Orion is not open source. That feels like a huge step backward and not a solution to a browser being controlled by a company with user-hostile incentives. I think Ladybird is more in line with what we really need: a browser that isn’t a product but rather a public good that may be funded in part by corporations but isn’t strongly influenced by any one commercial entity.
I believe they have stated that open sourcing is in the works1
Their business model is, at the minimum, less user hostile than others due to users paying them money directly to keep them alive.
Disclaimer: Paid Kagi user.
That help page has said Kagi is “working on it” since 2023-09 or earlier. Since Kagi hasn’t finished that work after 1.5 years, I don’t believe Kagi is actually working on open sourcing Orion.
If US DoJ has their way, google won’t be able to fund chrome any more the way it was doing so far. That also means apple and firefox lose money too. So Kagi’s stuff might work out long term if breakup happens.
That’s totally valid, and I’d strongly prefer to use an open source UA as well!
In the context of browsers, though, where almost all traffic comes from either webkit-based browsers (chiefly if not only Safari on Mac/iPad/iPhone), blink-based browsers (chrome/edge/vivaldi/opera/other even smaller ones) or gecko-based browsers (Firefox/LibreWolf/Waterfox/IceCat/Seamonkey/Zen/other even smaller ones) two things stand out to me:
I thought that Orion moving Webkit into a Linux browser was a promising development just from an ecosystem diversity perspective. And I thought having a browser that’s not ad-funded on Linux (because even those FOSS ones are, indirectly ad-funded) was also a promising development.
I’d also be happier with a production ready Ladybird. But that doesn’t diminish the notion that, in my eye, a new option that’s not beholden to advertisers feels like a really good step.
There are non-gecko pure FOSS browsers on Linux.
Of the blink-based pure FOSS browsers, I use Ungoogled Chromium, which tracks the Chromium project and removes all binary blobs and Google services. There is also Debian Chromium; Iridium; Falkon from KDE; and Qute (keyboard driven UI with vim-style key bindings). Probably many others.
The best Webkit based browser I’m aware of on Linux is Epiphany, aka Gnome Web. It has built-in ad blocking and “experimental” support for chrome/firefox extensions. A hypothetical Orion port to Linux would presumably have non-experimental extension support. (I found some browsers based on the deprecated QtWebKit, but these should not be used due to unfixed security flaws.)
I wasn’t sure Ungoogled Chromium was fully FOSS, and I completely forgot about Debian Chromium. I tried to use Qute for a while and it was broken enough for me at the time that I assumed it was not actively developed.
When did Epiphany switch from Gecko to Webkit? Last time I was aware of what it used, it was like “Camino for Linux” and was good, but I still had it on the Gecko pile.
According to Wikipedia, Epiphany switched from Gecko to Webkit in 2008, because the Gecko API was too difficult to interface to / caused too much maintenance burden. Using Gecko as a library and wrapping your own UI around it is apparently quite different from soft forking the entire Firefox project and applying patches.
Webkit.org endorses Epiphany as the Linux browser that uses Webkit.
There used to be a QtWebKit wrapper in the Qt project, but it was abandoned in favour of QtWebEngine based on Blink. The QtWebEngine announcement in 2013 gives the rationale: https://www.qt.io/blog/2013/09/12/introducing-the-qt-webengine. At the time, the Qt project was doing all the work of making WebKit into a cross-platform API, and it was too much work. Google had recently forked Webkit to create Blink as a cross-platform library. Switching to Blink gave the Qt project better features and compatibility at a lower development cost.
The FOSS world needs a high quality, cross-platform browser engine that you can wrap your own UI around. It seems that Blink is the best implementation of such a library. WebKit is focused on macOS and iOS, and Firefox develops Gecko as an internal API for Firefox.
EDIT: I see that https://webkitgtk.org/ exists for the Gnome platform, and is reported to be easy to use.
I see Servo as the future, since it is written in Rust, not C++, and since it is developed as a cross platform API, to which you must bring your own UI. There is also Ladybird, and it’s also cross-platform, but it’s written in C++, which is less popular for new projects, and its web engine is not developed as a separate project. Servo isn’t ready yet, but they project it will be ready this year: https://servo.org/blog/2025/02/19/this-month-in-servo/.
I used to contribute to Camino on OS X, and I knew that most appetite for embedding gecko in anything that’s not firefox died a while back, about the time Mozilla deprecated the embedding library, but I’d lost track of Epiphany. As an aside: I’m still sorry that Mozilla deprecated the embedding interface for gecko, and I wish I could find a way to make it practical to maintain that. Embedded Gecko was really nice to work with in its time.
I strongly agree with this. I’d really like a non-blink thing to be an option for this. Not because there’s anything wrong with blink, but because that feels like a rug pull waiting to happen. I like that servo update, and hope that the momentum holds.
Wikipedia suggests the WebKit backend was added to Epiphany in 2007 and they removed the Gecko backend in 2009. Wow, time flies! GNOME Web is one I would like to try out more, if only because I enjoy GNOME and it seems to be a decent option for mobile Linux.
I have not encountered any website that doesn’t work on firefox (one corporate app said it required Chrome for some undisclosed reason, but I changed the useragent and had no issue at all to use their sinple CRUD). What kind of issues do you find?
I’ve wondered the same thing in these recent discussions. I’ve used Firefox exclusively at home for over 15 years, and I’ve used it at my different jobs as much as possible. While my last two employers had maybe one thing that only worked in IE or Chrome/Edge, everything else worked fine (and often better than my coworkers’ Chrome) in Firefox. At home, the last time I remember installing Chrome was to try some demo of Web MIDI before Firefox had support. That was probably five years ago, and I uninstalled Chrome after playing with the demo for a few minutes.
I had to install Chromium a couple of times in the last years to join meetings and podcast recording that were done with software using Chrome-only API.
When it happens, I bless flattpak as I install Chromium then permanently delete it afterward without any trace on my system.
If you are an heavy user of such web apps, I guess that it makes sense to use Chrome as your main browser.
I can’t get launcher.keychron.com to work on LibreWolf but that’s pretty much it. I also have chrome just in case I’m too lazy to figure out what specifically is breaking a site
Firefox doesn’t support WebUSB, so that’s probably the issue.
Thanks, yeah, that’s it. I knew it was some specific thing that wasn’t supported I just couldn’t remember and was writing that previous comment on my phone so I was too lazy to check. But yeah, it’s literally the only site I could think of that doesn’t work on Firefox (for me).
It’s pretty rare to be fair, so much so that I don’t have an example of the top off my head. I know, classic internet comment un-cited source bullshit, sorry. It was probably awful gov or company intranet pages over the years.
Some intensive browser based games run noticeably better on Chrome too, but I know this isn’t exactly a common use case for browsers that others care about.
Probably not a satisfying reply, apologies.
For some reason, trying to log in to the CRA (Canadian equivalent of the IRS) always fails for me with firefox and I need to use chrome to pay my taxes.
I run into small stuff fairly regularly. Visual glitches are common. Every once in a while, I’ll run into a site that won’t let me login. (Redirects fail, can’t solve a CAPTCHA, etc.)
Some google workspace features at least used to be annoying enough that I just devote a chrome profile to running those workspace apps. I haven’t retried them in Firefox recently because I kind of feel that it’s google’s just deserts that they get a profile on me that has nothing but their own properties, while I use other browsers for the real web.
I should start keeping a list of specific sites. Because I do care about this, but usually when it comes up I’m trying to get something done quickly and a work-around like “use chrome for that site” carries the day, then I forget to return to it and dig into why it was broken.
I can’t tell if “these services” is supposed to include Dropbox but I know when I worked at Dropbox (I left in ~2019) I don’t believe we ever examined user files. Any project that would want to access any user data (ie: email addresses, access logs, etc) was heavily scrutinized and almost always rejected and that wasn’t even direct file access, I don’t think that would have been entertained. That’s my recollection at least, and perhaps things have changed, but given the ambiguity in the wording I figured I’d just point that out.
I really would have liked to have heard what they’re thinking here. We took privacy of documents extremely seriously. I don’t know the details of whatever telemetry we did or did not collection but I know that there was intense scrutiny whenever such things came up and the few times (just once that I remember, actually) I recall a bizops team or whatever wanting telemetry they were flatly told it was never going to happen so I strongly believe that Dropbox only ever collected what was necessary to operate. Again, things may have changed, but the culture around protecting users and respecting users was pervasive and strongly enforced.
I don’t know why you would trust a P2P solution more. Having a bunch of strangers integrate into your file storage solution does not seem to solve the problem of trust, it seems to make it far more complex.
Basically, the premise of this seems to be that Dropbox isn’t trustworthy but it doesn’t seem well justified but, again just based on my few years at the company, doesn’t mesh at all with the culture. I was also on a security team, I was pretty privy to the sort of access that engineers etc had to users files and how at east some calls were made on topics like accessing user data. I’m sure there’s plenty I was not privy to, and certainly in terms of legal requirements to access data that’s valid, but even then we were always extremely deliberate and user-focused.
My suggestion would indeed be that if you do not trust Dropbox that you consider locally encrypting files before synchronizing them. I think when this had come up at DBX the issues we ran into were never “but what if we want to access that data”, instead it’s:
If you already don’t trust DBX, why would you trust us to encrypt your data locally? If the attack you’re concerned with is insider, that is. Obviously if it’s “what about an active attacker” that’s separate, but also we did encrypt data across multiple stages of the product (ie: in transit, storage, hardware, etc).
It’s a meaningfully difficult technical problem to store client-side-encrypted data, sync it, etc.
It creates confusion for users - certain features may or may not be possible to support, recovery becomes extremely difficult, etc.
This is based on recollection, I am sure there are tons of other reasons why this wasn’t implemented when it was requested. I don’t want to overly represent my part in these discussions, this is just recollection from what I had heard. But I’m confident that the reason was never “we want to read their files” while I worked there. Again, maybe things have changed, it’s been years since I worked there.
I’m a little bit uncomfortable speaking about past work I’ve done so hopefully I’ll be forgiven if I’m reluctant to elaborate much more. Again, it’s been years since I worked there, I worked on an internal security team, my scope and view was limited, and there’s only so much I can feel comfortable speaking to about a past employer, but that’s my impression.
Perhaps the “suspicious things” include the Accessibility permission kerfuffle of 2016? I know a couple of people who are still put off by that.
Perhaps! And I respect that people will be put off by that. That’s a really good post, I’m personally against companies doing that sort of thing myself. I don’t really agree with the characterization here, for example:
But I respect that someone would take that feeling away and make decisions based on that. No one has to use Dropbox, Dropbox is not owed any free passes, nothing like that, I only mean to say that when it came to accessing user data it was, at the time I was there, taken extremely seriously and file access was basically non-negotiably barred for something like a product feature.
Having cofounded a company that stored peoples’ health information, I’m familiar with the dissonance between an internal culture that is entirely well-meaning and competent, trying to navigate very real tradeoffs between usability, feasibility, and cost, and the view of a particularly (but not even unreasonably) paranoid customer.
I also remember a lot of reputational hits to Dropbox in the early days that could leave a lasting impression.
The initial Dropbox messaging oversimplified things with regard to privacy, and there was some backlash. This resulted in a blog post in 2011 where they clarified that actually, some employees can access your files, and yes, your files will be turned over to law enforcement if required.
To a very technical user this was obvious from the existence of the password recovery and preview features, so they found the original oversimplification duplicitous. Some of the less-technical users who took the initial messaging at face value felt betrayed by the clarification.
A few months after this post (“We’re proud of our excellent track record in security”), the fail-open auth fiasco happened. Later there was the full auth database breach of 2016, and the resurrected deleted files of 2017. Then there was the NSA slide in the Snowden dump that described Dropbox as “coming soon”.
I’m a satisfied Dropbox user myself, but I am at all times aware of the limitations on the privacy of what I put in there, which are just inherent in the definition of the service, no matter how great the team is.
I think that there are, of course, bugs, flaws, mistakes, etc. I’m just saying that Dropbox is not (or at least, was not) one of those companies that monetizes your file data. FWIW, just as a fun note, every single employee who joins Dropbox learns about that password bug in training.
I have no idea what Snowden would have revealed that would have surprised me or anyone else. I don’t want to comment on any access that would or would not exist as I think it would be inappropriate so I’ll just leave at this - I am skeptical of the claim of a “coming soon” that never came.
Anyways, like I’ve said, people are free to feel however they like about Dropbox. I just wanted to share my experience in a limited way, I’m already closer to talking about details than I’d like to be so I’ll have to leave it there.
Thanks for the insight! (As I say, I don’t find Dropbox particularly “creepy” myself, just pulling up some ancient history to try explaining why some might say that.)
Yeah for sure, it was interesting to see iall of it laid out like that.
You talk a lot about trusting, but why do you need trust at all? That’s the obvious benefit of P2P. Your files are only ever on your own devices, and the transfer between them is secured with your own keys. You don’t need to trust anyone else because no one else is involved.
All I’m saying with regards to trust is that if you don’t trust a service I recommend not using it, regardless of design. I think that Signal is really well designed, if I didn’t trust the authors I wouldn’t use it. If I thought the client might be malicious, I wouldn’t use it. Distrusting the people that build, package, distribute, and design your software is a deal breaker to me without additional mitigations like encrypting the data myself - if I didn’t trust the authors I wouldn’t trust them to handle the encryption, that would have to be out of band.
That is not my understanding of P2P? Am I missing something? Either way, I have no issue with people using P2P either.
I’ve never heard of a P2P file sync software that gives you access to other users’ disk space. All of them that I know of including Syncthing work as I described. Have you seen it being done differently?
To me, P2P means that instead of downloading a file from one source, such as Dropbox, I download pieces of it from distributed peers. Similarly, when you upload a file, you upload pieces to peers. Is that not the case?
You fell for the enterprise firewall vendors’ definition of P2P.
Okay, can you explain to me what you mean then? Obviously I’m not following and it seems like something I’d be interested to learn about.
I’m talking about the barebones definition of P2P, which is simply a networking model in which peers connect (directly) to other peers. There’s nothing specific to it other than that. Anything more is just building on top of that general idea. A P2P protocol could be designed to require each peer to have a cryptographic key for identification and authentication. Now peers can communicate securely, and they can authorize peers to do different things based on their identity. Then you add a feature to the protocol by which a device allows peers to download files from it if they are authorized. You can tell which devices in the network belong to you, you grant only those devices the ability to transfer your files around, and no devices other than your own can see the files because they communicate confidentially. That’s the high-level concept of Syncthing.
What enterprise firewall vendors mean when they say “P2P” is basically just BitTorrent.
I feel like that matches exactly what I said. I never mentioned anything higher level like a cryptographic key or lack thereof. I said you distribute files to peers, and it sounds like that’s exactly what “peer to peer” means from your perspective as well.
Okay but if you don’t trust the person developing the P2P product, why would you trust them to implement this protocol? My suggestion with Dropbox was that if you don’t trust Dropbox as a company you shouldn’t trust their agent to encrypt data for you, you should do it yourself. I think this applies exactly the same way to P2P. If you want to remove trust from the vendor you should be encrypting using a tool you do trust and only distributing that.
Okay, that’s fine too. If you want a system where you own all of the devices involved that seems totally reasonable.
You mentioned chunking and sending files to untrusted peers. That’s too specific and explicitly not what most P2P file sync tools do. Most of what I’ve said hinges on this sentence of yours:
The strangers never get to see your files, so this doesn’t matter.
The behavior of software I run on my own computer can be analyzed and verified to some extent, but I can’t verify what the software on your computer does. This can be improved further by using FOSS. If the files never end up on your computer, I don’t need to trust you to not do anything shady with them. If the files are only ever on my own devices, I don’t need to encrypt them either.
Sure, I was assuming in the P2P scenario that the peers were untrusted because that’s the scenario presented with Dropbox - that it is untrusted. If you remove that and say “I trust my peers” sure, that’s perfectly fine.
Why would that be the case? Do you mean because you’re encrypting them?
Perhaps I should clarify. I’m saying that in the scenario where you don’t trust the person producing your file syncing software:
I feel that this applies just as much to P2P as it does to Dropbox? And to me it seems like P2P just adds more parties.
Yeah, like I said, that’s all fine.
Right, that was my point. Remove the need to trust by removing third parties from the equation. In that sense, P2P file sync has the same benefits as self-hosting a client-server file sync service, except you don’t need to operate a server.
The rest boils down to “Can you trust proprietary software running on your computer?”, where I would argue “just use FOSS”. But even so, which malicious behavior would be more likely to be spotted by a user?
This feels quite biased. For example, presumably Brave has responded to these things. Even if the response is invalid, my preference would be to show it and then demonstrate it as being invalid. When I see a big list of “they did this” with no representation of the other side it makes me very skeptical and, perhaps unfairly, dismissive.
I have no stake in this but I will say that Brave seems to do two things that I see potential in.
It is one of the few cases where blockchain seems reasonable. I know people like to hate on blockchain because of cryptocurrency, and they hate on cryptocurrency because it is largely a tool for criminals, but just from a technical perspective there are potentially valid use cases where you have a distributed system with some number of participants who you do not want to trust while still ensuring that all participants are doing some kind of work (visiting a page).
It is the only browser project I have seen that actually seems like it could disrupt the singular monetization strategy of the web - advertising. Mozilla can’t exist without ad revenue, they already can’t meaningfully compete against it even if you ignore that all of their funding is ad-driven. We should probably be asking ourselves how we expect the web to continue if we reject advertising as a source. To my knowledge, Brave is the only company that’s taken an approach that even appears viable.
I don’t really have a strong stake in this but I find this list really unhelpful. It doesn’t do any of the hard work for me. I could search online for “brave controversy” and get a list of search results like this. What would be valuable is actually providing the hard information to find, to provide analysis, etc. A listicle of “I accuse them of this” is not valuable at all unless you’re either completely uninterested in basing your opinions on information or you’re already sold on the idea that Brave is bad and you want to add more urls to your evidentiary arsenal.
Am I to click each of these links, find the responses, analyze them myself, etc? I could do that, but it’s kind of a heavy lift.
I haven’t seen this be the reason people criticize crypto for at least 10-15 years. People have concerns about the ecological concerns, and the rampancy of pump-and-dump and other scams. But people haven’t cared about crypto being used to buy drugs in years. At least, not to a prominent degree
I wasn’t thinking of drugs specifically, I was thinking about scams, identity theft, blackmail, ransomware, etc. But it isn’t really important to my point, I don’t think.
has ransomware moved on from cryptocurrency?
Bitcoin is ESG and Brave use a blockchain that is not proof of work. FIAT as USD is the most used to buy drugs and fund wars. Also Chainalisys loves open ledgers because transactions can be traced.
The way Brave disrupts advertising on the Web is by intercepting it and replacing the ads with their own ads. It is the sleaziest of all possible disruptions.
Why is that? I don’t know much about that, it sounds fine to me, personally.
Assuming content creators (god I hate that word) actually try to make a living using only advertisement and actually give a shit while choosing partners it’s taking away their income while still showing the user advertisement.
It’s kind of like syphoning off the money the actual creator could make.
It feels to me like selling an open source product you don’t develop (without support) or selling fandubs.
I’m not really familiar with this stuff, so I don’t know much about it. My understanding was that Brave intended to pay creators in that circumstance, perhaps not though?
Afaict from all the controversies they don’t give a shit, are intransperent and come off as evil. See everything in the linked reddit list. And I don’t know either how brave is supposed to work, but after everything I read about it https://lobste.rs/s/iopw1d/what_s_up_with_lobste_rs_blocking_brave I don’t want to touch it with a 10 foot pole
Yeah, that’s fair. I am not really trying to defend Brave. I just wanted to point out that a big list of accusations isn’t very helpful to me personally and I think that the premise of finding new ways to monetize the web is valuable. I have no other thoughts on the matter tbh.
I tried to do that in this thread which has some of the bigger criticisms: https://lobste.rs/s/iopw1d/what_s_up_with_lobste_rs_blocking_brave#c_ezne5h
After engaging in that discussion, it reaffirmed to me that two reasonable people can see the exact same events/discussions and come to opposite conclusions.
It’s mostly preconceived notions/beliefs. For some reason a lot of people are predisposed to give the benefit of the doubt to Brave, and so we gets lots of rebuttals which try to see the things Brave has done in the best possible light/downplay the criticism/find a charitable interpretation/etc.
By contrast a lot of people, for whatever reason, are predisposed to never give benefit of the doubt to Mozilla, so we get lots of angry threads where people will do whatever they can to view decisions by Mozilla in a negative light.
Personally I’m in the exact opposite position – I am inclined to give Mozilla the benefit of the doubt and look for charitable interpretations of things they do (especially the recent ToS kerfuffle, which just seems like silly overreaction by the internet), and inclined to view Brave and especially Brendan Eich negatively and never give the benefit of the doubt.
We can dig further about why one wants to give the benefit of the doubt and another does not, and lay out all the arguments, and still reach opposite conclusions.
I agree if you mean to say that it’s ideological, I think that is true by definition.
To me, blockchain is merely a public ledger that cannot (easily) be amended once written to. In that case, Git’s commit system is technically blockchain. Each commit hash depends on the previous hash and the commit’s diff, meaning it cannot be edited without throwing everyone’s clone out of sync. Pretty useful!
How is Brave funded?
Based on activities listed in the OP, Brave is funded by advertising (injecting their own ads into your browser, replacing the ads of the site you are visiting), soliciting donations for open source projects they have no relation with and pocketing the funds, subscription fees from the VPN that they install without permission, scraping web sites without permission and reselling copyrighted data for AI training, and injecting URLs with affiliate codes into your web browser.
Based on Brave’s own description of their company, from their FAQ:
Elsewhere in the FAQ, they talk about collecting user data and selling that to advertisers, for targeted advertising.
Early on I think the intent was to fund entirely via BAT. At this point, from what I understand at least, they have a number of things they’re doing, including at least some funding being advertisement based.
The absence of Git integration was one of the main reasons I gave up on Zed. Cool that they’re working on it now, but it’s too late for me.
Why do you feel it’s too late? It’s an editor, you can try as many as you like, and even use many at once.
yep, I can but I’m not interested anymore.
The main reason I worry about zed being too late is because Cursor is gaining so much traction and I feel that Zed is going to have a hard time competing, which means investing my time into using Zed (as I do) is a bit of a waste of time. Cursor has a huge advantage being based on VSCode (I see that as a disadvantage for me personally, but it doesn’t matter - practically it’s an advantage) so while Zed is trying to catch up on basic features and advanced features Cursor already has tons of stuff for free and appears ahead in the advanced stuff too (presumably because they can focus entirely on the AI stuff and reap the benefits of the VSCode extension world).
I’m unsure how it’ll play out but I think it’s probably fair to say that Zed is very behind in many areas compared to Cursor, that they are going after a similar audience, and that it’s unclear how Zed is going to compete against Cursor given this.
It just takes one feature. I switched from Atom to VSCode 8 or 9 years ago because they had better support for the Go debugger. At this point VSCode didn’t even have tabs and the maintainers were opposed to the concept. VSCode just took users one by one adding features that filled a niche until they beat Atom.
Zed is in the same situation today. It feels extremely fast compared to VSCode and that’s the reason I keep giving it a try. It can’t replace VSCode yet for my workflow but it’s getting closer. I hope they’ll move in the right direction.
Hm, I used VS Code when it launched and I distinctly remember it having tabs. The earliest screenshots I can find (one from August 2016 on this page) show tabs too. Maybe you mean tabs in the integrated terminal?
Nope, for a while the editor didn’t have tabs, just “Open editors” in the sidebar. Tabs were added around June 2016. Here’s the issue discussing adding tabs: https://github.com/microsoft/vscode/issues/224
Yeah, I think Zed has the speed advantage and Cursor won’t be able to close that gap for a long time. We’ll see if it pays off, I really hope so. I kinda hate vscode lol
I’m not convinced that concept of sort of spontaneous knowledge, or what sounds as almost a non-causal knowledge, is even possible but it’s interesting - maybe some sort of random selection process + experimentation would play a part here, but that seems perfectly fine to encode into a model. In reality for whatever reason I suspect that all of these people simply processed information and experiences based on biases but ultimately in a way that’s straightforwardly causal, just as any llm would be, and just as I imagine all consciousness is.
Well they aren’t known to the AI. The point is to see if the AI can use the building blocks it has to assemble novel solutions (from the AI’s perspective) to new problems. This is distinctly different from school, as described in the article, where you learn the questions and answers at the same time by reading a book on the topic. School rarely gives you X and Y and then tells you to derive Z based on the shared properties of X and Y + some leap to new properties of Z.
Is this not exactly what those math ones do? The point is that the AI should be able to derive answers to math problems that it hasn’t seen before based on properties of what it has seen before. It seems that these recommendations are largely where things are headed with regards to chain of thought and the benchmarks being used.
If we’re just saying “test it on things that are not a part of its training set” we do that already. If it’s “test it on things it can have no conceptual framework of” I don’t believe that humans are capable of solving those problems and I personally believe that it is literally impossible to do so. There has to be some sort of pre-knowledge from which other knowledge can be derived and likely some sort of external force that provides some kind of epistemic access, I suppose the idea here is to minimize those pre-requisites.
I guess my feelings here are that we are already doing what is being advocated for here, mostly?
I think the counterargument is basically “Poincare basically did or would have invented relativity” and such …
Or someone else would have, in a matter of time
I think that’s basically impossible to against. Nobody would say that in 2025, we wouldn’t know about relativity if Einstein died early
Also, I replace “LLM” with “search engines” again. I would say search engines probably reduced some creative thinking in the general case, but for the most creative thinkers, it probably didn’t (?). Or at least I’ve never heard anyone argue that – I’d have to think about it [1]
It’s funny that the “AI” framing confuses both the advocates and the detractors … I more agree with the framing of “pretty reliable word calculators”, and the surprising thing is that “word calculation” can produce some knowledge/insight
(On the other hand, I guess it might be surprising if the best “word calculator” could produce no insight at all!)
[1] Maybe I need to re-read this 2008 article: Is Google Making Us Stupid? What the Internet is doing to our brains
https://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/
Maybe we are more stupid :) I guess by now we would have seen all the non-search-engine using, non-stupid people achieving more, not sure … My guess is that using ONLY google is a loss, but using google to find primary sources is a HUGE win.
Ooh now here’s a killer app idea: a simple, usable terminal command that you can use to sandbox a particular directory, a la chroot but actually usable for security. So you can run
sandboxin a terminal and everything outside the local dir is unreachable forever to any process started within it, or you runsandbox cargo buildlike you’d runsudoexcept it has the opposite effect. Always starts from the existing local state, so you don’t have to do any setup a la Docker.Not an ideal solution, given that many cargo commands want to touch the network or the rest of the filesystem for things like downloading cached packages, but it’s a thought. Maybe you can have a TOFU type setup where you run it and it goes through and asks “can this talk over the network to
crates.io?” and “can this read~/.cargo/registry/cache? Can this write~/.cargo/registry/cache?”. Then, idk, it remembers the results for that directory and command?I know all the tools are there to make something like this, no idea if it’s feasible in terms of UI though, or even whether it’d actually be useful for security. But it seems like something we should have.
Years ago, I did this with AppArmor to prevent applications from accessing the network. You can use it to restrict file access too.
If you give this one requirement up you can do all of this today pretty easily. It’s a lot harder to do this otherwise as unprivileged sandboxing is already annoying + x-plat sandboxing is extremely painful.
I started exploring/ experimenting with this via https://github.com/insanitybit/cargo-sandbox
But for a first pass you can just do
docker runand use a mount for caching etc if you want.I actually started work on an idea for that years ago but a mixture of “I wasn’t in a good mindstate and then 2019 made it worse” and fears of becoming more vulnerable, not less, if I was the face of a tool others were relying on for security (by going from being at risk of passive/undirected attacks to being at risk of active/directed attacks) caused it to go on de facto hiatus before I finished it.
You can see what got written here: https://github.com/ssokolow/nodo/ (defaults.toml illustrates the direction I was thinking in terms of making something like
nodo cargo buildJust Work™ with a useful amount of sandboxing.)This is possible through apparmor and selinux. It’s not trivial, but doable. Unfortunately macos is on its own here with sandbox-exec being basically unsupported and wired in behaviour.
I think it would be a good idea even for things like default-allow, but preventing writes to home/SSH configuration. But ui? Nah, this is going to be a per-project mess.
I think that would be tricky because the
sandboxprogram would need to know what files and other resources are required by the program it is supposed to execute in order to run them in a subdirectory—there’s not a great programmatic way to do this, and even if there was, it wouldn’t improve security (the command could just say “I need the contents of the user’s private keys” for instance). The alternative is to somehow tell the sandbox program what resources are required by the script which can be really difficult to do in the general case and probably isn’t a lot better than Docker or similar.On a developer workstation, probably most critical are your home directory (could contain SSH keys, secrets to various applications, etc.),
/etc,/var, and/run/user/<UID>. You could use something like bubblewrap to only make the project’s directory visible in$HOME, use a tmpfs for$HOME/.cargo, and use stubs or tmpfses for the other directories.I did this once and it works pretty well and across projects. However, the question is if you don’t trust the build, why would you trust the application itself? So, at that point you want to run it at all or in an isolated VM anyway. So it probably makes more sense to build the project in a low-privileged environment like that as well.
IMO sandboxing is primarily interesting for applications that you trust in principle, but process untrusted data (chat clients, web browsers, etc.). So you sandbox them for when there is a zero-day vulnerability. E.g. running something like Signal, Discord, or a Mastodon client without sandboxing is pretty crazy (e.g. something like iMessage needs application sandboxing + blastdoor all the time to ensure that zero-days cannot elevate to wider access).
Every time this topic comes up I post a similar comment about how hallucinations in code really don’t matter because they reveal themselves the second you try to run that code.
This time I’ve turned that into a blog post: https://simonwillison.net/2025/Mar/2/hallucinations-in-code/
As the person that saw 6 months of copilot kill so many systems due to the accumulation of latent hallucinations… Yeah. No.
That’s fascinating. I’d really enjoy hearing some more about that. Was this a team project? Were there tests? I feel like this would be really valuable as a sort of post mortem.
Lots of different teams and project. I am talking 30% of a 1k engineer department being feature frozen for months to try to dig out of the mess.
And yes there were tests. Tests do not even start to cut it. We are talking death through thousands deep cut.
This is btw not a single anecdote. My network of “we are here to fix shit” people are flooded with these cases. I expect the tech industry output to plummet starting soon.
Again, really interesting and I’ve love more details. I am at a company that has adopted code editors with AI and we have not seen anything like that at all.
That just sounds so extreme to me. Feature frozen for months is something I’ve personally never even heard of, I’ve never experienced anything like that. It feels kind of mind boggling that AI would have done that.
Did developers spend six months checking in code that they hadn’t tested? Because yeah, that’s going to suck. That’s the premise of my post.
Nope. They had tested it. But to test, you have to be able to understand the failure cases. Which you have heuristics for based on how humans write code
These things are trained exactly to avoid this detection. This is how they get good grade. Humans supervising them is not a viable strategy.
I’d like to understand this better. Can you give an example of something a human reviewer would miss because it’s the kind of error a human code author wouldn’t make but an LLM would?
I’m with @Diana here. You test code, but testing does not guarantee the absence of bugs. Testing guarantees the absence of a specific bug that is tested for. LLM-generated code has a habit of failing in surprising ways that humans fail to account for.
This isn’t really my experience unless you just say “write tests”. ex: https://insanitybit.github.io/2025/02/11/i-rolled-my-own-crypto
I used AI primarily for generating test cases, specifically prompting for property tests to check the various properties we expect the cryptography to uphold. A test case found a bug.
“Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?”
Yes, Joel’s about to say, but Jarvis is back in spew mode. “No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it’s supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom.”
Joel’s heard this before. The punchline’s got something to do with a broken clock, if he remembers it right.
“These things teach themselves from experience, right?,” Jarvis continues. “So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns.”
“Yeah. That’s right.” Joel shakes his head. “And vandals had smashed the clock, or something.”
You imply that because one kind of hallucination is obvious, all hallucinations are so obvious that (per your next 3 paragraphs) the programmer must have been 1. trying to dismiss the tool, 2. inexperienced, or 3. irresponsible.
You describe this as a failing of the programmer that has a clear correction (and elaborate a few more paragraphs):
It is, and I do. Even without LLMs, almost every bug I’ve ever committed to prod has made it past “run it yourself” and the test suite. The state space of programs is usually much larger than we intuit and LLM hallucinations, like my own bugs, don’t always throw exceptions on the first run or look wrong when read.
I think you missed the point of this post. It tells the story of figuring out where one hallucination comes from and claims LLMs are especially prone to producing hallucinations about niche topics. It’s about trying to understand in depth how the tool works and the failure mode that it produces hallucinations that looks plausible to inexperienced programmers; you’re responding with a moral dictum that the user is at fault for not looking at it harder. I’m strongly reminds me of @hwayne’s rebuttal of “discipline” advice (discussion).
What does “running” the code prove?
So LLMs leave the QA to me, while automating the parts that have a degree of freedom and creativity to them.
Can you at least understand why some people are not that excited about LLM code assistants?
In a typed system, it proves that your code conforms to the properties of its input and output types, which is nice. In a tested system it proves whatever properties you believe your tests uphold.
QA was always on you. If you don’t enjoy using one, don’t? If you feel that it takes your freedom and creativity away, don’t use it. I don’t use LLMs for a ton of my work, especially the creative stuff.
Which is at odds with the claim in the same sentence, that ‘comprehensive automated tests’ will not prove that code does the right thing. And yes, you can argue that the comprehensive tests might be correct, but do not evaluate the properties you expect the results to have, if you want to split hairs.
Evaluating code for correctness is the hard problem in programming. I don’t think anyone expected LLMs to make that better, but there’s a case to be made that LLMs will make it harder. Code-sharing platforms like Stack Overflow or Github at least provide some context about the fitness of the code, and facilitate feedback.
The article is supposed to disprove that, but all it does is make some vague claims about “running” the code (while simultaneously questioning the motives of people who distrust LLM-generated code). I don’t think it’s a great argument.
Ah, I see what you mean. Yes, I don’t think that “running” code is sufficient testing for hallucinations.
What did you think my article was trying to disprove?
It’s an article that’s mainly about all the ways LLMs can mislead you that aren’t as obvious as hallucinating a method that doesn’t exist. Even the title contains an implicit criticism of LLMs: “Hallucinations in code are the least dangerous form of LLM mistakes”.
If anything, this is a piece about why people should “distrust LLM-generated code” more!
Ah, if you restrict ‘hallucinations’ to specifically mean non-extant functions or variables, then I can see where you’re coming from.
Because they don’t enjoy QA.
I don’t enjoy manual QA myself, but I’ve had to teach each myself to get good at it - not because of LLMs, but because that’s what it takes to productively ship good software.
I actually disagree a little bit here. QA’ing every bit of functionality you use is never going to scale. At some level you have to trust the ability of your fellow human beings to fish out bugs and verify correctness. And yes, it’s easy for that trust to be abused, by supply chain attacks and even more complicated “Jia Tan”-like operations.
But just like LLMs can be said to do copyright laundering, they also launder trust, because it’s impossible for them to distinguish example code from working code, let alone vulnerable code from safe code.
That’s fair - if you’re working on a team with shared responsibility for a codebase you should be able to trust other team members to test their code.
You can’t trust an LLM to test its code.
What I meant was something slightly different. Almost every piece of software that’s not a bootloader runs on a distributed stack of trust. I might trust a particular open source library, I might trust the stdlib, or the operating system itself. Most likely written by strangers on the internet. It’s
curl | sudo bashall the way down.The action of importing code from github, or even copy-pasting it from stack overflow, is qualitatively different from that of trusting the output of an LLM, because an LLM gives you no indication as to whether the code has been verified.
I’d go so far as to say the fact that an LLM emitted the code gives you the sure indication it has not been verified and must be tested—the same as if I wrote quicksort on a whiteboard from memory.
I think this post is more than just another “LLMs bad” post, though I did enjoy your response post as a standalone piece. The author ‘s co-worker figured out it didn’t work pretty quickly. It’s more interesting to me that the author found the source of the hallucination, and that it was a hypothetical that the author themselves had posed.
That’s why I didn’t link to the “Making o1, o3, and Sonnet 3.7 Hallucinate for Everyone” post from mine - I wasn’t attempting a rebuttal of that, I was arguing against a common theme I see in discussions any time the theme of hallucinations in code is raised.
I turned it into a full post when I found myself about to make the exact same point once again.
And that’s fair enough - in context I read your comment as a direct reply. I appreciate all the work you’ve been doing on sharing your experience, Simon!
It’s just as safe as it’s always been.
No,. there’s a lot of policy discretion. The US government has access to any data stored in the US belonging to non-US persons without basic due process like search warrants. The data they choose to access is a policy question. The people being installed in US security agencies have strong connections to global far right movements.
In 2004 servers operated by Rackspace in the UK on behalf of Indymedia were handed over to the American authorities with no consideration of the legal situation in the jurisdiction where they were physically located.
/Any/ organisation- governmental or otherwise- that exposes themselves to that kind of risk needs to be put out of business.
I seem to remember an incident where instapaper went offline. The FBI raided a data centre and took a blade machine offline containing blade servers they had warrants for, and instapapers, which they didn’t. So accidents happen.
Link: https://blog.instapaper.com/post/6830514157
Yes, but in that case the server was in an American-owned datacenter physically located in America (Virginia), where it was within the jurisdiction of the FBI.
That is hardly the same as a server in an American-owned datacenter physically located in the UK, where it was not within the jurisdiction of the FBI.
Having worked for an American “multinational” I can see how that sort of thing can happen: a chain of managers unversed in the law assumes it is doing “the right thing”. Which makes it even more important that customers consider both the actual legal situation and the cost of that sort of foulup when choosing a datacenter.
The FBI has offices around the world.
https://www.fbi.gov/contact-us/international-offices
Serious question, who’s putting data in
us-westetc when there is eu data centres? And does that free rein over data extend to data in European data centres? I was under the impression that safe harbour regs protected it? But it’s been years since I had to know about this kind of stuff and it’s now foggy.It does not matter where the data is stored. Using EU datacenters will help latency if that is where your users are, but it will not protect you from warrants. The author digs into this in this post, but unfortunately, it is in Dutch: https://berthub.eu/articles/posts/servers-in-de-eu-eigen-sleutels-helpt-het/
I re-read the English article a bit better and see he addresses it with sources and linked articles. Saturday morning, what can I say.
A lot of non-EU companies. Seems like a weird question, not everyone is either US or EU. Almost every Latin American company I’ve worked for uses us-east/west, even if it has no US customers. It’s just way cheaper than LATAM data centers and has better latency than EU.
Obviously the world isn’t just US/EU, I appreciate that. This article is dealing with the trade agreements concerning EU/US data protection though so take my comment in that perspective.
I don’t see how this is at odds with the parent comment?
That is the one good thing. It has always been unsafe, but now people are finally starting to understand that.
Because it’s dramatically less safe. Everyone saying “it’s the same as before” has no clue what is happening in the US government right now.
And everyone saying it’s dramatically different has no clue what has happened in the US government in the past.
I haven’t personally made up my mind on this, but one piece of evidence in the “it’s dramatically different (in a bad way)” side of things would be the usage of unvetted DOGE staffers with IRS data. That to me seems to indicate that the situation is worse than before.
yeah could be
You’re incorrect. The US has never had a government that openly seeks to harm its own allies.
What do you mean? Take Operation Desert Storm. Or the early Cold War.
Not sure what you mean—Operational Desert Storm and the Cold War weren’t initiated by the US nor were Iraq and the USSR allies in the sense that the US is allied with Western Europe, Canada, etc (yes, the US supported the USSR against Nazi Germany and Iraq against Islamist Iran, but everyone understood those alliances were temporary—the US didn’t enter into a mutual defense pact with Iraq or USSR, for example).
they absolutely 100% were initiated by the US. yes the existence of a mutual defense pact is notable, as is its continued existence despite the US “seeking to harm” its treaty partners. it sounds like our differing perceptions of whether the present moment is “dramatically different” come down to differences in historical understanding, the discussion of which would undoubtedly be pruned by pushcx.
My gut feeling says that you’re right, but actually I think practically nobody knows whether you are or not. To take one example, it’s not clear whether the US government is going to crash its own banking system: https://www.crisesnotes.com/how-can-we-know-if-government-payments-stop-an-exploratory-analysis-of-banking-system-warning-signs/ . The US governmant has done plenty of things that BAD before but it doesn’t often do anything that STRANGE. I think.
the reply was to me
Oh, yeah. Clearly I’m bad at parsing indentation on mobile.
Just because it was not safe before, doesn’t mean it cannot be (alarmingly) less safe now.
And just because it logically can be less safe now doesn’t mean it is.
It is not. Not anymore. But I don’t want to get into political debate here.
I suspect parent meant it has never been safe
This isn’t true, as the US has been the steward of the Internet and its administration has turned hostile towards US’s allies.
In truth, Europe already had a wake-up call with Snowden’s revelations, the US government spying on non-US citizens with impunity, by coercing private US companies to do it. And I remember the Obama administration claiming that “non-US citizens have no rights”.
But that was about privacy, whereas this time we’re talking about a far right administration that seems to be on a war path with US’s allies. The world today is not the same as it was 10 years ago.
hm, you have a good point. I was wondering why now it would be different but “privacy” has always been too vague a concept for most people to grasp/care about. But an unpredictable foreign government which is actively cutting ties with everyone and reneging on many of its promises with (former?) allies might be a bigger warning sign to companies and governments world wide.
I mean, nobody in their right mind would host stuff pertaining to EU citizens in, say, Russia or China.
Which is to say: its not safe at all and never has been a good idea.
CLI is in Rust >_>
How has this design choice played out? It’s been a few years, I’m curious to hear lessons learned.
I’m into it. I’m a big fan of the typestate pattern, and even though this can feel a bit repetitive for endpoints with less logic, I like that it’s so straightforward. No more worrying about the order various handlers run…
Interesting. So is the idea with regards to typestate that you’d ensure that your routes/apis do X and Y and Z steps before calling into some function F(DidZ) ?
I haven’t open sourced my codebase yet, but yeah. So like, instead of saying “this API call is guarded by a is_logged_in? handler, my “save this Foo to the database” function requires an Authorization struct. How can you get one of those? Well, the only way is to call into the authorization subsystem. And doing that requires a User. And you can only get a User by calling into the authentication subsystem. And that happens to take a request context.
Nexus (the control plane API) does it slightly differently, but they have significantly more complexity to their auth than I do. See 24-39 here: https://github.com/oxidecomputer/omicron/blob/main/nexus/auth/src/context.rs
And some examples of using it here: https://github.com/oxidecomputer/omicron/blob/main/nexus/src/external_api/http_entrypoints.rs
You can see how a lot of these handlers have the general form “grab a nexus instance from the Dropshot context, construct a Context from the request context, then call some nexus method, passing the context in.” Same basic idea, except a little cleaner; I’m sort of in a “embrace a little more boilerplate than I’m used to” moment and so rather than the Context stuff I’m doing the same idea but a bit more “inline” in the handlers. I might remove that duplication soon but I want to sit with it a bit more before I overly refactor.
Anyway, I think that in Nexus, that these handlers have the same sort of shape but are also a bit different is the strength of this approach. I’ve worked on rails apps where the before, after, and around request middleware ended up with subtle dependencies between them, and ordering issues, and “do this 80% of the time but not 20% of the time” kinds of things. Doing stuff this way eliminates all of that; it’s just normal code that you just read in the handler. I’ve also found that this style is basically the “skinny controller” argument from back in the day, and comes with the same benefits. It’s easier to test stuff without needing Dropshot at all, since Dropshot itself is really not doing any business logic whatsoever, which middlewares can often end up doing.
Yep, perfect. Makes sense to me. I’m wondering about composability though. (edit: nvm, code explained my questions)
This is interesting. Bubble implies a pop, yes?
I mean, things were overhyped and ridiculous, but can anyone say that the internet isn’t at the core of the economy?
Still one of the most widely used languages, powering many multi-billion dollar companies.
What we now just think of as “the web”.
Again, powering multi-billion dollar workloads, a major economic factor for top tech companies, massive innovation centers for new database technologies, etc.
Still massively important, much to everyone’s regret.
This one is interesting. I don’t have anything myself but most people I know have a Smart TV (I don’t get it tbh, an HDMI cord and a laptop seems infinitely better)
Still a thing, right? I mean, more than ever, probably.
Weirdly still a thing, but I see this as finally being relegated to what it was primarily good for (with regards to crypto) - crime.
Do we expect this “bubble” to “pop” like the others? If so, I expect AI to be a massive part of the industry in 20 years. No question, things ebb and flow, and some of that ebbing and flowing is extremely dramatic (dot com), but in all of these cases the technology has survived and in almost every case thrived.
All of these things produced real things that are still useful, more or less, but also were massively and absolutely overhyped. I’m looking at the level of the hype more than the level of the technology. Most of these things involved huge amounts of money being dumped into very dubious ventures, most of which has not been worth it, and several of them absolutely involved a nearly-audible pop that destroyed companies.
Yeah, I was just reflecting on the terminology. I’d never really seen someone list out so many examples before and I was struck by how successful and pervasive these technologies are. It makes me think that bubble is not the right word other than perhaps in the case of dot com where there was a very dramatic, bursty implosion.
The typical S-shaped logistic curves of exponential processes seeking (and always eventually finding!) new limits. The hype is just the noise of accelerating money. If you were riding one of these up and then it sort of leveled off unexpectedly, you might experience that as a “pop”.
See https://en.wikipedia.org/wiki/Gartner_hype_cycle
To me the distinguishing feature is the inflated expectations (such as NVidia’s stock price tripling within a year, despite them not really changing much as a company), followed by backlash and disillusionment (often social/cultural, such as few people wanting to associate with cryptobro’s, outside of their niche community). This is accompanied by vast amounts of investment money flooding into, and then out of, the portion of the industry in question both of which self-reinforce the swing-y tendency.
“Dumb TVs” are nigh-impossible to find, and significantly more expensive. Even if you don’t use the “Smart” features, they’ll be present (and spying).
Not for everyone and also not cheap, but many projectors come with something like android on a compute stick that is just plugged into the hdmi port, so unplug and it’s dumb.
Yeah, I’ve been eying a projector myself for a while now, but my wife is concerned about we’d be able to make the space dark enough for the image to be visible.
That’s assuming you make the mistake of connecting it to your network.
At least for now… once mobile data becomes cheap enough to get paid for using stolen personal data we are SO fucked.
Why have a TV though? Sports, maybe?
Multiplayer video games, and watching TV (not necessarily sports) with other people.
I use a monitor with my console for video games, same with watching TV with others. I think the only reason this wouldn’t work is if people just don’t use laptops or don’t like having to plug in? Or something idk
It’s the UX. Being able to watch a video with just your very same TV remote or a mobile phone it’s much much better than plugging your laptop with an HDMI cord. The same reason why still dedicated video game consoles exist even if there are devices like smartphones or computers that are technically just better. Now almost all TVs sold are Smart TVs, but even before, many people (like me) liked to buy TV boxes and TV dongles.
And that’s taking into account that a person own a laptop, because the number of people that doesn’t use PCs outside of work is increasing.
I have a dumb TV with a smart dongle - a jailbroken FireStick running LineageOS TV. The UX is a lot better than I’d have connecting a laptop, generally. If I were sticking to my own media collection, the difference might not be that big, but e.g. Netflix limits the resolution available to browsers, especially on Linux, compared to the app.
Citation needed? So far I only know about then either “stealing” existing tech as a service. Usually in an inferior way to self housing, usually lagging behind.
The other thing is putting whatever in-house DB they had and making it available. That was a one time thing though and since they largely predate the cloud I think it doesn’t make sense to call it innovation.
Yet another thing is a classic strategy of the big companies which is taking small companies or university projects and turn them into products
So I’d argue that innovations do end up in the could (duh!) it’s rarely ever driving them.
Maybe the major thing being around things like virtualization and related things but even here I can’t think of a particular one. All that stuff seems to steem largely from Xen which again originated at University.
As for bubbles: one could also argue that dotcom also still exists?
But I agree that hypes is a better term.
I wonder how many of these are as large because of the (unjustified part of) hype they received. I mean promises that were never kept and expectations that were never met, but investments (learning Java, writing java, making things cloud ready, making things depend on cloud tech, building Blockchain Knowhow, investing into currencies, etc) are the reason why they are still so big.
See Java. You learn that language at almost every university. All the companies learned you can get cheap labor right from University. It’s not about the language but about the economy that built around it. The same it’s true for many other hypes.
Compressed heap pointers with offsets reminds me a lot of the early segmented memory regions offered by x86. The segmentation registers baked the offsets directly into the architecture.
Grsecurity leveraged x86 segmentation for a number of powerful mitigations like UDEREF, which was a precursor to SMAP, using (iirc) the segmentation registers to isolate kernel/ user memory. SMAP is a sort of specialized instruction for this enforcement but Grsecurity used the generalized approach to build numerous mitigations. I suppose segmentation was considered too complex to burn registers on? Not sure why they were removed.
This is, basically, what UDEREF did and it was a decade-ahead (or more?) mitigation from Grsecurity using this concept.
It seems like a bit of a turtles situation. At some point you need to implement this “offset” logic. I suppose a memory tagging system could do it at the hardware level like with grsecurity, but I don’t see why Rust wouldn’t be suitable to implement something like compressed pointers/ offsets.
I … have trouble believing the author of this really understands GC. The whole point of GC is that “ownership” of memory is moot.
External pointers to GC objects are not plain old pointers, not unless you’re using a conservative collector that scans native stacks. Generally they are always some sort of handle that has to be dereferenced through special GC code. Because (a) the objects they point to can move, and (b) the GC has to keep track of these handles because they are roots that keep objects alive.
So the problem of dangling raw pointers to a disposed GC heap is not real, not unless you’re using a conservative collector, which no JS runtime I know of does.
They seem to understand GC just fine, they even describe pointer semantics and how a GC works, as well as V8’s pointer compression. I think it’s perhaps their use of “ownership” that’s a bit contentious but they give their own definition so I think it’s fine (I think their definition is actually accurate?). They also appear to be a developer of a garbage collected language runtime, Nova. Perhaps I’m missing something myself.
If the GC “owns” the memory and the GC is unaware of a reference, which seems both possible and trivial to express in cases like FFI from a GC language, then you run into the issues described. Another case is the implementation of a system like V8, which implements a GC that may have external references (or improperly managed/ generated references) into the memory allocated by javascript being executed (and is the focus of much of the article). This seems consistent and reasonable, and would motivate something like the specific compressed pointers in V8?
I wonder if I’m missing something because I don’t know much about GCs, but this all seems like it makes sense?
But that never happens; if it does, it’s a bug. The GC has to know about every external reference to its objects, because those objects anre by definition live annd cannot be freed; they are “roots” that it traces live objects from.
A conservative collector scans thread stacks looking for anything that might be a pointer to an object. Other collectors, like V8, use special handle types that wrap pointers.
The compressed pointers/ isolated heaps are assuming the presence of bugs.
For example, https://github.blog/security/vulnerability-research/chrome-in-the-wild-bug-analysis-cve-2021-37975/
Here’s one where the JIT gets involved, which seems to be a common problem lately: https://googleprojectzero.github.io/0days-in-the-wild//0day-RCAs/2021/CVE-2021-4102.html
Not every object is managed by the V8 GC, and there are also bugs in that GC, it seems.
Also, at least in Ruby, I know it’s very easy to create invalid references via FFI.
My reading of the article does not support your assumption that it’s all about the presence of bugs. It’s more like “we’ve given out a raw pointer into GC and now people could deref it after the heap is disposed.”
And the big conclusion:
No, actually, what happens is the handle gets dereferenced against some garbage address (where the base pointer used to be located), and now you’re at best reading a random memory location and getting garbage or crashing, and at worst writing to some random memory location and corrupting the process’s state.
Hm, well perhaps the author would have to weigh in then. Given the reference to the V8 isolated heaps, which are definitely about preventing exploitation of GC bugs, and the bits about segmented heaps, and the reference to Halvar’s talk, it does strike me as being about the sorts of issues I described.
Assuming the offset is secret, that seems like quite a win, but I would have to reference the paper on these heaps that the V8 team put out to know, and I don’t have time this morning :\ but “random in a 64bit space” is pretty good, there’s a huge chance you access something that just instantly crashes the system.