On the topic of git send-email being the worst, I’ve been slowly working on an ssh app to replace it for git collaboration. It’s still WIP but you can check it out https://github.com/picosh/git-pr
Maybe I don’t understand something, but replaceable memory isn’t important to me at all. It’s not like batteries and storage devices (which degrade over time) – memory that leaves the factory in non-defective shape will pretty much never fail. And yeah, you might want to increase it in a few years, but my guess is that when you’re ready to upgrade the RAM, it will probably also be about time for a cpu upgrade.
Having the two coupled doesn’t seem like an issue to me.
That said, Framework Computer, Inc. is a for-profit corporation like all the others. Everything it has ever said about waste-reduction or principles – or AI – was and is shameless marketing. It will say whatever it thinks will get it into your bed.
Across the entire fleet, 1.3% of machines are affected by uncorrectable errors per year, with some platforms seeing as many as 2-4% affected.
In the systems we study, all uncorrectable errors are considered serious enough to shut down the machine
and replace the DIMM at fault.
In my experience, because failing RAM generally just causes application or system crashes (as opposed to a completely non-working machine like a dead battery or dead storage), people are more likely to blame software or other hardware components for the issue, not their RAM.
Thanks for this! I no longer know where I first heard the “memory doesn’t degrade” claim, but I’ve heard it repeated enough from smart people that I’ve started repeating it myself.
Without getting to deeply into it, that paper appears to claim that socketed memory does degrade with age. I have some immediate concerns about the study, including that it looks like it was published around 2008-9 (judging by the age charts). I don’t know how relevant that still is today.
There are multiple papers investigating DRAM failures I found that I did not mention in my previous comment. I did not find any that support the idea that “memory doesn’t degrade”.
I could tell you relevant anecdotes from my technical support days about Memtest and the differing outcomes for old iBooks depending on whether their failing RAM was soldered to their logic board or installed in the board’s expansion slot, but I think a paper involving orders of magnitude of more computers than I ever fixed (i.e. “the majority of machines in Google’s fleet … from January 2006 to June 2008”) should be more convincing.
The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That’s why I’m wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL
bugs that happen (i.e. logic issues, race conditions, etc.)
This is an extremely strong statement.
I think a few things are also interesting:
I think people are realizing how low quality the Linux kernel code is, how haphazard development is, how much burnout and misery is involved, etc.
I think people are realizing how insanely not in the open kernel dev is, how much is private conversations that a few are privy to, how much is politics, etc.
I think people are realizing how insanely not in the open kernel dev is, how much is private conversations that a few are privy to, how much is politics, etc.
The Hellwig/Ojeda part of the thread is just frustrating to read because it almost feels like pleading. “We went over this in private” “we discussed this already, why are you bringing it up again?” “Linus said (in private so there’s no record)”, etc., etc.
Dragging discussions out in front of an audience is a pretty decent tactic for dealing with obstinate maintainers. They don’t like to explain their shoddy reasoning in front of people, and would prefer it remain hidden. It isn’t the first tool in the toolbelt but at a certain point there is no convincing people directly.
Dragging discussions out in front of an audience is a pretty decent tactic for dealing with
With quite a few things actually. A friend of mine is contributing to a non-profit, which until recently had this very toxic member (they’ve even attempted felony). They were driven out of the non-profit very soon after members talked in a thread that was accessible to all members. Obscurity is often one key component of abuse, be it mere stubbornness or criminal behaviour. Shine light, and it often goes away.
IIRC Hintjens noted this quite explicitly as a tactic of bad actors in his works.
It’s amazing how quickly people are to recognize folks trying to subvert an org piecemeal via one-off private conversations once everybody can compare notes. It’s equally amazing to see how much the same people beforehand will swear up and down oh no that’s a conspiracy theory such things can’t happen here until they’ve been burned at least once.
This is an active, unpatched attack vector in most communities.
I’ve found the lowest example of this is even meetings minutes at work. I’ve observed that people tend to act more collaboratively and seek the common good if there are public minutes, as opposed to trying to “privately” win people over to their desires.
I think people are realizing how low quality the Linux kernel code is, how haphazard development is, how much burnout and misery is involved, etc.
Something I’ve noticed is true in virtually everything I’ve looked deeply at is the majority of work is poor to mediocre and most people are not especially great at their jobs. So it wouldn’t surprise me if Linux is the same. (…and also wouldn’t surprise me if the wonderful Rust rewrite also ends up poor to mediocre.)
yet at the same time, another thing that astonishes me is how much stuff actually does get done and how well things manage to work anyway. And Linux also does a lot and works pretty well. Mediocre over the years can end up pretty good.
After tangentially following the kernel news, I think a lot of churning and death spiraling is happening. I would much rather have a rust-first kernel that isn’t crippled by the old guard of C developers reluctant to adopt new tech.
Take all of this energy into RedoxOS and let Linux stay in antiquity.
I’ve seen some of the R4L people talk on Mastodon, and they all seem to hate this argument.
They want to contribute to Linux because they use it, want to use it, and want to improve the lives of everyone who uses it. The fact that it’s out there and deployed and not a toy is a huge part of the reason why they want to improve it.
Hopping off into their own little projects which may or may not be useful to someone in 5-10 years’ time is not interesting to them. If it was, they’d already be working on Redox.
The most effective thing that could happen is for the Linux foundation, and Linus himself, to formally endorse and run a Rust-based kernel. They can adopt an existing one or make a concerted effort to replace large chunks of Linux’s C with Rust.
IMO the Linux project needs to figure out something pretty quickly because it seems to be bleeding maintainers and Linus isn’t getting any younger.
They may be misunderstanding the idea that others are not necessarily incentivized to do things just because it’s interesting for them (the Mastodon posters).
Redox does have the chains of trying to do new OS things. An ABI-compatible Rust rewrite of the Linux kernel might get further along than expected (even if it only runs in virtual contexts, without hardware support (that would come later.))
Linux developers want to work on Linux, they don’t want to make a new OS. Linux is incredibly important, and companies already have Rust-only drivers for their hardware.
Basically, sure, a new OS project would be neat, but it’s really just completely off topic in the sense that it’s not a solution for Rust for Linux. Because the “Linux” part in that matters.
I read a 25+ year old article [1] from a former Netscape developer that I think applies in part
The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?
Adopting a “rust-first” kernel is throwing the baby out with the bathwater. Linux has been beaten into submission for over 30 years for a reason. It’s the largest collaborative project in human history and over 30 million lines of code. Throwing it out and starting new would be an absolutely herculean effort that would likely take years, if it ever got off the ground.
The idea that old code is better than new code is patently absurd. Old code has stagnated. It was built using substandard, out of date methodologies. No one remembers what’s a bug and what’s a feature, and everyone is too scared to fix anything because of it. It doesn’t acquire new bugs because no one is willing to work on that weird ass bespoke shit you did with your C preprocessor. Au contraire, baby! Is software supposed to never learn? Are we never to adopt new tools? Can we never look at something we’ve built in an old way and wonder if new methodologies would produce something better?
This is what it looks like to say nothing, to beg the question. Numerous empirical claims, where is the justification?
It’s also self defeating on its face. I take an old codebase, I fix a bug, the codebase is now new. Which one is better?
Like most things in life the truth is somewhere in the middle. There is a reason there is the concept of a “mature node” in the semiconductor industry. They accept that new is needed for each node, but also that the new thing takes time to iron out the kinks and bugs. This is the primary reason why you see apple take new nodes on first before Nvidia for example, as Nvidia require much larger die sizes, and so less defects per square mm.
You can see this sometimes in software for example X11 vs Wayland, where adoption is slow, but most definetly progressing and now-days most people can see that Wayland is now, or is going to become the dominant tech in the space.
I don’t think this would qualify as dialectic, it lacks any internal debate and it leans heavily on appeals by analogy and intuition/ emotion. The post itself makes a ton of empirical claims without justification even beyond the quoted bit.
That means we can probably keep a lot of the old trusty Linux code around while making more of the new code safe by writing it in Rust in the first place.
I don’t think that’s a fair assessment of Spolsky’s argument or of CursedSilicon’s application of it to the Linux kernel.
Firstly, someone has already pointed out the research that suggests that existing code has fewer bugs in than new code (and that the older code is, the less likely it is to be buggy).
Secondly, this discussion is mainly around entire codebases, not just existing code. Codebases usually have an entire infrastructure around them for verifying that the behaviour of the codebase has not changed. This is often made up of tests, but it’s also made up of the users who try out a release of a codebase and determine whether it’s working for them. The difference between making a change to an existing codebase and releasing a new project largely comes down to whether this verification (both in terms of automated tests and in terms of users’ ability to use the new release) works for the new code.
Given this difference, if I want to (say) write a new OS completely in Rust, I need to choose: Do I want to make it completely compatible with Linux, and therefore take on the significant challenge of making sure everything behaves truly the same? Or do I make significant breaking changes, write my own OS, and therefore force potential adopters to rebuild their entire Linux workflows in my new OS?
The point is not that either of these options are bad, it is that they represent significant risks to a project. Added to the general risk that is writing new code, this produces a total level of risk that might be considered the baseline risk of doing a rewrite. Now risk is not bad per se! If the benefits of being able to write an OS in a language like Rust outweigh the potential risks, then it still makes sense to perform the rewrite. Or maybe the existing Linux kernel is so difficult to maintain that a new codebase really would be the better option. But the point that CursedSilicon was making by linking the Spolsky piece was, I believe, that the risks for a project like the Linux kernel are very high. There is a lot of existing, old code. And there is a very large ecosystem where either breaking or maintaining compatibility would each come with significant challenges.
Unfortunately, it’s very difficult to measure the risks and benefits here in a quantitative, comparable way, so I think where you fall on the “rewrite vs continuity” spectrum will depend mostly on what sort of examples you’ve seen, and how close you think this case is to those examples. I don’t think there’s any objective way to say whether it makes more sense to have something like R4L, or something like RedoxOS.
Firstly, someone has already pointed out the research that suggests that existing code has fewer bugs in than new code (and that the older code is, the less likely it is to be buggy).
I haven’t read it yet, but I haven’t made an argument about that, I just created a parody of the argument as presented. I’ll be candid, i doubt that the research is going to compel me to believe that newer code is inherently buggier, it may compel me to confirm my existing belief that testing software in the field is one good method to find some classes of bugs.
Secondly, this discussion is mainly around entire codebases, not just existing code.
I guess so, it’s a bit dependent on where we say the discussion starts - three things are relevant; RFL, which is not a wholesale rewrite, a wholesale rewrite of the Linux kernel, and Netscape. RFL is not about replacing the entire Linux kernel, although perhaps “codebase” here refers to some sort of unit, like a driver. Netscape wanted a wholesale rewrite, based on the linked post, so perhaps that’s what’s really “the single worst strategic mistake that any software company can make”, but I wonder what the boundary here is? Also, the article immediately mentions that Microsoft tried to do this with Word but it failed, but that Word didn’t suffer from this because it was still actively developed - I wonder if it really “failed” just because pyramid didn’t become the new Word? Did Microsoft have some lessons learned, or incorporate some of that code? Dunno.
I think I’m really entirely justified when I say that the post is entirely emotional/ intuitive appeals, rhetoric, and that it makes empirical claims without justification.
There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:
This is rhetoric. These are unsubstantiated empirical claims. The article is all of this. It’s fine as an interesting, thought provoking read that gets to the root of our intuitions, but I think anyone can dismiss it pretty easily since it doesn’t really provide much in the form of an argument.
It’s important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time.
Again, totally unsubstantiated. I have MANY reasons to believe that, it is simply question begging to say otherwise.
That’s all this post is. Over and over again making empirical claims with no evidence and question beggign.
We can discuss the risks and benefits, I’d advocate for that. This article posted doesn’t advocate for that. It’s rhetoric.
existing code has fewer bugs in than new code (and that the older code is, the less likely it is to be buggy).
This is a truism. It is survival bias. If the code was buggy, it would eventually be found and fixed. So all things being equal newer code is riskier than old code. But it’s also been impirically shown that using Rust for new code is not “all things being equal”. Google showed that new code in Rust is as reliable as old code in C. Which is good news: you can use old C code from new Rust projects without the risk that comes from new C code.
But it’s also been impirically shown that using Rust for new code is not “all things being equal”.
Yeah, this is what I’ve been saying (not sure if you’d meant to respond to me or the parent, since we agree) - the issue isn’t “new” vs “old” it’s things like “reviewed vs unreviewed” or “released vs unreleased” or “tested well vs not tested well” or “class of bugs is trivial to express vs class of bugs is difficult to express” etc.
I don’t disagree that the rewards can outweigh the risks, and in this case I think there’s a lot of evidence that suggests that memory safety as a default is really important for all sorts of reasons. Let alone the many other PL developments that make Rust a much more suitable language to develop in than C.
It’s a Ship of Theseus—at no point can you call it a “new” codebase, but after a period of time, it could be completely different code. I have a C program I’ve been using and modifying for 25 years. At any given point, it would have been hard to say “this is now a new codebase,
yet not one line of code in the project is the same as when I started (even though it does the same thing at it always has).
I don’t see the point in your question. It’s going to depend on the codebase, and on the nature of the changes; it’s going to be nuanced, and subjective at least to some degree. But the fact that it’s prone to subjectivity doesn’t mean that you get to call an old codebase with a single fixed bug a new codebase, without some heavy qualification which was lacking.
What’s old and new is poorly defined and yet there’s an argument being made that “old” and “new” are good indicators of something. If they’re so poorly defined that we have to bring in all sorts of additional context like the nature of the changes, not just when they happened or the number of lines changed, etc, then it seems to me that we would be just as well served to throw away the “old” and “new” and focus on that context.
I feel like enough people would agree more-or-less on what was an “old” or “new” codebase (i.e. they would agree given particular context) that they remain useful terms in a discussion. The general context used here is apparent (at least to me) given by the discussion so far: an older codebase has been around for a while, has been maintained, has had kinks ironed out.
There’s a really important distinction here though. The point is to argue that new projects will be less stable than old ones, but you’re intuitively (and correctly) bringing in far more important context - maintenance, testing, battle testing, etc. If a new implementation has a higher degree of those properties then it being “new” stops being relevant.
It’s also self defeating on its face. I take an old codebase, I fix a bug, the codebase is now new. Which one is better?
My point was that this statement requires a definition of “new codebase” that nobody would agree with, at least in the context of the discussion we’re in. Maybe you are attacking the base proposition without applying the surrounding context, which might be valid if this were a formal argument and not a free-for-all discussion.
If a new implementation has a higher degree of those properties
I think that it would be considered no longer new if it had had significant battle-testing, for example.
FWIW the important thing in my view is that every new codebase is a potential old codebase (given time and care), and a rewrite necessarily involves a step backwards. The question should probably not be, which is immediately better?, but, which is better in the longer term (and by how much)? However your point that “new codebase” is not automatically worse is certainly valid. There are other factors than age and “time in the field” that determine quality.
Methodologies don’t matter for quality of code. They could be useful for estimates, cost control, figuring out whom you shall fire etc. But not for the quality of code.
I’ve never observed a programmer become better or worse by switching methodology. Dijkstra would’ve not became better if you made him do daily standups or go through code reviews.
There are ways to improve your programming by choosing different approach but these are very individual. Methodology is mostly a beancounting tool.
When I say “methodology” I’m speaking very broadly - simply “the approach one takes”. This isn’t necessarily saying that any methodology is better than any other. The way I approach a task today is better, I think, then the way that I would have approached that task a decade ago - my methodology has changed, the way I think has changed. Perhaps that might mean I write more tests, or I test earlier, but it may mean exactly the opposite, and my methods may only work best for me.
I’m not advocating for “process” or ubiquity, only that the approach one tasks may improve over time, which I suspect we would agree on.
It’s the largest collaborative project in human history and over 30 million lines of code.
How many of those lines are part of the core? My understanding was that the overwhelming majority was driver code. There may not be that much core subsystem code to rewrite.
For a previous project, we included a minimal Linux build. It was around 300 KLoC, which included networking and the storage stack, along with virtio drivers.
That’s around the size a single person could manage and quite easy with a motivated team.
If you started with DPDK and SPDK then you’d already have filesystems and a copy of the FreeBSD network stack to run in isolated environments.
Once many drivers share common rust wrappers over core subsystems, you could flip it and write the subsystem in Rust. Then expose C interface for the rest.
I see that Drew proposes a new OS in that linked article, but I think a better proposal in the same vein is a fork. You get to keep Linux, but you can start porting logic to Rust unimpeded, and it’s a manageable amount of work to keep porting upstream changes.
Remember when libav forked from ffmpeg? Michael Niedermayer single-handedly ported every single libav commit back into ffmpeg, and eventually, ffmpeg won.
At first there will be extremely high C percentage, low Rust percentage, so porting is trivial, just git merge and there will be no conflicts. As the fork ports more and more C code to Rust, however, you start to have to do porting work by inspecting the C code and determining whether the fixes apply to the corresponding Rust code. However, at that point, it means you should start seeing productivity gains, community gains, and feature gains from using a better language than C. At this point the community growth should be able to keep up with the extra porting work required. And this is when distros will start sniffing around, at first offering variants of the distro that uses the forked kernel, and if they like what they taste, they might even drop the original.
I genuinely think it’s a strong idea, given the momentum and potential amount of labor Rust community has at its disposal.
I think the competition would be great, especially in the domain of making it more contributor friendly to improve the kernel(s) that we use daily.
I certainly don’t think this is impossible, for sure. But the point ultimately still stands: Linux kernel devs don’t want a fork. They want Linux. These folks aren’t interested in competing, they’re interested in making the project they work on better. We’ll see if some others choose the fork route, but it’s still ultimately not the point of this project.
Linux developers want to work on Linux, they don’t want to make a new OS.
While I don’t personally want to make a new OS, I’m not sure I actually want to work on Linux. Most of the time I strive for portability, and so abstract myself from the OS whenever I can get away with it. And when I can’t, I have to say Linux’s API isn’t always that great, compared to what the BSDs have to offer (epoll vs kqueue comes to mind). Most annoying though is the lack of documentation for the less used APIs: I’ve recently worked with Netlink sockets, and for the proc stuff so far the best documentation I found was the freaking source code of a third party monitoring program.
I was shocked. Complete documentation of the public API is the minimum bar for a project as serious of the Linux kernel. I can live with an API I don’t like, but lack of documentation is a deal breaker.
While I don’t personally want to make a new OS, I’m not sure I actually want to work on Linux.
I think they mean that Linux kernel devs want to work on the Linux kernel. Most (all?) R4L devs are long time Linux kernel devs. Though, maybe some of the people resigning over LKML toxicity will go work on Redox or something…
Re-Implementing the kernel ABI would be a ton of work for little gain if all they wanted was to upstream all the work on new hardware drivers that is already done - and then eventually start re-implementing bits that need to be revised anyway.
If the singular required Rust toolchain didn’t feel like such a ridiculous to bootstrap 500 ton LLVM clown car I would agree with this statement without reservation.
Zig is easier to implement (and I personally like it as a language) but doesn’t have the same safety guarantees and strong type system that Rust does. It’s a give and take. I actually really like Rust and would like to see a proliferation of toolchain options, such as what’s in progress in GCC land. Overall, it would just be really nice to have an easily bootstrapped toolchain that a normal person can compile from scratch locally, although I don’t think it necessarily needs to be the default, or that using LLVM generally is an issue. However, it might be possible that no matter how you architect it, Rust might just be complicated enough that any sufficiently useful toolchain for the language could just end up being a 500 ton clown car of some kind anyways.
Depends on which parts of GP’s statement you care about: LLVM or bootstrap. Zig is still depending on LLVM (for now), but it is no longer bootstrappable in a limited number of steps (because they switched from a bootstrap C++ implementation of the compiler to keeping a compressed WASM build of the compiler as a blob.
Yep, although I would also add it’s unfair to judge Zig in any case on this matter now given it’s such a young project that clearly is going to evolve a lot before the dust begins to settle (Rust is also young, but not nearly as young as Zig). In ten to twenty years, so long as we’re all still typing away on our keyboards, we might have a dozen Zig 1.0 and a half dozen Zig 2.0 implementations!
Yeah, the absurdly low code quality and toxic environment make me think that Linux is ripe for disruption. Not like anyone can produce a production kernel overnight, but maybe a few years of sustained work might see a functional, production-ready Rust kernel for some niche applications and from there it could be expanded gradually. While it would have a lot of catching up to do with respect to Linux, I would expect it to mature much faster because of Rust, because of a lack of cruft/backwards-compatibility promises, and most importantly because it could avoid the pointless drama and toxicity that burn people out and prevent people from contributing in the first place.
From the thread in OP, if you expand the messages, there is wide agreement among the maintainers that all sorts of really badly designed and almost impossible to use (safely) APIs ended up in the kernel over the years because the developers were inexperienced and kind of learning kernel development as they went. In retrospect they would have designed many of the APIs very differently.
It’s based on my forays into the Linux kernel source code. I don’t doubt there’s some quality code lurking around somewhere, but the stuff I’ve come across (largely filesystem and filesystem adjacent) is baffling.
Seeing how many people are confidently incorrect about Linux maintainers only caring about their job security and keeping code bad to make it a barrier to entry, if nothing else taught me how online discussions are a huge game of Chinese whispers where most participants don’t have a clue of what they are talking about.
I doubt that maintainers are “only caring about their job security and keeping back code” but with all due respect: You’re also just taking arguments out of thin air right now. What I do believe is what we have seen: Pretty toxic responses from some people and a whole lot of issues trying to move forward.
Seeing how many people are confidently incorrect about Linux maintainers only caring about their job security and keeping code bad to make it a barrier to entry
Huh, I’m not seeing any claim to this end from the GP, or did I not look hard enough? At face value, saying that something has an “absurdly low code quality” does not imply anything about nefarious motives.
Still, in GP’s case the Chinese whispers have reduced “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” to “absurdly low quality”. To which I ask, what is more likely. 1) That 30-million lines of code contain various levels of technical debt of which maintainers are aware; and that said maintainers are worried even of code where the technical debt is real but not causing substantial issue in practice? Or 2) that a piece of software gets to run on literally billions of devices of all sizes and prices just because it’s free and in spite of its “absurdly low quality”?
Linux is not perfect, neither technically nor socially. But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face.
GP here: I probably should have said “shockingly” rather than “absurdly”. I didn’t really expect to get lawyered over that one word, but yeah, the idea was that for a software that runs on billions of devices, the code quality is shockingly low.
Of course, this is plainly subjective. If your code quality standards are a lot lower than mine then you might disagree with my assessment.
That said, I suspect adoption is a poor proxy for code quality. Internet Explorer was widely adopted and yet it’s broadly understood to have been poorly written.
But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face
I’m sure self-righteousness could get you to the same place, but in my case I arrived by way of experience. You can relax, I wasn’t attacking Linux—I like Linux—it just has a lot of opportunity for improvement.
I guess I’ve seen the internals of too much proprietary software now to be shocked by anything about Linux per se. I might even argue that the quality of Linux is surprisingly good, considering its origins and development model.
I think I’d lawyer you a tiny bit differently: some of the bugs in the kernel shock me when I consider how many devices run that code and fulfill their purposes despite those bugs.
FWIW, I was not making a dig at open source software, and yes plenty of corporate software is worse. I guess my expectations for Linux are higher because of how often it is touted as exemplary in some form or another. I don’t even dislike Linux, I think it’s the best thing out there for a huge swath of use cases—I just see some pretty big opportunities for improvement.
But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face.
Or actual benchmarks: the performance the Linux kernel leaves on the table in some cases is absurd. And sure it’s just one example, but I wouldn’t be surprised if it was representative of a good portion of the kernel.
Well not quite but still “considered broken beyond repair by many people related to life time management” - which is definitely worse than “hard to formalize” when “the way ever[y]body does it” seems to vary between each user.
I love Rust but still, we’re talking of a language which (for good reasons!) considers doubly linked lists unsafe. Take an API that gets a 4 on Rusty Russell’s API design scale (“Follow common convention and you’ll get it right”), but which was designed for a completely different programming language if not paradigm, and it’s not surprising that it can’t easily be transformed into a 9 (“The compiler/linker won’t let you get it wrong”). But at the same time there are a dozen ways in which, according to the same scale, things could actually be worse!
What I dislike is that people are seeing “awareness of complexity” and the message they spread is “absurdly low quality”.
Note that doubly linked lists are not a special case at all in Rust. All the other common data structures like Vec, HashMap etc. also need unsafe code in their implementation.
Implementing these datastructures in Rust, and writing unsafe code in general, is indeed roughly a 4. But these are all already implemented in the standard library, with an API that actually is at a 9. And std::collections::LinkedList is constructive proof that you can have a safe Rust abstraction for doubly linked lists.
Yes, the implementation could have bugs, thus making the abstraction leaky. But that’s the case for literally everything, down to the hardware that your code runs on.
You’re absolutely right that you can build abstractions with enough effort.
My point is that if a doubly linked list is (again, for good reasons) hard to make into a 9, a 20-year-old API may very well be even harder. In fact, std::collections::LinkedList is safe but still not great (for example the cursor API is still unstable); and being in std, it was designed/reviewed by some of the most knowledgeable Rust developers, sort of by definition. That’s the conundrum that maintainers face and, if they realize that, it’s a good thing. I would be scared if maintainers handwaved that away.
Yes, the implementation could have bugs, thus making the abstraction leaky.
Bugs happen, but if the abstraction is downright wrong then that’s something I wouldn’t underestimate. A lot of the appeal of Rust in Linux lies exactly in documenting/formalizing these unwritten rules, and wrong documentation can be worse than no documentation (cue the negative parts of the API design scale!); even more so if your documentation is a formal model like a set of Rust types and functions.
That said, the same thing can happen in a Rust-first kernel, which will also have a lot of unsafe code. And it would be much harder to fix it in a Rust-first kernel, than in Linux at a time when it’s just feeling the waters.
In fact, std::collections::LinkedList is safe but still not great (for example the cursor API is still unstable); and being in std, it was designed/reviewed by some of the most knowledgeable Rust developers, sort of by definition.
At the same time, it was included almost as like, half a joke, and nobody uses it, so there’s not a lot of pressure to actually finish off the cursor API.
It’s also not the kind of linked list the kernel would use, as they’d want an intrusive one.
And yet, safe to use doubly linked lists written in Rust exist. That the implementation needs unsafe is not a real problem. That’s how we should look at wrapping C code in safe Rust abstractions.
The whole comment you replied to, after the one sentence about linked lists, is about abstractions. And abstractions are rarely going to be easy, and sometimes could be hardly possible.
That’s just a fact. Confusing this fact for something as hyperbolic as “absurdly low quality” is stunning example of the Dunning Kruger effect, and frankly insulting as well.
I personally would call Linux low quality because many parts of it are buggy as sin. My GPU stops working properly literally every other time I upgrade Linux.
No one is saying that Linux is low quality because it’s hard or impossible to abstract some subsystems in Rust, they’re saying it’s low quality because a lot of it barely works! I would say that your “Chinese whispers” misrepresents the situation and what people here are actually saying. “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” doesn’t apply if no one can tell you how to use an API, and everyone does it differently.
Actually, the NT kernel of all things seems to have a pretty good reputation, and I wouldn’t dismiss the BSD kernels out of hand. I don’t know which kernel is better, but it seems you do. If you could explain how you came to this conclusion that would be most helpful.
*nod* I haven’t been a Windows person since shortly after the release of Windows XP (i.e. the first online activation DRM’d Windows) but, whenever I see glimpses of what’s going on inside the NT kernel in places like Project Zero: The Definitive Guide on Win32 to NT Path Conversion, it really makes me want to know more.
It’s great that Nix works for you (and others) but in my experience Nix has got to be the single WORST example of UX of any technology on this planet. I say this as someone who has collectively spent weeks of time trying very hard (in 2020, 2022, and 2023) to use Nix for developer environment management and been left with nothing. I also used NixOS briefly in 2020.
Trivial tasks like wanting to apply a custom patch file as part of the flake setup I could not figure out after hours of documentation reading, asking for help on #irc afterwards, and on matrix. Sure, if I clone the remote repo, commit my patchfile and then have Nix use that as the package it’s fine.. but that’s a lot of work to replace a single patch shell command and now I have to run an entire git server, or be forced to use a code forge, and then mess around with git settings so it doesn’t degrade local clones for security reasons.
Nix’s documentation is incredibly verbose in the most useless places and also non-existent in the most critical. It is the only time I’ve ever felt like I was actively wasting my time reading a project’s documentation. If you already completely understand Nix then Nix documentation is great, for anyone else… I don’t know.
Last I checked flakes were still experimental after however many years it’s been meaning the entire ecosystem built on-top of them is unstable. They aren’t beta, or even alpha. A decision needs to be made on whether flakes come or go (maybe it has been now) because having your entire ecosystem built on quicksand doesn’t inspire confidence to invest the (considerable) time to learn Nix.
Manually wrangling outdated dependencies when you work with software that is on a faster release cycle than Nixpkg checkpoints is painful, and unstable Nixpkgs are just that.. unstable and annoying to update. Also, cleaning orphaned leaves and the like is not trivial and has to be researched versus just being a simple to understand (and documented) command.
Things like devshell, nix-shell or whatever it’s called (I cannot remember anymore) are but various options one has to explore to get developer environments which are, for some reason, not a core part of Nix (since these 3rd party flakes exist in the first place). Combine this with all the other little oddities for which there exists multiple choices, along with the uselessness of Nix’s documentation (i.e. you cannot form an understanding of Nix) and you’re suddenly in a situation where you’re adopting things for which you have no idea the consequence of. Any problem you run into must be solved with either luck (that someone else has encountered it and you find a blog post, a github issue, etc) or brute force guesswork; stabbing in the dark.
The Nix language syntax is unreadable and the errors it outputs are undecipherable to the point of the community making entire packages to display actually human readable errors, or pages long tutorials on how to read them.
I wish I had been successful with Nix, clearly some other people are. Nix worked for me in trivial cases (and it is great when it does!) but the second I wanted to do something “non-trivial” (i.e. actually useful) it was like driving at 100 km/h into a brick wall. Maybe things will improve in the future but until then Podman and OCI containers or microvms are far, far superior to anything NIx can provide in my experience. I will die on this hill.
Yes, they are not completely hermetic like Nix is but I’ve never seen nor encountered a situation where you need a completely hermetic environment. I have no doubt these situations exist but I would (as an educated guess) argue they are needed far less often than people think.
In my experience, happy nix user, nix should only be used if you have had to fight with the other package managers in anger to get something impossible done. You’ll only be motivated to push past the pain of learning it if you have enough anger about whatever you are already using.
If you don’t have that anger it’s hard to push past the Nix learning curve. Which is a shame because it genuinely is a better package management/build/infra-as-code solution.
I guess I don’t see how you would patch a existing package in say Debian or arch easier then forking it and maintaining a patch…..
Heck, I couldn’t even figure out how to make deb packages, arch was much easier but still a huge pain. With NixOS I can apply patches to anything (albeit I am not using flakes, just patching nixpkgs where needed in my fork or using package override in my config). I’ve never felt quite this powerful at modifying core system components w/o breaking something or having to do a disk backup rollback.
Having the nixpkgs repo is better the documentation IMO, just grep and look at usages. This doesn’t cover flakes, but I find the documentation and CLI help/man pretty good for flakes; and there are good examples in many project to pull from.
Nothing compares to using home-manager for dot files and user level config, will never go back from that. There is no drift, all machines stay in sync with everything in versioned files that can be modularized for re-use.
Trivial tasks like wanting to apply a custom patch file as part of the flake setup
That’s not a trivial task. Flakes do not support patching the flake source. Nix makes it trivial to patch packages as they’re built, but patching Nix source code is not simple. More generally, if you want to patch Nix source code (whether it’s flakes or whether it’s via fetchTarball) you need to use IFD (Import From Derivation). https://wiki.nixos.org/wiki/Nixpkgs/Patching_Nixpkgs has a demonstration of how to use this to patch nixpkgs itself. In the case of an arbitrary flake, if the flake has a default.nix and importing that gets you what you want then you can do the exact same thing that URL does to patch it. If you need access to the patched flake’s outputs (e.g. if you’re patching a nixosModule) then I would look at using flake-compat to get at the outputs of the patched flake.
The funniest thing to me is that 50% of people say: avoid flakes and half of the rest say: I only managed to get something done in nix because of flakes (me included).
wanting to apply a custom patch file as part of the flake setup
I wanted to have a flake with one package at a different version than is the release (or whatever) which was also super annoying.
brute force guesswork; stabbing in the dark
I thought it should be doable to for instance build a node project for which it turns out there are half a dozen unmaintained projects and no documentation. Seemingly because an experienced Nix person can whip this out in two seconds so nobody bothers to document it.
but I’ve never seen nor encountered a situation where you need a completely hermetic environment
100% true
I think most people are better off using something like nx or buck to build their stuff.
Yeah I dual-boot NixOS and Arch. For whatever I can use NixOS for without much trouble, I prefer it. However, it’s nice to be able to bail out into Arch when I run into something that will clearly take many more hours of my time to figure out in NixOS than I desire (lately, developing a mixed OCaml/C++ project). I symlink my home subdirectories so it’s easy to reboot and pick up where I left off (there are definitely still dev tools in 2025 that hate symlinked directories though, fair warning to anybody else who wants to try this).
I think flakes complicated things a lot. I started using Nix pre-flakes and did not find it hard to pick up. The language is pretty familiar if you used Haskell or a comparable functional language at some point. The language, builders, etc. clicked for me after reading the Nix pills.
Flakes are quite great in production for reproducibility (though Niv provided some of the same benefits), but adds a layer that makes a lot of people struggle. It removes some of the ‘directness’ that Nix had, making it harder to quickly iterate on things. It also split up the docs, the community, and made a lot of historical posts/solutions harder to apply.
Trivial tasks like wanting to apply a custom patch file as part of the flake setup
Could you elaborate what you mean by applying a custom patch? Do you want to patch an external flake itself or a package from nixpkgs/a flake. Adding a patch to a package is pretty easy with overrideAttrs, I do this all the time and it’s a superpower of Nix, compared to other package managers where you have to basically fork and maintain packages.
Yea I agree. I investigated nix a year or two ago when flakes were just starting to become popular and it was a total mess to figure out. Anything outside of ordinary was a rabbit hole to figure out.
I think a better solution to the same problem is an immutable OS with distrobox. That solution leverages tech most of us already understand without the terrible programming language and fragmented ecosystem.
I ended up moving away from that setup because I need to actually work on projects instead of tinkering with my setup but I wrote a post about it: https://bower.sh/opensuse-microos-container-dev
Totally agree and it’s bigger than just LLMs. Consciousness is not unique or as complicated as we had hoped. I use hope because we keep clutching onto the idea that consciousness is mysterious. All we need to do is merge the two modes, training and inference, then remove the human from the equation. That’s it.
“whatever is may be” is exactly the mysticism I’m talking about. There’s nothing special about consciousness, we are just hopelessly biased by our own egos.
Cognitive science was always one of my pet subjects. So, there was this book I read, years back: The User Illusion. And then there was this other popular book, which presaged the Deep Learning explosion: On Intelligence. Long before that, Daniel Dennett’s Consciousness Explained got a lot of attention.
I don’t want to spoil any of these for anybody, but I suggest checking out the responses that their arguments received in serious academic reviews. Might learn something!
All we need to do is merge the two modes, training and inference, then remove the human from the equation.
“Look, all we need to do is merge the two modes, tractor beams and faster-than-light drives, and then we could build the Starship Enterprise. That’s it.”
No, consciousness is a different phenomenon, and we don’t even have a robust definition of what would count as consciousness in non-humans.
We recycle terms of human cognition for behaviors we observe in ML models, but that doesn’t mean they’re the same behaviors. It doesn’t really matter whether that’s true “reasoning” like humans do, we just need some word for the extra output they generate that improves their ability to find the correct answers.
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim — Dijkstra
Well, at least you’re not even wrong! I don’t see how you get that from what is posted except for the overloading of terms between LLMs and neuroscience or cognitive research or something. We have no idea what consciousness is.
I find it weird that they bundle s3 and postgres support directly into a runtime instead of having them as libraries. Looking at the documentation for s3, I see
Bun aims to be a cloud-first JavaScript runtime. That means supporting all the tools and services you need to run a production application in the cloud.
and I get that it’s better to have a native implementation rather than a pure JavaScript one for speed (though I’m skeptic of their 5x claim). But I can’t see this approach scaling. If performance is a concern, then other services you communicate with also need to be fast. In particular Redis/memcached or whatever caching mechanism you prefer, but also ElasticSearch or maybe some Oracle database you need to talk to as well. Should those also be in the runtime? And how do you handle backwards compatibility for those libraries?
I guess this is just a long-winded way of saying “Why aren’t they implementing this as native add-ons and instead force this into the runtime?”. It feels much more modular and sensible to put it outside of the runtime.
My two biggest complaints are mostly to do with the API - one being that s3 is imported from bun, rather than bun/s3 or similar. Python provides a big stdlib, but the modules still have different names. You don’t do from python import DefaultDict, httplib, for example.
The other one is that the file function supports s3 URIs, which is nice from a minimal API perspective, but I also think it’s not ideal to treat s3 the same as local files. s3 has a lot of additional setup - e.g AWS credentials, handling auth errors, etc. So I think it makes sense to logically separate the behaviour for local vs remote files.
I don’t mind new takes on AWS / postgres sdks, though. The SDK is pretty decent compared to some others (e.g Google or Slack), but I think both their AWS and postgress examples there (other than the two issues I mentioned) are pretty nice.
I agree with your sentiment and I am also very confused why I would use the Bun s3 implementation over the whole AWS SDK that I have been using and accustomed to for years now. Sure there could be some performance gains (for just S3) but I don’t see the benefit.
I’ve run into contention on S3 in a Python backend, and it’s really not fun. It’s a very good feature to have this sorted and guaranteed to work fast, it means that Bun can stay competitive with compiled languages for more intensive workloads. To me, this is a production mindset: identify the key components, and optimise them so that they don’t get in your way.
I prefer running web apps in my standard browser over the electron apps 100% of the time, and one of the reasons is that I, as the end user, am empowered with choice that way that electron denies me. And the general user experience is generally better, with better integration in my desktop (which you might not expect from a web app, but it is true because my browser is better integrated than their browser) and controls the other one might deny me.
It hasn’t been much of a compatibility problem either, everything ive wanted to use has worked fine, so I don’t buy the line about needing separate downloads for version control either.
uBlock is also very helpful for thwarting some of the dark patterns in web app design. For example, it is trivial to block the “recommendations” sections on YouTube to avoid falling in to rabbit holes there.
As another example, I’ve personally blocked almost the entire “home page” of GitHub with its mix of useless eye-grabbing information and AI crap. Ideally I’d just not use GitHub, but the network effect is strong and being able to exert my will on the interface to some extent makes it more tolerable for now.
Indeed, and this is a legit reason for users to prefer the web apps… but if you use their browser instead of your browser, you throw those sandbox benefits away. (as well as the actually kinda nice ui customization options you have in the web world)
Sure, but since Electron apps are usually just wrapped web apps anyway, might as well use them in a browser where you get to block unwanted stuff. At least if that’s a concern for you.
It’s a bit surreal to me that a guy who maintains electron and works at notion tries to tell me that I’m wrong about electron while their app is so broken that I can’t even log in in it because the input fields don’t work for me.
It exists in a lot of cases to get past gatekeepers at larger companies. Buyers in these organizations use checklists for product comparison, so the lack of a desktop app can rule a product out of contention. A PWA would likely suffice, but these get surprisingly negative feedback from large organizations where the people with the control and giving the feedback are somewhat distant from the usage.
Developers respond by doing minimal work to take advantage of the desktop. The developers know they could do deeper desktop integration, but product management only want to check the box and avoid divergence from the web experience (along with a dose of MVP-itis). End users could get more value from an otherwise cruddy Electron app, if it exploited helpful desktop integrations.
Clearly, it’d be better to have a native app that exploits the desktop, but this is unlikely to happen when the customer is checking boxes (but not suggesting solid integration use cases) and PMs are overly focused on MVPs (with limited vision and experience with how desktop apps can shine.) It’s funny how these things change when it comes to native mobile apps because cross-platform apps can get dinged on enterprise checklists while PMs are willing to commit heavily.
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
WHY this is the case is a respactable topic for sure, but since I already know the outcome I’m more interested in other topics.
I believe in your perception, but I wonder how people determine this sort of thing.
It seems like an availability heuristic: if you notice an app is bad, and discover it’s made in Electron, you remember that. But if an app isn’t bad, do you even check how it was built?
Sort of like how you can always tell bad plastic surgery, but not necessarily good plastic surgery.
On macOS, there has been a shift in the past decade from noticing apps have poor UIs and seeing that they are Qt, to seeing that they are Electron. One of the problems with the web is that there’s no standard rich text edit control. Cocoa’s NSTextView is incredibly powerful, it basically includes an entire typesetting engine with hooks exposed to everything. Things like drag-and-drop, undo, consistent keyboard shortcuts, and so on all work for free if you use it. Any app that doesn’t use it, but exposes a way of editing text, sticks out. Keyboard navigation will work almost how you’re used to, for example. In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.
In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.
The is probably the best criticism of Electron apps in this thread that’s not just knee-jerk / dogpiling. It’s absolutely valid and even for non-Electron web apps it’s a real problem. I work at a company that had it’s own collaborative rich-text editor based on OT, and it is both a tonne of work to maintain and extend, and also subtly (and sometimes not-so-subtly) different to every other rich text editor out there.
I’ve been using Obsidian a fair bit lately. I’m pretty sure it’s Electron-based but on OSX that still means that most of the editing shortcuts work properly. Ctrl-a and ctrl-d for start and end of line, ctrl-n and ctrl-p for next and previous line, etc. These are all Emacs hotkeys that ended up in OSX via NeXT. Want to know what the most frustrating thing has been with using Obsidian cross platform? Those Emacs hotkeys that all work on OSX don’t work on the Linux version… on the Linux version they do things like Select All or Print. Every time I switch from my Mac laptop to my Linux desktop I end up frustrated from all of the crap that happens when I use my muscle memory hotkeys.
This is something that annoys me about Linux desktops. OPENSTEP and CDE, and even EMACS, supported a meta key so that control could be control and invoking shortcuts was a different key. Both KDE and GNOME were first released after Windows keys were ubiquitous on PC keyboards that could have been used as a command / meta key, yet they copied the Windows model for shortcuts.
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
You mean middle click, right? I say that in jest, but anytime I’m on a non-Linux platform, I find myself highlighting and middle clicking, then realizing that just doesn’t work here and sadly finding the actual clipboard keys.
X11’s select buffer always annoyed me because it conflates two actions. Selecting and copying are distinct operations and need to be to support operations like select and paste to overwrite. Implicitly doing a copy-like operation is annoying and hits a bunch of common corner cases. If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps share selection to the select buffer when it’s selected, some do it when they are active and a selection exists, it’s not clear which is ‘correct’ behaviour).
The select buffer exists to avoid needing a clipboard server that holds a copy of the object being transferred, but drag and drop (which worked reliably on OPENSTEP and was always a pain on X11) is a better interaction model for that. And, when designed properly, has better support for content negotiation, than the select buffer in X11. For example, on macOS I can drag a file from the Finder to the Terminal and the Terminal will negotiate the path of the file as the type (and know that it’s a file, not a string, so properly escape it) and insert it into the shell. If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal? Without massive hacks and tight coupling?
If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal?
There’s no reason why it shouldn’t on the X level - middle clicks to the same content negotiation as any other clipboard or drag and drop operation (in fact, it is literally the same, asking for the TARGETS property, then calling XConvertSelection with the format you want, the only difference is that second argument to XConvertSelection - PRIMARY, CLIPBOARD, or XdndSelection).
If it doesn’t work, it is probably just because the terminal doesn’t try. Which I’d understand; my terminal unconditionally asks for strings too, because knowing what is going on in the running application is a bit of a pain. The terminal doesn’t know if you are at a shell prompt or a text editor or a Python interpreter unless you hack up those programs to inform it somehow. (This is something I was fairly impressed with on the Mac, those things do generally work, but I don’t know how. My guess is massive hacks and tight coupling between their shell extensions and their terminal extensions.)
need to be to support operations like select and paste to overwrite
Eh, I made it work in my library! I like middle click a lot and frequently double click one thing to select it, then double click followed by middle click in another to replace its content. Heck, that’s how I do web links a great many times (I can’t say a majority, but several times a day). Made me a little angry that it wouldn’t work in the mainstream programs, so I made it work in mind.
It is a bit hacky though: it does an automatic string copy of the selection into an internal buffer of the application when replacing the selection. Upon pasting, if it is asked to paste the current selection over itself, it instead use that saved buffer. Theoretically pure? Nah. Practically perfect? Yup. Works For Me.
If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps share selection to the select buffer when it’s selected, some do it when they are active and a selection exists, it’s not clear which is ‘correct’ behaviour).
You know, I thought this was in the spec and loaded it up to prove it and…. it isn’t. lol, it is clear to me what is the correct behavior (asserting ownership of the global selection just when switching between programs is obviously wrong - it’d make copy/paste between two programs with a background selection impossible, since trying to paste in one would switch the active window, which would change the selection, which is just annoying), I’d assert the selection if and only if it is an explicit user action to change the selection or to initiate a clipboard cut/copy command, but yeah the ICCCM doesn’t go into any of this and neither does any other official document ive checked.
tbh, I think this is my biggest criticism of the X ecosystem in general: there’s little bits that are underspecified. In some cases, they just never defined a standard, though it’d be easy, and thus you get annoying interop problems. Other cases, like here, they describe how you should do something, but not when or why you should do that. There’s a lot to like about “mechanism, not policy” but… it certainly has its downsides.
Fair points and a difference of opinion probably driven by difference in use. I wasn’t even thinking about copying and pasting files, just textual snippets. Middle click from a file doesn’t work, but dragging and dropping files does lead to the escaped file path being inserted into the terminal.
I always appreciate the depth of knowledge your comments bring to this site, thank you for turning my half-in-jest poke at MacOS into a learning opportunity!
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
You know, I’m always ashamed to say that, and I won’t rate the % that it figures into my decision, but me too. For me, the thing I really like is that I can use full vim mode in JetBrains tools, but all my Mac keyboard shortcuts also work well. Because the mac command key doesn’t interfere ever with vim mode. And same for terminal apps. But the deciding feature is really JetBrains… PyCharm Pro on Mac is so much better than PyCharm Pro on Linux just because of how this specific bit of behavior influences IdeaVim.
I also like Apple’s hardware better right now, but all things being equal, this would nudge me towards mac.
Nothing to be ashamed of. I’m a diehard Linux user. I’ve been at my job 3 years now, that entire time I had a goal to get a Linux laptop, I’ve purposefully picked products that enabled that and have finally switched, and I intend to maintain development environment stuff myself (this is challenging because I’m not only the only Linux engineer, I’m also the only x86 engineer).
I say all this to hammer home that despite how much I prefer Linux (many, many warts and all), this is actually one of the biggest things by far that I miss about my old work Mac.
Plus we live in a world now where we expect tools to be released cross-platform, which means that I think a lot of people compare an electron app on, say, Linux to an equivalent native app on Linux, and argue that the native app would clearly be better.
But from what I remember of the days before electron, what we had on Linux was either significantly worse than the same app released for other platforms, or nothing at all. I’m thinking particularly of Skype for Linux right now, which was a pain to use and supported relatively few of the features other platforms had. The election Skype app is still terrible, but at least it’s better than what we had before.
Weird, all the ones I’ve used have been excellent with great UX. It’s the ones that go native that seem to struggle with their design. Prolly because xml is terrible for designing apps
I’d really like to see how you and the parent comment author interact with your computer. For me electron apps are at best barely useable, ranging from terrible UI/UX but with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord). But then I like my margins to be set to 0 and information density on my screen to approximate the average circa-2005 japanese website. For instance Ripcord (https://cancel.fm/ripcord/static/ripcord_screenshot_win_6.png) is infinitely more pleasant for me to use than Discord.
But most likely some people disagree - from the article:
The McDonald’s ordering kiosk, powering the world’s biggest food retailer, is entirely built with Chromium.
I’m really amazed for instance that anyone would use McDonald’s kiosks as an example of something good - you can literally see most of these poor things stutter with 10fps animations and constant struggles to show anything in a timely manner.
I’d really like to see how you and the parent comment author interact with your computer. For me electron apps are at best barely useable, ranging from terrible UI/UX but with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord).
IDK, Slack literally changed the business world by moving many companies away from email. As it turns out, instant communication and the systems Slack provided to promote communication almost certainly resulted in economic growth as well as the ability to increase remote work around the world. You can call that “insulting” but it doesn’t change the facts of its market- and mind-share.
Emoji reactions, threads, huddles, screen sharing are all squarely in the realm of UX and popularized by Slack. I would argue they wouldn’t have been able to make Slack so feature packed without using web tech, especially when you see their app marketplace which is a huge UX boon.
Slack is not just a “chat app”.
If you want a simple text-based chat app with 0-margins then use IRC.
I could easily make the same argument for VSCode: you cannot ignore the market- and mind-share. If the UX was truly deplorable then no one would use it.
Everything else is anecdotal and personal preference which I do not have any interest in discussing.
If you want a simple text-based chat app with 0-margins then use IRC.
I truly miss the days when you could actually connect to Slack with an IRC client. That feature went away in… I dunno, 2017 or so. It worked fabulously well for me.
Yeah Slack used to be much easier to integrate with. As a user I could pretty easily spot the point where they had grown large enough that it was time to start walling in that garden …
This is not a direct personal attack or criticism, but a general comment:
I find it remarkable that, when I professionally criticise GNOME, KDE and indeed Electron apps in my writing, people frequently defend them and say that they find them fine – in other words, as a generic global value judgement – without directly addressing my criticisms.
I use one Electron app routinely, Panwriter, and that’s partly because it tries to hide its UI. It’s a distraction-free writing tool. I don’t want to see its UI. That’s the point. But the UI it does have is good and standards-compliant. It has a menu bar; those menus appear in the OS’s standard place; they respond to the standard keystrokes.
My point is:
There are objective, independent standards for UI, of which IBM CUA is the #1 and the Mac HIG are the #2.
“It looks good and I can find the buttons and it’s easy to work” does not equate to “this program has good UI.”
It is, IMHO, more important to be standards-compliant than it is to look good.
Most Electron apps look like PWAs (which I also hate). But they are often pretty. Looking good is nice, but function is more important. For an application running on an OS, fitting in with that OS and using the OS’s UI is more important than looking good.
But today ISTM that this itself is an opinion, and an unusual and unpopular one. I find that bizarre. To me it’s like saying that a car or motorbike must have the standard controls in the standard places and they must work in the standard way, and it doesn’t matter if it’s a drop-dead beautiful streamlined work of art if those aren’t true. Whereas it feels like the prevailing opinion now is that a streamlined work of art with no standard controls is not only fine but desirable.
Confirmation bias is cherry picking evidence to support your preconceptions. This, is simply having observed something (“all Electron apps I’ve used were terrible”), and not being interested in why — which is understandable since the conclusion was “avoid Electron”.
It’s okay at some point to decide you have looked at enough evidence, make up your mind, and stop spending time examining any further evidence.
Yes, cherry picking is part of it, but confirmation bias is a little more extensive than that.
It also affects when you even seek evidence, such as only checking what an app is built with when it’s slow, but not checking when it’s fast.
It can affect your interpretation and memory as well. E.g., if you already believe electron apps are slow, you may be more likely to remember slow electron apps and forget (if you ever learned of) fast electron apps.
Don’t get me wrong, I’m guilty of this too. Slack is the canonical slow electron app, and everyone remembers it. Whereas my 1Password app is a fast electron app, but I never bothered to learn that until the article mentioned it.
All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds. And if your data collection and interpretation are biased, it doesn’t matter how much of it you’ve collected. (E.g., the disastrous 1936 Literary Digest prediction of Landon defeating Roosevelt, which polled millions of Americans, but from non-representative automobile and telephone owners.)
We’re talking about someone who stopped seeking evidence, so it doesn’t apply here.
All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds.
So would I.
And it doesn’t help that apparently different people have very different criteria for what constitutes acceptable performance. My personal criterion would be “within an order of magnitude of the maximum achievable”. That is, if it is 10 times slower than the fastest possible, that’s still acceptable to me in most settings. Thing is though, I’m pretty sure many programs are _three_orders of magnitude slower than they could be, and I don’t notice because when I click a button or whatever they still react in fewer frames than I can consciously perceive — but that still impacts battery life, and still necessitates a faster computer than necessary. Worse, in practice I have no idea how much slower than necessary an app really is. The best I can do is notice that a similar app feels snappier, or doesn’t uses as much resources.
It still applies if they stopped seeking evidence because of confirmation bias.
Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.
And even if you were right, and confirmation bias led them to think they have enough evidence even though they do not, and then stopped seeking, the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.
Stopping to seek evidence does not confirm anything by the way. It goes both ways: either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens it and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.
Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.
As a default stance, that’s more likely to be wrong than right.
Which of these two scenarios is more likely: that the users in this thread carefully weighed the evidence in an unbiased manner, examining both electron and non-electron apps, seeking both confirmatory and disconfirmatory evidence… or that they made a gut judgment based on a mix of personal experience and public perception.
The second is way more likely.
the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.
It’s the reason behind stopping, not the act itself, that can constitute “confirmation bias”.
… either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens it and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.
As a former neuroscientist, I can assure you, you’re using an overly narrow definition not shared by the actual psychology literature.
Confirmation bias (also confirmatory bias, myside bias,[a] or congeniality bias[2]) is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values.[3] People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs.
Sounds like a reasonable definition, not overly narrow. And if you as a specialist disagree with that, I encourage you to correct the Wikipedia page. Assuming however you do agree with this definition, let’s pick apart the original comment:
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
WHY this is the case is a respactable topic for sure, but since I already know the outcome I’m more interested in other topics.
Let’s see:
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
If that’s true, that’s not confirmation bias — because it’s true. If it isn’t, yeah, we can blame confirmation bias for ignoring good Electron apps. Maybe they only checked when the app was terrible or something? At this point we don’t know.
Now one could say with high confidence this is confirmation bias, if they personally believe a good proportion of Electron apps are not terrible. They would conclude highly unlikely that the original commenter really only stumbled on terrible Electron apps, so they must have ignored (or failed to notice) the non-terrible ones. Which indeed would be textbook confirmation bias.
But then you came in and wrote:
since I already know the outcome
This is exactly what confirmation bias refers to.
Oh, so you were seeing the bias in the second paragraph:
WHY this is the case is a respactable topic for sure, but since I already know the outcome I’m more interested in other topics.
Here we have someone who decided they had seen enough, and decided to just avoid Electron and move on. Which I would insist is a very reasonable thing to do, should the first paragraph be true (which it is, as far as they’re concerned).
Even if the first paragraph was full of confirmation bias, I don’t see any here. Specifically, I don’t see any favouring of supporting information, or any disfavouring of contrary information, or misinterpreting anything in a way that suits them. And again, if you as a specialist says confirmation bias is more than that, I urge you to correct the Wikipedia page.
is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values
But… Wikipedia already agrees with me here. This definition is quite broad in scope. In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.
If that’s true, that’s not confirmation bias — because it’s true.
Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not. Science is not served by only seeking to confirm what we know. As Karl Popper put it, scientists should always aim to falsify their theories. Plus, doing so assumes the conclusion; we might only think we know the truth, but without seeking to disconfirm, we’d never find out.
I don’t see any favouring of supporting information, or any disfavouring of contrary information, or misinterpreting anything in a way that suits them
Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”. It’s a scientific approach to our own cognition that has to be cultivated.
To reiterate, it’s most likely we’re biased, haven’t done the self-reflection to see that, and haven’t systematically investigated electron vs non-electron performance to state anything definitively.
And I get it, too. We only have so many hours in the day, we can’t investigate everything 100%, and quick judgments are useful. But, they trade off speed for accuracy. We should strive to remember that, and be humble instead of overconfident.
In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.
As long as you’re saying “biased search”, and “biased lack of search”. The mere absence of search is not in itself a bias.
quick judgments are useful. But, they trade off speed for accuracy.
Yup. Note that this trade-off is a far cry from actual confirmation bias.
If that’s true, that’s not confirmation bias — because it’s true.
Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not.
Wait, I think you’re misinterpreting the “it” in my sentence. By “it”, I meant literally the following statement: “I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.”
That statement does not say whether all Electron apps are terrible or whether Electron makes apps terrible, or anything like that. It states what had been directly observed. And if it is true that:
They used many Electron apps.
They’ve all been terrible.
Every time there was an alternative it was better.
Then believing and writing those 3 points is not confirmation bias. It’s just stating the fact as they happened. If on the other hand it’s not true, then we can call foul:
If they only used a couple Electron apps, that’s inflating evidence.
If not all Electron apps they used have been terrible, there’s confirmation bias for omitting (or forgetting) the one that weren’t.
If sometimes the alternative was worse, again, confirmation bias.
As Karl Popper put it, scientists should always aim to falsify their theories.
For the record I’m way past Popper. He’s not wrong, and his heuristic is great in practice, but now we have probability theory. Long story short, the material presented in E. T. Jaynes’ Probability Theory: the Logic of Science should be part of the mandatory curriculum, before you even leave high school — even if maths and science aren’t your chosen field.
One trivial, yet important, result from probability theory, is that absence of evidence is evidence of absence: if you expect to see some evidence of something if it’s true, then not seeing that evidence should lower your probability that it is true. The stronger you expect that evidence, the further your belief ought to shift.
Which is why Popper’s rule is important: by actively seeking evidence, you make it that much more probable to stumble upon it, should your theory be false. But the more effort you put into falsifying your theory, and failing, the more likely your theory is true. The kicker, though, is that it doesn’t apply to just the evidence you actively seek out, or the experimental tests you might do. It applies to any evidence, including what you passively observe.
Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”.
Oh no you don’t. We’re all fallible mortals, all potentially biased, so I can quote a random piece of text, say “This is exactly what confirmation bias refers to” and that’s okay because surely the human behind it has confirmation bias like the rest of us even if they aren’t aware of it, right? That’s a general counter argument, it does not work that way.
There is a way to assert confirmation bias, but you need to work from your own prior beliefs:
Say you have very good reasons to believe that (i) at least half of Electrons app are not terrible, and (ii) confirmation bias is extremely common.
Say you accept that they have used at least 10 such apps. Under your prior, the random chance they’ve all been terrible is less than 1 in a thousand. The random chance that confirmation bias is involved in some way however, is quite a bit higher.
Do the math. What do you know, it is more likely this comment is a product of confirmation bias than actual observation.
Something like that. It’s not exact either (there’s selection bias, the possibility of “many” meaning only “5”, the fact we probably don’t agree on the definitions of “terrible” and “better”), but you get the gist of it: you can’t invoke confirmation bias from a pedestal. You have to expose yourself a little bit, reveal your priors at the risk of other people disputing them, otherwise your argument falls flat.
Our comments are getting longer and longer, we’re starting to parse minutiae, and I just don’t have the energy to add in the Bayesian angle and keep it going today.
It’s been stimulating though! I disagree, but I liked arguing with someone in good faith. Enjoy your weekend out there.
Am I the only one who routinely looks at every app I download to see what toolkit it’s using? Granted, I have an additional reason to care about that: accessibility.
Who cares? “All electron apps are terrible, most non-electron apps are not” is enough information to try to avoid Electron, even if it just so happens to be true for other reasons (e.g maybe only terrible development teams choose Electron, or maybe the only teams who choose Electron are those under a set of constraints from management which necessarily will make the software terrible).
I totally agree. Specifically, people arguing over bundle size is ridiculous when compared to the amount of data we stream on a daily basis. People complain that a website requires 10mb of JS to run but ignore the GBs in python libs required to run an LLM – and that’s ignoring the model weights themselves.
There’s a reason why Electron continues to dominate modern desktop apps and it’s pride that is clouding our collective judgement.
As someone who complains about Electron bundle size, I don’t think the argument of how much data we stream makes sense.
My ISP doesn’t impose a data cap—I’m not worried about the download size. However, my disk does have a fixed capacity.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Is it because it uses Electron that the bundle size is so large? Or could it be due to the developer not taking any care with the bundle size?
Offhand I think a copy of Electron or CEF is about ~120-150MB in 2025, so the whole bundle being 600MB isn’t entirely explained by just the presence of that.
Hm. I may be looking at a compressed number? My source for this is that when the five or so different Electron-or-CEF based apps that I use on Windows update themselves regularly, each of them does a suspiciously similar 120MB-150MB download each time.
I don’t think the streaming data comparison makes sense. I don’t like giant electron bundles because they take up valuable disk space. No matter how much I stream my disk utilisation remains roughly the same.
Interesting, I don’t think I’ve ever heard this complaint before. I’m curious, why is physical disk space a problem for you?
Also “giant” at 100mb is a surprising claim but I’m used to games that reach 100gb+. That feels giant to me so we are orders of magnitude different on our definitions.
Also, the problem of disk space and memory capacity becomes worse when we consider the recent trend of non-upgradable/non-expandable disk and memory in laptops. Then the money really counts.
Modern software dev tooling seems to devolve to copies of copies of things (Docker, Electron) in the name of standardizing and streamlining the dev process. This is a good goal, but, hope you shelled out for the 1TB SSD!
I believe @Loup-Vaillant was referring to 3D eye candy, which I think you know is different from the Electron eye candy people are referring to in other threads.
A primary purpose of games is often to show eye candy. In other words, sure games use more disk space, but the ratio of disk space actually used / disk space inherently required by the problem space is dramatically lower in games than in Electron apps. Context matters.
I care because my phone has limited storage. I’m at a point where I can’t install more apps because they’re so unnecessarily huge. When apps take up more space than personal files… it really does suck.
And many phones don’t have expandable storage via sdcard either, so it’s eWaste to upgrade. And some builds of Android don’t allow apps to be installed on external storage either.
Native libraries amortize this storage cost via sharing, and it still matters today.
Does Electron run on phones? I had no idea, and I can’t find much on their site except info about submitting to the MacOS app store, which is different to the iOS app store.
Well, Electron doesn’t run on your phone, and Apple doesn’t let apps ship custom browser engines even if they did. Native phone apps are still frequently 100mb+ using the native system libraries.
It’s not Electron but often React Native, and other Web based frameworks.
There’s definitely some bloated native apps, but the minimum size is usually larger for the web based ones. Just shipping code as text, even if minified,is a lot of overhead.
Offhand I think React Native’s overhead for a “Hello world” app is about 4MB on iOS and about 10MB on Android, though you have to turn on some build system features for Android or you’ll see a ~25MB apk.
Just shipping code as text, even if minified,is a lot of overhead.
I am not convinced of this in either direction. Can you cite anything, please? My recollection is uncertain but I think I’ve seen adding a line of source code to a C program produce object code which was bigger by more bytes than the size of the source code line was bigger. And C is a PL which tends towards small object code size, and that’s without gzipping the source code or anything.
I don’t have numbers but I believe on average machine code has higher information density than the textual representation, even if you minify that text.
So if you take a C program and compile it, generally the binary is smaller than the total text it is generated from. Again, I didn’t measure anything, but knowing a bit about how instructions are encoded makes this seem obvious to me. Optimizations can come into play, but I doubt it would change the outcome on average.
I’ve seen adding a line of source code to a C program produce object code which was bigger by more bytes than the size of the source code line was bigger
That’s different to what I’m claiming. I’d wager that change caused more machine code to be generated because before some of the text wasn’t used in the final program, i.e. was dead code or not included.
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
IPA and APK files are both zip archives. I’m not certain about iOS but installed apps are stored compressed on Android phones.
I’m not sure about APK, but IPAs are only used for distribution and are unpacked on install. Basically like a .deb DMG on macOS.
So AFAIK, it’s not relevant for disk space.
FWIW phone apps that embed HTML are using WebView (Android) or WKWebView (iOS). They are using the system web renderer. I don’t think anyone (except Firefox for Android) is bundling their own copy of a browser engine because I think it’s economically infeasible.
Funnily enough, there’s a level of speculation that one of the cited examples (Call of Duty games) is large specifically to make storage crunch a thing. You already play Call of Duty, so you want to delete our 300GB game and then have to download it again later to play other games?
I think that’s mostly debunked at this point, due to the performance creep and complexity that comes in with a more complicated client-server relationship.
Nonsense. I can make any SPA fast or any server rendered site slow.
I’m reading this on an M1 Air right now and though it’s slightly underpowered it’s a fantastic machine. No clue why I would bother with Linux either, I already have a job dealing with technology. Don’t need another unpaid one.
i’ve been using niri fulltime after having used sway for ~4-5 years - it’s incredible!
tiling window management (i3, sway) forces constant window resizes, which has always caused weird behavior. niri doesn’t have that issue, since all windows spawn horizontally. swiping between windows sideways feels very naturally - just imagine a bunch of fullsized macos apps. niri also natively supports swiping gestures, which reinforces this model. overall, you can tell that a lot of thought & energy went into niri, and it really shows. this release only reinforces that belief!
well done! niri is great, and i hope that it’s my endgame window manager :)
He shows off the testing and performance analysis tooling used and it all seems much better than what I’ve seen in most “professional” projects even. Worth watching even if you don’t care about the window manager itself.
There are high-quality English subtitles if you need them.
The main thing I have a hard time with is that I want to be able to open URLs that are displayed in my terminal using the keyboard instead of clicking on them with the mouse. I have been able to find a total of zero terminal emulators that support this out of the box, and one (urxvt) that makes it easy to add with a small amount of configuration. (I think there might be some that exist, but for my own purposes I won’t use it unless it’s packaged in Debian apt.)
Anyone know if there’s something like that I’m missing? I don’t love urxvt but having to use the mouse for URLs is a deal-killer for me.
Wezterm supports this with the quick select mode. By default there are only keybindings setup to copy and insert URLs, but the documentation already contains the necessary Lua snippet needed for opening them directly.
How up to date are the “modern” tools, and how soon do you usually get them on Debian? I’m on fedora which already pushes close to the bleeding edge. And for a while now I’m just used to tools being in flatpak, so I get a really up to date working environment. E.g. I’m using wezterm for quite a while now.
I remember from… Centos or Ubuntu LTS r something, that it is usually significantly slower, at least it used to be. Not just getting the shiny new tools in the repos, but sometimes it even took quite long for already packaged tools to get updated from upstream. It’s frustrating to know that the bug you reported was fixed months ago but you can’t update it from the repos.
How does Debian fare there? I’m adding out of curiosity, I don’t intend to switch from Fedora for my workhorse machine any time soon.
How up to date are the “modern” tools, and how soon do you usually get them on Debian?
TBH I’m not even sure what this question means. Maybe that in and of itself is the answer to your question.
I used to build things from source a fair bit, but nowadays I pretty much only find myself doing that for programs that I’m planning to make contributions to myself. Everything else comes from Debian’s apt sources, except Signal which I wish I didn’t have to use, but realistically the alternatives are all so much worse.
Foot has a visual mode where you see all the links highlighted with a shortcut to press to open one. It’s definitely my favorite version of the problem you are describing
Thanks; this looks promising. I think I tried this on my previous machine and it wouldn’t even boot because it needed a more recent OpenGL version, but it runs on my current one.
I use it all the time not in vi mode, maybe the docs are outdated? And yeah they are pretty sparse, I think you’re better off looking at the default config file.
Not out of the box but st with the externalpipe patch can do it in a very flexible way since you pipe the whole terminal pane into the program or script of your choice. It’s possible to do the same on the tmux side too. Or anything else that can pipe the whole pane into an external command, for example xterm with a custom printer command. The patch link has some examples for the URL extraction.
Scaling out and distributing the workload is easier with application code. It’s just a matter of throwing more containers or servers to keep up with demand.
Of course, this depends on your infrastructure, but setting up a couple of read replicas is not especially hard.
Nowadays, it’s common to have databases managed by the cloud service provider that run on ridiculously anemic VMs
Well… don’t?
I do appreciate the “Other Views” section! And I agree with those “other views”.
Also I’d add: The chances of the SQL execution engine having bugs is probably a lot less than your application code will have. So as long as you get your SQL query correct, the chances of having bugs is much lower than if you write it yourself.
Once you have something like PgTAP so you can test your SQL queries and DB, you are in amazing shape for generic well-tested code that is very unlikely to break over time.
I agree that unless you know SQL and your DB, it can be hard to reason about how it handles performance. In your own code, that would be easier, since you wrote it and (hopefully) understand it’s performance characteristics.
I would argue bug-free predictable code is more important than performant code. In my experience, when performance starts to matter is also when you can generally start to dedicate resources specific to performance problems. I.e. you can hire a DB expert in your particular DB and application domain.
I’m not with you on the ‘hard to test locally, since large data is required’ part though. You usually get access to something like PG’s ANALYZE, where you can verify the right indexes are being used for a given query. Sure you might need some large test instance to figure out what the right indexes are, but that’s a development task, not a testing task right? Once you figure it out, you can write a test that ensures the right indexes are being followed for a given query, which is the important part to test, from my perspective.
There are so many different types of UIs that it’s hard to know what the author is talking about. When I think about UI the first thing that comes to mind are web apps. What is the point in versioning web apps? Is there some decision an end user can make in this regard? No. They are forced to upgrade to latest and have zero mechanism to downgrade.
I like this stance, especially for talented authors with popular projects. Burn out is high for these people and I’d rather them focus on innovation than the muck and the more of maintaining a project
All well and good except that innovation comes from dealing with muck. That’s what innovation is, better ways of dealing with muck. Someone whose strategy for muck is “make someone else cope with it” has no incentive to innovate.
This is a blog post about jobs, and business. Not about getting issue tracker spam from people who will neither contribute money or time to software you’ve voluntarily shared with the public.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way, i.e. if you have ever seen an non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
That’s not completely true, one beautiful JSX thing is that any JSX HTML node is a value, you can use all your language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases both are equivalent, but sometimes being able to put HTML values in lists and dictionaries and have the full power of the language do come in handy.
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries.
In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Most backend server frameworks use templating instead.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread many as most as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags. JSX was the first such exposure of a values-based HTML builder for mainstream use, you and your brother comment talk about Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on front and back which makes it useful for generating HTML. Your post, your sibling and the OP just sound slightly butthurt at Javascript for some reason, and it’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I just find that as arguing in bad faith and not trying to bring anything constructive to the conversation, same as the rest of the thread.
Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use it in yours.
Anyway, it sounds like we are in agreement that this would be better than just adopting JavaScript just because it is one of the few non-niche languages which happens to have such a language-oriented support for tags-as-objects like JSX.
I found that after all I prefer to write what is going to end up as HTML in something that looks as much as HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like context switching it forces on my brain. HTML templates with as much strong checks as possible is my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There’s advantages and disadvantages when it comes to emulating the syntax of a target language in the host language. I also find JSX not too bad - however, one has to learn it first which definitely is a lot of overhead, we just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early naughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
(I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(so personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of the word “render” is to extract, convert, deliver, submit, etc. So this use is perfectly inline with the definition and with centuries of usage irl so i can’t complain too much really.)
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type .astro that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that astro is a symptom of the exact same problem you are illustrating from that quote.
That framework is going to die the same way that Next.JS will: death by a thousand features.
huh couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is now the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what .astro does (.rb, .py, .yml, etc.). The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).
I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
Please don’t make your home directory a git repo — it breaks a lot of tooling that checks to see if it’s in a git repo or not. Instead, symlink out of a repo.
I’ve used homeshick for many years but there are probably better alternatives now.
On topic, home-manager is neat but doesn’t work on Unixes other than Mac or Linux.
Count yourself lucky then! This was a persistent problem at Facebook – there was a lot of tooling within repos that cd’d to the root of the repository (e.g. cd $(git rev-parse --show-toplevel)) so you could run it from anywhere within the repo. When we were switching to Mercurial, people putting .git in their home directories caused issues for years. Now that developers are using Jujutsu more (maybe without a colocated git repository) this is soon going to start causing issues again.
I’ve also seen .git in home directories cause issues with Nix flakes.
now that you say that, i’m pretty sure there are some scripts at $current_job that could bite me if i ran them outside of their project dir.
maybe it’s my idealism speaking, but shying away from a toplevel git repository because some scripts don’t do proper directory validation before they act is smelly - i’d rather fix the scripts.
here’s a check i wrote for a script of my own that relies on --show-toplevel:
git remote -v | grep -q chef ||
die "Chef repository was not detected. Are you running cinc from outside the chef repo?"
i suppose if i worked at a place as huge as Facebook, i may have a different perspective haha.
I think it’s more about going with the natural grooves that tooling provides. In a lot of cases it’s nice to be able to auto-discover the root of the repo. You could add a specific file to mark the root of the workspace (whatever that means), but that isn’t going to be a universal standard.
In general, nested repos are a bit of an exceptional situation.
I’m mostly leaning towards the same conclusion but as I’ve never seen such a script in person I’d say it might a problem of certain orgs. I mean, if I run it in ~/code/foo and it would breaks because of ~/.git - why would it work if I ran it under ~/code/foo/my-git-submodule/subdir/?
ya! I did actually used to do things in exactly your way. I don’t recall why, but at some point I ported to a little dotfile manager I wrote for myself which cp’d everything into place. I suspect it was because I wanted to use them on multiple target hosts, so of course it came with a very small, very reasonable templating syntax; per-host master files would specify which source files would go where, and set variables used by some of them. The whole thing was about a hundred lines of Ruby, mostly managing parsing the master files (heredoc support, of course!) and safety; checking if a given destination file was OK to overwrite and so on. It was nice; it ran on Linux, macOS, BSD … I used that up until moving to Nix!
I like your custom prompt :) It’s nice to personalise things so much. I am reminded of my co-author’s post on customising Git slightly, and having that stay with her indefinitely thanks to Nix. You might be interested!
Same here. Haven’t run into issues with it for a long time, but if I run into tools these days where there’s a chance they will try to mess around in my home directory whether or not there is a .git file/directory there, I’d choose to sandbox that tool over changing my home directory setup.
One example from my workflow: Magit. Right now if I run it from a directory that isn’t part of a git repo it’ll complain about that. If my home directory were a git repo I’d be accidentally adding files to my home git repo instead of getting that error.
I’ve done this since 2009, various jobs, with no breakage. Tooling that breaks because a git repo is inside another git repo sounds like very buggy tooling.
There’s just no general, universal way to define the “root of the workspace”, so the Schelling point everyone settles on is the root of the repo. And when the repo uses a different VCS than the tooling is expecting, and the home directory uses the same VCS, well, things go wrong. Hard to call that “very buggy” — it’s just outside the design spec for those tools and scripts.
It’s similar to how many tools don’t handle git submodules (which are a different but related nested repo situation). They’re not buggy — git submodules are merely outside the design spec.
Tooling that breaks because a git repo is inside another git repo
A lot of the tooling I’m thinking of was actually fine with this. It broke when there was a Mercurial repo inside a Git repo, because the repo auto discovery would find the Git repo at the home directory.
This approach is fine if you only have a single type of machine and never need to store secrets in your dotfiles, but as soon as you do have more than one type of machine (e.g. home and work, or Linux and macOS) or need to store secrets then there are much more powerful and easy-to-use alternatives like https://chezmoi.io.
i use this setup across a variety of machines of different types - macos, linux, and openbsd - 5 machines in total.
sometimes i run into a situation where i need software to act differently, and there’s almost always a way to make it do so. for example, i needed my git config to be different on my work machine, hence:
in my git config. a lot of software supports this kind of “conditional include” :D
i also configure my ~/bin path according to the hostname & architecture of the machine i’m on. that way, i can drop scripts into $HOME/bin/<machine-name> or $HOME/bin/<architecture> & have them function differently on different machines.
i’ve never used home-manager, but i personally prefer staying away from bespoke tools where possible. i like that i can just git add -f <whatever> to begin tracking it, and that i can see when i’ve messed with live configs via a git status from my homedir. it’s the most “it just works” way of managing dotfiles i’ve found.
Although many think that IRC’s lack of history is a feature, I find it really confuses new users and poses a problem in certain kinds of channels. (E.g. “support” chats, which are one of the top uses of IRC. People ask a question and leave, expecting to see answers when they come back.)
There are mitigation strategies for those issues (I recently found some traditional IRC servers have history, but it’s opt-in per channel, IIRC), but really, Ergo defaulting to having chat history is IMHO very nice and makes it gain relevance as an alternative to the Slacks/Discords/Telegrams of the world.
Combined with Kiwi, you can provide a link that will get you into a channel without any registration step, which few other alternatives can provide.
(I really don’t care what support channel you choose for your software… as long as it’s not proprietary with no allowed third-party clients. XMPP, Matrix, etc. all have their scenarios. But I feel the humble IRC is more of a contender than we think.)
Although many think that IRC’s lack of history is a feature, I find it really confuses new users and poses a problem in certain kinds of channels. (E.g. “support” chats, which are one of the top uses of IRC. People ask a question and leave, expecting to see answers when they come back.)
I totally agree. I run https://pico.sh and all realtime comms are through IRC. Overall it has been a very positive experience. The biggest downside is when people want to join to ask a question that have never used IRC before. They join, see a completely empty channel, ask a question, and then leave because it looks dead. It is a very jarring experience and people feel like they are talking into the void.
Agreed. Server handling channel history is a huge step towards user friendliness IMO. I wonder why the original design decided to put the onus of recording history on the user. As you said for something like support/help having the history available is nice. Discord used to get lambasted earlier on when used as the community platform for open source projects because of potential loss of knowledge but IRC has had the same problem unless the user had access to a bouncer.
Well, IRC was invented in 1988- like 8 years before ICQ. It was a different age and it is not surprising that it evolved organically in a way that seems bizarre nowadays.
I agree in part with the people who think lack of history is a feature. I think it’s good to treat some IRC channels as transient. I think chat history should only be used to address user inconvenience if they don’t have a persistent session, not as a way to make IRC chats eternal.
Chat history or not, mods can run an archival process that pushes logs to HTTP, but it’s always going to do other things to build “knowledge bases”: FAQs, web forums, QA software, wikis, etc.
The initial design and evolution of IRC, through say 1998, was heavily guided by what could be easily tucked into the corner of an IT budget in a CS department or ISP. Had ircd implemented a feature for server-stored channel history most IRC network operators would have turned it off or limited it to like an hour of history.
IRC has had the same problem unless the user had access to a bouncer.
Most major support channels were paired with a website that hosted channel history archives and/or a FAQ, which would have been linked in the channel topic. e.g. http://mywiki.wooledge.org/BashFAQ
Although many think that IRC’s lack of history is a feature, I find it really confuses new users and poses a problem in certain kinds of channels. (E.g. “support” chats, which are one of the top uses of IRC. People ask a question and leave, expecting to see answers when they come back.)
This somehow reminds me of the story about reinventing the city bus[0]. So when you want to have a support channel which is the way you describe, maybe a forum would be a better solution. Chat systems (not only irc) live from being ``synchrony''. Yes there is this patter in irc where you ask a question and get an answer several hours later. I would call this synchronization phase. Because when you then get some response mostly you can continue the conversation in a synchrony manner.
Yes I know there are benefits from history (and from other features of other chat implementations), but when you think this benefits are more imported then other features you should consider switching tools instant of try to convert irc in this other solutions.
Oh, I have the same persistent doubts about what is suitable for chat and what for forums.
(Some people likely had extreme doubts about this and decided to create Zulip, I guess.)
Ultimately, I think it’s an unsolvable problem, because some support situations are better in real-time, and some are better asynchronously. And there’s also a matter of personal taste! This seems to be confirmed by the huge amount of “things” that provide both sync and async methods of communication.
As I mentioned:
Although many think that IRC’s lack of history is a feature
, and I can relate to the feeling, ultimately, I feel that IRC’s lack of chat history is a net negative for the world at large. Other inferior systems with chat history are more widely used, so I’m kinda forced to use Discord and Matrix, where IRC would make me much happier.
(Also, I do not have solid data, but my perception is that a huge share of IRC users run bouncers or whatever to have chat history.)
I think there are tradeoffs here, but I feel making chat history more prevalent in IRC would be a net win.
On the topic of git send-email being the worst, I’ve been slowly working on an ssh app to replace it for git collaboration. It’s still WIP but you can check it out https://github.com/picosh/git-pr
Maybe I don’t understand something, but replaceable memory isn’t important to me at all. It’s not like batteries and storage devices (which degrade over time) – memory that leaves the factory in non-defective shape will pretty much never fail. And yeah, you might want to increase it in a few years, but my guess is that when you’re ready to upgrade the RAM, it will probably also be about time for a cpu upgrade.
Having the two coupled doesn’t seem like an issue to me.
That said, Framework Computer, Inc. is a for-profit corporation like all the others. Everything it has ever said about waste-reduction or principles – or AI – was and is shameless marketing. It will say whatever it thinks will get it into your bed.
Do you have a source for this?
DRAM Errors in the Wild: A Large-Scale Field Study looked at errors in Google servers over multiple years:
In my experience, because failing RAM generally just causes application or system crashes (as opposed to a completely non-working machine like a dead battery or dead storage), people are more likely to blame software or other hardware components for the issue, not their RAM.
Thanks for this! I no longer know where I first heard the “memory doesn’t degrade” claim, but I’ve heard it repeated enough from smart people that I’ve started repeating it myself.
Without getting to deeply into it, that paper appears to claim that socketed memory does degrade with age. I have some immediate concerns about the study, including that it looks like it was published around 2008-9 (judging by the age charts). I don’t know how relevant that still is today.
I could have been wrong about this!
There are multiple papers investigating DRAM failures I found that I did not mention in my previous comment. I did not find any that support the idea that “memory doesn’t degrade”.
I could tell you relevant anecdotes from my technical support days about Memtest and the differing outcomes for old iBooks depending on whether their failing RAM was soldered to their logic board or installed in the board’s expansion slot, but I think a paper involving orders of magnitude of more computers than I ever fixed (i.e. “the majority of machines in Google’s fleet … from January 2006 to June 2008”) should be more convincing.
agreed, i don’t get the need to upgrade the ram, especially when ram speed is dependent on the cpu anyway.
Based on this post I decided to rebuild my resume in typst, worked great!
This is an extremely strong statement.
I think a few things are also interesting:
I think people are realizing how low quality the Linux kernel code is, how haphazard development is, how much burnout and misery is involved, etc.
I think people are realizing how insanely not in the open kernel dev is, how much is private conversations that a few are privy to, how much is politics, etc.
The Hellwig/Ojeda part of the thread is just frustrating to read because it almost feels like pleading. “We went over this in private” “we discussed this already, why are you bringing it up again?” “Linus said (in private so there’s no record)”, etc., etc.
Dragging discussions out in front of an audience is a pretty decent tactic for dealing with obstinate maintainers. They don’t like to explain their shoddy reasoning in front of people, and would prefer it remain hidden. It isn’t the first tool in the toolbelt but at a certain point there is no convincing people directly.
With quite a few things actually. A friend of mine is contributing to a non-profit, which until recently had this very toxic member (they’ve even attempted felony). They were driven out of the non-profit very soon after members talked in a thread that was accessible to all members. Obscurity is often one key component of abuse, be it mere stubbornness or criminal behaviour. Shine light, and it often goes away.
IIRC Hintjens noted this quite explicitly as a tactic of bad actors in his works.
It’s amazing how quickly people are to recognize folks trying to subvert an org piecemeal via one-off private conversations once everybody can compare notes. It’s equally amazing to see how much the same people beforehand will swear up and down oh no that’s a conspiracy theory such things can’t happen here until they’ve been burned at least once.
This is an active, unpatched attack vector in most communities.
I’ve found the lowest example of this is even meetings minutes at work. I’ve observed that people tend to act more collaboratively and seek the common good if there are public minutes, as opposed to trying to “privately” win people over to their desires.
There is something to be said for keeping things between people with skin in the game.
It’s flipped over here, though, because more people want to contribute. The question is whether it’ll be stabe long-term.
Something I’ve noticed is true in virtually everything I’ve looked deeply at is the majority of work is poor to mediocre and most people are not especially great at their jobs. So it wouldn’t surprise me if Linux is the same. (…and also wouldn’t surprise me if the wonderful Rust rewrite also ends up poor to mediocre.)
yet at the same time, another thing that astonishes me is how much stuff actually does get done and how well things manage to work anyway. And Linux also does a lot and works pretty well. Mediocre over the years can end up pretty good.
After tangentially following the kernel news, I think a lot of churning and death spiraling is happening. I would much rather have a rust-first kernel that isn’t crippled by the old guard of C developers reluctant to adopt new tech.
Take all of this energy into RedoxOS and let Linux stay in antiquity.
I’ve seen some of the R4L people talk on Mastodon, and they all seem to hate this argument.
They want to contribute to Linux because they use it, want to use it, and want to improve the lives of everyone who uses it. The fact that it’s out there and deployed and not a toy is a huge part of the reason why they want to improve it.
Hopping off into their own little projects which may or may not be useful to someone in 5-10 years’ time is not interesting to them. If it was, they’d already be working on Redox.
The most effective thing that could happen is for the Linux foundation, and Linus himself, to formally endorse and run a Rust-based kernel. They can adopt an existing one or make a concerted effort to replace large chunks of Linux’s C with Rust.
IMO the Linux project needs to figure out something pretty quickly because it seems to be bleeding maintainers and Linus isn’t getting any younger.
They may be misunderstanding the idea that others are not necessarily incentivized to do things just because it’s interesting for them (the Mastodon posters).
Yep, I made a similar remark upthread. A Rust-first kernel would have a lot of benefits over Linux, assuming a competent group of maintainers.
along similar lines: https://drewdevault.com/2024/08/30/2024-08-30-Rust-in-Linux-revisited.html
Redox does have the chains of trying to do new OS things. An ABI-compatible Rust rewrite of the Linux kernel might get further along than expected (even if it only runs in virtual contexts, without hardware support (that would come later.))
Linux developers want to work on Linux, they don’t want to make a new OS. Linux is incredibly important, and companies already have Rust-only drivers for their hardware.
Basically, sure, a new OS project would be neat, but it’s really just completely off topic in the sense that it’s not a solution for Rust for Linux. Because the “Linux” part in that matters.
I read a 25+ year old article [1] from a former Netscape developer that I think applies in part
The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?Adopting a “rust-first” kernel is throwing the baby out with the bathwater. Linux has been beaten into submission for over 30 years for a reason. It’s the largest collaborative project in human history and over 30 million lines of code. Throwing it out and starting new would be an absolutely herculean effort that would likely take years, if it ever got off the ground.
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
The idea that old code is better than new code is patently absurd. Old code has stagnated. It was built using substandard, out of date methodologies. No one remembers what’s a bug and what’s a feature, and everyone is too scared to fix anything because of it. It doesn’t acquire new bugs because no one is willing to work on that weird ass bespoke shit you did with your C preprocessor. Au contraire, baby! Is software supposed to never learn? Are we never to adopt new tools? Can we never look at something we’ve built in an old way and wonder if new methodologies would produce something better?
This is what it looks like to say nothing, to beg the question. Numerous empirical claims, where is the justification?
It’s also self defeating on its face. I take an old codebase, I fix a bug, the codebase is now new. Which one is better?
Like most things in life the truth is somewhere in the middle. There is a reason there is the concept of a “mature node” in the semiconductor industry. They accept that new is needed for each node, but also that the new thing takes time to iron out the kinks and bugs. This is the primary reason why you see apple take new nodes on first before Nvidia for example, as Nvidia require much larger die sizes, and so less defects per square mm.
You can see this sometimes in software for example X11 vs Wayland, where adoption is slow, but most definetly progressing and now-days most people can see that Wayland is now, or is going to become the dominant tech in the space.
The truth lies where it lies. Maybe the middle, maybe elsewhere. I just don’t think we’ll get to the truth with rhetoric.
Aren’t the arguments above more dialectic than rhetoric?
I don’t think this would qualify as dialectic, it lacks any internal debate and it leans heavily on appeals by analogy and intuition/ emotion. The post itself makes a ton of empirical claims without justification even beyond the quoted bit.
fair enough, I can see how one would make that argument.
“Good” is subjective, but there is real evidence that older code does contain fewer vulnerabilities: https://www.usenix.org/conference/usenixsecurity22/presentation/alexopoulos
That means we can probably keep a lot of the old trusty Linux code around while making more of the new code safe by writing it in Rust in the first place.
I don’t think that’s a fair assessment of Spolsky’s argument or of CursedSilicon’s application of it to the Linux kernel.
Firstly, someone has already pointed out the research that suggests that existing code has fewer bugs in than new code (and that the older code is, the less likely it is to be buggy).
Secondly, this discussion is mainly around entire codebases, not just existing code. Codebases usually have an entire infrastructure around them for verifying that the behaviour of the codebase has not changed. This is often made up of tests, but it’s also made up of the users who try out a release of a codebase and determine whether it’s working for them. The difference between making a change to an existing codebase and releasing a new project largely comes down to whether this verification (both in terms of automated tests and in terms of users’ ability to use the new release) works for the new code.
Given this difference, if I want to (say) write a new OS completely in Rust, I need to choose: Do I want to make it completely compatible with Linux, and therefore take on the significant challenge of making sure everything behaves truly the same? Or do I make significant breaking changes, write my own OS, and therefore force potential adopters to rebuild their entire Linux workflows in my new OS?
The point is not that either of these options are bad, it is that they represent significant risks to a project. Added to the general risk that is writing new code, this produces a total level of risk that might be considered the baseline risk of doing a rewrite. Now risk is not bad per se! If the benefits of being able to write an OS in a language like Rust outweigh the potential risks, then it still makes sense to perform the rewrite. Or maybe the existing Linux kernel is so difficult to maintain that a new codebase really would be the better option. But the point that CursedSilicon was making by linking the Spolsky piece was, I believe, that the risks for a project like the Linux kernel are very high. There is a lot of existing, old code. And there is a very large ecosystem where either breaking or maintaining compatibility would each come with significant challenges.
Unfortunately, it’s very difficult to measure the risks and benefits here in a quantitative, comparable way, so I think where you fall on the “rewrite vs continuity” spectrum will depend mostly on what sort of examples you’ve seen, and how close you think this case is to those examples. I don’t think there’s any objective way to say whether it makes more sense to have something like R4L, or something like RedoxOS.
I haven’t read it yet, but I haven’t made an argument about that, I just created a parody of the argument as presented. I’ll be candid, i doubt that the research is going to compel me to believe that newer code is inherently buggier, it may compel me to confirm my existing belief that testing software in the field is one good method to find some classes of bugs.
I guess so, it’s a bit dependent on where we say the discussion starts - three things are relevant; RFL, which is not a wholesale rewrite, a wholesale rewrite of the Linux kernel, and Netscape. RFL is not about replacing the entire Linux kernel, although perhaps “codebase” here refers to some sort of unit, like a driver. Netscape wanted a wholesale rewrite, based on the linked post, so perhaps that’s what’s really “the single worst strategic mistake that any software company can make”, but I wonder what the boundary here is? Also, the article immediately mentions that Microsoft tried to do this with Word but it failed, but that Word didn’t suffer from this because it was still actively developed - I wonder if it really “failed” just because pyramid didn’t become the new Word? Did Microsoft have some lessons learned, or incorporate some of that code? Dunno.
I think I’m really entirely justified when I say that the post is entirely emotional/ intuitive appeals, rhetoric, and that it makes empirical claims without justification.
This is rhetoric. These are unsubstantiated empirical claims. The article is all of this. It’s fine as an interesting, thought provoking read that gets to the root of our intuitions, but I think anyone can dismiss it pretty easily since it doesn’t really provide much in the form of an argument.
Again, totally unsubstantiated. I have MANY reasons to believe that, it is simply question begging to say otherwise.
That’s all this post is. Over and over again making empirical claims with no evidence and question beggign.
We can discuss the risks and benefits, I’d advocate for that. This article posted doesn’t advocate for that. It’s rhetoric.
This is a truism. It is survival bias. If the code was buggy, it would eventually be found and fixed. So all things being equal newer code is riskier than old code. But it’s also been impirically shown that using Rust for new code is not “all things being equal”. Google showed that new code in Rust is as reliable as old code in C. Which is good news: you can use old C code from new Rust projects without the risk that comes from new C code.
Yeah, this is what I’ve been saying (not sure if you’d meant to respond to me or the parent, since we agree) - the issue isn’t “new” vs “old” it’s things like “reviewed vs unreviewed” or “released vs unreleased” or “tested well vs not tested well” or “class of bugs is trivial to express vs class of bugs is difficult to express” etc.
Was restating your thesis in the hopes of making it clearer.
I don’t disagree that the rewards can outweigh the risks, and in this case I think there’s a lot of evidence that suggests that memory safety as a default is really important for all sorts of reasons. Let alone the many other PL developments that make Rust a much more suitable language to develop in than C.
That doesn’t mean the risks don’t exist, though.
Nobody would call an old codebase with a handful of fixes a new codebase, at least not in the contexts in which those terms have been used here.
How many lines then?
It’s a Ship of Theseus—at no point can you call it a “new” codebase, but after a period of time, it could be completely different code. I have a C program I’ve been using and modifying for 25 years. At any given point, it would have been hard to say “this is now a new codebase, yet not one line of code in the project is the same as when I started (even though it does the same thing at it always has).
I don’t see the point in your question. It’s going to depend on the codebase, and on the nature of the changes; it’s going to be nuanced, and subjective at least to some degree. But the fact that it’s prone to subjectivity doesn’t mean that you get to call an old codebase with a single fixed bug a new codebase, without some heavy qualification which was lacking.
If it requires all of that nuance and context maybe the issue isn’t what’s “old” and what’s “new”.
I don’t follow, to me that seems like a non-sequitur.
What’s old and new is poorly defined and yet there’s an argument being made that “old” and “new” are good indicators of something. If they’re so poorly defined that we have to bring in all sorts of additional context like the nature of the changes, not just when they happened or the number of lines changed, etc, then it seems to me that we would be just as well served to throw away the “old” and “new” and focus on that context.
I feel like enough people would agree more-or-less on what was an “old” or “new” codebase (i.e. they would agree given particular context) that they remain useful terms in a discussion. The general context used here is apparent (at least to me) given by the discussion so far: an older codebase has been around for a while, has been maintained, has had kinks ironed out.
There’s a really important distinction here though. The point is to argue that new projects will be less stable than old ones, but you’re intuitively (and correctly) bringing in far more important context - maintenance, testing, battle testing, etc. If a new implementation has a higher degree of those properties then it being “new” stops being relevant.
Ok, but:
My point was that this statement requires a definition of “new codebase” that nobody would agree with, at least in the context of the discussion we’re in. Maybe you are attacking the base proposition without applying the surrounding context, which might be valid if this were a formal argument and not a free-for-all discussion.
I think that it would be considered no longer new if it had had significant battle-testing, for example.
FWIW the important thing in my view is that every new codebase is a potential old codebase (given time and care), and a rewrite necessarily involves a step backwards. The question should probably not be, which is immediately better?, but, which is better in the longer term (and by how much)? However your point that “new codebase” is not automatically worse is certainly valid. There are other factors than age and “time in the field” that determine quality.
Methodologies don’t matter for quality of code. They could be useful for estimates, cost control, figuring out whom you shall fire etc. But not for the quality of code.
You’re suggesting that the way you approach programming has no bearing on the quality of the produced program?
I’ve never observed a programmer become better or worse by switching methodology. Dijkstra would’ve not became better if you made him do daily standups or go through code reviews.
There are ways to improve your programming by choosing different approach but these are very individual. Methodology is mostly a beancounting tool.
When I say “methodology” I’m speaking very broadly - simply “the approach one takes”. This isn’t necessarily saying that any methodology is better than any other. The way I approach a task today is better, I think, then the way that I would have approached that task a decade ago - my methodology has changed, the way I think has changed. Perhaps that might mean I write more tests, or I test earlier, but it may mean exactly the opposite, and my methods may only work best for me.
I’m not advocating for “process” or ubiquity, only that the approach one tasks may improve over time, which I suspect we would agree on.
If you take this logic to its end, you should never create new things.
At one point in time, Linux was also the new kid on the block.
The best time to plant a tree is 30 years ago. The second best time is now.
I don’t think Joel Spolsky was ever a Netscape developer. He was a Microsoft developer who worked on Excel.
My mistake! The article contained a bit about Netscape and I misremembered it
How many of those lines are part of the core? My understanding was that the overwhelming majority was driver code. There may not be that much core subsystem code to rewrite.
For a previous project, we included a minimal Linux build. It was around 300 KLoC, which included networking and the storage stack, along with virtio drivers.
That’s around the size a single person could manage and quite easy with a motivated team.
If you started with DPDK and SPDK then you’d already have filesystems and a copy of the FreeBSD network stack to run in isolated environments.
Once many drivers share common rust wrappers over core subsystems, you could flip it and write the subsystem in Rust. Then expose C interface for the rest.
Oh sure, that would be my plan as well. And I bet some subsystem maintainers see this coming, and resist it for reasons that aren’t entirely selfless.
That’s pretty far into the future, both from a maintainer acceptance PoV and from a rustc_codegen_gcc and/or gccrs maturity PoV.
Sure. But I doubt I’ll running a different kernel 10y from now.
And like us, those maintainers are not getting any younger and if they need a hand, I am confident I’ll get faster into it with a strict type checker.
I am also confident nobody in our office would be able to help out with C at all.
This cannot possibly be true.
It’s the largest collaborative open source os kernel project in human history
It’s been described as such based purely on the number of unique human contributions to it
I would expect Wikipedia should be bigger 🤔
I see that Drew proposes a new OS in that linked article, but I think a better proposal in the same vein is a fork. You get to keep Linux, but you can start porting logic to Rust unimpeded, and it’s a manageable amount of work to keep porting upstream changes.
Remember when libav forked from ffmpeg? Michael Niedermayer single-handedly ported every single libav commit back into ffmpeg, and eventually, ffmpeg won.
At first there will be extremely high C percentage, low Rust percentage, so porting is trivial, just git merge and there will be no conflicts. As the fork ports more and more C code to Rust, however, you start to have to do porting work by inspecting the C code and determining whether the fixes apply to the corresponding Rust code. However, at that point, it means you should start seeing productivity gains, community gains, and feature gains from using a better language than C. At this point the community growth should be able to keep up with the extra porting work required. And this is when distros will start sniffing around, at first offering variants of the distro that uses the forked kernel, and if they like what they taste, they might even drop the original.
I genuinely think it’s a strong idea, given the momentum and potential amount of labor Rust community has at its disposal.
I think the competition would be great, especially in the domain of making it more contributor friendly to improve the kernel(s) that we use daily.
I certainly don’t think this is impossible, for sure. But the point ultimately still stands: Linux kernel devs don’t want a fork. They want Linux. These folks aren’t interested in competing, they’re interested in making the project they work on better. We’ll see if some others choose the fork route, but it’s still ultimately not the point of this project.
While I don’t personally want to make a new OS, I’m not sure I actually want to work on Linux. Most of the time I strive for portability, and so abstract myself from the OS whenever I can get away with it. And when I can’t, I have to say Linux’s API isn’t always that great, compared to what the BSDs have to offer (epoll vs kqueue comes to mind). Most annoying though is the lack of documentation for the less used APIs: I’ve recently worked with Netlink sockets, and for the proc stuff so far the best documentation I found was the freaking source code of a third party monitoring program.
I was shocked. Complete documentation of the public API is the minimum bar for a project as serious of the Linux kernel. I can live with an API I don’t like, but lack of documentation is a deal breaker.
I think they mean that Linux kernel devs want to work on the Linux kernel. Most (all?) R4L devs are long time Linux kernel devs. Though, maybe some of the people resigning over LKML toxicity will go work on Redox or something…
That’s is what I was saying, yes.
I’m talking about the people who develop the Linux kernel, not people who write userland programs for Linux.
Re-Implementing the kernel ABI would be a ton of work for little gain if all they wanted was to upstream all the work on new hardware drivers that is already done - and then eventually start re-implementing bits that need to be revised anyway.
If the singular required Rust toolchain didn’t feel like such a ridiculous to bootstrap 500 ton LLVM clown car I would agree with this statement without reservation.
Would zig be a better starting place?
Zig is easier to implement (and I personally like it as a language) but doesn’t have the same safety guarantees and strong type system that Rust does. It’s a give and take. I actually really like Rust and would like to see a proliferation of toolchain options, such as what’s in progress in GCC land. Overall, it would just be really nice to have an easily bootstrapped toolchain that a normal person can compile from scratch locally, although I don’t think it necessarily needs to be the default, or that using LLVM generally is an issue. However, it might be possible that no matter how you architect it, Rust might just be complicated enough that any sufficiently useful toolchain for the language could just end up being a 500 ton clown car of some kind anyways.
Depends on which parts of GP’s statement you care about: LLVM or bootstrap. Zig is still depending on LLVM (for now), but it is no longer bootstrappable in a limited number of steps (because they switched from a bootstrap C++ implementation of the compiler to keeping a compressed WASM build of the compiler as a blob.
Yep, although I would also add it’s unfair to judge Zig in any case on this matter now given it’s such a young project that clearly is going to evolve a lot before the dust begins to settle (Rust is also young, but not nearly as young as Zig). In ten to twenty years, so long as we’re all still typing away on our keyboards, we might have a dozen Zig 1.0 and a half dozen Zig 2.0 implementations!
Yeah, the absurdly low code quality and toxic environment make me think that Linux is ripe for disruption. Not like anyone can produce a production kernel overnight, but maybe a few years of sustained work might see a functional, production-ready Rust kernel for some niche applications and from there it could be expanded gradually. While it would have a lot of catching up to do with respect to Linux, I would expect it to mature much faster because of Rust, because of a lack of cruft/backwards-compatibility promises, and most importantly because it could avoid the pointless drama and toxicity that burn people out and prevent people from contributing in the first place.
What is the, some kind of a new meme? Where did you hear it first?
From the thread in OP, if you expand the messages, there is wide agreement among the maintainers that all sorts of really badly designed and almost impossible to use (safely) APIs ended up in the kernel over the years because the developers were inexperienced and kind of learning kernel development as they went. In retrospect they would have designed many of the APIs very differently.
Someone should compile everything to help future OS developers avoid those traps! There are a lot of exieting non-posix experiments though.
It’s based on my forays into the Linux kernel source code. I don’t doubt there’s some quality code lurking around somewhere, but the stuff I’ve come across (largely filesystem and filesystem adjacent) is baffling.
Seeing how many people are confidently incorrect about Linux maintainers only caring about their job security and keeping code bad to make it a barrier to entry, if nothing else taught me how online discussions are a huge game of Chinese whispers where most participants don’t have a clue of what they are talking about.
I doubt that maintainers are “only caring about their job security and keeping back code” but with all due respect: You’re also just taking arguments out of thin air right now. What I do believe is what we have seen: Pretty toxic responses from some people and a whole lot of issues trying to move forward.
Huh, I’m not seeing any claim to this end from the GP, or did I not look hard enough? At face value, saying that something has an “absurdly low code quality” does not imply anything about nefarious motives.
I can personally attest to having never made that specific claim.
Indeed that remark wasn’t directly referring to GP’s comment, but rather to the range of confidently incorrect comments that I read in the previous episodes, and to the “gatekeeping greybeards” theme that can be seen elsewhere on this page. First occurrence, found just by searching for “old”: Linux is apparently “crippled by the old guard of C developers reluctant to adopt new tech”, to which GP replied in agreement in fact. Another one, maintainers don’t want to “do the hard work”.
Still, in GP’s case the Chinese whispers have reduced “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” to “absurdly low quality”. To which I ask, what is more likely. 1) That 30-million lines of code contain various levels of technical debt of which maintainers are aware; and that said maintainers are worried even of code where the technical debt is real but not causing substantial issue in practice? Or 2) that a piece of software gets to run on literally billions of devices of all sizes and prices just because it’s free and in spite of its “absurdly low quality”?
Linux is not perfect, neither technically nor socially. But it sure takes a lot of entitlement and self-righteousness to declare it “of absurdly low quality” with a straight face.
GP here: I probably should have said “shockingly” rather than “absurdly”. I didn’t really expect to get lawyered over that one word, but yeah, the idea was that for a software that runs on billions of devices, the code quality is shockingly low.
Of course, this is plainly subjective. If your code quality standards are a lot lower than mine then you might disagree with my assessment.
That said, I suspect adoption is a poor proxy for code quality. Internet Explorer was widely adopted and yet it’s broadly understood to have been poorly written.
I’m sure self-righteousness could get you to the same place, but in my case I arrived by way of experience. You can relax, I wasn’t attacking Linux—I like Linux—it just has a lot of opportunity for improvement.
I guess I’ve seen the internals of too much proprietary software now to be shocked by anything about Linux per se. I might even argue that the quality of Linux is surprisingly good, considering its origins and development model.
I think I’d lawyer you a tiny bit differently: some of the bugs in the kernel shock me when I consider how many devices run that code and fulfill their purposes despite those bugs.
FWIW, I was not making a dig at open source software, and yes plenty of corporate software is worse. I guess my expectations for Linux are higher because of how often it is touted as exemplary in some form or another. I don’t even dislike Linux, I think it’s the best thing out there for a huge swath of use cases—I just see some pretty big opportunities for improvement.
Or actual benchmarks: the performance the Linux kernel leaves on the table in some cases is absurd. And sure it’s just one example, but I wouldn’t be surprised if it was representative of a good portion of the kernel.
Well not quite but still “considered broken beyond repair by many people related to life time management” - which is definitely worse than “hard to formalize” when “the way ever[y]body does it” seems to vary between each user.
I love Rust but still, we’re talking of a language which (for good reasons!) considers doubly linked lists unsafe. Take an API that gets a 4 on Rusty Russell’s API design scale (“Follow common convention and you’ll get it right”), but which was designed for a completely different programming language if not paradigm, and it’s not surprising that it can’t easily be transformed into a 9 (“The compiler/linker won’t let you get it wrong”). But at the same time there are a dozen ways in which, according to the same scale, things could actually be worse!
What I dislike is that people are seeing “awareness of complexity” and the message they spread is “absurdly low quality”.
Note that doubly linked lists are not a special case at all in Rust. All the other common data structures like
Vec,HashMapetc. also need unsafe code in their implementation.Implementing these datastructures in Rust, and writing unsafe code in general, is indeed roughly a 4. But these are all already implemented in the standard library, with an API that actually is at a 9. And
std::collections::LinkedListis constructive proof that you can have a safe Rust abstraction for doubly linked lists.Yes, the implementation could have bugs, thus making the abstraction leaky. But that’s the case for literally everything, down to the hardware that your code runs on.
You’re absolutely right that you can build abstractions with enough effort.
My point is that if a doubly linked list is (again, for good reasons) hard to make into a 9, a 20-year-old API may very well be even harder. In fact,
std::collections::LinkedListis safe but still not great (for example the cursor API is still unstable); and being in std, it was designed/reviewed by some of the most knowledgeable Rust developers, sort of by definition. That’s the conundrum that maintainers face and, if they realize that, it’s a good thing. I would be scared if maintainers handwaved that away.Bugs happen, but if the abstraction is downright wrong then that’s something I wouldn’t underestimate. A lot of the appeal of Rust in Linux lies exactly in documenting/formalizing these unwritten rules, and wrong documentation can be worse than no documentation (cue the negative parts of the API design scale!); even more so if your documentation is a formal model like a set of Rust types and functions.
That said, the same thing can happen in a Rust-first kernel, which will also have a lot of unsafe code. And it would be much harder to fix it in a Rust-first kernel, than in Linux at a time when it’s just feeling the waters.
At the same time, it was included almost as like, half a joke, and nobody uses it, so there’s not a lot of pressure to actually finish off the cursor API.
It’s also not the kind of linked list the kernel would use, as they’d want an intrusive one.
And yet, safe to use doubly linked lists written in Rust exist. That the implementation needs unsafe is not a real problem. That’s how we should look at wrapping C code in safe Rust abstractions.
The whole comment you replied to, after the one sentence about linked lists, is about abstractions. And abstractions are rarely going to be easy, and sometimes could be hardly possible.
That’s just a fact. Confusing this fact for something as hyperbolic as “absurdly low quality” is stunning example of the Dunning Kruger effect, and frankly insulting as well.
I personally would call Linux low quality because many parts of it are buggy as sin. My GPU stops working properly literally every other time I upgrade Linux.
No one is saying that Linux is low quality because it’s hard or impossible to abstract some subsystems in Rust, they’re saying it’s low quality because a lot of it barely works! I would say that your “Chinese whispers” misrepresents the situation and what people here are actually saying. “the safety of this API is hard to formalize and you pretty much have to use it the way everybody does it” doesn’t apply if no one can tell you how to use an API, and everyone does it differently.
I agree, Linux is the worst of all kernels.
Except for all the others.
Actually, the NT kernel of all things seems to have a pretty good reputation, and I wouldn’t dismiss the BSD kernels out of hand. I don’t know which kernel is better, but it seems you do. If you could explain how you came to this conclusion that would be most helpful.
NT gets a bad rap because of the OS on top of it, not because it’s actually bad. NT itself is a very well-designed kernel.
*nod* I haven’t been a Windows person since shortly after the release of Windows XP (i.e. the first online activation DRM’d Windows) but, whenever I see glimpses of what’s going on inside the NT kernel in places like Project Zero: The Definitive Guide on Win32 to NT Path Conversion, it really makes me want to know more.
More likely a fork that gets rusted from the inside out
Somewhere else it was mentioned that most developers in the kernel could just not be bothered with checking for basic things.
Nobody is forcing any of these people to do this.
It’s great that Nix works for you (and others) but in my experience Nix has got to be the single WORST example of UX of any technology on this planet. I say this as someone who has collectively spent weeks of time trying very hard (in 2020, 2022, and 2023) to use Nix for developer environment management and been left with nothing. I also used NixOS briefly in 2020.
Trivial tasks like wanting to apply a custom patch file as part of the flake setup I could not figure out after hours of documentation reading, asking for help on #irc afterwards, and on matrix. Sure, if I clone the remote repo, commit my patchfile and then have Nix use that as the package it’s fine.. but that’s a lot of work to replace a single
patchshell command and now I have to run an entire git server, or be forced to use a code forge, and then mess around with git settings so it doesn’t degrade local clones for security reasons.Nix’s documentation is incredibly verbose in the most useless places and also non-existent in the most critical. It is the only time I’ve ever felt like I was actively wasting my time reading a project’s documentation. If you already completely understand Nix then Nix documentation is great, for anyone else… I don’t know.
Last I checked flakes were still experimental after however many years it’s been meaning the entire ecosystem built on-top of them is unstable. They aren’t beta, or even alpha. A decision needs to be made on whether flakes come or go (maybe it has been now) because having your entire ecosystem built on quicksand doesn’t inspire confidence to invest the (considerable) time to learn Nix.
Manually wrangling outdated dependencies when you work with software that is on a faster release cycle than Nixpkg checkpoints is painful, and unstable Nixpkgs are just that.. unstable and annoying to update. Also, cleaning orphaned leaves and the like is not trivial and has to be researched versus just being a simple to understand (and documented) command.
Things like devshell, nix-shell or whatever it’s called (I cannot remember anymore) are but various options one has to explore to get developer environments which are, for some reason, not a core part of Nix (since these 3rd party flakes exist in the first place). Combine this with all the other little oddities for which there exists multiple choices, along with the uselessness of Nix’s documentation (i.e. you cannot form an understanding of Nix) and you’re suddenly in a situation where you’re adopting things for which you have no idea the consequence of. Any problem you run into must be solved with either luck (that someone else has encountered it and you find a blog post, a github issue, etc) or brute force guesswork; stabbing in the dark.
The Nix language syntax is unreadable and the errors it outputs are undecipherable to the point of the community making entire packages to display actually human readable errors, or pages long tutorials on how to read them.
I wish I had been successful with Nix, clearly some other people are. Nix worked for me in trivial cases (and it is great when it does!) but the second I wanted to do something “non-trivial” (i.e. actually useful) it was like driving at 100 km/h into a brick wall. Maybe things will improve in the future but until then Podman and OCI containers or microvms are far, far superior to anything NIx can provide in my experience. I will die on this hill.
Yes, they are not completely hermetic like Nix is but I’ve never seen nor encountered a situation where you need a completely hermetic environment. I have no doubt these situations exist but I would (as an educated guess) argue they are needed far less often than people think.
In my experience, happy nix user, nix should only be used if you have had to fight with the other package managers in anger to get something impossible done. You’ll only be motivated to push past the pain of learning it if you have enough anger about whatever you are already using.
If you don’t have that anger it’s hard to push past the Nix learning curve. Which is a shame because it genuinely is a better package management/build/infra-as-code solution.
I guess I don’t see how you would patch a existing package in say Debian or arch easier then forking it and maintaining a patch…..
Heck, I couldn’t even figure out how to make deb packages, arch was much easier but still a huge pain. With NixOS I can apply patches to anything (albeit I am not using flakes, just patching nixpkgs where needed in my fork or using package override in my config). I’ve never felt quite this powerful at modifying core system components w/o breaking something or having to do a disk backup rollback.
Having the nixpkgs repo is better the documentation IMO, just grep and look at usages. This doesn’t cover flakes, but I find the documentation and CLI help/man pretty good for flakes; and there are good examples in many project to pull from.
Nothing compares to using home-manager for dot files and user level config, will never go back from that. There is no drift, all machines stay in sync with everything in versioned files that can be modularized for re-use.
example how to patch a package:
https://github.com/NixOS/nixpkgs/issues/281478
Pretty much pure syntax sugar and custom DSL. I bet that sed is feed to shell.
That’s not a trivial task. Flakes do not support patching the flake source. Nix makes it trivial to patch packages as they’re built, but patching Nix source code is not simple. More generally, if you want to patch Nix source code (whether it’s flakes or whether it’s via
fetchTarball) you need to use IFD (Import From Derivation). https://wiki.nixos.org/wiki/Nixpkgs/Patching_Nixpkgs has a demonstration of how to use this to patch nixpkgs itself. In the case of an arbitrary flake, if the flake has adefault.nixand importing that gets you what you want then you can do the exact same thing that URL does to patch it. If you need access to the patched flake’s outputs (e.g. if you’re patching anixosModule) then I would look at using flake-compat to get at the outputs of the patched flake.The funniest thing to me is that 50% of people say: avoid flakes and half of the rest say: I only managed to get something done in nix because of flakes (me included).
I’m still not sold overall.
I wanted to have a flake with one package at a different version than is the release (or whatever) which was also super annoying.
I thought it should be doable to for instance build a node project for which it turns out there are half a dozen unmaintained projects and no documentation. Seemingly because an experienced Nix person can whip this out in two seconds so nobody bothers to document it.
100% true
I think most people are better off using something like nx or buck to build their stuff.
Yeah I dual-boot NixOS and Arch. For whatever I can use NixOS for without much trouble, I prefer it. However, it’s nice to be able to bail out into Arch when I run into something that will clearly take many more hours of my time to figure out in NixOS than I desire (lately, developing a mixed OCaml/C++ project). I symlink my home subdirectories so it’s easy to reboot and pick up where I left off (there are definitely still dev tools in 2025 that hate symlinked directories though, fair warning to anybody else who wants to try this).
I think flakes complicated things a lot. I started using Nix pre-flakes and did not find it hard to pick up. The language is pretty familiar if you used Haskell or a comparable functional language at some point. The language, builders, etc. clicked for me after reading the Nix pills.
Flakes are quite great in production for reproducibility (though Niv provided some of the same benefits), but adds a layer that makes a lot of people struggle. It removes some of the ‘directness’ that Nix had, making it harder to quickly iterate on things. It also split up the docs, the community, and made a lot of historical posts/solutions harder to apply.
Could you elaborate what you mean by applying a custom patch? Do you want to patch an external flake itself or a package from nixpkgs/a flake. Adding a patch to a package is pretty easy with
overrideAttrs, I do this all the time and it’s a superpower of Nix, compared to other package managers where you have to basically fork and maintain packages.Yea I agree. I investigated nix a year or two ago when flakes were just starting to become popular and it was a total mess to figure out. Anything outside of ordinary was a rabbit hole to figure out.
I think a better solution to the same problem is an immutable OS with distrobox. That solution leverages tech most of us already understand without the terrible programming language and fragmented ecosystem.
I ended up moving away from that setup because I need to actually work on projects instead of tinkering with my setup but I wrote a post about it: https://bower.sh/opensuse-microos-container-dev
Totally agree and it’s bigger than just LLMs. Consciousness is not unique or as complicated as we had hoped. I use hope because we keep clutching onto the idea that consciousness is mysterious. All we need to do is merge the two modes, training and inference, then remove the human from the equation. That’s it.
I am not into mysticism but I don’t think what LLMs do is remotely close to consciousness whatever it may be.
“whatever is may be” is exactly the mysticism I’m talking about. There’s nothing special about consciousness, we are just hopelessly biased by our own egos.
Admitting something is not properly understood at the moment is not mysticism
Cognitive science was always one of my pet subjects. So, there was this book I read, years back: The User Illusion. And then there was this other popular book, which presaged the Deep Learning explosion: On Intelligence. Long before that, Daniel Dennett’s Consciousness Explained got a lot of attention.
I don’t want to spoil any of these for anybody, but I suggest checking out the responses that their arguments received in serious academic reviews. Might learn something!
“Look, all we need to do is merge the two modes, tractor beams and faster-than-light drives, and then we could build the Starship Enterprise. That’s it.”
No, consciousness is a different phenomenon, and we don’t even have a robust definition of what would count as consciousness in non-humans.
We recycle terms of human cognition for behaviors we observe in ML models, but that doesn’t mean they’re the same behaviors. It doesn’t really matter whether that’s true “reasoning” like humans do, we just need some word for the extra output they generate that improves their ability to find the correct answers.
This is exactly my argument, it isn’t a different phenomenon.
You’re not making an argument, you’re just saying a ridiculous thing without any concrete evidence.
Well, at least you’re not even wrong! I don’t see how you get that from what is posted except for the overloading of terms between LLMs and neuroscience or cognitive research or something. We have no idea what consciousness is.
I find it weird that they bundle s3 and postgres support directly into a runtime instead of having them as libraries. Looking at the documentation for s3, I see
and I get that it’s better to have a native implementation rather than a pure JavaScript one for speed (though I’m skeptic of their 5x claim). But I can’t see this approach scaling. If performance is a concern, then other services you communicate with also need to be fast. In particular Redis/memcached or whatever caching mechanism you prefer, but also ElasticSearch or maybe some Oracle database you need to talk to as well. Should those also be in the runtime? And how do you handle backwards compatibility for those libraries?
I guess this is just a long-winded way of saying “Why aren’t they implementing this as native add-ons and instead force this into the runtime?”. It feels much more modular and sensible to put it outside of the runtime.
Deno is doing the exact same thing so they are likely following suit. This is probably one of the ways Oven intends on monetizing the runtime.
My two biggest complaints are mostly to do with the API - one being that s3 is imported from bun, rather than
bun/s3or similar. Python provides a big stdlib, but the modules still have different names. You don’t dofrom python import DefaultDict, httplib, for example.The other one is that the
filefunction supports s3 URIs, which is nice from a minimal API perspective, but I also think it’s not ideal to treat s3 the same as local files. s3 has a lot of additional setup - e.g AWS credentials, handling auth errors, etc. So I think it makes sense to logically separate the behaviour for local vs remote files.I don’t mind new takes on AWS / postgres sdks, though. The SDK is pretty decent compared to some others (e.g Google or Slack), but I think both their AWS and postgress examples there (other than the two issues I mentioned) are pretty nice.
I agree with your sentiment and I am also very confused why I would use the Bun s3 implementation over the whole AWS SDK that I have been using and accustomed to for years now. Sure there could be some performance gains (for just S3) but I don’t see the benefit.
I’ve run into contention on S3 in a Python backend, and it’s really not fun. It’s a very good feature to have this sorted and guaranteed to work fast, it means that Bun can stay competitive with compiled languages for more intensive workloads. To me, this is a production mindset: identify the key components, and optimise them so that they don’t get in your way.
I prefer running web apps in my standard browser over the electron apps 100% of the time, and one of the reasons is that I, as the end user, am empowered with choice that way that electron denies me. And the general user experience is generally better, with better integration in my desktop (which you might not expect from a web app, but it is true because my browser is better integrated than their browser) and controls the other one might deny me.
It hasn’t been much of a compatibility problem either, everything ive wanted to use has worked fine, so I don’t buy the line about needing separate downloads for version control either.
Electron doesn’t support uBlock… enough said
Do many native apps support ad blocking extensions? That’d be the relevant comparison here.
Also I can’t say I’ve seen many (any?) ads in Electron apps, but I suppose they’re out there.
Tracking and analytics can be an issue as well, even if there aren’t any ads visible.
uBlock is also very helpful for thwarting some of the dark patterns in web app design. For example, it is trivial to block the “recommendations” sections on YouTube to avoid falling in to rabbit holes there.
As another example, I’ve personally blocked almost the entire “home page” of GitHub with its mix of useless eye-grabbing information and AI crap. Ideally I’d just not use GitHub, but the network effect is strong and being able to exert my will on the interface to some extent makes it more tolerable for now.
There’s nothing stopping any native app from doing that, too, right? It’s not an Electron thing per se
Indeed, and this is a legit reason for users to prefer the web apps… but if you use their browser instead of your browser, you throw those sandbox benefits away. (as well as the actually kinda nice ui customization options you have in the web world)
Sure, but since Electron apps are usually just wrapped web apps anyway, might as well use them in a browser where you get to block unwanted stuff. At least if that’s a concern for you.
I also use adblockers for removing AI prompts and such (thinking of slack here)
Particularly relevant to the author: I have no idea why a Notion app exists (which is probably broken in lots of ways) if it’s just a web site anyway.
It’s a bit surreal to me that a guy who maintains electron and works at notion tries to tell me that I’m wrong about electron while their app is so broken that I can’t even log in in it because the input fields don’t work for me.
It exists in a lot of cases to get past gatekeepers at larger companies. Buyers in these organizations use checklists for product comparison, so the lack of a desktop app can rule a product out of contention. A PWA would likely suffice, but these get surprisingly negative feedback from large organizations where the people with the control and giving the feedback are somewhat distant from the usage.
Developers respond by doing minimal work to take advantage of the desktop. The developers know they could do deeper desktop integration, but product management only want to check the box and avoid divergence from the web experience (along with a dose of MVP-itis). End users could get more value from an otherwise cruddy Electron app, if it exploited helpful desktop integrations.
Clearly, it’d be better to have a native app that exploits the desktop, but this is unlikely to happen when the customer is checking boxes (but not suggesting solid integration use cases) and PMs are overly focused on MVPs (with limited vision and experience with how desktop apps can shine.) It’s funny how these things change when it comes to native mobile apps because cross-platform apps can get dinged on enterprise checklists while PMs are willing to commit heavily.
I usually run into issues when I need to screen share or something that requires the FS
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
WHY this is the case is a respactable topic for sure, but since I already know the outcome I’m more interested in other topics.
I believe in your perception, but I wonder how people determine this sort of thing.
It seems like an availability heuristic: if you notice an app is bad, and discover it’s made in Electron, you remember that. But if an app isn’t bad, do you even check how it was built?
Sort of like how you can always tell bad plastic surgery, but not necessarily good plastic surgery.
On macOS, there has been a shift in the past decade from noticing apps have poor UIs and seeing that they are Qt, to seeing that they are Electron. One of the problems with the web is that there’s no standard rich text edit control. Cocoa’s NSTextView is incredibly powerful, it basically includes an entire typesetting engine with hooks exposed to everything. Things like drag-and-drop, undo, consistent keyboard shortcuts, and so on all work for free if you use it. Any app that doesn’t use it, but exposes a way of editing text, sticks out. Keyboard navigation will work almost how you’re used to, for example. In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.
The is probably the best criticism of Electron apps in this thread that’s not just knee-jerk / dogpiling. It’s absolutely valid and even for non-Electron web apps it’s a real problem. I work at a company that had it’s own collaborative rich-text editor based on OT, and it is both a tonne of work to maintain and extend, and also subtly (and sometimes not-so-subtly) different to every other rich text editor out there.
I’ve been using Obsidian a fair bit lately. I’m pretty sure it’s Electron-based but on OSX that still means that most of the editing shortcuts work properly. Ctrl-a and ctrl-d for start and end of line, ctrl-n and ctrl-p for next and previous line, etc. These are all Emacs hotkeys that ended up in OSX via NeXT. Want to know what the most frustrating thing has been with using Obsidian cross platform? Those Emacs hotkeys that all work on OSX don’t work on the Linux version… on the Linux version they do things like Select All or Print. Every time I switch from my Mac laptop to my Linux desktop I end up frustrated from all of the crap that happens when I use my muscle memory hotkeys.
This is something that annoys me about Linux desktops. OPENSTEP and CDE, and even EMACS, supported a meta key so that control could be control and invoking shortcuts was a different key. Both KDE and GNOME were first released after Windows keys were ubiquitous on PC keyboards that could have been used as a command / meta key, yet they copied the Windows model for shortcuts.
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
You mean middle click, right? I say that in jest, but anytime I’m on a non-Linux platform, I find myself highlighting and middle clicking, then realizing that just doesn’t work here and sadly finding the actual clipboard keys.
X11’s select buffer always annoyed me because it conflates two actions. Selecting and copying are distinct operations and need to be to support operations like select and paste to overwrite. Implicitly doing a copy-like operation is annoying and hits a bunch of common corner cases. If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps share selection to the select buffer when it’s selected, some do it when they are active and a selection exists, it’s not clear which is ‘correct’ behaviour).
The select buffer exists to avoid needing a clipboard server that holds a copy of the object being transferred, but drag and drop (which worked reliably on OPENSTEP and was always a pain on X11) is a better interaction model for that. And, when designed properly, has better support for content negotiation, than the select buffer in X11. For example, on macOS I can drag a file from the Finder to the Terminal and the Terminal will negotiate the path of the file as the type (and know that it’s a file, not a string, so properly escape it) and insert it into the shell. If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal? Without massive hacks and tight coupling?
There’s no reason why it shouldn’t on the X level - middle clicks to the same content negotiation as any other clipboard or drag and drop operation (in fact, it is literally the same, asking for the TARGETS property, then calling XConvertSelection with the format you want, the only difference is that second argument to XConvertSelection - PRIMARY, CLIPBOARD, or XdndSelection).
If it doesn’t work, it is probably just because the terminal doesn’t try. Which I’d understand; my terminal unconditionally asks for strings too, because knowing what is going on in the running application is a bit of a pain. The terminal doesn’t know if you are at a shell prompt or a text editor or a Python interpreter unless you hack up those programs to inform it somehow. (This is something I was fairly impressed with on the Mac, those things do generally work, but I don’t know how. My guess is massive hacks and tight coupling between their shell extensions and their terminal extensions.)
Eh, I made it work in my library! I like middle click a lot and frequently double click one thing to select it, then double click followed by middle click in another to replace its content. Heck, that’s how I do web links a great many times (I can’t say a majority, but several times a day). Made me a little angry that it wouldn’t work in the mainstream programs, so I made it work in mind.
It is a bit hacky though: it does an automatic string copy of the selection into an internal buffer of the application when replacing the selection. Upon pasting, if it is asked to paste the current selection over itself, it instead use that saved buffer. Theoretically pure? Nah. Practically perfect? Yup. Works For Me.
You know, I thought this was in the spec and loaded it up to prove it and…. it isn’t. lol, it is clear to me what is the correct behavior (asserting ownership of the global selection just when switching between programs is obviously wrong - it’d make copy/paste between two programs with a background selection impossible, since trying to paste in one would switch the active window, which would change the selection, which is just annoying), I’d assert the selection if and only if it is an explicit user action to change the selection or to initiate a clipboard cut/copy command, but yeah the ICCCM doesn’t go into any of this and neither does any other official document ive checked.
tbh, I think this is my biggest criticism of the X ecosystem in general: there’s little bits that are underspecified. In some cases, they just never defined a standard, though it’d be easy, and thus you get annoying interop problems. Other cases, like here, they describe how you should do something, but not when or why you should do that. There’s a lot to like about “mechanism, not policy” but… it certainly has its downsides.
Fair points and a difference of opinion probably driven by difference in use. I wasn’t even thinking about copying and pasting files, just textual snippets. Middle click from a file doesn’t work, but dragging and dropping files does lead to the escaped file path being inserted into the terminal.
I always appreciate the depth of knowledge your comments bring to this site, thank you for turning my half-in-jest poke at MacOS into a learning opportunity!
You know, I’m always ashamed to say that, and I won’t rate the % that it figures into my decision, but me too. For me, the thing I really like is that I can use full vim mode in JetBrains tools, but all my Mac keyboard shortcuts also work well. Because the mac command key doesn’t interfere ever with vim mode. And same for terminal apps. But the deciding feature is really JetBrains… PyCharm Pro on Mac is so much better than PyCharm Pro on Linux just because of how this specific bit of behavior influences IdeaVim.
I also like Apple’s hardware better right now, but all things being equal, this would nudge me towards mac.
Nothing to be ashamed of. I’m a diehard Linux user. I’ve been at my job 3 years now, that entire time I had a goal to get a Linux laptop, I’ve purposefully picked products that enabled that and have finally switched, and I intend to maintain development environment stuff myself (this is challenging because I’m not only the only Linux engineer, I’m also the only x86 engineer).
I say all this to hammer home that despite how much I prefer Linux (many, many warts and all), this is actually one of the biggest things by far that I miss about my old work Mac.
Have you seen or tried Kinto?
I have not heard of it and my ability to operate a search engine to find the relevant thing is failing me.
https://kinto.sh/
“Mac-style shortcut keys for Linux & Windows”
https://github.com/rbreaves/kinto
Plus we live in a world now where we expect tools to be released cross-platform, which means that I think a lot of people compare an electron app on, say, Linux to an equivalent native app on Linux, and argue that the native app would clearly be better.
But from what I remember of the days before electron, what we had on Linux was either significantly worse than the same app released for other platforms, or nothing at all. I’m thinking particularly of Skype for Linux right now, which was a pain to use and supported relatively few of the features other platforms had. The election Skype app is still terrible, but at least it’s better than what we had before.
Yeah, I recall those days. Web tech is the only reason Linux on the desktop isn’t even worse than it was then.
Weird, all the ones I’ve used have been excellent with great UX. It’s the ones that go native that seem to struggle with their design. Prolly because xml is terrible for designing apps
I’d really like to see how you and the parent comment author interact with your computer. For me electron apps are at best barely useable, ranging from terrible UI/UX but with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord). But then I like my margins to be set to 0 and information density on my screen to approximate the average circa-2005 japanese website. For instance Ripcord (https://cancel.fm/ripcord/static/ripcord_screenshot_win_6.png) is infinitely more pleasant for me to use than Discord.
But most likely some people disagree - from the article:
I’m really amazed for instance that anyone would use McDonald’s kiosks as an example of something good - you can literally see most of these poor things stutter with 10fps animations and constant struggles to show anything in a timely manner.
My children - especially the 10 and 12 year old - will stand around mocking their performance while ordering food.
IDK, Slack literally changed the business world by moving many companies away from email. As it turns out, instant communication and the systems Slack provided to promote communication almost certainly resulted in economic growth as well as the ability to increase remote work around the world. You can call that “insulting” but it doesn’t change the facts of its market- and mind-share.
Emoji reactions, threads, huddles, screen sharing are all squarely in the realm of UX and popularized by Slack. I would argue they wouldn’t have been able to make Slack so feature packed without using web tech, especially when you see their app marketplace which is a huge UX boon.
Slack is not just a “chat app”.
If you want a simple text-based chat app with 0-margins then use IRC.
I could easily make the same argument for VSCode: you cannot ignore the market- and mind-share. If the UX was truly deplorable then no one would use it.
Everything else is anecdotal and personal preference which I do not have any interest in discussing.
I truly miss the days when you could actually connect to Slack with an IRC client. That feature went away in… I dunno, 2017 or so. It worked fabulously well for me.
Yeah Slack used to be much easier to integrate with. As a user I could pretty easily spot the point where they had grown large enough that it was time to start walling in that garden …
This is not a direct personal attack or criticism, but a general comment:
I find it remarkable that, when I professionally criticise GNOME, KDE and indeed Electron apps in my writing, people frequently defend them and say that they find them fine – in other words, as a generic global value judgement – without directly addressing my criticisms.
I use one Electron app routinely, Panwriter, and that’s partly because it tries to hide its UI. It’s a distraction-free writing tool. I don’t want to see its UI. That’s the point. But the UI it does have is good and standards-compliant. It has a menu bar; those menus appear in the OS’s standard place; they respond to the standard keystrokes.
My point is:
There are objective, independent standards for UI, of which IBM CUA is the #1 and the Mac HIG are the #2.
“It looks good and I can find the buttons and it’s easy to work” does not equate to “this program has good UI.”
It is, IMHO, more important to be standards-compliant than it is to look good.
Most Electron apps look like PWAs (which I also hate). But they are often pretty. Looking good is nice, but function is more important. For an application running on an OS, fitting in with that OS and using the OS’s UI is more important than looking good.
But today ISTM that this itself is an opinion, and an unusual and unpopular one. I find that bizarre. To me it’s like saying that a car or motorbike must have the standard controls in the standard places and they must work in the standard way, and it doesn’t matter if it’s a drop-dead beautiful streamlined work of art if those aren’t true. Whereas it feels like the prevailing opinion now is that a streamlined work of art with no standard controls is not only fine but desirable.
This is called confirmation bias.
No, that’s not what confirmation bias means.
This is exactly what confirmation bias refers to.
Confirmation bias is cherry picking evidence to support your preconceptions. This, is simply having observed something (“all Electron apps I’ve used were terrible”), and not being interested in why — which is understandable since the conclusion was “avoid Electron”.
It’s okay at some point to decide you have looked at enough evidence, make up your mind, and stop spending time examining any further evidence.
Yes, cherry picking is part of it, but confirmation bias is a little more extensive than that.
It also affects when you even seek evidence, such as only checking what an app is built with when it’s slow, but not checking when it’s fast.
It can affect your interpretation and memory as well. E.g., if you already believe electron apps are slow, you may be more likely to remember slow electron apps and forget (if you ever learned of) fast electron apps.
Don’t get me wrong, I’m guilty of this too. Slack is the canonical slow electron app, and everyone remembers it. Whereas my 1Password app is a fast electron app, but I never bothered to learn that until the article mentioned it.
All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds. And if your data collection and interpretation are biased, it doesn’t matter how much of it you’ve collected. (E.g., the disastrous 1936 Literary Digest prediction of Landon defeating Roosevelt, which polled millions of Americans, but from non-representative automobile and telephone owners.)
We’re talking about someone who stopped seeking evidence, so it doesn’t apply here.
So would I.
And it doesn’t help that apparently different people have very different criteria for what constitutes acceptable performance. My personal criterion would be “within an order of magnitude of the maximum achievable”. That is, if it is 10 times slower than the fastest possible, that’s still acceptable to me in most settings. Thing is though, I’m pretty sure many programs are _three_orders of magnitude slower than they could be, and I don’t notice because when I click a button or whatever they still react in fewer frames than I can consciously perceive — but that still impacts battery life, and still necessitates a faster computer than necessary. Worse, in practice I have no idea how much slower than necessary an app really is. The best I can do is notice that a similar app feels snappier, or doesn’t uses as much resources.
??? It still applies if they stopped seeking evidence because of confirmation bias. I’m not clear what you’re trying to say here.
Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.
And even if you were right, and confirmation bias led them to think they have enough evidence even though they do not, and then stopped seeking, the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.
Stopping to seek evidence does not confirm anything by the way. It goes both ways: either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens it and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.
As a default stance, that’s more likely to be wrong than right.
Which of these two scenarios is more likely: that the users in this thread carefully weighed the evidence in an unbiased manner, examining both electron and non-electron apps, seeking both confirmatory and disconfirmatory evidence… or that they made a gut judgment based on a mix of personal experience and public perception.
The second is way more likely.
It’s the reason behind stopping, not the act itself, that can constitute “confirmation bias”.
As a former neuroscientist, I can assure you, you’re using an overly narrow definition not shared by the actual psychology literature.
From Wikipedia:
Sounds like a reasonable definition, not overly narrow. And if you as a specialist disagree with that, I encourage you to correct the Wikipedia page. Assuming however you do agree with this definition, let’s pick apart the original comment:
Let’s see:
If that’s true, that’s not confirmation bias — because it’s true. If it isn’t, yeah, we can blame confirmation bias for ignoring good Electron apps. Maybe they only checked when the app was terrible or something? At this point we don’t know.
Now one could say with high confidence this is confirmation bias, if they personally believe a good proportion of Electron apps are not terrible. They would conclude highly unlikely that the original commenter really only stumbled on terrible Electron apps, so they must have ignored (or failed to notice) the non-terrible ones. Which indeed would be textbook confirmation bias.
But then you came in and wrote:
Oh, so you were seeing the bias in the second paragraph:
Here we have someone who decided they had seen enough, and decided to just avoid Electron and move on. Which I would insist is a very reasonable thing to do, should the first paragraph be true (which it is, as far as they’re concerned).
Even if the first paragraph was full of confirmation bias, I don’t see any here. Specifically, I don’t see any favouring of supporting information, or any disfavouring of contrary information, or misinterpreting anything in a way that suits them. And again, if you as a specialist says confirmation bias is more than that, I urge you to correct the Wikipedia page.
But… Wikipedia already agrees with me here. This definition is quite broad in scope. In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.
Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not. Science is not served by only seeking to confirm what we know. As Karl Popper put it, scientists should always aim to falsify their theories. Plus, doing so assumes the conclusion; we might only think we know the truth, but without seeking to disconfirm, we’d never find out.
Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”. It’s a scientific approach to our own cognition that has to be cultivated.
To reiterate, it’s most likely we’re biased, haven’t done the self-reflection to see that, and haven’t systematically investigated electron vs non-electron performance to state anything definitively.
And I get it, too. We only have so many hours in the day, we can’t investigate everything 100%, and quick judgments are useful. But, they trade off speed for accuracy. We should strive to remember that, and be humble instead of overconfident.
As long as you’re saying “biased search”, and “biased lack of search”. The mere absence of search is not in itself a bias.
Yup. Note that this trade-off is a far cry from actual confirmation bias.
Wait, I think you’re misinterpreting the “it” in my sentence. By “it”, I meant literally the following statement: “I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.”
That statement does not say whether all Electron apps are terrible or whether Electron makes apps terrible, or anything like that. It states what had been directly observed. And if it is true that:
Then believing and writing those 3 points is not confirmation bias. It’s just stating the fact as they happened. If on the other hand it’s not true, then we can call foul:
For the record I’m way past Popper. He’s not wrong, and his heuristic is great in practice, but now we have probability theory. Long story short, the material presented in E. T. Jaynes’ Probability Theory: the Logic of Science should be part of the mandatory curriculum, before you even leave high school — even if maths and science aren’t your chosen field.
One trivial, yet important, result from probability theory, is that absence of evidence is evidence of absence: if you expect to see some evidence of something if it’s true, then not seeing that evidence should lower your probability that it is true. The stronger you expect that evidence, the further your belief ought to shift.
Which is why Popper’s rule is important: by actively seeking evidence, you make it that much more probable to stumble upon it, should your theory be false. But the more effort you put into falsifying your theory, and failing, the more likely your theory is true. The kicker, though, is that it doesn’t apply to just the evidence you actively seek out, or the experimental tests you might do. It applies to any evidence, including what you passively observe.
Oh no you don’t. We’re all fallible mortals, all potentially biased, so I can quote a random piece of text, say “This is exactly what confirmation bias refers to” and that’s okay because surely the human behind it has confirmation bias like the rest of us even if they aren’t aware of it, right? That’s a general counter argument, it does not work that way.
There is a way to assert confirmation bias, but you need to work from your own prior beliefs:
Something like that. It’s not exact either (there’s selection bias, the possibility of “many” meaning only “5”, the fact we probably don’t agree on the definitions of “terrible” and “better”), but you get the gist of it: you can’t invoke confirmation bias from a pedestal. You have to expose yourself a little bit, reveal your priors at the risk of other people disputing them, otherwise your argument falls flat.
Our comments are getting longer and longer, we’re starting to parse minutiae, and I just don’t have the energy to add in the Bayesian angle and keep it going today.
It’s been stimulating though! I disagree, but I liked arguing with someone in good faith. Enjoy your weekend out there.
If Robert Aumann himself can at the same time produce his agreement theorem and be religious, it’s okay for us to give up. :-)
Thanks for engaging with me thus far.
Am I the only one who routinely looks at every app I download to see what toolkit it’s using? Granted, I have an additional reason to care about that: accessibility.
No I do this too. Always interesting to see how things are built.
You should write your findings up in a post and submit it! Might settle a lot of debates in the comments 😉
Were you able to determine that they were terrible because they used Electron?
Who cares? “All electron apps are terrible, most non-electron apps are not” is enough information to try to avoid Electron, even if it just so happens to be true for other reasons (e.g maybe only terrible development teams choose Electron, or maybe the only teams who choose Electron are those under a set of constraints from management which necessarily will make the software terrible).
I think one thing worth noting is this:
Emphasis mine. A lot of users won’t care if an app sort of sucks, but at least it exists.
I totally agree. Specifically, people arguing over bundle size is ridiculous when compared to the amount of data we stream on a daily basis. People complain that a website requires 10mb of JS to run but ignore the GBs in python libs required to run an LLM – and that’s ignoring the model weights themselves.
There’s a reason why Electron continues to dominate modern desktop apps and it’s pride that is clouding our collective judgement.
https://bower.sh/my-love-letter-to-front-end-web-development
As someone who complains about Electron bundle size, I don’t think the argument of how much data we stream makes sense.
My ISP doesn’t impose a data cap—I’m not worried about the download size. However, my disk does have a fixed capacity.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Mine does and many ISPs do. Disk space is dirt cheap in comparison to bandwidth costs
Citation needed.
See native (Qt-based) Telegram client.
Is it because it uses Electron that the bundle size is so large? Or could it be due to the developer not taking any care with the bundle size?
Offhand I think a copy of Electron or CEF is about ~120-150MB in 2025, so the whole bundle being 600MB isn’t entirely explained by just the presence of that.
I’m not so sure… the “minimal” spotify build is 344 MB:
And if you decompress it (well, this is an older version but i don’t wanna downlod another just for a lobsters comment):
1.3 GB libcef.so, no slouch.
Hm. I may be looking at a compressed number? My source for this is that when the five or so different Electron-or-CEF based apps that I use on Windows update themselves regularly, each of them does a suspiciously similar 120MB-150MB download each time.
I don’t think the streaming data comparison makes sense. I don’t like giant electron bundles because they take up valuable disk space. No matter how much I stream my disk utilisation remains roughly the same.
Interesting, I don’t think I’ve ever heard this complaint before. I’m curious, why is physical disk space a problem for you?
Also “giant” at 100mb is a surprising claim but I’m used to games that reach 100gb+. That feels giant to me so we are orders of magnitude different on our definitions.
When a disk is getting full due to virtual machines, video footage, photos, every bit counts and having 9 copies of chromium seems silly and wasteful.
Also, the problem of disk space and memory capacity becomes worse when we consider the recent trend of non-upgradable/non-expandable disk and memory in laptops. Then the money really counts.
Modern software dev tooling seems to devolve to copies of copies of things (Docker, Electron) in the name of standardizing and streamlining the dev process. This is a good goal, but, hope you shelled out for the 1TB SSD!
At least games have the excuse of showing a crapload of eye candy (landscapes and all that).
Funny, even within this thread people are claiming Electron apps have “too much eye candy” while others are claiming “not enough”
I believe @Loup-Vaillant was referring to 3D eye candy, which I think you know is different from the Electron eye candy people are referring to in other threads.
A primary purpose of games is often to show eye candy. In other words, sure games use more disk space, but the ratio of
disk space actually used / disk space inherently required by the problem spaceis dramatically lower in games than in Electron apps. Context matters.I care because my phone has limited storage. I’m at a point where I can’t install more apps because they’re so unnecessarily huge. When apps take up more space than personal files… it really does suck.
And many phones don’t have expandable storage via sdcard either, so it’s eWaste to upgrade. And some builds of Android don’t allow apps to be installed on external storage either.
Native libraries amortize this storage cost via sharing, and it still matters today.
Does Electron run on phones? I had no idea, and I can’t find much on their site except info about submitting to the MacOS app store, which is different to the iOS app store.
Well, Electron doesn’t run on your phone, and Apple doesn’t let apps ship custom browser engines even if they did. Native phone apps are still frequently 100mb+ using the native system libraries.
It’s not Electron but often React Native, and other Web based frameworks.
There’s definitely some bloated native apps, but the minimum size is usually larger for the web based ones. Just shipping code as text, even if minified,is a lot of overhead.
Offhand I think React Native’s overhead for a “Hello world” app is about 4MB on iOS and about 10MB on Android, though you have to turn on some build system features for Android or you’ll see a ~25MB apk.
I am not convinced of this in either direction. Can you cite anything, please? My recollection is uncertain but I think I’ve seen adding a line of source code to a C program produce object code which was bigger by more bytes than the size of the source code line was bigger. And C is a PL which tends towards small object code size, and that’s without gzipping the source code or anything.
I don’t have numbers but I believe on average machine code has higher information density than the textual representation, even if you minify that text.
So if you take a C program and compile it, generally the binary is smaller than the total text it is generated from. Again, I didn’t measure anything, but knowing a bit about how instructions are encoded makes this seem obvious to me. Optimizations can come into play, but I doubt it would change the outcome on average.
That’s different to what I’m claiming. I’d wager that change caused more machine code to be generated because before some of the text wasn’t used in the final program, i.e. was dead code or not included.
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
IPA and APK files are both zip archives. I’m not certain about iOS but installed apps are stored compressed on Android phones.
I’m not sure about APK, but IPAs are only used for distribution and are unpacked on install. Basically like a
.debDMG on macOS.So AFAIK, it’s not relevant for disk space.
FWIW phone apps that embed HTML are using WebView (Android) or WKWebView (iOS). They are using the system web renderer. I don’t think anyone (except Firefox for Android) is bundling their own copy of a browser engine because I think it’s economically infeasible.
Funnily enough, there’s a level of speculation that one of the cited examples (Call of Duty games) is large specifically to make storage crunch a thing. You already play Call of Duty, so you want to delete our 300GB game and then have to download it again later to play other games?
Nonsense. I can make any SPA fast or any server rendered site slow.
I’m reading this on an M1 Air right now and though it’s slightly underpowered it’s a fantastic machine. No clue why I would bother with Linux either, I already have a job dealing with technology. Don’t need another unpaid one.
If you used Linux you wouldn’t have to deal with MacOS, duh.
Hah, I feel this one after the constant churn of dealing with Wayland issues.
I have a framework laptop and love it but it doesn’t compare to an m-class MacBook.
If I was only allowed a single laptop it would be a no brainer: MacBook Air
i’ve been using niri fulltime after having used sway for ~4-5 years - it’s incredible!
tiling window management (i3, sway) forces constant window resizes, which has always caused weird behavior. niri doesn’t have that issue, since all windows spawn horizontally. swiping between windows sideways feels very naturally - just imagine a bunch of fullsized macos apps. niri also natively supports swiping gestures, which reinforces this model. overall, you can tell that a lot of thought & energy went into niri, and it really shows. this release only reinforces that belief!
well done! niri is great, and i hope that it’s my endgame window manager :)
Niri really is rock-solid. There’s so much attention to detail in things like atomically resizing windows and it really shows.
I recommend watching the author’s talk from RustCon Moscow: https://youtu.be/Kmz8ODolnDg
He shows off the testing and performance analysis tooling used and it all seems much better than what I’ve seen in most “professional” projects even. Worth watching even if you don’t care about the window manager itself.
There are high-quality English subtitles if you need them.
Honestly what I want is the same thing inside tmux. I think there’s an interesting opportunity for someone to build
The main thing I have a hard time with is that I want to be able to open URLs that are displayed in my terminal using the keyboard instead of clicking on them with the mouse. I have been able to find a total of zero terminal emulators that support this out of the box, and one (urxvt) that makes it easy to add with a small amount of configuration. (I think there might be some that exist, but for my own purposes I won’t use it unless it’s packaged in Debian
apt.)Anyone know if there’s something like that I’m missing? I don’t love
urxvtbut having to use the mouse for URLs is a deal-killer for me.Wezterm supports this with the quick select mode. By default there are only keybindings setup to copy and insert URLs, but the documentation already contains the necessary Lua snippet needed for opening them directly.
Yeah, I really like the look of Wezterm. I will probably switch to it once it gets packaged for Debian, but that will probably take a while.
How up to date are the “modern” tools, and how soon do you usually get them on Debian? I’m on fedora which already pushes close to the bleeding edge. And for a while now I’m just used to tools being in flatpak, so I get a really up to date working environment. E.g. I’m using wezterm for quite a while now.
I remember from… Centos or Ubuntu LTS r something, that it is usually significantly slower, at least it used to be. Not just getting the shiny new tools in the repos, but sometimes it even took quite long for already packaged tools to get updated from upstream. It’s frustrating to know that the bug you reported was fixed months ago but you can’t update it from the repos.
How does Debian fare there? I’m adding out of curiosity, I don’t intend to switch from Fedora for my workhorse machine any time soon.
I usually find what I want in https://backports.debian.org/ or sometimes upstream’s own package repository (eg for llvm)
TBH I’m not even sure what this question means. Maybe that in and of itself is the answer to your question.
I used to build things from source a fair bit, but nowadays I pretty much only find myself doing that for programs that I’m planning to make contributions to myself. Everything else comes from Debian’s apt sources, except Signal which I wish I didn’t have to use, but realistically the alternatives are all so much worse.
Foot has a visual mode where you see all the links highlighted with a shortcut to press to open one. It’s definitely my favorite version of the problem you are describing
Hm; yeah, looks neat but it unfortunately doesn’t work with my window manager. (exwm)
Alacritty has that too and works on X11 as well; the default shortcut is ctrl+shift+o.
Thanks; this looks promising. I think I tried this on my previous machine and it wouldn’t even boot because it needed a more recent OpenGL version, but it runs on my current one.
However, ctrl-shift-o doesn’t do anything, and the docs seem to imply that opening URLs with the keyboard is just a feature of vi mode: https://github.com/alacritty/alacritty/blob/master/docs/features.md The documentation seems … pretty sparse.
I use it all the time not in vi mode, maybe the docs are outdated? And yeah they are pretty sparse, I think you’re better off looking at the default config file.
Maybe you’re running an older version, before 0.14 it used to be ctrl+shift+u https://alacritty.org/changelog_0_14_0.html
That was it, thanks! I will give this a try. I like the hinting system; reminds me of conkeror and surfingkeys.
Not out of the box but st with the externalpipe patch can do it in a very flexible way since you pipe the whole terminal pane into the program or script of your choice. It’s possible to do the same on the tmux side too. Or anything else that can pipe the whole pane into an external command, for example xterm with a custom printer command. The patch link has some examples for the URL extraction.
I find the arguments here strange.
Of course, this depends on your infrastructure, but setting up a couple of read replicas is not especially hard.
Well… don’t?
I do appreciate the “Other Views” section! And I agree with those “other views”.
I’m with you.
Also I’d add: The chances of the SQL execution engine having bugs is probably a lot less than your application code will have. So as long as you get your SQL query correct, the chances of having bugs is much lower than if you write it yourself.
Once you have something like PgTAP so you can test your SQL queries and DB, you are in amazing shape for generic well-tested code that is very unlikely to break over time.
IDK. Complex queries are hard to build indexes against and those are hard to test locally since they require a lot of data.
Sometimes perf in app code can be more predictable
I agree that unless you know SQL and your DB, it can be hard to reason about how it handles performance. In your own code, that would be easier, since you wrote it and (hopefully) understand it’s performance characteristics.
I would argue bug-free predictable code is more important than performant code. In my experience, when performance starts to matter is also when you can generally start to dedicate resources specific to performance problems. I.e. you can hire a DB expert in your particular DB and application domain.
I’m not with you on the ‘hard to test locally, since large data is required’ part though. You usually get access to something like PG’s ANALYZE, where you can verify the right indexes are being used for a given query. Sure you might need some large test instance to figure out what the right indexes are, but that’s a development task, not a testing task right? Once you figure it out, you can write a test that ensures the right indexes are being followed for a given query, which is the important part to test, from my perspective.
There are so many different types of UIs that it’s hard to know what the author is talking about. When I think about UI the first thing that comes to mind are web apps. What is the point in versioning web apps? Is there some decision an end user can make in this regard? No. They are forced to upgrade to latest and have zero mechanism to downgrade.
I like this stance, especially for talented authors with popular projects. Burn out is high for these people and I’d rather them focus on innovation than the muck and the more of maintaining a project
All well and good except that innovation comes from dealing with muck. That’s what innovation is, better ways of dealing with muck. Someone whose strategy for muck is “make someone else cope with it” has no incentive to innovate.
You’ll have to back this up with something more than rhetoric because this is the absolute opposite of my experience.
https://www.joelonsoftware.com/2007/12/06/where-theres-muck-theres-brass/
This is a blog post about jobs, and business. Not about getting issue tracker spam from people who will neither contribute money or time to software you’ve voluntarily shared with the public.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way, i.e. if you have ever seen an non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
To be fair, JSX is a pleasurable way to sling together HTML, regardless of if it’s on the frontend or backend.
Many backend server frameworks have things similar to JSX.
That’s not completely true, one beautiful JSX thing is that any JSX HTML node is a value, you can use all your language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases both are equivalent, but sometimes being able to put HTML values in lists and dictionaries and have the full power of the language do come in handy.
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries. In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread many as most as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags. JSX was the first such exposure of a values-based HTML builder for mainstream use, you and your brother comment talk about Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on front and back which makes it useful for generating HTML. Your post, your sibling and the OP just sound slightly butthurt at Javascript for some reason, and it’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I just find that as arguing in bad faith and not trying to bring anything constructive to the conversation, same as the rest of the thread.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use it in yours.
Anyway, it sounds like we are in agreement that this would be better than just adopting JavaScript just because it is one of the few non-niche languages which happens to have such a language-oriented support for tags-as-objects like JSX.
I found that after all I prefer to write what is going to end up as HTML in something that looks as much as HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like context switching it forces on my brain. HTML templates with as much strong checks as possible is my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There’s advantages and disadvantages when it comes to emulating the syntax of a target language in the host language. I also find JSX not too bad - however, one has to learn it first which definitely is a lot of overhead, we just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early naughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
I’m aware, I wrote such a lib for common lisp. I was talking that in most frameworks most people use they are still at the templating world.
It’s a shame other languages don’t really have this. I guess having SXSLT transformation is the closest most get.
Many languages have this, here’s a tiny sample: https://github.com/yawaramin/dream-html?tab=readme-ov-file#prior-artdesign-notes
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
JSX is one among many 😉
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(so personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of the word “render” is to extract, convert, deliver, submit, etc. So this use is perfectly inline with the definition and with centuries of usage irl so i can’t complain too much really.)
You can render a template (as in, plug in values for the placeholders in an HTML skeleton), and that’s the intended usage here I think.
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
It seems that was a phase? The term transpiler annoys me a bit, but I don’t remember seeing it for quite a while now.
Worked very well for Opera Mini for years. Made very low-end web clients far more usable. What amazed me was how well interactivity worked.
So now I want a server side rendering framework that produces a PNG that fits the width of my screen. This could be awesome!
There was a startup whose idea was to stream (as in video stream) web browsing similar to cloud gaming: https://www.theverge.com/2021/4/29/22408818/mighty-browser-chrome-cloud-streaming-web
It would probably be smaller than what is being shipped as a web page these days.
Exactly. The term is simply wrong…
ESL issue. “To render” is fairly broad term meaning something is to provide/concoct/actuate, has little to do with graphics in general.
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
The way that seems ‘different’ to you is the way that is idiomatic in the context of websites 😉
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type
.astrothat renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that astro is a symptom of the exact same problem you are illustrating from that quote.That framework is going to die the same way that Next.JS will: death by a thousand features.
huh couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is now the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what
.astrodoes (.rb,.py,.yml, etc.). The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
mildly off topic, but for dotfile management i have never found a better solution than a git repo in your homedir + a glob in .gitignore
this makes setup trivial, and only depends on a single tool that you probably already use.
mine are here if you wanna see an example of what it looks like in practice: https://git.j3s.sh/dotfiles/tree/main/
Please don’t make your home directory a git repo — it breaks a lot of tooling that checks to see if it’s in a git repo or not. Instead, symlink out of a repo.
I’ve used homeshick for many years but there are probably better alternatives now.
On topic, home-manager is neat but doesn’t work on Unixes other than Mac or Linux.
what breaks, exactly? i’ve been doing this for many years and haven’t had a single tool break in that way.
Count yourself lucky then! This was a persistent problem at Facebook – there was a lot of tooling within repos that cd’d to the root of the repository (e.g.
cd $(git rev-parse --show-toplevel)) so you could run it from anywhere within the repo. When we were switching to Mercurial, people putting.gitin their home directories caused issues for years. Now that developers are using Jujutsu more (maybe without a colocated git repository) this is soon going to start causing issues again.I’ve also seen
.gitin home directories cause issues with Nix flakes.now that you say that, i’m pretty sure there are some scripts at $current_job that could bite me if i ran them outside of their project dir.
maybe it’s my idealism speaking, but shying away from a toplevel git repository because some scripts don’t do proper directory validation before they act is smelly - i’d rather fix the scripts.
here’s a check i wrote for a script of my own that relies on
--show-toplevel:i suppose if i worked at a place as huge as Facebook, i may have a different perspective haha.
I think it’s more about going with the natural grooves that tooling provides. In a lot of cases it’s nice to be able to auto-discover the root of the repo. You could add a specific file to mark the root of the workspace (whatever that means), but that isn’t going to be a universal standard.
In general, nested repos are a bit of an exceptional situation.
I’m mostly leaning towards the same conclusion but as I’ve never seen such a script in person I’d say it might a problem of certain orgs. I mean, if I run it in ~/code/foo and it would breaks because of ~/.git - why would it work if I ran it under ~/code/foo/my-git-submodule/subdir/?
This is one of the reasons why my professional recommendation is to pretend that Git submodules don’t exist.
no shell vcs prompt? mine would be telling me my git revision at all times!
nope, haha! i can see how that would be annoying. on the other hand, knowing that you have uncommitted config file changes could be a useful thing.
my prompt is a little go program i wrote - if i want to know the status of git i just type
git statuslike a luddite hah.if i do ever add git branch support to my prompt, i suppose i’ll have it ignore my homedir if i find it annoying
ya! I did actually used to do things in exactly your way. I don’t recall why, but at some point I ported to a little dotfile manager I wrote for myself which
cp’d everything into place. I suspect it was because I wanted to use them on multiple target hosts, so of course it came with a very small, very reasonable templating syntax; per-host master files would specify which source files would go where, and set variables used by some of them. The whole thing was about a hundred lines of Ruby, mostly managing parsing the master files (heredoc support, of course!) and safety; checking if a given destination file was OK to overwrite and so on. It was nice; it ran on Linux, macOS, BSD … I used that up until moving to Nix!I like your custom prompt :) It’s nice to personalise things so much. I am reminded of my co-author’s post on customising Git slightly, and having that stay with her indefinitely thanks to Nix. You might be interested!
That’s what I do (ignoring the home directory repo in my prompt function): https://github.com/bkhl/dotfiles/blob/main/.bashrc#L116
Same here. Haven’t run into issues with it for a long time, but if I run into tools these days where there’s a chance they will try to mess around in my home directory whether or not there is a
.gitfile/directory there, I’d choose to sandbox that tool over changing my home directory setup.One example from my workflow: Magit. Right now if I run it from a directory that isn’t part of a git repo it’ll complain about that. If my home directory were a git repo I’d be accidentally adding files to my home git repo instead of getting that error.
GNU Emacs being as friendly to monkey-patching as it is, one should be able to advise it to think the home directory is not a Git repo, right?
I do exactly this, use Magic to manage the files in my home directory and have never had this be a problem.
There are various ways to achieve it but I prefer
Oh, you don’t have to have the repo located at
~/.gitto track the home directory directly with git and avoid symlink management :)https://mitxela.com/projects/dotfiles_management
I’ve done this since 2009, various jobs, with no breakage. Tooling that breaks because a git repo is inside another git repo sounds like very buggy tooling.
There’s just no general, universal way to define the “root of the workspace”, so the Schelling point everyone settles on is the root of the repo. And when the repo uses a different VCS than the tooling is expecting, and the home directory uses the same VCS, well, things go wrong. Hard to call that “very buggy” — it’s just outside the design spec for those tools and scripts.
It’s similar to how many tools don’t handle git submodules (which are a different but related nested repo situation). They’re not buggy — git submodules are merely outside the design spec.
A lot of the tooling I’m thinking of was actually fine with this. It broke when there was a Mercurial repo inside a Git repo, because the repo auto discovery would find the Git repo at the home directory.
Don’t assume that you’ll be using Git forever.
Which unixes are you after? There’s been some work in nixpkgs to support free/net/open-bsd this year.
I use illumos for work, and also have a FreeBSD computer.
I would be thrilled if nixpkgs was able to support illumos
I almost do the same thing except with a symlink strat using this bash script: https://erock.pastes.sh/dotfiles.sh
This approach is fine if you only have a single type of machine and never need to store secrets in your dotfiles, but as soon as you do have more than one type of machine (e.g. home and work, or Linux and macOS) or need to store secrets then there are much more powerful and easy-to-use alternatives like https://chezmoi.io.
i use this setup across a variety of machines of different types - macos, linux, and openbsd - 5 machines in total.
sometimes i run into a situation where i need software to act differently, and there’s almost always a way to make it do so. for example, i needed my git config to be different on my work machine, hence:
in my git config. a lot of software supports this kind of “conditional include” :D
i also configure my ~/bin path according to the hostname & architecture of the machine i’m on. that way, i can drop scripts into
$HOME/bin/<machine-name>or$HOME/bin/<architecture>& have them function differently on different machines.How have you found it better than home-manager described in TFA? It would be interesting to read a pros and cons list.
i’ve never used
home-manager, but i personally prefer staying away from bespoke tools where possible. i like that i can justgit add -f <whatever>to begin tracking it, and that i can see when i’ve messed with live configs via agit statusfrom my homedir. it’s the most “it just works” way of managing dotfiles i’ve found.to set up a new machine:
to track a new config file:
to commit changes, use git the usual way:
as far as cons go, i can think of a few:
git rev-parse --show-toplevelmay breakI do the exact same. I first read about this approach from drew devault - https://drewdevault.com/2019/12/30/dotfiles.html
Although many think that IRC’s lack of history is a feature, I find it really confuses new users and poses a problem in certain kinds of channels. (E.g. “support” chats, which are one of the top uses of IRC. People ask a question and leave, expecting to see answers when they come back.)
There are mitigation strategies for those issues (I recently found some traditional IRC servers have history, but it’s opt-in per channel, IIRC), but really, Ergo defaulting to having chat history is IMHO very nice and makes it gain relevance as an alternative to the Slacks/Discords/Telegrams of the world.
Combined with Kiwi, you can provide a link that will get you into a channel without any registration step, which few other alternatives can provide.
(I really don’t care what support channel you choose for your software… as long as it’s not proprietary with no allowed third-party clients. XMPP, Matrix, etc. all have their scenarios. But I feel the humble IRC is more of a contender than we think.)
I totally agree. I run https://pico.sh and all realtime comms are through IRC. Overall it has been a very positive experience. The biggest downside is when people want to join to ask a question that have never used IRC before. They join, see a completely empty channel, ask a question, and then leave because it looks dead. It is a very jarring experience and people feel like they are talking into the void.
Agreed. Server handling channel history is a huge step towards user friendliness IMO. I wonder why the original design decided to put the onus of recording history on the user. As you said for something like support/help having the history available is nice. Discord used to get lambasted earlier on when used as the community platform for open source projects because of potential loss of knowledge but IRC has had the same problem unless the user had access to a bouncer.
Well, IRC was invented in 1988- like 8 years before ICQ. It was a different age and it is not surprising that it evolved organically in a way that seems bizarre nowadays.
I agree in part with the people who think lack of history is a feature. I think it’s good to treat some IRC channels as transient. I think chat history should only be used to address user inconvenience if they don’t have a persistent session, not as a way to make IRC chats eternal.
Chat history or not, mods can run an archival process that pushes logs to HTTP, but it’s always going to do other things to build “knowledge bases”: FAQs, web forums, QA software, wikis, etc.
The initial design and evolution of IRC, through say 1998, was heavily guided by what could be easily tucked into the corner of an IT budget in a CS department or ISP. Had ircd implemented a feature for server-stored channel history most IRC network operators would have turned it off or limited it to like an hour of history.
Most major support channels were paired with a website that hosted channel history archives and/or a FAQ, which would have been linked in the channel topic. e.g. http://mywiki.wooledge.org/BashFAQ
This somehow reminds me of the story about reinventing the city bus[0]. So when you want to have a support channel which is the way you describe, maybe a forum would be a better solution. Chat systems (not only irc) live from being ``synchrony''. Yes there is this patter in irc where you ask a question and get an answer several hours later. I would call this synchronization phase. Because when you then get some response mostly you can continue the conversation in a synchrony manner.
Yes I know there are benefits from history (and from other features of other chat implementations), but when you think this benefits are more imported then other features you should consider switching tools instant of try to convert irc in this other solutions.
[0] https://stanforddaily.com/2018/04/09/when-silicon-valley-accidentally-reinvents-the-city-bus/
Oh, I have the same persistent doubts about what is suitable for chat and what for forums.
(Some people likely had extreme doubts about this and decided to create Zulip, I guess.)
Ultimately, I think it’s an unsolvable problem, because some support situations are better in real-time, and some are better asynchronously. And there’s also a matter of personal taste! This seems to be confirmed by the huge amount of “things” that provide both sync and async methods of communication.
As I mentioned:
, and I can relate to the feeling, ultimately, I feel that IRC’s lack of chat history is a net negative for the world at large. Other inferior systems with chat history are more widely used, so I’m kinda forced to use Discord and Matrix, where IRC would make me much happier.
(Also, I do not have solid data, but my perception is that a huge share of IRC users run bouncers or whatever to have chat history.)
I think there are tradeoffs here, but I feel making chat history more prevalent in IRC would be a net win.