46 occurrences of the term “LiveBlocks” on the page, and giving libraries a check mark for having a LiveBlocks integration? This is marketing material.
In case anyone was wondering, this is the same Eve language that was on HN years ago. Here’s a link to the announcement of Eve winding down, from seven years ago.
Yeah, looks like there’s nothing new on the repo or blog to comment on either https://github.com/witheve/eve. Shame. Bunch of interesting ideas in here.
I prefer running web apps in my standard browser over the Electron apps 100% of the time, and one of the reasons is that I, as the end user, am empowered with choice in a way that Electron denies me. The general user experience is also better, with better integration in my desktop (which you might not expect from a web app, but it is true because my browser is better integrated than their browser) and controls the other option might deny me.
It hasn’t been much of a compatibility problem either; everything I’ve wanted to use has worked fine, so I don’t buy the line about needing separate downloads for version control either.
uBlock is also very helpful for thwarting some of the dark patterns in web app design. For example, it is trivial to block the “recommendations” sections on YouTube to avoid falling into rabbit holes there.
As another example, I’ve personally blocked almost the entire “home page” of GitHub with its mix of useless eye-grabbing information and AI crap. Ideally I’d just not use GitHub, but the network effect is strong and being able to exert my will on the interface to some extent makes it more tolerable for now.
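For the curious, these rules are one-liners in uBlock Origin’s “My filters” pane. The YouTube selectors below are commonly used cosmetic filters, but element names rot as sites redesign, so treat them as illustrative and regenerate them with the element picker when they break:

    ! Hide the recommendation sidebar on watch pages
    www.youtube.com##ytd-watch-next-secondary-results-renderer
    ! Hide the homepage feed entirely
    www.youtube.com##ytd-rich-grid-renderer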
Indeed, and this is a legit reason for users to prefer the web apps… but if you use their browser instead of your browser, you throw those sandbox benefits away. (as well as the actually kinda nice ui customization options you have in the web world)
Sure, but since Electron apps are usually just wrapped web apps anyway, might as well use them in a browser where you get to block unwanted stuff. At least if that’s a concern for you.
It’s a bit surreal to me that a guy who maintains Electron and works at Notion tries to tell me that I’m wrong about Electron while their app is so broken that I can’t even log in to it because the input fields don’t work for me.
It exists in a lot of cases to get past gatekeepers at larger companies. Buyers in these organizations use checklists for product comparison, so the lack of a desktop app can rule a product out of contention. A PWA would likely suffice, but these get surprisingly negative feedback from large organizations where the people with the control and giving the feedback are somewhat distant from the usage.
Developers respond by doing minimal work to take advantage of the desktop. The developers know they could do deeper desktop integration, but product management only want to check the box and avoid divergence from the web experience (along with a dose of MVP-itis). End users could get more value from an otherwise cruddy Electron app, if it exploited helpful desktop integrations.
Clearly, it’d be better to have a native app that exploits the desktop, but this is unlikely to happen when the customer is checking boxes (but not suggesting solid integration use cases) and PMs are overly focused on MVPs (with limited vision and experience with how desktop apps can shine.) It’s funny how these things change when it comes to native mobile apps because cross-platform apps can get dinged on enterprise checklists while PMs are willing to commit heavily.
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
WHY this is the case is a respectable topic for sure, but since I already know the outcome I’m more interested in other topics.
I believe in your perception, but I wonder how people determine this sort of thing.
It seems like an availability heuristic: if you notice an app is bad, and discover it’s made in Electron, you remember that. But if an app isn’t bad, do you even check how it was built?
Sort of like how you can always tell bad plastic surgery, but not necessarily good plastic surgery.
On macOS, there has been a shift in the past decade from noticing apps have poor UIs and seeing that they are Qt, to seeing that they are Electron. One of the problems with the web is that there’s no standard rich text edit control. Cocoa’s NSTextView is incredibly powerful, it basically includes an entire typesetting engine with hooks exposed to everything. Things like drag-and-drop, undo, consistent keyboard shortcuts, and so on all work for free if you use it. Any app that doesn’t use it, but exposes a way of editing text, sticks out. Keyboard navigation will work almost how you’re used to, for example. In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.
In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.
This is probably the best criticism of Electron apps in this thread that’s not just knee-jerk / dogpiling. It’s absolutely valid, and even for non-Electron web apps it’s a real problem. I work at a company that has its own collaborative rich-text editor based on OT, and it is both a tonne of work to maintain and extend, and also subtly (and sometimes not-so-subtly) different to every other rich text editor out there.
I’ve been using Obsidian a fair bit lately. I’m pretty sure it’s Electron-based but on OSX that still means that most of the editing shortcuts work properly. Ctrl-a and ctrl-e for start and end of line, ctrl-n and ctrl-p for next and previous line, etc. These are all Emacs hotkeys that ended up in OSX via NeXT. Want to know what the most frustrating thing has been with using Obsidian cross platform? Those Emacs hotkeys that all work on OSX don’t work on the Linux version… on the Linux version they do things like Select All or Print. Every time I switch from my Mac laptop to my Linux desktop I end up frustrated from all of the crap that happens when I use my muscle memory hotkeys.
This is something that annoys me about Linux desktops. OPENSTEP and CDE, and even EMACS, supported a meta key so that control could be control and invoking shortcuts was a different key. Both KDE and GNOME were first released after Windows keys were ubiquitous on PC keyboards that could have been used as a command / meta key, yet they copied the Windows model for shortcuts.
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
You mean middle click, right? I say that in jest, but anytime I’m on a non-Linux platform, I find myself highlighting and middle clicking, then realizing that just doesn’t work here and sadly finding the actual clipboard keys.
X11’s select buffer always annoyed me because it conflates two actions. Selecting and copying are distinct operations and need to be to support operations like select and paste to overwrite. Implicitly doing a copy-like operation is annoying and hits a bunch of common corner cases. If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps share selection to the select buffer when it’s selected, some do it when they are active and a selection exists, it’s not clear which is ‘correct’ behaviour).
The select buffer exists to avoid needing a clipboard server that holds a copy of the object being transferred, but drag and drop (which worked reliably on OPENSTEP and was always a pain on X11) is a better interaction model for that. And, when designed properly, has better support for content negotiation than the select buffer in X11. For example, on macOS I can drag a file from the Finder to the Terminal and the Terminal will negotiate the path of the file as the type (and know that it’s a file, not a string, so properly escape it) and insert it into the shell. If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal? Without massive hacks and tight coupling?
If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal?
There’s no reason why it shouldn’t on the X level - middle click does the same content negotiation as any other clipboard or drag and drop operation (in fact, it is literally the same: asking for the TARGETS property, then calling XConvertSelection with the format you want; the only difference is the second argument to XConvertSelection - PRIMARY, CLIPBOARD, or XdndSelection).
If it doesn’t work, it is probably just because the terminal doesn’t try. Which I’d understand; my terminal unconditionally asks for strings too, because knowing what is going on in the running application is a bit of a pain. The terminal doesn’t know if you are at a shell prompt or a text editor or a Python interpreter unless you hack up those programs to inform it somehow. (This is something I was fairly impressed with on the Mac, those things do generally work, but I don’t know how. My guess is massive hacks and tight coupling between their shell extensions and their terminal extensions.)
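To make the “literally the same” point concrete, here is a minimal Xlib sketch of a paste with content negotiation (error handling and INCR transfers for large selections omitted); changing the one atom marked below is all it takes to read CLIPBOARD instead:

    /* cc paste.c -lX11 */
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <stdio.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 1, 1, 0, 0, 0);
        Atom sel  = XA_PRIMARY;  /* the only thing CLIPBOARD would change */
        Atom tgts = XInternAtom(dpy, "TARGETS", False);
        Atom utf8 = XInternAtom(dpy, "UTF8_STRING", False);
        Atom prop = XInternAtom(dpy, "PASTE_BUF", False);

        /* Step 1: ask the selection owner which formats it offers. */
        XConvertSelection(dpy, sel, tgts, prop, win, CurrentTime);
        XEvent ev;
        for (;;) {
            XNextEvent(dpy, &ev);
            if (ev.type != SelectionNotify) continue;
            if (ev.xselection.property == None) break;  /* owner refused */
            Atom type; int fmt; unsigned long n, rest;
            unsigned char *data = NULL;
            XGetWindowProperty(dpy, win, prop, 0, 65536, True, AnyPropertyType,
                               &type, &fmt, &n, &rest, &data);
            if (ev.xselection.target == tgts) {
                /* Step 2: data now holds a list of Atoms; a real client would
                 * pick the richest format it understands. We just ask for
                 * UTF-8 text. */
                XConvertSelection(dpy, sel, utf8, prop, win, CurrentTime);
                if (data) XFree(data);
            } else {
                if (data) {
                    printf("%.*s\n", (int)n, data);  /* the pasted bytes */
                    XFree(data);
                }
                break;
            }
        }
        XCloseDisplay(dpy);
        return 0;
    }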
need to be to support operations like select and paste to overwrite
Eh, I made it work in my library! I like middle click a lot and frequently double click one thing to select it, then double click followed by middle click in another to replace its content. Heck, that’s how I do web links a great many times (I can’t say a majority, but several times a day). Made me a little angry that it wouldn’t work in the mainstream programs, so I made it work in mine.
It is a bit hacky though: it does an automatic string copy of the selection into an internal buffer of the application when replacing the selection. Upon pasting, if it is asked to paste the current selection over itself, it instead uses that saved buffer. Theoretically pure? Nah. Practically perfect? Yup. Works For Me.
If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps share selection to the select buffer when it’s selected, some do it when they are active and a selection exists, it’s not clear which is ‘correct’ behaviour).
You know, I thought this was in the spec and loaded it up to prove it and… it isn’t. lol. It is clear to me what the correct behavior is (asserting ownership of the global selection just when switching between programs is obviously wrong - it’d make copy/paste between two programs with a background selection impossible, since trying to paste in one would switch the active window, which would change the selection, which is just annoying). I’d assert the selection if and only if it is an explicit user action to change the selection or to initiate a clipboard cut/copy command, but yeah, the ICCCM doesn’t go into any of this and neither does any other official document I’ve checked.
tbh, I think this is my biggest criticism of the X ecosystem in general: there are little bits that are underspecified. In some cases, they just never defined a standard, though it’d be easy, and thus you get annoying interop problems. In other cases, like here, they describe how you should do something, but not when or why you should do it. There’s a lot to like about “mechanism, not policy” but… it certainly has its downsides.
Fair points and a difference of opinion probably driven by difference in use. I wasn’t even thinking about copying and pasting files, just textual snippets. Middle click from a file doesn’t work, but dragging and dropping files does lead to the escaped file path being inserted into the terminal.
I always appreciate the depth of knowledge your comments bring to this site, thank you for turning my half-in-jest poke at MacOS into a learning opportunity!
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
You know, I’m always ashamed to say that, and I won’t rate the % that it figures into my decision, but me too. For me, the thing I really like is that I can use full vim mode in JetBrains tools, but all my Mac keyboard shortcuts also work well. Because the mac command key doesn’t interfere ever with vim mode. And same for terminal apps. But the deciding feature is really JetBrains… PyCharm Pro on Mac is so much better than PyCharm Pro on Linux just because of how this specific bit of behavior influences IdeaVim.
I also like Apple’s hardware better right now, but all things being equal, this would nudge me towards mac.
Nothing to be ashamed of. I’m a diehard Linux user. I’ve been at my job 3 years now, that entire time I had a goal to get a Linux laptop, I’ve purposefully picked products that enabled that and have finally switched, and I intend to maintain development environment stuff myself (this is challenging because I’m not only the only Linux engineer, I’m also the only x86 engineer).
I say all this to hammer home that despite how much I prefer Linux (many, many warts and all), this is actually one of the biggest things by far that I miss about my old work Mac.
Plus we live in a world now where we expect tools to be released cross-platform, which means that I think a lot of people compare an electron app on, say, Linux to an equivalent native app on Linux, and argue that the native app would clearly be better.
But from what I remember of the days before Electron, what we had on Linux was either significantly worse than the same app released for other platforms, or nothing at all. I’m thinking particularly of Skype for Linux right now, which was a pain to use and supported relatively few of the features other platforms had. The Electron Skype app is still terrible, but at least it’s better than what we had before.
Weird, all the ones I’ve used have been excellent with great UX. It’s the ones that go native that seem to struggle with their design. Prolly because xml is terrible for designing apps
I’d really like to see how you and the parent comment author interact with your computer. For me electron apps are at best barely useable, ranging from terrible UI/UX but with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord). But then I like my margins to be set to 0 and information density on my screen to approximate the average circa-2005 Japanese website. For instance Ripcord (https://cancel.fm/ripcord/static/ripcord_screenshot_win_6.png) is infinitely more pleasant for me to use than Discord.
But most likely some people disagree - from the article:
The McDonald’s ordering kiosk, powering the world’s biggest food retailer, is entirely built with Chromium.
I’m really amazed for instance that anyone would use McDonald’s kiosks as an example of something good - you can literally see most of these poor things stutter with 10fps animations and constant struggles to show anything in a timely manner.
I’d really like to see how you and the parent comment author interact with your computer. For me electron apps are at best barely useable, ranging from terrible UI/UX but with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord).
IDK, Slack literally changed the business world by moving many companies away from email. As it turns out, instant communication and the systems Slack provided to promote communication almost certainly resulted in economic growth as well as the ability to increase remote work around the world. You can call that “insulting” but it doesn’t change the facts of its market- and mind-share.
Emoji reactions, threads, huddles, screen sharing are all squarely in the realm of UX and popularized by Slack. I would argue they wouldn’t have been able to make Slack so feature packed without using web tech, especially when you see their app marketplace which is a huge UX boon.
Slack is not just a “chat app”.
If you want a simple text-based chat app with 0-margins then use IRC.
I could easily make the same argument for VSCode: you cannot ignore the market- and mind-share. If the UX was truly deplorable then no one would use it.
Everything else is anecdotal and personal preference which I do not have any interest in discussing.
If you want a simple text-based chat app with 0-margins then use IRC.
I truly miss the days when you could actually connect to Slack with an IRC client. That feature went away in… I dunno, 2017 or so. It worked fabulously well for me.
Yeah Slack used to be much easier to integrate with. As a user I could pretty easily spot the point where they had grown large enough that it was time to start walling in that garden …
This is not a direct personal attack or criticism, but a general comment:
I find it remarkable that, when I professionally criticise GNOME, KDE and indeed Electron apps in my writing, people frequently defend them and say that they find them fine – in other words, as a generic global value judgement – without directly addressing my criticisms.
I use one Electron app routinely, Panwriter, and that’s partly because it tries to hide its UI. It’s a distraction-free writing tool. I don’t want to see its UI. That’s the point. But the UI it does have is good and standards-compliant. It has a menu bar; those menus appear in the OS’s standard place; they respond to the standard keystrokes.
My point is:
There are objective, independent standards for UI, of which IBM CUA is the #1 and the Mac HIG are the #2.
“It looks good and I can find the buttons and it’s easy to work” does not equate to “this program has good UI.”
It is, IMHO, more important to be standards-compliant than it is to look good.
Most Electron apps look like PWAs (which I also hate). But they are often pretty. Looking good is nice, but function is more important. For an application running on an OS, fitting in with that OS and using the OS’s UI is more important than looking good.
But today ISTM that this itself is an opinion, and an unusual and unpopular one. I find that bizarre. To me it’s like saying that a car or motorbike must have the standard controls in the standard places and they must work in the standard way, and it doesn’t matter if it’s a drop-dead beautiful streamlined work of art if those aren’t true. Whereas it feels like the prevailing opinion now is that a streamlined work of art with no standard controls is not only fine but desirable.
Confirmation bias is cherry picking evidence to support your preconceptions. This is simply having observed something (“all Electron apps I’ve used were terrible”), and not being interested in why — which is understandable since the conclusion was “avoid Electron”.
It’s okay at some point to decide you have looked at enough evidence, make up your mind, and stop spending time examining any further evidence.
Yes, cherry picking is part of it, but confirmation bias is a little more extensive than that.
It also affects when you even seek evidence, such as only checking what an app is built with when it’s slow, but not checking when it’s fast.
It can affect your interpretation and memory as well. E.g., if you already believe electron apps are slow, you may be more likely to remember slow electron apps and forget (if you ever learned of) fast electron apps.
Don’t get me wrong, I’m guilty of this too. Slack is the canonical slow electron app, and everyone remembers it. Whereas my 1Password app is a fast electron app, but I never bothered to learn that until the article mentioned it.
All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds. And if your data collection and interpretation are biased, it doesn’t matter how much of it you’ve collected. (E.g., the disastrous 1936 Literary Digest prediction of Landon defeating Roosevelt, which polled millions of Americans, but from non-representative automobile and telephone owners.)
We’re talking about someone who stopped seeking evidence, so it doesn’t apply here.
All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds.
So would I.
And it doesn’t help that apparently different people have very different criteria for what constitutes acceptable performance. My personal criterion would be “within an order of magnitude of the maximum achievable”. That is, if it is 10 times slower than the fastest possible, that’s still acceptable to me in most settings. Thing is though, I’m pretty sure many programs are three orders of magnitude slower than they could be, and I don’t notice because when I click a button or whatever they still react in fewer frames than I can consciously perceive — but that still impacts battery life, and still necessitates a faster computer than the task requires. Worse, in practice I have no idea how much slower than necessary an app really is. The best I can do is notice that a similar app feels snappier, or doesn’t use as many resources.
It still applies if they stopped seeking evidence because of confirmation bias.
Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.
And even if you were right, and confirmation bias led them to think they have enough evidence even though they do not, and then stopped seeking, the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.
Ceasing to seek evidence does not confirm anything, by the way. It goes both ways: either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens it and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.
Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.
As a default stance, that’s more likely to be wrong than right.
Which of these two scenarios is more likely: that the users in this thread carefully weighed the evidence in an unbiased manner, examining both electron and non-electron apps, seeking both confirmatory and disconfirmatory evidence… or that they made a gut judgment based on a mix of personal experience and public perception?
The second is way more likely.
the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.
It’s the reason behind stopping, not the act itself, that can constitute “confirmation bias”.
… either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens it and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.
As a former neuroscientist, I can assure you, you’re using an overly narrow definition not shared by the actual psychology literature.
Confirmation bias (also confirmatory bias, myside bias, or congeniality bias) is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values. People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs.
Sounds like a reasonable definition, not overly narrow. And if you as a specialist disagree with that, I encourage you to correct the Wikipedia page. Assuming however you do agree with this definition, let’s pick apart the original comment:
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
WHY this is the case is a respectable topic for sure, but since I already know the outcome I’m more interested in other topics.
Let’s see:
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
If that’s true, that’s not confirmation bias — because it’s true. If it isn’t, yeah, we can blame confirmation bias for ignoring good Electron apps. Maybe they only checked when the app was terrible or something? At this point we don’t know.
Now one could say with high confidence this is confirmation bias, if they personally believe a good proportion of Electron apps are not terrible. They would conclude it is highly unlikely that the original commenter really only stumbled on terrible Electron apps, so they must have ignored (or failed to notice) the non-terrible ones. Which indeed would be textbook confirmation bias.
But then you came in and wrote:
since I already know the outcome
This is exactly what confirmation bias refers to.
Oh, so you were seeing the bias in the second paragraph:
WHY this is the case is a respectable topic for sure, but since I already know the outcome I’m more interested in other topics.
Here we have someone who decided they had seen enough, and decided to just avoid Electron and move on. Which I would insist is a very reasonable thing to do, should the first paragraph be true (which it is, as far as they’re concerned).
Even if the first paragraph was full of confirmation bias, I don’t see any here. Specifically, I don’t see any favouring of supporting information, or any disfavouring of contrary information, or misinterpreting anything in a way that suits them. And again, if you as a specialist say confirmation bias is more than that, I urge you to correct the Wikipedia page.
is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values
But… Wikipedia already agrees with me here. This definition is quite broad in scope. In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.
If that’s true, that’s not confirmation bias — because it’s true.
Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not. Science is not served by only seeking to confirm what we know. As Karl Popper put it, scientists should always aim to falsify their theories. Plus, doing so assumes the conclusion; we might only think we know the truth, but without seeking to disconfirm, we’d never find out.
I don’t see any favouring of supporting information, or any disfavouring of contrary information, or misinterpreting anything in a way that suits them
Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”. It’s a scientific approach to our own cognition that has to be cultivated.
To reiterate, it’s most likely we’re biased, haven’t done the self-reflection to see that, and haven’t systematically investigated electron vs non-electron performance to state anything definitively.
And I get it, too. We only have so many hours in the day, we can’t investigate everything 100%, and quick judgments are useful. But, they trade off speed for accuracy. We should strive to remember that, and be humble instead of overconfident.
In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.
As long as you’re saying “biased search”, and “biased lack of search”. The mere absence of search is not in itself a bias.
quick judgments are useful. But, they trade off speed for accuracy.
Yup. Note that this trade-off is a far cry from actual confirmation bias.
If that’s true, that’s not confirmation bias — because it’s true.
Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not.
Wait, I think you’re misinterpreting the “it” in my sentence. By “it”, I meant literally the following statement: “I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.”
That statement does not say whether all Electron apps are terrible or whether Electron makes apps terrible, or anything like that. It states what had been directly observed. And if it is true that:
They used many Electron apps.
They’ve all been terrible.
Every time there was an alternative it was better.
Then believing and writing those 3 points is not confirmation bias. It’s just stating the fact as they happened. If on the other hand it’s not true, then we can call foul:
If they only used a couple Electron apps, that’s inflating evidence.
If not all Electron apps they used have been terrible, there’s confirmation bias for omitting (or forgetting) the ones that weren’t.
If sometimes the alternative was worse, again, confirmation bias.
As Karl Popper put it, scientists should always aim to falsify their theories.
For the record I’m way past Popper. He’s not wrong, and his heuristic is great in practice, but now we have probability theory. Long story short, the material presented in E. T. Jaynes’ Probability Theory: the Logic of Science should be part of the mandatory curriculum, before you even leave high school — even if maths and science aren’t your chosen field.
One trivial, yet important, result from probability theory, is that absence of evidence is evidence of absence: if you expect to see some evidence of something if it’s true, then not seeing that evidence should lower your probability that it is true. The stronger you expect that evidence, the further your belief ought to shift.
Which is why Popper’s rule is important: by actively seeking evidence, you make it that much more probable to stumble upon it, should your theory be false. But the more effort you put into falsifying your theory, and failing, the more likely your theory is true. The kicker, though, is that it doesn’t apply to just the evidence you actively seek out, or the experimental tests you might do. It applies to any evidence, including what you passively observe.
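Spelled out, the absence-of-evidence point is one line of Bayes. With H the hypothesis and E the evidence you expected to see:

    P(H | not E) = P(H) * P(not E | H) / P(not E)

If P(E | H) > P(E), then P(not E | H) < P(not E), so the ratio is below 1 and P(H | not E) < P(H): failing to see the evidence you expected lowers your probability, and the stronger the expectation, the bigger the drop.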
Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”.
Oh no you don’t. We’re all fallible mortals, all potentially biased, so I can quote a random piece of text, say “This is exactly what confirmation bias refers to” and that’s okay because surely the human behind it has confirmation bias like the rest of us even if they aren’t aware of it, right? That’s a general counter argument, it does not work that way.
There is a way to assert confirmation bias, but you need to work from your own prior beliefs:
Say you have very good reasons to believe that (i) at least half of Electrons app are not terrible, and (ii) confirmation bias is extremely common.
Say you accept that they have used at least 10 such apps. Under your prior, the random chance they’ve all been terrible is less than 1 in a thousand. The random chance that confirmation bias is involved in some way however, is quite a bit higher.
Do the math. What do you know, it is more likely this comment is a product of confirmation bias than actual observation.
Something like that. It’s not exact either (there’s selection bias, the possibility of “many” meaning only “5”, the fact we probably don’t agree on the definitions of “terrible” and “better”), but you get the gist of it: you can’t invoke confirmation bias from a pedestal. You have to expose yourself a little bit, reveal your priors at the risk of other people disputing them, otherwise your argument falls flat.
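Spelling out that arithmetic with fully made-up illustrative numbers, just to show the mechanics:

    P(report | no bias) = (1/2)^10 = 1/1024 ≈ 0.001   (10 independent apps, half
                                                       of all apps assumed fine)
    prior odds of bias  = 1 : 9                        (assume 10% of commenters
                                                       are biased this way)
    P(report | bias)    = 0.5                          (assumed)

    posterior odds = (0.5 / 0.001) * (1 / 9) ≈ 57 : 1 in favour of bias

Every number above is an assumption; the point is only that a report of “ten out of ten were terrible” carries a large likelihood ratio once your prior says half the apps are fine.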
Our comments are getting longer and longer, we’re starting to parse minutiae, and I just don’t have the energy to add in the Bayesian angle and keep it going today.
It’s been stimulating though! I disagree, but I liked arguing with someone in good faith. Enjoy your weekend out there.
Am I the only one who routinely looks at every app I download to see what toolkit it’s using? Granted, I have an additional reason to care about that: accessibility.
Who cares? “All electron apps are terrible, most non-electron apps are not” is enough information to try to avoid Electron, even if it just so happens to be true for other reasons (e.g maybe only terrible development teams choose Electron, or maybe the only teams who choose Electron are those under a set of constraints from management which necessarily will make the software terrible).
I totally agree. Specifically, people arguing over bundle size is ridiculous when compared to the amount of data we stream on a daily basis. People complain that a website requires 10mb of JS to run but ignore the GBs in python libs required to run an LLM – and that’s ignoring the model weights themselves.
There’s a reason why Electron continues to dominate modern desktop apps and it’s pride that is clouding our collective judgement.
As someone who complains about Electron bundle size, I don’t think the argument of how much data we stream makes sense.
My ISP doesn’t impose a data cap—I’m not worried about the download size. However, my disk does have a fixed capacity.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Is it because it uses Electron that the bundle size is so large? Or could it be due to the developer not taking any care with the bundle size?
Offhand I think a copy of Electron or CEF is about 120-150MB in 2025, so the whole bundle being 600MB isn’t entirely explained by just the presence of that.
Hm. I may be looking at a compressed number? My source for this is that when the five or so different Electron-or-CEF based apps that I use on Windows update themselves regularly, each of them does a suspiciously similar 120MB-150MB download each time.
I don’t think the streaming data comparison makes sense. I don’t like giant electron bundles because they take up valuable disk space. No matter how much I stream my disk utilisation remains roughly the same.
Interesting, I don’t think I’ve ever heard this complaint before. I’m curious, why is physical disk space a problem for you?
Also “giant” at 100mb is a surprising claim but I’m used to games that reach 100gb+. That feels giant to me so we are orders of magnitude different on our definitions.
Also, the problem of disk space and memory capacity becomes worse when we consider the recent trend of non-upgradable/non-expandable disk and memory in laptops. Then the money really counts.
Modern software dev tooling seems to devolve to copies of copies of things (Docker, Electron) in the name of standardizing and streamlining the dev process. This is a good goal, but, hope you shelled out for the 1TB SSD!
I believe @Loup-Vaillant was referring to 3D eye candy, which I think you know is different from the Electron eye candy people are referring to in other threads.
A primary purpose of games is often to show eye candy. In other words, sure games use more disk space, but the ratio of disk space actually used / disk space inherently required by the problem space is dramatically lower in games than in Electron apps. Context matters.
I care because my phone has limited storage. I’m at a point where I can’t install more apps because they’re so unnecessarily huge. When apps take up more space than personal files… it really does suck.
And many phones don’t have expandable storage via sdcard either, so it’s eWaste to upgrade. And some builds of Android don’t allow apps to be installed on external storage either.
Native libraries amortize this storage cost via sharing, and it still matters today.
Does Electron run on phones? I had no idea, and I can’t find much on their site except info about submitting to the MacOS app store, which is different to the iOS app store.
Well, Electron doesn’t run on your phone, and Apple doesn’t let apps ship custom browser engines even if they did. Native phone apps are still frequently 100mb+ using the native system libraries.
It’s not Electron but often React Native, and other Web based frameworks.
There’s definitely some bloated native apps, but the minimum size is usually larger for the web based ones. Just shipping code as text, even if minified, is a lot of overhead.
Offhand I think React Native’s overhead for a “Hello world” app is about 4MB on iOS and about 10MB on Android, though you have to turn on some build system features for Android or you’ll see a ~25MB apk.
Just shipping code as text, even if minified, is a lot of overhead.
I am not convinced of this in either direction. Can you cite anything, please? My recollection is uncertain, but I think I’ve seen adding a line of source code to a C program produce object code that grew by more bytes than the source line itself added. And C is a PL which tends towards small object code size, and that’s without gzipping the source code or anything.
I don’t have numbers but I believe on average machine code has higher information density than the textual representation, even if you minify that text.
So if you take a C program and compile it, generally the binary is smaller than the total text it is generated from. Again, I didn’t measure anything, but knowing a bit about how instructions are encoded makes this seem obvious to me. Optimizations can come into play, but I doubt it would change the outcome on average.
I think I’ve seen adding a line of source code to a C program produce object code that grew by more bytes than the source line itself added
That’s different to what I’m claiming. I’d wager that change caused more machine code to be generated because before some of the text wasn’t used in the final program, i.e. was dead code or not included.
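For anyone who wants a data point instead of a wager, it’s easy to measure: compile a C file and compare the source byte count with the size of the text section of the object code (section sizes avoid counting symbol tables and debug info). No numbers claimed here; results swing with optimization level and with how much of the file is declarations versus logic:

    /* density.c
     * cc -O2 -c density.c && wc -c density.c && size -A density.o */
    #include <stdio.h>

    /* a little real logic, so the object file isn't all boilerplate */
    int sum(const int *a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(void) {
        int a[] = {1, 2, 3, 4};
        printf("%d\n", sum(a, 4));
        return 0;
    }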
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
IPA and APK files are both zip archives. I’m not certain about iOS but installed apps are stored compressed on Android phones.
I’m not sure about APK, but IPAs are only used for distribution and are unpacked on install. Basically like a .deb, or a DMG on macOS.
So AFAIK, it’s not relevant for disk space.
FWIW phone apps that embed HTML are using WebView (Android) or WKWebView (iOS). They are using the system web renderer. I don’t think anyone (except Firefox for Android) is bundling their own copy of a browser engine because I think it’s economically infeasible.
Funnily enough, there’s a level of speculation that one of the cited examples (Call of Duty games) is large specifically to make storage crunch a thing. You already play Call of Duty, so you want to delete our 300GB game and then have to download it again later to play other games?
Kind of a weird take. For most writers your options are either no image at all, stock photo that probably doesn’t fit well, or AI that I can tell to do something at least slightly related to the content.
Also, saying that the “lack of effort reflects on the content” is rich when you admit that you lost interest…because of the banner image.
Also, saying that the “lack of effort reflects on the content” is rich when you admit that you lost interest…because of the banner image.
The writer writes one article, the reader reads from a selection of n articles, and must have some way of determining which are relatively worth their time to read. When one article opens with a full-page distraction that doesn’t bear on the content, my heuristic tells me this one’s probably not it.
I find that DNS-based ad blocking works surprisingly well, and there are lots of ways to get that (depending on your OS). I guess not many people use it, or the evil people would be working harder to get around it.
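The lowest-tech version is a hosts file that points ad-serving domains at a black hole; widely used blocklists (StevenBlack’s, for example) ship tens of thousands of entries in exactly this format, and DNS sinkholes like Pi-hole automate the same idea for a whole network. The domains below are illustrative:

    # /etc/hosts
    0.0.0.0 ads.example.com
    0.0.0.0 tracker.example.net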
uBlock-style adblocking removes the whole ad element rather than just blocking the request the ad element makes, and that makes a lot of difference. I guess it depends on your perspective. But the former to me feels almost utopian.
Custom rules and the element picker is another great differentiator. The ability to easily get rid of stuff that you find annoying but the blocklist maintainers don’t consider relevant is great.
I have my own custom DoH server that does adblocking, and it gets rid of nearly every ad I’ve ever seen on iOS Safari (except YouTube, but I have Rehike to handle YouTube requests).
I never browse the web on mobile, it’s such a horrid experience and not because of ads. Maybe some day it’ll change but tbh I use my phone for YouTube / Music and phone calls and the camera and that’s it. It’s just god awful at browsing in my experience.
Firefox Mobile is pretty good. It’s a shame they’ve stopped supporting the platform it was born in (Linux Maemo-Meego-Mer, now succeeded by SailfishOS). Niche mobile OSes need a modern browser to be a viable alternative. My N9 was usable way past its expiration date because it had a relatively fresh Firefox.
It’s also a shame they dropped their classic architecture and their customization ethos. I understand there were security issues, but they should have tried not to throw the baby out with the bathwater. Vimperator was the very best browsing experience I have had on any platform, and it’s gone. Vim in the browser with nearly zero glitches, fast and no ads.
It’s also a shame they dropped their classic architecture and their customization ethos. I understand there were security issues, but they should have tried not to throw the baby out with the bathwater. Vimperator was the very best browsing experience I have had on any platform, and it’s gone. Vim in the browser with nearly zero glitches, fast and no ads.
From what I remember, it wasn’t security issues. It’s that they tried several times with things like the Jetpack API before admitting to themselves that turning every bit of the browser internals into a de facto API surface was hogtying their ability to re-architect to catch up with Chrome’s multi-process architecture.
Reminds me of what a PITA it is to get all the edge-cases right when extending Vim when everything is monkey-patching keybindings willy-nilly as if they’re DOS TSRs or Mac OS INITs instead of using a proper plugin API.
That’s true, but security also played a role. There’s a post in Tridactyl’s mailing / issues list (can’t find it now) explaining how Mozilla is reluctant to give plugins the ability to change too much of the UI. Therefore, their new APIs do not offer that possibility. There were talks about creating APIs for privileged plugins, but it never panned out. A shame.
I’ve been using Cromite for a relatively long time. It has a relatively good ad blocker and a dark reader — which are the replacements for the only two extensions I have in Firefox on mobile. Since Firefox added process isolation for tabs, I’m back to using it though.
Can anyone explain what Rama actually is? I’ve been aware of it for two years, occasionally read around etc. and it just seems baffling.
Rama is a platform for developing backends, general enough to handle all the computation and storage for pretty much any application at any scale. Rama is deployed as a cluster, and on that cluster you can deploy any number of Rama applications called “modules”. Module are programmed in either Java or Clojure.
Modules are event-sourced, so all information comes in through distributed logs called “depots”. In your module you code “topologies” which react to incoming data on depots to materialize any number of indexed datastores called PStates (“partitioned state”). PStates are defined as any combination of data structures and can thus represent any existing data model of any database plus infinite more.
All state in Rama is incrementally replicated, so if any nodes are lost a follower is ready to take over for that node immediately.
You can see short, heavily commented examples of using Rama for a variety of use cases in rama-demo-gallery.
Does this mean it’s like BEAM for the JVM, letting you write distributed Clojure like Elixir?
Well, Rama’s not a VM. You express distributed code with Rama with its dataflow API, which is much more concise and expressive than Erlang-style message passing. It also provides higher-level guarantees about data processing than Erlang (e.g. microbatch topologies in Rama have exactly-once semantics and are cross-partition transactions for completely arbitrary distributed code in every case).
I thought it was something like SpacetimeDB - a system for building multiplayer applications (games or otherwise, though Spacetime seems to be for games) where your logic lives “inside” the datastore rather than being a separate app that reads/writes from it.
SpacetimeDB describes itself like this (if Rama’s different this won’t be helpful):
You can think of SpacetimeDB as both a relational database and a server combined into one. Instead of deploying a web or game server that sits in between your clients and your database, clients connect directly to the database and execute your logic inside the database itself.
It’s unclear to me if there’s referential integrity across these various PStates. Or transactions across them. Anyone know?
No, and this also isn’t important in Rama’s model since PStates are not your source of truth. Depots are the source of truth and the definition of the data can and should be completely normalized. Any consistency issue in your PStates can be fixed with a recompute from depot data.
In RDBMS’s, referential integrity only protects against a subset of possible data integrity issues. When you need to denormalize your schemas in an RDBMS (due to being forced to for performance reasons), maintaining data integrity becomes an application problem. And this should never be the case for a supposed source of truth.
I don’t see how normalisation or denormalization affects whether I want transactions or not.
Can a user write into the depot with a transaction? Concretely: can they observe some state in the depot or a pstate at time T1 and then write something to the depot at time T2 only if that observed state is still valid?
Or do you get around it by having the depot be an atomic message log that you write higher-level messages to, and then each pstate can get access to that log?
Or something more exotic?
Rama has two kinds of topologies, streaming and microbatching.
Microbatch topologies are cross-partition transactions for every PState in every case. Our bank transfer example relies on this.
Stream topologies are transactional for all PState changes on one partition of a module done in the same event, no matter how many.
You don’t need transactions for depots, since all data that must be atomic should just be in the same depot record. Like for bank transfer, you append one depot record that says “Transfer $10 from Alice to Bob” and not two depot records saying “Deduct $10 from Alice” and “Give $10 to Bob”.
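A sketch of the shape, with made-up pseudo-records rather than Rama’s actual serialization. The invariant lives inside a single record instead of in coordination between records:

    // One record, atomic by construction:
    {"event": "transfer", "from": "alice", "to": "bob", "amount": 10}

    // Two records, which would reintroduce the coordination problem:
    {"event": "withdraw", "account": "alice", "amount": 10}
    {"event": "deposit",  "account": "bob",   "amount": 10}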
The ACID page in the docs goes into this in more detail.
No clue what kind of outfit this is, but “gentle” and “gentile” are not synonyms.
I think it’s just a typo
Probably meant “genteel” because ASP is so stately and refined. ;-)
Where’s eye-tracking at? I saw a demo video of sadly-now-abandoned app called Talon Voice which appeared to show great eye-tracking capabilities. That would get rid of a lot of my mouse needs!
Oh, so it’s not the MUI some people are already familiar with.
Or even an older MUI :)
In your code snippets some of the capital letters (outside of strings) are coloured a dark red.
e.g. in “F” and “I” in the following bit of code
Is that a quirk of the syntax highlighter? If not then what does it indicate?
I know, it’s a bug in chroma, or rather in chroma’s Gleam spec (reported here)
Tomorrow I’m getting my hologram microcosm to complement my Moogs, Mother-32 and DFAM. So I’ll be spending time just synthing around and making noise (alone or with the family, my 1 year old loves the knobs and lights and sounds)
$PERSONAL:
I’ve finally started working on a game idea I’ve had for 6-7 years, yesterday I got a free day and spent all day pushing the project forward. The units are programmable using Uxn and I was working on an in-game debugger for them. I’ve decided to build it in the open and write devlogs at least weekly. Here’s the itch.io page for the project plus the devlogs.
$WORK: Final cleanup of the last release we did, fixing some bugs. General janitorial work.
The microcosm looks very sick. I’ve been eyeing it or the Chroma Console for a purchase sometime in the future. I also have little ones who like synths. There’s nothing cuter than my 3yo asking if we can play the “sympathizer” as he calls it. He “got me” a Dato Duo for Christmas this year and we jam on it all the time. Highly recommend for parents who want to “jam” with young kids.
Oh! The dato duo looks amazing! We’ll definitely get it once she is a bit older :D Thank you!
I wouldn’t know what to do with it, but that’s a beautiful piece of hardware
It’s an audio mixer/looper doohickey! It lets you dial in a sound distortion/effect you like, then record, loop, and play snippets live while adding other effects. Looks like this one can also record multiple samples and mix them together in different ways? I wouldn’t know what to do either, but if you’re good you can make things like this or this.
While it does have a small phrase looper, the main attraction of the Microcosm isn’t really the type of music you sent. The Microcosm takes sound you give it, chops it up into little pieces, and then has several ways of producing sound using those tiny pieces. For example it may take some of the pieces and play them back to you backwards in very small loops that are randomly pitch shifted. It is very good at adding depth to music by layering sounds related to what you just played using randomness.
Here is an example: https://www.youtube.com/watch?v=mDp9S50qyT4
You can see here how it adds depth after he toggles the pedal on at 1:42.
It has 11 different ways it does that, with 4 variations each.
Couldn’t find out what interaction nets are from the docs but this seems to be the banner language feature: https://vine.dev/docs/features/inverse
Interaction nets are an alternate model of computation (along the lines of the lambda calculus or Turing machines). They have several interesting properties, one of the most notable being that they are fundamentally parallel. https://en.wikipedia.org/wiki/Interaction_nets https://t6.fyi/guides/inets
The inverse operator is one of the most unusual and powerful language features. It’s not the main pitch or anything though – the main goal of the language is to explore the design space of interaction-net-based programming languages. I think there are many potential applications for interaction nets in parallel & distributed computing, among other fields; and such applications will need a language.
For me that “inverse operator” looks like a way of having lazy evaluation in a mostly eager language, as it is similar to the Tardis monad.
This sounds faintly reminiscent of X, which I found fascinating as an undergraduate (a couple of decades ago!)
That is a mind-bending idea. Worth a read. My kneejerk response is “wow, that’ll be hard to reason about” but it’d be unfair to write the idea off with such little consideration.
I could be misunderstanding, but I think they’re just promises and futures, but used in a single-threaded instead of a multithreaded context.
Yesterday I did my first serious editing using Ki. I was impressed with its novel take on modal editing. I felt productive quickly despite its minimal documentation and its being different in almost every respect from Vim, Kakoune, and Helix.
I’ve been tossing up switching from Vim keys (which I never really learned very well, I’m very far from a power user, I don’t even HJKL). Could you expand on what you liked about Ki?
The biggest thing keeping me on Vim is that I like my IDEs, and only Vim has a good implementation on VSCode so far as I know.
Another option would be to actually learn Vim properly…
I feel like I’m going crazy. I do a lot of full stack work and none of this homepage makes any sense to me
Yeah, me neither. The docs homepage has more details, but at the end I still don’t know what it really is, why/when I would want it, or how to build something with it.
I think it might be similar to ElectricSQL (automatically-synced apps?) but at least with ElectricSQL I understand what it is.
I think Skip needs to put a link to a Todo app implementation on their front page.
They have something of an example noted here.
46 occurrences of the term “LiveBlocks” on the page, and giving libraries a check mark for having a LiveBlocks integration? This is marketing material.
regression from a recent code change: https://github.com/lobsters/lobsters/issues/1445
That explains it - thanks!
In case anyone was wondering, this is the same Eve language that was on HN years ago. Here’s a link to the announcement of Eve winding down, from seven years ago.
Yeah, looks like there’s nothing new on the repo or blog to comment on either https://github.com/witheve/eve. Shame. Bunch of interesting ideas in here.
Yep. I was really sad to see Eve go. Such a neat project.
I prefer running web apps in my standard browser over Electron apps 100% of the time, and one of the reasons is that as the end user I’m empowered with choices that Electron denies me. The user experience is generally better too, with better integration in my desktop (which you might not expect from a web app, but it is true because my browser is better integrated than their browser) and with controls the other might deny me.
It hasn’t been much of a compatibility problem either; everything I’ve wanted to use has worked fine, so I don’t buy the line about needing separate downloads for version control.
Electron doesn’t support uBlock… enough said
Do many native apps support ad blocking extensions? That’d be the relevant comparison here.
Also I can’t say I’ve seen many (any?) ads in Electron apps, but I suppose they’re out there.
Tracking and analytics can be an issue as well, even if there aren’t any ads visible.
uBlock is also very helpful for thwarting some of the dark patterns in web app design. For example, it is trivial to block the “recommendations” sections on YouTube to avoid falling into rabbit holes there.
As another example, I’ve personally blocked almost the entire “home page” of GitHub with its mix of useless eye-grabbing information and AI crap. Ideally I’d just not use GitHub, but the network effect is strong and being able to exert my will on the interface to some extent makes it more tolerable for now.
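For anyone curious, the rules the element picker spits out are just per-site cosmetic filters. These are roughly what mine look like (the selectors are from memory and tend to drift as sites change their markup):

! Hide the recommendations sidebar on YouTube watch pages
www.youtube.com##ytd-watch-next-secondary-results-renderer
! Hide the Shorts shelf
www.youtube.com##ytd-reel-shelf-renderer

The element picker writes rules like these for you; you only need the raw syntax if you want to tweak them by hand.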
There’s nothing stopping any native app from doing that, too, right? It’s not an Electron thing per se
Indeed, and this is a legit reason for users to prefer the web apps… but if you use their browser instead of your browser, you throw those sandbox benefits away (as well as the actually kinda nice UI customization options you have in the web world).
Sure, but since Electron apps are usually just wrapped web apps anyway, might as well use them in a browser where you get to block unwanted stuff. At least if that’s a concern for you.
I also use adblockers for removing AI prompts and such (thinking of slack here)
Particularly relevant to the author: I have no idea why a Notion app exists (which is probably broken in lots of ways) if it’s just a web site anyway.
It’s a bit surreal to me that a guy who maintains electron and works at notion tries to tell me that I’m wrong about electron while their app is so broken that I can’t even log in in it because the input fields don’t work for me.
It exists in a lot of cases to get past gatekeepers at larger companies. Buyers in these organizations use checklists for product comparison, so the lack of a desktop app can rule a product out of contention. A PWA would likely suffice, but these get surprisingly negative feedback from large organizations where the people with the control and giving the feedback are somewhat distant from the usage.
Developers respond by doing minimal work to take advantage of the desktop. The developers know they could do deeper desktop integration, but product management only want to check the box and avoid divergence from the web experience (along with a dose of MVP-itis). End users could get more value from an otherwise cruddy Electron app, if it exploited helpful desktop integrations.
Clearly, it’d be better to have a native app that exploits the desktop, but this is unlikely to happen when the customer is checking boxes (but not suggesting solid integration use cases) and PMs are overly focused on MVPs (with limited vision and experience with how desktop apps can shine.) It’s funny how these things change when it comes to native mobile apps because cross-platform apps can get dinged on enterprise checklists while PMs are willing to commit heavily.
I usually run into issues when I need to screen share or do something that requires filesystem access.
I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.
WHY this is the case is a respectable topic for sure, but since I already know the outcome I’m more interested in other topics.
I believe in your perception, but I wonder how people determine this sort of thing.
It seems like an availability heuristic: if you notice an app is bad, and discover it’s made in Electron, you remember that. But if an app isn’t bad, do you even check how it was built?
Sort of like how you can always tell bad plastic surgery, but not necessarily good plastic surgery.
On macOS, there has been a shift in the past decade from noticing apps have poor UIs and seeing that they are Qt, to seeing that they are Electron. One of the problems with the web is that there’s no standard rich text edit control. Cocoa’s NSTextView is incredibly powerful, it basically includes an entire typesetting engine with hooks exposed to everything. Things like drag-and-drop, undo, consistent keyboard shortcuts, and so on all work for free if you use it. Any app that doesn’t use it, but exposes a way of editing text, sticks out. Keyboard navigation will work almost how you’re used to, for example. In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.
This is probably the best criticism of Electron apps in this thread that’s not just knee-jerk / dogpiling. It’s absolutely valid, and even for non-Electron web apps it’s a real problem. I work at a company that has its own collaborative rich-text editor based on OT, and it is both a tonne of work to maintain and extend, and also subtly (and sometimes not-so-subtly) different to every other rich text editor out there.
I’ve been using Obsidian a fair bit lately. I’m pretty sure it’s Electron-based but on OSX that still means that most of the editing shortcuts work properly. Ctrl-a and ctrl-e for start and end of line, ctrl-n and ctrl-p for next and previous line, etc. These are all Emacs hotkeys that ended up in OSX via NeXT. Want to know what the most frustrating thing has been with using Obsidian cross platform? Those Emacs hotkeys that all work on OSX don’t work on the Linux version… on the Linux version they do things like Select All or Print. Every time I switch from my Mac laptop to my Linux desktop I end up frustrated from all of the crap that happens when I use my muscle memory hotkeys.
This is something that annoys me about Linux desktops. OPENSTEP and CDE, and even EMACS, supported a meta key so that control could be control and invoking shortcuts was a different key. Both KDE and GNOME were first released after Windows keys were ubiquitous on PC keyboards that could have been used as a command / meta key, yet they copied the Windows model for shortcuts.
More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.
You mean middle click, right? I say that in jest, but anytime I’m on a non-Linux platform, I find myself highlighting and middle clicking, then realizing that just doesn’t work here and sadly finding the actual clipboard keys.
X11’s select buffer always annoyed me because it conflates two actions. Selecting and copying are distinct operations and need to be to support operations like select and paste to overwrite. Implicitly doing a copy-like operation is annoying and hits a bunch of common corner cases. If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps share selection to the select buffer when it’s selected, some do it when they are active and a selection exists, it’s not clear which is ‘correct’ behaviour).
The select buffer exists to avoid needing a clipboard server that holds a copy of the object being transferred, but drag and drop (which worked reliably on OPENSTEP and was always a pain on X11) is a better interaction model for that. And, when designed properly, has better support for content negotiation, than the select buffer in X11. For example, on macOS I can drag a file from the Finder to the Terminal and the Terminal will negotiate the path of the file as the type (and know that it’s a file, not a string, so properly escape it) and insert it into the shell. If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal? Without massive hacks and tight coupling?
There’s no reason why it shouldn’t work on the X level - middle click does the same content negotiation as any other clipboard or drag and drop operation (in fact, it is literally the same: asking for the TARGETS property, then calling XConvertSelection with the format you want; the only difference is the second argument to XConvertSelection - PRIMARY, CLIPBOARD, or XdndSelection).
If it doesn’t work, it is probably just because the terminal doesn’t try. Which I’d understand; my terminal unconditionally asks for strings too, because knowing what is going on in the running application is a bit of a pain. The terminal doesn’t know if you are at a shell prompt or a text editor or a Python interpreter unless you hack up those programs to inform it somehow. (This is something I was fairly impressed with on the Mac, those things do generally work, but I don’t know how. My guess is massive hacks and tight coupling between their shell extensions and their terminal extensions.)
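To make that concrete, here is a rough Xlib sketch of the two-step flow described above (untested; MY_PASTE is a made-up property name, and the event-loop plumbing is left as comments):

#include <X11/Xlib.h>
#include <X11/Xatom.h>

/* Ask the PRIMARY selection owner for its contents, with content
   negotiation. Assumes an already-open Display *dpy and a Window win. */
void request_primary(Display *dpy, Window win) {
    Atom targets = XInternAtom(dpy, "TARGETS", False);
    Atom utf8    = XInternAtom(dpy, "UTF8_STRING", False);
    Atom dest    = XInternAtom(dpy, "MY_PASTE", False); /* made-up property */

    /* Step 1: ask what formats the owner offers. */
    XConvertSelection(dpy, XA_PRIMARY, targets, dest, win, CurrentTime);
    /* ...wait for SelectionNotify, then XGetWindowProperty on dest
       yields a list of Atoms (file path, UTF-8 text, ...) to pick from. */

    /* Step 2: request the format you picked; here we just assume UTF-8.
       For the clipboard or drag and drop, the only change is passing
       CLIPBOARD or XdndSelection instead of XA_PRIMARY. */
    XConvertSelection(dpy, XA_PRIMARY, utf8, dest, win, CurrentTime);
    /* ...the next SelectionNotify says the bytes are waiting in dest. */
}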
Eh, I made it work in my library! I like middle click a lot and frequently double click one thing to select it, then double click followed by middle click in another to replace its content. Heck, that’s how I do web links a great many times (I can’t say a majority, but several times a day). Made me a little angry that it wouldn’t work in the mainstream programs, so I made it work in mine.
It is a bit hacky though: it does an automatic string copy of the selection into an internal buffer of the application when replacing the selection. Upon pasting, if it is asked to paste the current selection over itself, it instead uses that saved buffer. Theoretically pure? Nah. Practically perfect? Yup. Works For Me.
You know, I thought this was in the spec and loaded it up to prove it and… it isn’t. Lol. It is clear to me what the correct behavior is (asserting ownership of the global selection just when switching between programs is obviously wrong - it’d make copy/paste between two programs with a background selection impossible, since trying to paste in one would switch the active window, which would change the selection - which is just annoying). I’d assert the selection if and only if it is an explicit user action to change the selection or to initiate a clipboard cut/copy command, but yeah, the ICCCM doesn’t go into any of this and neither does any other official document I’ve checked.
tbh, I think this is my biggest criticism of the X ecosystem in general: there are little bits that are underspecified. In some cases, they just never defined a standard, though it’d be easy, and thus you get annoying interop problems. In other cases, like here, they describe how you should do something, but not when or why you should do it. There’s a lot to like about “mechanism, not policy” but… it certainly has its downsides.
Fair points and a difference of opinion probably driven by difference in use. I wasn’t even thinking about copying and pasting files, just textual snippets. Middle click from a file doesn’t work, but dragging and dropping files does lead to the escaped file path being inserted into the terminal.
I always appreciate the depth of knowledge your comments bring to this site, thank you for turning my half-in-jest poke at MacOS into a learning opportunity!
You know, I’m always ashamed to say that, and I won’t rate the % that it figures into my decision, but me too. For me, the thing I really like is that I can use full vim mode in JetBrains tools, but all my Mac keyboard shortcuts also work well, because the Mac command key never interferes with vim mode. And same for terminal apps. But the deciding feature is really JetBrains… PyCharm Pro on Mac is so much better than PyCharm Pro on Linux just because of how this specific bit of behavior influences IdeaVim.
I also like Apple’s hardware better right now, but all things being equal, this would nudge me towards mac.
Nothing to be ashamed of. I’m a diehard Linux user. I’ve been at my job 3 years now, that entire time I had a goal to get a Linux laptop, I’ve purposefully picked products that enabled that and have finally switched, and I intend to maintain development environment stuff myself (this is challenging because I’m not only the only Linux engineer, I’m also the only x86 engineer).
I say all this to hammer home that despite how much I prefer Linux (many, many warts and all), this is actually one of the biggest things by far that I miss about my old work Mac.
Have you seen or tried Kinto?
I have not heard of it and my ability to operate a search engine to find the relevant thing is failing me.
https://kinto.sh/
“Mac-style shortcut keys for Linux & Windows”
https://github.com/rbreaves/kinto
Plus we live in a world now where we expect tools to be released cross-platform, which means that I think a lot of people compare an Electron app on, say, Linux to an equivalent native app on Linux, and argue that the native app would clearly be better.
But from what I remember of the days before Electron, what we had on Linux was either significantly worse than the same app released for other platforms, or nothing at all. I’m thinking particularly of Skype for Linux right now, which was a pain to use and supported relatively few of the features other platforms had. The Electron Skype app is still terrible, but at least it’s better than what we had before.
Yeah, I recall those days. Web tech is the only reason Linux on the desktop isn’t even worse than it was then.
Weird, all the ones I’ve used have been excellent with great UX. It’s the ones that go native that seem to struggle with their design. Prolly because XML is terrible for designing apps
I’d really like to see how you and the parent comment author interact with your computer. For me Electron apps are at best barely usable, ranging from terrible UI/UX but with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord). But then I like my margins to be set to 0 and information density on my screen to approximate the average circa-2005 Japanese website. For instance Ripcord (https://cancel.fm/ripcord/static/ripcord_screenshot_win_6.png) is infinitely more pleasant for me to use than Discord.
But most likely some people disagree - from the article:
I’m really amazed for instance that anyone would use McDonald’s kiosks as an example of something good - you can literally see most of these poor things stutter with 10fps animations and constant struggles to show anything in a timely manner.
My children - especially the 10 and 12 year old - will stand around mocking their performance while ordering food.
IDK, Slack literally changed the business world by moving many companies away from email. As it turns out, instant communication and the systems Slack provided to promote communication almost certainly resulted in economic growth as well as the ability to increase remote work around the world. You can call that “insulting” but it doesn’t change the facts of its market- and mind-share.
Emoji reactions, threads, huddles, screen sharing are all squarely in the realm of UX and popularized by Slack. I would argue they wouldn’t have been able to make Slack so feature packed without using web tech, especially when you see their app marketplace which is a huge UX boon.
Slack is not just a “chat app”.
If you want a simple text-based chat app with 0-margins then use IRC.
I could easily make the same argument for VSCode: you cannot ignore the market- and mind-share. If the UX was truly deplorable then no one would use it.
Everything else is anecdotal and personal preference which I do not have any interest in discussing.
I truly miss the days when you could actually connect to Slack with an IRC client. That feature went away in… I dunno, 2017 or so. It worked fabulously well for me.
Yeah Slack used to be much easier to integrate with. As a user I could pretty easily spot the point where they had grown large enough that it was time to start walling in that garden …
This is not a direct personal attack or criticism, but a general comment:
I find it remarkable that, when I professionally criticise GNOME, KDE and indeed Electron apps in my writing, people frequently defend them and say that they find them fine – in other words, as a generic global value judgement – without directly addressing my criticisms.
I use one Electron app routinely, Panwriter, and that’s partly because it tries to hide its UI. It’s a distraction-free writing tool. I don’t want to see its UI. That’s the point. But the UI it does have is good and standards-compliant. It has a menu bar; those menus appear in the OS’s standard place; they respond to the standard keystrokes.
My point is:
There are objective, independent standards for UI, of which IBM CUA is the #1 and the Mac HIG are the #2.
“It looks good and I can find the buttons and it’s easy to work” does not equate to “this program has good UI.”
It is, IMHO, more important to be standards-compliant than it is to look good.
Most Electron apps look like PWAs (which I also hate). But they are often pretty. Looking good is nice, but function is more important. For an application running on an OS, fitting in with that OS and using the OS’s UI is more important than looking good.
But today ISTM that this itself is an opinion, and an unusual and unpopular one. I find that bizarre. To me it’s like saying that a car or motorbike must have the standard controls in the standard places and they must work in the standard way, and it doesn’t matter if it’s a drop-dead beautiful streamlined work of art if those aren’t true. Whereas it feels like the prevailing opinion now is that a streamlined work of art with no standard controls is not only fine but desirable.
This is called confirmation bias.
No, that’s not what confirmation bias means.
This is exactly what confirmation bias refers to.
Confirmation bias is cherry picking evidence to support your preconceptions. This is simply having observed something (“all Electron apps I’ve used were terrible”) and not being interested in why — which is understandable, since the conclusion was “avoid Electron”.
It’s okay at some point to decide you have looked at enough evidence, make up your mind, and stop spending time examining any further evidence.
Yes, cherry picking is part of it, but confirmation bias is a little more extensive than that.
It also affects when you even seek evidence, such as only checking what an app is built with when it’s slow, but not checking when it’s fast.
It can affect your interpretation and memory as well. E.g., if you already believe electron apps are slow, you may be more likely to remember slow electron apps and forget (if you ever learned of) fast electron apps.
Don’t get me wrong, I’m guilty of this too. Slack is the canonical slow electron app, and everyone remembers it. Whereas my 1Password app is a fast electron app, but I never bothered to learn that until the article mentioned it.
All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds. And if your data collection and interpretation are biased, it doesn’t matter how much of it you’ve collected. (E.g., the disastrous 1936 Literary Digest prediction of Landon defeating Roosevelt, which polled millions of Americans, but from non-representative automobile and telephone owners.)
We’re talking about someone who stopped seeking evidence, so it doesn’t apply here.
So would I.
And it doesn’t help that apparently different people have very different criteria for what constitutes acceptable performance. My personal criterion would be “within an order of magnitude of the maximum achievable”. That is, if it is 10 times slower than the fastest possible, that’s still acceptable to me in most settings. Thing is though, I’m pretty sure many programs are three orders of magnitude slower than they could be, and I don’t notice because when I click a button or whatever they still react in fewer frames than I can consciously perceive — but that still impacts battery life, and still necessitates a faster computer than would otherwise be needed. Worse, in practice I have no idea how much slower than necessary an app really is. The best I can do is notice that a similar app feels snappier, or doesn’t use as many resources.
??? It still applies if they stopped seeking evidence because of confirmation bias. I’m not clear what you’re trying to say here.
Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.
And even if you were right, and confirmation bias led them to think they have enough evidence even though they do not, and then stopped seeking, the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.
Ceasing to seek evidence does not confirm anything, by the way. It goes both ways: either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens it and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.
As a default stance, that’s more likely to be wrong than right.
Which of these two scenarios is more likely: that the users in this thread carefully weighed the evidence in an unbiased manner, examining both electron and non-electron apps, seeking both confirmatory and disconfirmatory evidence… or that they made a gut judgment based on a mix of personal experience and public perception.
The second is way more likely.
It’s the reason behind stopping, not the act itself, that can constitute “confirmation bias”.
As a former neuroscientist, I can assure you, you’re using an overly narrow definition not shared by the actual psychology literature.
From Wikipedia:
Sounds like a reasonable definition, not overly narrow. And if you as a specialist disagree with that, I encourage you to correct the Wikipedia page. Assuming however you do agree with this definition, let’s pick apart the original comment:
Let’s see:
If that’s true, that’s not confirmation bias — because it’s true. If it isn’t, yeah, we can blame confirmation bias for ignoring good Electron apps. Maybe they only checked when the app was terrible or something? At this point we don’t know.
Now one could say with high confidence this is confirmation bias, if they personally believe a good proportion of Electron apps are not terrible. They would conclude highly unlikely that the original commenter really only stumbled on terrible Electron apps, so they must have ignored (or failed to notice) the non-terrible ones. Which indeed would be textbook confirmation bias.
But then you came in and wrote:
Oh, so you were seeing the bias in the second paragraph:
Here we have someone who decided they had seen enough, and decided to just avoid Electron and move on. Which I would insist is a very reasonable thing to do, should the first paragraph be true (which it is, as far as they’re concerned).
Even if the first paragraph was full of confirmation bias, I don’t see any here. Specifically, I don’t see any favouring of supporting information, or any disfavouring of contrary information, or misinterpreting anything in a way that suits them. And again, if you as a specialist say confirmation bias is more than that, I urge you to correct the Wikipedia page.
But… Wikipedia already agrees with me here. This definition is quite broad in scope. In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.
Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not. Science is not served by only seeking to confirm what we know. As Karl Popper put it, scientists should always aim to falsify their theories. Plus, doing so assumes the conclusion; we might only think we know the truth, but without seeking to disconfirm, we’d never find out.
Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”. It’s a scientific approach to our own cognition that has to be cultivated.
To reiterate, it’s most likely we’re biased, haven’t done the self-reflection to see that, and haven’t systematically investigated electron vs non-electron performance to state anything definitively.
And I get it, too. We only have so many hours in the day, we can’t investigate everything 100%, and quick judgments are useful. But, they trade off speed for accuracy. We should strive to remember that, and be humble instead of overconfident.
As long as you’re saying “biased search”, and “biased lack of search”. The mere absence of search is not in itself a bias.
Yup. Note that this trade-off is a far cry from actual confirmation bias.
Wait, I think you’re misinterpreting the “it” in my sentence. By “it”, I meant literally the following statement: “I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.”
That statement does not say whether all Electron apps are terrible or whether Electron makes apps terrible, or anything like that. It states what had been directly observed. And if it is true that:
Then believing and writing those 3 points is not confirmation bias. It’s just stating the facts as they happened. If on the other hand it’s not true, then we can call foul:
For the record I’m way past Popper. He’s not wrong, and his heuristic is great in practice, but now we have probability theory. Long story short, the material presented in E. T. Jaynes’ Probability Theory: The Logic of Science should be part of the mandatory curriculum before you even leave high school — even if maths and science aren’t your chosen field.
One trivial, yet important, result from probability theory is that absence of evidence is evidence of absence: if you expect to see some evidence of something if it’s true, then not seeing that evidence should lower your probability that it is true. The stronger you expect that evidence, the further your belief ought to shift.
Which is why Popper’s rule is important: by actively seeking evidence, you make it that much more probable to stumble upon it, should your theory be false. But the more effort you put into falsifying your theory, and failing, the more likely your theory is true. The kicker, though, is that it doesn’t apply to just the evidence you actively seek out, or the experimental tests you might do. It applies to any evidence, including what you passively observe.
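To put toy numbers on the absence-of-evidence point (the numbers are made up, purely to show the mechanics): say the prior is P(H) = 0.5, you expect evidence E with P(E|H) = 0.9 if H is true, and P(E|¬H) = 0.5 otherwise. Then failing to observe E gives

P(H|¬E) = P(¬E|H)P(H) / (P(¬E|H)P(H) + P(¬E|¬H)P(¬H))
        = (0.1 × 0.5) / (0.1 × 0.5 + 0.5 × 0.5)
        ≈ 0.17

so merely not seeing the expected evidence drops the belief from 0.5 to about 0.17, and the closer P(E|H) is to 1, the harder the drop.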
Oh no you don’t. We’re all fallible mortals, all potentially biased, so I can quote a random piece of text, say “This is exactly what confirmation bias refers to” and that’s okay because surely the human behind it has confirmation bias like the rest of us even if they aren’t aware of it, right? That’s a general counter argument, it does not work that way.
There is a way to assert confirmation bias, but you need to work from your own prior beliefs:
Something like that. It’s not exact either (there’s selection bias, the possibility of “many” meaning only “5”, the fact we probably don’t agree on the definitions of “terrible” and “better”), but you get the gist of it: you can’t invoke confirmation bias from a pedestal. You have to expose yourself a little bit, reveal your priors at the risk of other people disputing them, otherwise your argument falls flat.
Our comments are getting longer and longer, we’re starting to parse minutiae, and I just don’t have the energy to add in the Bayesian angle and keep it going today.
It’s been stimulating though! I disagree, but I liked arguing with someone in good faith. Enjoy your weekend out there.
If Robert Aumann himself can at the same time produce his agreement theorem and be religious, it’s okay for us to give up. :-)
Thanks for engaging with me thus far.
Am I the only one who routinely looks at every app I download to see what toolkit it’s using? Granted, I have an additional reason to care about that: accessibility.
No I do this too. Always interesting to see how things are built.
You should write your findings up in a post and submit it! Might settle a lot of debates in the comments 😉
Were you able to determine that they were terrible because they used Electron?
Who cares? “All Electron apps are terrible, most non-Electron apps are not” is enough information to try to avoid Electron, even if it just so happens to be true for other reasons (e.g. maybe only terrible development teams choose Electron, or maybe the only teams who choose Electron are those under a set of constraints from management which will necessarily make the software terrible).
I think one thing worth noting is this:
Emphasis mine. A lot of users won’t care if an app sort of sucks, but at least it exists.
I totally agree. Specifically, people arguing over bundle size is ridiculous when compared to the amount of data we stream on a daily basis. People complain that a website requires 10mb of JS to run but ignore the GBs in python libs required to run an LLM – and that’s ignoring the model weights themselves.
There’s a reason why Electron continues to dominate modern desktop apps and it’s pride that is clouding our collective judgement.
https://bower.sh/my-love-letter-to-front-end-web-development
As someone who complains about Electron bundle size, I don’t think the argument of how much data we stream makes sense.
My ISP doesn’t impose a data cap—I’m not worried about the download size. However, my disk does have a fixed capacity.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Mine does and many ISPs do. Disk space is dirt cheap in comparison to bandwidth costs
Citation needed.
See native (Qt-based) Telegram client.
Is it because it uses Electron that the bundle size is so large? Or could it be due to the developer not taking any care with the bundle size?
Offhand I think a copy of Electron or CEF is about ~120-150MB in 2025, so the whole bundle being 600MB isn’t entirely explained by just the presence of that.
I’m not so sure… the “minimal” spotify build is 344 MB:
And if you decompress it (well, this is an older version but I don’t wanna download another just for a Lobsters comment):
1.3 GB libcef.so, no slouch.
Hm. I may be looking at a compressed number? My source for this is that when the five or so different Electron-or-CEF based apps that I use on Windows update themselves regularly, each of them does a suspiciously similar 120MB-150MB download each time.
I don’t think the streaming data comparison makes sense. I don’t like giant electron bundles because they take up valuable disk space. No matter how much I stream my disk utilisation remains roughly the same.
Interesting, I don’t think I’ve ever heard this complaint before. I’m curious, why is physical disk space a problem for you?
Also “giant” at 100mb is a surprising claim but I’m used to games that reach 100gb+. That feels giant to me so we are orders of magnitude different on our definitions.
When a disk is getting full due to virtual machines, video footage, photos, every bit counts and having 9 copies of chromium seems silly and wasteful.
Also, the problem of disk space and memory capacity becomes worse when we consider the recent trend of non-upgradable/non-expandable disk and memory in laptops. Then the money really counts.
Modern software dev tooling seems to devolve to copies of copies of things (Docker, Electron) in the name of standardizing and streamlining the dev process. This is a good goal, but, hope you shelled out for the 1TB SSD!
At least games have the excuse of showing a crapload of eye candy (landscapes and all that).
Funny, even within this thread people are claiming Electron apps have “too much eye candy” while others are claiming “not enough”
I believe @Loup-Vaillant was referring to 3D eye candy, which I think you know is different from the Electron eye candy people are referring to in other threads.
A primary purpose of games is often to show eye candy. In other words, sure, games use more disk space, but the ratio of (disk space actually used) / (disk space inherently required by the problem space) is dramatically lower in games than in Electron apps. Context matters.
I care because my phone has limited storage. I’m at a point where I can’t install more apps because they’re so unnecessarily huge. When apps take up more space than personal files… it really does suck.
And many phones don’t have expandable storage via sdcard either, so it’s eWaste to upgrade. And some builds of Android don’t allow apps to be installed on external storage either.
Native libraries amortize this storage cost via sharing, and it still matters today.
Does Electron run on phones? I had no idea, and I can’t find much on their site except info about submitting to the macOS app store, which is different to the iOS app store.
Well, Electron doesn’t run on your phone, and Apple doesn’t let apps ship custom browser engines even if they did. Native phone apps are still frequently 100mb+ using the native system libraries.
It’s not Electron but often React Native, and other Web based frameworks.
There are definitely some bloated native apps, but the minimum size is usually larger for the web based ones. Just shipping code as text, even if minified, is a lot of overhead.
Offhand I think React Native’s overhead for a “Hello world” app is about 4MB on iOS and about 10MB on Android, though you have to turn on some build system features for Android or you’ll see a ~25MB apk.
I am not convinced of this in either direction. Can you cite anything, please? My recollection is uncertain, but I think I’ve seen adding a line of source code to a C program produce object code that grew by more bytes than the length of the added source line. And C is a PL which tends towards small object code size, and that’s without gzipping the source code or anything.
I don’t have numbers but I believe on average machine code has higher information density than the textual representation, even if you minify that text.
So if you take a C program and compile it, generally the binary is smaller than the total text it is generated from. Again, I didn’t measure anything, but knowing a bit about how instructions are encoded makes this seem obvious to me. Optimizations can come into play, but I doubt it would change the outcome on average.
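A toy illustration (x86-64, something like gcc -O2; the exact bytes vary by compiler and flags, so treat the numbers as ballpark):

/* ~40 bytes of source text... */
int add(int a, int b) { return a + b; }

/* ...compiles to roughly 4 bytes of machine code:
       8d 04 37    lea eax, [rdi + rsi]
       c3          ret                          */

The gap narrows once you count the headers, symbol tables, and alignment padding an object file carries around, which may be part of what the parent comment observed.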
That’s different to what I’m claiming. I’d wager that change caused more machine code to be generated because previously some of the text wasn’t used in the final program, i.e. was dead code or not included.
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
IPA and APK files are both zip archives. I’m not certain about iOS but installed apps are stored compressed on Android phones.
I’m not sure about APK, but IPAs are only used for distribution and are unpacked on install. Basically like a .deb, or a DMG on macOS. So AFAIK, it’s not relevant for disk space.
FWIW phone apps that embed HTML are using WebView (Android) or WKWebView (iOS). They are using the system web renderer. I don’t think anyone (except Firefox for Android) is bundling their own copy of a browser engine because I think it’s economically infeasible.
Funnily enough, there’s a level of speculation that one of the cited examples (Call of Duty games) is large specifically to make storage crunch a thing. You already play Call of Duty, so you want to delete our 300GB game and then have to download it again later to play other games?
I lost interest really quickly after I realized the banner image was AI
It’s just such a mood killer for me
Why not just scroll past it and read the content?
the lack of effort that it symbolizes reflects on the content
Kind of a weird take. For most writers your options are no image at all, a stock photo that probably doesn’t fit well, or an AI image that I can tell to do something at least slightly related to the content.
Also, saying that the “lack of effort reflects on the content” is rich when you admit that you lost interest…because of the banner image.
No photo is probably the correct answer here.
The writer writes one article, the reader reads from a selection of n articles, and must have some way of determining which are relatively worth their time to read. When one article opens with a full-page distraction that doesn’t bear on the content, my heuristic tells me this one’s probably not it.
Stock images are just as bad. Just leave it out.
Just the fact that uBlock Origin just works on mobile. How the heck do people browse at all without ad blocking?
I find that DNS-based ad blocking works surprisingly well, and there are lots of ways to get that (depending on your OS). I guess not many people use it, or the evil people would be working harder to get around it.
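For anyone wanting to try it, the dnsmasq flavour is about as simple as it gets (the domain here is a stand-in; in practice you’d point it at a maintained blocklist):

# dnsmasq.conf: answer for this ad domain and everything under it
address=/ads.example.com/0.0.0.0

Pi-hole, NextDNS, and friends are essentially this plus curated lists and a nicer UI.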
uBlock-style ad blocking, where the whole ad element is removed rather than just blocking the request the ad element makes, makes a lot of difference. I guess it depends on your perspective. But the former, to me, feels almost utopian.
Custom rules and the element picker are another great differentiator. The ability to easily get rid of stuff that you find annoying but the blocklist maintainers don’t consider relevant is great.
I have my own custom DoH server that does adblocking, and it gets rid of nearly every ad I’ve ever seen on iOS Safari (except YouTube, but I have Rehike to handle YouTube requests).
I never browse the web on mobile, it’s such a horrid experience and not because of ads. Maybe some day it’ll change but tbh I use my phone for YouTube/music and phone calls and the camera and that’s it. It’s just god awful at browsing ime.
depends entirely how lazy I am at that moment or if I’m out and about and just have the phone on me
in those circumstances it’s fine
I use NextDNS, and it works really well
Firefox Mobile is pretty good. It’s a shame they’ve stopped supporting the platform it was born in (Linux Maemo-Meego-Mer, now succeeded by SailfishOS). Niche mobile OSes need a modern browser to be a viable alternative. My N9 was usable way past its expiration date because it had a relatively fresh Firefox.
It’s also a shame they dropped their classic architecture and their customization ethos. I understand there were security issues, but they should have tried not to throw the baby out with the bathwater. Vimperator was the very best browsing experience I have had on any platform, and it’s gone. Vim in the browser with nearly zero glitches, fast and no ads.
From what I remember, it wasn’t security issues. It’s that they tried several times with things like the Jetpack API before admitting to themselves that turning every bit of the browser internals into a de facto API surface was hogtying their ability to re-architect to catch up with Chrome’s multi-process architecture.
Reminds me of what a PITA it is to get all the edge-cases right when extending Vim when everything is monkey-patching keybindings willy-nilly as if they’re DOS TSRs or Mac OS INITs instead of using a proper plugin API.
That’s true, but security also played a role. There’s a post in Tridactyl’s mailing / issues list (can’t find it now) explaining how Mozilla is reluctant to give plugins the ability to change too much of the UI. Therefore, their new APIs do not offer that possibility. There were talks about creating APIs for privileged plugins, but it never panned out. A shame.
I’ve been using Cromite for a relatively long time. It has a relatively good ad blocker and a dark reader — which are replacements for the only two extensions I have in Firefox on mobile. Since Firefox added process isolation for tabs, I’m back to using it, though.
Astral seems to be making cool, open source tools, but they’re also venture capital funded with no obvious business model.
Maybe they want to replace PyPI and make money however NPM does?
Agreed and it’s a fair point.
There was a conversation thread about this on Mastodon that is interesting to read, summarized nicely here:
https://simonwillison.net/2024/Sep/8/uv-under-discussion-on-mastodon/
Does NPM make money?
Here’s their product page
https://www.npmjs.com/products
I dunno, but they were a private company and sold shit. Idk if they still do now that GitHub bought them.