It’s way worse than Flash. Flash never used nearly as much RAM or CPU as any of these apps, and it ran on a Pentium 2 with Windows 98!
That’s funny because I was about to reply to @mattgreenrocks’ quip about frame rate by pointing out that Half-Life could run on my Pentium 2 (200+MHz) with 64MB of total RAM and Windows 98. Haha. You must have had one, too, back in the day.
Heck, I could do so much on that box. Programming, servers, music, movies, MS Office, photo editing, Web, etc. Then these modern apps need a gig for chat or have slow, 2D interfaces? WHAT!?
Maybe they were raised on WebTV, had to use it for developing software, and eventually it “upgraded” to Electron. Some crazy shit like that where the apps would seem impressive.
I still game these days, and it is incredible how far graphics have come since HL1.
I think that feeds part of my pessimism: by evening, I’ll indulge in these incredibly rich, complex graphical worlds (which aren’t perfect by any stretch), and then by day, I keep hearing about devs who are proud they hit 60fps animating a couple of rectangles on an obscenely powerful CPU and GPU.
I also hate that it feels like I’m out of place in tech for my engineering leanings, despite putting in way too many hours in HS, college, and after college. The time honestly feels wasted if expertise is not valued. I know that it is, a bit, but collectively, it’s all about mouthy Medium thinkpieces and cramming JS into everything and being hailed a hero for making things “accessible,” externalities be damned.
The difference between performance-conscious code (not even heavily tuned code) and everyday code is just ridiculous.
Slack in Firefox freezes entirely for 5 seconds when you scroll up while it streams and renders a few KB of text; games stream in and decompress hundreds of megabytes per second without a hitch.
People write brag posts about serving millions of requests per day on their big distributed AWS architecture. Go’s compiler is considered very fast and a quick google puts it at tens of thousands of lines per second. Drawing high hundreds of millions of triangles per second is easy even on shit hardware.
One of the really harmful effects of this is that it makes people think computers are slow. If I have a few thousand things I need to update, I often catch myself thinking “this is going to be slow, I need to make it async/bust out fancy algorithms/etc”. I have to remind myself that if I just write the stupid code it will most likely run instantly. You can even do O(n^2) on hundreds of objects and have it complete in 0ms.
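The “just write the stupid code” claim is easy to sanity-check. Here is a minimal Python sketch (function name invented for illustration) doing a naive O(n²) de-duplication over a thousand items and timing it:

```python
import time

def dedupe_quadratic(items):
    """Naive O(n^2) de-duplication -- the 'stupid code' version."""
    out = []
    for x in items:
        if x not in out:  # linear scan of `out` -> quadratic overall
            out.append(x)
    return out

start = time.perf_counter()
result = dedupe_quadratic(list(range(500)) * 2)  # 1000 items
elapsed_ms = (time.perf_counter() - start) * 1000
```

Even in an interpreted language, this kind of workload over hundreds of objects finishes in a handful of milliseconds on any modern machine; in a compiled language it rounds to zero.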
I have to remind myself that if I just write the stupid code it will most likely run instantly.
Indeed. And in the few times it is slow, I usually fix it with two more lines:
// Lots of changes
I’ll eat the humiliation to make a point backing you up! :)
Here is me recording my desktop in 2007 with an old crappy Sony Ericsson phone camera.
This is desktop 3D effects on a f-ing PIII 600 (256MB RAM) with a Radeon 9250. I had that machine for >10 years.
Did programming on it, gaming, learning. Everything. Finally upgraded to a newer box as web browsers became unwieldy.
I miss that box and how software was made back in the day.
I never tried doing a 3D desktop. Neat that it handled it so well at 600MHz. Although, I did some VRML back in the day. Was even on VWWW sites & 3D chat thinking 3D might be the next cool thing. Nope. Worse, VRML chats had environments that loaded up piece by painful piece over my 28Kbps connection. Back to IRC haha.
So, some things definitely got better over time like networked, real-time apps. The local ones got way worse, though, in terms of what one could do with the machine.
I remember using a 3D desktop program (yes, it was a real program) on my machine and it ran a bit sluggishly. On a 33MHz CPU with 16MB of RAM. That same program, today, would probably be acceptably fast, even on an emulator (the CPU in question was a MIPS R3000).
My, how times have changed.
 Not really mine, but the University’s. I was, however, about the only user on the machine.
What computer did you have from SGI that ran that program on 33MHz CPU, etc? I thought the workstations ranged from 100+MHz on up even in early 90’s. Or was it an emulator or a port?
I’m guessing @spc476 was using an Indigo, the earlier variants of which had a 33MHz R3000 (later versions had 100MHz R4000s). I love SGIs :)
Ahh. Didn’t know about them. Thanks. Yeah, I miss them too. Way ahead of their time with HW architecture.
IRIS actually, but yes, close enough 8-)
Forcing JS everywhere is arrogance. Accept that native platforms offer a superior UX to web UIs at a fraction of the resource cost. Understand that forcing a document viewing platform to be an app runtime suffers from impedance mismatches that users pick up on. (Shipping a V8 runtime is still hacky).
For all that tech claims to care about execution and UX, they’re awfully wedded to second-best tooling. And for all the talk of “constant learning,” nobody seems to want to move on, instead cramming square pegs further into round holes, wasting engineering effort, and then writing self-congratulatory Medium posts on how they hit 60fps on an i7.
This is not engineering. It is fashion.
it’s not arrogance, it’s a sad but pragmatic concession to the fact that there is no good, productive way to write a cross-platform desktop app with easy packaging and shipping for at least linux, mac and windows. people can simply develop and maintain code a lot more easily in electron than they can in C++/Qt, .NET still has issues on linux, gtk is a pain to package and ship, and somehow java never took off (from personal experience, I gave up on it because swing was painful, but I hear there are better options now).
i have personally settled on ocaml + gtk, which i found nicely productive under linux, but it took me days to get things compiling under windows, mostly because i had to set up a cygwin environment and fight incompatibilities between things compiled against gtk 2.24.30 and things compiled against gtk 2.24.31.
(incidentally, racket is a very productive language for writing desktop apps in; it just needs a lot of work put into optimising the gui platform. i’m keeping a hopeful eye on it.)
This is a good point. There are two issues here:
A language that has equal footing on both macOS and Windows. There are a few that fit the bill here, but it certainly limits your options out of the gate.
A cross-platform UI framework that isn’t terrible. I think they all are, including Qt. (Qt just gets a pass these days because it is less bad than everything else.) Also, chosen language needs decent bindings to the UI framework.
Ideal: common library shared by native apps, but nobody does this.
good, productive way to write a cross-platform desktop app with easy packaging and shipping
It’s going to be controversial but: Java 8 is one, actually. And JavaFX isn’t really bad. And you can package the JRE so the user does not have to install it.
And you can use Kotlin or Ceylon or even Scala if you really don’t like .java.
Can you hot reload UI code without blowing up application state in Java/JavaFX the same way you can with the web? How about inspect and manipulate UI elements in a running program?
Iterative UI development on the web, if you’re careful to disentangle state and operations on that state (React makes this easy!), is very nice.
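The state/operations disentangling described above can be sketched in a few lines of Python (the reducer/render names mimic the React pattern and are invented for illustration): all state lives in one plain value, pure functions update it, and the view is recomputed from state alone, so swapping out the view code never touches the state.

```python
def reducer(state, action):
    """Pure state transition: returns a new state, never mutates in place."""
    if action["type"] == "add_todo":
        return {**state, "todos": state["todos"] + [action["text"]]}
    if action["type"] == "clear":
        return {**state, "todos": []}
    return state

def render(state):
    # The 'UI' is just a value derived from state; redefining render()
    # (a hot reload) cannot corrupt the state it reads from.
    return "\n".join(f"[ ] {t}" for t in state["todos"])

state = {"todos": []}
state = reducer(state, {"type": "add_todo", "text": "write less JS"})
view = render(state)
```

Because rendering is a pure function of state, reloading the view code and re-running `render(state)` gives iterative UI development without blowing up application state.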
as an active java developer between 1997 and 2013 I would have said “no, not really”, but since getting into Android development, I have changed my tune completely. well designed code can immediately reload any and all state from storage and a good development environment can ‘hot’ deploy. I’ve extended the notion to some of my Java desktop and server apps and it works just as well. the key is in app design; if your code can tolerate being killed without notice, hot reloads are essentially a freebie!
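The “tolerate being killed without notice” property described above can be sketched in any language; here is a hedged Python illustration (file name and state shape invented) where every mutation is persisted immediately, so a restart or hot deploy just re-reads disk:

```python
import json
import os
import tempfile

# Hypothetical state file location for this sketch.
STATE_FILE = os.path.join(tempfile.gettempdir(), "app_state.json")

def load_state():
    """Reload all state from storage; fall back to defaults if absent."""
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {"counter": 0}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
state["counter"] += 1
save_state(state)        # if the process is killed after this line, nothing is lost
reloaded = load_state()  # what a fresh process (or a hot deploy) would see
```

If the app can always be rebuilt from `load_state()`, killing and redeploying it at any moment is a freebie, which is exactly the design constraint Android imposes.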
clojure absolutely lets you do this, though clojure can be very sluggish for its own reasons.
or clojure :) i started writing a desktop app in clojure several years ago, and only gave it up because swing was too painful, but clojure itself was pretty pleasant to develop gui apps in.
I built an IDE for games with Clojure and JavaFX. The combination worked really well.
“Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” - Dr. Ian Malcolm, Jurassic Park (1993)
Shipping a V8 runtime is still hacky
Is shipping a lua runtime similarly hacky? edit: add a y to hack
I don’t know, but at least Lua is made to be embeddable, whereas JS is embedded because they want to write the app in that.
whereas JS is embedded because they want to write the app in that.
What is V8, if not an embeddable scripting language for Chrome’s version of WebKit?
Say what you will about JS (I hate it) but it’s almost undeniable that we wouldn’t be talking about Electron enough to have any debate about it at all if it used something other than JS.
Is it really embeddable? Wasn’t one of the issues of Node that V8 changes its API so quickly that it is difficult to keep up, so unless you have a lot of resources, you’re going to get stuck on an old version because API compatibility is not a priority for V8?
Wasn’t one of the issues of Node that V8 changes its API so quickly
I have no idea, as I don’t follow this. But, this sounds like a tradeoff that Node has to make if they want to continue using V8, so long as V8 has no interest in providing this sort of compatibility.
Sure enough, only 4 hours later a discussion on trouble with keeping up with V8 happens on the Node project.
That’s fairly horrifying. :(
Lots of things are stuck inside another thing - Lua is seen as being a particularly embeddable language and runtime because a bunch of choices were made in their design to facilitate embedding them in things. V8 is essentially the opposite. Hence, while it clearly is embedded in various things, it isn’t really a language and runtime combination which is good for that case; so, ‘not really embeddable’.
With @Leonidas’ comment, I think I (may?) now understand your intention in saying “Shipping a V8 runtime is still hacky.”
Did you mean to imply that because V8 has to be yanked out of Chrome, and the API isn’t stable that it’s hacky?
I’d call it that. Lua, like most embeddable stuff, is designed to be easily included in whatever app you want via its existing interfaces. This will be easy. V8 is embedded in Chrome but maybe not “embeddable” in other stuff easily. That didn’t seem a high priority in its design.
Yup! I was trying to understand if @mattgreenrocks thought shipping any embedded language was hacky, or just V8 / node in particular. But even still, I don’t see how that’s avoidable in Electron’s case, given their goals.
I wouldn’t consider Lua hacky to ship, generally, but there are certainly situations in which shipping a Lua VM would be hacky. It all depends on what the intentions and goals are.
It was targeted more at V8, yeah.
You can usually find ways to package a VM in with an app and remove hassles around needing the VM on the system or certain versions. Desktop apps have big issues around making it easy to package and run everywhere, still.
I may be way off here (I’ve never done an Electron app), but I’m surprised that this thread seems to be going on the assumption (on all sides?) that embedding V8 is really the objectionable part of Electron. I had always assumed it was the embedded web browser (Chromium) that was the culprit in making these apps large, memory-hungry, and “webbish” in their UI conventions, not the embedded JS runtime. I mean, Node CLI apps aren’t necessarily my favorite way to write a CLI app, but they’re not sluggish the way the Slack desktop app is sluggish.
It is, but I think @apg was poking at what part of my “embedding V8 is still hacky” part, not, “what part of Electron is objectionable?”
Re-using the entire browser layout engine is ridiculous, yes.
I had always assumed it was the embedded web browser (Chromium)
I mentally filter out most things that look like another JS framework or whatever. Especially if comments start about how resource-hungry it is. This is the first one on Electron I actually read, where I found out it embeds a whole web browser. Wow, yeah, easily the most objectionable thing. This tangent covered another important property: how embeddable its main components were in the first place. As in, should they be used at all vs individual libraries? Where that went shows it’s an even worse idea.
Eh… I don’t think so:
$ cd /tmp/v8; sloccount .
Totals grouped by language (dominant language first):
cpp: 1305265 (97.78%)
python: 27869 (2.09%)
sh: 1147 (0.09%)
ansic: 357 (0.03%)
lisp: 222 (0.02%)
$ cd /tmp/lua; sloccount .
Totals grouped by language (dominant language first):
ansic: 16595 (100.00%)
Lua’s source is a little over 1% of v8’s.
Even LuaJIT which is faster than v8 in many cases is smaller:
$ cd /tmp/luajit; sloccount .
Totals grouped by language (dominant language first):
ansic: 59836 (100.00%)
Lua is the only language I’ve seen that really meets the requirements for being an embedded interpreter (apart from some tiny lisp implementations). Granted, node runs on a lot of IoT devices but I’m pretty sure it’s using more than 256k of flash and more than 64k of ram (unlike elua).
requirements for being an embedded interpreter
What are the official requirements for this?
I don’t think a lot of people care about sloccount if it meets their requirements.
There is also an “original” embedded interpreter - Tcl.
Who is forcing anything anywhere? People use the tools they want to use to build their apps. Make the native tools attractive to developers and they’ll build using those tools.
< zedgoat> ransomware of the future will be an electron app that does nothing but run 24/7 until you pay five bucks to close it for an hour.
I already feel like that with Slack.
Maybe it’s not entirely the app framework’s fault?
For example, Discord does a somewhat similar job to Slack, is also an Electron app, and AFAICT uses about 1/10th the amount of RAM. (Windows tells me Discord has ~100MB resident while I read image-heavy channels, I’ve heard Slack users reporting seeing ~1GB resident per Slack group.)
A minimal Electron app burns about 15MB resident memory according to Windows. V8 has reasonably compact object representations. We know how to do stuff without busy-polling from a setInterval() callback.
If we could make acceptably low-latency UIs back in the year 2000, in crummy interpreted VMs, on machines with about 1/20th the CPU and memory, then there’s no reason why we shouldn’t be able to do it now with app frameworks that impose definitely no more than a 5x-ish (please excuse my handwaving) performance hit.
In fact, I think the APIs you work with in the modern web are way better than the APIs that exist on the desktop.
Not to jump off topic, but I strongly disagree with this. Web UI development APIs are uniquely awful, in part because the combination of HTML/CSS/JS is itself just broken, and all our APIs are desperate attempts to patch it.
Immediate mode UI (popularized by React) is an excellent idea that should be copied. The rest of it is awful, and seems to endlessly churn.
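The core immediate-mode idea is small enough to sketch: instead of mutating retained widget objects, the whole UI is re-emitted from application state every frame. This toy Python version (widget and frame names are invented; real libraries like imgui derive `clicked` from input events) shows the shape:

```python
# Widgets emitted this frame; a real library would draw these.
frame_output = []

def label(text):
    frame_output.append(("label", text))

def button(text, clicked=False):
    frame_output.append(("button", text))
    return clicked  # in a real library, derived from mouse/keyboard input

def draw_frame(state, click_quit=False):
    """Re-emit the entire UI from state, every frame."""
    frame_output.clear()
    label(f"count = {state['count']}")
    if button("increment"):
        state["count"] += 1
    if button("quit", clicked=click_quit):
        state["running"] = False

state = {"count": 0, "running": True}
draw_frame(state)                   # frame 1: nothing clicked
draw_frame(state, click_quit=True)  # frame 2: user clicks 'quit'
```

Because the UI is rebuilt from state on every frame, there is no retained widget tree to fall out of sync with the application, which is the property React approximates on top of the DOM.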
Immediate mode UI (popularized by React) is an excellent idea that should be copied
If anyone’s looking for an immediate mode native UI library, imgui is pretty nifty.
imgui looks rad, I’m going to look into that, thanks for posting!
I think that, so long as these UI frameworks are simply patches on top of the DOM, they’re all going to have some serious problems. The DOM is ridiculously complicated for doing UIs because it’s not a User Interface Object Model, it’s a Document Object Model.
I am a very stupid person, and if I have to juggle more than three layers in my head at once, I get confused. Your basic DOM tree contains… a lot of layers. A lot. This text area is about 13 layers deep in the DOM tree, and that’s pretty simple, as web applications go.
Have you used Elm at all? The model you’re required to use addresses this fairly nicely. Everything is strongly local; you’re only ever dealing with one layer of the DOM at once, as well as its interactions with the two layers directly above and below it.
I haven’t really used it, no, but just browsing their reference implementation of Todo.js, I can see that’s not strictly true. I also have issues with mixing presentational behaviors with DOM. It definitely looks better than your average web app, but it’s still not scratchin' my itch.
The DOM is ridiculously complicated for doing UIs because it’s not a User Interface Object Model, it’s a Document Object Model.
That’s not a big distinction when both UIs and “documents” consist of Elements.
The DOM is essentially just a bunch of elements that happen to form something that’s called a document. In some cases, the elements also happen to form a User Interface.
In some cases, but in most cases, DOM elements are not presentational. They’re semantic: a div is a logical division, which means… um… whatever I want, I guess? We have new, semantic tags, which add some clarity: a header and a figure at least convey meaning, but they don’t specify any interaction or experience. A button does, at least when we build buttons using the button element, except for all the times we don’t, because anchors are also buttons. And for all of these, their built-in behaviors are basically wrong for the way they’re being used (the default button type is submit, for example), which means we have to supply callbacks for the real behaviors we want.
Yes, but it doesn’t really matter what specific purpose the elements are used for.
You can build a UI with DOM elements with no regard for “semantics” at all, or you can build some sort of “Document” and make it as semantic as the DOM allows for.
My point was largely just that since it’s all Elements anyway, I don’t see how the fact that it’s called a Document Object Model makes it “ridiculously complicated” for doing UIs.
[Comment removed by author]
If your admin won’t enable the gateway, wee-slack allows you to connect from weechat, an ncurses-based client: https://github.com/wee-slack/wee-slack
Thank you! I didn’t know about WeeChat itself.
You can even use WeeChat’s relay functionality to connect from Emacs: https://github.com/the-kenny/weechat.el
I’ve been able to “survive” with these gateways. Even though you lose some features, they are good enough. The main issue is with threaded discussions, which get mixed in with the normal content.
The IRC gateway is the only way I’ll use Slack. Beyond not wanting to devote gigabytes of ram to chat, I also have no desire to see the flurry of gifs, emojis, and reactions that the more “modern” view provides.
So true. I feel like I have to kill slack off every few days to stop my computer from melting.
On my Windows desktop at work Slack currently has 10 separate processes which amount to 547MB of RAM used. We just started using it, it’s just sitting in the system tray getting no messages.
This is one of those few times I think separation kernels might help in non-security use on desktops. Well, it sort of is a security principle, where the app becomes a threat to your machine. The separation kernels would enforce strict time (CPU) and space (memory) isolation on the system, where apps only get what you allow them. They can ask for as much as they want but can’t bypass the limits. There might even be ways to force a sleep on them that way, where they just think the Internet went down and came back up a while later. Nah, the partition just got no CPU time for a while. :)
Example showing how much is controlled:
Note: Either the app would have to run in VMs or be redeployed to use such tech. Otherwise, it’s all in one partition that all goes sluggish at once.
Can’t a half arsed version of this be achieved with a normal kernel with modified scheduling, “Oh, you took 100% CPU the last 3 times you context switched to you, we’re going to skip you next round, we’ll get back to you in a few nanoseconds, good luck!”.
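The half-arsed version is easy to simulate. Here is a toy round-robin scheduler in Python with exactly that penalty (all names and the streak threshold are invented for illustration; real kernels like Linux CFS use weighted virtual runtime instead of skipping):

```python
def schedule(tasks, rounds, greedy_streak_limit=3):
    """Round-robin, but a task that used its full slice several times
    in a row gets skipped for one round."""
    history = []
    streak = {t["name"]: 0 for t in tasks}
    for _ in range(rounds):
        for t in tasks:
            if streak[t["name"]] >= greedy_streak_limit:
                streak[t["name"]] = 0  # skip this round, then forgive
                continue
            history.append(t["name"])
            if t["hog"]:               # used 100% of its slice
                streak[t["name"]] += 1
            else:
                streak[t["name"]] = 0
    return history

tasks = [{"name": "slack", "hog": True}, {"name": "editor", "hog": False}]
run = schedule(tasks, rounds=5)
```

Over 5 rounds the hog gets scheduled 4 times and the well-behaved task 5 times, which is the "we'll get back to you next round" behavior the comment describes.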
Yeah you can do that. It’s just not guaranteed to work if the kernel is non-deterministic or easily impacted by what apps do. You can do it, though.
Slack and Atom are both common culprits on my system.
I did not need to research native toolkits, nor learn them, nor test the application. That, for me, is huge.
I don’t care much about the resource use, because this app is not likely to be running for more than a few hours every once in a while. It doesn’t need to run all the time. Does it use a lot more memory than it would need to? Yeah, but it doesn’t run long, and my users don’t care. Does it use more CPU? Yeah, and none of my users care. It does not need to be fast. Does it drain battery? Possibly. But when the application runs, the things it does will drain considerably more power than Electron itself, so again, I don’t care. Does it conflict with suspend, or the CPU going into battery-saving mode? Perhaps. Shut it down, then. It does not need to run all the time.
Point is, it saves me time and effort, and I can ship something to my users much, much faster. If I had to deal with native toolkits, the application would either be linux only, or wouldn’t exist at all. As most of my users are not on Linux, and they want the application, either of these alternative options would be far worse for them than the “waste” Electron adds.
It may not be the best choice for everything, but it certainly has its uses. And until a cross-platform toolkit emerges that offers a similar level of convenience for developers, Electron will have its place. I’m not going to hold my breath for an alternative.
It sure is.
Thing is, it’s really easy to quickly get set up with an installer, application, updater, etc etc. The applications can look pretty decent without loads of work. A lot of the native toolkits and frameworks are nowhere near as easy.
I mean, there are a lot of web developers. If you can take what you already know, just learn a little more, and make a desktop app, then it’s a lot more attractive than learning a new language, stack, everything… It’s not a surprise a lot of people choose electron.
But yeah, resource usage is insane. It is actually possible to write a small, neat, fast Electron app that is light on resources. It just isn’t particularly easy.
I think it’d be neat if we could have something vaguely similar to Electron, but without all of the Chrome/V8 bloat.
I only use Slack from the Chrome window and close it liberally. I’m missing out on some other Slack communities but it’s worth it.
So, for all I hate Flash, it did bring about an upsurge in small game production, media production, and random web app production, and a whole bunch of people became programmers, artists, animators, designers, and so forth who otherwise probably would not have. Electron offering a similar gateway effect seems unlikely, but the comparison is interesting.
Exactly. It moved the platform forward. We can hate on it all we want, but it enabled certain kinds of applications to exist that really wanted to be written.
Slack is my least favorite tool that I have to use for work. Not only is it poorly designed, but it’ll randomly use all the resources on my machine.
All this rage over Electron amuses me. Guessing most of you folks aren’t old enough to remember all the rage around emacs.
Feels like exactly the same set of arguments, 2017 edition, only with a slightly different platform, but with exactly the same trade-offs.
Electron is thriving because people desperately want a unified programmer interface that will let them solve many kinds of problems. Not everyone wants to have to learn Cocoa, C#/.NET/Win32 or whatever the MS flavor of the day is, and pick the competing UI paradigm du jour to get you onto Linux.
I really like Visual Studio Code. It’s fast, super extensible, and stays out of my way. That’s the long and short of it. We can rant and roar all we want, but at the end of the day usable apps that solve problems and are able to be developed and extended quickly will win.
“Electron is thriving because people desperately want a unified programmer interface that will let them solve many kinds of problems.”
Sure, programmers want to just program once, run anywhere, but it is always going to be hard to compete with a product (or 3 products) written in the native API of each platform.
In some cases yes, in some cases, not so much. Take VS Code as an example. I maintain that VS Code is one of the best things to happen to editing in the last 20 years. It’s an Electron app.
For some applications, I contend that native look and feel really doesn’t matter all that much. It’s an editor, do you want it to have dancing bears, or to provide a rich environment for editing your code?
Just my $.02 etc etc.
Just discovered nidium, which looks like a nice alternative.
I wonder how much of the performance loss due to Electron apps would be won back if those building UI frameworks took some cues from Apple and iOS. I don’t have much experience there, but one thing that always impressed me is the reuse of UI elements in things like scroll views.