1. 8
  1.  

  2. 17

    Why not?

    Because I actually use my system RAM to do work, and Slack using 1 GB out of 16 is moronic. I feel like I’m being trolled every time I need a bit more memory to do something and end up closing a text-based chat application.

    1. 3

      This feels like blaming chemotherapy for your cancer. It’s not Electron’s fault that none of these platforms ever figured out how to make a decent cross-platform toolkit. People are using Electron and JavaScript because everything else sucks worse. If it bothers you that the only way to make a decent app is to use a hacked-up web browser and a toy language, then dig deeper and fix the problems that make developers resort to it.

      1. 1

        That’s what swap space is meant to solve.

        I have 40GB of virtual memory, and only 16GB of that is RAM-backed. Any memory that’s committed but unneeded should be offloaded to swap when there’s memory pressure. I’m assuming you have an SSD, of course; don’t do this with a spinning disk. If you happen to have a PCIe and/or NVMe SSD, even better!

        Recently, I have committed up to 38GB of virtual memory with no noticeable degradation of my machine’s usability. In fact, I would not have known I was using that much virtual memory, because my machine performed as expected. The only reason I knew was that my OS warned me with an “almost out of memory” alert when I had less than 2GB of virtual memory remaining (let me go increase my swap size…)!
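
        For reference, growing swap on Linux is a few commands. This is a sketch assuming root access and an ext4-style filesystem (the 8G size and /swapfile path are illustrative; copy-on-write filesystems like btrfs need extra steps):

```shell
# Create and enable an 8 GiB swapfile (illustrative size and path).
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile    # swap files must not be world-readable
sudo mkswap /swapfile       # write swap-area metadata
sudo swapon /swapfile       # enable it immediately
swapon --show               # verify the new swap area is active
# To persist across reboots, add this line to /etc/fstab:
#   /swapfile none swap sw 0 0
```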

        For those worried about SSD endurance when using it for swap: anecdotally, my 480GB Crucial drive has 10TB of writes over the past 4 years (serving as both my primary OS/app disk and swap disk), and it’s rated for over 78TB of endurance… all while running Firefox, which supposedly eats solid-state disks with writes like there’s no tomorrow.
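
        A quick back-of-the-envelope check on those numbers (a sketch; the figures come from the anecdote above, and real wear depends on workload):

```python
# Rough SSD endurance estimate from the numbers above.
writes_so_far_tb = 10     # total host writes over 4 years
years_elapsed = 4
rated_endurance_tb = 78   # manufacturer's rated total bytes written (TBW)

write_rate = writes_so_far_tb / years_elapsed        # TB written per year
remaining_tb = rated_endurance_tb - writes_so_far_tb
years_left = remaining_tb / write_rate

print(f"{write_rate:.1f} TB/year written")           # 2.5 TB/year
print(f"~{years_left:.0f} more years at this rate")  # ~27 more years
```

In other words, at that pace the rated endurance would not be exhausted for decades, which is why swap-on-SSD wear rarely matters in practice.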

        As a totally tangential observation, it’s kind of funny how much of an echo chamber this thread sounds like. All top level replies quote the exact same two words:

        Why not?

        “Why not?”, indeed, because memory usage certainly shouldn’t be one of those reasons!

        1. 6

          “Why not?”, indeed, because memory usage certainly shouldn’t be one of those reasons!

          Sure it should. Why should we have N copies of:

          • A web browser
          • Disparate dependencies
          • Javascript modules running in there too
          • Likely each at different versions

          This is not an efficient way to utilize computers; it’s a devolution. Now how do I know that security fixes have landed in all my Electron apps? I don’t; they might as well be fully static executables at that point. Depending on swap to save us from poor engineering decisions is a cop-out and a deflection.

          How about I counter your swap story with my current usage: I have 24GiB of swap committed, and it’s all from ONE thing, my web browser and the JavaScript that goes nuts allocating memory. And yes, I do have an SSD. It still sucks with 16GiB of RAM, watching the OS grind to a halt constantly swapping memory pages in and out because some foo.js framework of the week allocated over 3GiB to display a web page. And god help me if I forget to close that tab, as it can climb even higher.

          The past four years it’s been getting worse and worse. The “why not?” responses you see here are, in my view, a direct reaction to the trend of using the web platform as a VM for everything. My laptop right now performs about as fast as a late-’90s computer on a 28.8k modem did, and this is with a 100Mb connection. When I found out a single web page was transferring 30MiB of data, I basically gave up on the web as it stands. We lost Flash only to have JavaScript take its place as the performance hog.

          1. 2

            Sure it should, why should we have N copies of […]

            If you’re genuinely proposing that just because two (or more) applications use similar dependencies they should literally share them, then we come from completely different perspectives. I subscribe to the Qubes OS model of computing: not only do I want N copies of web browsers, JS runtimes, and dependencies, I want them completely isolated from each other with the help of hardware-level virtualization!

            I don’t trust any software I run to be free of bugs and exploits, so the more isolated pieces are from each other, the better. I couldn’t care less what version of a dependency or runtime each one uses or how well patched it is, because I already expect them to be compromisable. Exploiting the network stack, browser, or <insert software> should not give an attacker access to my OS, nor DMA to the entire system memory. Thus: isolate, isolate, isolate. Which also implies copies, copies, copies.

            Any amount of RAM that’s not being used is money I’ve spent doing nothing for me. RAM is there to be used, not some scarce resource to be coddled.

            I have 24GiB of swap committed, and its all from ONE thing. My web browser and javascript that goes nuts allocating memory. And god help me if I forget to close that tab as it can climb even higher.

            Sounds like a memory leak. If it’s a website you frequent and you care enough, perhaps report the issue with a memory trace captured in your preferred browser? I agree that JS runtimes could do a much better job of curbing memory usage in general.

            My laptop right now performs about as fast as a computer in the late 90’s did with a 28.8k modem.

            A bit of an exaggeration, but sure, I’ll bite: why do we expect a computer to perform orders of magnitude faster from an interface-responsiveness perspective? If we really want a linear increase in performance with the same old interface, then run Mac OS 9 (or Windows 98) in a VM on a modern machine. I can guarantee everything will boot up blazingly fast. But is that what we want? Going from 10ms per frame to sub-1ms per frame is not going to be perceptible. At all.

            Developers will use the highest-level language that gives acceptable performance, and designers will add as many visual niceties as is reasonable while still having a “responsive” interface (defined as “still visually appears fluid”; the fact that it takes 2 GFLOPs of computation and 15ms of CPU time is of no concern to them).

            But okay, let’s talk about RAM specifically.

            If we’d been following Moore’s “law”, we should have been doubling the RAM capacity in our machines every 2 years or so (RAM modules are transistors, too!). We clearly have not been following that trajectory, so if anything, the hardware industry is at fault for lagging behind (I blame Intel for a bug that plagued their processors for 4 generations, limiting RAM modules to 8GiB [finally fixed with Broadwell]; AMD processors didn’t have this limitation). We should expect performance repercussions when we have a lopsided pairing of CPU performance and RAM capacity.
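
            To make the doubling trajectory concrete, here’s a sketch. The 2008 starting point of 4 GiB is an illustrative assumption, not a claim about any particular machine:

```python
# If RAM capacity doubled every 2 years (a Moore's-law-style pace),
# starting from an assumed mainstream 4 GiB machine in 2008:
base_year, base_gib = 2008, 4
for year in range(2008, 2018, 2):
    doublings = (year - base_year) // 2
    print(year, base_gib * 2 ** doublings, "GiB")
# 2008: 4, 2010: 8, 2012: 16, 2014: 32, 2016: 64 GiB
```

By that pace a 2016 machine would ship with 64 GiB as the norm, which is the gap being lamented here.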

            This is getting long, so let me just summarize my main points:

            1. I want my processes to be isolated. This requires processes to not share memory space.
            2. We have a huge disparity between CPU power and RAM capacity. This needs to be addressed by hardware vendors, with pressure from hardware consumers (that’s us!).
            3. We shouldn’t expect developers and designers to work with the same constraints as they did 10 years ago just for the sake of performance.

      2. 8

        Why not?

        Because Electron (E) is a huge pain in the ass to port. Most apps that use E ship pre-built binaries. This is great if the binaries exist for your system, but not if your system is a “build from scratch” kind of system ({Open,Free}BSD ports, pkgsrc).

        Some of you might ask: “Why don’t you upstream your fixes to E, then they can build binary release?”

        Well, they might take the fixes, but we would also need fixes to V8, Chromium, node modules, and likely others. This is a huge task, and until all the patches are upstreamed, every Electron-based port would have to duplicate the required patches (often overlapping with things like Chromium).

        1. 5

          Why not?

          I’ve yet to come across an Electron app that works with the standard accessibility tools for people with disabilities. While I’m not saying it can’t be done, the tooling doesn’t seem to focus on accessibility the way native toolkits do.

          1. 3

            Why not?

            What happens when one of the many shared libraries Electron needs breaks its ABI? Put everything in Docker containers?

            And the same portability problems the BSDs have also affect Linux distributions that use musl instead of glibc.

            1. 2

              Why not?

              Because Electron apps are lowest-common-denominator tasteless sludge. Might as well ship a web application if all you’re doing is shipping a crappy non-native runtime. Or Swing, for that matter.