It is remarkable how a 2 GB RAM, 1.8 GHz system is considered barebones for desktop Linux now.
I agree, but then one looks at the “modern web” and it starts to make sense.
My first “desktop” (i.e. with a GUI) Linux machine was a 486 with 12MB of RAM. You can hardly fit the kernel in that space now.
There’s a continuum of these things but I think we hit diminishing returns some time ago.
My first system with a GUI was an 8086 with 640KiB of RAM. It ran Windows 3.0 or GEM, on top of DOS. There was no memory protection and no pre-emptive multitasking so a single buggy app could kill everything. Nothing in the GUI had a spellcheck. I had WordStar for DOS that did support spell checking, so I’d write things in that and then use Write to add fonts and print. Doing anything graphical could easily exhaust memory: a large bitmap wouldn’t fit.
My next machine was a 386 with 5 MiB of RAM (1 MiB soldered to the board, 4 matched 1 MiB SIMMs). It ran DOS and Windows 3.11 and either Word 2.0 or ClarisWorks 1.0. Word had a few more features but ClarisWorks was more modular and so the vector drawing application worked well as a DTP app because every text field embedded all of the functionality of the word processor component. Editing large images caused a lot of swapping. Spell checking happened offline so that the dictionary didn’t need to be memory resident when not in use. Still no memory protection or preemptive multitasking. Minix and early Linux could just about run on this machine but I never tried and so have no idea how well they worked.
My next machine was a 133MHz Pentium clone with 32 MiB of RAM. This ran NT 4 and later dual booted Linux (Red Hat Linux 5) and Office 95. Both operating systems had memory protection and preemptive multitasking, so no buggy app could bring down the whole system (without also triggering an OS bug). This was also my first machine with a sound card and a 3D accelerator. The 3D card wasn’t used for anything except games back then. It had enough RAM to handle large documents, online spell checking, editing large images, and so on. It could play back tiny postage-stamp videos and video editing was so painful that I gave up almost immediately. This machine became a lot less responsive after I installed Internet Explorer 4. In hindsight, this was the harbinger of Electron: a load of things that were previously very fast and low-memory bits of the UI were replaced by slow HTML components.
This machine got a load of incremental upgrades. The upgrade that replaced most of the remaining bits (including the case) ended up with a 550 MHz Pentium III with 512 MiB of RAM running Windows 2000 (and, I think, Office 97 / StarOffice) and dual booting FreeBSD 4.x. I had an ATi All-in-Wonder 128, which had a TV tuner and some video encode / decode hardware but video editing was still too slow on this to be useful.
After a couple of upgrades, my next new machine was a PowerBook G4 (1.25GHz I think - the logic board was replaced a few times over its lifetime and so I don’t remember what it was originally) with 1 GiB of RAM. This could do non-destructive video editing (though the render times were quite long) and basically everything that I do today. Video editing and compiling large programs were the only things that were slow. 3D games would probably have been slow, but this was an early 2000s Mac, so there weren’t any.
Every machine before the first Mac (the fact that it’s a Mac was largely coincidental) noticeably increased the set of things that I could do comfortably. The later upgrades were all incremental speed increases. My mobile phone is significantly faster than that Mac and has more RAM. My current laptop is several times faster, uses a lot more CPU cycles and RAM to do the same things and the only things that I do that are slow are… video editing, games, and compiling large programs. I can’t think of anything I do today that I wasn’t doing in a fairly similar way with Mac OS X 10.6 with a much smaller hardware budget. The only difference is that web apps are now doing some things that native apps used to do and are, in spite of massive improvements in JIT compilers, much more RAM and CPU-hungry.
For me, there have been significant improvements more recently in terms of battery life and portability. Yes, they’re not really “things I can do”, but they’re still meaningful. For example, the battery in my M1 MBA lasts, effectively, all day.
To give Apple credit, some of that is part of the software stack. They’ve done quite a bit of work at the lowest level on things like timer coalescing, where you can tell kqueue how much slack a timer notification will tolerate, and it will then try to deliver a batch of callbacks while in a high-power state and then put the CPU to sleep, rather than spreading them over a longer window. They’ve done quite a bit of work at the middle layers to schedule bookkeeping tasks while you’re on mains power and avoid doing them on battery. They’ve also done some UI things to name and shame apps that keep the CPU in high-power states, so that people complain about apps that are burning more power than they should. And they’ve done a lot with the ‘power nap’ functionality that allows apps to schedule background processing (e.g. periodically polling for email) on a low-power core that’s woken up occasionally while in deep sleep states.
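The kqueue leeway mechanism is macOS-specific, but Linux has a roughly analogous per-thread knob, PR_SET_TIMERSLACK, which tells the kernel how late this thread’s timers may fire so that nearby wakeups can be coalesced. A minimal ctypes sketch (constants are from linux/prctl.h; this is an illustration of the idea, not Apple’s implementation):

```python
# Linux analogue of timer coalescing: PR_SET_TIMERSLACK widens the
# window in which this thread's timers may expire, letting the kernel
# batch wakeups instead of waking the CPU for each one.
import ctypes

libc = ctypes.CDLL(None, use_errno=True)
PR_SET_TIMERSLACK = 29  # from linux/prctl.h
PR_GET_TIMERSLACK = 30

# Allow timers on this thread to fire up to 50 ms late.
if libc.prctl(PR_SET_TIMERSLACK, 50_000_000, 0, 0, 0) != 0:
    raise OSError(ctypes.get_errno(), "prctl(PR_SET_TIMERSLACK) failed")

slack_ns = libc.prctl(PR_GET_TIMERSLACK, 0, 0, 0, 0)
print(slack_ns)  # 50000000
```

The default slack is 50 µs; widening it to tens of milliseconds is only sensible for background work that doesn’t care about timer precision.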
That said, a lot more of it comes from the hardware. LPDDR4 is a big reduction in power consumption and the M1 has a lot of optimisation here.
I think I paid as much in real terms for the RPi4 w/ 4GB and case as it cost me to buy the 2MB RAM I needed to install Debian on my 386… (it came with 2MB which was just about sufficient to run Win 3.1)
I remember muLinux - a distro that fit an X desktop on two superformatted floppy disks.
Yes, but muLinux still required 16 MiB of RAM when booting from floppy. I started out with Linux on a 386DX with 3 MiB of RAM… that was workable but ‘interesting’, in that SLS’ installer (as well as Slackware’s, a bit later) assumed 4 MiB. Well, it did make for a strong motivation to learn the internals just to get the stuff installed in the first place. ;)
And then there’s collapseOS…
I’ve got a little HP laptop with 2GB of RAM, and while I can run a text-editor and some tools and Firefox and the GNOME session helpers all at the same time, it’s a bit of a squeeze. Hopefully this will help keep things productive a little longer!
I already had GRUB_CMDLINE_LINUX_DEFAULT="crashkernel=384M-:128M zswap.enabled=1 zswap.compressor=lz4 mitigations=off"
in my /etc/default/grub file.
I’ve now added zswap.zpool=z3fold.
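The zpool choice matters because it sets how densely compressed pages pack into the pool: the default zbud allocator stores at most two compressed pages per 4 KiB frame, while z3fold stores up to three. A toy first-fit packing model (a simplification — the real allocators track free space per frame differently) shows the effect:

```python
# Toy model of why zswap.zpool=z3fold helps: zbud packs at most 2
# compressed pages per 4 KiB frame, z3fold up to 3, so the same pool
# RAM holds more pages when they compress below a third of a frame.
PAGE = 4096

def frames_needed(compressed_sizes, per_frame):
    """First-fit: up to `per_frame` compressed pages per frame, as long
    as they fit in PAGE bytes together (a deliberate simplification)."""
    frames = []  # each frame: [used_bytes, page_count]
    for size in compressed_sizes:
        for f in frames:
            if f[1] < per_frame and f[0] + size <= PAGE:
                f[0] += size
                f[1] += 1
                break
        else:
            frames.append([size, 1])
    return len(frames)

# 300 pages, each compressing to ~30% of a page (1200 bytes):
print(frames_needed([1200] * 300, per_frame=2))  # zbud-like: 150 frames
print(frames_needed([1200] * 300, per_frame=3))  # z3fold-like: 100 frames
```

So for well-compressing workloads, z3fold buys roughly a third more effective pool capacity for the same RAM, at the cost of a slightly more complex allocator.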
It’s interesting that this says they’re going to enable compressed swap on all rpi4 devices. I’d think the larger memory systems don’t need it.
Compression means spending CPU to reduce device IO. It makes sense if the storage device is slower (in relative terms) to the CPU. The amount of RAM isn’t really a factor - it decides how often the path executes, but not the optimal form of the path. I think what they are really saying is “SD cards are really slow, particularly for write.”
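The arithmetic behind that trade is easy to sketch. Compressing a page costs on the order of microseconds of CPU, while an SD-card write costs milliseconds; a rough stdlib illustration (zlib at its fastest level standing in for lz4, which isn’t in the standard library, and a synthetic page that is far more compressible than random data):

```python
# Rough illustration of the CPU-vs-IO trade behind compressed swap.
# zlib level 1 stands in for lz4 here; typical swapped-out pages are
# highly redundant, so they compress well.
import time
import zlib

# A synthetic 4 KiB "page" of repetitive data:
page = (b"struct page { unsigned long flags; } " * 200)[:4096]

t0 = time.perf_counter()
packed = zlib.compress(page, level=1)
cpu_cost = time.perf_counter() - t0

print(f"compressed 4 KiB page to {len(packed)} bytes "
      f"({len(packed) / len(page):.0%}) in {cpu_cost * 1e6:.0f} us")
# A few tens of microseconds of CPU vs. milliseconds for an SD-card write.
```

On fast NVMe storage the balance shifts and compression is less of a clear win, which is exactly the “slower in relative terms” point above.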
SD cards also have endurance problems, making this an even better trade.
Sure, personally I’ll just disable swap completely on an RPi if my workload fits comfortably in memory, which in the case of the 4 GB/8 GB models is much more likely.
It is almost always better to use swap to allow more memory to be used for bufcache.
Isn’t the whole point of swap that it automatically uses your storage when your workload doesn’t fit in memory, and hardly costs anything when it does? I’m not sure you’re gaining anything by taking it into your own hands.
I’d rather the OOM killer kill processes instead of adding extra wear on an SD card.
This doesn’t write to disk if it can avoid it: if compression frees up enough RAM, the page stays in RAM. If not, there’s less data to write, and it reads back from storage faster…
Enabling compressed swap doesn’t change how likely it is for something to be swapped. The logic is probably that if something is to be swapped, at least have it compressed first. Tuning swappiness would be a different topic.
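The behavior being described — compressed pages stay resident until the pool fills, and only then does the oldest get written back to the device — can be sketched with a toy model (assumptions: a fixed compressed size and simple LRU writeback; the real policy is more involved):

```python
# Toy sketch of the zswap path: page-outs land in a compressed pool in
# RAM, and the backing device is only touched when the pool overflows.
from collections import deque

def swap_out(pages, pool_capacity_bytes, compressed_size):
    """Return how many page-outs actually reach the backing device."""
    pool = deque()  # compressed pages resident in RAM (oldest first)
    used = 0
    disk_writes = 0
    for _ in range(pages):
        while used + compressed_size > pool_capacity_bytes:
            pool.popleft()           # evict the oldest page to the SD card
            used -= compressed_size
            disk_writes += 1
        pool.append(compressed_size)
        used += compressed_size
    return disk_writes

# 1000 page-outs into a pool with room for 800 compressed pages:
print(swap_out(1000, 800 * 1200, 1200))  # only 200 writes hit the card
```

In this model, any workload whose compressed working set fits in the pool generates no device writes at all, which is the SD-card endurance argument in a nutshell.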
(and I think they could enable zram)
The screenshot shows an empty desktop with a terminal open and nothing else. I wonder if the performance is at all tolerable once you open, say, a web browser and VS Code at the same time. My 4GB laptop ran Ubuntu just fine for a long time, it was once I started having Chrome and Code open all the time that it started hanging.