The common answer of “you need it anyway because for some reason Linux works better with it, even though there’s no reason it should” is highly unsatisfactory.
But at least slightly more palatable than “Linux is woefully derpy in its handling of low-memory conditions, and often requires silly workarounds. It has chosen a dark road that even Microsoft thinks is a bad idea.”
Then there’s the potential for secrets in RAM ending up in swap. It has to be encrypted for that reason. Or just get rid of it and endure the consequences with a bit of extra memory. Also, as in old-school games, save often.
On a modern computer, RAM does not go unused.
I have a computer with 32 GB of RAM. (It was on sale.) Most of it goes unused. It turns out (shocker!) that 32 GB is actually a shitton of data, and unless you’re allocating extreme amounts to applications (in which case swap isn’t useful), chugging through huge amounts of on-disk data (in which case cache mostly isn’t useful) or going for an uptime award, you’re very unlikely to pagecache anything approaching that much.
…or running a web browser for any length of time.
I also have 32GB in my desktop, which I don’t reboot often. I wish I could find out how large the page cache would grow over time, but instead I routinely end up with Chromium & Firefox devouring dozens of GBs.
I hate web browsers.
I’ve never seen Firefox grow beyond a few gigabytes. (Granted, this is still a ton, but not compared to the total amount of RAM in that system.) My habits may contribute; I rarely have more than a dozen tabs open, and I rarely have a browser process going for more than a week or two.
Chromium I understand to be worse. Also, perhaps ironically, most of that memory usage is cache; one of the arguments for swap is, essentially, that much of that cache will be weeks-old pages you’ll never look at again, and swap lets the kernel get that junk out of memory even though it can’t actually free it. (That suggests to me that better memory-management APIs may be in order, which allow applications to indicate to the OS that certain areas of memory are transient and may be reclaimed if needed.)
I recently released 11 gigs of memory by closing Chrome. I typically have >= 50 tabs, and often over 100, open across two to four browser windows.
Depending on the OS, you might be able to limit the amount of memory it can take up, so it gets killed early if it starts ballooning. You’ll at least consistently know you have all that extra space available.
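On Linux with systemd, one way to do this (an illustration, not the only mechanism; the 8G cap is an arbitrary choice) is a cgroup memory limit via systemd-run:

```shell
# Run the browser in a transient scope capped at 8 GiB of RAM.
# If it exceeds the limit, reclaim/OOM-kill happens inside the scope,
# rather than dragging the whole system down.
systemd-run --user --scope -p MemoryMax=8G -p MemorySwapMax=0 chromium
```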
With that much RAM, you should consider exploring RAM disks for any apps or projects you want to speed up, with a utility syncing them to real disk every so often. There used to be hard disks you could buy that internally used RAM, with flash as backup. USPS actually used them on desktops or something to make apps go crazy fast. I used to use them for both speed boosts and ephemeral data.
I haven’t had one in a while so no links to current options or strategies.
I do this. Syncing 4GB on every boot is the downside, due to waiting times. But it works well and fast: browser cache and so on.
First I used a tmpfs, but found that it didn’t support extended file attributes (xattr; it is supported now). I played around a bit and found it’s pretty easy to mount a tmpfs, create (truncate) an empty file of a certain size (4 GiB) in it, write ext4 to that file, and mount it.
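A sketch of that setup (needs root; the mount points and sizes here are arbitrary placeholders):

```shell
# Mount a tmpfs, back an ext4 filesystem with a file inside it,
# then loop-mount that filesystem (xattr support comes from ext4).
mount -t tmpfs -o size=5G tmpfs /mnt/ram
truncate -s 4G /mnt/ram/fs.img       # create an empty 4 GiB file
mkfs.ext4 -q -F /mnt/ram/fs.img      # -F: it's a regular file, not a block device
mount -o loop /mnt/ram/fs.img /mnt/ramext4
```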
Then I discovered zram, which allocates a certain amount of RAM to a block device and compresses the data (lzo or lz4 algorithms). There’s a little bit of a trade-off in terms of speed, but it’s fun to play with on a dull hour, and useful on multiple occasions.
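The kernel exposes zram through sysfs; a minimal sketch of setting one up as a compressed RAM-backed filesystem (needs root, and the 4G size is just an example):

```shell
# Create a compressed RAM block device with zram.
modprobe zram
echo lz4 > /sys/block/zram0/comp_algorithm   # or lzo
echo 4G  > /sys/block/zram0/disksize
mkfs.ext4 -q /dev/zram0                      # or mkswap, for compressed swap
mount /dev/zram0 /mnt/zram
```

`zramctl` from util-linux wraps the same steps if you prefer a single command.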
I agree - 32G is a big shoe to fill.
Even with ZFS, to add to that, which has an all-you-can-eat RAM coupon, pretty much. Although that can be tweaked, to be fair. And in spite of all its RAM use there’s surprisingly little benefit in terms of transfer speed gains. Not that ZFS is slow, it’s just a bit sluggish perhaps.
“Syncing 4GB on every boot is the downside due to waiting times.”
My strategy for that was to have the boot or sync happen before I get up; it will just have been running for a while, once at night. Basically, try to eliminate the whole waiting phase of the permanent-to-RAM store, with incremental syncs during the day as often as you’re willing, to avoid losing work. ECC memory is a must.
The Sun Prestoserve!
That’s a new one on me. Looks to be an NFS offloader of sorts.
Why is this debatable? What is the benefit of NOT having swap space? It’s better to have a cheap but reliable safety net, than not having it. It’s orders of magnitude cheaper to have 16GB of RAM and create 16GB of swap space if you know that sometimes you will get allocations for like 20GB of memory, than buying another 16GB of RAM and make them sit unused most of the time.
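For what it’s worth, adding that 16GB safety net is a few commands on Linux (the path is just an example; enabling it needs root):

```shell
# Create and enable a 16 GiB swap file as a cheap safety net.
fallocate -l 16G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=16384
chmod 600 /swapfile          # swap must not be world-readable
mkswap /swapfile             # write the swap signature
swapon /swapfile             # enable; add to /etc/fstab to persist
```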
The benefit of not having swap is that if you hit swap with any force, you will have to hard reboot the machine. Without swap, you might have to reboot the machine depending on which process gets the short end of the OOM killer/failed malloc stick and, in the latter case, what it does with it.
Good question. I’m guessing some people may think their system will be faster if they don’t have swap (if it’s not there, it can’t be used!). But those are perhaps the same people who -O6 and unroll their loops… In the early days of SSDs there was the argument that paging could cause premature SSD death, but those days are long behind us.
Of course, without swap most systems can’t write a crash dump. Perhaps not that relevant for most loop unrollers…
Years ago I had a work machine on which I’d minimize Eclipse, do something in another window for a minute or two, then come back to Eclipse. It took a painfully long time for it to restore as the hard drive thrashed.
I turned off swap, and it was like getting a new computer. The workflow above took milliseconds instead of 15, 20, 30+ seconds. It was a huge win.
So sometimes turning off swap really is a big win.
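The switch itself is a one-liner (root required); comment out the swap entries in /etc/fstab to make it stick across reboots:

```shell
swapoff -a      # disable all active swap devices and files now
swapon --show   # verify: no output means nothing is swapping
```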
I have swap disabled on my laptop. With swap, whenever a build process went crazy and the system started to swap, I had to reboot. It is better if some random process gets killed, because there is a good chance of keeping some state.
There is an (albeit ancient - 2004) quote from Andrew Morton that may be relevant here:
My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don’t want hundreds of megabytes of BloatyApp’s untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful.
Admittedly those were different times and he was referring to the swappiness tunable, but there is still some truth there, IMHO.
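For reference, that tunable is still there; higher values make the kernel more eager to swap out idle anonymous pages in favor of cache (the value 10 below is just an example, not a recommendation):

```shell
cat /proc/sys/vm/swappiness    # default is typically 60
sysctl vm.swappiness=10        # runtime change (root): swap less eagerly
echo 'vm.swappiness=10' > /etc/sysctl.d/99-swappiness.conf   # persist
```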