1. 4

  2. 1

    Why would you do this when you can just download more ram when you need it? ;)

    But seriously this sounds like it would be really useful in datacenters and render farms. Computers would use exactly what they need and no more or less.

    1. 3

      The old stuff like this, called “software distributed shared memory (DSM),” was very useful in datacenters, render farms, and anywhere you’d use a NUMA machine. The NUMA machines had dozens to hundreds of CPUs, tons of RAM, and high-bandwidth interconnect. The bill ran six to seven digits, with proprietary OSes. SGI & Sun were major players, with SGI’s UV line the last big offering left. The advantage over MPI etc. was that the programmer could write multithreaded apps as if it were one OS image (“single system image”). The hardware transparently handled the rest, so scaling meant adding nodes, plus you got reliability features.

      So, people wanted that without the price and with open standards. First came Beowulf clusters. They were a bitch to program, though; the NUMA and DSM models were much better. Academics then started building schemes to put the NUMA/DSM model into software libraries, compilers, languages, whatever. They kept devising tricks to get performance up, with interconnects getting cheaper and better helping. The MOSIX project even tried to do it at the OS level, where all the machines acted like one and process migrations were transparent. It was somewhat successful. Here are a few examples of software DSM:

      The paper just reminded me of that stuff and all the fun people had trying to clone the supercomputers of the day. The programming model’s advantages mean it’s still worth improving and building on. Nowadays, things like NUMAscale’s products can give us real NUMA with commodity hardware. Cheap networking boards also have RDMA and such. Academics need to dust off that old research to see what combining it with modern tech will bring them, especially if combined with languages like ParaSail that make safe, parallel programming easier.
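
      To make the “one shared address space across machines” idea concrete, here’s a toy sketch of a DSM-style invalidation protocol. All names here are hypothetical, not any real library’s API; real systems like TreadMarks did this at page-fault granularity with far more sophisticated consistency tricks, but the core shape is: a directory tracks who holds each page, reads pull pages on demand, and a write invalidates everyone else’s stale copy.

```python
# Toy single-writer DSM sketch (hypothetical names, illustration only).
# Each "node" sees one flat byte-addressable space; a central directory
# tracks page ownership and invalidates stale cached copies on writes.
PAGE_SIZE = 4

class Directory:
    """Per page: the backing copy plus the set of nodes caching it."""
    def __init__(self):
        self.pages = {}    # page number -> bytes (authoritative copy)
        self.cachers = {}  # page number -> set of node ids

    def fetch(self, node_id, page_no):
        # A node "faults" a page in: record it as a cacher, hand it a copy.
        self.cachers.setdefault(page_no, set()).add(node_id)
        return self.pages.get(page_no, bytes(PAGE_SIZE))

    def write_back(self, node_id, page_no, data, nodes):
        # Single-writer rule: every other cached copy becomes stale.
        for other in self.cachers.get(page_no, set()) - {node_id}:
            nodes[other].invalidate(page_no)
        self.cachers[page_no] = {node_id}
        self.pages[page_no] = bytes(data)

class Node:
    """One machine in the cluster, reading/writing the shared space."""
    def __init__(self, node_id, directory, nodes):
        self.id, self.dir, self.nodes = node_id, directory, nodes
        self.cache = {}    # page number -> local bytearray

    def invalidate(self, page_no):
        self.cache.pop(page_no, None)

    def _page(self, addr):
        page_no = addr // PAGE_SIZE
        if page_no not in self.cache:  # simulated page fault
            self.cache[page_no] = bytearray(self.dir.fetch(self.id, page_no))
        return page_no, self.cache[page_no]

    def read(self, addr):
        _, page = self._page(addr)
        return page[addr % PAGE_SIZE]

    def write(self, addr, value):
        page_no, page = self._page(addr)
        page[addr % PAGE_SIZE] = value
        self.dir.write_back(self.id, page_no, page, self.nodes)

directory = Directory()
nodes = {}
for i in range(2):
    nodes[i] = Node(i, directory, nodes)

nodes[0].write(5, 42)    # node 0 stores into the shared space
print(nodes[1].read(5))  # node 1 reads the same address: prints 42
```

      The programming win the comment describes is exactly this: both nodes just read and write addresses as if on one machine, while the fetch/invalidate traffic (the part that kills performance over slow interconnects, and that RDMA helps with) stays hidden underneath.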