Another place you get a 1/(1-x) term like that is modeling garbage collection: x is the fraction of available memory you’ve filled with live data (i.e., non-garbage). In a simple model with full GCs only, if you’ve got 8 GB to play with and 4 GB of live data, you can allocate 4 more GB between each GC. If you have 7.9 GB of live data, you can only allocate 100 MB, a 40x increase in GC work from a <2x increase in live data, thanks to the magic of 1/(1-x). As live data approaches all available RAM, you hit the wall. (Java can throw OutOfMemoryError in a situation like this, where it can technically still allocate but has to collect far too often.)
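A minimal sketch of that toy model (the function name and the specific numbers are just illustrative): with full GCs only, the collections needed per unit of allocation scale with 1/headroom, which is the 1/(1-x) curve.

```python
# Toy model: each full GC frees everything but live data, so you can
# allocate (heap - live) between collections. GCs per GB allocated is
# therefore 1/headroom, i.e. proportional to 1/(1-x) with x = live/heap.

def gcs_per_gb_allocated(live_gb: float, heap_gb: float) -> float:
    headroom_gb = heap_gb - live_gb  # allocatable space between full GCs
    return 1.0 / headroom_gb

baseline = gcs_per_gb_allocated(4.0, 8.0)  # the 4 GB live / 8 GB heap case
for live in (4.0, 6.0, 7.0, 7.9):
    factor = gcs_per_gb_allocated(live, 8.0) / baseline
    print(f"{live:>4} GB live -> {factor:5.1f}x the GC work of the 4 GB case")
# 7.9 GB live prints 40.0x: a <2x growth in live data, a 40x jump in work.
```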

There’s a big gap between the toy model and a useful model: short-lived allocations are cheaper in generational GCs, you care about how costly and background-able the collections themselves are, different runtimes pace GC differently (Go doesn’t look at physical memory size, Java does), etc.

Still, when I think about how, for instance, the “peak JavaScript” argument (that mobile Web apps would never speed up much after 2013) didn’t pan out, a lot of the credit obviously goes to better JS engines and faster CPUs with more cores (among other things!), but it also matters that today’s phones pack a few times the RAM of 2013’s: it moves the “wall” on that 1/(1-x) graph a good way to the right.