Another place you get a 1/(1-x) term like that is modeling garbage collection: the x is the fraction of available memory you’ve filled with live data (i.e. non-garbage). In a simple model with full GCs only, if you’ve got 8 GB to play with and 4 GB of live data, you can allocate 4 more GB between each GC. If you have 7.9 GB of live data, you can only allocate 100 MB between GCs: a 40x increase in work from a <2x increase in live data, thanks to the magic of 1/(1-x). As live data approaches all available RAM, you hit the wall. (Java can raise OutOfMemoryError in a situation like this, where it can technically still allocate but has to collect way too often.)
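A quick sketch of the toy model, using the numbers above (the 8 GB heap size and the function name are just for illustration): headroom per cycle is heap minus live data, so the number of full GCs per GB allocated scales like 1/(1-x).

```python
HEAP_GB = 8.0  # hypothetical fixed heap size from the example

def gcs_per_gb_allocated(live_gb):
    """Full GCs needed per GB of allocation: 1 / headroom."""
    headroom_gb = HEAP_GB - live_gb  # what you can allocate between GCs
    return 1.0 / headroom_gb

for live_gb in (4.0, 6.0, 7.0, 7.9):
    x = live_gb / HEAP_GB  # live fraction, the x in 1/(1-x)
    print(f"live={live_gb:4.1f} GB  x={x:.3f}  "
          f"GCs per GB allocated={gcs_per_gb_allocated(live_gb):5.2f}")
```

At 4 GB live you get 0.25 GCs per GB allocated; at 7.9 GB live you get 10, the 40x jump from the text.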
There’s a big gap between the toy model and a useful model: short-lived allocations are cheaper in generational GCs, you care about how costly and background-able the GCs themselves are, different runtimes pace GC differently (Go doesn’t look at physical memory size, Java does), etc.