I don’t understand what a realistic solution to the hypothetical in the article would look like. What could be done, other than sharing off-peak servers with someone else, as is done on the cloud?
I think the OP is talking about making the actual software faster. Basic performance work, but the kind that is mostly surfaced through profiling and analysis. I think the retooling the article is talking about revolves around improving that efficiency, and it requires thinking “we want to know how efficient this is, so we can make it better”, rather than “the efficiency of this is X, so we need Y/X machines, where Y is our target performance level”.
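To make the contrast concrete, here’s a toy sketch of the two mindsets. All the numbers (10,000 rps target, 250 rps per server, a 2x speedup) are made up for illustration; the point is just that capacity planning treats X as a fixed input, while the profiling mindset treats X as something you can raise.

```python
import math

def servers_needed(target_rps: float, per_server_rps: float) -> int:
    """Capacity-planning mindset: efficiency X is a given.
    Need Y rps total, each server delivers X, so buy ceil(Y/X) servers."""
    return math.ceil(target_rps / per_server_rps)

# Baseline: 10,000 rps target, 250 rps per server.
baseline = servers_needed(10_000, 250)

# Efficiency mindset: profile, fix a hot path, and X itself grows.
# A hypothetical 2x speedup halves the fleet instead of growing it.
after_profiling = servers_needed(10_000, 500)

print(baseline, after_profiling)  # 40 20
```

Same formula either way; the difference is whether the denominator is treated as a constant or as the thing you work on.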