I think it might be nice to discuss what it means for an operation to be expensive, because even trivial, small functions can become a bottleneck if you’re executing them hundreds of millions of times per second.
Really, the best way to quantify the performance cost of an operation is to ask: how much total speed-up would I get if I made the operation X% faster?
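That question is essentially Amdahl's law. Here's a small sketch of the arithmetic, with an illustrative (not measured) example: an operation that accounts for 10% of total runtime, even made twice as fast, only buys about a 5% overall speed-up.

```python
def total_speedup(fraction, local_speedup):
    """Amdahl's law: overall speed-up when a portion of runtime
    ('fraction', between 0 and 1) is made 'local_speedup' times faster."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

# Operation is 10% of runtime, made 2x faster:
print(total_speedup(0.10, 2.0))  # ~1.053, i.e. roughly a 5% overall win
```

The takeaway is that the answer depends entirely on what fraction of total runtime the operation occupies, which is the frequency point raised below.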
That was also my thought. The cost of the operation is only half the problem; the other half is the frequency of the operation. A database query may be in the 10^9 range for cost and an add in the 10^0 range, but you can bet that you’re doing many orders of magnitude more adds than database queries. That’s an extreme example that is probably unhelpful, but something like a malloc+free call is probably a 10^3 operation, and if you have a loop doing object allocation on the path to the database then this can eventually outweigh the cost of the remote call.
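The cost-times-frequency point can be sketched in a couple of lines. The costs and counts below are illustrative placeholders in arbitrary "cycle" units, not measurements; the point is just that a cheap operation in a hot loop can dominate one expensive call.

```python
# Rough back-of-envelope: total cost = per-operation cost * call count.
# All numbers are illustrative, not measured.
ops = {
    "database query": (10**9, 1),       # one expensive remote call
    "malloc+free":    (10**3, 10**7),   # cheap op inside a hot loop
    "add":            (10**0, 10**9),   # trivial op, huge count
}
for name, (cost, count) in ops.items():
    print(f"{name:15s} total ~ {cost * count:.0e}")
# With these numbers, malloc+free totals 1e10 -- ten times the single
# 1e9 database query, despite being a million times cheaper per call.
```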
That said, it’s probably a good rule of thumb. Removing a single database call may well be a much smaller change and yield a big speed-up relative to the size of the change. I think that’s a more important metric than the ‘how much speed-up would I get’ question that you pose: how much speed-up would I get per unit of refactoring effort. If there’s a trivial change that gives me a 10% win, I’d rather start with that than a big refactoring that gives a 50% win. If I can find a few such high-value wins, I may not need the really big refactoring.