    This is very similar to the queue post from @ferd a few weeks ago. Push back: it makes everyone slower, but your system will still function. In a case like retail this is probably much better than just dropping some transactions, which would be the other alternative. Cap latency on transactions and toss out the rest.

    IMO, this is not the kind of degradation most web services need to handle these days. Most systems will need to decide what to do when a component they depend on is unreachable or too slow. The lesson there is generally to make it so that you can either live without it or fall back to a much simpler version that can be run internally if the actual service is down.
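
    The "push back, cap latency, toss the rest" idea above can be sketched with a bounded queue: producers get refused when the queue is full, and workers drop anything that has already waited past its latency budget. This is a minimal illustrative sketch, not anyone's production design; the names (`submit`, `worker`, `LATENCY_BUDGET`) and the numbers are made up for the example.

    ```python
    import queue
    import time

    LATENCY_BUDGET = 0.5             # seconds a job may wait before we drop it
    work = queue.Queue(maxsize=100)  # bounded: a full queue means push-back

    def submit(job):
        """Enqueue a job, or return False (push-back) instead of growing the queue."""
        try:
            work.put((time.monotonic(), job), block=False)
            return True
        except queue.Full:
            return False  # caller slows down or retries later

    def worker():
        while True:
            enqueued_at, job = work.get()
            if time.monotonic() - enqueued_at > LATENCY_BUDGET:
                continue  # stale: dropping beats serving a too-slow answer
            job()  # still fresh: do the work
    ```

    The two knobs map directly to the comment above: `maxsize` is how hard you push back, and `LATENCY_BUDGET` is the cap after which a transaction is tossed rather than served late.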

      I heard a very similar tale from an old programmer…

      They had a data entry system that caused floods of complaints about it being “slow” in the morning.

      On measurement, it was found that during the morning sign-on peak, response times were indeed slower, unsurprisingly, but still within spec.

      The options were…

      • Say “It’s in Spec, live with it” - but have unhappy customers.
      • Upgrade hardware.
      • Put a tiny delay on every keystroke.

      They went with the last option.

      Why? It spread the load during the peaks, permitting the hardware to keep up.

      And it provided consistent response times for the typists’ “muscle memory”.

      Result: a cheap, easy fix and happy customers.
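
      The keystroke-delay fix above is essentially fixed-rate pacing: a small constant delay per event smooths bursts into a steady stream, trading a little best-case latency for consistency under peak load. A minimal sketch, with a purely illustrative `paced` helper and delay value (nothing here comes from the original system):

      ```python
      import time

      PACING_DELAY = 0.02  # 20 ms per keystroke (illustrative value)

      def paced(events, delay=PACING_DELAY, sleep=time.sleep):
          """Yield events no faster than one per `delay` seconds."""
          last = 0.0
          for ev in events:
              wait = delay - (time.monotonic() - last)
              if wait > 0:
                  sleep(wait)  # spread the burst out over time
              last = time.monotonic()
              yield ev
      ```

      The same shape shows up in modern rate limiters: the point is not to make anything faster, but to make the peak load and the perceived response time both flatter.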