    Interesting article!

    The method getRestaurantMenus, when invoked simultaneously by many coroutines, will result in one coroutine winning the race and entering the body to execute fetchMenuFromRemoteCacheOrDatabase.

    It looks like this is solving the cache stampede problem with the locking approach, but using deferred coroutines for the locking. A couple of questions for the author:

    1. Have you considered working with a CDN cache to eliminate stampedes? With a one-second cache, DoorDash should be able to cap origin requests for a single menu at one per CDN PoP per second.
    2. For the other requests that are waiting, do they serve stale data and return, or just wait until the winning coroutine’s database read completes?

      Hey, if you look closely, we are using the Deferred not as a locking mechanism but as a grouping mechanism. The best part about this approach is the latecomers: if your reads are expensive, readers arriving toward the end (when the Deferred is about to be fulfilled) see lower latency. To answer your questions:

      1. The above-mentioned scenario is just an example; of course one can use a CDN in this case, and we have done something similar where it was applicable. We also use this technique in places, including identity systems, where putting such information on a CDN would be a bad idea.
      2. The other coroutines just wait for the winning coroutine to complete its read. You can build all sorts of variations on top of this, e.g. add a timeout and return stale data if the scenario permits, or start your own DB read. The gist is using promises to avoid repeated reads.
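
A minimal sketch of the grouping pattern discussed above, using a Deferred per key so concurrent readers share one expensive read. The names (MenuCache, getRestaurantMenus, fetchMenuFromRemoteCacheOrDatabase) follow the article's example; the implementation itself is illustrative, not DoorDash's actual code:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

// One in-flight Deferred per key groups concurrent readers, so the
// expensive read runs once per stampede instead of once per caller.
class MenuCache(private val scope: CoroutineScope) {
    private val inFlight = ConcurrentHashMap<String, Deferred<String>>()
    val fetchCount = AtomicInteger(0) // for demonstration only

    suspend fun getRestaurantMenus(restaurantId: String): String {
        // The first caller for a key installs a Deferred; later callers
        // (including "latecomers" near fulfilment) simply await it.
        val deferred = inFlight.computeIfAbsent(restaurantId) {
            scope.async {
                try {
                    fetchMenuFromRemoteCacheOrDatabase(restaurantId)
                } finally {
                    inFlight.remove(restaurantId) // let future calls refresh
                }
            }
        }
        return deferred.await()
    }

    // Stand-in for the expensive remote-cache/database read.
    private suspend fun fetchMenuFromRemoteCacheOrDatabase(id: String): String {
        fetchCount.incrementAndGet()
        delay(100)
        return "menu-for-$id"
    }
}
```

If fifty coroutines call getRestaurantMenus("r1") at once, they all receive the same result while the fetch executes only once; the timeout-or-stale-data variations mentioned in the reply would be layered around the `deferred.await()` call.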