If your service is strongly dependent on a cache, you can have each instance, as it boots, slurp in a copy of the cache from a neighbouring node. Only once that copy is complete does the node start advertising that it is healthy.
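A minimal sketch of that boot sequence, assuming a hypothetical `fetch_snapshot` that pulls a cache dump from a peer (the node names and the injected fetch function are illustrative, not a real API):

```python
# Hypothetical boot-time warm-up: pull a cache snapshot from the first
# reachable neighbour, then flip the health flag the load balancer polls.
def warm_cache(neighbours, fetch_snapshot, cache):
    for node in neighbours:
        try:
            snapshot = fetch_snapshot(node)  # e.g. GET /cache/dump on a peer
        except ConnectionError:
            continue                         # peer down; try the next one
        cache.update(snapshot)
        return True    # cache is warm; safe to advertise healthy
    return False       # no peer reachable; stay unhealthy (or start cold)
```

Until `warm_cache` returns `True`, the health-check endpoint keeps reporting unhealthy, so the load balancer never routes traffic to a cold instance.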
If you are feeling really nifty, you can handle cache misses by having nginx cycle through and proxy to neighbouring instances until it gets a hit, with the front node caching that response locally. To be fancier still, you can have nodes send around cache digests so you do not need to cycle through all your proxies.
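The nginx side of that might look roughly like the sketch below. The peer addresses are placeholders, and this assumes each peer returns 404 on its own miss so `proxy_next_upstream` moves on to the next one; a real setup needs more care around loops and origin fallback:

```nginx
# Sketch: serve from the local cache; on a miss, try neighbouring
# instances, and cache whatever response comes back locally.
proxy_cache_path /var/cache/nginx keys_zone=local:10m;

upstream neighbours {
    server 10.0.0.2:8080;   # assumed peer addresses
    server 10.0.0.3:8080;
}

server {
    listen 8080;

    location / {
        proxy_cache local;
        proxy_cache_valid 200 10m;
        proxy_pass http://neighbours;
        # Cycle to the next peer when one reports a miss as a 404.
        proxy_next_upstream error timeout http_404;
    }
}
```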
…then of course your current solution is 20 lines of code.