
    Using an in-process L1 cache is likely only useful at scale if:

    • your application looks up the same cached object from L2 multiple times while handling a single request
    • you have some server affinity, routing successive requests from the same client to the same application server

    Without one of these two, the chances of finding your entry in L1 decrease as you scale.

    (The first case seems odd, but it was actually pretty common in one codebase I’m familiar with. In that case, though, the number of items you need to L1-cache is really small - perhaps a user object or similar.)
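To make the first case concrete, here’s a minimal sketch of a request-scoped L1 in front of a shared L2. All names (`FakeL2`, `handle_request`) are illustrative, not any real cache library’s API; the point is just that repeated lookups within one request cost a single L2 round trip:

```python
class FakeL2:
    """Stands in for a remote L2 cache (e.g. memcached/redis); counts round trips."""
    def __init__(self, data):
        self.data = data
        self.gets = 0

    def get(self, key):
        self.gets += 1
        return self.data.get(key)

def handle_request(l2, keys):
    l1 = {}  # request-scoped: lives only for the duration of one request
    results = []
    for key in keys:
        if key not in l1:
            l1[key] = l2.get(key)  # only the first lookup per key hits L2
        results.append(l1[key])
    return results

l2 = FakeL2({"user:1": "alice"})
handle_request(l2, ["user:1", "user:1", "user:1"])
print(l2.gets)  # 1: one L2 round trip despite three lookups
```

With no server affinity and a fleet of N nodes, a longer-lived L1 only helps if the same key lands on the same node again, which is roughly a 1/N chance under random routing - hence the “decreases as you scale” point above.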

    Another honourable mention in this space is groupcache: https://github.com/golang/groupcache - also written by the inestimable bradfitz, the person behind LiveJournal and the original Perl memcached.

    Groupcache is purely client-side, with application nodes co-operating to cache data and avoid a thundering herd on cache loads. I’ve not used it, but the approach is really interesting.
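The thundering-herd part of that can be sketched in a few lines. This is my own illustrative take on the “singleflight” idea (groupcache itself is in Go and its real API differs): concurrent misses for the same key share one in-flight load instead of each hammering the backing store.

```python
import threading
import time

class SingleFlight:
    """Share one in-flight load per key among concurrent callers (sketch)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}  # key -> (done event, result box)

    def do(self, key, load):
        with self._lock:
            entry = self._inflight.get(key)
            if entry is None:
                # First caller for this key becomes the leader and does the load.
                entry = (threading.Event(), {})
                self._inflight[key] = entry
                leader = True
            else:
                leader = False
        done, box = entry
        if leader:
            try:
                box["value"] = load(key)
            finally:
                done.set()  # wake all waiters, then retire the entry
                with self._lock:
                    del self._inflight[key]
        else:
            done.wait()  # followers just wait for the leader's result
        return box["value"]

loads = []
def slow_load(key):
    loads.append(key)  # record each real backend hit
    time.sleep(0.2)    # long enough for the other callers to pile up
    return key.upper()

sf = SingleFlight()
results = []
def worker():
    results.append(sf.do("user:1", slow_load))

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(loads))  # 1: five concurrent misses, one backend load
```

Groupcache layers consistent hashing on top of this, so each key also has one “owner” node across the whole fleet - but the herd suppression above is the core trick.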


      Nice comparison/write-up! I liked the background depth (on clustering and algorithms).