1.

    I’m curious why they don’t consider prefetching or even mention it in the future work section of their paper.

    They show the hard limit from compulsory misses on both of their graphs with the “infinite” line; the standard way to get around that is prefetching, and I wouldn’t be surprised if something pretty naive could predict both which images will be viewed and which edge cache will serve them.

    This is way outside of my field, though; perhaps there’s some obvious reason not to prefetch.
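
    To make that concrete, here’s the kind of naive policy I have in mind, as a Python sketch (entirely hypothetical; the paper describes no such mechanism, and all the names here are made up):

    ```python
    def on_photo_view(album, index, edge_cache, fetch_from_origin):
        """Serve photo album[index], then prefetch its successor.

        edge_cache: a plain dict standing in for the cache of the edge
        node serving this user. Routing is assumed sticky, so the next
        request from the same user lands on the same edge cache.
        """
        key = album[index]
        if key not in edge_cache:
            edge_cache[key] = fetch_from_origin(key)  # compulsory miss
        # Sequential-access heuristic: the next photo in an album is
        # the most likely next request, so warm the cache for it now.
        if index + 1 < len(album):
            nxt = album[index + 1]
            if nxt not in edge_cache:
                edge_cache[nxt] = fetch_from_origin(nxt)
        return edge_cache[key]
    ```

    Even something this dumb would turn the compulsory miss on photo n+1 into a hit whenever people page through an album in order.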

    1.

      I was surprised by how much better the S4LRU cache is than the others, considering how simple the concept is. From the paper:

      Quadruply-segmented LRU. Four queues are maintained at levels 0 to 3. On a cache miss, the item is inserted at the head of queue 0. On a cache hit, the item is moved to the head of the next higher queue (items in queue 3 move to the head of queue 3). Each queue is allocated 1/4 of the total cache size and items are evicted from the tail of a queue to the head of the next lower queue to maintain the size invariants. Items evicted from queue 0 are evicted from the cache.

      Sounds an awful lot like generational GC!
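
      In case anyone wants to play with it, here’s a minimal Python sketch of the scheme as I read it (my own class and method names, and it assumes a get-on-hit / put-on-miss calling convention the paper doesn’t spell out):

      ```python
      from collections import OrderedDict

      class S4LRU:
          """Quadruply-segmented LRU: four LRU queues, levels 0-3,
          each capped at a quarter of the total capacity."""

          def __init__(self, capacity, levels=4):
              self.per_queue = capacity // levels
              # Each OrderedDict is one queue; the right-hand end is
              # the head (most recent), the left-hand end is the tail.
              self.queues = [OrderedDict() for _ in range(levels)]

          def _insert(self, level, key, value):
              # Insert at the head of `level`, cascading tail evictions
              # down through lower queues to keep the size invariant.
              while level >= 0:
                  queue = self.queues[level]
                  queue[key] = value
                  if len(queue) <= self.per_queue:
                      return
                  key, value = queue.popitem(last=False)  # evict tail
                  level -= 1
              # Fell off the tail of queue 0: gone from the cache.

          def get(self, key):
              for level, queue in enumerate(self.queues):
                  if key in queue:
                      value = queue.pop(key)
                      # Hit: promote to the next higher queue (items in
                      # queue 3 move back to the head of queue 3).
                      self._insert(min(level + 1, len(self.queues) - 1),
                                   key, value)
                      return value
              return None  # miss: caller fetches the item, then put()s it

          def put(self, key, value):
              # Miss path: new items enter at the head of queue 0.
              self._insert(0, key, value)
      ```

      The generational flavor is easy to see here: a one-off scan only ever churns queue 0, while anything read repeatedly gets promoted and has to age back down through all four queues before it is evicted.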