1. 9

    This seems to have some similarities to generational garbage collection (a decent description here: https://stackify.com/what-is-java-garbage-collection/). There’s at least some analogy between newly allocated objects and newly loaded files in the cache, I think: most die young, and only the survivors are worth promoting to a longer-lived tier.

    1. 8

      An informative post. The title is slightly off in my opinion, since the popular assets also go through the transient cache, so they can be served from memory too. They basically just evict unpopular assets faster.

      My initial read of their claim was that they’d have a static database of unpopular assets permanently loaded in memory, which would’ve been really something. Still, it’s very interesting to see the resulting graphs even though the technique isn’t as out there as I hoped.
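
      To make that concrete, here’s a toy sketch of the policy as I read it from the post (my own names, not their code; `fetch_from_origin` stands in for the proxy path). The promotion step is also what makes the generational-GC analogy upthread work: most assets die young in the memory tier, and only the survivors earn an SSD write.

      ```python
      from collections import OrderedDict

      class TwoTierCache:
          def __init__(self, transient_slots=4):
              self.transient_slots = transient_slots
              self.transient = OrderedDict()  # recently admitted assets, held in RAM
              self.ssd = {}                   # promoted (popular) assets, on disk

          def get(self, key, fetch_from_origin):
              if key in self.ssd:
                  return self.ssd[key]            # popular asset, served from disk
              if key in self.transient:
                  body = self.transient.pop(key)  # second hit while still in RAM:
                  self.ssd[key] = body            # promote it -- the only SSD write
                  return body
              body = fetch_from_origin(key)       # miss: admit to the memory tier only
              self.transient[key] = body
              if len(self.transient) > self.transient_slots:
                  self.transient.popitem(last=False)  # oldest one-hit wonder ages out of RAM, never touching disk
              return body
      ```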

      1. 9

        They’re trying to reduce excessive SSD writes, which for unpopular assets are deadly: writing a one-hit wonder to disk burns write endurance without ever earning a second request to pay for it.

        1. 3

          I also found the title confusing. I think a better title would be: “Why we started putting unpopular assets in memory only” (or something to that effect).

        2. 2

          On the topic of reducing extraneous SSD writes, I think it could be useful to have a filesystem for temporary files that initially stores files in memory, then moves files (or even just blocks of files) to disk if memory usage gets too high. Maybe Linux’s tmpfs with a big swap partition would work. Has anyone tried that?
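
          For what it’s worth, Python’s standard library does this at the application level rather than the filesystem level: `tempfile.SpooledTemporaryFile` buffers in memory and only spills to a real on-disk temp file once a size threshold is crossed. Not the per-block filesystem you’re describing, but the same spill-under-pressure pattern:

          ```python
          import tempfile

          # Stays in memory until it grows past 1 MiB, then transparently
          # rolls over to an ordinary on-disk temporary file.
          with tempfile.SpooledTemporaryFile(max_size=1024 * 1024, mode="w+b") as tmp:
              tmp.write(b"small payloads never touch the disk")
              tmp.write(b"\0" * (2 * 1024 * 1024))  # crossing max_size triggers the rollover
              tmp.seek(0)
              head = tmp.read(16)
          ```

          One caveat: the rollover is one-way, so once a file spills to disk it stays there even if it later shrinks.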

          1. 2

            This article subtly shifts between several meanings of “cache” without clearly delineating which is which. It took me a long time to figure out that at some point “cache” starts referring to CloudFlare’s core business offering, i.e. serving web assets from their own servers instead of proxying every request back to the origin.