1. 4

I guess it must be important for something, but I can’t think of what user program would need it. Anyone know?

  1.  

    1. 1

      Benchmarking perhaps?

      If you are trying to determine which algorithm is truly faster, X or Y, you had better be sure you are measuring the algorithm and not merely whether the caches are hot, since the cache effects will dominate.

      Besides, you can emulate the effect by filling the caches with other data, as in the sketch below. It just takes longer, but it can still be done.
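
      A minimal sketch of that eviction trick, assuming 64-byte cache lines and a last-level cache no larger than 32 MiB (both numbers are illustrative, not measured on any particular part): instead of clflush, walk a buffer bigger than the cache so the lines you care about get pushed out by ordinary replacement.

      ```c
      #include <stdlib.h>

      #define CACHE_LINE   64                     /* assumed line size */
      #define EVICT_BYTES  (32u * 1024 * 1024)    /* assumed >= last-level cache size */

      /* Evict earlier data by ordinary cache replacement: touch one byte per
       * line across a buffer larger than the whole cache, so previously cached
       * contents are displaced without needing clflush. */
      void evict_caches(void)
      {
          static unsigned char *buf;
          if (!buf)
              buf = malloc(EVICT_BYTES);
          if (!buf)
              return;

          for (size_t i = 0; i < EVICT_BYTES; i += CACHE_LINE)
              buf[i] += 1;   /* a write guarantees the line is brought in */
      }
      ```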

      1. 1

        Can you get that kind of timing, though? All those exploits seem to measure how long clflush takes. I don’t see how you get the same info without it.

        1. 4

          Hmm. I thought it was done by checking the time to access a permitted, addressable location, after using indirect addressing to load that permitted location into the cache based on a value you are not permitted to access.

          If…

          • you are allowed to access locations BaseAddress through BaseAddress + 256 * CacheLineSize,
          • you evict all of the allowed range from the cache (or flush it from the cache; either will do),
          • but you want to know the value of the byte at the protected address pointerToByte,
          • then attempt to load BaseAddress[*pointerToByte * CacheLineSize],
          • which will segfault, since you’re not allowed to dereference pointerToByte,
          • but you had masked and ignored the fault,
          • and the damage to the cache has already been done,
          • and then walk I from 0 to 255, checking the time to access BaseAddress[I * CacheLineSize],
          • all of which are permitted,

          If the access to BaseAddress[I0 * CacheLineSize] is significantly faster than the other 255 accesses… you know *pointerToByte had the value I0. A sketch of that probe loop follows below.
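
          A minimal sketch of the probe loop in C. Only the reload/timing step is shown; the faulting indirect load and the fault suppression are omitted. The names (probe, base), the 64-byte CacheLineSize, and the use of __rdtscp for timing are illustrative assumptions; real exploits often stride by a larger amount (e.g. 4096 bytes per value) to keep the prefetcher from muddying the timings, and repeat the measurement to average out noise.

          ```c
          #include <stdint.h>
          #include <stddef.h>
          #include <x86intrin.h>   /* __rdtscp, _mm_lfence */

          #define CACHE_LINE_SIZE 64   /* assumed; a real probe often strides by 4096 */

          /* Time a load of each of the 256 permitted cache lines and return the
           * index of the fastest one -- the line the faulting access left in
           * cache, i.e. the likely value of *pointerToByte. */
          int probe(volatile uint8_t *base)
          {
              uint64_t best_time = UINT64_MAX;
              int best_index = -1;
              unsigned aux;

              for (int i = 0; i < 256; i++) {
                  volatile uint8_t *addr = base + (size_t)i * CACHE_LINE_SIZE;

                  _mm_lfence();                          /* serialize before timing */
                  uint64_t start = __rdtscp(&aux);
                  (void)*addr;                           /* the timed load */
                  uint64_t elapsed = __rdtscp(&aux) - start;
                  _mm_lfence();

                  if (elapsed < best_time) {
                      best_time = elapsed;
                      best_index = i;
                  }
              }
              return best_index;
          }
          ```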