  1.

    The final paragraph is profoundly depressing and interesting:

    Like many with a background in programming languages and their implementations, the idea that safe languages enforce a proper abstraction boundary, not allowing well-typed programs to read arbitrary memory, has been a guarantee upon which our mental models have been built. It is a depressing conclusion that our models were wrong — this guarantee is not true on today’s hardware. Of course, we still believe that safe languages have great engineering benefits and will continue to be the basis for the future, but… on today’s hardware they leak a little.
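
    To make the quoted claim concrete, here is a minimal sketch of the bounds-check-bypass gadget (Spectre variant 1) behind that conclusion, written in C with illustrative names rather than taken from the article; the compiled form of a safe language's bounds check has the same shape:

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative globals; the names are hypothetical. */
    uint8_t array1[16];          /* data the code is allowed to read */
    size_t  array1_size = 16;
    uint8_t array2[256 * 512];   /* probe array used as a cache side channel */
    uint8_t temp;                /* keeps the loads from being optimized away */

    /* Spectre v1 gadget shape: train the branch predictor with in-bounds
     * values of x, then call with an out-of-bounds x. The bounds check
     * eventually fails, but both loads may already have run speculatively,
     * leaving array1[x]'s value encoded in which line of array2 is cached. */
    void victim(size_t x) {
        if (x < array1_size) {
            temp &= array2[array1[x] * 512];
        }
    }
    ```

    The architecturally visible result is discarded once the misprediction is detected, but the cache state is not, which is the sense in which an otherwise well-typed, bounds-checked program "leaks a little."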

    I wonder how we can mitigate the cost of context switches.

    1.

      For me, it was depressing because security researchers in high-assurance security demonstrated these problems on commercial products from 1992-1995, on top of requiring that the hardware be trustworthy. Nobody paid attention (especially hackers and security people), and now those people's models have failed to account for this. They're still ignoring the methods that found the problems in the 1990s, and/or not using what such researchers have built since. Fortunately, the press waves from Meltdown/Spectre got similarly bright people to start designing all kinds of solutions along the same lines.

      As far as context switches go, their cost went down over time due to the needs of HPC, and it was at an all-time low last I checked. Lynx claimed their LynxSecure product could do over a hundred thousand switches a second with idle time still close to 100%. Some architectures, like QNX's, dodge some of those switches by design. With a good design, they're already not that bad. I can't guess how much more they can be improved, given that chip and OS designers already put a lot of effort into optimizing them.
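
      For a rough sense of those numbers, here is a crude sketch (my own, not from Lynx or QNX) of one way to estimate raw context-switch cost on Linux: two processes ping-pong a byte over pipes, forcing a switch on each hop. Results vary widely with kernel version, Meltdown/Spectre mitigations, CPU pinning, and hardware.

      ```c
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>
      #include <sys/types.h>
      #include <sys/wait.h>

      #define ROUNDS 100000

      int main(void) {
          int p2c[2], c2p[2];   /* parent->child and child->parent pipes */
          char b = 0;
          if (pipe(p2c) != 0 || pipe(c2p) != 0) { perror("pipe"); return 1; }

          pid_t pid = fork();
          if (pid < 0) { perror("fork"); return 1; }

          if (pid == 0) {       /* child: echo every byte straight back */
              for (int i = 0; i < ROUNDS; i++) {
                  if (read(p2c[0], &b, 1) != 1 || write(c2p[1], &b, 1) != 1)
                      _exit(1);
              }
              _exit(0);
          }

          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (int i = 0; i < ROUNDS; i++) {   /* each round trip forces ~2 switches */
              if (write(p2c[1], &b, 1) != 1 || read(c2p[0], &b, 1) != 1)
                  return 1;
          }
          clock_gettime(CLOCK_MONOTONIC, &t1);
          wait(NULL);

          double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
          printf("~%.0f ns per switch (upper bound: includes pipe read/write overhead)\n",
                 ns / (2.0 * ROUNDS));
          return 0;
      }
      ```

      Pinning both processes to one core (e.g. with taskset) gets closer to a pure switch cost; unpinned, it measures cross-core wakeup latency as much as switching.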