1. 30

  2. 7

    Another mitigation with nontrivial performance penalties.

    1. 3

      And another situation where you don’t want to be in a shared environment (“the cloud”), as mentioned on the website. If this keeps going, solutions like bare-metal clouds and customer-owned physical machines might become more relevant and common. Most of the bigger cloud providers offer these, but they seem to be marketed more like an enterprise product, which makes sense given the costs, yet sounds odd from a security perspective.

      I also wonder how long it will be until systems are upgraded, and whether in half a year we’ll see someone write that they just realized their AWS server is unpatched because it was simply forgotten.

      On one hand it’s good that people are now looking into these things; on the other hand it can be really hard if you want to take security seriously and implement more than “feel good” measures.

      1. 1

        Note that they couldn’t get the exploit working on Zen 3. I think the endgame is that all CPUs get good enough, and all clouds migrate to good-enough CPUs, not bare-metal clouds and physical machines. What happens in the transition period is an interesting question, but I don’t think the endgame is in doubt.

        1. 1

          They couldn’t get this exploit working on that one microarchitecture. Why are you so confident that there is an endgame where CPU designers are able to fully prevent exploits? The history of computing says otherwise.

      2. 3

        It’s worse than that: it’s a different mitigation per microarchitecture. In this case, I think you need only the AMD or the Intel ones, but a few of the recent transient-execution vulnerabilities need different mitigations for different Intel and AMD microarchitectures. This is heading toward the point where shipping software as native code is unsustainable and you need to ship at least something at the abstraction level of LLVM IR that gets compiled with mitigations for the target microarchitecture at install time.
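        The install-time idea can be sketched roughly like this. Purely illustrative: `-mretpoline` and `-mspeculative-load-hardening` are real Clang flags, but they are generic examples of transient-execution hardening rather than the specific mitigations this vulnerability needs, and the vendor dispatch and file names are made up for the example.

        ```shell
        # "Ship" architecture-neutral LLVM bitcode instead of native code.
        cat > app.c <<'EOF'
        #include <stdio.h>
        int main(void) { puts("ok"); return 0; }
        EOF
        clang -O2 -emit-llvm -c app.c -o app.bc

        # At install time, pick hardening flags for the host CPU.
        case "$(grep -m1 vendor_id /proc/cpuinfo | awk '{print $3}')" in
          GenuineIntel) FLAGS=-mretpoline ;;                  # indirect-branch (retpoline) thunks
          AuthenticAMD) FLAGS=-mspeculative-load-hardening ;; # SLH, one example alternative
          *)            FLAGS="" ;;
        esac
        clang -O2 $FLAGS app.bc -o app
        ./app   # prints "ok"
        ```

        A real scheme would need a trusted table mapping microarchitecture (not just vendor) to flags, which is exactly the maintenance burden the comment above is pointing at.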

      3. 2

        Did the name have to sound so much like rectal bleed, though?

        1. 2


          1. What’s the right incantation to avoid the performance impact of this (and related issues) locally? (Are the patches in desktop consumer distributions?)
          2. For cloud customers of “big enough scale”, surely it makes sense to have a “friendly co-tenants only” switch? Anyone big enough to run k8s probably meets the bar for this to be reasonable/cost-effective?

          If you do that - and one can disable the performance-sapping mitigations in that case - that’s a big win, surely?
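          For question 1, a sketch assuming a Linux machine: the sysfs path below is the kernel’s standard vulnerability-reporting interface (the `retbleed` entry appeared with the 5.19-era patches), and `retbleed=off` / `mitigations=off` are documented kernel boot parameters; the GRUB step is just one common way to set them.

          ```shell
          # Check what the kernel reports and whether a mitigation is active.
          # Falls back gracefully on kernels that predate the advisory.
          cat /sys/devices/system/cpu/vulnerabilities/retbleed 2>/dev/null \
            || echo "no retbleed entry (kernel predates the advisory)"

          # To disable it on a trusted, single-tenant machine, boot with one of:
          #   retbleed=off       # turn off only this mitigation
          #   mitigations=off    # turn off all optional CPU mitigations (blunt, but
          #                      # common for benchmarking)
          # On GRUB-based distributions: append the option to GRUB_CMDLINE_LINUX in
          # /etc/default/grub, run update-grub (or grub2-mkconfig), and reboot.
          ```

          Note the trade-off is exactly the one you describe: only sensible when you control all co-tenants.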