1. 42

    1. 3

      Wow, I’ve rarely seen such an angry email from Linus. Last time it got close was with Nvidia. In a way it’s oddly reassuring to know that the Linux kernel is guarded by such a tough Cerberus.

      1. 18

        Cerberus seems to be biting in all directions a little mindlessly, though. Reading the follow-ups is interesting:


        > You’re looking at IBRS usage, not IBPB. They are different things.

        Linus seems to have fallen victim to a mix-up while reading. His response is appropriately meek:


        > Ehh. Odd intel naming detail.

        1. 9

          To my untrained eye, these considerations seem pretty polite and technically rooted.

        2. 3

          I work in the computer security industry, so I can see up close the fear of what Meltdown/Spectre could do. Still, changing the fundamental operation of every single branch prediction on your CPU is far more wide-ranging and troublesome. I can’t understand how anyone’s threat model for “bad JS on a web page” is different with M/S unless you have your nuke codes on your computer. It all feels like an overreaction to privileged memory reads and possible privileged execution. I think detecting attempted exploitation makes a lot more sense than wholesale attempts to stop it with bolted-on, performance-devastating fixes. (I don’t speak for my employer, or for folks with “serious” security concerns; I just think folks with “serious” concerns already had this in their threat model.)

          As is, the cure seems worse than the disease (modulo script-kiddie mass exploitation), and yeah, the entire processor was designed to run code you sort of trust, which was broken by the open trust model of the internet. Regardless, even if this were mass-exploited, I’d rather have my cake (fast speculative execution) and eat it too (detect exploitation before loss) than throw my cake in the gutter to keep anyone else from eating it. Linus seems pretty correct here, stuck between a haphazard-patch rock and a designed-in-bug hard place.

          PS this was a rant not a well thought out argument, but I mostly agree with it ;)

          1. 12

            At least on Linux you can disable KPTI with a boot parameter, if you don’t feel the default is a good performance/security tradeoff for a particular case.
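
            For reference, a sketch of how that looks on a mainline kernel (4.15+): the sysfs path and the `pti=off`/`nopti` parameter names below are from the upstream kernel docs; distro tooling for editing the boot command line varies.

```shell
# Check whether KPTI (page-table isolation) is currently active:
cat /sys/devices/system/cpu/vulnerabilities/meltdown
# prints e.g. "Mitigation: PTI" when enabled

# To disable KPTI, boot with pti=off (or the older nopti), e.g. by
# adding it to GRUB_CMDLINE_LINUX in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... pti=off"
# and then regenerating the grub config before rebooting:
#   sudo update-grub        # Debian/Ubuntu; grub2-mkconfig elsewhere
```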

            Cases where I could imagine it being reasonable: 1) scientific computing clusters, which tend to use an everyone-trusts-everyone security model anyway, 2) on-premises virtualization where you’re using virtualization just as a deployment/management mechanism, not for security boundaries, and 3) certain kinds of (hopefully well firewalled) database servers, where the performance impact seems to be particularly severe, and where most of the sensitive data is in userland anyway (the database), so the threat model of local privilege escalation isn’t your biggest worry.

            But admins of those kinds of setups know what they’re doing enough to change the default. I think most people who don’t know what they want are better served by a more-secure default, even with a performance hit. There is so much code, from networking to browsers, that relies on these security boundaries not being easily bypassed, that I think a malware-detection approach to mitigating it is likely to be too much of a whack-a-mole game. Plus your average random server on the internet, or home desktop, isn’t even using its compute capacity most of the time anyway, so unsafe performance tuning is hardly necessary.

            1. 1