This is one of those classes of attack where it’s quite clear that it’s both feasible and desirable for attackers, and I’ve been somewhat annoyed for a long time that very few people seem to take it seriously.
I do remember research quite a few years ago establishing that this is practical to implement - because of course it is! Making pre-boot code portable is the entire point of EFI! Exploits are just code that does something you don’t want it to do.
For me, knowing it’s practical is the bar to include it in my planning… but apparently most people need to see it being widespread in practice before it’s worth talking about. To their credit, the largest organizations do adopt security practices that mitigate this stuff, such as chain-of-custody for devices, but that doesn’t help the general public. Only forward-looking development of infrastructure to block this class of attacks would do that.
I wish I knew how to teach security practitioners to be more forward-thinking, but I expect that as long as infosec research is fundamentally driven by a profit motive, things won’t change. It’s upsetting.
Prompted by the current top story: The dangers of Microsoft Pluton - would this attack be mitigated by something like Pluton?
Yes and no. During measured boot, each piece of UEFI code is hashed and extended into a TPM PCR (basically a running hash of everything that’s been fed into it). A modified UEFI image produces a different PCR value, so the TPM wouldn’t release anything sealed against the expected value: the key for decrypting a BitLocker / LUKS-encrypted volume, WebAuthn credentials, or any other secrets stored in the TPM. (There’s a toy sketch of how PCR extension works at the end of this comment.) There are two possible failure modes:
First, if the bootkit is installed before the keys are first sealed to the TPM, then those keys will only be released when you boot with the compromised firmware, and you’ll need to go through recovery to boot with the non-compromised version. If the malware is installed early in the supply chain, before you do the OS install, then Pluton / TPM is no help.
Second, the symptom that the user sees is likely to be incomprehensible. They will see an error saying BitLocker needs them to enter their recovery key because the TPM has detected a change to the UEFI firmware. For most users, this will read as ‘enter your recovery key because wurble mumble wibble fish banana’ and so they will either enter their recovery key (if they kept it somewhere safe) and grant the malware access to everything or reinstall their OS (if they lost their recovery key) and grant the malware access to everything.
So, it would be more accurate to say that something like Pluton can detect such malware and prevent it from compromising a user’s data, but it is easy for the user to circumvent that protection.
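To make the measurement mechanism concrete, here’s a toy sketch in plain Python of how a PCR extend works as a running hash and why a single modified firmware stage changes the final value that secrets are sealed against. The stage names are made up and this is obviously not real TPM code, just the hash-chaining idea:

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # Simplified TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(measurement))
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    # Measured boot: each firmware stage is hashed into the PCR in order.
    pcr = bytes(32)  # PCRs start out as all zeros
    for stage in [b"PEI core", b"DXE core", b"boot manager"]:
        pcr = extend(pcr, stage)
    good_pcr = pcr  # the value the disk-encryption key was sealed against

    # A bootkit patches one stage; the running hash diverges and never recovers,
    # because there is no way to "un-extend" a measurement.
    pcr = bytes(32)
    for stage in [b"PEI core", b"DXE core patched by bootkit", b"boot manager"]:
        pcr = extend(pcr, stage)

    print("key released" if pcr == good_pcr else "PCR mismatch: recovery required")

A real TPM does the comparison internally against the policy the key was sealed with, but the asymmetry is the same: nothing measured after the tampered stage can repair the value.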
I would even go so far as to say the user is induced to circumvent that protection.
Pluton is for securing company computers against employees, and streaming video against computer “owners”, not for securing your machine against nation-state and organised crime actors.
I’m confused why this was downvoted; it’s correct and answers the question. I think someone may have thought this was unrelated political posturing? If so, please read it again. It is a direct answer to the question it’s responding to.
Not the flagger, but I think a direct answer could refer to the technical differences in protections asserted by Pluton vs these UEFI attacks. Microsoft themselves refer to nation-state actors and cybercriminals in the copy around Pluton, and I remain unclear whether there’s an overlap here.
That’s quite fair. On my own background knowledge, Pluton does not establish a complete chain of trust for the firmware in the way that, e.g., ChromeOS does, and therefore does not prevent bootkits. At best it provides a fallible approach to detecting bootkits, but a sophisticated attacker would be able to circumvent this detection in common circumstances.
Empty rhetoric about all the threats that are out there is quite common in the security world, and Microsoft’s rhetoric about Pluton is in that category. I could get into why this makes sense for them as marketing strategy, but that would perhaps verge on being too much politics.
IIRC, currently Pluton firmware just implements a TPM, but they promised to add lots more things in the near future. It’s a bit more than just rhetoric since they have actually built the hardware side of things?
Sorry, just now seeing this! That’s quite fair. I’m not familiar with Microsoft’s future plans, so I’m not able to speak to that.
How does a UEFI bootkit circumvent the protections offered by Pluton/TPM?
Yes, at least partially. The modification of BIOS code would be detected and access to secrets like BitLocker or LUKS keys could be denied, if the system was set up correctly. Of course, there’s then the question of what the user does in that case: they might just enter the backup key and re-seal the secret, which would re-seal it against the compromised firmware and defeat the protection. The proper response would be to check with the BIOS vendor whether the measurement the TPM is reporting matches any of their released versions, and if not, promptly re-flash the BIOS. This doesn’t need Pluton; any old TPM would do, though Pluton holds up better against physical access.
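As a rough sketch of what that check could look like, assuming a Linux box whose kernel exposes per-PCR files in sysfs and a hypothetical vendor-published list of known-good PCR0 digests (the file name and format here are my invention):

    from pathlib import Path

    # Hypothetical file of known-good PCR0 digests from the firmware vendor,
    # one lowercase hex digest per released firmware version.
    KNOWN_GOOD = set(Path("vendor_pcr0_values.txt").read_text().split())

    def read_pcr(index: int, bank: str = "sha256") -> str:
        # Recent Linux kernels expose per-PCR files under sysfs; the path can
        # differ per system and may not exist at all on older kernels.
        return Path(f"/sys/class/tpm/tpm0/pcr-{bank}/{index}").read_text().strip().lower()

    pcr0 = read_pcr(0)  # PCR 0 accumulates the core firmware measurements
    if pcr0 in KNOWN_GOOD:
        print("PCR0 matches a published firmware version")
    else:
        print("PCR0 unknown: treat the firmware as suspect, re-flash from a trusted image")

As far as I know, few vendors actually publish reference PCR values, which is a big part of why this kind of check rarely happens outside managed fleets.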
Do BIOS flash utilities work in this scenario? It seems like the utility has to be booted with UEFI so it’s too late to trust it…? Though I guess it has to work when the device is bricked by a bad BIOS, so there’s some even lower-level way to boot the utility?
You can of course try booting a flash utility from a USB stick and re-flashing, and see if that returns it to a good state. If it doesn’t, you could probably re-flash the SPI flash chip itself with an inexpensive external programmer, but that requires some knowledge and definitely isn’t something an end user can do.
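If you do dump the SPI flash with an external programmer, one rough way to triage it is to diff the dump against a known-good vendor image, something like this sketch (the 4 KiB block size and the assumption that you even have a byte-compatible reference image are mine; real flash layouts contain regions like NVRAM and the ME that legitimately differ, so hits need interpreting against the board’s flash map):

    import sys

    # Compare a flash dump against a reference image and report differing 4 KiB blocks.
    # Usage: python diff_flash.py dump.bin reference.bin
    BLOCK = 4096

    def load(path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()

    dump, reference = load(sys.argv[1]), load(sys.argv[2])
    if len(dump) != len(reference):
        sys.exit("image sizes differ: wrong reference image or truncated dump")

    for offset in range(0, len(dump), BLOCK):
        if dump[offset:offset + BLOCK] != reference[offset:offset + BLOCK]:
            print(f"block at 0x{offset:08x} differs")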
What I’m wondering is, couldn’t the bad BIOS just hook the flash utility the same way it does the OS? What is the accepted secure way for an end user to completely factory-restore the machine? Because that seems like the rational and intended response to the BitLocker TPM change message.
If you have reason to believe the device is compromised at that low a level, don’t keep using the device. Yes, nobody who’s not a big organization can afford to just throw laptops away, but it’s also quite impractical - especially on closed hardware - to be sure you re-flashed everything that needs to be re-flashed. You should be trying really hard to not be in this scenario in the first place.
The specific attack appears to be Windows-specific but it sounds like it’s probably just the first discovery of a set of “bootkits” going back at least 5 years.