So all the reporting on this seems to muddle various aspects a bit… If someone here reads Spanish well I’d love for them to look at the linked presentation and verify some things. My impression from it is that:
a) these are undocumented commands in the HCI protocol, i.e. the interface between a host system and its own Bluetooth chip (so not something accessible remotely)
b) it does give quite low-level control over the ESP, potentially circumventing firmware integrity mechanisms there
c) it gives a lower-level access to Bluetooth protocol handling than most Bluetooth peripherals do
and the “scary” suggestions are around b) and c) – i.e. if you manage to compromise a device that uses an ESP for Bluetooth, then you could use this to potentially persist a backdoor on the ESP (which then could allow remote control) or to use the ESP for fairly advanced Bluetooth attacks. But it is not a remotely accessible general exploit in ESP-based devices, as some comments seem to take it.
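To make (a) concrete: HCI commands are frames that the host firmware sends to its own controller, typically over an internal UART, not something that arrives over the air. Here is a rough sketch of how such a frame is laid out, with a placeholder opcode rather than any of the actual undocumented ESP32 commands:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// HCI command over the H4 UART transport:
//   0x01 | opcode (2 bytes, little-endian) | parameter length | parameters
// The opcode packs OGF (command group) and OCF; OGF 0x3F is the
// vendor-specific group where commands like these live.
std::vector<uint8_t> hci_command(uint16_t ogf, uint16_t ocf,
                                 const std::vector<uint8_t> &params) {
    uint16_t opcode = static_cast<uint16_t>((ogf << 10) | ocf);
    std::vector<uint8_t> frame{0x01,
                               static_cast<uint8_t>(opcode & 0xFF),
                               static_cast<uint8_t>(opcode >> 8),
                               static_cast<uint8_t>(params.size())};
    frame.insert(frame.end(), params.begin(), params.end());
    return frame;
}

int main() {
    // Placeholder OCF and parameters; the real undocumented commands have
    // their own values.
    auto frame = hci_command(0x3F, 0x0001, {0xDE, 0xAD});
    for (uint8_t b : frame) std::printf("%02X ", b);
    std::printf("\n"); // This frame only ever travels host -> controller.
}
```

The point is that this byte stream never leaves the board; an attacker has to already be running code on the host side to emit it.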
If anyone has clarifications on this I want to hear it.
This is my takeaway as well. This is not a remote attack, and it doesn’t let you compromise any devices that you don’t already control. It’s just undocumented functionality in the chip firmware, but not a “backdoor” that is usable for anything evil.
ESP32 chips are like taking an Arduino and connecting it to a Bluetooth dongle, all in one chip. The researchers found commands in the dongle part’s firmware that let you do out-of-spec things, if you already control the Arduino part.
Those out-of-spec things also aren’t anything new. Things like sniffing and changing the Bluetooth address have been possible for many years with hacked Bluetooth dongles. So this just means the ESP32 is now one more platform you can do fun Bluetooth hacks with, but many others exist!
Maaaybe you could use this to escalate to a persistent attack under some contrived compromise scenario? I’m not sure that makes sense; you’d need a much subtler analysis of the ESP32 security model to figure out whether this gives you any access you wouldn’t already have under any realistic scenario, at least as long as the main firmware isn’t doing something completely silly like exposing the raw HCI interface over WiFi (which would already be a major security problem either way).
this just means the ESP32 is now one more platform you can do fun Bluetooth hacks with
this was my takeaway from this whole thing. Incredibly poorly communicated, but basically esp32s got a fun new trick they can do. I look forward to a new wave of esp32 firmwares for conducting bluetooth attacks.
I was not satisfied by the explanation for why using open-source code is “not always possible and doesn’t always fix the problem”:
First, being able to read the code and build it yourself makes it possible to find a bug, but only if you’re doing a careful code review. Auditing code that someone else has written is notoriously hard and, as the underhanded C contest showed, hiding intentional vulnerabilities in source code is much easier than finding them. Even finding unintentional bugs is hard, as the 70 published CVEs for the Linux kernel so far this year attest.
How does CHERIoT help here? Obviously the guarantees provided by CHERIoT depend on faithful and correct implementation, so is there some reason CHERIoT implementations would be less likely to have bugs or backdoors than driver code? Will CHERIoT implementations be open-source and auditable?
Second, open-source code may not be an option. Modern radios for wireless networks often have a software-defined component that enforces regulatory compliance. The WiFi standards, for example, all define a set of bands that is the union of all frequency ranges that any regulatory regime permits. To ship a device in a particular country, you may be required to lock down the set of bands that it will use to the ones that are allowed in that country. Depending on who is responsible for the certification, that may mean that a device vendor is required to run a specific version of a driver, provided by a component vendor.
The CHERIoT team presumably opposes these regulatory requirements (since they “strongly encourage” using only open-source code), so why not invest effort into changing the requirements? Or at least acknowledge that they depend on our collective consent? What justifies spending effort to push a new architecture rather than fix the regulations? Is there any concern that CHERIoT would make these bad regulations harder to change by mitigating the security problems they cause?
Overall I don’t really get what CHERIoT is about and exactly how it claims to improve security, and this story could be a good opportunity to explain that more fully.
How does CHERIoT help here? Obviously the guarantees provided by CHERIoT depend on faithful and correct implementation, so is there some reason CHERIoT implementations would be less likely to have bugs or backdoors than driver code? Will CHERIoT implementations be open-source and auditable?
The tooling that generates the auditing reports is open source, as is the code that enforces these. You can audit that once and then rely on these guarantees for isolating third-party code.
The CHERIoT team presumably opposes these regulatory requirements (since they “strongly encourage” using only open-source code), so why not invest effort into changing the requirements?
I don’t have very strong opinions about them in the context of radios but the equivalent regulations in medical devices are very important. I don’t want people modifying these devices to kill people, I want an audit trail that lets you guarantee that, whatever else in the system you’ve modified, the safety-critical part will not kill people. Most of these regulations exist for a reason.
Overall I don’t really get what CHERIoT is
I’ve written an entire book on that; this post is meant to highlight specific benefits in specific contexts, not cover the entire platform.
The tooling that generates the auditing reports is open source, as is the code that enforces these. You can audit that once and then rely on these guarantees for isolating third-party code.
And does the tooling that generates and enforces the auditing reports amount to a full implementation of CHERIoT? Or are there other parts that won’t be open source and auditable? Forgive me if this seems obvious; I’m not trying to be obtuse.
I understand that the part of the system that enforces auditing reports must be in control of the instructions that a binary driver can run. You said the “code” that enforces the reports is open source and auditable; is there a hardware component that is not included in that statement?
I don’t have very strong opinions about them in the context of radios but the equivalent regulations in medical devices are very important. I don’t want people modifying these devices to kill people, I want an audit trail that lets you guarantee that, whatever else in the system you’ve modified, the safety-critical part will not kill people. Most of these regulations exist for a reason.
It goes without saying that they exist for a reason, but it’s not obvious that the reasons are better than e.g. the reasons for “felony contempt of business model” rulings. Obviously regulations that unnecessarily limit the supply of medical devices can also kill people.
To understand what you’re arguing I think we need to clarify how these protections are supposed to work. This would also help clarify the stuff above as well: How are restrictions on driver code currently enforced? If an attacker had a malicious version of some driver, would they be able to flash it onto the hardware? And thus do the current “protections” rely entirely on the difficulty of building a driver without access to the source code, or is there some other mechanism to prevent the use of unapproved drivers? Conversely, how does one verify that the driver running on some hardware is the approved version? No doubt the answers vary but they are obviously crucial.
And does the tooling that generates and enforces the auditing reports amount to a full implementation of CHERIoT?
On the software side, yes. The RTOS and toolchain are all open source. On the hardware side it’s more complicated. The ISA spec and a reference implementation of a core are also open but any chip fabbed on a vaguely modern process will include some non-open IP (cell libraries and analogue components, at least).
I understand that the part of the system that enforces auditing reports must be in control of the instructions that a binary driver can run.
The core guarantees are enforced by the hardware: you cannot access memory unless you hold a capability to it.
The loader determines which capabilities any compartment has. It does so using metadata that is part of the firmware image. This metadata is emitted by the linker, which also emits the auditing report. If the metadata is not emitted then the loader will not provide capabilities to the compartment.
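To illustrate the shape of that reasoning (this is not the real CHERIoT report format or audit tooling, just a toy with made-up names): a policy check over a linker-emitted report might reject the firmware if a compartment imports any capability outside its allowlist.

```cpp
#include <cstdio>
#include <set>
#include <string>
#include <vector>

// Toy model of one entry in a linker-emitted audit report: which
// capabilities (MMIO regions, cross-compartment calls, ...) a compartment
// will be granted by the loader. The real report carries more detail.
struct CompartmentReport {
    std::string name;
    std::vector<std::string> imports; // capabilities the loader will hand it
};

// Policy: a third-party driver compartment may only touch its own device
// registers and the API we deliberately exposed to it.
bool check_policy(const CompartmentReport &c,
                  const std::set<std::string> &allowed) {
    for (const auto &imp : c.imports) {
        if (!allowed.count(imp)) {
            std::printf("REJECT: %s imports %s\n", c.name.c_str(), imp.c_str());
            return false;
        }
    }
    return true;
}

int main() {
    CompartmentReport wifi_driver{
        "wifi_driver", {"mmio/wifi_mac", "compartment_call/network_stack.send"}};
    std::set<std::string> allowed{"mmio/wifi_mac",
                                  "compartment_call/network_stack.send"};
    std::printf("firmware %s\n",
                check_policy(wifi_driver, allowed) ? "accepted" : "rejected");
}
```

You audit the tooling that produces and enforces this once; after that, the report is what you review for each firmware image, even when the compartment itself is a binary blob.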
You said the “code” that enforces the reports is open source and auditable; is there a hardware component that is not included in that statement?
Yes. The reference implementation of the core is open source and has been formally verified by folks at RPTU to enforce the security model (including in the presence of side channels, though that bit is only with caches disabled). Other implementations of the ISA are also possible (and permitted, the ISA spec is open and anyone may implement it).
How are restrictions on driver code currently enforced?
By the capability model, which starts with object-granularity memory safety.
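Roughly what object-granularity memory safety looks like at the source level, assuming a CHERI-aware Clang where <cheriintrin.h> provides cheri_bounds_set (a sketch; it won’t build or trap on a conventional toolchain):

```cpp
#include <cheriintrin.h> // CHERI compiler builtins (CHERI Clang only)
#include <cstdio>

int main() {
    char buffer[16];
    // Derive a capability whose bounds cover only this buffer.
    char *p = static_cast<char *>(cheri_bounds_set(buffer, sizeof(buffer)));
    p[0] = 'x';   // fine: within the capability's bounds
    p[100] = 'x'; // traps on CHERI hardware: outside the bounds, even though
                  // the address is "just memory" on a conventional MCU
    std::printf("never reached\n");
}
```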
If an attacker had a malicious version of some driver, would they be able to flash it onto the hardware?
That depends on your deployment model, but the assumption is that the driver may be malicious. A malicious driver can do only things that you have granted it capabilities to do, and those show up in auditing. That’s the point of this post. Even if the driver that you’re using has a supply-chain backdoor, it can’t automatically get full control over your device.
And thus do the current “protections” rely entirely on the difficulty of building a driver without access to the source code, or is there some other mechanism to prevent the use of unapproved drivers?
No, the protection model assumes that you may be linking untrusted binaries for compartments into your firmware and allows you to reason about the damage that they can do if they are malicious. If you have the source code, or you have a trusted build environment, you can integrate the SBOM bits (and the results of source-code audits) into the firmware auditing flow. If you have a binary-only compartment then you can still link this into your final firmware image and reason about security properties.
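As a sketch of what “granting” looks like from the driver compartment’s point of view (modelled loosely on the CHERIoT RTOS style; the names and the stand-in for the MMIO grant are illustrative, not the exact API): the only device memory the compartment can ever see is what its build declares, and that declaration is exactly what the audit report lists.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical driver compartment. It only ever receives a capability to its
// own device's registers; it cannot conjure pointers to anything else, so a
// backdoored binary blob here can misuse *this* device but cannot read your
// keys or rewrite other compartments.
struct RadioRegisters {
    volatile uint32_t control;
    volatile uint32_t tx_fifo;
};

// Stand-in for the loader-provided, capability-bounded MMIO grant (in
// CHERIoT RTOS this comes from a macro along the lines of MMIO_CAPABILITY).
RadioRegisters *radio_registers() {
    static RadioRegisters fake{};
    return &fake;
}

void radio_send(uint8_t byte) {
    RadioRegisters *regs = radio_registers();
    regs->control = 1;    // inside the granted capability: allowed
    regs->tx_fifo = byte; // allowed
    // Dereferencing any address outside the grant would trap on CHERI
    // hardware, and no such grant appears in the audit report.
}

int main() {
    radio_send(0x42);
    std::printf("sent\n");
}
```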
Conversely, how does one verify that the driver running on some hardware is the approved version?
That will depend on your secure boot flow, which will be somewhat specific to implementations; the core platform lets you reason about the things that you linked. Ensuring that the things that you deploy are firmware images that you trust is a code signing or security update problem (which we also provide help building, but the core thing in this context is that you can build a firmware image with an untrusted component and know that it doesn’t violate your security rules even in the presence of supply-chain attacks).
Thank you for the explanation; I may have some more questions after I think about it more.
I realize my last paragraph was not clear:
To understand what you’re arguing I think we need to clarify how these protections are supposed to work. This would also help clarify the stuff above as well: How are restrictions on driver code currently enforced? If an attacker had a malicious version of some driver, would they be able to flash it onto the hardware? And thus do the current “protections” rely entirely on the difficulty of building a driver without access to the source code, or is there some other mechanism to prevent the use of unapproved drivers? Conversely, how does one verify that the driver running on some hardware is the approved version? No doubt the answers vary but they are obviously crucial.
By “these protections” I was referring to the regulations you strongly support that CHERIoT seems designed to accommodate, and by “current” I was referring to the current standard deployment practice rather than how things are done in CHERIoT.
In particular I want to make sure we’re not conflating source-availability and auditability with the ability to load a modified version of a driver onto a device. Your reasons for supporting restrictions on medical device drivers seem compatible with an alternative model where the code is open source, auditable, and reproducible, but only approved versions can be used. Is there a reason this approach is not sensible or practical? I was asking about how the current restrictions are enforced in order to understand if the desired protection really does necessitate non-auditable binary code running on medical devices. Any thoughts on that?
Your reasons for supporting restrictions on medical device drivers seem compatible with an alternative model where the code is open source, auditable, and reproducible, but only approved versions can be used. Is there a reason this approach is not sensible or practical?
No, that’s entirely sensible and also where I think things like WiFi and BT LE drivers should be. We can easily support that model.
Which brings us back to here:
The CHERIoT team presumably opposes these regulatory requirements (since they “strongly encourage” using only open-source code), so why not invest effort into changing the requirements?
I don’t have very strong opinions about them in the context of radios but the equivalent regulations in medical devices are very important. I don’t want people modifying these devices to kill people, I want an audit trail that lets you guarantee that, whatever else in the system you’ve modified, the safety-critical part will not kill people. Most of these regulations exist for a reason.
So either the regulations don’t actually mandate closed source code, or they do and you would agree with changing them, right? I am just pushing back against messaging that seems to reinforce bad legal restrictions as a fact of life that’s beyond our control.
It goes without saying that they exist for a reason, but it’s not obvious that the reasons are better than e.g. the reasons for “felony contempt of business model” rulings. Obviously regulations that unnecessarily limit the supply of medical devices can also kill people.
It sounds like you’re expressing a blanket disapproval of regulations. Or do you believe the freedom to modify any device overrides the value of regulations? As in, “my right to broadcast at XXX MHz is more important than the ability of emergency responders to communicate” or “my right to override the X-ray power limit is valid because I am totally confident in my ability to not harm patients”?
It sounds like you’re expressing a blanket disapproval of regulations.
I reject that completely. I mean just read the part you quoted.
Or do you believe the freedom to modify any device overrides the value of regulations? As in, “my right to broadcast at XXX MHz is more important than the ability of emergency responders to communicate” or “my right to override the X-ray power limit is valid because I am totally confident in my ability to not harm patients”?
The strawmen in your second sentence do not follow from the first sentence, but no I don’t think that the ability to modify devices necessarily overrides the potential downside. I’m just not convinced that the supposed benefits of such restrictions as they exist outweigh the drawbacks in practice; hence my questions.
Any chance for a PDF variant of the “CHERIoT Programmers’ Guide”?
Looks like we lost the link, but it is still built. Here you go.
I think some of the cross-reference links may be broken in the PDF.
Kudos to bleepingcomputer for changing the headline. (And apologies for sharing such a misleading article in the first place!)
But there is no actual backdoor on the ESP32, right? Just misleading headlines.
According to Hacker News yesterday it was just some undocumented APIs which you can only call if you already have code running on the device. Which makes it a complete nothingburger.
It’s a backdoor into the ESP32 chip from the host
That doesn’t appear to be the case - or rather it’s only activated if you call the undocumented API.
Yes - from the host
Article feels more like marketing desperate to attach to current (very misleading) headlines than useful and thoughtful contribution.
What the authors seem most excited about is the low-level control that these undocumented commands provide, e.g. custom packet injection, which would allow development of Bluetooth attacks that don’t require special hardware/SDR kit.
Espressif has put out a formal statement and accompanying blogpost
It cannot expose an API to write all memory, because it does not have access to all memory
Unless it uses DMA (which that network device likely has). So you need an IOMMU as well. And of course the device must not be in cahoots with the driver, which is not necessarily the case on embedded systems like this where the vendor taped out the chip and wrote the driver.
Nope. Our DMA controller is capability aware, DMA does not bypass the memory safety of the system.
That counts as an iommu…
I don’t entirely agree: an IOMMU is a separate piece of hardware that needs to be programmed appropriately to allow DMA to go to separate regions, and which also does address translation (many of the earliest ones were designed only for the translation, not security: they let you ship cheap 32-bit devices in 64-bit systems with more than 4 GiB of RAM). The programming model involves multiple systems tracking permissions. In contrast, a CHERIoT DMA unit lets a compartment DMA to any memory that it can access and enforces the same rules as the core. If that’s an IOMMU, then CHERI is an MMU.
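A sketch of that programmer model with a made-up DMA interface (not the actual CHERIoT DMA API): the compartment hands the engine the same bounded capability it holds, and the engine enforces the bounds exactly as the core would.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Made-up illustration of a capability-checked DMA request. On CHERIoT the
// descriptor would carry a real capability and the DMA engine would fault on
// any transfer outside its bounds; here the bounds check is modelled in
// software.
struct DmaBuffer {
    uint8_t *base;   // stands in for a capability: address plus bounds
    size_t   length;
};

bool dma_copy(DmaBuffer dst, const uint8_t *src, size_t n) {
    if (n > dst.length) {
        // The engine refuses, just as a core-side store would trap: DMA
        // cannot reach memory the requesting compartment could not reach.
        std::printf("DMA rejected: transfer exceeds capability bounds\n");
        return false;
    }
    std::memcpy(dst.base, src, n);
    return true;
}

int main() {
    uint8_t packet[64]{};
    const uint8_t payload[16] = {0xAB};
    dma_copy({packet, sizeof(packet)}, payload, sizeof(payload)); // ok
    dma_copy({packet, sizeof(packet)}, payload, 4096);            // rejected
}
```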
but it doesn’t do any of the things that a memory management unit does
An (IO)MMU does two things: address translation and memory protection.
If you make DMA controllers “capability aware” then they perform the second function of an (IO)MMU.
And of course the whole thing is moot if you are using a driver written by the chip vendor.
I was just in the process of pulling up the seL4 FAQ about DMA to ask if the same restrictions applied to CHERIoT. They mention x86 VT-d and SystemMMU. But I guess if CHERI already has full control of the hardware (by being a hardware security implementation) they can fix that separately.
Also, I think this is two things at the same time; it should be either “and so on” or “and friends”:
What if you access the chip via WLAN and then use the undocumented commands to alter the device? Does Bluetooth have something like a loopback device or link-local address?
The commands aren’t for a network interface, they are just internal controller commands. They are not packets that are routable or could come from the outside world.
It’s like asking “what if you could access a computer via WiFi and then communicate with a USB device to flash its firmware”. You can’t do that unless there is some software on the computer exposing the USB device to the network (and then that software would be the security problem, not USB itself).
If some ESP32 user firmware in the wild is exposing the Bluetooth HCI commands via WiFi, then that would already be a security problem even without these undocumented commands.
See the other comments in this thread.
A device like this has two kinds of interface: one that connects to the CPU (aka the host) for controlling the device and others that connect to the outside world. This discovery is about undocumented host control commands.