This is normal. It’s simpler and cheaper (though not as flexible) than using a TPM.
How small is the irreparable damage? Is there a picture of it?
I’m fairly sure the fuses are a part of the CPU die, so they’re only a few microns in size.
@dstaley is right: it’s just an extra small metal trace somewhere inside the die. Like any other fuse, you put a high enough voltage across it and it pops. The CPU can then sense the continuity at a lower voltage to check whether it has been blown or not.
This has some die photos of one example: https://archive.eetasia.com/www.eetasia.com/ART_8800717286_499485_TA_9b84ce1d_2.HTM
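If it helps to picture the read side, here’s a sketch of what the firmware-visible interface might look like. The register layout is entirely invented for illustration:

```c
/* Sketch: how firmware might "read" an e-fuse. Programming drives a
 * high voltage through the fuse link to pop it; reading senses whether
 * the link still conducts at a low voltage. FUSE_CTRL and FUSE_DATA
 * are hypothetical registers, made up purely for illustration. */
#include <stdint.h>

#define FUSE_CTRL (*(volatile uint32_t *)0x40001000u) /* hypothetical */
#define FUSE_DATA (*(volatile uint32_t *)0x40001004u) /* hypothetical */

/* Returns nonzero if fuse `n` has been blown (i.e. no continuity). */
int fuse_is_blown(unsigned n)
{
    FUSE_CTRL = n;           /* select fuse n in low-voltage sense mode */
    return FUSE_DATA & 1u;   /* sense amplifier: 1 = open circuit */
}
```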
Like others have said, these fuses are on the CPU die itself. Fuses like this are actually quite common on microcontrollers for changing various settings, or for locking the controller so that it can’t be reprogrammed after it’s received its final production programming.
The Xbox 360 also did something similar with its own “e-fuses.” I assume it’s standard practice now.
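On AVR microcontrollers, for instance, the lock byte is readable from application code via avr-libc. (AVR lock bits are flash cells rather than blown metal links, but they play the same set-once-and-lock role.)

```c
/* Sketch: reading the AVR lock byte with avr-libc. A cleared bit in
 * the lock byte disables some form of programming access; 0xFF means
 * the part is fully unlocked. */
#include <avr/boot.h>
#include <avr/interrupt.h>
#include <stdint.h>

uint8_t read_lock_bits(void)
{
    cli(); /* the fuse/lock read sequence must not be interrupted */
    uint8_t lock = boot_lock_fuse_bits_get(GET_LOCK_BITS);
    sei();
    return lock;
}
```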
Yup, it’s entirely standard for any hardware root of trust. There are a couple of things that they’re commonly used for:
First, per-device secrets or unique IDs. Anything supporting remote attestation needs some unique per-device identifier. This can be fed (usually combined with some other things) into a key-derivation function to generate a public-private key pair, giving a remote party a way of establishing an end-to-end secure path with the trusted environment. That’s a massive oversimplification of how you can spin up a cloud VM with SGX support and communicate with the SGX enclave without the cloud provider being able to see your data (the most recent vulnerability allowed the key that is used to sign the public key along with the attestation to be compromised). There are basically two ways of implementing this kind of secret:

PUFs. Physically Unclonable Functions are designs that take some input (which can be a single bit) and produce an output that is stable for a given chip but depends on details below the manufacturing tolerances of a particular process, so two copies made from exactly the same mask will generate different outputs. PUFs are a really cool idea and an active research area, but they’re currently quite big (expensive) and not very reliable, so you need a larger area and some error correction to get a stable result out of them.

Fuses. Any hardware root of trust will have a cryptographic entropy source. On first boot, you read from this, filter it through something like Fortuna (possibly implemented in hardware) to get some strong random numbers, and then burn something like a 128- or 256-bit ID into fuses, typically with some error correction.
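To make the fuse option concrete, here’s a minimal sketch using libsodium; read_device_fuses() and the KDF context string are stand-ins for whatever the platform actually provides:

```c
/* Sketch: derive a per-device signing key pair from a secret burned
 * into fuses. read_device_fuses() is a hypothetical platform hook
 * that returns the 256-bit fused secret. */
#include <sodium.h>

extern void read_device_fuses(uint8_t out[crypto_kdf_KEYBYTES]);

int derive_device_keypair(uint8_t pk[crypto_sign_PUBLICKEYBYTES],
                          uint8_t sk[crypto_sign_SECRETKEYBYTES])
{
    if (sodium_init() < 0)
        return -1;

    uint8_t fused_secret[crypto_kdf_KEYBYTES]; /* 32 bytes */
    read_device_fuses(fused_secret);

    /* Derive a signing-key seed rather than using the raw secret, so
     * the same fused secret can safely feed several different keys,
     * distinguished by context string and subkey id. */
    uint8_t seed[crypto_sign_SEEDBYTES];
    crypto_kdf_derive_from_key(seed, sizeof seed, /* subkey_id */ 1,
                               "attest  ", fused_secret);

    /* Deterministic Ed25519 key pair: same fuses, same identity. */
    crypto_sign_seed_keypair(pk, sk, seed);

    sodium_memzero(fused_secret, sizeof fused_secret);
    sodium_memzero(seed, sizeof seed);
    return 0;
}
```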
The MAC address (as @Thra11 pointed out) is a simple case of needing a unique identifier.
The second use is monotonic counters for roll-back protection. A secure boot chain works (again, huge oversimplifications follow) by having something tiny that’s trusted, which checks the signature of the second-stage boot loader and then loads it. The second stage checks the signature of the third stage, and so on. Each stage appends the value it produces to a hash accumulator, so (with another massive oversimplification) you may end up with hash(hash(first stage) + hash(second stage) + hash(third stage) …), where hash(first stage) is computed in hardware, everything else is computed in software, and each hash function may be different. You can read the partial value (or, sometimes, use a key derived from it without actually reading the value) at any point, so at the end of second-stage boot you can read hash(hash(first stage) + hash(second stage)) and then use that in any other crypto function, for example by starting the third-stage image with the decryption key or signature for the boot image, encrypted with a key derived from the hashes of all of the allowed first- and second-stage boot chains. You can also then use it in remote attestation, to prove that you are running a particular version of the software.
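The accumulator itself is tiny. Here’s a toy version, with libsodium’s SHA-256 standing in for whatever hash the hardware actually implements:

```c
/* Sketch: the boot-measurement accumulator as a streaming hash. Each
 * stage feeds hash(stage) into a long-running SHA-256; reading the
 * accumulator finalises a *copy* of the state, so later stages can
 * keep extending it. */
#include <sodium.h>

static crypto_hash_sha256_state acc_state;

void acc_init(void)
{
    crypto_hash_sha256_init(&acc_state);
}

/* Called once per boot stage, in order. */
void acc_extend(const uint8_t *stage_image, size_t stage_len)
{
    uint8_t stage_hash[crypto_hash_sha256_BYTES];
    crypto_hash_sha256(stage_hash, stage_image, stage_len);
    crypto_hash_sha256_update(&acc_state, stage_hash, sizeof stage_hash);
}

/* Read the partial value hash(hash(stage 1) + ... + hash(stage N))
 * without disturbing the running state. */
void acc_read(uint8_t out[crypto_hash_sha256_BYTES])
{
    crypto_hash_sha256_state copy = acc_state;
    crypto_hash_sha256_final(&copy, out);
}
```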
All of this relies on inductive security proofs. The second stage is trusted because the first stage is attested (you know exactly what it was) and you trust the attested version. If someone finds a vulnerability in version N, you want to ensure that someone who has updated to version N+1 can never be tricked into installing version N.
Typically, the first stage is a tiny hardware state machine that checks the signature and version of a small second stage that is software. The second-stage software can have access to a little bit of flash (or other EEPROM) to store the minimum trusted version of the third-stage thing, so if you find a vulnerability in the third-stage thing, but someone has already updated with an image that bumped the minimum-trusted-third-stage-thing version, then the second-stage loader will refuse to load an earlier version. But what happens if there’s a vulnerability in the second-stage loader? This is typically very small and carefully audited, so it shouldn’t be invalidated very often (you don’t need to prevent people rolling back to less feature-full versions, only insecure ones, so you typically have a security version number that is distinct from the real version number and bump it infrequently). Typically, the first-stage (hardware) loader keeps a unary counter in fuses so that it can’t possibly be rolled back.
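The unary counter is about as simple as anti-rollback state gets: a fuse can only ever go from 0 to 1, so the count of blown bits can only grow. A sketch, with hypothetical fuse primitives standing in for the real hardware interface:

```c
/* Sketch: anti-rollback via a unary counter in fuses. The security
 * version is the number of contiguous blown bits starting at bit 0.
 * fuse_read_word()/fuse_blow_bit() are hypothetical platform hooks. */
#include <stdint.h>

extern uint32_t fuse_read_word(void);      /* e.g. 0b0111 = version 3 */
extern void     fuse_blow_bit(unsigned n); /* irreversibly sets bit n */

unsigned current_min_version(void)
{
    uint32_t w = fuse_read_word();
    unsigned v = 0;
    while (w & 1u) { /* count contiguous blown bits from bit 0 */
        v++;
        w >>= 1;
    }
    return v;
}

/* Called when an update raises the minimum allowed security version. */
void bump_min_version(unsigned new_version)
{
    for (unsigned v = current_min_version(); v < new_version; v++)
        fuse_blow_bit(v); /* one-way: there is no "unblow" */
}

/* First-stage policy: refuse images older than the fuses allow. */
int image_version_ok(unsigned image_security_version)
{
    return image_security_version >= current_min_version();
}
```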
(You likely know this, but just in case:)
What you describe above is a strong PUF; weak PUFs (which do not take inputs) also exist, and, in particular, SRAM PUFs (which you can get from e.g. IntrinsicID) are pretty reliable.
(But indeed, lots of PUFs are research vehicles only.)
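For anyone who hasn’t met one: a toy sketch of the weak-PUF idea. The SRAM region here is invented, and a real product (like the IntrinsicID ones) adds enrollment, helper data, and error correction rather than hashing the raw bits:

```c
/* Toy sketch of an SRAM PUF: the power-up contents of uninitialised
 * SRAM are biased per-device, so hashing them yields a device-unique
 * fingerprint. SRAM_PUF_BASE/SIZE are made up for illustration. */
#include <sodium.h>
#include <stdint.h>

#define SRAM_PUF_BASE ((const uint8_t *)0x20000000u) /* hypothetical */
#define SRAM_PUF_SIZE 1024u

/* Must run before anything writes to this SRAM region; a real design
 * would error-correct the raw bits first, since some are unstable. */
void sram_puf_response(uint8_t out[crypto_hash_sha256_BYTES])
{
    crypto_hash_sha256(out, SRAM_PUF_BASE, SRAM_PUF_SIZE);
}
```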
Examples of fuses I’ve seen used in i.MX6 SoCs include setting the boot device (which, assuming it’s fused to boot from SPI or onboard MMC, effectively locks out anyone trying to boot from USB or SD card), and setting the MAC address.
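For reference, this is roughly how the MAC fuses get read back on the application side. The OCOTP offsets are quoted from memory and only for illustration, so check the i.MX6 reference manual before trusting them (U-Boot’s fuse read command wraps the same registers):

```c
/* Sketch: reading the MAC-address fuses on an i.MX6 via the OCOTP
 * shadow registers. Base address and offsets are from memory, purely
 * for illustration; verify against the reference manual. */
#include <stdint.h>

#define OCOTP_BASE 0x021BC000u
#define OCOTP_MAC0 (*(volatile uint32_t *)(OCOTP_BASE + 0x620)) /* MAC[31:0]  */
#define OCOTP_MAC1 (*(volatile uint32_t *)(OCOTP_BASE + 0x630)) /* MAC[47:32] */

void read_fused_mac(uint8_t mac[6])
{
    uint32_t lo = OCOTP_MAC0;
    uint32_t hi = OCOTP_MAC1;

    mac[0] = (hi >> 8) & 0xff;
    mac[1] = hi & 0xff;
    mac[2] = (lo >> 24) & 0xff;
    mac[3] = (lo >> 16) & 0xff;
    mac[4] = (lo >> 8) & 0xff;
    mac[5] = lo & 0xff;
}
```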