This is really clever, and definitely something that I could see Yubikey selling directly: one black, one red - black for daily use, red for recovery, with “order another pair” stamped right on the red one.
Agreed, this is absolutely brilliant. Well done diamonomid!
Anyone know of a place in the US where I can buy a pair of u2f-zeroes? The website links to amazon but they’re sold out.
Not in the US, but if you have a European delivery address, I still have stock in my (official) European distribution https://u2fzero.ch
Conor is hard at work restocking Amazon :)
Couldn’t an attacker who gained access to the first device just run up its counter to some arbitrarily large number such that it passes the backup device and continue using it? I don’t see anything in the article that would prevent that. (Though admittedly I don’t know a lot about the U2F implementation, so maybe there is something that makes this infeasible.)
There’s no easy way to increment the counter, so one would have to invent some automation for sending authentication challenges to the token and pressing the physical button every time (so it’s not just software, it has to be hardware automation). It’s by no means impossible, but the time it’d take should be enough for me to get another pair of tokens, enroll them in my accounts and revoke the old ones.
Also, we can make it not 1 000 000 but, say, 4 000 000 000, which still fits within the range of a 32-bit value and makes this attack vector a lot less feasible.
UPD: there is a reliable way to address this issue completely, see my comment below.
It’s not totally trivial to increment the counter, true, but it’s not really that difficult either. Once you have the device, you could register and use it against your own service specifically for doing so, and especially with the U2F Zero’s bare PCB, the hardware part of it is pretty basic. It looks like you could likely get alligator clips on the button leads and never even touch a soldering iron, then drive the whole process with a little script. Unless the devices are intentionally rate limited or so slow as to be effectively so, I think it could easily happen faster than you could get a replacement device pair and update all your third-party services with the new ones.
It’s a cool idea, and in many cases is likely fine in practice, but I’d be hesitant to rely on this for anything particularly sensitive.
Ah, I just realized that this issue is actually a non-issue! There is a reliable solution: according to the ATECC508A datasheet, its counters can only count up to 2097151, but the full range of the U2F counter is 0xffffffff (which is more than 4 000 000 000). So, the counter boost for the backup token should be set to a value larger than 2097151, and then the primary token would never be able to return a counter that large. So once the backup token is used, the primary one is invalidated for good.
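For context, this works because a U2F relying party remembers the highest counter value it has seen for a key handle and rejects anything that doesn’t increase it. A rough sketch of that server-side check (simplified and illustrative, not any particular implementation):

    #include <stdint.h>

    /* Simplified U2F counter check on the relying-party side:
     * the counter reported by the token must strictly increase,
     * otherwise the token is treated as cloned or stale. */
    int u2f_counter_ok(uint32_t stored, uint32_t received)
    {
        if (received <= stored)
            return 0;   /* reject: counter did not increase */
        return 1;       /* accept; caller stores `received` as the new value */
    }

So once the backup token (with its boosted counter) has been seen by a service, any value the primary token can still produce is below the stored one and gets rejected.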
Ok cool, I’ll update the article with that important detail.
Cool, that sounds like it might be an approach to mitigate the issue. But would the backup device then fail to function due to the counter exceeding the limit? I skimmed the data sheet, and it’s not clear what the behavior would be if you write an initial value larger than it would count to…you might want to test that before recommending it.
Aah, nevermind, I realized you’re offsetting this in the firmware, not actually in the hw counter, so that seems like it should work.
We can’t write any initial value to the hardware counter: it’s monotonic by design, and we can only increment it one by one.
Let’s refer to the value from the ATECC508A’s counter as hw_counter. Then:
In the primary token, we use: hw_counter;
In the backup token, we use: hw_counter + 2000000000.
Note that we do not modify the actual hardware counter value hw_counter: it will still count from 0 to 2097151. Instead, every time we need to get a counter value, we read hw_counter from the ATECC508A, add the boost constant, and return the result (for use in U2F calculations).
This way, the counter range of the primary token is [0, 2097151], while the counter range of the backup is [2000000000, 2002097151]. The fact that those ranges don’t intersect ensures that once the backup token is used on some service, the primary one is invalidated for good.
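A minimal sketch of that firmware-side logic, assuming a hypothetical driver call for reading the chip’s counter (names are illustrative, not the actual U2F Zero code):

    #include <stdint.h>

    /* Hypothetical driver call: reads the ATECC508A monotonic counter (0..2097151). */
    extern uint32_t atecc508a_read_counter(void);

    /* Set to 0 in the primary token's build, 2000000000 in the backup's. */
    #define COUNTER_BOOST 2000000000u

    /* Counter value reported in U2F authentication responses.
     * The hardware counter itself is never written, only read;
     * the boost is added purely in firmware. */
    uint32_t u2f_get_counter(void)
    {
        return atecc508a_read_counter() + COUNTER_BOOST;
    }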
I think you can write an initial counter value while the chip is still write enabled for configuration…it looks like that’s how they implement limited use keys by allowing you to tie a key to a counter with an initial value of MAX - USES.
But regardless, you’re right, doing the offset in firmware like you suggest would work better anyway.
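Just to illustrate the limited-use idea being described (purely hypothetical values and names, not the ATECC508A’s actual configuration interface):

    #include <stdint.h>

    #define COUNTER_MAX 2097151u    /* top of the ATECC508A counter range */
    #define USES        100u        /* how many uses the key should get */

    /* If the counter tied to a key is initialized to COUNTER_MAX - USES,
     * the key stops being usable after USES increments. */
    static uint32_t key_counter = COUNTER_MAX - USES;

    int key_is_usable(void)
    {
        return key_counter < COUNTER_MAX;
    }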
I agree it’s doable, but in any case, the backup token is only needed to log into accounts, add another token and revoke the current one, so the time attackers have is quite limited. And compared to other backup alternatives (a regular U2F token at home, or some TOTP / recovery codes), the matched token with a boosted counter still feels like a better tradeoff. Well, at least personally for me; I’m really uncomfortable about keeping a working U2F token somewhere easily accessible at home.
There is an easy way to reliably address this issue, see my comment below.
Perhaps I missed it, but it doesn’t appear that this author has considered what should happen in the case where an attacker steals the backup token, while the account owner still has the primary token. As I understand it, the backup token automatically and immediately invalidates the primary token when it’s used. That means it could be used on its own to lock the owner out, which seems undesirable.
Of course, the same thing could be done with two credentials that act as “peers”, although an attacker would have to do more than just log in. So I’m not sure it’s actually a problem. But I’d appreciate a flowchart or similar analysis of the recovery flows for various attack scenarios, granting equal attention to those where the attacker “wins”.
Yes sure, if an attacker manages to get the backup token, and if they also have your password so they are able to actually log into your account, then, bad luck. But, well, the whole point of that story is to be able to hide the backup so far away that the probability of such an event is negligible. So it’s up to the token’s owner to secure the token properly.
And again, let’s not forget that the U2F token is just one factor; if by any chance someone gets my backup token, it’s not at all enough on its own. I keep my passwords in a KeePassX database with a huge number of transform rounds, so I don’t really believe I could ever be that unlucky to have both my passwords and my backup token compromised.
Oh, I agree that your design decision is reasonable and that there are ways to mitigate the concern. I just think it’s useful to discuss these threats, both so users can make their own decisions about what works for them, and for didactic purposes to show how to reason it out.
Thanks, that’s a fair point; I should update the article with those details.
Great write-up! Quick observations:
“or bury somewhere in the forest. I’m not kidding”
An animal might dig it up and run off with it. I’ve lost [non-security] stuff to animals. Switched to putting it in somewhat heavy, tooth-resistant containers if burying stuff.
“ Even if something bad happens with my primary token, it’s highly unlikely that my backup token could be affected by the same event.”
Just don’t plug them into the same computer if possible. I lost all my shit using the same, clean computer on redundant drives. From what someone told me, a USB driver bug was silently flipping bits, which corrupted keyfiles that helped encrypt whole volumes. I don’t know what effect a single computer connecting to multiple U2F tokens could have. I’m calling it a known unknown where I avoid the general pattern. Might not be a big concern to others, though.
“Log into all accounts with the backup token, thus invalidating the primary one”
This part might even be automated depending on the services.
“ and at least in the past there were some implementations which were keeping private keys on the device, e.g. see this github issue. And the rationale for not storing private keys was to avoid limiting the maximum number of services registered with a given token: since device doesn’t store any per-service data, the number of services is unlimited”
That doesn’t sound so clear cut. If the standard allows, I’d mix them where the most important stuff was in secure hardware and the rest weren’t. It would be done based on criticality. Banking, email, VPN login, and backups are examples of what might go into limited, secure storage.