Perhaps I missed it, but it doesn’t appear that this author has considered what should happen in the case where an attacker steals the backup token, while the account owner still has the primary token. As I understand it, the backup token automatically and immediately invalidates the primary token when it’s used. That means it could be used on its own to lock the owner out, which seems undesirable.
Of course, the same thing could be done with two credentials that act as “peers”, although an attacker would have to do more than just log in. So I’m not sure it’s actually a problem. But I’d appreciate a flowchart or similar analysis of the recovery flows for various attack scenarios, granting equal attention to those where the attacker “wins”.
Yes, sure: if an attacker manages to get the backup token, and they also have your password so they can actually log into your account, then bad luck. But the whole point of this scheme is being able to hide the backup so far away that the probability of such an event is negligible. So it’s up to the token’s owner to secure the token properly.
And again, let’s not forget that the U2F token is just one factor; if by any chance someone gets my backup token, that alone is not nearly enough to use it. I keep my passwords in a KeePassX database with a huge number of transform rounds, so I don’t really believe I could ever be so unlucky as to have both my passwords and my backup token compromised.
Oh, I agree that your design decision is reasonable and that there are ways to mitigate the concern. I just think it’s useful to discuss these threats, both so users can make their own decisions about what works for them, and for didactic purposes to show how to reason it out.
Thanks, that’s a fair point; I should update the article with those details.
Couldn’t an attacker who gained access to the first device just run its counter up to some arbitrarily large number so that it passes the backup device, and continue using it? I don’t see anything in the article that would prevent that. (Though admittedly I don’t know a lot about the U2F implementation, so maybe there is something that makes this infeasible.)
There’s no easy way to increment the counter, so one would have to invent some automation for sending authentication challenges to the token and pressing the physical button every time (so it’s not just software; it has to be hardware automation). It’s by no means impossible, but the time it would take should be enough for me to get another pair of tokens, enroll them on my accounts, and revoke the old ones.
Also, we can make the boost not 1 000 000 but, say, 4 000 000 000, which still leaves plenty of room in a 32-bit value and makes this attack vector a lot less feasible.
UPD: there is a reliable way to address this issue completely, see my comment below.
It’s not totally trivial to increment the counter, true, but it’s not really that difficult either. Once you have the device, you could register it against your own service specifically for this purpose, and especially with the U2F Zero’s bare PCB, the hardware part is pretty basic. It looks like you could likely get alligator clips on the button leads without ever touching a soldering iron, then drive the whole process with a little script. Unless the devices are intentionally rate-limited, or so slow as to be effectively so, I think it could easily happen faster than you could get a replacement device pair and update all your third-party services with the new ones.
It’s a cool idea, and in many cases is likely fine in practice, but I’d be hesitant to rely on this for anything particularly sensitive.
Ah, I just realized that this issue is actually a non-issue! There is a reliable solution: according to the ATECC508A datasheet, its counters can only count up to 2097151, while the full range of a U2F counter is 0xffffffff (which is more than 4 000 000 000). So the counter boost for the backup token should be set to a value larger than 2097151; then the primary token will never be able to return a counter that large. So once the backup token is used, the primary one is invalidated for good.
Ok cool, I’ll update the article with that important detail.
Cool, that sounds like it might be an approach to mitigate this. But would the backup device then fail to function once the counter exceeds the limit? I skimmed the datasheet, and it’s not clear what the behavior would be if you write an initial value larger than the maximum it can count to… you might want to test that before recommending it.
Aah, never mind, I realized you’re applying the offset in the firmware, not in the actual hardware counter, so that seems like it should work.
We can’t write any initial value to the hardware counter: it’s monotonic by design, and we can only increment it one by one.
Let’s refer to the value from the ATECC508A’s counter as hw_counter. Then the counter value the backup token returns is:

hw_counter + 2000000000
Note that we do not modify the actual hardware counter value hw_counter; it still counts from 0 to 2097151. Instead, every time we need a counter value, we read hw_counter from the ATECC508A, add the boost constant, and return the result (for use in the U2F calculations).
This way, the counter range of the primary token is [0, 2097151], while the counter range of the backup is [2000000000, 2002097151]. The fact that these ranges don’t intersect ensures that once the backup token is used on some service, the primary one is invalidated for good.
I think you can write an initial counter value while the chip is still write-enabled for configuration… it looks like that’s how they implement limited-use keys, by allowing you to tie a key to a counter with an initial value of MAX - USES.
But regardless, you’re right, doing the offset in firmware like you suggest would work better anyway.
I agree it’s doable, but in any case, the backup token is only needed to log into accounts, add another token, and revoke the current one, so the time attackers have is quite limited. And compared to other backup alternatives (a regular U2F token at home, or some TOTP / recovery codes), the matched token with a boosted counter still feels like a better tradeoff. Well, at least for me personally; I’m really uncomfortable about keeping a working U2F token somewhere easily accessible at home.
There is an easy way to reliably address this issue, see my comment below.
I was bothered by “I still use Vim in 2015 as my primary text editor”, not because Vim is bad, but because it feels apologetic. Lots of developers still use Vim; many even prefer it over everything else. Don’t ever apologize for a preference.
As for the rest of the article, this is very neat. Automated, auto-updating tags are something I’ve always wanted in Emacs.
That was actually kind of ironic/sarcastic. :) But anyway, I decided to remove it; thanks.
There are C++ compilers for some embedded CPUs, but they are uncommon, so I need to stick to C, in the name of portability.
Then get a different CPU/toolchain. Why would you make things more difficult for yourself? Also, C++-to-C compilers exist (or used to exist).
Discussed a lot already. See the discussion on HN: https://news.ycombinator.com/item?id=10260517
Excellent write-up. I too was confused by the Wikipedia definition of a closure and found your article and diagrams very helpful. Thanks for writing this! :)
Glad you liked it, you’re welcome!
I’d like to add two more tags to this publication: “unit-testing” and “embedded”. Sadly, there are no such tags, and I seem unable to create them. If someone with powers reads this, please consider adding the aforementioned tags. Thanks.
Seconding that an embedded tag would be appreciated! “Hardware” isn’t quite the same.
An embedded tag would be great, and a testing one too. I wouldn’t restrict it to “unit-testing”, mind you.
Oh, agreed, no need to be that precise. A plain “testing” tag is great.