Couldn’t an attacker who gained access to the first device just run up its counter to some arbitrarily large number such that it passes the backup device and continue using it? I don’t see anything in the article that would prevent that. (Though admittedly don’t know a lot about the U2F implementation, so maybe there is something that makes this infeasible.)
There’s no easy way to increment the counter, so one would have to invent some automation for sending authentication challenges to the token and pressing the physical button every time (so it’s not just software, it has to be hardware automation). It’s by no means impossible, but the time it’d take should be enough for me to get another pair of tokens, enroll them in my accounts and revoke the old ones.
Also, we can make it not 1 000 000 but, say, 4 000 000 000, which still leaves plenty of headroom in a 32-bit value and makes this attack vector a lot less feasible.
UPD: there is a reliable way to address this issue completely, see my comment below.
It’s not totally trivial to increment the counter, true, but it’s not really that difficult either. Once you have the device, you could register it with your own service set up specifically for this purpose, and especially with the U2F Zero’s bare PCB, the hardware part is pretty basic. It looks like you could likely get alligator clips on the button leads and never even touch a soldering iron, then drive the whole process with a little script. Unless the devices are intentionally rate-limited or so slow as to be effectively so, I think it could easily happen faster than you could get a replacement device pair and update all your third-party services with the new ones.
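To put rough numbers on this (the per-authentication time here is an assumption, not a measured rate; real tokens may be slower or rate-limited), here is a back-of-the-envelope estimate for the two boost values mentioned above:

```python
# Back-of-the-envelope estimate: time needed to run a token's counter up
# to a given boost value, assuming one hardware-automated authentication
# (challenge + button press) takes about half a second.
# The 0.5 s figure is an assumption, not a measured rate.

SECONDS_PER_AUTH = 0.5  # assumed; real tokens may be slower or rate-limited

def days_to_reach(boost: int, seconds_per_auth: float = SECONDS_PER_AUTH) -> float:
    """Days of continuous automated authentications needed to reach `boost`."""
    return boost * seconds_per_auth / 86_400  # 86400 seconds per day

print(f"boost = 1 000 000:     ~{days_to_reach(1_000_000):.1f} days")
print(f"boost = 4 000 000 000: ~{days_to_reach(4_000_000_000) / 365:.0f} years")
```

Under these assumptions, a 1 000 000 boost falls within reach of a patient attacker (under a week), while 4 000 000 000 clearly does not.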
It’s a cool idea, and in many cases is likely fine in practice, but I’d be hesitant to rely on this for anything particularly sensitive.
Ah, I just realized that this issue is actually a non-issue! There is a reliable solution: according to the ATECC508A datasheet, its counters can count only up to 2097151, but the full range of the U2F counter is 0xffffffff (which is more than 4 000 000 000). So the counter boost for the backup token should be set to a value larger than 2097151, and then the primary token would never be able to return a counter that large. So once the backup token is used, the primary one is invalidated for good.
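The arithmetic behind this can be checked directly (the constants come from the ATECC508A datasheet and the U2F counter field width):

```python
# The ATECC508A's monotonic counters saturate at 2^21 - 1 = 2097151,
# while the U2F signature counter is a full 32-bit value.
ATECC508A_COUNTER_MAX = 2**21 - 1
U2F_COUNTER_MAX = 0xFFFFFFFF

assert ATECC508A_COUNTER_MAX == 2_097_151
assert U2F_COUNTER_MAX == 4_294_967_295

# Any backup boost strictly greater than the hardware maximum guarantees
# that the primary token (range [0, 2097151]) can never produce a counter
# falling in the backup's range, and the boosted range still fits in 32 bits.
BOOST = ATECC508A_COUNTER_MAX + 1
assert ATECC508A_COUNTER_MAX < BOOST
assert BOOST + ATECC508A_COUNTER_MAX <= U2F_COUNTER_MAX
```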
Ok cool, I’ll update the article with that important detail.
Cool, that sounds like an approach that might mitigate this. But would the backup device then fail to function once the counter exceeds the limit? I skimmed the datasheet, and it’s not clear what the behavior would be if you write an initial value larger than it can count to… you might want to test that before recommending it.
Aah, nevermind, I realized you’re offsetting this in the firmware, not actually in the hw counter, so that seems like it should work.
We can’t write any initial value to the hardware counter: it’s monotonic by design, and we can only increment it one by one.
Let’s refer to the value from ATECC508A’s counter as hw_counter. Then:

- the primary token returns hw_counter;
- the backup token returns hw_counter + 2000000000.

Note that we do not modify the actual hardware counter value hw_counter; it still counts from 0 to 2097151. Instead, every time we need to get a counter value, we read hw_counter from the ATECC508A, add the boost constant, and return the result (for use in U2F calculations).
This way, the counter range of the primary token is [0, 2097151], while the counter range of the backup is [2000000000, 2002097151]. The fact that those ranges don’t intersect ensures that once the backup token is used on some service, the primary one is invalidated for good.
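The scheme above can be sketched in a few lines (Python pseudocode with made-up names; the actual U2F Zero firmware is C, and the monotonic counter lives in the ATECC508A):

```python
# Sketch of the firmware-side counter boost described above.
# Names are made up for illustration; the real firmware is C.

HW_COUNTER_MAX = 2_097_151    # ATECC508A monotonic counter limit
BACKUP_BOOST = 2_000_000_000  # boost applied in firmware, not in the hardware counter

def u2f_counter(hw_counter: int, is_backup: bool) -> int:
    """Counter value reported in the U2F authentication response."""
    assert 0 <= hw_counter <= HW_COUNTER_MAX
    return hw_counter + BACKUP_BOOST if is_backup else hw_counter

# The two ranges can never intersect:
primary_max = u2f_counter(HW_COUNTER_MAX, is_backup=False)  # 2097151
backup_min = u2f_counter(0, is_backup=True)                 # 2000000000
assert primary_max < backup_min

# Server side: U2F requires the counter to strictly increase, so after a
# single backup login, every counter the primary token can ever produce
# is rejected.
def server_accepts(stored: int, received: int) -> bool:
    return received > stored

assert server_accepts(stored=backup_min, received=primary_max) is False
```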
I think you can write an initial counter value while the chip is still write-enabled for configuration… it looks like that’s how they implement limited-use keys, by allowing you to tie a key to a counter with an initial value of MAX - USES.
But regardless, you’re right, doing the offset in firmware like you suggest would work better anyway.
I agree it’s doable, but in any case, the backup token is only needed to log into accounts, add another token and revoke the current one, so the time attackers have is quite limited. And compared to other backup alternatives (a regular U2F token at home, or some TOTP / recovery codes), the matched token with a boosted counter still feels like a better tradeoff. Well, at least personally for me; I’m really uncomfortable about keeping a working U2F token somewhere easily accessible at home.
There is an easy way to reliably address this issue, see my comment below.
Are these legitimately redistributable, or just someone who checked in their licensed ebooks into github and hasn’t been DMCA’d yet?
I am afraid I don’t know, and I confess I didn’t think about it; I just assumed they were legitimate.
You mention competing w/ cookiecutter – what specifically do you aim to do better/different? I’m certainly open to improvements, but I’ve also been pretty happy with cookiecutter, so I’m curious what pain points you’ve had that you want to address.
While cookiecutter works nicely and is in general quite similar, I found a few nice things in Kickstart:
On the whole, the word “competition” would be a bit strong; it’s more of a compiled alternative.
Work:
Not work:
I’m not sure I’m sold on this. I get why people building infrastructure software like redis might want this. Yes, it helps them keep the “Foo as a Service” market as a captive income stream without competition from AWS, et al. At the same time it seems like for any service of much worth, it’s going to get cloned by the big providers anyway, and then you have a proliferation of similar but incompatible closed-source versions. I’m not convinced that is necessarily good for the community at large.
I think it’s just a protection to avoid a Redis as a Service being launched with plain Redis plus a few bits here and there to make the offering work. Big players can obviously clone it and have their own, but at least most small to mid-size players are eliminated. (From what I understand.)
You can still start Redis as a Service companies. I was shocked at first because I thought this concerned Redis and their aim was to kill all of the Redis as a Service providers which already exist. But it turns out Redis Core is unaffected by this, only some modules are.
I don’t really know what they intend to achieve with this, except having people avoid using their modules…
Which doesn’t seem worthwhile, as the big players are the ones most likely to be able to market and monetise a service based on core Redis plus their own proprietary add-ons. It’s pretty difficult to compete with AWS on any front at this stage, given their massive resources and the “nobody ever gets fired for buying X” safety of big brands.
Boxing out only the small players doesn’t really feel like it’s going to preserve a whole bunch of market or mindshare for the Redis company.
I’m not into business very much, so I cannot evaluate whether this move is worth it or not. I would just assume that they were going for the long tail, which could be a sufficient number of clients to bring decent revenue and keep the Redis company going.
In reality, I don’t have the feeling that a “long tail” actually exists for a lot of these types of services. I base this on the Firebase/Parse era, when there were loads of “backend as a service” companies around that have all withered away (my understanding at least), with only Google/Firebase remaining. I personally was surprised by this.