Great article as always. A couple of additional points:
One of the goals of memory hardness is to make FPGA and custom ASIC implementations hard. A moderately expensive FPGA might be programmed with a custom pipeline doing the hash, instantiated a thousand times in parallel and able to take one input per cycle. Even at 200 MHz, that will outperform a CPU or GPU. An ASIC implementation of the same thing may run at 2 GHz. But if each computation needs 64 MiB of state, then you're going to run out of area for SRAM long before you get that kind of scalability. This matters a lot if your threat model includes nation-state actors, less so for other folks.
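To make the memory-cost knob concrete, here's a minimal sketch using scrypt from Python's standard library; the n=2**16, r=8 choice (roughly 128·n·r ≈ 64 MiB of working state per hash) is an illustrative assumption of mine, not a parameter recommendation from the article.

```python
import hashlib
import os

# Illustrative only: scrypt's working memory is roughly 128 * n * r bytes,
# so n=2**16, r=8 forces ~64 MiB of state per hash computation. That state
# requirement is what starves an FPGA/ASIC of on-chip SRAM long before it
# can instantiate thousands of pipelines in parallel.
password = b"correct horse battery staple"
salt = os.urandom(16)

digest = hashlib.scrypt(
    password,
    salt=salt,
    n=2**16,                          # CPU/memory cost factor
    r=8,                              # block size
    p=1,                              # parallelism
    maxmem=128 * 2**16 * 8 + 2**20,   # allow ~64 MiB plus a little slack
    dklen=32,
)
print(digest.hex())
```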
The text at the start of the recommendations for algorithms describing the use case is absolutely critical. If you're defending against someone grabbing your database of password hashes and going fishing for weak ones, you have a very different set of requirements than if you're expecting someone to try to compromise a specific password (e.g., the break-glass account for your cloud service). In general, I like to see these kinds of recommendations come with a rough cost for the attacker: how much compute time will it take, on average, to find a password that matches a given hash? Ideally, what would that cost in dollars on a public cloud? I can then assume that the cost halves every year, work out when the cost of cracking the password drops below the value of the data, and do some security economics reasoning.
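As a sketch of that kind of reasoning: the guess rate, cloud price, password space, and data value below are all made-up illustrative numbers I'm assuming, not figures from the article.

```python
# Back-of-envelope attacker cost estimate (all inputs are illustrative assumptions).
guesses_per_second = 1_000        # per instance, for a memory-hard hash (assumed)
instance_price_per_hour = 3.00    # USD, assumed accelerator cloud instance
password_space = 2**40            # assumed effective guessing space for the password

expected_guesses = password_space / 2                        # on average, half the space
instance_hours = expected_guesses / guesses_per_second / 3600
cost_today = instance_hours * instance_price_per_hour
print(f"Expected cracking cost today: ${cost_today:,.0f}")

# If the cost halves every year, estimate when it drops below the data's value.
data_value = 10_000               # USD, assumed value of the protected data
years, cost = 0, cost_today
while cost > data_value:
    years += 1
    cost /= 2
print(f"Cracking becomes economical in roughly {years} years")
```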
What’s a doubly augmented PAKE? What does it do beyond a “merely” augmented one?
(Also asked on Reddit.)
See this slide deck, slides 44-46 specifically.
I can’t find the talk online yet.