I’ve always found FIPS to be rather silly because of this sort of thing. You can’t use an industry-standard, secure solution because it isn’t kosher. So instead you have to use a second-rate solution with known problems.
Same thing happens with implementations, where the vendor tries very hard to never discover any bugs, because that would mean recertifying.
This is why I prefer to provide two modes: Secure and FIPS. Then people downstream get to explain to whoever put the FIPS requirement in why they aren’t using the Secure mode.
This is what I did when I designed CipherSweet, fwiw.
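For what it's worth, the dual-mode idea fits in a few lines. This is only an illustration of the pattern, not CipherSweet's actual API; the mode table and `tag` helper are made up, and I'm using hash choice as a stand-in for a full algorithm suite:

```python
import hashlib
import hmac

# Hypothetical mode table -- illustrative only. "secure" picks a modern
# primitive; "fips" restricts itself to an approved one, trading quality
# for compliance. The downstream user picks the mode, and gets to explain
# the choice to whoever imposed the FIPS requirement.
MODES = {
    "secure": "blake2b",
    "fips": "sha256",
}

def tag(mode: str, key: bytes, message: bytes) -> bytes:
    """Authenticate a message with whatever hash the chosen mode allows."""
    try:
        algorithm = MODES[mode]
    except KeyError:
        raise ValueError(f"unknown mode: {mode!r}")
    return hmac.new(key, message, getattr(hashlib, algorithm)).digest()

key = b"\x00" * 32
print(len(tag("secure", key, b"hello")))  # BLAKE2b: 64-byte tag
print(len(tag("fips", key, b"hello")))    # SHA-256: 32-byte tag
```

The point of keeping it a single dispatch table is that the FIPS restriction stays visible in one place instead of leaking through the codebase.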
If that’s how it is, we could very easily cause a lot of headache by trolling the validated modules registry for projects to find vulnerabilities in.
One benefit I can see in the FIPS way is that the crypto modules themselves are validated, rather than the source code. In theory that should prevent xz-style attacks. That said, I 100% agree that they need to validate more and newer algorithms.
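The idea in a nutshell: if validation pins a digest of the built artifact, a tampered build stops matching. Sketch only, with a hypothetical pinned digest; real FIPS 140 validation is far more involved than a checksum:

```python
import hashlib
import tempfile

def file_sha256(path: str) -> str:
    """Digest a built artifact in chunks, as you would a large .so file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a built crypto module.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is libcrypto.so")
    path = f.name

certified = file_sha256(path)          # digest recorded at validation time

with open(path, "ab") as f:            # an xz-style backdoor edits the artifact
    f.write(b"\x00backdoor")

print(file_sha256(path) == certified)  # False: the tampering is detectable
```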
What would you do differently, aside from (spending more money on) updating the standards more often?
It seems to me that one must either choose algorithms for vendors, which will mean choosing bad algorithms for them (when algorithms on one’s list go bad), or let vendors choose the algorithms, which will mean letting them choose bad algorithms (when they’re lazy or ignorant).
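The dilemma is easy to see with a toy allowlist (the list and helper are hypothetical, not any real standard's): the central-list model is only as good as the list's last revision, so it rejects good new algorithms and accepts stale ones.

```python
# Hypothetical central allowlist. 3DES really was approved once and has
# since been deprecated; ChaCha20-Poly1305 is widely used but not
# FIPS-approved -- which is exactly the failure mode being described.
APPROVED = {"aes-256-gcm", "sha-384", "3des"}

def vendor_choice_ok(algorithm: str) -> bool:
    """Central-list model: vendors may only pick what the list allows."""
    return algorithm.lower() in APPROVED

print(vendor_choice_ok("chacha20-poly1305"))  # False: secure, but not listed
print(vendor_choice_ok("3des"))               # True: listed, but now bad
```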
The standards are poorly designed even judged solely by what was known at the time they were written.