
The logic is nonsense. A 4000-bit key is 2^96 times easier to defeat than a 4096-bit key (modulo the fact that factoring algorithms are much better than linear, which smooths the difficulty curve significantly) and is precisely equivalent to a 4096-bit key the first 96 bits of which are 0. (In fact, a 2048-bit key is equivalent to a 4096-bit key the first 2048 bits of which are 0. The key is just a number, and so the key length is just an upper bound on that number.)
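The “a key is just a number” point can be sketched in a few lines of Python (the values here are toy upper bounds, not real keys):

```python
# A key is just an integer; the key length is only an upper bound on its value.
k2048 = (1 << 2048) - 1  # largest possible 2048-bit number

# Write it as a "4096-bit" value: 1024 hex digits, zero-padded on the left.
padded_hex = format(k2048, "01024x")
assert len(padded_hex) == 1024          # 4096 bits = 1024 hex digits
assert int(padded_hex, 16) == k2048     # leading zeros change nothing

# And the 4000-bit keyspace is 2**96 times smaller than the 4096-bit one:
assert (1 << 4096) // (1 << 4000) == 1 << 96
```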

While it’s not inconceivable that someone could invent an attack on RSA which is only effective on keys between 2^4095 and 2^4096-1 (but that sounds silly, because surely 7F is a valid first byte of a 4096-bit key? So maybe between 2^4088 and 2^4096-1? But he complained about Let’s Encrypt requiring lengths on an 8-bit boundary…), an attack on RSA is isomorphic to a factoring algorithm, so finding a factoring algorithm which only works efficiently on numbers in a particular range is especially absurd.

Furthermore, using nonstandard key lengths makes you much more likely to hit bugs in key generation or encryption software causing poor key selection or information leakage. This may be good QA practice, but it’s extremely poor security practice.

What about a 4097-bit key?

“that if I would build a RSA key cracker, there is some likelihood that I would need to optimize the implementation for a particular key size in order to get good performance. Since 2048 and 4096 are dominant today, and 1024 were dominent some years ago, it may be feasible to build optimized versions for these three key sizes.”

Let me stop this wtf article right there. Attackers will focus on what’s popular, which is why obfuscation fans like me recommend high-quality stuff that’s not popular, especially for Windows users. It lowers the average number of problems for no or low effort. However, the security of RSA keys is based on the hardness of defeating the algorithm at each key length. That hardness grows either in multiples or exponentially in difficulty, in a way best illustrated by prior results [1] against RSA. Lower key sizes give lower effort. Breaking the high ones means they need a miracle algorithm for factoring that mathematicians everywhere still haven’t found, a quantum computer, defeat of the cryptosystem itself, or flaws in the implementation.

Those risks always exist for RSA. Reducing key sizes towards those that were crackable just multiplies that risk by an amount that’s unknowable but bad. Keep it at 2,048 or above. Also, follow the advice of experienced cryptographers, especially when the recommendations have been battle-tested for over a decade by other cryptographers. Good, general rule.

[1] https://web.archive.org/web/20130405161723/http://www.rsa.com/rsalabs/node.asp?id=2092
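For a rough feel of how that hardness scales with modulus size, the usual heuristic running-time estimate for the general number field sieve, L_n[1/3, (64/9)^(1/3)], can be evaluated directly. This is a back-of-the-envelope sketch, not a serious cost model:

```python
import math

def gnfs_log2_effort(bits):
    """Rough log2 of the GNFS heuristic cost L_n[1/3, (64/9)**(1/3)]
    for factoring an n-bit modulus."""
    ln_n = bits * math.log(2)  # ln(n) for an n-bit number n
    c = (64 / 9) ** (1 / 3)
    ln_effort = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_effort / math.log(2)

for bits in (1024, 2048, 4096):
    print(bits, "->", round(gnfs_log2_effort(bits)), "bits of work (heuristic)")
```

The numbers come out in the neighborhood of 85 “bits” of work for a 1024-bit modulus and roughly 115 for 2048, which lines up with commonly quoted security-level estimates: the effort grows far slower than the keyspace, but each doubling of key size still adds a large constant factor.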

I think the logic here is nonsense, but I’d love to hear what other lobsters (potentially with the background to evaluate this confidently) have to say.

“The logic here is nonsense”

I myself have chosen to only use ed25519 keys for ssh, because I find it amusing that most hack-bots can’t negotiate them.
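If anyone wants to follow suit, generating such a key is a one-liner with ssh-keygen; the output path, comment, and empty passphrase below are just example choices:

```shell
# Generate an Ed25519 SSH keypair non-interactively
# (path, comment, and empty passphrase are example choices)
rm -f /tmp/id_ed25519_example /tmp/id_ed25519_example.pub
ssh-keygen -q -t ed25519 -a 100 -N "" -C "you@example.com" -f /tmp/id_ed25519_example
cat /tmp/id_ed25519_example.pub   # single line starting with "ssh-ed25519"
```

(`-a 100` raises the KDF round count used to protect the private key on disk; for a real key you would of course set a passphrase rather than `-N ""`.)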

This is similar logic to running services on non-default ports. It /may/ be more secure because attackers will pass over it but that’s a seriously dangerous assumption if that’s what you’re relying on for security.

At least he pushes for modern crypto algorithms, so it’s not ALL bad.

nb: Simon Josefsson is one of the GnuTLS maintainers (previous lead).