1. 16
    1. 7

      DJB found that Curve25519 came with so many benefits: high speed, resistant to timing attacks, […]

      I’d nuance that to easier to protect against timing attacks. Modular arithmetic based on a pseudo-Mersenne prime is easier to implement in constant time, but you still have to take a couple of precautions. And the Montgomery ladder still needs a conditional, which we must implement as a constant-time selection. Still, much simpler than the best algorithms known for Weierstraß curves at the time. (Now, if I’m being honest, “resistant to timing attacks” is a good enough short description.)
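      That constant-time selection can be sketched like this (the 5-limb representation is purely illustrative; real implementations vary in limb count and radix):

```c
#include <stdint.h>
#include <stddef.h>

/* Constant-time conditional swap: if swap == 1, exchange a and b;
 * if swap == 0, leave them untouched.  The Montgomery ladder's
 * branch becomes this data-independent selection, so the secret
 * scalar bit never influences timing or branch prediction. */
static void ct_cswap(uint64_t a[5], uint64_t b[5], uint64_t swap)
{
    uint64_t mask = 0 - swap;  /* all zeroes or all ones */
    for (size_t i = 0; i < 5; i++) {
        uint64_t diff = mask & (a[i] ^ b[i]);
        a[i] ^= diff;
        b[i] ^= diff;
    }
}
```

      The compiler sees no branch, only arithmetic, so both values of the secret bit take the same code path.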

      Further, there is an equivalence proof from Edwards curves to Montgomery curves, which means DJB’s findings extend to twisted Edwards curves. That equivalence is usually stated with the words “birationally equivalent”.

      Note that a birational map is not a bijection. There are exceptional points, and ignoring the difference cost me dearly. The mistake was a bit subtle, but its effects were spectacular enough that I believe some people mistook it for a noob’s blunder: I accepted all-zero signatures. ECDSA explicitly requires the verifier to reject those… except Ed25519 rejects all-zero signatures implicitly, and the reason my implementation didn’t was more subtle than a failure to implement a standard check.
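      A paranoid verifier that wants the explicit ECDSA-style rejection, rather than relying on the implicit one, can add a check along these lines (a hypothetical helper, not any library’s actual API):

```c
#include <stdint.h>
#include <stddef.h>

/* Returns 1 if the 64-byte signature is all zeroes, 0 otherwise.
 * No early exit, so the scan itself is data-independent.
 * Hypothetical helper for an explicit up-front rejection. */
static int sig_is_all_zero(const uint8_t sig[64])
{
    uint8_t acc = 0;
    for (size_t i = 0; i < 64; i++) {
        acc |= sig[i];
    }
    return acc == 0;
}
```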

      One new feature of the IETF RFC version of Ed25519 over the original Ed25519 as published is its resistance to signature forgeries.

      Ah, yes, signature malleability. It turns out there are about 8 equivalent values for the second half of the signature, and just by seeing one legitimate signature, an attacker can easily compute the 7 others. Normally this should not be a problem: a signature proves one thing, and one thing only: that someone with knowledge of the private key signed the message.

      Thing is, people tend to read more than that into it. When they see a signature, they further assume that producing this particular signature required knowledge of the private key. That is only true if we accept just one signature among the 8 possible ones (say, the smallest).
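      That is exactly the canonicity check RFC 8032 mandates: reject any second half s that is not strictly below the group order L, which kills the trivial s → s + kL malleability (since s fits in 32 bytes and L ≈ 2²⁵², there are up to 8 such representatives). A sketch, with the constant taken from RFC 8032:

```c
#include <stdint.h>

/* Little-endian bytes of the group order
 * L = 2^252 + 27742317777372353535851937790883648493. */
static const uint8_t L[32] = {
    0xed, 0xd3, 0xf5, 0x5c, 0x1a, 0x63, 0x12, 0x58,
    0xd6, 0x9c, 0xf7, 0xa2, 0xde, 0xf9, 0xde, 0x14,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10,
};

/* Returns 1 if s (little-endian) is in the canonical range [0, L),
 * 0 otherwise.  Signatures are public, so a variable-time
 * comparison is fine here. */
static int s_is_canonical(const uint8_t s[32])
{
    for (int i = 31; i >= 0; i--) {  /* most significant byte first */
        if (s[i] < L[i]) return 1;
        if (s[i] > L[i]) return 0;
    }
    return 0;  /* s == L is not canonical either */
}
```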

      Even so, we’re not out of the woods: even if we get rid of malleability, signatures are still not unique. The way EdDSA is constructed, it needs a random nonce, which is part of the final computation. We can thus generate an arbitrary number of signatures from the same private key and message. The only reason we generate just one in practice is that the nonce is computed deterministically to avoid user error. A very good move from DJB, but don’t get fooled into thinking the standard way is the only way.
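      For reference, here is where that nonce enters, sketched in RFC 8032’s notation (B is the base point, s the secret scalar, A the public key, and “prefix” the second half of the expanded secret key):

```latex
r &= \mathrm{SHA\text{-}512}(\mathit{prefix} \parallel M) \bmod L \\
R &= rB \\
S &= \bigl(r + \mathrm{SHA\text{-}512}(R \parallel A \parallel M)\cdot s\bigr) \bmod L
```

      Any r in [0, L) yields a signature that verifies; the deterministic derivation of r merely pins down which one we emit.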

      Normally you will not use the hash function directly while signing. A good cryptography library will choose the right algorithm that goes with the key. If you write against OpenSSL in C, you may have to perform this step manually. For Ed25519, there is only one algorithm permitted: SHA-512.

      Not sure what the picture on the left of this disapproves of, but I know of two libraries that let you choose your hash algorithm: Curve25519-Donna (compile-time option) and Monocypher.

      While writing this I found out that the -rawin flag is only “raw” for the other DSAs. For Ed25519 and Ed448, -rawin is a required flag that does not behave as documented! Ed25519 still hashes the raw content with SHA-512. I was in the middle of verifying my work with an alternate implementation when it of course failed to verify in Node. The reason? OpenSSL’s documentation and flag choices suck that badly.

      To be honest, EdDSA does not make it easy to select the hash. The most natural implementations (not NaCl’s) pretty much require an entire context-based incremental hash interface. For Monocypher, I ended up implementing a full-blown virtual table:

      typedef struct {
          /* One-shot interface. */
          void (*hash)(uint8_t hash[64], const uint8_t *message, size_t message_size);
          /* Incremental interface. */
          void (*init  )(void *ctx);
          void (*update)(void *ctx, const uint8_t *message, size_t message_size);
          void (*final )(void *ctx, uint8_t hash[64]);
          size_t ctx_size; /* lets the caller allocate the context */
      } crypto_sign_vtable;

      And I avoid inheritance even in C++.
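      To make the dispatch concrete, here is a toy implementation of that interface and a call through the table. The “hash” is a deliberately silly XOR checksum, purely to exercise the interface, and not anything Monocypher actually ships:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Same vtable shape as above, reproduced to keep this self-contained. */
typedef struct {
    void (*hash)(uint8_t hash[64], const uint8_t *message, size_t message_size);
    void (*init  )(void *ctx);
    void (*update)(void *ctx, const uint8_t *message, size_t message_size);
    void (*final )(void *ctx, uint8_t hash[64]);
    size_t ctx_size;
} crypto_sign_vtable;

/* Toy "hash": XOR each message byte into a rotating 64-byte state. */
typedef struct { uint8_t state[64]; size_t pos; } toy_ctx;

static void toy_init(void *vctx)
{
    toy_ctx *ctx = (toy_ctx *)vctx;
    memset(ctx->state, 0, 64);
    ctx->pos = 0;
}

static void toy_update(void *vctx, const uint8_t *msg, size_t size)
{
    toy_ctx *ctx = (toy_ctx *)vctx;
    for (size_t i = 0; i < size; i++) {
        ctx->state[ctx->pos] ^= msg[i];
        ctx->pos = (ctx->pos + 1) % 64;
    }
}

static void toy_final(void *vctx, uint8_t hash[64])
{
    memcpy(hash, ((toy_ctx *)vctx)->state, 64);
}

/* One-shot wrapper built from the incremental pieces. */
static void toy_hash(uint8_t hash[64], const uint8_t *msg, size_t size)
{
    toy_ctx ctx;
    toy_init(&ctx);
    toy_update(&ctx, msg, size);
    toy_final(&ctx, hash);
}

static const crypto_sign_vtable toy_vtable = {
    toy_hash, toy_init, toy_update, toy_final, sizeof(toy_ctx),
};
```

      The signing code only ever sees the vtable, so swapping SHA-512 for BLAKE2b (or a toy like this) is a matter of passing a different table.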

      Additionally OpenSSL only supports “oneshot” operation with these algorithms. This means that the entire file to be signed/verified must be read into memory before processing it.

      Almost nobody (except Monocypher) provides incremental signing and verification, for two reasons:

      • Signing requires two passes over the message: one to hash it and generate the nonce, and a second to hash it again, this time to sign it. We could skip the first pass and generate a random nonce instead, but that would leave us open to exactly the kind of user error DJB was trying to avoid when he designed EdDSA.
      • Incremental verification easily runs afoul of the Cryptographic Doom Principle: an incremental interface makes it very tempting to implement a pipeline with verification in the middle. The problem is that verification succeeds or fails only once all the data has been consumed, at which point we may very well have trusted the beginning of the data before it was verified.

      I still think those interfaces are worth providing, because (i) we can safely ditch the first signing pass if we’re careful enough, and (ii) for big files we need an incremental interface anyway. We need it so badly, in fact, that people already hash the file and sign the hash instead of signing the file itself, losing the collision resilience EdDSA originally enjoyed.

      A troublesome part of rolling our own cryptography is that it can appear to work but it might not fit the standard, or we might be doing something accidentally wrong because documentation was unclear.

      The author here is talking about using fucking OpenSSL. Pardon the swear word, but if OpenSSL’s API is so bad that using it amounts to “rolling your own crypto”, we should consider erasing it from the face of the planet, with the possible exception of a select few museums.

      That’s why you’ll see things like testing vectors in good specifications. Bad specifications or publications about an approach do not provide a means to reproduce the work reliably.

      Case in point: Argon2. When I implemented it from scratch, I couldn’t match my results with the reference implementation’s. Turns out the reference implementation did not follow the spec to the letter. It’s been five years now; I think the spec will never be corrected. At least we have the RFC now.

    2. 6

      What is it with cute animal drawings and exceptionally accessible and pedagogical cryptography explainers?!