1. 14
  1.  

  2. 5

    The continued insistence on SHA-based PBKDFs is disappointing. These are hash functions designed to be fast and efficient, at complete odds with key stretching. So we try to fix this by increasing iterations, but the cost for attackers doesn’t scale at the same rate as the cost for legitimate users.
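To make the knob concrete, here is a minimal sketch using Python’s standard `hashlib` (an illustration, not anyone’s production code): the iteration count is the only cost parameter PBKDF2 gives you, and raising it raises the defender’s per-login cost linearly.

```python
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key with PBKDF2-HMAC-SHA256.

    Doubling `iterations` roughly doubles the cost of one legitimate login,
    but an attacker running many guesses in parallel pays that cost across
    all lanes at once, so the relative gap barely moves.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

key = derive_key("correct horse battery staple", b"per-user-salt")
assert len(key) == 32
```

Memory-hard functions like scrypt attack this asymmetry differently, by making each guess expensive in RAM as well as compute.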

    1. 1

      I don’t understand how the cost to attackers doesn’t scale at the same rate. Unless someone can derive a shortcut mapping every 256-bit SHA output to the hash of that hash, I can’t really imagine a way to short-circuit the iteration count, even with a large number of computers or a large memory bank.

      1. 4

        Somebody with a decent graphics card can test about a bajillion hashes in parallel, and GPUs are getting wider faster than my CPU is getting faster. To keep the same difficulty against this year’s GPUs as against last year’s, I have to double the iteration count, which nearly doubles the time it takes me to log in.

      2. 1

        If I had to wager a guess, I’d guess that the reason these things are continually insisted upon is that PBKDF2 has an RFC (RFC 2898). Now that scrypt has an RFC too, RFC 7914, I wonder if it’ll be recommended with the next update?

        What’s silly about this thought is that both linked RFCs are “informational”, not “standards track”, so the idea that simply having an RFC is the criterion for solving the problem is arbitrary and, frankly, a bit silly.

      3. 5

        Applications must allow all printable ASCII characters, including spaces, and should accept all UNICODE characters, too, including emoji!

        Oh yeah, this is going to be fun! I’m anticipating issues with full decomposition and composition of strings. I mean, when two entries are canonically equivalent under Unicode, does the password entry count as correct? To make this guideline even half-sane, NIST should specify that the Unicode strings be normalized to NFD.
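The normalization issue is easy to demonstrate (a Python sketch; the `password_bytes` helper is made up for illustration): normalize to NFD before encoding, and the two canonically equivalent spellings yield identical bytes.

```python
import unicodedata

def password_bytes(pw: str) -> bytes:
    # Normalize to NFD so canonically equivalent inputs produce the same bytes.
    return unicodedata.normalize("NFD", pw).encode("utf-8")

a = "jalape\u00f1o"   # ñ as the single precomposed codepoint U+00F1
b = "jalapen\u0303o"  # n followed by U+0303 COMBINING TILDE

assert a != b                                  # raw strings differ...
assert password_bytes(a) == password_bytes(b)  # ...but normalize identically
```

Without a step like this, which spelling a user produces depends on their OS and input method, and verification becomes a coin flip.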

        1. 2

          For the uninitiated, NFD stands for Normalization Form Canonical Decomposition.

          1. 1

            In practice, I think the password will end up being a UTF-8 byte sequence. And yes, that means if you switch operating systems or input devices, you may not be able to enter the password exactly the same way.

            1. 1

              And this is a huge problem. Just keep in mind that a normal user is probably not accustomed to that and will end up wondering why ö =/= ö, because one is ö and the other one is o + ¨.

              Sadly, as simple and ingenious as UTF-8 is as a character encoding, Unicode adds a big bucket of complexity on top of it. Combining characters into new ones is a well-motivated idea, but staying backward compatible while keeping precomposed forms in the standard is a big mess that confuses a lot of people. At suckless.org, we are currently working on a library to handle grapheme clusters as simply as possible, but had to make some “cut-offs” along the way.
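The ö confusion described above takes only a few lines to reproduce (Python here purely for illustration):

```python
import unicodedata

# Two visually identical spellings of "ö":
composed = "\u00f6"     # U+00F6 LATIN SMALL LETTER O WITH DIAERESIS
decomposed = "o\u0308"  # 'o' followed by U+0308 COMBINING DIAERESIS

assert composed != decomposed  # naive string comparison says they differ
assert len(composed) == 1      # one codepoint...
assert len(decomposed) == 2    # ...vs. two codepoints, one grapheme cluster

# Canonical normalization reconciles them:
assert unicodedata.normalize("NFD", composed) == decomposed
assert unicodedata.normalize("NFC", decomposed) == composed
```

This is exactly why grapheme-cluster handling needs its own library layer: codepoint counts and byte counts both lie about what the user sees.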