1. 8
  1.  

  2. 2

    This is a fun idea, like some sort of inside-out backwards RC4. Note that, while this cryptographic shuffle is constant-space, there are also reservoir sampling algorithms, which are based on classical shuffling techniques and also do not need all elements in memory at once. These also share valuable properties with cryptographic shuffling, like the ability for multiple processes to deterministically generate the same shuffle given the same random seed.
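    For instance, Algorithm R, the classic reservoir sampler, keeps only k elements in memory and, given the same seed and the same stream, reproduces the same sample. A minimal sketch (function and parameter names are my own):

    ```python
    import random

    def reservoir_sample(stream, k, seed):
        # Algorithm R: keep a uniform k-element sample of a stream without
        # ever holding the whole stream in memory. The same seed over the
        # same stream deterministically reproduces the same sample.
        rng = random.Random(seed)
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                # Item i survives with probability k / (i + 1).
                j = rng.randint(0, i)
                if j < k:
                    reservoir[j] = item
        return reservoir
    ```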

    1. 2

      Shuffling without holding the whole thing in memory is one of my areas of interest!

      Some of the ideas the author mentions (Feistel ciphers, iterating until output is in range) show up in format-preserving encryption. Feistel ciphers in particular are magical to me, because they’re a general-purpose method for converting a random function (which likely has collisions) into a function that produces a permutation (each output appears only once).
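      To make that concrete, here’s a toy sketch (my own construction, not from the article): the hash-based `round_fn` below certainly has collisions, yet the resulting map on 64-bit values is a permutation, because every step can be undone.

      ```python
      import hashlib

      def round_fn(half, round_key):
          # Any deterministic "random-looking" function works here, even a
          # lossy one. Illustrative choice: truncated SHA-256.
          data = half.to_bytes(4, "big") + round_key.to_bytes(4, "big")
          return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

      def feistel_encrypt(x, round_keys, half_bits=32):
          # Split a 64-bit input into two 32-bit halves, then alternate
          # XOR-and-swap rounds.
          mask = (1 << half_bits) - 1
          left, right = x >> half_bits, x & mask
          for rk in round_keys:
              left, right = right, left ^ (round_fn(right, rk) & mask)
          return (left << half_bits) | right

      def feistel_decrypt(x, round_keys, half_bits=32):
          # Same machine run backwards: reversed keys, inverted steps.
          mask = (1 << half_bits) - 1
          left, right = x >> half_bits, x & mask
          for rk in reversed(round_keys):
              left, right = right ^ (round_fn(left, rk) & mask), left
          return (left << half_bits) | right
      ```

      Because `feistel_decrypt` recovers every input, no two inputs can map to the same output.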

      He mentions that the “limitation to lists of length 2^k is a bit of pain” and raises the possibility of timing attacks. I know of some other algorithms that remove these limitations: the swap-or-not shuffle is one that comes to mind. It can shuffle a list of arbitrary size, and the paper is not too difficult to understand (pseudocode example on the first page!).
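      For a sense of how simple it is, here’s a rough Python sketch (the per-round keys and the one-bit round function are my own SHA-256-based stand-ins, purely illustrative; the paper spells out the real security requirements). Each round pairs x with K_r − x mod n, and both members of a pair make the same swap decision because the decision bit looks only at max(x, partner), so every round is an involution and the whole thing is a permutation on any n:

      ```python
      import hashlib

      def _round_bit(key, x_hat, r):
          # Pseudorandom one-bit round function (illustrative stand-in).
          data = f"{key}:{r}:{x_hat}".encode()
          return hashlib.sha256(data).digest()[0] & 1

      def swap_or_not(x, n, key, rounds=10):
          # Swap-or-not shuffle: permutes x in [0, n) for arbitrary n.
          for r in range(rounds):
              # Per-round offset derived from the key (illustrative).
              k_r = int.from_bytes(
                  hashlib.sha256(f"{key}:{r}".encode()).digest(), "big") % n
              x_partner = (k_r - x) % n
              # Both x and x_partner see the same x_hat, hence the same bit,
              # so each round swaps pairs or leaves them alone: a bijection.
              x_hat = max(x, x_partner)
              if _round_bit(key, x_hat, r):
                  x = x_partner
          return x
      ```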

      1. 1

        Ah, very cool! I’ll have to check out swap-or-not.

        Feistel ciphers really do look pretty magical. They’re so simple! And yet… I don’t actually understand them at all. This kind of math somehow bounces right off my brain. I should probably take some time to work a small example and see what it does with different round keys.

        1. 1

          What made it click for me is the realization that each step in a Feistel cipher is completely reversible (assuming you know the key):

          • Swap left and right: This can be undone by swapping again.
          • left ^= round_fn(right, round_key): In this step, right doesn’t change, and round_key never changes. You can calculate the exact same round_fn(...) again and do the exact same XOR again.

          If you can run each step in reverse, you can run the whole machine in reverse. Which is just another way of saying that you can decrypt the message. And that means it’s impossible to have two inputs that encrypt to the same X – because if you did, how would you ever decrypt X?
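          Worked through with tiny 4-bit halves (toy numbers, and any `round_fn` works here, even a lossy one), the undo step looks like this:

          ```python
          def round_fn(right, round_key):
              # Toy round function for illustration; collisions are fine.
              return (right * 31 + round_key) & 0xF

          # One Feistel step on 4-bit halves:
          left, right, round_key = 0b1010, 0b0110, 0b0011
          new_left, new_right = right, left ^ round_fn(right, round_key)

          # Undo it: new_left *is* the old right, so we can recompute the
          # exact same round_fn(...) and XOR it away again.
          undone_right = new_left
          undone_left = new_right ^ round_fn(new_left, round_key)
          assert (undone_left, undone_right) == (left, right)
          ```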

          1. 1

            Yeah, I think it’s starting to click for me as I stare at it more – you’re layering on these contingent values with XOR, alternating sides. The reason to have two sides is so you can continue to store the “recovery information” for undoing the XOR; the reason for [edit] the reversal [/edit] is just that it makes the algorithm description simpler.

            Swap-or-not… I’m going to have to stare at that one a lot more, haha! Do you know how much scrutiny that one has undergone? I don’t see a ton of information on it, and I don’t really know how to evaluate what I do see.

            1. 2

              There is a security proof in the swap-or-not paper. I’m not skilled enough to tell for myself whether the proof is correct, but Rogaway is a respected cryptographer.

              The gist of the proof is if you have enough rounds, and your round function is secure, then swap-or-not is secure. This is a similar result to what’s been proven about Feistel ciphers (what the abstract calls “a Luby-Rackoff type result”). A specific cipher may still be broken, but the Feistel design is sound – just as a bridge collapse doesn’t imply that we’ve been wrong for millennia about the concept of the arch. Swap-or-not is similar, although the design hasn’t been in use for as many years (steel-cable suspension bridge, maybe?).

      2. 1

        This also gives you a collision-free, arbitrarily-indexable key expander, rather than one that has to keep state and iterate n times to reach the nth output block. Meaning you can get expanded key material for an arbitrarily indexed block of ciphertext with a fixed amount of work.
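        In other words, something like counter mode: the nth block of key material is a direct function of n, so the cost is the same for any index. A hash-based sketch (my own illustration; note a plain hash is a random function, so it can collide — plugging in a permutation, like the Feistel constructions discussed above, is what adds the no-collision guarantee):

        ```python
        import hashlib

        def keystream_block(key: bytes, index: int) -> bytes:
            # Direct random access: block #index costs one PRF call no matter
            # how large index is -- no iterating a stateful expander n times.
            # (Illustrative PRF: SHA-256 of key || index.)
            return hashlib.sha256(key + index.to_bytes(8, "big")).digest()
        ```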

        1. 1

          Also, as long as your keyspace is smaller than your “shuffle” space, this should be easily scalable, so that you can arbitrarily index into a permuted-but-unique sequence of n-bit numbers by just composing smaller versions of itself.

          1. 1

            > as long as your keyspace is smaller than your “shuffle” space

            I think this is generally going to be true. Let’s say you’re looking at permutations on a byte. There are 8 bits there, so 256 possible values. The number of permutations should then be 256!, which we can approximate with (256/e)^256 or about 2^1679. So to achieve all shuffles of a byte, you’d need a keyspace of 1679 bits!
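            Quick sanity check on that arithmetic (the (256/e)^256 form is Stirling’s approximation with the √(2πn) factor dropped, so it slightly undershoots the true value, which is about 2^1684):

            ```python
            import math

            # log2(256!) exactly (via the log-gamma function, since
            # lgamma(257) == ln(256!)) and via the (256/e)^256 estimate.
            exact_bits = math.lgamma(257) / math.log(2)      # ~1684.0
            stirling_bits = 256 * math.log2(256 / math.e)    # ~1678.7
            ```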