1. 33

  2. 11

    I think this post misses the point as severely as the hypothetical conversation in the first panel of the comic. To wit: assume your adversary can’t prove whether you know the key, and they don’t care. If they can punish you without due process, or their judiciary system’s concept of due process does not consider your knowledge (or lack thereof) germane, it may be worthwhile for them to punish you regardless.

    Looking like you could plausibly be sending encrypted messages becomes the crime, and crushing anyone who looks like they might be doing that is a fine remedy. Regardless of whether or not they can prove you did.

    1. 8

      I mentioned this problem:

      Defining success can be tricky as Mallory can ultimately decide not to believe any claim that Alice makes.

      Ultimately this is something that we can’t solve, so I choose to ignore it and suppose that the adversary has to be able to assert that data exists. Without this assumption we lose before we ever begin.

      1. 3

        I noticed that and didn’t read it as the same problem, though it’s certainly related. Mallory choosing not to believe any claim Alice makes versus Mallory not being held to any standard for the evidence that underlies that belief (due process…) is subtly but IMO importantly different.

        And I’ll toss in that I think the only solution is for dissidence (in this case, in the form of unbreakable encryption) to be so pervasively used in a society that Mallory can’t get away with meting out punishment on that basis alone.

        Disobeying rules like this is a public good that needs to be widespread in order to prevent them from being enforceable.

        1. 2

          Whether Mallory does not believe Alice or instead shoots Alice in the head without considering her claims, the fundamental problem is the same. Without defining some (perhaps arbitrary) requirement for winning, the game is meaningless. I think unfalsifiability is as good a criterion as any.

          With regard to the wider scope of societal impact, I agree that a normalised standard is better. What’s interesting is that even in the protocol I describe, a hosted service shared amongst multiple people makes it far easier to retain deniability. I imagine that if there were thousands (or millions) of people using such a service, we would be in a situation similar to what you describe.

          1. 2

            Governments are not under an obligation to consistently enforce their own laws. The worst case scenario is when a government can punish anyone for not disclosing the key to a deniable encryption scheme whether they actually used any encryption or not—using any random blocks from their filesystem as nominal “evidence”. Or planting an output of dd if=/dev/urandom and presenting it as “evidence”.

            If someone with enough power wants to see you behind bars, they will, whether you actually broke any laws or not. Ultimately it’s a social and political problem of government accountability that has no technical solution.

            1. 2

              I think we’re saying very similar things. The social and political solution to the problem of “If someone with enough power wants to see you behind bars” is due process, to a point. A firm societal expectation of due process at least raises the “enough power” threshold.

              I think a technical tool that can help raise the bar for “nominal evidence” is widespread adoption of encryption by people who have done nothing wrong. If your entire society does it, it gets politically harder to treat it as evidence of wrongdoing.

          2. 2

            It can be solved. People have been doing it forever: talking in code, a form of one-time pad where the semantics of the text are known only to the two parties.

            1. 1

              Are you talking about stenography?

              You could think about the linked post as if you are hiding encrypted data within an image and presenting the image to the adversary. She notices some redundant image data and demands an explanation.

              We try to explain this specific remnant data in a plausible way.

              1. 3

                You mean steganography, not stenography. The latter is a form of typing.

                No, I’m not. I’m talking about how people write something that appears to mean something normal on the surface but has further meaning between two parties :)

                1. 1

                  Ah I see what you mean. That’s somewhat analogous to “encryption” where the ciphertext looks like some innocent plaintext rather than some “suspicious” random noise. The recipient “decrypts” it with their knowledge of the code.

                  It’s a neat way to get deniability! However it is highly specific to circumstances and algorithmically intractable. I also suspect that it would be very easy to break if the adversary suspects there’s a double meaning.

                  1. 1

                    it is highly specific to circumstances and algorithmically intractable

                    Could be seen as strengths.

                    very easy to break if the adversary suspects there’s a double meaning

                    They could never be sure.

                    I’ve put a looot of thought and research into this idea, and I’ve concluded that the only way to “hide in the open” is to share a secret beforehand, just like regular cryptography. But this secret is not a secret key; it is secret semantics. There is no way a program could do this without being some form of neural network, in which case it’s easiest to just be a human and write your own.
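
                    A toy sketch of such pre-shared “secret semantics”: both parties hold a codebook mapping innocuous phrases (invented here for illustration) to their real meanings, so the message on the wire reads as ordinary small talk:

                    ```python
                    # Hypothetical pre-shared codebook; the phrases and meanings
                    # are invented for this example.
                    CODEBOOK = {
                        "the weather has been lovely": "the package has arrived",
                        "aunt maria says hello": "meet at the usual place",
                    }

                    def decode(sentence: str) -> str:
                        # Anyone without the codebook sees only the surface meaning.
                        return CODEBOOK.get(sentence.lower(), sentence)

                    assert decode("The weather has been lovely") == "the package has arrived"
                    assert decode("See you soon") == "See you soon"
                    ```

                    Note that, unlike a real one-time pad, reusing the same codebook phrase leaks patterns, which is one reason this approach is hard to mechanise.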

                2. 1

                  I guess you mean Steganography?

                  (Stenography is a very different art that is also very interesting.)

                  Also, while it isn’t totally clear to me what the indirection above means, it doesn’t look like steganography; it’s more like using common knowledge within a group to hide what you are talking about. (Examples: referring to someone by their childhood name, or mentioning the place “where person x fell in such a funny way”.)

                  1. 3

                    Yes that’s what I was referring to, thanks.

                    And honestly I’m not sure what the user I’m replying to is trying to say.

                    1. 1

                      Precisely, eriki.

                3. 1

                  For some values of “ultimately”, for sure, but what we really want is the (admittedly much weaker) guarantee that no direct analysis of the volume will suggest Alice is lying.

                  A “hidden volume” is trivial to spot with fdisk: the seeming gap is highly suggestive of the existence of another secret key (even more so if the TrueCrypt software is installed on the first layer!)

                  However there are things we can do about this that are worth considering, for example: http://geocar.sdf1.org/rubber.html

              2. 4

                This is a very interesting article! @awn, I’d love to have a chat in more detail offline! It has a lot in common with the design decisions we’ve made in Peergos, where:

                1. All our encrypted data is stored in a key-value store, where the keys are random
                2. You can’t tell the difference between a directory and a small file
                3. Files are split into 5 MiB chunks and also padded
                4. You can’t see the directory topology
                5. You can’t see the size of any individual file, or even the number of files
                6. This has all been designed with plausibly deniable dual (or N) login in mind
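
                Point 3 could be sketched roughly like this. This is illustrative Python only, assuming simple zero padding for brevity; it is not Peergos’ actual code:

                ```python
                CHUNK_SIZE = 5 * 1024 * 1024  # 5 MiB, as in the list above

                def pad_into_chunks(data: bytes) -> list[bytes]:
                    # Cut the plaintext into fixed-size chunks (an empty file
                    # still occupies one chunk).
                    chunks = [data[i:i + CHUNK_SIZE]
                              for i in range(0, len(data), CHUNK_SIZE)] or [b""]
                    # Pad the final chunk to the full chunk size. Zero padding
                    # is used here for brevity; a real scheme needs unambiguous
                    # padding, e.g. appending the plaintext length.
                    chunks[-1] = chunks[-1].ljust(CHUNK_SIZE, b"\x00")
                    return chunks

                # A 1-byte file and a 4 MiB file both occupy exactly one
                # 5 MiB chunk, so ciphertext length reveals only the chunk count.
                assert len(pad_into_chunks(b"x")) == 1
                assert all(len(c) == CHUNK_SIZE
                           for c in pad_into_chunks(b"x" * (4 * 1024 * 1024)))
                ```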
                1. 3

                  Sounds interesting. I am working on applying the idea to create a deniable, in-memory, encrypted filesystem. I would love to compare notes! Feel free to reach out at the email in my profile.

                2. 2

                  I have thought about this in relation to border crossings. I think the best solution is to create some very explicit procedure for retrieving your data that “Mallory” is unlikely to replicate.

                  For example: I am traveling to France, and tell my friend that I will send them a selfie from the Eiffel Tower within 24 hours of my plane landing. If I miss this window, they are instructed to throw my computer into a lake. If I make the window, they unlock my computer and send me my data.

                  In the event that I am detained, it is unlikely that Mallory will be able to replicate this selfie within the allotted time window, or believe that hitting me with a wrench will in any way convince my friend to dive to the bottom of the lake to restore my data.
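
                  The friend’s side of this arrangement is essentially a dead-man’s switch. A minimal sketch, with the 24-hour window taken from the example above and everything else invented for illustration:

                  ```python
                  from datetime import datetime, timedelta
                  from typing import Optional

                  def friend_decision(landing_time: datetime,
                                      proof_received_at: Optional[datetime]) -> str:
                      # The "proof" is the selfie from the Eiffel Tower; None
                      # means it never arrived.
                      deadline = landing_time + timedelta(hours=24)
                      if proof_received_at is not None and proof_received_at <= deadline:
                          return "unlock computer and send data"
                      return "throw computer into the lake"

                  landed = datetime(2024, 6, 1, 12, 0)
                  assert friend_decision(landed, datetime(2024, 6, 1, 20, 0)) \
                      == "unlock computer and send data"
                  assert friend_decision(landed, None) == "throw computer into the lake"
                  ```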

                  1. 4

                    Yeah, but how often do you have data (or travel plans!) that are more important than not getting beaten with a wrench?

                    1. 2

                      For example: I am traveling to France, and tell my friend that I will send them a selfie from the Eiffel Tower within 24 hours of my plane landing. If I miss this window, they are instructed to throw my computer into a lake. If I make the window, they unlock my computer and send me my data.

                      I’ve traveled a lot, and this proposal sounds absurd. Weather, mechanical problems, politics, etc etc can delay you beyond any timescale you’ve planned for. Regardless of your mode of transportation.

                    2. 2

                      If the adversary is willing to beat you with a wrench, they probably won’t stop when you give them the password. Your life is probably forfeit. So you may as well not give it to them.

                      1. 7

                        You are substantially overvaluing your resistance to torture.

                        1. 4

                          Dystopian cyberpunk novel idea: Rubber Hose Security LLC, a company that offers to test your employees’ resistance to torture. ;)

                      2. 2

                        Can someone help me with the math notation? In the part that says:

                        Consider a key-derivation function Φ : K → K × K and a keyed cryptographic hash function H : K × ℕ → K, where K is the key space.

                        What is the x operator that is being used? The key derivation function is from K -> K x K. What does the x denote? I’m having trouble finding a useful resource that explains it since I don’t know the proper name for this operator in this context.


                        1. 5

                          This is the Cartesian product:

                          K × K = {(a, b) : a, b ∈ K}

                          The key derivation function takes one key and produces two. For example, imagine feeding some user input to Argon2, taking the 64-byte digest, and splitting it into two 256-bit keys.
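
                          That splitting step can be sketched as follows. This is a hedged illustration using scrypt as a stand-in KDF (it ships with Python’s standard library; the Argon2 mentioned above needs a third-party package):

                          ```python
                          import hashlib

                          def phi(key: bytes, salt: bytes) -> tuple[bytes, bytes]:
                              # Sketch of Phi : K -> K x K. scrypt stretches the input
                              # key into a single 64-byte digest.
                              digest = hashlib.scrypt(key, salt=salt,
                                                      n=2**14, r=8, p=1, dklen=64)
                              # Cut the 64-byte digest into two 256-bit keys: the
                              # pair (k1, k2) is an element of K x K.
                              return digest[:32], digest[32:]

                          k1, k2 = phi(b"user passphrase", b"per-volume salt")
                          assert len(k1) == len(k2) == 32 and k1 != k2
                          ```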

                          1. 1

                            Thank you! I grasp what’s going on now. Very interesting project. I look forward to seeing how it develops!