    1. 10

      Anything that claims to be a security product, and especially an encryption product, that doesn’t contain a prominent description of its threat model makes me super nervous. Encryption doesn’t exist because encryption is cool; it exists because you have a particular adversary in mind. For example:

      • If my laptop is stolen, I don’t want the thief to be able to access my files.
      • If someone compromises a program I’m running, I don’t want them to be able to read my private data or tamper with my files without my being able to detect it.
      • If there is a root- or kernel-privilege compromise on my machine, I want to limit what the attacker can access to the files of users who log in during that compromise.

      Once you’ve identified the threat model, the next thing that you need to do is enumerate the powers that the attacker is expected to have and how you protect against them.

      It looks as if this is a thin wrapper around dm-crypt. This is fine, as far as it goes, but dm-crypt does not provide any integrity protection or side-channel protection. For example, an attacker who can observe the underlying disk traffic can tell which locations on disk were modified and can infer which files you’ve touched in the tomb. They can trivially identify which parts of the file contain the superblock and the other important data structures for the filesystem. An attacker with access to the ciphertext can tamper with it, and you can’t detect this tampering unless it either corrupts a filesystem metadata block and fsck notices, or you have some application-level corruption detection. Even more fun, the kernel ext* filesystems are not robust in the presence of arbitrary filesystem corruption, so an attacker who can tamper with the ciphertext of your tomb can at least crash your kernel and possibly gain more interesting compromises.
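      To make the confidentiality-without-integrity point concrete, here’s a toy Python sketch (the hash-based XOR keystream is a stand-in for a real block cipher, and the key names are invented for illustration, not anything dm-crypt actually does): flipping a ciphertext bit decrypts “successfully” to silently corrupted plaintext, while a dm-integrity-style keyed tag catches the modification.

      ```python
      # Toy illustration: encryption alone does not detect tampering;
      # a keyed per-block integrity tag (the dm-integrity idea) does.
      import hashlib
      import hmac

      def xor_keystream(key: bytes, data: bytes) -> bytes:
          # Derive a keystream from the key; XOR is its own inverse,
          # so the same function encrypts and decrypts.
          stream = hashlib.sha256(key).digest()
          while len(stream) < len(data):
              stream += hashlib.sha256(stream).digest()
          return bytes(a ^ b for a, b in zip(data, stream))

      key = b"sector-key"          # invented name, for illustration only
      tag_key = b"integrity-key"   # invented name, for illustration only
      plaintext = b"important filesystem metadata block"

      ciphertext = xor_keystream(key, plaintext)
      # dm-integrity-style: store a keyed tag alongside the ciphertext.
      stored_tag = hmac.new(tag_key, ciphertext, hashlib.sha256).digest()

      # Attacker flips one bit of the ciphertext on disk.
      tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]

      # Plain encryption: decryption raises no error, it just silently
      # yields corrupted plaintext.
      assert xor_keystream(key, tampered) != plaintext

      # With the stored tag, the modification is detected before the
      # decrypted data is trusted.
      check = hmac.new(tag_key, tampered, hashlib.sha256).digest()
      assert not hmac.compare_digest(check, stored_tag)
      print("tampering detected")
      ```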

      There are two dm-* layers that can provide stronger protection here. For example, dm-integrity stores the integrity tags from the dm-crypt layer in a separate location on disk. This means that an attacker who modifies the ciphertext will produce something whose tags don’t match and so will be detected, unless they are able to see multiple versions of your ciphertext. If they can, then they can mount replay attacks (for example, substituting an old version of a block for a newer one; this can be quite powerful combined with the predictable disk layout). Dm-verity builds on top of dm-integrity by providing a Merkle tree over the tags, so that you can validate that none of them have been tampered with. Unfortunately, dm-verity is currently read-only - it’s primarily used for protecting read-only boot volumes, since the root hash of the filesystem can be provided as part of a secure boot chain to ensure that nothing in the core system has been tampered with. I’d love to see a read-write version.
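      A minimal Python sketch of the Merkle-tree idea (this is the concept, not dm-verity’s actual on-disk hash format; the block contents are made up): because a single root hash covers every per-block hash, substituting an old but previously valid block still changes the root, so replay is caught as long as the verifier holds the current trusted root.

      ```python
      # Merkle-tree sketch: one trusted root hash authenticates all blocks,
      # so swapping in a stale (replayed) block is detectable.
      import hashlib

      def h(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def merkle_root(leaves: list[bytes]) -> bytes:
          level = [h(leaf) for leaf in leaves]
          while len(level) > 1:
              if len(level) % 2:           # duplicate the last node on odd levels
                  level.append(level[-1])
              level = [h(level[i] + level[i + 1])
                       for i in range(0, len(level), 2)]
          return level[0]

      blocks = [b"block0-v2", b"block1", b"block2", b"block3"]
      trusted_root = merkle_root(blocks)   # e.g. anchored in a secure boot chain

      # Replay attack: substitute an old, previously valid version of block 0.
      replayed = [b"block0-v1"] + blocks[1:]
      assert merkle_root(replayed) != trusted_root   # root mismatch: detected
      print("replay detected")
      ```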

      TL;DR: There’s a lot of subtlety in what the various device mapper layers protect you against and I don’t see any evidence that they’ve thought this through. It’s best-effort confidentiality (but not integrity) protection against a fairly weak adversary. Probably better than nothing.

      Oh, and they’re using gpg to protect passphrases. I have no idea if this is a sane thing to do; generally the rule of thumb for anything involving RSA is ‘run far away’. It may be possible to use gpg securely.

      1. 1

        I just thought it was kinda cool to see more options out there.

        Everyone always needs to get overly pedantic about crypto instead of just appreciating that options exist for various threat models. Sure, this likely isn’t going to live up to government-espionage-prevention standards, but to protect against the average thief and gain some reasonable privacy, why not check this out?

        1. 12

          I’m fine with options existing for different threat models, as long as they say what the threat model is. I can then look at them and decide whether their threat model covers the things that I care about. When they have an incredibly long readme for a security tool that doesn’t say anywhere what their threat model is, I get cranky.

    2. 1

      I’ve recently implemented support for Tomb in prs (like pass-tomb) in an attempt to minimize metadata leakage by Tomb’ing the underlying password store.

      It’s a fun tool! It makes working with an encrypted image from the CLI fairly easy. Though I find some commands and output a little weird/hard to work with. It’s essentially a bash tool that dumps output to std{out,err} so it can be tricky to parse and automate with.

      1. 1

        I’ve recently implemented support for Tomb in prs (like pass-tomb) in an attempt to minimize metadata leakage by Tomb’ing the underlying password store.

        I suppose it’s good to mitigate it, but it’s always been a bit of a head-scratcher to me that a password manager with that metadata leakage baked in to its design has apparently become so popular in the first place.

    3. 1

      if the courier is captured then the key can be found on him or her and the password can be obtained using torture. The solution we propose is that of separating keys from storage, so that a courier alone cannot be the single point of failure.

      This is a bit ridiculous. I make sure I know how to decrypt my stuff explicitly in case of torture, because I’d rather decrypt than just be tortured indefinitely. If you are really doing something worth being tortured or killed for, it’s wise to have the option of surrendering quickly.

      1. 0

        I love seeing comments like this, and if I was a malicious actor I would be explicitly looking for them.

        People who are advertising that they are both:

        • Working on something interesting enough that they have considered that they might be tortured for it

        • Stating they would give the key up if tortured

        are very obviously some sort of goldmine that could be tapped by someone sufficiently evil.

        1. 1

          I love seeing comments like this

          Because you are filth and an idiot?

          1. 2

            People with money or secrets worth killing for don’t sacrifice themselves; that’s why they hire couriers.

          2. 1

            It was literally a joke about a hypothetical but real possibility, but go off about it. Here, I’ll explain the joke: It’s funny that people openly disclose secrets about their systems of cryptography and data security, because anyone interested enough could do two seconds of OSINT and have full information about the systems, and thus know what countermeasures to deploy. You are making it vastly, immensely, easier for the attacker to break your security. Security by obscurity is bad if it’s the only thing you rely on, but it is a tool in the box that should still be deployed.

            I think you missed the part where I stated “sufficiently evil”. I wouldn’t do such a thing.

        2. 1

          As somebody with a similar line of thinking, let me challenge your assumption slightly: I am fully aware that, by our society’s moral standards, I’m evil. I’m considering malicious actions. And yet I won’t torture if I need somebody’s keys. There’s no universal playbook for how to handle torture, but a common understanding among torturers is that the act of torture is sadistic, rather than coercive; torture is unlikely to produce working keys, but is likely to permanently damage the target.

          What is being proposed here is something akin to the common understanding that superpowered individuals share in modern fictional universes with superheroes and supervillains. When everybody has superpowers, committing a murder is likely to start a chain of superpowered revenge killings, which is undesirable; this incentivizes a taboo. Similarly, when everybody has access to mathematical cryptography and also has read xkcd 538, then captors and couriers alike know that divulging keys is the quickest and safest option.

          I’ll give Munroe the last word, from the linked comic:

          Actual actual reality: nobody cares about his secrets.

          1. 1

            but a common understanding among torturers is that the act of torture is sadistic, rather than coercive

            This is actually not the case, as evidenced by the actions of the United States Department of Defense and the Central Intelligence Agency.