1. 29

  1. 19

    Eek!

    It looks as though cryptkeeper makes assumptions about encfs' command-line interface that are no longer valid.

    This is a good lesson in the risks of wrapping a command-line interface and blindly trusting that it isn’t going to change.

    1. 4

      Exactly. This is what we should show people when explaining why you shouldn’t just blindly drive another program through calls to exec().
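
      To make that concrete, here is a minimal, self-contained C sketch of the fragile pattern. It is an illustration only, not cryptkeeper’s actual code: cat stands in for the wrapped tool, and the prompt sequence is hypothetical. The parent pipes canned answers into the child’s stdin, keyed to a dialogue it merely assumes:

      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/wait.h>

      int main (void)
      {
          int fd[2];                        /* pipe feeding the child's stdin */
          const char *password = "hunter2";

          if (pipe (fd) == -1) { perror ("pipe"); return 1; }

          pid_t pid = fork ();
          if (pid == -1) { perror ("fork"); return 1; }

          if (pid == 0) {
              /* child: stand-in for the wrapped interactive tool */
              dup2 (fd[0], STDIN_FILENO);
              close (fd[0]);
              close (fd[1]);
              execlp ("cat", "cat", (char *) NULL);  /* placeholder, not encfs */
              perror ("execlp");
              _exit (127);
          }

          /* parent: blindly answer the prompts it assumes the child will
           * ask, in the order it assumes they will be asked */
          close (fd[0]);
          write (fd[1], "p\n", 2);                     /* "mode" answer... hopefully */
          write (fd[1], password, strlen (password));  /* the passphrase... hopefully */
          write (fd[1], "\n", 1);
          close (fd[1]);

          waitpid (pid, NULL, 0);
          return 0;
      }

      Nothing here ever reads what the child actually printed. If the wrapped tool adds, removes or reorders a prompt, those canned bytes silently answer the wrong questions; a wrapper that read back each prompt and verified it matched the expected text would at least fail loudly instead of corrupting data.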

    2. 6

      It’s so disappointing seeing these bugs, as, at least for me, they have an enormous impact on my confidence in open-source encryption software to safeguard my data. Bruce Schneier remarked way back in 1999 that “In the cryptography world, we consider open source necessary for good security.” He’s right, but at the same time, this bug, like many before it, is amateur stuff. I mean, look at this code excerpt:

      // paranoid default setup mode
      //write (fd[1], "y\n", 2);
      //write (fd[1], "y\n", 2);
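      // (note: newer encfs no longer asks the mode question here, so this "p" gets read as the passphrase)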
      write (fd[1], "p\n", 2);
      write (fd[1], password, strlen (password));
      write (fd[1], "\n", 1);
      

      Coupled with the earlier execve call to an interactive command-line utility, there is just so much wrong with this. I could be wrong, but I just don’t see these sorts of fundamental, basic mistakes being made in major proprietary cryptography software? I’m not sure what the conclusion here is, but it seems there’s a real lack of serious code review, adherence to well-defined coding standards, periodic code audits, etc. to avoid this stuff from ever making it into a stable release. OpenBSD is the only open-source security project I can think of that gets this stuff right. They’ll still make mistakes, but I at least have confidence they’re working hard to avoid them.

      Likely controversially, I’d argue there’s a broader issue with applying the Unix philosophy of individual utilities that each perform a few well-defined functions, often bound together by shell scripts, to security software. It’s inherently brittle, with each additional independent binary or script making the overall system more fragile. A monolithic approach makes sense here: keep external dependencies to a minimum and rely exclusively on stable, ideally low-level APIs that can reasonably be trusted to be correctly implemented. It’s just too damn hard to manage a web of disparate utilities, scripts and dependencies with a guarantee they will work together exactly as intended for tasks which are security-critical.

      1. 8

        I just don’t see these sorts of fundamental, basic mistakes being made in major proprietary cryptography software

        You are very naive to think this. People are people, no matter whether you can see the implementation or who paid for it.

        1. 2

          It’s not about the people; I realise people are fallible. It’s about processes to combat that inherent fallibility. My point is that I wonder whether much of the open-source community, w.r.t. security software, is behind in implementing processes, some of which I referenced, to reduce or hopefully eliminate these sorts of mistakes. Irrespective of your opinion of proprietary software, it’s indisputable that companies like Microsoft have implemented processes designed to improve the security of their software, and they’re often quite rigidly enforced. See e.g. the Security Development Lifecycle.

          Are there parallels in prominent open-source security software? Open question, because I legitimately don’t know, but if there are, they often don’t seem to work. It’s clearly possible to get it right, though, with OpenBSD the gold standard for high-quality, high-security, open-source software.

          Put bluntly, assuming I’m not protecting against state-level actors, I’d have more confidence in something like BitLocker encrypting my data properly than in most open-source security software. And that’s sad, because I’d much rather use open-source cryptography for the numerous obvious reasons. But taking security seriously means that bugs which result in my data being encrypted with the passphrase “p” comprehensively destroy my confidence. It’s all well and good that the software is open-source, but if nobody is reviewing whether the implementation is correct, then one of the most convincing points in favour of using open-source cryptography is pretty much eliminated.

          1. 2

            Comparing the bugs you can see with the bugs you can’t will only ever have one outcome. You say that processes don’t seem to work for OSS, but what makes you think that Microsoft’s process is working out any better for them?

            I’d trust dm-crypt/cryptsetup-luks (or whatever the current mainstream option is) over BitLocker any day. The popular OSS crypto does get in-depth review (at least going by the maintainers I’ve talked to). This “cryptkeeper” is a tool I’d never heard of that sounds like it’s virtually unmaintained. I would say it’s worth checking that security software in particular, even if OSS, comes from a reputable source, and I do think that Ubuntu and Debian in particular need to get a lot more serious about auditing the software they put their name on, especially security software.

        2. 3

          I’m less confident that there’s greener grass.

          I just don’t see these sorts of fundamental, basic mistakes being made in major proprietary cryptography software?

          I don’t know for sure, but my guess depends on what type of software you mean. If you mean expensive enterprise stuff that has gone through security audits, maybe some confidence is warranted. I mean, there is really shoddy enterprise software too, but I’d be willing to venture some cautious optimism that it’s possible to purchase a not-totally-useless encryption solution.

          But in the space cryptkeeper is operating in, i.e. desktop software targeted at regular end users, I have less confidence. The quality of proprietary software in this space is… bad. There is some extremely bad code out there hiding inside proprietary backup tools, anti-virus suites, etc. The fact that most proprietary disk-encryption software is part of these same suites with a half-dozen utility tools bundled together, most of which are just bad all around, makes me even more skeptical. I would really not bet any money that something like Symantec Endpoint Encryption or Sophos SafeGuard isn’t riddled with holes.

          1. 8

            As someone who has done security audits on some of this “expensive enterprise stuff”, I think you should have about as much faith in it as you do in the “desktop software”. There is a massively disturbing lack of good cryptography in them, and even when problems are found by auditors they tend to be written off as “acceptable risk”. I’ve been told that crypto was unbreakable, and that crypto findings didn’t matter because the attacker would need access to the machine, despite my having demonstrated things like privilege escalation about 10 minutes earlier in the debrief. In fact, the more expensive the product, the less likely they seem to be to adopt non-FIPS-140 crypto, and thus they are just fine with 3DES. At least that is my experience.

            1. 2

              3DES is fine, isn’t it? A bit slower than AES and rarely hardware-accelerated, but it’s not like it’s known-broken or anything.

              1. 2

                Depends on how you define “fine”, I suppose. 3DES has a couple of attacks that weaken certain keying options, padding issues, relatively small key sizes (I’ve seen 56-bit 3DES more often than you could imagine), all the issues that come with CBC, and, most importantly, it isn’t Authenticated Encryption (AE/AEAD). All of these essentially make 3DES the weakest common denominator, and that’s not counting the speed issues you mentioned, which are pretty critical. When dealing with things like FIPS-mode devices it’s pretty important to future-proof yourself: NIST has said that supporting 3DES in 2030 will put you out of compliance, so if you don’t add or remove approved ciphers and 2030 rolls by, you could very easily lose your certification, which is absurdly expensive and can crush an entire business.
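
                The missing-AEAD point is the one that bites most in practice. Here’s a small OpenSSL-based C sketch, with a hypothetical fixed key and IV purely for illustration, showing that 3DES-CBC decryption happily “succeeds” on a tampered ciphertext; an AEAD mode would reject it outright:

                /* compile: cc 3des_demo.c -lcrypto */
                #include <openssl/evp.h>
                #include <stdio.h>
                #include <string.h>

                int main (void)
                {
                    unsigned char key[24], iv[8];
                    unsigned char pt[] = "pay alice $100.00 thanks";  /* 24 bytes */
                    unsigned char ct[64], out[64];
                    int n, ctlen = 0, outlen = 0;

                    memset (key, 0x42, sizeof key);  /* fixed demo key: never do this for real */
                    memset (iv, 0x24, sizeof iv);

                    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new ();
                    EVP_EncryptInit_ex (ctx, EVP_des_ede3_cbc (), NULL, key, iv);
                    EVP_EncryptUpdate (ctx, ct, &n, pt, (int) strlen ((char *) pt));
                    ctlen = n;
                    EVP_EncryptFinal_ex (ctx, ct + ctlen, &n);
                    ctlen += n;

                    ct[13] ^= 0x01;  /* attacker flips a single ciphertext bit */

                    EVP_DecryptInit_ex (ctx, EVP_des_ede3_cbc (), NULL, key, iv);
                    EVP_DecryptUpdate (ctx, out, &n, ct, ctlen);
                    outlen = n;
                    if (EVP_DecryptFinal_ex (ctx, out + outlen, &n) == 1) {
                        outlen += n;
                        /* no integrity check, so we land here: one garbled block,
                         * one precisely flipped bit, and no error anywhere */
                        printf ("decrypted \"OK\": %.*s\n", outlen, (char *) out);
                    }
                    EVP_CIPHER_CTX_free (ctx);
                    return 0;
                }

                Flipping that one bit garbles the block it lives in and flips the matching bit in the next block, and nothing in CBC notices. Run the same experiment with an AEAD cipher like EVP_aes_256_gcm() (plus its tag handling) and EVP_DecryptFinal_ex returns 0 for the tampered message.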

        3. 2

          Heh! Fascinating.

          When I saw this title I thought, this must be an old story, this is the bug my ex-colleague found years ago…

          On closer inspection it’s an entirely different bug in an entirely different package.

          It’s a curious mode of failure common to crypto…

          Something screws up the password handling so the space of all possible passwords is squished down to something much smaller: in this case, just “p”.

          (I vaguely remember in the other case the password was heavily truncated so “password” and “pissword” were different passwords, but “password” and “passwords” were identical. I’ve forgotten the details).

          Anyhoo, it appears that the crypto magic is working and working well… everything is unintelligible until decrypted.

          And in some cases unintelligible unless the right password (for a curiously broad definition of right) is used for decryption.

          But how do you test for this class of bug?

          In this particular case testing is relatively easy.

          In general it’s extraordinarily hard!
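
          One cheap, general check is a property test over the key-derivation step: distinct passwords must derive distinct keys, and a wrong password must fail to decrypt. A toy C sketch of the harness shape, where derive_key() is a deliberately buggy truncating stand-in, not any real library’s KDF:

          #include <stdio.h>
          #include <string.h>

          /* deliberately buggy stand-in KDF: silently uses only the
           * first 8 characters (not any real library's derivation) */
          static void derive_key (const char *password, char key[9])
          {
              snprintf (key, 9, "%.8s", password);
          }

          int main (void)
          {
              /* pairs of distinct passwords that must not collide */
              const char *pairs[][2] = {
                  { "password", "pissword"  },  /* differ inside the first 8 chars */
                  { "password", "passwords" },  /* differ only after 8 chars */
              };
              int failed = 0;

              for (size_t i = 0; i < sizeof pairs / sizeof pairs[0]; i++) {
                  char k1[9], k2[9];
                  derive_key (pairs[i][0], k1);
                  derive_key (pairs[i][1], k2);
                  if (strcmp (k1, k2) == 0) {
                      printf ("FAIL: \"%s\" and \"%s\" derive the same key\n",
                              pairs[i][0], pairs[i][1]);
                      failed = 1;
                  }
              }
              return failed;
          }

          The second pair catches exactly the truncation bug described above. An end-to-end variant of the same property, encrypt with one password and check that decrypting with any different password fails, would have caught the “p” bug here too.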

          In some cases, like in the Ashley Madison hack, the squishing is deliberate (they squished the space of all possible passwords to lower case so as to reduce support queries from people who had forgotten which letter they had made uppercase!)

          Even if the crypto library is perfect… you still can’t combat, at the library level, stupidity at the UI level.

          1. 2

            (I vaguely remember in the other case the password was heavily truncated so “password” and “pissword” were different passwords, but “password” and “passwords” were identical. I’ve forgotten the details).

            crypt() used to truncate passwords to 8 chars. The first reference I could find: https://en.wikipedia.org/wiki/Crypt_%28C%29#Traditional_DES-based_scheme
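
            A quick demonstration, assuming a libc whose crypt() still implements the traditional DES-based scheme (on glibc, compile with -lcrypt; modern libxcrypt builds may have it disabled, in which case crypt() returns a failure token instead):

            #include <crypt.h>   /* some systems declare crypt() in unistd.h instead */
            #include <stdio.h>

            int main (void)
            {
                /* a two-character salt selects the traditional DES-based scheme,
                 * which silently ignores everything past the 8th character */
                printf ("password  -> %s\n", crypt ("password", "ab"));
                /* crypt() returns a static buffer, hence one printf per call */
                printf ("passwords -> %s\n", crypt ("passwords", "ab"));  /* identical hash */
                printf ("pissword  -> %s\n", crypt ("pissword", "ab"));   /* different hash */
                return 0;
            }

            The first two lines print the same hash, which is exactly the “password”/“passwords” collision described above.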