1. 22

  2. 11

    This touches on something I’ve been saying for a while. There’s an ideological view going around that “open source” is always good and any “security by obscurity” is always bad. IMO, this is completely wrong. Open-source security is a good system only in specific situations: with code that’s widely used and actively developed enough that bugs may actually get noticed and fixed faster, such as web servers, operating systems, encryption algorithms and libraries, etc.

    When it comes to locking down anything else, there is no benefit to giving out information about your internal processes that could help an attacker. Obscurity shouldn’t be used as a substitute for strong passwords, good encryption, and other best practices, but hiding details, using unusual configurations and tools, and similar measures can do a lot to stop attacks in their tracks, sometimes even fairly sophisticated ones. Attackers don’t have infinite resources either; if your system is too tough to get into for its value, they will move on to something else.

    1. 6

      When it comes to locking down anything else, there is no benefit to giving out information about your internal processes that could help an attacker.

      I often pointed out that gathering internal information to set up a better attack is the first step in hacking. So why give it to them if you’re getting no benefit? If they say “review,” I tell them to pay someone with talent to review it, even a college student with a track record of breaking stuff if the professionals cost too much. That’s better than the nothing they’ll probably get by FOSSing stuff of no interest to legitimate bug hunters.

      1. 5

        My view is that in cost/benefit terms obscurity is almost never worth it. For the same effort as the continuous discipline it takes to keep something obscure, you could have implemented something more effective.

        Also, a lot of vulnerabilities happen because of confusion between what is and isn’t secure. So I’d argue that using obscurity makes your whole system a lot more vulnerable: if something is habitually kept secret, it’s very easy for the next maintainer to assume that anyone with access to it is supposed to have that access.

        1. 3

          it takes to keep something obscure, you could have implemented something more effective.

          What’s more effective that works in the open against the NSA’s $212 million a year budget, or whatever the Russians/Chinese spend? Or for a small team of IT people just countering black hats who stay at this stuff all night? I haven’t seen any evidence that what you’re claiming is possible. Most of the companies that thought they were doing it later admitted to being breached invisibly. The FOSS folks who put effort into security got surprised by new classes of attack later on. My obfuscations defended users in several of those cases.

          I do agree that it can take time. That’s why I advocate low-effort obfuscation if time is a concern. Just using a different OS (e.g. Linux) with unusual software or sandboxes can go a long way. These days, I’d say a BSD, since Linux-based stuff is prevalent enough to draw lots of attacks.

          1. 4

            What’s more effective that works in the open against the NSA’s $212 million a year budget, or whatever the Russians/Chinese spend?

            If we believe the Snowden leaks then they can’t break GPG when used correctly, and tended to focus on compromising the endpoint machines. If I were worried about the NSA I’d run Tails, booting from a USB key that I kept on my person.

            Or for a small team of IT people just countering black hats who stay at this stuff all night?

            Run OpenBSD. Don’t run any services beyond the ones you need. Use fail2ban with ssh.
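
            To make that concrete, here is a minimal Python sketch of the idea fail2ban implements: count failed SSH logins per source address and ban repeat offenders at the firewall. The log path, threshold, and pf table name are my assumptions for illustration, not settings from any particular deployment.

            ```python
            # Sketch of the fail2ban idea: scan the auth log for failed SSH
            # logins, then add repeat offenders to a pf firewall table.
            import re
            import subprocess
            from collections import Counter

            AUTH_LOG = "/var/log/authlog"   # assumed sshd log location (OpenBSD default)
            THRESHOLD = 5                   # failed attempts before banning (assumption)
            FAILED = re.compile(r"Failed password for .* from (\d{1,3}(?:\.\d{1,3}){3})")

            def scan_and_ban() -> None:
                failures = Counter()
                with open(AUTH_LOG) as log:
                    for line in log:
                        match = FAILED.search(line)
                        if match:
                            failures[match.group(1)] += 1
                for ip, count in failures.items():
                    if count >= THRESHOLD:
                        # "bruteforce" is an assumed pf table, paired with a block rule.
                        subprocess.run(["pfctl", "-t", "bruteforce", "-T", "add", ip])

            if __name__ == "__main__":
                scan_and_ban()
            ```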

            Most of the companies that thought they were doing it later admitted to being breached invisibly. The FOSS folks who put effort into security got surprised by new classes of attack later on. My obfuscations defended users in several of those cases.

            Honestly you come across as overconfident in general; maybe that’s necessary to get anything done, but I’d want to see an analysis from someone I trusted before I believed something so contrary to years of crypto advice (which has generally aligned with my own experience).

            1. 4

              I’m speaking like a paranoid, not a confident person. I’ve seen most of what people are recommending get smashed after enough eyes were finally put on it. Tails depends on Linux and apps with a poor security track record; it will probably get smashed. GPG is a safe bet since its key function is simple and the NSA couldn’t crack it years ago. Probably OpenBSD too, if key targets aren’t using it. Now you’re down to one tool you can trust, one OS you can’t, and one that will work for an unknown amount of time. Unlike with obfuscated setups, they also know exactly what you’re using should they ever want to find the 0-days as cost-effectively as possible.

              Note: If you’re wondering what I’d do, I tell people to use computers as little as possible if nation-states are the enemy. Go old school with paper, pencil, meetings, and trusted couriers. Optionally, throw in bug sweeps. It’s what the elites do, with a good track record.

              “but I’d want to see an analysis from someone I trusted before I believed something so contrary to years of crypto advice”

              I hear you. I agree. Let’s do that. So, I initially followed cryptographers’ advice of using one construction for each use case with whatever protocol implementations were recommended. Attacks or just algorithmic weaknesses came in over time on a lot of what they recommended. Another part of INFOSEC said to do defense in depth, layering things up so enemies have to find flaws in multiple algorithms and/or implementations. That advice provably would’ve prevented the problems I experienced following the traditional advice of cryptographers, since the weaknesses or vulnerabilities rarely overlapped. Additionally, a lot of implementations were broken subsequent to going FOSS, since being closed-source had hidden their flaws for quite a while. Others built with strong security and obfuscation, like Boeing’s SNS Server, have no breach on record over decades. People continue to pay $200,000 per HA pair for those.

              Based on such empirical evidence, I went with the strategy of (a) using individual components recommended by cryptographers in their intended use cases, (b) layering them where possible so long as the integration doesn’t violate usage constraints, (c) adopting higher-assurance implementations where possible, and (d) using site-specific obfuscations on as many as I can without creating a maintenance nightmare (usually scriptable; see the sketch below). It’s actually not all that different from an example (OpenBSD) you cite, since they similarly combine strong algorithms, strong assurance of code, tactical mitigations of known issues, and tons of obfuscation in things like memory use or layout. That you argued against my position with an OS doing similar things means we might have more common ground than you initially thought. :)
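
              As a toy illustration of (d), a site-specific obfuscation can be as simple as deriving non-default service ports from a per-site secret, so every deployment looks different to a scanner while staying scriptable. This sketch and its names are hypothetical:

              ```python
              # Hypothetical "scriptable obfuscation": derive stable, non-default
              # ports from a per-site secret, so deployments differ from each other
              # without manual bookkeeping.
              import hashlib

              def site_port(site_secret: bytes, service: str,
                            low: int = 20000, high: int = 60000) -> int:
                  digest = hashlib.sha256(site_secret + b":" + service.encode()).digest()
                  return low + int.from_bytes(digest[:4], "big") % (high - low)

              # The same secret always yields the same port, so configs are regenerable.
              print(site_port(b"example-site-secret", "ssh"))
              ```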

              1. 2

                Hah, I’m quite aware that OpenBSD doesn’t take the approaches to security that I would, and I find it dubious for that reason. But it’s production-ready enough for IT departments in a way that Mirage-based unikernels or Qubes aren’t.

                1. 1

                  Ah, interesting.

                  What sort of thing was it where this degree of security mattered?

                  1. 2

                    Well, for high-assurance safety (a prerequisite to security), it’s typically characterized by vendors as transportation (esp. aerospace and trains), industrial automation, medical devices, networking gear, and recently radio. I’d say timekeeping too, like atomic clocks, but not everyone agrees. For security directly, throw in banking, IP protection, contract negotiations (esp. international), legal teams, reporters in surveillance states, and so on. Each of these has serious muscle coming at its systems that only the level of security I described can defeat or detect reliably over the long run, versus hit and miss.

                    For me, I used robust stuff in designs for protecting secrets (mine or others’), root of trust (esp. for boot/recovery), key management, crypto things that use keys (esp. files or messaging), backup/restore, broadcast of data needing high integrity/authentication, remote access, logging, repos, and especially guards that control the flow of data between networks of different security levels. I implemented guards, messaging, and storage more than most other things since I used them and people were interested. I mostly did applied research, though.

                    The other issue to bring up is that one of the founders of high-assurance security, Bell of the Bell-LaPadula model, said that all network-connected devices had to be high-assurance due to implications of the Intermediate Value Theorem: the boxes in the middle would be used to toast us despite strong endpoints. I thought that was a bit much a long time ago, but then the DDoSes started adding up to ridiculous amounts:

                    http://www.networkworld.com/article/3123672/security/largest-ddos-attack-ever-delivered-by-botnet-of-hijacked-iot-devices.html

                    Note: Passive or active side-channel attacks from middle devices might also be an argument here. They’ll happen if software becomes strong enough.

                    The solution to that involved identification by ISP boxes, rate limiting, and/or ejection of nodes from the network. If that’s not implemented, then all the endpoints would require high-assurance security against code injection and configuration errors to stop the DDoSes, as in Bell’s hypothesis. Otherwise, the neighborhood-level boxes or cable modems would have to do the policing with high assurance, or someone is toast. So, you ask “what sort of thing… degree of security mattered?” Bell would’ve told you: everything on a network shared with a critical online service that can’t use or afford leased lines. I’m leaning toward agreement.
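
                    To make the rate-limiting piece concrete, here is a minimal token-bucket limiter of the sort an ISP-side box could apply per subscriber. The parameters are illustrative, not anything Bell specified:

                    ```python
                    # Minimal token-bucket rate limiter: each packet spends a token;
                    # tokens refill at a fixed rate, so sustained floods are dropped
                    # once the burst allowance is exhausted.
                    import time

                    class TokenBucket:
                        def __init__(self, rate: float, burst: float) -> None:
                            self.rate = rate           # tokens added per second
                            self.capacity = burst      # maximum stored tokens (burst size)
                            self.tokens = burst
                            self.last = time.monotonic()

                        def allow(self, cost: float = 1.0) -> bool:
                            now = time.monotonic()
                            self.tokens = min(self.capacity,
                                              self.tokens + (now - self.last) * self.rate)
                            self.last = now
                            if self.tokens >= cost:
                                self.tokens -= cost
                                return True
                            return False

                    # e.g. allow a sustained 100 packets/sec with bursts of 200
                    limiter = TokenBucket(rate=100, burst=200)
                    ```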

        2. 9

          All kinds of crypto people griped about this but never posted a single attack against such a scheme. Whereas, I provably stop one-size-fits-all attacks on crypto by layering several randomly with at least one strong one.

          I know you know your shit Nick, but come on here. This part is not good advice. Just because no one bothers to attack a home-rolled cryptosystem doesn’t mean that it’s good for anything. Absence of evidence is not evidence of absence.

          The minimum standard for using any kind of cryptosystem should be a rigorous positive proof that it is secure, at which point it makes sense for cryptographers to spend their limited time analyzing and critiquing it.

          Yes, there are certain constructions that are hard to mess up (which it sounds like you are probably advocating here), but these constructions (if done correctly) are also easy to generate positive security proofs for, so my earlier comment still stands.

          1. 5

            “Just because no one bothers to attack a home-rolled cryptosystem doesn’t mean that it’s good for anything. Absence of evidence is not evidence of absence.”

            I could see how you’d say that. It’s generally true. I’m not sure it applies for this one. Let me show you what I was thinking.

            Them: Use this cipher in this mode with random key, etc. Then, outputs will tell you nothing about the inputs.

            Me: Ok, so I’ll do exactly that. I’ll also do it with two more same way.

            Them: No, you can’t combine ciphers since you don’t know how they’ll interact.

            Me: You just said what comes out of a good one tells you nothing about the input. You also gave no restrictions on what goes in. I’m just composing functions the way you said they work.

            Them: See the DES Meet-in-the-Middle attack.

            Me: That was the same cipher used twice in a row. I’m using different ciphers in intended use case. I mean, you cryptographers never tell people who encrypt something to never encrypt it again with a new algorithm and key.

            So, that was roughly the conversation with the ones who didn’t just dismiss it. The only homebrew aspects were essentially generating the random seeds, chaining the functions in a pipeline, and overwriting memory afterward. The algorithms and implementations were pre-made. I attributed their problem with it to a reflex reaction to anything homebrew, or to concerns based on irrelevant problems (e.g. 2DES’s unique issue): repeating what they were told without much thought.
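
            For concreteness, here is roughly what that pipeline looks like using pre-made AEAD primitives from Python’s `cryptography` package, with independent random keys and nonces per layer. The specific ciphers are my illustrative picks, not a prescription:

            ```python
            # Cascade sketch: two independent, standard AEAD ciphers chained in a
            # pipeline, each with its own freshly generated key and nonce.
            import os
            from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

            def cascade_encrypt(plaintext: bytes):
                k1, n1 = os.urandom(32), os.urandom(12)   # layer 1 key material
                k2, n2 = os.urandom(32), os.urandom(12)   # layer 2 key material
                inner = AESGCM(k1).encrypt(n1, plaintext, None)         # AES-256-GCM
                outer = ChaCha20Poly1305(k2).encrypt(n2, inner, None)   # ChaCha20-Poly1305
                return (k1, n1, k2, n2), outer

            def cascade_decrypt(keys, outer: bytes) -> bytes:
                k1, n1, k2, n2 = keys
                inner = ChaCha20Poly1305(k2).decrypt(n2, outer, None)
                return AESGCM(k1).decrypt(n1, inner, None)
            ```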

            Further, I put the burden of proof on them, since I was leveraging their prior proofs about individual components while they were just speculating that integrating diverse algorithms might suddenly break one. They never showed evidence of a general problem in layering crypto. So, I developed “dangerously” to be safer. :)

            “but these constructions (if done correctly) are also easy to generate positive security proofs for, so my earlier comment still stands.”

            I encourage that for people with the specialist skills. I didn’t have them. Had to work with what I was given.

          2. 3

            I don’t think there is much to debate here, really.

            People like to characterize this as a “versus” debate, wherein one side, “obscurity,” is better or worse than the opposite (I forget the term, but it’s the property where a system remains mathematically secure even when everything is known about its operation). I’ll call it cryptography for lack of knowing the correct term.

            The truth is that it’s not about that, but about “obscurity versus high-value, buggy code” and “poorly implemented cryptography versus well-implemented cryptography”.

            All of the examples given in this post fall into those categories. Choosing obscure systems is wise not because of anything having to do with cryptography, but for the reasons outlined in the post. So to get real security you combine the best of both worlds: choose obscure, well-designed systems, and use real cryptography when it’s available.

            We all want “mathematically provable” security, but in real life that is simply hard to come by, and therefore when that option is unavailable, obscurity is the only infosec option remaining, a form of opsec.

            1. 2

              That seems like a good position. Yeah, I see it as one more property that might hurt, might help, and is probably necessary in the big picture.