

    “And, increasingly, I come in contact with a particular narrative that I strongly dislike: that security is an unsolvable problem.”

    We should do what we can. Still, against attackers that matter, it is true in too many cases. All the important targets, from OSes to companies, have been smashed; for many of them it didn’t take much past some 0-day hunting and social engineering. The good news is that not everyone is targeted that heavily. We can still help people avoid the riff-raff that are doing increasing damage.

    “As best I can tell, this position of hopelessness ironically originates from the actions security people themselves have taken.”

    I criticize the “security people” as much as the next person. Yet the position of hopelessness comes from the persistence of easily-avoidable vulnerabilities in almost everything. Plus the legal system avoiding doing anything about it in terms of liability, even for the obvious stuff. Plus users almost never caring about security in the things they choose, on top of choosing companies that are malicious. It’s easy to feel hopeless when your users, computers, protocols, ISPs, and services all effectively aid the attacker.

    “ Thus, we will hopefully (if we cross our fingers and wish really hard) both avoid surveillance and direct targeting entirely.”

    The author conflates security of our systems/data with what’s essentially anonymity. They are two different things. Anything from thorough effort to high-assurance security can achieve the former in many situations. I don’t expect any software to achieve the latter consistently, especially if you want integration with the modern web or commercial solutions. It’s a cat-and-mouse game rigged heavily against the defender.

    “It was no doubt exacerbated by the revelations of Snowden, Manning, and others”

    This was amusing, because high-security proposals largely survived the NSA per the Snowden leaks. On Schneier’s blog, we took time to compare our solutions against individual leaks to assess that. Mainstream “security people” recommended stuff that didn’t survive. Most people didn’t use our stuff because they preferred what mainstream laypeople, developers, and security people were using. No uptake of the strong stuff. So, per the Snowden leaks, the weak stuff failed and the strong stuff was ignored. That’s bad at least at the TLA level, and they were sometimes buying 0-days off the black market. Bad at many levels.

    “This means that it is impossible for any piece of software to be correct, so all software must either have an infinite number of bugs, or there must be a finite number.”

    INFOSEC 101: Secure software is software that conforms to the security policy in all situations. It can have a million bugs so long as it doesn’t violate the security policy. An example is a separation kernel isolating an untrusted application, with all of its input arriving over simple, mediated IPC. The app can be arbitrarily complex and buggy with no effect unless the kernel itself is compromised. There are many such methods, but the author doesn’t understand INFOSEC. Unfortunately, that’s common in the security industry and in blogging.
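    The separation-kernel idea above can be sketched as a tiny reference monitor: every message between partitions passes a policy check, so a buggy app can't violate the policy no matter what it sends. All names here (`ALLOWED_FLOWS`, `mediate`, the partition labels) are hypothetical illustrations, not any real kernel's API.

```python
# Toy reference monitor enforcing a security policy on inter-partition IPC.
# The policy, not the app's correctness, is what security depends on.

ALLOWED_FLOWS = {("untrusted_app", "logger")}  # policy: who may talk to whom
MAX_MSG_BYTES = 256                            # policy: keep messages simple

def mediate(sender: str, receiver: str, msg: bytes) -> bool:
    """Return True only if the message conforms to the security policy."""
    if (sender, receiver) not in ALLOWED_FLOWS:
        return False          # flow not authorized by policy
    if len(msg) > MAX_MSG_BYTES:
        return False          # oversized messages rejected outright
    return True

# The untrusted app can be arbitrarily buggy; only conforming messages pass.
assert mediate("untrusted_app", "logger", b"hello")
assert not mediate("untrusted_app", "network", b"exfiltrate")
assert not mediate("untrusted_app", "logger", b"A" * 1000)
```

    However many bugs the app has, the worst it can do is send messages the monitor already permits.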

    “We have, today, incredible containment tools in Linux - SELinux “

    Confirmed. To the author’s credit, that will stop a decent chunk of vulnerabilities in applications. Though not as much as the four-decade-old trick of combining a minimal kernel with a safe language and specs. Also probably less than OpenBSD on the kernel side, given the quality of the Linux kernel.

    “so as long as we continue to remove bugs, it will eventually be bug-free. So, assuming we continue to remove bugs at a rate faster than we add them”

    How’s that been working out in the real world? Commercial developers want to build stuff with more features to stay competitive. FOSS developers scratch an itch, improving things in terms of features. Neither side removes bugs faster than it adds features. A few outliers focus on quality over features or try to strike a balance. I’m just saying this assumption should be considered false by default for any software project until proven otherwise.

    “ Strengthen one’s cryptography; layer security; reduce trust; and on and on. There are so many things that can be done now it is difficult to enumerate them.”

    Follow security advice. Layer up defenses. Good, common advice.

    “What happened to the cypherpunks?”

    They did some great work on crypto-related things. At least two helped us a lot on the tech front. Those two stayed focused on crypto, or on businesses whose software ran on insecure endpoints that were later smashed to bypass the crypto. This continues. We need more than cypherpunks, although the attitude and the coding would help.

    “ And we are even making traction on the Trusting Trust problem”

    It’s a solved problem, but the solution takes a hell of a lot more than reproducible builds. Nobody will put in the labor to show there’s no malice in the compiler, prove no optimization pass hurts their software, or buy CompCert and bootstrap it onto a Mini-ML for critical software. ;)

    “If you can attack software, and you have understanding of what you are doing, you can defend that software. “

    Far from true. Those people know how to carry out certain attacks. That might lead to defenses that work for those attacks, depending on whether they address the root cause or do tactical stuff like canaries or randomization. Then there are entire classes of attack they won’t have considered without specialist expertise. Further, even the mainstream security people ignore methods important for finding the rest of the attacks. A quick test is to ask a supplier if they’ve done a covert-channel analysis of their product, what storage/timing channels remain, and which are high bandwidth. You’re going to get blank stares or defensive replies. That’s the easy one, too. Or ask what the Trusted Computing Base (TCB) is, in detail. Easier still. Those that can’t tell you what they have to trust for security probably couldn’t secure it either.

    “So what do I want here? I want security to stop spreading fear, first, and I want glorification of the broken (and the breakers) to take a backseat to glorification of the strong (and the fixers).”

    Here we agree! Put pride and energy into those making stuff great the first time, or at least fixing the stuff that’s critical. It’s what DARPA, NSF, and others do with the money they invest, to top it off. A strong conclusion at least!

    “ The real security people are those who tell you to use Signal, use QEMU: who tell you how to fix it.”

    Oh, never mind. Real security people tell you to avoid smartphones, or electronics in general, because we know endless attacks on them. Then, if you need it, our recommendations get increasingly difficult to form, based on the complexity of the systems, risky uses by people of varying skill, integration with malicious networks, running on insecure endpoints, and so on. Doing real security is quite a bitch when you need connected black boxes, owned by surveillance companies that don’t care about security, inside a trusted perimeter. It got easier with smartphones, since one vendor at least pretends these days, plus is cheaper than the secure alternatives. The kind of thing that shouldn’t make me feel “hopeless.” ;)


      On Schneier’s blog, we took time to compare our solutions against individual leaks to assess that. Mainstream “security people” recommended stuff that didn’t survive. Most people didn’t use our stuff because they preferred what mainstream laypeople, developers, and security people were using. No uptake of the strong stuff. So, per the Snowden leaks, the weak stuff failed and the strong stuff was ignored.

      Do you have time to go into this a bit more deeply (or a link to somewhere this was discussed) – specifically the mainstream recommendations vs security professional recommendations?


        My essays and high-level designs were on his blog, since the engineers there were great at the time and the hosting free. In old-school fashion, I have two text files with links to them that I email on request; I’ll send them to you if you want. As far as examples of this topic go, I’ll try to remember and describe a few, since I doubt I saved them given that we spoke generally, having discussed the specifics for years. In order of memory, not importance.

        1. I emphasized massive FOSS investment into medium- and high-assurance security for common or critical apps/services, as taught in Orange Book B3/A1, Common Criteria EAL6/7, and NSA’s Type 1 devices. The assurance techniques along with a subset of the features; not the red tape. The NSA pentesters often failed to breach such systems, where they succeeded to varying degrees on most everything else. So, we recommended doing what worked. The security industry recommended Windows hardening, Linux with hardening, etc. Most of that failed to regular black hats, with anything popular smashed by the NSA. Even after I show people the links, they still argue their asses off, with one famous person saying, “But do the systems have web browsers?” Well, yeah, but is every use case a web browser, or does it even need one? Servers come to mind…

        2. I recommended military-style link-level encryption between any two nodes, combating covert channels with fixed-size, fixed-rate transmission whose error behavior doesn’t leak much. Fixed-rate where possible, at least. Those deploying just about any secure chat, VPN, or protocol didn’t do this, outside of it arising as a side effect of some streaming setups. Numerous flaws were found in systems that didn’t. They still don’t do it.
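        The fixed-size half of that recommendation is easy to sketch: pad every transmission to a constant frame length so message size leaks nothing to an observer. This is a minimal illustration, not any deployed protocol; `FRAME_SIZE`, the 4-byte length header, and the function names are my own assumptions. A true fixed-rate link would additionally emit dummy frames on a timer, which is omitted here.

```python
# Toy fixed-size framing: every frame on the wire is exactly FRAME_SIZE bytes,
# so traffic analysis can't recover message lengths from frame sizes.
import struct

FRAME_SIZE = 1024             # constant size of every frame on the wire
HEADER = struct.Struct(">I")  # 4-byte big-endian real-payload length

def to_frames(payload: bytes) -> list:
    """Split a payload into fixed-size frames, zero-padding the last one."""
    data = HEADER.pack(len(payload)) + payload
    frames = []
    for i in range(0, len(data), FRAME_SIZE):
        frames.append(data[i:i + FRAME_SIZE].ljust(FRAME_SIZE, b"\x00"))
    return frames

def from_frames(frames: list) -> bytes:
    """Reassemble frames and strip the padding using the length header."""
    data = b"".join(frames)
    (length,) = HEADER.unpack(data[:HEADER.size])
    return data[HEADER.size:HEADER.size + length]

msg = b"attack at dawn"
frames = to_frames(msg)
assert all(len(f) == FRAME_SIZE for f in frames)  # uniform size on the wire
assert from_frames(frames) == msg                 # round-trips correctly
```

        Encryption would then be applied per frame, so ciphertext frames are uniform too.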

        3. I noted that obfuscation on top of good security is very valuable, since attackers must expend extra effort to attack. They sometimes even expose themselves in the process, especially if your error handling highlights that. Many security professionals, repeating what they were told without thinking it through, dismissed it as security by obscurity. Some wiser ones countered that the obfuscations could break the security itself. I clarified that I mean obfuscations that don’t require security techniques to be applied in a non-recommended way. No risk. Examples are an unpopular ISA, PDF reader, distro, or web server that’s otherwise high quality; unusual port numbers or names; grsecurity-style stuff to a degree; and never telling the attacker what you’re using, combined with monitoring. It worked over and over against nation-states, even serving as their main tool against each other in the Cold War, yet security professionals kept dismissing it while one site after another got owned by one-size-fits-all attacks.

        4. I’ll note this as a special example, since people always argued differently over it. Ages ago, I did a polymorphic cipher that essentially combined known-good ciphers operating one after another in counter mode. I randomized which ciphers, in which order, and any keys/counters fed into them, with similar schemes for integrity or authentication with redundancy. The key was extended to encapsulate those choices. Crypto lovers and users argued that the combination might cause problems at the algorithmic level, with no specifics past the DES meet-in-the-middle attack. Whereas the constructions they recommended were periodically beaten by cryptanalysis or implementation attacks on a single algorithm, attacks that would’ve been harder against a chain. It wasn’t until TripleSec that I saw some approval of this. Most stuff still doesn’t do it, though, even now that we have fast algorithms and hardware-accelerated ones.
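        The structure of such a cascade can be illustrated in a few lines. This is emphatically not the author's cipher and not production crypto: a real version would chain vetted ciphers (e.g. AES-CTR then ChaCha20) with independent keys; here SHAKE-256 merely stands in as a keystream generator so the sketch stays stdlib-only, and every name in it is my own assumption.

```python
# Toy cascade of independent keystreams, counter-mode style: each stage XORs
# in its own keystream, so breaking one stream alone recovers nothing.
import hashlib

def keystream(master_key: bytes, label: bytes, n: int) -> bytes:
    # Derive an independent keystream per stage from the master key.
    # (SHAKE-256 is a stand-in for a real cipher in CTR mode.)
    return hashlib.shake_256(master_key + b"|" + label).digest(n)

def cascade(master_key: bytes, order: list, data: bytes) -> bytes:
    # Apply each stage in turn. In the real design, the choice and order
    # of stages are themselves derived from the extended key.
    out = data
    for label in order:
        ks = keystream(master_key, label, len(out))
        out = bytes(a ^ b for a, b in zip(out, ks))
    return out

key = b"master key material"
order = [b"stage-A", b"stage-B"]        # randomized per key in the design
ct = cascade(key, order, b"secret message")
# XOR streams commute, so applying the same stages again decrypts:
assert cascade(key, order, ct) == b"secret message"
assert cascade(key, list(reversed(order)), ct) == b"secret message"
```

        With real block/stream ciphers the stages don't commute, so the order becomes part of the secret, which is exactly the polymorphism being described.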

        5. Eliminating root causes vs tactical mitigations. This pops up repeatedly, partly because the supply side optimizes for the common case to improve sales, so the bad defaults get entrenched. Examples of bad things that were hard to retract: stacks growing in a dangerous direction, no bounds checks, languages hard to run through provers, setuid root, interpreted languages for productivity when productive compiled ones existed, complex protocols/libraries vs minimal ones for minimal use cases, and so on. The vast majority of INFOSEC goes with what I call tactical mitigations that try to counter each individual thing while keeping the root cause. That’s justifiable for legacy systems or something with truly no alternative. We’ve been about fixing root causes, though. One example: Trusted Xenix, the first secure-ish UNIX in production, eliminated setuid vulnerabilities forever while maintaining compatibility with setuid apps. It just cleared the setuid bit whenever such a file was written to, with the admin (or update software, in theory) needing to reset it after approving the change. Super simple but so ignored. Reversed stacks are another. Decimals over floats for basically decimal math. Languages like Wirth’s, with safety on by default and the ability to turn it off per module for low-level or high-speed stuff. Safe concurrency or interface checks in Eiffel. More recently, use of a language that proves the absence of code-injection flaws (i.e. SPARK Ada) for stuff that shouldn’t have code injection. Opa or Ur/Web, or even something like them on top of normal languages, to do the same for the web. Bulletproof clustering or multiversion files for common systems like VMS or NonStop. So on and so forth. Little effort goes into fixing root causes productively and efficiently vs tactical stuff that often fails to clever bypasses.

        6. Separation of trusted and untrusted computers. I said they have to be airgapped, with TEMPEST protection, in a cage with a power filter. An old NSA recommendation, done in defense forever, especially for SCIFs. Some buildings also had noise masking to stop words from getting out, and solid construction, since you could just record a password or leak info through LEDs. It would’ve prevented numerous side channels. Clive Robinson, “man of many brains,” took it further, claiming any sharing of matter or energy between machines might be a side channel. He coined the term “energy gapping” for blocking as many forms of it as possible to prevent “known unknowns” and “unknown unknowns,” and said it might need to be done per computer if malware is the concern. BadBIOS-like attacks happened years later.

        7. Building on CPUs that make it easier to do security or reliability, whether existing or homebrew. They’re currently weaker in a number of ways (especially performance and ecosystem). That’s because people aren’t using them! Buying them, building on them, and including them in security appliances (or anything justifying the price) would increase demand enough to improve the supply side. Right now, there are numerous CPUs out there that are FOSS (e.g. Leon3) and/or improve security (SAFE, CHERI). Some even run an HLL directly (JOP), leaving no abstraction gaps. In any case, industry as a whole, or FOSS groups with money, should’ve funded ASICs made from stuff like this, at least for our most trusted stuff. Integrate it into a decent, expandable board. Keep ASIC and NRE costs low by reusing microcontrollers with on-board hardware for things like Ethernet, storage, HID, etc. Those are $2-30 apiece in volume, with only a few needed. For most attacks, you’ve just got to secure the software we write on them. Alternatively or additionally, an IOMMU. Instead, almost all security folks are pushing the two x86 vendors (a third had security enhancements early) and ARM in their solutions.

        8. Separation kernels for mobile, browsing, and secure comms on untrusted architectures. It doesn’t cover everything, but it’s a nice building block; the monolithic kernels never got the job done. One of the easiest ways to improve Internet-connected or mobile devices is to virtualize the legacy OS in a VM on top of a secure microkernel, with security-critical apps in their own protection domains: the GUI (e.g. the Nitpicker GUI), boot, update checks, and crypto at the least. Many systems did this, partitioning networking stacks, filesystems, and other components. The idea being attackers can compromise the crap out of most of it with the secure stuff still invisible to that part, or simply unbreakable from it. Almost nobody is doing this outside of maybe Genode, despite numerous commercial implementations. OK Labs’ OKL4 was the most widely deployed, but mainly for baseband protection.

        9. Physical separation. They tried to argue cost and difficulty, which was really just an optimization problem you could win by splitting work between powerful and embedded systems. My secure browsing setup was a KVM switch over several computers with controlled sharing. Bypassing it required finding a problem in the dead-simple switch or beating the one component required for sharing, designed with principles like those already mentioned. Way better than the attack profile of VMMs etc. This got cheaper and easier over time, with me eyeing piles of microcontrollers mixed with FPGAs and standard CPUs as the next strategy. The mainstream security people are starting to come around, after they rediscovered cache-based side channels, firmware attacks, and other things high assurance dodged where possible since the ’90s. They’re really panicking over that stuff, although root-cause solutions to each exist in the literature. Sometimes in the market, too, but usually not cheap.

        So, these are just a few categories of things high-security engineers and I discussed on places like Schneier’s blog over the last 10 years. I preach similar stuff when related topics come up. Tools and techniques are better now than ever. Yet mainstream security folks will ignore it, or argue till they’re blue in the face, while their methods keep getting compromised where ours succeeded. They’ll sometimes add new tactics to their already-broken methods, hoping to counter something in a cat-and-mouse game with smart enemies that they keep losing. They have yet to break the cycle and promote much of what I’ve described here. Rust on the language side and the resurgence of interest in specs with TLA+ might be the only counter-examples I can think of. So, I hope that illustrates things on our side vs the majority, both pre- and post-Snowden.

        Btw, here’s one of my design essays to a protégé where I apply incremental, high-assurance security to Tor to counter as much of its threat model as possible. I’ll throw in the security framework I used to use for assurance, since I made it public in 2013. I turned it into essay form in a conversation there.