1. 42
  1.  

  2. 10

    Along these lines, see David Reed’s memories of UDP “design,” where he notes that he and Stephen Kent argued unsuccessfully for mandatory end-to-end encryption at the protocol layer circa 1977.

    1. 3

      Also the “bubba” and “skeeter” TCP options, a 1991-era proposal for opportunistic encryption of TCP connections (https://simson.net/thesis/pki2.pdf, https://www.ietf.org/mail-archive/web/tcpm/current/msg05424.html, http://mailman.postel.org/pipermail/internet-history/2001-November/000073.html).

    2. 3

      Good to have this info. In isolation, though, it makes it seem like security thinking started in the 1980s, or that the government’s only efforts were killing it. To complement it, here’s Schafer’s write-up on what it was like before security was conceived as a field, the steps the field took, the TCSEC as a solution, and its failures. One of its inventors points out here that it succeeded insofar as the only secure systems ever built at the time were designed for TCSEC. The private market never produced any without that regulation.

      Back to this article: the early secure UNIXes were developed academically and commercially to the standards Schafer and Bell describe. Security is a holistic thing; it doesn’t just mean adding Kerberos. The system needs rigorous design, analysis, implementation, and testing. Examples of systems that got this were UCLA Secure UNIX (1977–1979) academically and Trusted Xenix (1990) commercially. OpenBSD stepped in with a different take right after the last release of Trusted Xenix. So, there have been steady examples of what UNIX would look like if the authors cared about security through and through. They weren’t doing that, from what I’ve seen of protocol and implementation vulnerabilities over time.

      Also, note that Thompson knew about these techniques from MULTICS, which later earned a TCSEC B2 rating. They deliberately avoided a secure implementation for better performance and RAM use on early hardware.

      1. 2

        Schafer’s writeup is interesting. It clearly blames the commercial industry for failing to take security seriously, while Gettys claims the government-funded designers did consider security from the beginning but didn’t want their works labeled munitions and subjected to export controls.

        So it seems our situation is twofold: the profit-driven commercial industry preferred features to security, and the publicly funded designers did consider security but had real restrictions they wanted to avoid.

        1. 2

          I don’t buy Gettys’s claim, because security involves design, implementation (esp. interface checks), testing, and so on. Most of the UNIX community wasn’t doing almost any security-boosting activity. Like many do today, he equates security with specific security features, such as an encryption or authentication service. Sure, those would boost security of those parts; everything else can still be hacked. The government wasn’t doing anything to stop them from making simpler designs, using bounds checks, etc. Burroughs did that in the hardware + kernel, MULTICS did it in the kernel (using PL/I), Wirth did it in all his languages, and so on. The UNIX crowd strongly resisted anything that boosted security. To me, this article clears up one part of the mythology but adds to another.

          The other thing to remember is that commercial, publicly-funded, and government aren’t each one thing: different groups exist in each with different priorities. Interestingly, the commercial sector as a whole rejected security, with Burroughs about the only one building it in a bit. They got rid of hardware checks per market demand for performance, dropping it back to zero. In CompSci, there was a tiny sub-field focusing on security (esp. high-assurance), while others built insecure stuff since they didn’t care. In government, most intelligence and police organizations pushed for the export controls to achieve their goals, while a niche group at DoD and in the military branches were inventing and funding INFOSEC, with things like the TCSEC emerging. As Bell indicates, the NSA killed the effort off with MISSI, on top of software suppliers bribing Congress to get the COTS mandates into acquisitions by saying the private sector is always better. In security, no the hell it isn’t, but we got malware-loaded Windows boxes controlling Predator drones anyway. (Dr Evil meme) “Progress.” ;)

          Note: the crypto-war victory didn’t actually free encryption. Only mass-market and some other forms of crypto got the new classification. Anything certified EAL6/7 (i.e. NSA can’t hack it) or any custom cryptosystem is still a munition per the sheets when I last read them. Hell, most things are still munitions. Even if they let them through now, they could always use that as leverage later on a vendor. I suspect they do, for “cooperation” in industry on “SIGINT enabling.”

          1. 2

            I agree with you that we would need to basically throw everything away and start again to get the level of security needed for building trusted systems. We need to redesign the processors (likely many small, isolated cores, each running one isolated process). Get rid of the operating system completely (have independent, isolated objects running on the CPU, with only message passing as interaction). Rely on a capability model for permissions. Getting the security required needs a complete rethinking of the whole stack. You might disagree on the specifics here, but you likely agree there would need to be a big engineering effort.
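            The isolated-objects-plus-message-passing idea above can be sketched in a few lines. This is my own toy Python illustration (all names here are invented for the example, not from any real system): each object owns private state, and the only way in is a message delivered to its mailbox.

```python
# Toy sketch of "isolated objects interacting only via message passing":
# each object owns private state; senders never touch that state directly,
# they can only put a message in the object's mailbox.

import queue

class IsolatedObject:
    def __init__(self):
        self.mailbox = queue.Queue()
        self._balance = 100          # private state, never shared directly

    def handle(self):
        """Process one message; the sender never reads _balance itself."""
        msg, reply_to = self.mailbox.get()
        if msg == "withdraw_10":
            self._balance -= 10
        reply_to.put(self._balance)  # results also travel by message

account = IsolatedObject()
replies = queue.Queue()
account.mailbox.put(("withdraw_10", replies))
account.handle()
balance_after = replies.get()
print(balance_after)  # 90
```

            In a real design each object would run on its own core or process, so a compromised sender still can’t reach another object’s memory; the mailbox is the entire attack surface.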

            This is all good, BUT, and this is a big BUT: even if you had a perfectly secure system, you would end up with

            1. A completely unusable system.
            2. The human as the weakest link, no matter how secure the software.

            While more secure systems might help mitigate indiscriminate attacks, they will have very little impact on targeted attacks.

            In other words, security has always been, and always will be, a social problem. UNIX has a very shitty security model, but it’s true that in the university atmosphere it was fine, because of the social relations involved.

            1. 1

              “A completely unusable system.”

              Although I agree on No. 2, I don’t know that No. 1 is true. The systems built for high-assurance security often had to be simplified, with rigid limitations. They were still useful, though, with applications in terminals, GUIs, VPNs, databases, and what you’d call services today. The capability-based model had PowerBoxes that made it easier to give the right amount of authority to apps. Then, language-based models can use a combination of annotations (eg labels), compilers, and software/hardware checks to enforce arbitrary properties. If you can describe a property precisely, there’s a good chance it can be implemented with existing approaches.
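              For readers who haven’t run into the PowerBox pattern: here’s a minimal sketch of the idea in Python (the class and file names are my own, purely illustrative). The untrusted app never sees the filesystem; a trusted powerbox mediates the user’s choice and hands back authority over exactly that one file.

```python
# Hedged sketch of the PowerBox pattern: the app gets a capability for
# one user-chosen file, not ambient access to everything.

class FileCap:
    """Capability for a single file: read-only access to its contents."""
    def __init__(self, contents):
        self._contents = contents
    def read(self):
        return self._contents

class PowerBox:
    """Trusted component: turns a user's file pick into a narrow capability."""
    def __init__(self, files):
        self._files = files          # the full "filesystem" lives only here
    def choose(self, user_pick):
        # Delegate authority over the picked file, and nothing else.
        return FileCap(self._files[user_pick])

def untrusted_app(cap):
    # The app can read exactly one file; it holds no handle on the rest.
    return cap.read().upper()

box = PowerBox({"notes.txt": "meeting at noon", "secrets.txt": "..."})
result = untrusted_app(box.choose("notes.txt"))
print(result)  # MEETING AT NOON
```

              The nice property is that the act of picking a file *is* the act of granting access, so the app needs no blanket permissions up front.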

              That leaves hardware failures. They happen more on modern hardware, though. My high-security designs always called for parts on older process nodes, since those break in fewer ways. They’re slower and use lots of watts, though. Then there are architectures like NonStop that block one faulty or hacked component right at the I/O response. I’m not sure what the cut-off point is for nearly-perfect security. I am sure such systems are a lot more useful than people think. You just gotta be able to work within their constraints.

            2. 2

              “Wirth did it in all his languages, and so on.”

              I liked Pascal when I was young… and Oberon seems a piece of art.

              But I could not say if the Oberon system (as hackable as it is) can be defined secure.

              “UNIX crowd resisted strongly anything that boosted security.”

              Saying this breaks my heart… but it was still true in Plan 9 from Bell Labs: despite some cool innovations (eg no root, factotum, and static analysis of some new concurrency-related syscalls), the “worse is better” approach doesn’t play well with security.

              In my own toy fork, I’m assuming that a simple system would be safer too, so I’m trying to minimize the system API and clean up the code; I removed swap, caches, and other potentially unsafe optimizations.

              But in the dark of my coding room, while everybody else sleeps, I know that C is an unsafe language.

              I can live with this doublethink only because by day I’m used to JavaScript, which is way harder than C (or even assembly) to get right.

              But… on the other hand, can we really isolate ourselves from the machines?

              At the end of the day, everything is just a chunk of bytes…

              1. 2

                “But I could not say if the Oberon system (as hackable as it is) can be defined secure.”

                To be clear, I’m just saying his languages are safe-by-default: they block common classes of attacks. There are still other ways to attack applications. There have just been a lot of vulnerabilities due to unsafe primitives. His part of the ALGOL line knew those primitives cause bugs, and they were deliberately fixing that. These languages usually include a way to turn checks off, per module, for performance or low-level reasons, too.

                “But… on the other hand, can we really isolate ourselves from the machines?”

                All I’m saying is that we can knock out a lot of problems with better languages or methods. Languages like Ada/SPARK and Modula-3 make it easier to write safe systems programs. Someone recommended BetterC in the D language, too. For a general overview, here’s a writeup of techniques you might use.

                “At the end of the day, everything is just a chunk of bytes…”

                It’s all electromagnetic activity. That’s where my work ended, since I didn’t understand any of it well enough to try to solve the problems there. TEMPEST-like tech has been a cat-and-mouse game among governments for decades now, though. So, probably no easy solution once you get down there. Before that, or aside from such attacks, going Burroughs, SAFE, or CHERI on the situation, with tagged memory plus enforcement mechanisms in the CPU, can knock out a lot of problems. Combining that with a POLA architecture and strongly-typed languages goes further. Each thing does what it’s good at.