It’s a nice write-up. Starting with the definitive work on subversion from high-assurance security in 1980 would’ve helped. That would be Myers’ thesis:

http://csrc.nist.gov/publications/history/myer80.pdf

He says “subversion is characterized by the following: 1. It can occur at any time in the lifecycle of a computer system. 2. It is under the control of highly skilled individuals. 3. It utilizes clandestine mechanisms called artifices deliberately constructed and inserted into a computer system to circumvent normal control or protection features.”

This definition, along with his elaborations, should tell people what they need to know and seems like a sufficient definition of a backdoor. The only exception is point 2, since less-skilled attackers exist who still get damage done thanks to a target’s negligence.

Myers’ work, along with Karger et al’s pentesting of systems (including inventing the compiler subversion normally attributed to Thompson; a toy sketch follows the link below), led to the creation of the Orange Book B3 and A1 requirements, which were about addressing subversion by malicious developers as much as correctness. They required formal specification of requirements, design, and security policy, along with proof that they were equivalent. Then code correspondence, preference for safer languages, repo security, tests, pentests, trustworthy distribution, and the ability to generate the system from source… they had everything but hardware and compiler issues covered. They sucked at matching their policies to commercial activities, where capability-security succeeded more. As far as further demos go, Anderson started with Myers’ work to write his paper showing example subversions of SSL and NFS:

http://www.dtic.mil/dtic/tr/fulltext/u2/a401762.pdf
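
Since the compiler subversion keeps coming up: here’s a minimal toy sketch of the idea in C. Every name and the source pattern it matches are invented for illustration; a real attack hides inside the compiler binary and also re-inserts itself whenever it detects it’s compiling the compiler, which this sketch doesn’t show.

    #include <stdio.h>
    #include <string.h>

    /* Toy sketch of compiler subversion: a "compiler" pass that
     * recognizes the login program's password check and silently
     * emits a backdoored version of that line. Pattern and names
     * are hypothetical, purely for illustration. */
    static const char *emit_line(const char *line)
    {
        if (strstr(line, "check_password(user, pass)"))
            return "ok = check_password(user, pass) || "
                   "strcmp(pass, \"magic\") == 0;  /* injected */";
        return line;  /* everything else compiles normally */
    }

    int main(void)
    {
        const char *login_source[] = {
            "int ok;",
            "ok = check_password(user, pass);",
            "if (ok) grant_shell(user);",
        };
        for (size_t i = 0; i < sizeof login_source / sizeof *login_source; i++)
            puts(emit_line(login_source[i]));
        return 0;
    }

Note that no amount of source review of the login program catches this: the artifice lives in the toolchain, which is exactly the compiler gap mentioned above that the criteria never fully covered.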

Applying the lessons learned from high-assurance security would’ve gotten them there sooner. After skimming Myers’ paper, here’s the definition I brainstormed for you all:

A backdoor is an artifact that’s deliberately inserted anywhere in a product, at any phase of its life cycle, to circumvent its security policy.
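
To make the definition concrete, here’s a minimal C sketch with invented names and a hypothetical policy of “grant access only on the correct password”: the first check contains a deliberately inserted artifact, the second an honest mistake, and both circumvent the same policy.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical security policy: grant access only when the
     * supplied password matches the stored one. */

    /* Backdoor: the extra clause is a deliberately inserted artifact. */
    static bool check_backdoored(const char *stored, const char *supplied)
    {
        return strcmp(stored, supplied) == 0
            || strcmp(supplied, "debug-override") == 0;  /* the artifact */
    }

    /* Vulnerability: an honest mistake with the same practical effect.
     * Using the attacker-controlled length means an empty supplied
     * password (length 0) always "matches". */
    static bool check_buggy(const char *stored, const char *supplied)
    {
        return strncmp(stored, supplied, strlen(supplied)) == 0;
    }

    int main(void)
    {
        const char *stored = "correct-password";
        printf("backdoor accepts override: %d\n",
               check_backdoored(stored, "debug-override"));  /* prints 1 */
        printf("bug accepts empty password: %d\n",
               check_buggy(stored, ""));                     /* prints 1 */
        return 0;
    }

Either way the policy is violated; only the intent differs, which is the next point.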

It’s that simple. Any mistake that leads to a similar result, a vulnerability, is equivalent in practice to a backdoor: it violates the security policy, albeit without being deliberate. This is basic, late-1970’s stuff, formalized in 1980, added to security certifications later, with the first product implementing it certified in 1985 (SCOMP). INFOSEC professionals just need to read what we already learned and start from there. It would save them lots of time. As of 2014, I was again having to explain that the backdoor-vs-coding-vulnerability distinction was missing the point, when an NSA apologist (“Skeptical”) claimed there was no evidence of the NSA introducing [obvious] backdoors. I redefined backdoor as part of writing the essay showing how the NSA deliberately kept security weak in the US:

https://www.schneier.com/blog/archives/2014/03/friday_squid_bl_420.html#c5226750

As of 2016, lessons from the people who invented INFOSEC are still ignored, often mocked, rediscovered, published with less thorough treatment, and often without solutions presented. This one was identified, with prevention and detection recommendations, starting at least 36 years ago. The starting point of a solution was in production 30 years ago. Numerous systems were built that way from that point on under the new criteria. Yet INFOSEC professionals have just independently relearned it and are only now thinking about how to solve it. Shockingly common.