1. 16
  1. 32

    This post reads as if it’s intended as flamebait, but I am going to do my best to respond as if it’s serious.

    The overall problem with this post is as follows. Any vulnerability P in the set of vulnerabilities present at a given time Σ(t), such as t₀ = now, is sufficient to cause some undesired outcome Q, such as NSA agents passing your dick pics around the office and laughing at them. This post argues that fixing one particular P from Σ(t₀) by time t₁ is useless because Σ(t₁) is still nonempty. This is not a very good argument; the goal is not for Σ(t₁) to be empty, but rather for Σ(t₂) to be empty for some time t₂ not too far in the future, preventing Q thenceforth. If tedu’s argument were taken seriously, it would prevent any progress toward that goal.

    Fixing any of the vulnerabilities represents progress toward that state of affairs.

    The particular set of vulnerabilities @tedu claims will allow the CIA (or, more accurately, the NSA) to backdoor Debian systems is as follows:

    P: Breaking into any Debian Developer’s machine and corrupting a package they are going to upload to the archive allows the TLA to insert a backdoor into that package, until that DD or another one uploads a new version of the package.
    Q: Inserting a self-reproducing backdoor into a compiler allows the TLA to corrupt arbitrary future packages built with that compiler.
    R: A backdoor inserted in the process of building a binary package from an irreproducible build process is virtually guaranteed never to be found.
    S: Compromising a download server allows the TLA to insert a backdoor into any version of any binary package on it until a new version of the package is uploaded, and to serve up the backdoor only to people targeted by IP address until the compromise is fixed.
    T: Anonymously-contributed innocent-appearing source code patches can exploit current compiler bugs to introduce backdoors into any package until the compiler bug is fixed.
    U: All our existing software is full of accidental holes, so backdoors are unnecessary.

    Of these items, reproducible builds fix P, Q (which Ted incorrectly dismisses as impractical; it was deployed in the field for a number of years), and R; S is actually not true, because apt will not install packages that are not signed by an authorized key, and the download servers do not have those keys; and T and U are still true and need to be solved by means other than reproducible builds. For example, I am typing this in a version of Iceweasel with known vulnerabilities.
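    To make concrete what fixing P, Q, and R buys: reproducible builds let anyone rebuild a package independently and compare digests with the published binary, so a corrupted developer machine or backdoored compiler produces a detectable mismatch. A minimal sketch of the comparison, with toy byte strings standing in for real .deb files:

    ```python
    import hashlib

    def digest(artifact: bytes) -> str:
        """Hex SHA-256 of a build artifact's bytes."""
        return hashlib.sha256(artifact).hexdigest()

    def builds_match(published: bytes, rebuilt: bytes) -> bool:
        """A build is reproducible iff an independent rebuild is bit-identical.

        If any honest rebuilder gets a different digest, either the published
        binary was tampered with somewhere between source and archive
        (attacks P, Q, or R above) or the build is nondeterministic.
        """
        return digest(published) == digest(rebuilt)

    # Toy stand-ins for package bytes; a real check compares whole .deb files.
    clean = b"\x7fELF package contents"
    tampered = b"\x7fELF package contents plus backdoor"

    print(builds_match(clean, clean))     # True: rebuild confirms the binary
    print(builds_match(tampered, clean))  # False: mismatch flags tampering
    ```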

    There is a somewhat weaker real attack S', which is as follows: modify the download server to serve up an old version of the Packages file and the old versions of packages that have been replaced by bug-fixed versions, so that users relying on that download server will remain vulnerable to known vulnerabilities even if they apt-get update && apt-get upgrade regularly. This is a replay attack, and could be fixed.
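    The standard countermeasure to this kind of replay is to put an expiry timestamp inside the signed repository metadata and refuse indexes past it (Debian’s Release files carry a Valid-Until field for this purpose), which bounds how long a compromised mirror can keep serving an old snapshot. A minimal sketch of the freshness check, with hypothetical dates:

    ```python
    from datetime import datetime, timedelta, timezone

    def metadata_is_fresh(valid_until: datetime, now: datetime) -> bool:
        """Accept a signed package index only if its expiry hasn't passed.

        Because valid_until is covered by the archive signature, a mirror
        replaying an old snapshot (attack S') can only do so until the
        index it serves expires.
        """
        return now <= valid_until

    now = datetime(2015, 7, 1, tzinfo=timezone.utc)  # hypothetical client clock

    print(metadata_is_fresh(now + timedelta(days=7), now))   # True: current index
    print(metadata_is_fresh(now - timedelta(days=30), now))  # False: replayed snapshot
    ```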

    There’s another thing I want to point out, which is that T and U are much weaker attacks than P and Q. You are vulnerable to T and U if you are currently running a vulnerable package in a configuration where the vulnerability is exploitable. By contrast, you are vulnerable to P as long as any of the thousands of Debian Developers are currently running or have recently been running a vulnerable package in such a configuration, unless builds are reproducible.

    So @tedu is completely mistaken about the importance of reproducible builds. Or he’s joking, which seems likely. But even if it’s a joke, people might take it seriously. Reproducible builds are one of the most important steps toward eventually having a secure general-purpose computing system, if that is possible.

    His post also expresses some confusion about whether Debian reproducible builds are only a theoretical defense or an actually deployed defense. 83.8% of Debian packages in the testing release are currently built reproducibly. The original Vice article actually explains this, which reinforces my suspicion that his post is just a joke.

    1. 9

      I think we have a difference of opinion about whether or not ‘U’ strictly dominates.

      which Ted incorrectly dismisses as impractical; it was deployed in the field for a number of years

      I would love a good citation for this. As far as I know Ken just demonstrated a simple proof of concept. I know it can be done. I believe it to be fragile, as in it will fail to propagate or otherwise reveal itself by failing to compile some other code.

      For the record, though, this is really more a reaction to the motherboard headline than the idea of reproducible builds. I’ll likely clarify a few things tomorrow.

      Thank you, btw; the alphabet of exploits is exactly what more discussions need. That’s what I was trying to provoke.

      1. 3

        What do you mean by “strictly dominates”? Obviously we have to solve U or we don’t get security. But solving U without solving P wouldn’t get us security any more than solving P without solving U. Solving P but not U at least gives you the possibility of having security on a few machines (the ones controlling your power steering, say) when you’re willing to pay a heavy price in convenience.

        AFAIK ken has never admitted in writing to the scale and success of his deployed “Trusting Trust” attack; all we have is the oral tradition. Yes, it’s fragile, and fragile is exactly what you need in many cases; it’s a backdoor written in disappearing ink. If there were a backdoored GCC in Slackware96, who could prove it today?

        I am glad my commentary is useful to you.

        1. 3

          There is some probability that my build chain is backdoored (P), but I think it’s less than 100%. I think it’s considerably less than 100% that the NSA, FSB, and PLA all have backdoors in my build chain. I would say, however, that the likelihood of there being some dumb bug (U) like pdf.js file system access is 100%, and that bug is available to all of the above parties to exploit.

          (build chain = compiler, upstream build machine, download server, etc.)

          1. 3

            First, no probability is 100% in the Bayesian sense we’re discussing. That’s on par with the people who said they wanted to come visit me in Buenos Aires because they’ve “always wanted to visit the Amazon rain forest”. I’ve made similarly dumb remarks myself many times. This seems like a good time to thank the good people like Sean B. Palmer, Udhay Shankar, and Jeff Ubois who were kind enough to correct me, usually while laughing at me.

            Second, you’re unreasonably comparing apples and oranges in order to get the answer you want. It would be fair to compare the probability of at least one Debian Developer currently using a backdoored machine with the probability that your machine is currently backdoored; the first is clearly more likely. It would also be fair to compare the probability of at least one Debian Developer running at least one source-code-vulnerable piece of software that could conceivably be exploited to backdoor them, like VLC until a month ago, with you personally running at least one source-code-vulnerable piece of software that could conceivably be exploited to backdoor you. While both of these probabilities are very high, the first one is clearly even higher.

            However, you’re cherry-picking one case from the first category and one case from the second category. This seems unreasonable to me, but it’s part of a larger unreasonableness, which is the following.

            Third, you’re treating this as a probabilistic question about facts, as if we were talking about a disease or natural disaster: what is the probability that you currently have pancreatic cancer? What is the probability that there will be a major earthquake in San Francisco by 2020? But we are talking about intelligent adversaries here, not random events devoid of intentionality. The relevant question to your safety is not whether you currently have a keylogger installed; it’s how costly it would be for an adversary to install a keylogger on you.

            That’s still a statistical question, because it depends on unknowns like what undisclosed vulnerabilities exist in Chrome, how expensive it will be to find them, whether they are exploitable to gain access to your entire account, and whether those exploits have already been written and are already in the hands of your adversary. But it’s a statistical question about costs to your adversary. It’s very clear that an adversary will find it cheaper to successfully attack whoever is the most vulnerable of the thousands of Debian Developers than to successfully attack your machine directly. (It’s very unlikely — at least hundreds to one, if not thousands to one — that you personally are less cautious and competent than all of them. In fact, I imagine that if we ranked the DDs by difficulty of compromising their machines, you’d be most similar to the DDs in the most difficult quartile.)

            (Edited to be less flamey.)

            1. 4

              Oh, I like where this is going. I disagree, of course, because I am an Alabama tick, but I like the argument.

              I’ll grant that somebody upstream is owned. Adversary wants to own me. Agreed going through upstream is one way to do that, but it has its uncertainties. Maybe I don’t use that package, maybe I don’t update frequently or the package is stable. It lacks a certain immediacy. On the other hand, despite my cautious nature, I just can’t stop myself from following every link posted on lobsters. (Actually in practice, using Ubuntu, apt-get insisted I need like 75 updates a week on the “stable” branch. As if I had time to even read the list each time. So, yeah, I’m boned.) I feel like, with end point security being what it is (zero), it’s simpler and safer to always just attack end points. The math on this changes as software becomes more reliable. I would totally agree with you if I didn’t feel so exposed myself.

              I wanted to return to an earlier point, wrt repro builds being a vital step towards secure software. And P without U vs U without P.

              Let’s say we want to “close the loop”. We solve our Ps and Qs. Buttttt… U. Certifiable crap is still crap. :) or let’s assume we finally make software reliable. Everything is written in bug free rust. I can’t be owned directly. But… Neither can upstream actually. Ok, some rando package builder could choose to fuck with me, but they won’t be a party to inadvertent fuckery. P becomes incrementally less of a threat as the state of security advances.

              So, I’m still wrong, but see what I’m thinking? :)

              1. 3

                I’ve been following this with interest but have little to add to it. But a small point:

                At present, no distribution is secure out of the box, especially not if one installs every package it offers. This is point U. But, also at present, everybody has a reasonable opportunity to know that (if they don’t, they’re being willfully obstinate), and hardening a system includes reducing its attack surface as much as possible. This always includes removing unnecessary services, and in anything complicated should include consideration of dividing the things it’s doing across several machines, and creating security boundaries between them. The distribution’s job is to help as much as it can, but it can’t do so alone.

                Also, by the way, how would one change point U? I see two consistent long-term strategies, but I suppose there could be more:

                1) Promote the view that there is no such thing as information security and never will be. Defend yourself in court as needed when your data is stolen. Rely on the general public to not blame you, so that you can still continue business as usual. Implement mitigations as dictated by short-term cost/benefit, with full awareness that this is an arms race with no inherent endpoint.

                2) Promote research on how to make formal verification practical for use on large codebases, especially server software. While waiting for that, conduct and encourage independent security audits of your proprietary software and in everything open-source that you use. Make any incremental improvements to security that help immediately, and, since we’re seeking a global maximum and not a local one, also make improvements that you know to be part of an ultimate solution, even if they aren’t useful by themselves. Think in terms of how we can get to a position where attackers have no further paths to continue an arms race.

                Every middle ground I can think of is in denial about one or more things that aren’t going to change. :)

                To tie this back to the conversation: A fix for U would not make anyone safer without also fixing P, Q, R, S, and T. Strategy 2 demands that we fix all six points; strategy 1 would prefer not to spend money on any of them. If you believe in strategy 1, stop reading now; I believe in strategy 2 and have nothing to say that’s relevant to the other. :)

                Some of these points may be useful to fix without the others, and some may not, but we need them all in the end. Which order to spend effort and money on them in is a question that needs to be answered from more-or-less a business perspective: How difficult are they, how soon will they pay off, how much do they raise the cost of an attack or reduce the value of a compromise? But those concerns can’t control what we consider to need fixing eventually, or we’ve given up.

                1. 3

                  I agree with both of you.

                  My favorite form of denial is denying that small, simple codebases — the ones that could plausibly be secure with our current level of knowledge — are unusable and cannot compete for mindshare with large, complicated ones. So I run my daemons with runit, for years I received my mail (and the FSF’s) with qmail, for years I left JS disabled and still occasionally browse with elinks, I maintain my main address book with pencil and paper, and I keep tinkering with projects to bootstrap from zero, including impractical toys like general-purpose mechanical computation and more practical things like solar thermal energy systems.

                  Maybe one day I’ll find a way to make my denial come true, or more dismayingly perhaps we will suffer some kind of catastrophe, but that’s not where the smart money is.

      2. 3

        In reference to Q there is a lot of information about countering “Trusting Trust” via Diverse Double-Compiling by David A. Wheeler on his site: http://www.dwheeler.com/trusting-trust/
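        For readers who want the shape of Wheeler’s countermeasure: DDC rebuilds the compiler’s source with an independent, trusted compiler, then rebuilds it again with that result; if the regenerated compiler and the suspect one disagree on any input, something is wrong (either nondeterminism or a self-propagating backdoor). A toy behavioral model in Python (real DDC compares binaries bit-for-bit; these “compilers” are stand-in functions invented for illustration):

        ```python
        COMPILER_SOURCE = "compiler-v1"  # stands in for the compiler's own source code

        def trusted_built(source):
            """What a faithfully built compiler binary does: honest translation."""
            if source == COMPILER_SOURCE:
                return trusted_built  # compiling the compiler yields another honest compiler
            return "binary(" + source + ")"

        def trusted(source):
            """An independent, trusted compiler (old, or from another vendor)."""
            if source == COMPILER_SOURCE:
                return trusted_built
            return "binary(" + source + ")"

        def suspect(source):
            """A trusting-trust compiler: it reinserts itself and backdoors login."""
            if source == COMPILER_SOURCE:
                return suspect
            if source == "login":
                return "binary(login)+BACKDOOR"
            return "binary(" + source + ")"

        def ddc_finds_mismatch(suspect_compiler, trusted_compiler, probe="login"):
            stage1 = trusted_compiler(COMPILER_SOURCE)  # compiler built by trusted chain
            stage2 = stage1(COMPILER_SOURCE)            # that compiler rebuilds itself
            # An honest suspect must agree with the trusted-chain compiler everywhere.
            return stage2(probe) != suspect_compiler(probe)

        print(ddc_finds_mismatch(suspect, trusted))        # True: backdoor exposed
        print(ddc_finds_mismatch(trusted_built, trusted))  # False: honest compiler passes
        ```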

        1. 1

          I think I’ve used boldface in one comment on Lobsters in the time I’ve been here. You are at eight upvotes, which is eight times as many as I got. Well done. :)

          1. 3

            Thank you :) I find that selective boldfacing is often a useful form of rubrication which increases the skimmability of text.

          2. 1

            You state that S is impossible because backdoored packages wouldn’t install without a valid signature, but if P is assumed true, surely these packages could just be signed (unbeknownst to the developer) by the attackers anyway?

            1. 2

              Yes, if a developer’s GPG machine is compromised, then the attacker can use that access to sign packages, which they can then upload to download servers they have compromised. But compromising a download server is neither necessary nor sufficient to execute that attack; it just makes it slightly stealthier. That’s why I distinguished between P and S, as @tedu did in his original post.

          3. 6

            Rant tag?

            1. 6

              I’ve added a redux section, for your reading displeasure. :)