1. 44
    1. 65

      The main reason I want reproducible builds is not security, but customer support.

      Before I start debugging, I want to be able to reproduce exactly what the customer has in his hands, and be sure no library or tool has changed something.

      Bugs that only exist if built on a particular developers box on a particular day give me the grues.

      That said, the most effective and deniable way of hiding a bug door is to make it only exist / be exploitable for some combination of source code, dependency version, toolchain and configuration.

    2. 38

      IMO, reproducibility isn’t desirable because of any benefits it provides, but because the inverse is cause for concern.

      As long as the code and dependencies of a project don’t change, there is no logical reason for the resulting binary not to be identical every single time. If it’s not, randomness is creeping in at one of the stages. Randomness in something that has no reason for it is a big red flag.

      1. 3

        I heard that you can get different FPGA bitstreams for the same source, because FPGA layout uses randomized algorithms.

        Do you think randomized algorithms (which often have no competitive deterministic algorithms) should be banned in the toolchain? Why?

        1. 21

          You can always deterministically seed the randomised algorithm — or even just record the truly-random seed, if desired.

          1. 4

            You must also ensure each worker thread has their own PRNG instance derived from that seed, and that no race conditions can affect placing or routing.
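
            As a rough sketch of what that looks like in practice (a Python stand-in, not a real place-and-route tool; the seed string, worker ids and cell names are made up):

            ```python
            import random

            MASTER_SEED = "fpga-build-2020-07-29"  # recorded alongside the other build inputs

            def worker_rng(worker_id: int) -> random.Random:
                # Each worker derives its own PRNG from the master seed and a stable id,
                # so results don't depend on which thread happens to run first.
                return random.Random(f"{MASTER_SEED}/{worker_id}")

            def place_cell(worker_id: int, cell: str) -> tuple[str, int]:
                rng = worker_rng(worker_id)
                return cell, rng.randrange(1024)  # stand-in for a randomised placement choice

            # Merge results in a fixed order (by cell name), not in completion order,
            # so the final layout is independent of thread scheduling.
            cells = ["alu0", "lut3", "ram1"]
            print(sorted(place_cell(i, c) for i, c in enumerate(cells)))
            ```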

            1. 2

              Good point, because otherwise the OS scheduler would inject nondeterminism.

        2. 4

          Yes.

          Because randomizing the layout is equivalent to the developer deciding that they don’t want to solve the problem, and just throwing it to chance instead. How is that a good thing?

          I get that calculating an ideal hardware layout can be too complex (practically or literally), but it doesn’t have to be optimal. A suboptimal solution can be generated heuristically without resorting to tossing variables to the wind.

          1. 5

            Because randomizing the layout is equivalent to the developer deciding that they don’t want to solve the problem, and just throwing it to chance instead. How is that a good thing?

            To be fair, there are quite a few objects that can easily be constructed by a randomised algorithm with a 99% success chance in reasonable time, while the best known deterministic constructions are both slower and worse… (It also happens that the random construction is the one widely implemented, and the latter applies both to the state of the art of theoretical research and to the best known heuristics.)

    3. 26

      taviso is doing good work, but I’d love to know how much being deeply embedded in a big company that produces closed source software influences this mindset.

      For me, reproducible builds matter just as much for projects without multi-million-dollar funding and hundreds of full-time people. It’s about having a common understanding that you can reproduce the build for software that you’ve downloaded, for a Linux distribution for example. Or just so that multiple people can build the source without relying on the “official” build pipeline, which would then yield (after verifying and signing, yadda yadda) the official release artifact, done by one of the important people in the project. I know it doesn’t make sense for Chrome because you have to trust Google anyway.

      1. 16

        Yeah I don’t understand this post. Like what does the part quoted below mean?

        1. Why would a single vendor create and set up 2 disparate build infrastructures?

        The setup I’m imagining for reproducible builds for open source is that someone can build binaries for you, and then you can build them yourself on your own hardware and verify that you got the same binary.

        If you don’t have reproducible builds, then you can’t do that. People do not want to build all their code from scratch. But it’s absolutely a good thing that they CAN do it and CHECK the result.

        Reproducible builds provide the best of both worlds – you don’t have to build it yourself, but you also have some assurance that what you’re running matches the source code (and if you don’t, you can get that assurance by building it)
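
        Concretely, “checking the result” is nothing more exotic than comparing digests of the published artifact and your own rebuild; a minimal sketch (the file names are hypothetical):

        ```python
        import hashlib
        import sys

        def sha256(path: str) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # "official.bin" is the vendor's published binary; "rebuilt.bin" is what you
        # built yourself from the same tagged source with the same toolchain.
        official, rebuilt = sha256("official.bin"), sha256("rebuilt.bin")
        print("official:", official)
        print("rebuilt: ", rebuilt)
        sys.exit(0 if official == rebuilt else 1)
        ```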

        1. In open source, you do not care about “stealing proprietary source code”. I think this whole post doesn’t apply to any of the reproducible build work I’ve seen.

        Also, at Google the builds are deterministic/reproducible simply because of caching and distributed builds. About 10-15 years ago a very skilled former teammate of mine went around stamping out determinism problems in a lot of tools so that the cache hit rate was increased. Sorry I don’t have details since it was a long time ago. But Google’s internal builds are content-based, and sandboxed, thus reproducible.
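
        A toy illustration of why content-based caching and determinism go together (the function is invented for illustration, not Google’s actual build system): if the cache key covers every input and the action is deterministic, a cache hit is guaranteed to be the same bits you would have built.

        ```python
        import hashlib
        import json

        def cache_key(sources: dict[str, bytes], toolchain: str, flags: list[str]) -> str:
            # Key the action purely on declared inputs: file contents, compiler
            # identity, and flags. Nothing from the host environment leaks in.
            h = hashlib.sha256()
            h.update(toolchain.encode())
            h.update(json.dumps(sorted(flags)).encode())
            for name in sorted(sources):
                h.update(name.encode())
                h.update(sources[name])
            return h.hexdigest()

        print(cache_key({"main.c": b"int main(void){return 0;}"}, "clang-11.0.1", ["-O2"]))
        ```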

        So I’m honestly puzzled by the post (although I agree you have to take anything taviso says seriously, especially with regard to security. After all, he was the guy who discovered that Cloudflare was spraying random bytes all over the Internet due to a buffer overflow that could have been easily prevented).

        Q. Build servers get compromised, and that’s a fact. Reproducible builds mean proprietary vendors can quickly check if their infrastructure is producing tampered binaries.

        I think this is true, but ignores significant trade-offs. The vendor needs to create and maintain two disparate build infrastructures, and then provide additional people privileged access to that new infrastructure. If you don’t do this, there was no benefit to reproducible builds because you’d be building the same potentially compromised binary twice.

        We know that attackers really do want to compromise build infrastructure, but more often they want to steal proprietary source code, which must pass through build servers.

        This means that vendors will increase the likelihood of attacks that really are happening, to prevent an attack that could happen.

        That is a significant trade off, and the decision to invest in reproducible builds isn’t as obvious as supporters claim.

      2. 8

        If you care about supply chain security at the distro level, have a look at what Guix is up to. Reproducible builds are just one ingredient, and probably not the most difficult either. It’s definitely not the Google product build system use case, though.

        For security, you don’t really want bit-identical builds so much as a reproducible assurance case. Having a durable, meaningful, and comprehensive verification process is a much more difficult problem than just getting the hashes to match. Minimizing and stabilizing your TCB is a good start, I suppose.

      3. 2

        taviso is doing good work, but I’d love to know how much being deeply embedded in a big company that produces closed source software influences this mindset.

        I choose to believe he wouldn’t write this strawman argument, and instead that my ISP is forcing me onto the unsecured HTTP connection to put up this article. Surely Tavis wouldn’t host his blog without valid HTTPS!

    4. 17

      For me, at least, reproducible builds have nothing to do with any of the arguments this article sets up and knocks down. Instead: If my builds are reproducible, then I don’t have to keep an archive of old binaries around at all because I know I can reproduce them faithfully on demand.

      The main place where I get value out of this requires a more expansive definition of “build” – it’s reproducible system configurations. If I know my Chef/Ansible/Puppet setup is able to reliably get a system to a precise known state, then I can build a new disk image from scratch when I need to deploy a new service on a fleet of VMs, rather than having to keep around a library of intermediate images to use as known starting points. Non-reproducible system builds can easily leave you with a mishmash of slightly different versions of software across your cluster, leading to intermittent bugs that only show up on a couple hosts because they have just the wrong combination of package versions.

      “Just use Kubernetes,” I hear you say, but containers have the same problem unless you’re much more diligent about building minimal images than most people are: it is super common to start with, say, a Debian base image and then run apt-get to install some dependency. Or to run pip install without pinning dependency versions. So hooray, you get a different result depending on whether your build happened to run on the old CI server with the build layer cache from before the package was updated, or the new CI host that had to fetch the newer package, and the bug you’re trying to track down keeps appearing and disappearing for no apparent reason from one build to the next. (Not that I’ve ever seen that happen.) Reproducible builds eliminate a source of uncertainty and inconsistency.
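
      One cheap mitigation on the container side is to pin versions and then assert the pins at build time; a rough sketch for the pip case (package names and versions are purely illustrative, and a real setup would read them from a generated lock file):

      ```python
      from importlib.metadata import PackageNotFoundError, version

      # Hypothetical pins; in practice these come from a lock file generated
      # when the image was first built.
      PINNED = {"requests": "2.31.0", "urllib3": "2.0.7"}

      def check_pins(pins: dict[str, str]) -> list[str]:
          problems = []
          for name, want in pins.items():
              try:
                  got = version(name)
              except PackageNotFoundError:
                  problems.append(f"{name}: not installed")
                  continue
              if got != want:
                  problems.append(f"{name}: expected {want}, found {got}")
          return problems

      for problem in check_pins(PINNED):
          print("pin violation:", problem)
      ```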

      1. 3

        Oof, my instant reaction to this is a bit of a heart attack :)

        I mean, you’re right in theory, but I’m not sure (actually I’d bet against it) that most orgs who claim they have reproducible builds will have all the dependencies and infrastructure around.

        Let’s say you want to build foobar on Distro X 1.2.3 - then reproducible builds mean:

        • if you install all the dependencies from the Distro X 1.2.3 repo (and only those) and build foobar on July 29th 2020 on two machines, then you have the same binary A
        • if you do the same thing on August 14th you may or may not get the same binary, because a dependency may have changed. In the best case you again get 2x A; if even one dependency was updated, you now have 2x B

        So it’s probably really hard (if not impossible) to backdate and reproduce builds from a certain day. It should work for Debian if you keep a complete dpkg -l listing from that day. It’s near impossible on rolling release distros like voidlinux, if you wait a few months and don’t archive all versions yourself.
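
        Capturing that Debian snapshot is cheap; something along these lines would do (the output file name is arbitrary, and actually rebuilding later still depends on the old package versions remaining downloadable, e.g. via snapshot.debian.org):

        ```python
        import subprocess
        from datetime import date

        # Record the exact version of every installed package at build time.
        listing = subprocess.run(
            ["dpkg-query", "-W", "-f", r"${Package}=${Version}\n"],
            check=True, capture_output=True, text=True,
        ).stdout

        with open(f"packages-{date.today().isoformat()}.txt", "w") as f:
            f.write(listing)
        ```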

        Also not sure if keeping Docker containers around for this is feasible…

        So yes, you’re not wrong in theory, but in practice most people won’t be able to build the same binary/package years later. Weeks or months later often works.

        Also happy to be proven wrong here; of course you could keep a docker container “distro-x-2020-07-27” around. I actually do this at work, but it’s not a fully fleshed-out concept.

        1. 2

          Right, I think we’re describing the same challenge. Caching/mirroring the distro and freezing the cache until you’re ready to upgrade things is definitely required to make this work. Luckily you don’t need to keep an archive of every version back to the beginning of time, only back as far as the oldest baseline image you want to be able to build.

        2. 2

          It’s near impossible on rolling release distros like voidlinux, if you wait a few months and don’t archive all versions yourself.

          No, it’s possible on Void Linux if they actually saved the appropriate metadata, and provided an archive. Arch Linux is currently doing everything you mentioned without any issues.

          https://archive.archlinux.org/

          https://reproducible.archlinux.org/

          https://wiki.archlinux.org/index.php/Reproducible_Builds/Status

          The main issue with reproducing packages backwards in time on our side has been improvements in pacman itself. But you can recreate the build environment all the way back to 2016 if you want.

          1. 2

            It’s near impossible /right now/ because voidlinux doesn’t keep old versions of packages available.

            Of course it’s much less of a problem if they (or you, on your own) start keeping an archive.

            1. 1

              Void users have a de facto archive of any packages they ever install on their own machine. However, the churn is too great for Void to archive all packages across the tree without significant disk space.

              1. 1

                I’m a void user and I don’t have all of those on all machines. Also, sometimes they get cleaned up after a while; that’s not a problem.

                I’m not arguing that void is bad or that this is some sort of magic trickery. I’m simply stating that with a rolling release model and without keeping an archive of “all” packages, this will not work except in rare cases where the stars align correctly and you still have all the packages when you want to build something again, even if they do the usual timestamp fixes and the other common reproducible-build preparation steps.

        3. 1

          Why would you get different dependencies due to a different date?

          That sounds like an astoundingly bad idea.

          1. 1

            if package A depends on package B 1.2.3 and B gets an API-compatible security fix 1.2.3-1, then the resulting binary is different. Of course this depends on package managers and dependency version pinning. If A pins 1.2.3 and not 1.2.x then it might work.

            1. 1

              If A pins 1.2.3 and not 1.2.x then it might work.

              I would expect that all serious dependency resolvers would do this. Why would one ever want some mystery-meat build configuration, in which the same config file results in different outcomes?

              1. 1

                Feel free to nitpick my typo (I wanted to write “1.2.3 and not 1.2.3*”). I wouldn’t trust that without looking at it in detail, per package manager. npm/bundler/composer all do that, at least down to the patch level.

                https://packages.debian.org/buster/nginx-full - “dep: libc6 (>= 2.28)” “dep: zlib1g (>= 1:1.1.4)” - sounds like “would build with a security fix of those” to me
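
                To make the pinning distinction concrete, here is the same comparison expressed with the (third-party) Python packaging library; the semantics aren’t dpkg’s, but the idea carries over, with 1.2.3.post1 standing in for an API-compatible security fix like 1.2.3-1:

                ```python
                from packaging.specifiers import SpecifierSet
                from packaging.version import Version

                fix = Version("1.2.3.post1")  # stand-in for the "1.2.3-1" security fix

                print(SpecifierSet("==1.2.3").contains(fix))  # False: an exact pin rejects the fix
                print(SpecifierSet(">=1.2.3").contains(fix))  # True: a ">="-style dep accepts it
                print(SpecifierSet("~=1.2.3").contains(fix))  # True: "compatible release" accepts it too
                ```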

    5. 9

      I don’t see how reproducible builds add any complexity. Having compilers dump crap like timestamps, build hostnames/usernames and so on into binaries never gave me any benefit. Reproducible builds have made our internal continuous deployment setup easier because files don’t change from one build to the next. And that has nothing to do with third-party verification of binaries.
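
      The fixes for that are usually mundane. For example, a deterministic archiving step following the SOURCE_DATE_EPOCH convention can look roughly like this (a sketch, not any particular project’s build script):

      ```python
      import os
      import tarfile

      # Reproducible-builds convention: take the timestamp from the environment,
      # not from the wall clock.
      EPOCH = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))

      def normalise(info: tarfile.TarInfo) -> tarfile.TarInfo:
          # Scrub everything that varies between build machines and users.
          info.mtime = EPOCH
          info.uid = info.gid = 0
          info.uname = info.gname = ""
          return info

      def deterministic_tar(paths: list[str], out: str) -> None:
          # Plain tar for simplicity; compressing afterwards needs the same care,
          # since e.g. gzip embeds its own timestamp unless told not to.
          with tarfile.open(out, "w") as tar:
              for path in sorted(paths):  # fixed ordering, not filesystem order
                  tar.add(path, filter=normalise)
      ```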

    6. 11

      As Tavis says himself: https://twitter.com/taviso/status/1288244033710481408:

      Yes, there are reasonable non-security reasons you might want it, I’m only opposed to the security arguments.

      Reproducible builds do not add much from a security perspective because to validate them, you have to do all the work yourself and trust the inputs.

      They are however useful from a development, debugging, deployment and distribution perspective (as mentioned already several times in the comments) and he does not deny that.

      1. 4

        Reproducible builds do not add much from a security perspective because to validate them, you have to do all the work yourself and trust the inputs.

        Nope, you can have multiple builders within the community who reproduce the build and sign off on it being identical. There’s a level of trust between “trust the vendor and their infrastructure entirely” and “build everything yourself”, and it is precisely this level that I have seen promoted by the reproducible builds people. :-)
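
        A sketch of that middle ground (the builder names, digests and the 2-of-3 threshold are all made up; a real setup would also verify each builder’s signature): only accept an artifact once enough independent rebuilders report the same digest as the vendor.

        ```python
        def accept(vendor_digest: str, rebuilder_digests: dict[str, str], quorum: int = 2) -> bool:
            # Count how many independent rebuilders reproduced the vendor's digest.
            agreeing = sum(1 for d in rebuilder_digests.values() if d == vendor_digest)
            return agreeing >= quorum

        reports = {
            "builder-a": "3f5a09",  # digests shortened for the example
            "builder-b": "3f5a09",
            "builder-c": "9c01d2",  # failed to reproduce; worth investigating
        }
        print(accept("3f5a09", reports))  # True with the 2-of-3 quorum
        ```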

        1. 2

          F-Droid does this automatically. If upstream provides an APK, and F-Droid can exactly reproduce that APK, then F-Droid will distribute the one it built with the original’s signature applied in addition to F-Droid’s signature.

      2. 2

        And yet.

        Such builds don’t prevent your source code from being malicious. They do make it harder for a compromised toolchain to go undetected by random users. They also help users verify that the source they see is the source that was built.

        If you build the same artifact twice and get different results, you learn nothing. Build it twice and get the same result, you know the toolchain did the same things both times, and that’s comforting.

      3. 1

        Reproducible builds do not add much from a security perspective because to validate them, you have to do all the work yourself and trust the inputs.

        That isn’t what he’s arguing, though. Tavis claims the following:

        Now if the vendor is compromised or becomes malicious, they can’t give the user any compromised binaries without also providing the source code. […] Regardless, even if we ignore these practicalities, the problem with this solution is that the vendor that was only trusted once still provides the source code for the system you’re using. They can still provide malicious source code to the builders for them to build and sign.

        So this is largely from only one perspective, namely proprietary vendors where the pristine source can only be obtained from the vendor publishing the binaries themselves. This holds for proprietary vendors, but doesn’t for open-source distributions, as pointed out earlier in this comment section.

    7. 5

      I don’t want to believe this, as my gut feel is that reproducible builds are always a net good, but I don’t see a hole in the argument. Maybe the “on trusting trust” compiler backdoor?

      1. 8

        I’ve never heard of reproducible builds being advocated for in a proprietary context. That does legitimately seem like a flawed argument to me.

        But it has a wider benefit in an open source context than the author says. If your goal is “make sure I absolutely have a trusted binary” then it doesn’t help, just as the author says. But if your goal is, “make it less likely that I’ve been given a malicious binary” - or in other words, “don’t make this binary fully trusted, but make it more trusted” - then it helps.

        Why? For the same reason that using freely available source code helps. You trust that there are independent experts reviewing the code for flaws, and that if any are found they’ll be fixed and if upstream won’t fix them there’ll be a huge stink about how Foobar Project refused to fix a security vulnerability, and you’ll read about it on e.g. Lobsters. Likewise, if the build is reproducible you trust that independent experts are trying to reproduce the binary and are going to sound the alarm if they can’t. And since that hasn’t happened, you have greater trust in the binary. You can’t trust it completely, but you can trust it more. (Of course, whether anyone is actually performing this verification independently is another matter, and there are plenty of examples in FOSS where this idea has broken down in practice. But that’s a separate matter.)

        I also don’t really buy the argument about bugdoors. It makes a lot of sense, but it’s risky for the attacker. Not in the sense that they might get caught, but in the sense that if their goal is to have a persistent backdoor, it might get fixed! You can claim it was a mistake, but your backdoor is gone either way. It’s not as reliable in the long term as distributing a malicious, tampered-with binary, but with reproducible builds the attacker is forced to not use the binary option anymore.

        (There are more problems, of course. For example, if the attacker only wants to target a few users among thousands, and they control the update server and signing keys, then they can make that attack undetectable by serving the legitimate binary to everyone who’s not targeted, including independent verifiers. But that’s not what the article was saying. Plus, note that even here you’ve already raised the bar to “control the update server” which is a much more specific requirement than “control some part of the build pipeline”, and even this problem can be fixed - hopefully - with something like binary transparency.)

      2. 5

        I think the main thing is that reproducible builds aren’t just about security value. Having a deterministic system is valuable in general because it means that you’re looking at y = f(x) instead of y = f(x, some_random_unknown_garbage).

        This focus on the distribution problem is only part of the picture, and I’ve always heard about reproducible builds in the context of stability, much more so than security.
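
        A toy illustration of the difference (the “build” here is just hashing, to keep it short):

        ```python
        import hashlib
        import time

        def build_with_timestamp(source: bytes) -> str:
            # A hidden input (the clock) sneaks into the output: y = f(x, garbage).
            return hashlib.sha256(source + str(time.time()).encode()).hexdigest()

        def build_deterministic(source: bytes) -> str:
            # The output depends only on the declared input: y = f(x).
            return hashlib.sha256(source).hexdigest()

        src = b"int main(void){return 0;}"
        print(build_with_timestamp(src) == build_with_timestamp(src))  # almost always False
        print(build_deterministic(src) == build_deterministic(src))    # always True
        ```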

      3. 2

        I’d be curious about any responses to this reply. I don’t see how it applies to open source reproducibility work, and I also think there are other motivations for reproducibility, which Google already has (and has had for a long time):

        https://lobste.rs/s/ha8c42/you_don_t_need_reproducible_builds#c_w2aove

    8. 5

      “Need” is an interesting choice in words. When someone’s non-reproducible build is driving me down the highway at 200km/h in the future, I don’t need it to keep me alive, but I’d like it to. I don’t need the digital coroner of futureland to have an exact build in their hands to analyze the crash, but I’d like them to.

      Security is not the only lens through which reproducibility matters. Granted, I also argue reproducibility matters to security for the same reason it matters to correctness — diagnosis.

    9. 4

      My goal isn’t reproducible builds. It’s hermetic and cacheable builds. A side effect of hermetic cacheable builds is reproducibility improvement. But mostly I want a build that is insulated from environmental effects and is safely cacheable because it improves my workflow and eliminates one class of “works on my machine” problems.

    10. 4

      The problem with this scenario is that the user still has to trust the vendor to do the verification.

      No, they don’t: end users can independently verify the binaries. Take OpenBSD ports and Go programs for example.

      More often than not, upstream vendors (gopass, restic, etc.) provide binaries. These binaries can be checked by end users against the version shipped in an OpenBSD package. (Currently OpenBSD makes no reproducible-binary guarantees, but it’s entirely possible now that we have Go module support in the ports tree.) They can even be checked without installing the package.

    11. 4

      Certain organizations need to be able to claim with a straight face under penalty of contract/law/reputation that they can ship bit-for-bit the same image they did 10 years ago in order to audit it. I think we were running a 25-year guarantee when I worked there. For software.

      I worked for one of those shops. We checked in artifacts. We checked in compilers. We had, I think, a maintained closet of very antique workstations in case one of our really old customers came by and wanted a patch and didn’t want to accept a free upgrade to a new piece of hardware that did the same functionality.

      Great experience, glad to have had it, don’t turn it down if you get the opportunity.

    12. 3

      I think supporters and opponents of reproducible builds are talking past each other. Do I need everything to be completely reproducible including the OS and system libraries? Probably not. Do I want my critical application used to process bank transactions downloading 10,000 random things from NPM at build time? Also probably not.

      I’ve worked on projects with reproducible builds, and it’s indeed pretty time consuming once you get into complex systems with many moving parts written in different languages. Google can afford it, but your average startup can’t.

    13. 2

      For anybody else having issues accessing the blog (PR_END_OF_FILE_ERROR), the way their server is deployed triggers a bug in https-everywhere, rendering it inaccessible. I’ve filed a bug here: https://github.com/EFForg/https-everywhere/issues/19416