1. 23

  2. 5

    That’s a really great development. A number of people make no effort to help those who want to build their software from source or package it, and some seem to intentionally make their lives harder.

    I don’t want to name names, but there was one really nasty situation when I absolutely needed a fresh build of a certain program because a customer’s system suffered from a bug that had recently been fixed and the distro packages had no update yet. It would have been easy enough to build from a tarball, but to make things worse, their CDN was messed up and I couldn’t download one, so I had to build from git. And then it turned out that producing a buildable release tarball from the git source required a number of additional steps that no one had bothered to automate or even document!

    The maintainers were online in the IRC channel and told me the secret steps, and the problem was fixed, but since then I’ve sworn to always check that my own stuff can be built on a clean setup by mindlessly following its README, and that when the build fails, it explains what went wrong. Hard, but I’m trying.
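
    A minimal sketch of the kind of check I mean, in Python; the clone URL and the build steps are placeholders for whatever the README actually says:

    ```python
    # Hypothetical clean-setup smoke test: fresh clone into a temp dir,
    # then run only the steps the README documents, reporting which one
    # failed. The repo URL and steps below are placeholders.
    import subprocess
    import sys
    import tempfile

    REPO = "https://example.com/myproject.git"  # placeholder URL
    README_STEPS = [
        ["./configure"],  # exactly what the README tells a new user to run
        ["make"],
    ]

    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(["git", "clone", REPO, workdir], check=True)
        for step in README_STEPS:
            if subprocess.run(step, cwd=workdir).returncode != 0:
                sys.exit(f"README step {' '.join(step)!r} failed on a clean setup")
        print("clean-setup build succeeded")
    ```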

    1. 5

      A number of people make no effort to help those who want to build their software from source or package it, and some seem to intentionally make their lives harder.

      That’s not what this is about. They’re trying to address Karger’s compiler subversion (very rare), which Thompson popularized, using Wheeler’s reproducible builds, which require additional steps on top of this that most people don’t perform, from what I can tell. The other attacks Karger described were vulnerabilities in the OS or software (99+% of attacks), compilers introducing security problems (still a thing), hardware failures/leaks (re-discovered around 2005), and subtle backdoors in any of them, which require strong requirements-to-design-to-code correspondence to catch (common risk, near-nonexistent mitigation). Karger also said to secure the distribution with a security-focused SCM, a certifying compiler, transport security, and local builds after re-running security analysis on the source.

      With that backdrop, the mainstream approach is… assuming easy-to-make-malicious hosts, servers, and build systems with buggy compilers… to put enormous effort into ensuring everyone’s running the same binary from the same easy-to-make-malicious source. Then, they feel safer and accomplished. Karger is probably rolling in his grave at that shit. He was probably rolling in his sleep while alive.

      At least Mozilla is gradually rewriting risky portions of Firefox in Rust to reduce risk on that part of the stack. Some other projects have attempted to make mature OSes, usually FreeBSD for some reason, safer with compiler transformations. INRIA did CompCert for a certifying compiler. Aegis did an SCM with security improvements. These other folks high-five each other over matching hashes/signatures before they get hit by the same 0-days through the same preventable vulnerabilities that they’d get hit with if the build were non-reproducible. The 0-day will also be reproducible in effect on most boxes. On the bright side, reproducibility will at least help with debugging.

      1. 4

        That’s not what this is about. They’re trying to address Karger’s compiler subversion (very rare), which Thompson popularized, using Wheeler’s reproducible builds, which require additional steps on top of this that most people don’t perform, from what I can tell.

        Really? Wikipedia, at least, claims that the primary reason for reproducible builds is to solve the “lol the source code I built the binary with has a backdoor that the open source code doesn’t” attack. While I’m pretty sure Mozilla isn’t going to pull something like that, it is such an obvious and easy attack that it’s embarrassing so few OSS projects have tried to address it.

        1. 3

          Just making the source code available with a hashed, signed package that someone can build solves that (the user-side check is sketched at the end of this comment). The vast majority of people, even security people, won’t perform the steps necessary to ensure something with reproducible builds is secure. Most people also don’t care to build from source. So whatever security it provides is minimal compared to securing the host OS, the browser software, the repo, the transport, and compiler transformations: the risk areas that lead to most hacks of Firefox users. Mozilla Corp also had $7-8 million in profit the year I looked at their financials to assess what they could spend on improving performance and security.

          Like you said, Mozilla is unlikely to subvert the code they give to their users in ways that do real damage. They’ll just accidentally introduce vulnerabilities that would be easier to catch if the labor invested in reproducible builds were instead invested in preventing and catching those vulnerabilities. These people are ignoring what leads to most hacks in order to focus on a risk that rarely if ever happens for openly-developed products like Firefox. As far as compiler subversion goes, it’s happened about 2-3 times that I know of in decades. The resulting mobilization was a massive amount of attention, with about nobody building trustworthy compilers and/or improving repo security against hackers. I can count each on one hand if we’re talking highly-robust approaches.
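
          To be concrete about how small the user-side burden is, here is a minimal sketch of checking a release tarball against a published SHA-256 digest and a detached GPG signature. The filenames and the expected digest are placeholders, and it assumes the maintainer’s public key is already in the local keyring:

          ```python
          # Verify a downloaded release: compare its SHA-256 against the
          # digest published by the project, then check the detached GPG
          # signature. Filenames and the expected digest are placeholders.
          import hashlib
          import subprocess
          import sys

          TARBALL = "myproject-1.2.3.tar.gz"  # placeholder filename
          SIGNATURE = TARBALL + ".asc"        # detached signature shipped alongside
          EXPECTED_SHA256 = "..."             # copied from the project's release page

          with open(TARBALL, "rb") as f:
              digest = hashlib.sha256(f.read()).hexdigest()
          if digest != EXPECTED_SHA256:
              sys.exit(f"digest mismatch: got {digest}")

          # gpg exits nonzero when verification fails, so check=True raises.
          subprocess.run(["gpg", "--verify", SIGNATURE, TARBALL], check=True)
          print("digest and signature both check out")
          ```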

          1. 2

            Sure, Mozilla probably does not introduce vulnerabilities intentionally, but reproducible builds will detect whether the build host or the supply chain was compromised. Supply chain attacks actually are a fairly hot topic these days, e.g. the Matrix build server compromise, among others.
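
            The detection itself is just a byte-for-byte comparison. A minimal sketch, assuming you’ve rebuilt the published source yourself and have both artifacts on disk; the paths are placeholders:

            ```python
            # The core of the reproducible-builds check: rebuild the published
            # source yourself, then compare your artifact against the binary
            # the vendor shipped. Paths below are placeholders.
            import hashlib
            import sys

            def sha256(path: str) -> str:
                with open(path, "rb") as f:
                    return hashlib.sha256(f.read()).hexdigest()

            shipped = sha256("firefox-shipped.bin")   # binary the vendor distributed
            rebuilt = sha256("firefox-rebuilt.bin")   # your own build of the same source

            if shipped != rebuilt:
                sys.exit("mismatch: build host or supply chain may have been tampered with")
            print("builds match:", shipped)
            ```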

            1. 2

              Run it on OpenBSD with signatures and/or some update service. The update service could optionally be written in Rust with all checks on. Boom. You’ve gone from massive labor to reading one book about OpenBSD or hiring an experienced sysadmin.

              As far as true supply-chain security goes, you need a secure SCM that can catch malicious developers putting in bad source. Most backdoors are designed to look like vanilla vulnerabilities or misconfigurations.
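
              One baseline that stock git already gives you along those lines is enforcing signed commits. A minimal sketch; the revision range is a placeholder, and note this only authenticates who committed, it does nothing about a trusted developer landing a disguised bug:

              ```python
              # Baseline SCM integrity check: require a valid signature on every
              # commit in a range. `git verify-commit` exits nonzero for unsigned
              # or badly signed commits. This proves authorship, not benign code.
              import subprocess
              import sys

              RANGE = "v1.0..HEAD"  # placeholder revision range

              commits = subprocess.run(
                  ["git", "rev-list", RANGE],
                  capture_output=True, text=True, check=True,
              ).stdout.split()

              for commit in commits:
                  if subprocess.run(["git", "verify-commit", commit]).returncode != 0:
                      sys.exit(f"commit {commit} is not properly signed")
              print(f"all {len(commits)} commits carry valid signatures")
              ```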

        2. 3

          I know.

          My point is, before you can have reproducible builds, you need to have repeatable builds to begin with, and there are still projects failing even at that!

          1. 1

            I 100% agree with that part of your comment. I feel for all the developers that have to deal with that crap.

            1. 4

              There are worse cases, actually: companies intentionally hiding parts of the build toolchain so that no one can build exactly what they build. pfSense, for example, put their image build tools behind an NDA that prevents you from distributing the builds.

              1. 1

                There are a lot of companies in proprietary software that make builds hard or restricted. I didn’t know pfSense did it. Quite contrary to what I’d expect of an “open source distribution.”

                1. 5

                  That was the biggest reason for the OPNsense maintainers to make the fork. pfSense’s response to the fork was… quite a sight: https://opnsense.org/opnsense-com/

        3. 5

          In my experience it’s very easy to slip up and forget a step in the documentation, or a build dependency that most users happen to have installed on their systems.

          The best remedy I have found so far is to package the project using Nix. The sandbox forces me to declare all the build-time dependencies and to make sure that the whole build process works from source. Even then it’s not guaranteed that all the runtime dependencies are captured, but it’s the best approach I have found so far.

          Another approach is to use a Docker image to build the project, but it’s less precise, because the build can fetch stuff arbitrarily and it usually starts from a base OS like Ubuntu, which means any of the pre-installed tools might silently become build dependencies.
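
          A minimal sketch of that Docker approach, driving a throwaway container from Python; the image tag, repo URL, and build commands are placeholders, and pinning the base image by digest narrows (but doesn’t remove) the pre-installed-tools problem:

          ```python
          # Build in a throwaway container so the host's toolchain can't leak
          # into the build. Everything the build needs must be installed
          # explicitly, which surfaces undeclared dependencies. The image,
          # URL, and commands below are placeholders.
          import subprocess

          IMAGE = "ubuntu:22.04"  # better: pin by digest, e.g. ubuntu@sha256:...
          REPO = "https://example.com/myproject.git"  # placeholder URL
          STEPS = (
              "apt-get update && apt-get install -y git build-essential && "
              f"git clone {REPO} /src && cd /src && ./configure && make"
          )

          # --rm discards the container afterwards, so nothing persists between runs.
          subprocess.run(["docker", "run", "--rm", IMAGE, "sh", "-c", STEPS], check=True)
          ```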