1. 47
  1.  

  2. 8

    I can’t speak one way or the other about other tools, but Maven is being unfairly blamed here; it’s one of relatively few language package managers that requires signing all packages. Of course it’s possible to download a jar and not check its signature, but that’s equally true for tarballs or any other format.
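
    For what it’s worth, the check itself is easy: Maven Central publishes a detached PGP signature (an .asc file) next to every artifact. A rough sketch, using an arbitrary artifact as the example:

    ```sh
    # Fetch an artifact and its detached PGP signature from Maven Central.
    curl -O https://repo1.maven.org/maven2/com/google/guava/guava/23.0/guava-23.0.jar
    curl -O https://repo1.maven.org/maven2/com/google/guava/guava/23.0/guava-23.0.jar.asc

    # Verify; you still have to decide whether you trust the signing key.
    gpg --verify guava-23.0.jar.asc guava-23.0.jar
    ```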

    Also, Makefiles are just another kind of “incompatible and non-portable ‘method of the day’ of building”. Every Makefile I’ve seen runs a bunch of random binaries from your PATH and doesn’t document what it requires of them (e.g. if the tar on your system is BSD tar rather than GNU tar, many Makefiles will just break).
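
    The usual failure mode is a GNU-only flag buried in a recipe. A sketch of what such a recipe silently assumes (the project name is made up):

    ```sh
    # A typical "dist" recipe assumes GNU tar on PATH; bsdtar (macOS, the
    # BSDs) has no --transform, so the same recipe breaks there.
    if ! tar --version 2>/dev/null | grep -q 'GNU tar'; then
        echo "error: this build needs GNU tar (sometimes installed as gtar)" >&2
        exit 1
    fi
    tar --transform 's,^,myproj-1.0/,' -czf myproj-1.0.tar.gz src/
    ```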

    1. 5

      I feel the rant is more about open-source projects not correctly implementing a build process that you can trust than about anything else. Having containers fetched from the internet is the same issue as downloading a binary from somewhere. If you could correctly build the project, then you could package it in a container yourself.

      In addition, for trusted images, Docker introduced content trust (https://blog.docker.com/2015/08/content-trust-docker-1-8/) some time ago.
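
      Turning it on is a one-liner on the client side; with it set, docker pull refuses image tags that lack valid signed metadata (the image tag below is just an example):

      ```sh
      # Docker Content Trust: verify publisher signatures on pull and push.
      export DOCKER_CONTENT_TRUST=1
      docker pull alpine:3.6   # rejected unless the tag carries a valid signature
      ```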

      1. 3

        I find this post to be too much on the “rant for the sake of rant” side:

        None of these “fancy” tools still builds by a traditional make command.

        – that can be said about almost anything not written in C. I understand where the author is coming from (the old-school universe of C packages), but come on, the world is much more complex and diverse now. Hadoop is definitely a good horse to beat in this case, but wouldn’t it be a nightmare to build and maintain it with any approach?

        the Docker approach boils down to downloading an unsigned binary, running it

        – I find this statement very weird: docker build and private registries are your friends. I understand that setting up proper CI and a private registry takes a non-trivial amount of effort, but it is a way to ensure trust and upgrades; yet the author keeps talking only about running unsigned binaries from Docker Hub, as if a proper secure process did not exist.
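
        Concretely, the trusted path looks something like this (the registry host and image names here are made up):

        ```sh
        # Build from a Dockerfile you control and push to your own registry;
        # deploy hosts then pull only from that registry, never from Docker Hub.
        docker build -t registry.internal.example:5000/team/app:1.2.3 .
        docker push registry.internal.example:5000/team/app:1.2.3

        # On a deploy host:
        docker pull registry.internal.example:5000/team/app:1.2.3
        ```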

        Feels like downloading Windows shareware in the 90s to me.

        – with the difference that shareware is explicitly hostile to user inspection and modification, whereas any pre-built Docker container should have a clear, public, and reproducible build process; otherwise I don’t bother running it.

        With signed packages, built from a web of trust.

        – maybe I’m being too strict in my understanding of “a web of trust”, but has the author actually ever exchanged PGP keys with the maintainers of old-school Linux or BSD distributions? Isn’t the traditional sysadmin model also based on trust: you get an installation image from a trusted place and trust the distro maintainers? In the case of almost any distro except Gentoo, you also run binaries made by someone else (with root access to your system and no isolation whatsoever, except for manually set up chroots, cgroups, namespaces, … oh wait, it smells like Docker already).

        Some even work on reproducible builds.

        – isn’t that one of the promises of Docker: reproducible builds with some effort, but without an in-depth overhaul of every build system in existence?

        Docker itself may be a short-sighted, overhyped product with lots of technical weaknesses, but containerization is here to stay.

        1. 2

          but wouldn’t it be a nightmare to build and maintain it with any approach?

          Yes, that’s the core of the problem: the design choices make it a nightmare to build and maintain regardless of the approach taken. Potentially, if containers weren’t used, people might feel the need to fix the underlying problem.

          isn’t that one of the promises of Docker: reproducible builds with some effort, but without an in-depth overhaul of every build system in existence?

          Except I need to build code on systems without Docker. Getting some Docker-built libraries into a state where I could link them into an Android app turned from “well, it’ll take an hour or so, it uses CMake, can’t possibly be that bad” into three engineers trying things out for about a week, including debugging CMake with strace and gdb to get the build to work.

          1. 1

            – isn’t that one of the promises of Docker: reproducible builds with some effort, but without an in-depth overhaul of every build system in existence?

            How is that possible, though? Programs still need to get built through their existing infrastructure.

            1. 2

              Taking a minimal fixed image as the base for building, and paying some attention to what is being downloaded from where, is much closer to reproducible builds than building a package on a continuously changing traditional system with lots and lots of libraries and admin/user activity.

              When you start nailing down the versions of the OS and every library to get a reproducible image, you will soon be managing many virtual machines, and at some point plain containerization of builds makes more sense.
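
              A sketch of the difference (the digest below is a placeholder, not a real image): a tag can be repointed over time, while a digest names one exact image, and you can run the project's existing build inside it.

              ```sh
              # Illustrative only: the digest is a placeholder.
              # A tag like debian:9 can move; a digest names one exact image.
              IMG=debian@sha256:0000000000000000000000000000000000000000000000000000000000000000
              docker pull "$IMG"
              # Run the project's existing build inside the pinned environment.
              docker run --rm -v "$PWD:/src" -w /src "$IMG" \
                  sh -c 'apt-get update && apt-get install -y gcc make && make'
              ```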

              1. 2

                But we still have the problem of building our software reproducibly.

                Also, I disagree that reusing the same binary artifact (in this case an OS image) counts as a reproducible build. It’s like saying installing a prebuilt piece of software is a reproducible build.

                1. 1

                  I agree that full reproducibility is a noble goal, and I hope the NixOS/Bazel approach will be dominant someday.

                  Right now, it’s easier to deal with existing/legacy systems by just building from minimal fixed images: that does not give full reproducibility in the strict sense, but having those kinda-reproducible builds in practice is way better than what we had before.

                  1. 1

                    Also, I disagree that reusing the same binary artifact (in this case an OS image) counts as a reproducible build. It’s like saying installing a prebuilt piece of software is a reproducible build.

                    Seems closer to building with a prebuilt compiler to me. Of course it’s important to ensure that you can rebuild the compiler/OS image, but you wouldn’t expect to rebuild it for every build.

                    1. 2

                      We’ve had that state of affairs for at least a decade, though. I don’t know anyone who was calling building AMIs “reproducible builds”. To make my critique more nuanced: I don’t think Docker is changing the state of reproducible builds.

                      1. 1

                        Usability matters. I don’t think I ever saw people using AMIs to run the build in, which is what would be the equivalent here.

                        1. 1

                          What does “run the build in” mean?

                          1. 1

                            Performing the build process (running the compiler, etc.) inside the known-state system that the AMI provides. I never saw them being used that way.

                            1. 1

                              That is how one builds a new AMI: you launch an existing AMI, run your build process, then save that as a new AMI.
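
                              With the AWS CLI, that flow is roughly the following (all IDs and names are placeholders):

                              ```sh
                              # Launch a builder instance from a known base AMI, run the build
                              # on it (the ssh step is elided), then snapshot it as a new AMI.
                              aws ec2 run-instances --image-id ami-0123abcd --instance-type m3.medium
                              # ... ssh in, run the build process ...
                              aws ec2 create-image --instance-id i-0abc1234 --name "myproj-build-01"
                              ```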

                              1. 1

                                I never saw that used as the primary build process, probably because launching an AMI takes so long (comparatively). Rather, there would be a language-specific process that built a binary, and then a distinct AMI-building process that just stuck that binary into the AMI. Docker, by contrast, is more practical to use as your sole build process.

                                1. 1

                                  Maybe; I see AMIs being used to build AMIs quite commonly. But either way, I’m not really sure this changes my core criticism: Docker is not changing the state of the art of reproducible builds.

                                  1. 1

                                    As I said, usability matters; Docker doesn’t change what’s possible, but it does change what’s easy.

                                    1. 1

                                      That’s fine, but as I explicitly stated earlier, my point was about the ability to do reproducible builds, and Docker has not changed one’s ability to accomplish that.

            2. 2

              So where can one learn about sound systems administration?