1. 11

  2. 9

    If you’re a maintainer and someone asks you to do something you don’t want to do, they are free to do it themselves. You don’t owe anyone anything, unless your project is also your product.

    1. 6

      I can appreciate that perspective, though I also think there’s a perspective that maintainers are invested in the projects they maintain. When people come asking for more and more “table stakes,” there’s a feeling of threat: that the project will be wrested from them via a fork, that they will lose valuable contributors, that the thing they worked hard on will be negatively impacted. The baseline stress level of having useful projects out there rises, and that drives people away.

      I don’t have answers, but I think it’s worth acknowledging some of these perspectives.

    2. 6

      Security is a very difficult problem for a lot of open source projects because the ecosystem encourages shipping components, whereas security is a property of systems. Consider something like LLVM. It’s used in a load of different places with very different security requirements. If there’s a bug that allows someone who controls the input of the program to execute arbitrary code, that has zero impact on clang, rustc, or swiftc (for example): the only person who can trigger it is the person providing the source code, and by definition they already have arbitrary-code execution because that’s the entire point of using the compiler. Now what happens when LLVM is part of a GPU driver stack and is used with WebCL / WebGPU? Now the person who can control the IR is providing code from a random server on the Internet.

      Exactly the same thing happened with things like libgif, libjpeg, libpng, libmpeg, libavcodec and libavformat. They were all written on the assumption that the input was trusted. They were intended to be used by people who had some images or videos and wanted to display / transcode them. Then people put them in web browsers, where they were exposed to random files from untrusted sources, so a compromise allowed an attacker to execute code with the same permissions as the web browser (which were typically the same permissions as the user running the browser back then). Some spectacularly incompetent people put them in the in-kernel part of virus scanners, so a compromise allowed kernel-privilege arbitrary-code execution.

      For any of these things, what should the library project do? Their code was written on the assumption that the inputs came from trusted sources, so the vulnerabilities are all outside of their threat model. The ‘correct’ threat model is probably a consensus view from downstream consumers, but if downstream consumers disagree with upstream’s threat model then they need to actively contribute.

      1. 3

        This article only matters for libraries whose maintainers want to be exploited by corporations. Free Software maintainers do not need to change a thing with respect to this article’s requests.

        1. 2

          I don’t even think it’s relevant to most of those. It seems to be only relevant if you’re somehow packaging, or making available, a bunch of open source packages that you didn’t write or maintain.

        2. 1

          As an open source maintainer, I was hoping for some relevant insights from this, but as much as I respect Luis, there was nothing in this that was relevant to me, or most maintainers.

          Security is important, but it’s always been important, so setting this up as a new expectation is a little misleading. One does not need to be a security expert to write secure software; you just need to be aware of the footguns in the languages and systems you’re using, and

          1. be careful to avoid firing them
          2. be quick to patch when you’ve inevitably failed at #1.
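          To illustrate the kind of language footgun meant here (my example, not from the article): in C, the classic one is an unbounded copy of untrusted input. The function names below are made up for the sketch.

          ```c
          #include <stdio.h>
          #include <string.h>

          /* The footgun: strcpy() trusts the input's length. If `src` comes
             from an untrusted source, this silently overflows `buf`. */
          void risky_copy(char *buf, const char *src) {
              strcpy(buf, src);  /* no bounds check */
          }

          /* The careful version: always bound the copy to the buffer size. */
          void careful_copy(char *buf, size_t bufsize, const char *src) {
              snprintf(buf, bufsize, "%s", src);  /* truncates instead of overflowing */
          }

          int main(void) {
              char buf[8];
              careful_copy(buf, sizeof buf, "this string is longer than eight bytes");
              printf("%s\n", buf);  /* prints "this st" — 7 chars plus the NUL */
              return 0;
          }
          ```

          Being “careful” mostly means reaching for the bounded variant by reflex; being “quick to patch” is for the times you didn’t.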

          As for Legal metadata, I’ve been on both sides of the fence (both writing commercial software that leveraged open source, and writing open source software), and I’ve never heard of anyone unsatisfied with a simple LICENSE file distributed with the release.

          As for SBOM, I’m not even sure what this means. It sounds like more of a requirement for enterprises that need to know what software they’re packaging and shipping, or making available to the public. It is not a requirement for open source maintainers who are full participants in the ecosystem of open source.
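          For context, an SBOM (Software Bill of Materials) is just a machine-readable inventory of the components and licenses inside a piece of software. A minimal sketch in the CycloneDX JSON format, with an illustrative component entry, looks roughly like:

          ```json
          {
            "bomFormat": "CycloneDX",
            "specVersion": "1.5",
            "components": [
              {
                "type": "library",
                "name": "libpng",
                "version": "1.6.40",
                "licenses": [
                  { "license": { "id": "libpng-2.0" } }
                ]
              }
            ]
          }
          ```

          In practice these files are generated by tooling rather than written by hand, which is part of why the burden falls on whoever assembles and ships the bundle, not on the upstream maintainer.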

          If you’re distributing a piece of software that packages up a bunch of open source software, you’re right that you need to keep track of vulnerabilities in the software you package along with yours. But if you distribute your software as a solitary piece of software that you maintain yourself, as most open source maintainers do, then you can depend on the open source ecosystem, like Linux distributions, or even package managers like Homebrew, to ensure that the dependencies your software relies on are kept up to date and bug-free.