I went for a walk while preparing a presentation on supply-chain security when this thought occurred to me. It was later refined by a friend of mine.
There are two camps of “supply-chain security”:
How can I continue to extract labor from OSS and mitigate the risks? (camp 1)
How can we model a supply chain that accounts for OSS and recognizes their effort/collaborates with them? (camp 2)
Most (all?) efforts from Google, the OpenSSF, and companies largely fall into camp 1. They are checkboxes you can tick, metrics for you to evaluate open-source labor, and ways to get badges so maintainers can look more attractive.
However, if you look at what is actually needed, very few resources are spent on figuring out how we can better support maintainers. That is a problem. I get disappointed every time Google launches a new supply-chain product; each one is a single layoff or promotion away from being abandoned.
I couldn’t agree with this more. I vividly remember sitting in Kelsey Hightower’s closing StrangeLoop keynote on securing the software supply chain, thinking “who is this for?”. OSS developers providing supply-chain integrity proofs, with Kelsey out on the conference circuit astroturfing this as “the expected minimum”, has nothing to do with developing software that serves users. It has everything to do with helping companies (Google etc.) satisfy their regulatory obligations. The prospect that artifact signing may become legally required in the future is something Google is concerned about and needs to hedge against, not something OSS developers should joyously lean into.
Waste of a dang keynote.
I forgot my last point :)
One of the best initiatives I have seen in the past few years of supply-chain security hype (which we’ve had since log4j/SolarWinds) has to be Open Collective hiring maintainers. More of this, please.
Open Collective does seem like a pretty good way to get maintainers paid on a donation model. I haven’t used it myself, nor do I know of any projects that use it in particular; hopefully it works as well as it sounds?
100% agree. Google clearly benefits financially from this, both internally through its use of open source and externally by encouraging more people to adopt GCP. They are already putting in effort to track security vulnerabilities and in some cases contribute fixes upstream; why not go the last mile and also offer grants to upstream projects to help them triage and merge fixes?
The only reason I can think of is that if they funded OSS directly, patches would land upstream faster, which would slightly reduce the competitive advantage of Google’s repositories. That seems like a small downside to me, but I haven’t built a trillion-dollar multinational conglomerate, so my viewpoint is probably not aligned with theirs. It makes me think a camp 2 solution to this problem simply isn’t realistic in a capitalist economy where corporate profit is the only motivator. Maybe I’m just being overly pessimistic, though.
They are already putting in effort to track security vulnerabilities and in some cases contribute fixes upstream; why not go the last mile and also offer grants to upstream projects to help them triage and merge fixes?
They do something like this through their “Secure Open Source Rewards” program, https://sos.dev/. The major caveat is that the rewards are based on their evaluation of the work you submit to the program: you don’t get paid to do the work, you get paid for the completed work.
That’s not good enough.
They did fund the open-source security work of a few maintainers, but this was discontinued.
The Google Open Source Security Team was funding some of the Reproducible Builds work until very recently as well.
https://reproducible-builds.org/who/sponsors/
Honestly, it sounds like “camp 1” is what managers think privately while “camp 2” is what they say publicly. “Modeling a supply chain and recognizing their effort” is just a way to sugarcoat labor extraction.
I guess the material difference you’re pointing to is whether they support maintainers?
I think contributing towards figuring out how we can solve the problem of “supporting maintainers” should be the bar.
It’s a low bar.
if you look at what is actually needed, very few resources are spent on figuring out how we can better support maintainers
100% this. I gave a talk at PhillyETE this week on the premise that lots of coders want to contribute (they’re willing), but they’re not ready and not able. I don’t think most OSPOs even realize this is a problem, let alone are ready to invest in it.
The talk comes from the years I’ve worked on CodeTriage and my recent book, How to Open Source (dot dev). It focuses on practical, evidence-based interventions we can introduce to increase the contributor success rate, ultimately with the goal of reducing maintainer burden.
The title of the talk is “How to Steal from Maintainers”, but the video is not yet published.
Most (all?) efforts from Google, the OpenSSF, and companies largely fall into camp 1
I don’t really understand why you think companies are different from other consumers of F/OSS here. As a user of a load of F/OSS applications and libraries, I care that nation-state actors are not inserting malware into the code that I’m running. As a maintainer of F/OSS projects, I don’t have the resources to formally verify every patch that I get, so I can’t tell if someone is sneaking subtle backdoors into my projects (remember the null pointer vulnerabilities the NSA introduced into SELinux, which weren’t even a known vulnerability class until they were discovered?), and I would appreciate tools that would at least let me limit the damage if this happens.
What?
The null pointer handling that the NSA contributed to SELinux converted a bunch of crashes into arbitrary code execution bugs; this became an entirely new vulnerability class. There’s no evidence that it was done maliciously (it’s far more plausible that the folks at the NSA who introduced the bug were also unaware of its potential for exploitation), but it serves as an example of a contribution that expert reviewers failed to spot as introducing vulnerabilities. A nation-state adversary would find it very easy to sneak code like this into a lot of projects deliberately.
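To make that vulnerability class concrete, here is a minimal, hypothetical user-space sketch of the pattern (the names are invented; this is not the actual SELinux code): a dispatch routine missing a NULL check ends up reading a function pointer from page zero, so an attacker who can map that page turns a reliable crash into a call into memory they control, at the caller’s privilege level. This is the bug class that motivated Linux’s vm.mmap_min_addr restriction, which is also why the sketch normally fails at the mmap step on a modern system.

```c
/*
 * Hypothetical sketch of the NULL-function-pointer bug class; names are
 * invented, and this is NOT the actual SELinux code. Compile with
 * optimizations off (e.g. gcc -O0 sketch.c): dereferencing NULL is
 * undefined behavior in C, and optimizers may assume it cannot happen.
 */
#include <stdio.h>
#include <sys/mman.h>

struct dev_ops {
    void (*handler)(void);
};

/* Stand-in for privileged code that forgot to check its argument. */
static void vulnerable_dispatch(struct dev_ops *ops)
{
    ops->handler(); /* with ops == NULL, reads the pointer stored at address 0 */
}

static void attacker_payload(void)
{
    puts("payload ran with the caller's privileges");
}

int main(void)
{
    /* Try to map the zero page. On modern Linux this normally fails
     * because of vm.mmap_min_addr, the mitigation this bug class motivated. */
    void *page = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                      MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap at address 0"); /* page zero unmappable: only a crash */
        return 1;
    }

    /* Plant the payload's address exactly where vulnerable_dispatch()
     * will read ops->handler from. */
    ((struct dev_ops *)page)->handler = attacker_payload;

    vulnerable_dispatch(NULL); /* the "crash" is now code execution */
    return 0;
}
```

In a kernel, the dispatch runs with kernel privileges, which is what turned these from denial-of-service crashes into full compromises.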
Companies look at licenses in a fundamentally different way from individuals; a company is less likely to honor the license overall, and also more likely to be intimidated by an uncommon license. Google looks at its offerings in terms of legal liability and partnerships, not in terms of machine capabilities and programmer efficiency.
a company is less likely to honor the license overall, and also more likely to be intimidated by an uncommon license. Google looks at its offerings in terms of legal liability
These two statements seem fundamentally at odds. I would expect that a company whose lawyers review a license, and which avoids licenses its lawyers say it cannot easily comply with, would be more likely to honour the license.
If anything, I’d expect individual users and developers who don’t have legal teams to be less likely to honour licenses. For example, I’ve seen users pass copies of GPLv2 binaries to their friends, yet that is directly against the terms of the license (unless they provide the source code, or a written offer good for at least three years to provide the exact version of the source code). I’ve seen a lot of community Linux distros do this as well. I’ve also seen a load of people incorporate permissively licensed code into their own projects and ignore the attribution requirements. In my experience it’s far more rare for a big company to do this, because they have a lot more to lose. It does happen, but it’s far less common.
I’d appreciate any feedback on our approach to funding open source: https://www.theregister.com/2023/04/07/thanksdev_open_source_funding/