An implicit assumption of this article is that free labor is acceptable for security researchers. I'm starting to lean the other way, not just on FOSS licensing but on vulnerability disclosure, for similar reasons. Here's the backdrop:

1. Companies earn billions of dollars making certain software. There are tools and practices that knock out entire classes of errors, especially the "own the box" kind. The companies refuse to use them to save pennies on the dollar. The software is intentionally vulnerable. While managers and developers make bank, security professionals are supposed to work for free doing their QA, maybe collect a bounty on the software, and maybe disclose the vulnerability for free.

2. FOSS runs on a lot of free labor. Most projects prioritize new features or tools over securing existing ones. Even where going memory safe is easy (e.g., the Go language), they often use languages that are either not memory safe or carry a large unsafe runtime (attack surface). They also do little QA even though automated tools are available (see the fuzzing sketch after this list). They're intentionally creating vulnerable software because they believe ignoring those security practices or techniques pays off elsewhere. Security researchers are expected to do the QA cheap or free, regardless of whether the design or coding style makes that easy.

3. High-security, minimalist products largely don't sell. The FOSS equivalents get little uptake. Both private markets and FOSS users value insecure software for its benefits (sometimes tiny ones) over secure software. They also expect the stuff to get hacked periodically. Insecure is in high demand; secure is hard to fund.

4. Offensive security companies (a.k.a. vulnerability brokers) license 0-days to governments, mostly to spy on each other and on threats (e.g., terrorists, drug smugglers). These companies, especially Zerodium, pay six to seven figures per vulnerability. The products they want vulnerabilities in have a steady supply of them already, thanks to the intentionally bad security practices above and to customers and users who won't pay for or use secure products. Given all that, contributing a few more vulnerabilities probably changes no outcomes for anyone.
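
As promised above, here's a minimal sketch of the kind of automated QA that's freely available: a cargo-fuzz target in Rust. `parse_header` and the `myparser` crate are stand-ins I made up for whatever input-handling code a project exposes; the point is that a few lines of harness let a fuzzer hunt for exactly the crashes brokers pay for.

```rust
// fuzz/fuzz_targets/parse_header.rs -- run with `cargo fuzz run parse_header`.
// Illustrative only: `myparser::parse_header` is a hypothetical function.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // The only property checked is "doesn't panic or crash on
    // arbitrary bytes" -- the bug class that brokers buy.
    let _ = myparser::parse_header(data);
});
```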

So, I was doing a thought experiment about a new route: find vulnerabilities in software on lists like Zerodium's, sell them to the brokers, and use the money for security. That includes making secure versions of those products or entirely new products.

The first thing could be compiler-oriented tech like SoftBound+CETS applied to the product. Tell customers it blocks most attacks at the cost of a performance penalty; they pay yearly for it. It can also be a transfer, where, say, proceeds from an Nginx vulnerability go toward securely extending a version of lwan ported to Rust with overflow checks on (a sketch of what those buy you follows below). If the goal is to protect [F]OSS, the model can exclusively target proprietary software to generate funds for securing [F]OSS software. So, a portion of HardenedBSD's funding would come from selling Windows, mainframe, and Oracle vulnerabilities. Hell, you've even got the skills to do it. ;)
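
To make "overflow checks on" concrete, here's a minimal sketch, assuming a hypothetical length calculation (the function name and inputs are mine, not lwan's). With `overflow-checks = true` under `[profile.release]` in Cargo.toml, plain `+` on integers panics on overflow even in release builds instead of silently wrapping like C; `checked_add` goes further and makes the failure an explicit value the caller must handle.

```rust
// Illustrative only: total_len is a made-up stand-in for a
// server's buffer-size arithmetic, not actual lwan code.
fn total_len(header_len: u32, payload_len: u32) -> Option<u32> {
    // checked_add returns None instead of wrapping around,
    // so a miscomputed size becomes a handled error rather
    // than the start of a heap overflow.
    header_len.checked_add(payload_len)
}

fn main() {
    match total_len(u32::MAX, 1) {
        Some(n) => println!("total length: {}", n),
        None => eprintln!("length overflow detected; rejecting input"),
    }
}
```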

In any case, this thought experiment reflects what people are currently doing. They demand insecure stuff by using it despite secure alternatives; they don't want to pay for software, much less for securing it; the private companies that do pay for software don't pay much for security; and the vulnerability brokers pay a ton for vulnerabilities, making them a better funding source for improving security than most things. If the proceeds are put into products, the revenue streams themselves become a way to sustain the work, and one might ditch the vulnerability brokers at some point. I've been seriously considering this model given that about every high-security company has folded unless it was in defense, safety-critical systems, or smartcards. Even then, it's a tiny number scraping by versus the suppliers of known-insecure apps.

Even if this model is adopted, I think a few tools critical to protecting dissenters should be excluded from the list of things to sell 0-days in. GPG, TLS, WireGuard, Tor/Tails, and security-focused OS's/RTOS's come to mind. A certain amount of the proceeds should go toward making such things more secure, more usable, etc.

Also, this model never adds vulnerabilities: it only sells what the developers are already adding. All security-focused software it creates will be secured to the max so the organization remains trustworthy. Its existence, and its selling of others' vulnerabilities, would make a point about how easy it would've been to knock out lots of attack surface. Those examples, with their practices and numbers, might later feed into arguments for regulation or software liability.