1. 14
  1.  

  2. 2

    Are we really giving free advertising to a company that offers large sums of money to anyone who introduces vulnerabilities into open-source OSes?

    1. 1

      Financial opportunities for FOSS hackers is how I read it. They could even hit a rival BSD or Linux to make money to pay for developer time, features, and/or security reviews on their own project.

      I at least considered doing something like that at one point. Although I didn’t, I wouldn’t be surprised if someone rationalized it away: the greater good of whatever the money bought; the fact that the vulnerabilities were already there, waiting to be found in a product that will be hacked anyway; blaming the demand side, where FOSS and commercial users willingly use buggy/risky software for perceived benefits instead of security-focused alternatives.

      1. 1

        The amount of trust people have to place in others for a functioning FOSS world is very high. When a group has strong financial incentives to betray its surroundings, everyone has to behave in an extremely paranoid way, and it’s far easier to introduce a vulnerability into your own project than to find one in someone else’s.

        Suppose I find a vulnerability and report it to security-officer@somebsd.org, and they haven’t fixed it yet. What am I supposed to conclude? That they’re behind on handling tickets (it happens), or that the security officer had 500,000 reasons to stay quiet?

        What about the person creating the release? They can do the build with an extra change slipped in. Are all builds that aren’t reproducible suspicious now? (A sketch of what such a reproducibility check looks like follows at the end of this comment.)

        Suppose you do find a vulnerability in your own project. You can see who introduced it. Do you kick them out of your project, or assume it was a mistake?

        Yes, I should review the work of others, and I do, but there’s a limit to how much one person can check.
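
        Here is the sketch referenced above: a minimal reproducibility check in Python, assuming two parties build the same tagged source; the file paths and artifact names are made up for illustration. If an independent rebuild matches the shipped artifact bit for bit, the release manager can’t slip in an extra change without it showing.

            # Toy reproducible-build check: compare digests of the same artifact
            # built independently by two parties. Paths are hypothetical.
            import hashlib
            import sys

            def sha256(path):
                """Return the SHA-256 hex digest of a file, read in chunks."""
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                return h.hexdigest()

            release_build = sha256("release/somebsd-13.0.img")      # what the release manager shipped
            independent_build = sha256("rebuild/somebsd-13.0.img")  # what a third party rebuilt from the same tag

            if release_build != independent_build:
                sys.exit("MISMATCH: the shipped build differs from an independent rebuild")
            print("OK:", release_build)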

        1. 1

          re vulnerability brokers in general

          You’re giving me a lot of examples but missing, or disagreeing with, a fundamental point, so I’m going straight for it. It’s been bugging me a lot these past few years: most users and developers want their product to be vulnerable in order to achieve other goals. They willingly choose against security in a lot of ways. Users will go with a product that has lots of failures or hacks, even when safer ones are available, because it has X, Y, or Z traits that they think are worth it. Companies usually go with profit and/or feature maximization even when they can afford to boost QA or simplify. Both commercial and FOSS developers often use unsafe languages (or runtimes), limited security tooling, or minimal code review. These behaviors damn near guarantee that a lot of this software is going to be hacked. They do it anyway.

          So the market is pro-getting-hacked to the point that it almost exclusively uses things with lots of prior CVEs. The network effects and oligopolistic tactics of companies mean there are usually just a few options in each category. Black hats and exploit vendors are putting lots of time and money into the bug hunting that those suppliers aren’t doing and that customers are voting for with their wallets. There are going to be 0-days found in them. If there’s damage to be done, it will be done, just as each party decided with their priorities. With that backdrop, will your bug in a Linux or BSD make a difference to whether folks buying from Zerodium will hack that platform? Probably not. Will it make a difference as to who gets paid, and how much, if you choose responsible disclosure over them? Probably so.

          To drive that home: Microsoft, IBM, Google, and Apple all have both the brains and the money to make their TCBs about as bug-proof as they can get. If they care about security, that’s a good thing to do. If their paying users care, it’s an even better thing to do. Yet they spend almost nothing on preventative security compared to what they make on their products and services. They don’t care. They’ll put the vulnerabilities in themselves just to squeeze more profit out of customers. Letting a broker have them before someone else isn’t making much difference. That’s at least one damage-assessment angle.

          I think about it differently if the customer is paying a lot extra for what’s supposed to be good security. There, the supplier should be punished in the courts or something for lying, with the cost high enough that they start doing security or stop lying about what they’re not doing. Also, suppliers who have put in a good effort shouldn’t be punished over a little slip or a new class of attack. I’d rather the people finding those get paid so well by the companies and/or a government fund that they don’t go to vulnerability brokers most of the time. I just don’t have much sympathy for users or suppliers griping about vulnerability brokers when they both favor products they know will get hacked because they accepted the tradeoffs. Meanwhile, projects that balance features and security with strong review often languish with low revenues or (for FOSS) hardly any financial contributions.

          re suspicious builds

          All software is insecure and suspicious until proven otherwise by strong review. That’s straight-up what security takes. Since you mentioned it, the guy (Paul Karger) who first described the compiler-subversion attack that Thompson later demoed laid out some requirements for dealing with threats like that. Reproducible builds don’t begin to cover them, especially malicious developers. For the app, you need precise requirements, a design, a security policy, and proof that they’re all consistent, with nothing bad added and nothing good subtracted. Then a secure repo like the one described here. Then object-code validation, like the DO-178C guidance calls for, if you’re worried about compilers: done manually per app, or with a certifying compiler like CompCert once it has been validated. Then Karger et al. recommended that all of that be sent over a protected channel to customers so they can re-run the analyses/tests and build from source locally. That’s what it would take to stop people like him from pulling off a subversion attack. Those were 1970s-to-early-1990s-era requirements they used in military and commercial products.
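
          To make that last step concrete, here is a minimal sketch of the customer-side re-verification, assuming a detached signature on the source archive, a local rebuild, and a vendor-published digest. The file names, the build command, and the use of gpg/make are assumptions for illustration, not Karger’s exact tooling.

              # Sketch of customer-side verification (all names hypothetical):
              #  1. check the detached signature on the source archive received over the protected channel,
              #  2. rebuild from that source locally,
              #  3. compare the local artifact's digest to the one the vendor published.
              import hashlib
              import subprocess
              import sys

              # Step 1: signature check (assumes gpg is installed and the vendor key is already trusted).
              subprocess.run(["gpg", "--verify", "src.tar.gz.sig", "src.tar.gz"], check=True)

              # Step 2: local rebuild from the verified source (build command is a placeholder).
              subprocess.run(["make", "release"], check=True)

              # Step 3: compare against the digest sent over the protected channel.
              published = open("published.sha256").read().split()[0]
              local = hashlib.sha256(open("build/os-image.img", "rb").read()).hexdigest()
              if local != published:
                  sys.exit("Local rebuild does not match the vendor-published digest")
              print("Rebuild matches published digest")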

          re someone introduces vulnerability

          I’d correct the vulnerability. I’d ask them whether it was a slip-up or whether they’d like to learn more about preventing that, and I’d give them some resources. I’d review their submissions more carefully, throwing some extra tooling at them too. Anyone who keeps screwing up will be out of the project. People who improve will get a bit less review. However, as you saw above, my standard for secure software would already include strong review plus techniques for blocking the root causes of code injection and (if needed) covert channels. Half-assed code passing such a standard should usually not lead to big problems. If it does, they or their backers are so clever that you aren’t going to beat them by ejecting them anyway. Ejection is symbolic. Just fix the problem and add preventative measures for it if possible.
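
          For illustration, “throwing some extra tooling at them” could be as simple as the toy review gate below: diffs from contributors who have slipped up before get scanned for a handful of risky calls and escalated to a second reviewer. The watch list, patterns, and sample diff are all made up; a real project would hang a proper static analyzer or fuzzer off the same hook.

              # Toy "extra scrutiny" hook for contributors on a watch list.
              # Patterns and names are illustrative, not a real analyzer.
              import re

              WATCH_LIST = {"contributor_who_slipped_up"}  # hypothetical
              RISKY_PATTERNS = [
                  r"\bstrcpy\s*\(",   # classic overflow source
                  r"\bsprintf\s*\(",
                  r"\bsystem\s*\(",   # shell/command injection
                  r"\bexec[lv]p?\s*\(",
              ]

              def review_flags(author, diff_text):
                  """Return reasons this submission needs an extra human reviewer."""
                  flags = []
                  if author in WATCH_LIST:
                      flags.append("author is under heightened review")
                  for pattern in RISKY_PATTERNS:
                      if re.search(pattern, diff_text):
                          flags.append("diff touches risky call: " + pattern)
                  return flags

              if __name__ == "__main__":
                  sample_diff = "+    strcpy(buf, user_input);"
                  for reason in review_flags("contributor_who_slipped_up", sample_diff):
                      print("ESCALATE:", reason)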

          Notice that what I’m doing focuses on the project deliverables and their traits instead of the person. That’s intentional. If I have to trust them, my process is doing it wrong; at the least, I need more peer review and/or machine checks in it. As Roger Schell used to say, software built with the right methods is trustworthy enough that you could “buy it from your worst enemy.” He oversold it, but it seems mostly true for the low-to-mid-hanging fruit.

      2. 1

        free advertising to a company

        or a heads-up for people running those systems that an exploit vendor is restocking exploits targeting those platforms, which implies that either the exploits they had for the platform were recently patched or they were actually approached by a customer for targeted exploitation.