1. 29
  1. 9

    The black market would have paid much more for these exploits than the chump change most of these companies gave him as a bug bounty (if they paid anything at all).

    Given how much money we all know is wasted in big corporations everywhere, they should drastically increase their bounties if they really want to motivate people to become and remain white hats.

    1. 5

      I agree with you that bounties should be much higher, but there’s also substantially more risk involved in the black-hat world, so the bounties don’t actually need perfect parity.

      There’s also the risk of a “cobra effect” if your bounties get too high, where your employees or vendors are eventually incentivized to secretly collaborate with “researchers” to introduce and “find” security flaws.

      1. 3

        Is it illegal or unlawful to sell computer exploits?

        1. 7

          IANAL, but yes-ish. Even if the defendant is ultimately found not guilty, broadly worded laws like the Computer Fraud and Abuse Act (CFAA) or the DMCA are immensely powerful tools for silencing and intimidating security researchers. Various jurisdictions have similarly intimidating laws (Germany has the “Hacker paragraph”, which seems worse than the CFAA imho).

          To be research-friendly, bug bounty programs typically have to explicitly promise not to sue. E.g., https://blog.mozilla.org/security/2018/08/01/safe-harbor-for-security-bug-bounty-participants/

    2. 6

      The title blames HTTP/2, but most of the exploits presented here exist because HTTP/2 can reliably delimit requests and losslessly carry arbitrary header values, better than the downstream HTTP/1 proxies and applications the requests get downgraded to.
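
      To make this concrete, here is a hedged toy sketch (my own code, not the article’s, and not modeled on any specific proxy): HTTP/2 carries the method, path, and headers as length-prefixed binary fields, so a header value containing "\r\n" is perfectly legal at the H/2 layer, and only becomes an injection once a naive downgrading proxy serializes it back into HTTP/1.1 text.

      ```python
      # Hypothetical, minimal sketch of a naive HTTP/2 -> HTTP/1.1 downgrader;
      # the function and the example request are illustrative, not a real proxy.
      def downgrade_to_http1(method, path, headers, body=b""):
          lines = [f"{method} {path} HTTP/1.1"]
          for name, value in headers:
              lines.append(f"{name}: {value}")  # BUG: no check for CR/LF in value
          return ("\r\n".join(lines) + "\r\n\r\n").encode() + body

      # An attacker-controlled H/2 header value smuggles a second request into
      # the downgraded HTTP/1.1 byte stream:
      print(downgrade_to_http1(
          "POST", "/login",
          [("host", "example.com"),
           ("transfer-encoding", "chunked\r\n\r\nGET /admin HTTP/1.1\r\nhost: example.com")],
      ).decode())
      ```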

      1. 2

        In the conclusion, the author says these bugs are due to the complexity of HTTP/2, which I can see: if HTTP/2 weren’t tricky to implement, there wouldn’t be all of this HTTP/1 downgrading.

        On the other hand, these downgraders are embarrassingly bad. I think it’s pretty ridiculous that a major project like Apache isn’t handling the method field correctly, or that others are failing to do simple validation. It’s not like we haven’t known for decades that code generation (which is effectively what this is) is highly susceptible to security issues.
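
        For scale, the “simple validation” being skipped is tiny. A hedged sketch (the function name is mine; the character set is RFC 7230’s “tchar”, the only bytes legal in an HTTP/1 method or header name):

        ```python
        import string

        # RFC 7230 token characters; anything else (spaces, CR, LF, colons, ...)
        # must be rejected before an HTTP/2 field is written into HTTP/1 text.
        TCHAR = set("!#$%&'*+-.^_`|~" + string.ascii_letters + string.digits)

        def is_valid_token(field: str) -> bool:
            return bool(field) and all(c in TCHAR for c in field)

        assert is_valid_token("GET")
        assert not is_valid_token("GET / HTTP/1.1\r\nFoo: bar")  # method-field smuggling
        ```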

        1. 7

          I don’t think the root cause is protocol complexity. It may simply be that HTTP/1 came first, so it’s already supported everywhere. The vulnerable HTTP/1 proxies were probably considered “mature” and “battle-tested”, and H/2 has no speed advantage over localhost, so there was no incentive to switch.

      2. 2

        Unfortunately, they [Amazon] don’t have a research-friendly bug bounty program.

        Can someone elaborate on this?

        1. 4

          How else is Jeff gonna go to space?