Abstract:

Sophisticated attackers find bugs in software, evaluate their exploitability, and then create and launch exploits for bugs found to be exploitable. Most efforts to secure software attempt either to eliminate bugs or to add mitigations that make exploitation more difficult. In this paper, we introduce a new defensive technique called chaff bugs, which instead targets the bug discovery and exploit creation stages of this process. Rather than eliminating bugs, we instead add large numbers of bugs that are provably (but not obviously) non-exploitable. Attackers who attempt to find and exploit bugs in software will, with high probability, find an intentionally placed non-exploitable bug and waste precious resources in trying to build a working exploit. We develop two strategies for ensuring non-exploitability and use them to automatically add thousands of non-exploitable bugs to real-world software such as nginx and libFLAC; we show that the functionality of the software is not harmed and demonstrate that our bugs look exploitable to current triage tools. We believe that chaff bugs can serve as an effective deterrent against both human attackers and automated Cyber Reasoning Systems (CRSes).

  2.

    alternative idea: Don’t do this

    1.

      I wonder whether the introduced bugs, or any pattern they form, are detectable. If they are, attackers would move on to other targets rather than get trapped in ‘flypaper.’ Making attackers believe that the bugs are exploitable would be the real win. It would be like the tactic of keeping the telemarketer on the line to keep them from calling others.

      1.

        Making attackers believe that the bugs are exploitable would be the real win.

        That’s a really common strategy, called a honeypot system. Some honeypots even fake entire networks.

        1.

          I believe the initial assumption is that people treat large classes of bugs, like “the program crashes on invalid input”, as promising exploit candidates, in part because there is tooling to find those kinds of bugs (fuzzers and such). So you can maybe make that search harder if you inject a bunch of non-exploitable bugs for each of those common categories, so that fuzzers turn up far too many false positives. But yeah, then you have the usual arms race: can people just narrow their heuristics to exclude your fake bugs? There’s a small discussion of that from one of the authors on Twitter.

        2.

          The link to the actual paper is here. For the record, I hope that this idea never makes it into common practice. It seems like a return to security through obscurity, with the added detriment of increasing development, debugging, and testing complexity.

          1.

            This can happen as a side effect of obfuscating which defenses are present in the software, from the OS up. One might have several different methods to choose from, such as memory safety, CFI, and data-flow integrity. The attacker knows each is in use somewhere across the targets’ systems, but not which is in use for a specific target. This drives up the cost of an attack. On top of that, you get whatever level of protection the chosen scheme offers against the riff-raff, or even against accidental faults.