
Abstract: “Fuzzing is a software testing technique that quickly and automatically explores the input space of a program without knowing its internals. Therefore, developers commonly use fuzzing as part of test integration throughout the software development process. Unfortunately, this also means that the black-box, automatic nature of fuzzing is appealing to adversaries who are looking for zero-day vulnerabilities.

To solve this problem, we propose a new mitigation approach, called FUZZIFICATION, that helps developers protect the released, binary-only software from attackers who are capable of applying state-of-the-art fuzzing techniques. Given a performance budget, this approach aims to hinder the fuzzing process from adversaries as much as possible.

We propose three FUZZIFICATION techniques: 1) SpeedBump, which amplifies a small slowdown in normal executions by hundreds of times in the fuzzed execution, 2) BranchTrap, which interferes with feedback logic by hiding paths and polluting coverage maps, and 3) AntiHybrid, which hinders taint analysis and symbolic execution. Each technique is designed with best-effort, defensive measures that attempt to hinder adversaries from bypassing FUZZIFICATION.

Our evaluation on popular fuzzers and real-world applications shows that FUZZIFICATION effectively reduces the number of discovered paths by 70.3% and decreases the number of identified crashes by 93.0% from real-world binaries, and decreases the number of detected bugs by 67.5% from the LAVA-M dataset, while staying under user-specified overheads for common workloads. We discuss the robustness of FUZZIFICATION techniques against adversarial analysis techniques. We open-source our FUZZIFICATION system to foster future research.”
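The SpeedBump idea above can be sketched with a back-of-the-envelope model (Python; the `avg_slowdown` helper and all numbers are hypothetical illustrations, not taken from the paper): delays injected into rarely-executed "cold" paths cost normal workloads almost nothing, but a coverage-guided fuzzer deliberately steers into cold paths on most runs, so the same per-visit delay is amplified hundreds of times.

```python
def avg_slowdown(cold_hit_rate: float, per_hit_delay_ms: float,
                 base_exec_ms: float) -> float:
    """Expected slowdown factor (extra time / base time) per execution,
    given how often a run visits a delayed cold path."""
    return (cold_hit_rate * per_hit_delay_ms) / base_exec_ms

BASE_MS = 1.0    # hypothetical baseline time for one execution
DELAY_MS = 10.0  # hypothetical injected delay per cold-path visit

# A normal input hits a cold path in ~0.1% of runs; a fuzzer
# chasing new coverage hits one in ~50% of runs.
normal = avg_slowdown(0.001, DELAY_MS, BASE_MS)
fuzzed = avg_slowdown(0.5, DELAY_MS, BASE_MS)

print(f"normal-run overhead: {normal:.0%}")      # 1%
print(f"fuzzed-run overhead: {fuzzed:.0%}")      # 500%
print(f"amplification: {fuzzed / normal:.0f}x")  # 500x
```

With these (made-up) rates, a 1% cost on normal workloads becomes a 500% cost under fuzzing, matching the "hundreds of times" amplification the abstract claims.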


    Security by Amplifying the Obscurity


      Yeah, it’s an obfuscation approach. I was curious if it led to any interesting comments. I told an author on Hacker News to focus on improving the performance of stuff that addresses root causes.


        The paper is almost as if the authors read https://blog.regehr.org/archives/1687 and tried to invert everything he said. :-)

        Whilst fuzzing has been wondrous at finding CVEs… I wonder whether it is the tool of choice for finding 0-days?


          I’m too sleepy to look at the first point right now; I’ll check it later.

          On the other point, fuzzing has found plenty of 0-days that I’m aware of. That’s both AFL and some newer fuzzers. They tend to benchmark against each other on the same codebases, checking whether they find both the old bugs and any new ones. They sometimes find new ones; really good fuzzers find plenty of new ones unless the codebase has already been beaten to death by such tools. Same for static analysis, etc. Quite a few submissions I made mentioned new bugs.


      At least one of the techniques described expands the amount of time the compiled binary takes to perform tasks, up to some defined usability threshold, so as to make branch exploration time-intensive. That smells like proof-of-work, in so far as it amounts to “let’s solve this problem by burning more CPU cycles and putting more carbon in the atmosphere.”

      I’d be scared to learn what kind of carbon footprint a technique like this would have if implemented in widely-used software.
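On the usability-threshold point: the defender's only knob is how large the injected delay can get before normal workloads notice. A minimal sketch of that budgeting step (Python; the `max_delay_under_budget` name and all numbers are hypothetical, and the paper's actual system makes this decision at compile time from profiling data):

```python
def max_delay_under_budget(budget: float, cold_hit_rate: float,
                           base_exec_ms: float) -> float:
    """Largest per-visit delay (ms) such that the expected overhead on
    normal runs stays at or below `budget` (a fraction, e.g. 0.05 = 5%).

    Derived from: cold_hit_rate * delay / base_exec_ms <= budget
    """
    return budget * base_exec_ms / cold_hit_rate

# With a 5% budget, a 1 ms baseline, and cold paths hit in 0.1% of
# normal runs, each cold-path visit can afford roughly a 50 ms delay.
print(max_delay_under_budget(0.05, 0.001, 1.0))
```

Every extra millisecond inside that budget is pure wasted cycles for legitimate users too, which is the commenter's carbon-footprint objection.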