As the second slide shows… the number of papers on fuzzing is exploding.
What drives me nuts is that you try to use one of these fancy new fuzzers… and there is an explosion of dependencies and days of work before you (maybe) get one of them working.
Currently, there are only two I’d say are “production ready”, where you can just “apt install” and away you go… and they are AFL and libFuzzer.
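To libFuzzer’s credit, the bar really is that low: one file, one compiler flag. A toy harness, just to show the shape of it (the “FUZZ” magic-bytes check is a stand-in for whatever parser you actually want to test):

```cpp
// harness.cc - toy libFuzzer target; the magic-bytes check stands in
// for real parsing code under test.
// Build: clang++ -g -fsanitize=fuzzer,address harness.cc -o harness
#include <cstddef>
#include <cstdint>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (size >= 4 && data[0] == 'F' && data[1] == 'U' &&
      data[2] == 'Z' && data[3] == 'Z') {
    __builtin_trap(); // deliberate bug so the fuzzer has something to find
  }
  return 0;
}
```

Run ./harness with no arguments and it trips the trap within seconds. That “compile one file and go” experience is exactly what most research prototypes are missing.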
Sadly, although AFL is the basis for many next-gen fuzzers, it has gone unmaintained. (The last release was in 2017, and the author has taken to woodworking to soothe his nerves.)
I wish more fuzzing researchers would work on making their tools “apt install and go” instead of adding one more conference paper to their trophy cabinet.
Sadly, conferences and academia have some very perverse incentives.
You’re right about the incentives. On top of that, I don’t know that this paper really compared against state-of-the-art fuzzers. A few of them are state of the art, given they were made in the past year or so; I submitted those, too. I do remember submitting some that outperformed many on that list more recently. I think I read that Trail of Bits was making use of the binary fuzzer. It would be interesting to see how this tool fares against the more recent ones that beat the competition.
Personally, I’d also drop the weaker ones from new comparisons unless they caught things the better ones missed. Only include the truly top tools, with reproducible results on the same benchmark and good packaging. Otherwise, the comparisons are at least partly staged, given that we know some of the entrants are obsolete.
AFL still does its job well - it is rock solid as a fuzzer and never crashes. I have also written multiple instrumenters that prioritize paths AFL rarely hits (adding incentives to reach such functions), using the LLVM-pass infrastructure it ships with (the afl-gcc/afl-as-based code injector is too messy for my liking).
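For a flavor of what such an instrumenter looks like, here is a minimal sketch of a pass in the same spirit, written against LLVM’s legacy pass manager (exact APIs vary between LLVM versions). The __rare_path_hit hook is hypothetical, the “which blocks count as rare” heuristic is elided, and this is not AFL’s own pass:

```cpp
// rare_path_pass.cc - minimal sketch, NOT AFL's afl-llvm-pass.
// Assumes the legacy pass manager; APIs differ between LLVM versions.
#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"

using namespace llvm;

namespace {

struct RarePathPass : public ModulePass {
  static char ID;
  RarePathPass() : ModulePass(ID) {}

  bool runOnModule(Module &M) override {
    LLVMContext &C = M.getContext();
    IntegerType *Int32Ty = IntegerType::getInt32Ty(C);

    // Hypothetical hook provided by a companion runtime; it would bump a
    // counter in shared memory so rarely taken blocks weigh more when the
    // fuzzer culls its queue.
    FunctionCallee Hook =
        M.getOrInsertFunction("__rare_path_hit", Type::getVoidTy(C), Int32Ty);

    uint32_t SiteId = 0;
    for (Function &F : M) {
      if (F.isDeclaration())
        continue;
      // Real heuristic elided: in practice you would consult profile data
      // or a previous run's coverage bitmap to pick the "rare" blocks,
      // rather than instrumenting every block as done here.
      for (BasicBlock &BB : F) {
        IRBuilder<> IRB(&*BB.getFirstInsertionPt());
        IRB.CreateCall(Hook, {ConstantInt::get(Int32Ty, SiteId++)});
      }
    }
    return true; // module modified
  }
};

} // anonymous namespace

char RarePathPass::ID = 0;
static RegisterPass<RarePathPass> X("rare-path", "rare-path hit-counter sketch");
```

The point is how little ceremony the LLVM route needs compared to patching afl-as: one runOnModule, an IRBuilder per insertion point, and the compiler handles register allocation and relocation for you.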
True, my only sorrow is that there is a steady stream of fuzzing papers going by claiming to improve on AFL…
…but oh my, what an immense pain it is to actually get any of them up and running…
I wish they’d upstream their improvements (including updating the packaging).
Depends on what one’s goal is. AFL does its job well if the goal is to find some problems with a large expenditure of time and resources. It does far from well if you want to find the maximum number of problems with a small amount of time and resources. The latter is what the newer tools claim to be doing, and they’re doing it across all or most benchmarks, depending on the tool. That means one or more of them should be the new default, replacing the obsolete AFL.