Is this really fuzz testing? It feels more like property-based testing, since you’re comparing outputs at the end instead of just making sure that your test didn’t crash.
In the article, the author treats the two as the same - what difference do you have in mind?
The two can look similar on the output side, where you state a property to check. On the input side, though, the basic concepts are opposites. In property-based testing, we keep input generation close to what the developer specifies, trusting their intuition to find issues in the implementation of that specification. When fuzzing, we assume the developer might have the wrong specification in mind and that the program might have issues anywhere. That's why the input is truly random (i.e. unconstrained, versus property-based generation).
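To make the contrast concrete, here's a hypothetical sketch (the function names and the "sorted list" spec are made up, not from any particular library) of what each side feeds the program:

```python
import random

def pbt_input(rng):
    # Property-based: the developer specifies the shape of valid input,
    # e.g. "a sorted list of small integers" for a search routine, and
    # generation stays inside that specification.
    n = rng.randrange(0, 10)
    return sorted(rng.randrange(-100, 100) for _ in range(n))

def fuzz_input(rng):
    # Fuzzing: unconstrained random bytes, on the assumption that the
    # developer's specification itself may be wrong or incomplete.
    n = rng.randrange(0, 64)
    return bytes(rng.randrange(256) for _ in range(n))

rng = random.Random(0)
structured = pbt_input(rng)
noise = fuzz_input(rng)
print(structured)  # always a valid, spec-shaped input
print(noise)       # arbitrary bytes the spec never mentions
```

The property-based generator can only ever produce inputs the developer imagined; the fuzz generator routinely produces ones they didn't.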
Empirical evidence shows both find bugs, and they sometimes find different ones. We still can't reliably predict which method will be good enough for a given system. The great thing is that it's cheap to do both. So that's the current recommendation even in medium-assurance systems where verification costs must stay low: just do both. And if you have spare cycles, throw other forms of automated testing or analysis at anything that passed those two, in case something else turns up.
Unlike fuzzing, property-based testing has shrinking - if a PBT library's input generation finds any failures, it has built-in heuristics for generating simpler variants of that input. That way, it can turn a random failure with lots of irrelevant detail into a nearly minimal input that reproduces the bug.
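A toy illustration of the idea (a greedy shrinker written from scratch, not any library's actual algorithm): given a failing input and a predicate that reproduces the failure, repeatedly try deleting elements and halving values, keeping any variant that still fails.

```python
def shrink(failing_input, fails):
    """Greedily simplify a failing list while it keeps failing."""
    current = list(failing_input)
    progress = True
    while progress:
        progress = False
        # First try deleting each element in turn.
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if fails(candidate):
                current = candidate
                progress = True
                break
        else:
            # Then try replacing each element with a smaller value.
            for i, x in enumerate(current):
                for smaller in (0, x // 2):
                    candidate = current[:i] + [smaller] + current[i + 1:]
                    if smaller != x and fails(candidate):
                        current = candidate
                        progress = True
                        break
                if progress:
                    break
    return current

# Pretend bug: the code under test "crashes" whenever it sees a value >= 10.
crashes = lambda xs: any(x >= 10 for x in xs)
minimal = shrink([3, 17, 2, 99, 5], crashes)
print(minimal)  # a one-element list that still triggers the bug
```

Real libraries use much smarter heuristics, but the principle is the same: the irrelevant elements get deleted, and the one that matters gets driven toward the smallest value that still fails.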
Widely-used fuzzers (e.g. AFL) also do this.
In afl-fuzz's case, that's possible because it instruments the executable to measure branch coverage, so it can tell whether a simplified input still exercises the same behavior.
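That same coverage feedback is what drives input generation too. Here's a toy sketch of the loop (a from-scratch illustration in the spirit of coverage-guided fuzzing, nothing like AFL's real instrumentation, and the target program is invented): mutate inputs from a corpus, record which branches each run covers, and keep any input that reaches a branch no previous input did.

```python
import random

COVERAGE = set()  # branch IDs seen so far, across all runs

def target(data: bytes):
    # Invented target: deeper branches are only reachable once earlier
    # bytes are right, which is what makes coverage feedback useful.
    if data[:1] == b"F":
        COVERAGE.add("F")
        if data[1:2] == b"U":
            COVERAGE.add("FU")
            if data[2:3] == b"Z":
                COVERAGE.add("FUZ")

def mutate(data: bytes, rng) -> bytes:
    # Flip one byte; drawing from a tiny alphabet keeps the toy fast.
    buf = bytearray(data)
    buf[rng.randrange(len(buf))] = rng.choice(b"FUZZING!")
    return bytes(buf)

rng = random.Random(42)
corpus = [b"\x00\x00\x00"]
for _ in range(5000):
    child = mutate(rng.choice(corpus), rng)
    before = len(COVERAGE)
    target(child)
    if len(COVERAGE) > before:  # new branch reached: keep this input
        corpus.append(child)

print(sorted(COVERAGE))
```

Purely random input would almost never hit the innermost branch, but because each partial success is kept and mutated further, the fuzzer climbs into the deeper branches step by step.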
There’s overlap, but fuzzing and property-based testing seem to be coming from different directions and meeting in the middle.