1. 26
    1. 4

      The opening comments - particularly about print/parse round trips etc. - suggest a link between fuzzing and property-based testing that I’d love to see explored more. I know that a fuzzer based on Haskell QuickCheck exists but haven’t played with it.

      1. 4

        Properties are specifications: what your program is supposed to do. Other names include models and contracts. The code itself is how you attempted to do it. Tests generated from properties naturally check the how against the what. Finally, you or your tools can convert each property into a runtime check in the code before fuzzing it, which takes you right to the point of failure.

        Design-by-Contract, contract-based test generation, and fuzzing with contracts as runtime checks is a combo that should work in just about any language. Add static/dynamic analysis with low false-positive rates if your language has such tools. Run this stuff overnight to get more CPU time for fuzzing without dragging down the performance of your system while you use it.
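        To make the "property as runtime check" idea concrete, here is a minimal sketch in Python (no PBT library; the sort routine and its postcondition are invented for illustration):

```python
import random

def is_sorted_permutation(xs, ys):
    # The property (contract): output is ordered and a permutation of the input.
    return sorted(xs) == sorted(ys) and all(a <= b for a, b in zip(ys, ys[1:]))

def my_sort(xs):
    ys = sorted(xs)  # the "how" under test
    # Contract converted to a runtime check: fails right at the point of failure.
    assert is_sorted_permutation(xs, ys), (xs, ys)
    return ys

# Tests generated from the property: random inputs, contract as the oracle.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    my_sort(xs)  # any contract violation raises here, at the failing input
print("1000 generated cases passed")
```

        The same assertion serves both roles: a test oracle when you generate inputs, and a runtime check when you fuzz the whole program.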

      2. 2

        There are a couple of papers on Targeted PBT, which essentially adds argMax semantics to a QuickCheck library (at least the Erlang one). One can say "test this property using this somewhat non-trivial generator, and also try to maximize code coverage, as this may help generate interesting values". This is exactly what I did in this proof of concept [1]. It indeed finds counterexamples faster than the non-maximizing code; in this PoC, the non-maximizing version often doesn't find anything at all.

        I have discovered a passion for this technology and (plug!) am building what will essentially be a language-agnostic PBT/fuzzing tool, and hopefully a SaaS, at [2]!

        [1] https://github.com/fenollp/coverage_targeted_property_testing

        [2] https://github.com/FuzzyMonkeyCo/monkey
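        As a rough illustration of the argMax idea (not the Erlang implementation): mutate the best-scoring input so far instead of sampling blindly. The buggy function and the feedback score below are made up for the sketch:

```python
import random

def buggy(n):
    # Hypothetical bug reached only for large inputs.
    if n > 100_000:
        raise AssertionError("property violated")

def targeted_search(rounds=2000):
    """Targeted search: keep the candidate that maximizes a feedback score."""
    random.seed(1)
    best, best_score = 0, -1
    for _ in range(rounds):
        cand = best + random.randint(-10, 1000)  # mutate the best candidate
        try:
            buggy(cand)
        except AssertionError:
            return cand  # counterexample found
        score = cand  # stand-in feedback, playing the role of coverage
        if score > best_score:
            best, best_score = cand, score
    return None

cex = targeted_search()
print("counterexample found:", cex)
```

        Blind uniform sampling over the same range would rarely stumble into the bug; maximizing the feedback walks straight toward it.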

      3. 1

        The way I use the terms, the link is quite simple: both are instances of automated tests with generated input data, but with property-based testing there is a relatively strong oracle, whereas with fuzzing the oracle is limited to "did it crash?"

        This might be slightly different from how the author here uses the terms, though.

        1. 4

          Your point about the oracle is the biggest difference. I would expand on that: property-based testing can give you statistical guarantees, which means it tries to sample your program's input space according to some pre-defined probability distribution. It doesn't particularly care about things like coverage either (and as far as I understand it, property-based testing should not use feedback, though the lines are blurring [1]).

          Fuzzing, on the other hand, does not particularly care about statistical guarantees (not that you can't add them, but typically it is not done). All it cares about is "can I exercise interesting code that is likely to invoke interesting behaviors". So, while we use coverage as feedback for fuzzing, it is OK to leave aside parts of the program that are not interesting enough.

          At the end of the day, I would say the similarity is that both are test-generation tools (a category which also includes things like Randoop and EvoSuite, which are neither fuzzers nor property checkers).

          [1] ArbitCheck: A Highly Automated Property-Based Testing Tool for Java

        2. 3

          I used afl fuzzing to find bugs in math libraries, see e.g. [1] (i.e. things like "divide input a by b with two different libraries, see if the results match, otherwise throw an assertion error"). So you can get the "strong oracle" with fuzzing. I guess you can't really draw a sharp line between "fuzzing" and "property-based testing"; it's just different levels of test conditions. I.e. "doesn't crash" is also a "property" you can test for.

          [1] https://www.mozilla.org/en-US/security/advisories/mfsa2016-07/
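          A bare-bones version of that differential setup, with two invented stand-in implementations in place of real math libraries:

```python
import math
import random

def mean_a(xs):
    # Implementation 1: straightforward sum-then-divide.
    return sum(xs) / len(xs)

def mean_b(xs):
    # Implementation 2: incremental running mean.
    m = 0.0
    for i, x in enumerate(xs, 1):
        m += (x - m) / i
    return m

random.seed(2)
for _ in range(1000):
    xs = [random.uniform(-1e6, 1e6) for _ in range(random.randint(1, 50))]
    a, b = mean_a(xs), mean_b(xs)
    # Differential oracle: any disagreement becomes a crash a fuzzer can see.
    assert math.isclose(a, b, rel_tol=1e-6, abs_tol=1e-3), (xs, a, b)
print("implementations agree on 1000 random inputs")
```

          The assertion turns "the results don't match" into a plain crash, which is exactly the kind of signal a generic fuzzer like afl already knows how to detect.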

        3. 2

          The original twitter thread where he solicited ideas about how to write fuzzable code had a conversation about how PBT and fuzzing relate: https://twitter.com/mgambogi/status/1154913054389178369.

        4. 1

          Fuzzing does not limit the oracle to “did it crash?” Other oracles (address sanitizers, for example) are quite common.

          There’s obviously some overlap between fuzzing and property based testing, but:

          Fuzzing tends to work on the whole application, or a substantial part of it, at once. PBT is typically limited to a single function, although both fuzzing and PBT are useful in different scopes.

          Fuzzing tends to run for weeks on multiple CPUs, whereas PBT tends to run alongside unit tests, quickly.

          Fuzzing (often!) tends to use profile guidance, whereas PBT does not.

    2. 2

      I recently worked with Csmith, which generates random C programs. It found some bugs in several of my company's compilers (some serious, some not so serious).

      “But I Want Fuzzing My Code to be Harder, Not Easier”

      It depends on the type of project, but it seems perfectly reasonable to compile a 'debug' version which shows detailed information, but to release a version which just crashes with a generic error message. This way, you can fuzz your own program easily, but it will be harder for people looking for security holes in your program.
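      A sketch of that split in Python (the APP_DEBUG switch and parse_port are hypothetical): the same check exists in both builds, but only the debug build says where and why it failed:

```python
import os
import sys
import traceback

DEBUG = os.environ.get("APP_DEBUG") == "1"  # hypothetical debug/release switch

def parse_port(text):
    port = int(text)  # may raise on malformed input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def main(arg):
    try:
        return parse_port(arg)
    except Exception:
        if DEBUG:
            traceback.print_exc()  # full backtrace for your own fuzzing runs
        else:
            print("error")  # generic message in the released version
        sys.exit(1)

print(main("8080"))
```

      The fuzzer run against the debug build gets a precise failure location for free, while the shipped binary gives an attacker only "error".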

      1. 3

        I don’t think this is buying you as much “security” as you might think it does.

        1. 1

          So you don’t think that detailed error messages make fuzzing easier? Or you don’t think that fuzzing will show the existence of security problems? I’m eager to hear your argument.

          1. 3

            You won’t really make fuzzing harder by stripping symbols, removing backtrace generation, etc. There are easy reverse-engineering / black-box analysis tools like afl-unicorn that will do the job just fine.

            1. 1

              Cool! I wasn’t aware of that.