1. 9
    1. 3

      Now if only there were a matching dpkg for each one…..

      Sadly, fuzzing is mostly an academic paper mill rather than a software mill.

      Out of the box on Ubuntu you only get:

      afl/bionic,now 2.52b-2 amd64 [installed] instrumentation-driven fuzzer for binary formats

      afl-cov/bionic,bionic 0.6.1-2 all code coverage for afl (American Fuzzy Lop)

      fusil/bionic,bionic 1.5-1 all Fuzzing program to test applications

      libfuzzer-9-dev/bionic-updates,bionic-security 1:9-2~ubuntu18.04.2 amd64 [installed] Library for coverage-guided fuzz testing

      wfuzz/bionic,bionic 2.2.9-1 all Web application bruteforcer

      zzuf/bionic,now 0.15-1 amd64 [installed] transparent application fuzzer
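
      Of those, libfuzzer is probably the quickest to get a feel for. A minimal harness looks something like this (a rough sketch only; parse_input() is a hypothetical stand-in for whatever code you want to stress, and the build line assumes clang is installed):

      /* harness.c -- build with: clang -g -O1 -fsanitize=fuzzer,address harness.c */
      #include <stddef.h>
      #include <stdint.h>

      /* hypothetical stand-in for the code under test */
      static void parse_input(const uint8_t *data, size_t size) {
          (void)data;
          (void)size;
      }

      /* libFuzzer entry point: called repeatedly with coverage-guided inputs */
      int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
          parse_input(data, size);
          return 0;   /* other return values are reserved */
      }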

      1. 2

        Interesting that https://gitlab.com/akihe/radamsa is not packaged in apt. IIRC Homebrew, for example, has it. Well, it’s best to install from source in any case – usually you want the latest and the best when fuzzing.

        1. 1

          Fascinating.

          Nice idea…..

          Very light on dependencies… trivial to build and install.

          Comes along with its own Scheme interpreter and a bunch of Scheme programs by the look of it!
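
          For what it’s worth, the usual way to drive radamsa is as a pure mutation source over sample inputs. A rough sketch of that pattern (my assumptions: a POSIX-ish system, radamsa on PATH, a seed.txt sample, and parse_input() standing in for the code being stressed):

          #include <stdio.h>

          /* hypothetical stand-in for the code being stressed */
          static void parse_input(const char *buf, size_t len) {
              (void)buf;
              (void)len;
          }

          int main(void) {
              static char buf[65536];                      /* truncates big outputs; fine for a sketch */
              for (int i = 0; i < 1000; i++) {
                  /* each radamsa invocation writes one mutated copy of the seed to stdout */
                  FILE *p = popen("radamsa seed.txt", "r");
                  if (!p) { perror("popen"); return 1; }
                  size_t len = fread(buf, 1, sizeof buf, p);
                  pclose(p);
                  parse_input(buf, len);                   /* a crash or failed assert here is a find */
              }
              return 0;
          }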

      2. 2

        We have enough fuzzers already, we want people to run them and find interesting stuff.

        Anything slightly useful eventually becomes an academic paper mill. It’s the nature of the system. Research is like VC investments: most die and a few take off spectacularly.

        Anyway back to fuzzing.

        Here is a discussion of one of the listed papers, which I thought was excellent work: http://shape-of-code.coding-guidelines.com/2020/01/27/how-useful-are-automatically-generated-compiler-tests/

        1. 2

          Yes, I read your blog post when it popped up on my feed.

          Very interesting indeed.

          It reminds me of a moment of “Too Much Truth in Advertising”….

          One of the big static analysis firms used to have a page of recommendations from happy customers.

          One customer, a Big household name, said something like, “We used X to find tens of thousands of real bugs in code that we have been shipping to our customers for more than a decade!”

          Which immediately told me most bugs are never found in testing, and if they are, they probably aren’t triggered, and if they are, it probably doesn’t matter…..

          Which also says that, by far, most software is in the grade of serving up cat pictures: if it fails to serve up one picture to one person… who cares? i.e. most software isn’t doing stuff that really matters.

          Which also says that in the fields where it really, really, really does matter (avionics / self-driving cars / ….), by far most practical experience of software engineering isn’t really relevant.

          Except as a warning that “Here be Dragons! Do you really want to trust this stuff?”

          And also, don’t use C. I’m not sure what The One True language is…. but I bet it is one that makes automated whole-program static analysis a lot easier than C does.

          All this said, to me, defects really do matter, even if you’re only serving cat pictures….

          Why?

          Because testing and debugging a change built on top of a pile of flakiness is much much much harder than testing and debugging one built on a rock solid foundation.

          Because as our systems get bigger and bigger, built on more and more layers, the probability of one of the tens of thousands of very low probability bugs biting us tends to one.
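
          (Back-of-the-envelope, with made-up numbers: 10,000 independent bugs, each with a 1-in-10,000 chance of firing on a given day, gives 1 - (1 - 1/10000)^10000 ≈ 1 - 1/e ≈ 63% chance that at least one bites you that day.)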

          As usual, MonkeyUser puts it succinctly… https://www.monkeyuser.com/assets/images/2019/139-mvp.png

          Which brings me back to fuzzing: I’m using fuzzing and watching the field because of one simple habit.

          When I start working on an area of code…. I stress the hell out of it, and make it rock solid.

          Then I start with any enhancements……

          Then I stress the hell out of my work.

        2. 2

          “We have enough fuzzers already, we want people to run them and find interesting stuff.”

          I would say, not really. In the hierarchy of fuzzers, we are struggling to reach or go beyond level 3 – that is, we can generate syntactically valid programs if we have the grammar, but anything beyond that is really hard. We are still making progress, but we are nowhere near fuzzing programs that take multi-level inputs (most of the interesting stuff happens beyond the first-level parsers).
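
          To make that “level 3” concrete: grammar-based generation is essentially random expansion of production rules. A toy sketch for a tiny expression grammar (my own illustration, not any particular tool) – it will happily emit syntactically valid inputs all day, and says nothing about the semantic levels above:

          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          /* toy grammar:  expr -> term | term op term ;  term -> NUM | '(' expr ')' */
          static void gen_expr(int depth);

          static void gen_term(int depth) {
              if (depth <= 0 || rand() % 2) {
                  printf("%d", rand() % 100);              /* NUM */
              } else {
                  printf("(");
                  gen_expr(depth - 1);
                  printf(")");
              }
          }

          static void gen_expr(int depth) {
              gen_term(depth);
              if (depth > 0 && rand() % 2) {
                  printf(" %c ", "+-*/"[rand() % 4]);      /* op */
                  gen_term(depth - 1);
              }
          }

          int main(void) {
              srand((unsigned)time(NULL));
              for (int i = 0; i < 5; i++) {                /* a handful of random, valid test inputs */
                  gen_expr(4);
                  putchar('\n');
              }
              return 0;
          }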

          “Sadly, fuzzing is mostly an academic paper mill rather than a software mill.”

          Unfortunately, I mostly agree with this. Quite a lot of fuzzing papers seem to be making rather limited improvements to the domain, and original ideas are few and far between. I believe part of the reason is faulty measurement. Given a target such as a benchmark suite of programs and known faults, it is relatively easy to over-optimize for them. On the other hand, finding bugs in numerous programs may often only mean that you went looking for them, and may not say anything more about the impact or originality of your approach.

          1. 1

            “Quite a lot of fuzzing papers seem to be making rather limited improvements to the domain, and original ideas are few and far between.”

            Actually I’d argue most fuzzing papers are tweaks on afl (or, to a lesser extent, libfuzzer), since they are so easily available.

            If the first task in reproduction is downloading and building/patching/fixing an arcane set of out-of-date dependencies…. no giants are going to be standing on your shoulders.