Threads for fenollp

  1. 12

    Anyone else liked the labelled arguments? I liked how they added expressivity to the code without looking hackish, and I think it was a clever idea to make the label different from the parameter name.

    1. 6

      Thank you! All credit goes to the clever people who designed Swift. We went through quite a few different designs but it turned out that copying Swift was best.

      1. 4

        well, this predates Swift by ages — this style of labeled args came to Swift directly from Objective-C :)

        1. 4

          I meant specifically the syntax here, but yes! We’re certainly standing on the shoulders of giants

          1. 4

            This predates Objective-C by ages. It came to Objective-C from Smalltalk.

        2. 3

          I didn’t get it at first, but took a second look upon reading your comment. They’re really nice! I can see it being useful to have both a label and a parameter name, since you refer to the parameter differently when calling the method versus inside its body. Correct me if I’m wrong, but you could use it like this in pseudocode:

          pub fn run_job(at time) {
            if (time == now) execute(job_id)
          }

          run_job(at: 2020-01-01)

          I’ve had “cute” code in Ruby where I’d have arguments named “at” or “in”, which is nice when calling the method, but not as nice in the body of the method. That’s a really neat feature.

          1. 2

            Yes, exactly! It reads really well from both outside and inside.

          2. 1

            Too bad default values and random ordering didn’t make it.

            1. 5

              When using labelled arguments, random ordering is supported, and default values may come later :)
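              Reordering with labels works much like keyword arguments elsewhere. A minimal Python sketch (the function and parameter names are invented for illustration, and note that Python, unlike Gleam, cannot make the label distinct from the parameter name):

```python
# Once arguments are named at the call site, they may appear in any order.
def run_job(at, retries=0):
    return (at, retries)

# Both calls are equivalent despite the different argument order,
# and `retries` demonstrates a default value.
assert run_job(at="2020-01-01", retries=3) == run_job(retries=3, at="2020-01-01")
```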

          1. 4

            The opening comments, particularly about print/parse round trips, suggest a link between fuzzing and property-based testing that I’d love to see explored more. I know that a fuzzer based on Haskell QuickCheck exists but haven’t played with it.

            1. 4

              Properties are specifications: what your program is supposed to do. Other names include models and contracts. The code itself is how you attempted to do it. Tests generated from properties naturally check the how against the what. Finally, you or your tools can convert each property to a runtime check in the code before fuzzing it. That takes you right to the point of failure.

              Design-by-Contract, contract-based test generation, and fuzzing with contracts as runtime checks is a combo that should work in almost any language. Add static/dynamic analysis with low false positives if your language has such tools. Run this stuff overnight to get more CPU time fuzzing without dragging down your system’s performance while you use it.
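              A minimal sketch of that combo in Python, with an invented clamp function: the contract lives in the code as assertions, and a plain random-input loop uses those assertions as its oracle, so a failure stops exactly at the violated condition.

```python
import random

def clamp(x, lo, hi):
    # Precondition (part of the contract): the range must be well-formed.
    assert lo <= hi, "precondition violated: lo > hi"
    result = max(lo, min(x, hi))
    # Postcondition: the result always lies within [lo, hi].
    assert lo <= result <= hi, "postcondition violated"
    return result

# A tiny fuzz loop: random inputs, with the contract itself as the oracle.
rng = random.Random(0)
for _ in range(1_000):
    lo, hi = sorted(rng.uniform(-1e6, 1e6) for _ in range(2))
    clamp(rng.uniform(-1e9, 1e9), lo, hi)
```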

              1. 2

                There are a couple of papers on Targeted PBT, essentially adding argMax semantics to (at least an Erlang) QuickCheck lib. One can say “test this property using this somewhat non-trivial generator, and also try to maximize code coverage, as this may help the generation of interesting values”. This is exactly what I did in this proof of concept [1]. It indeed finds counterexamples faster than the non-maximizing code. In this PoC the non-maximizing version often doesn’t find anything at all.

                I have discovered a passion for this technology and (plug!) am building what will essentially be a language-agnostic PBT/fuzzing tool and hopefully SaaS at [2]!
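                A toy, standard-library-only version of that argMax idea (the buggy function and the utility score are both invented for illustration): instead of sampling uniformly, keep mutating the best-scoring input seen so far, which steers generation toward the interesting region.

```python
import random

def buggy(x):
    # Hypothetical bug hiding in a narrow input region.
    if 4240 <= x <= 4244:
        raise ValueError("counterexample")

def utility(x):
    # Stand-in for feedback such as coverage: closer to the bug scores higher.
    return -abs(x - 4242)

def targeted_search(seed, rounds=20_000):
    rng = random.Random(seed)
    best = start = rng.randint(0, 100_000)
    for _ in range(rounds):
        candidate = best + rng.randint(-50, 50)
        try:
            buggy(candidate)
        except ValueError:
            return True, candidate, start  # reached the interesting region
        if utility(candidate) > utility(best):
            best = candidate  # hill-climb: keep the highest-utility input
    return False, best, start
```

                Uniform sampling over 0..100000 would hit the five-value window only about once in 20000 tries; the hill-climb walks toward it instead.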

                1. 1

                  The way I use the terms, the link is quite simple: both are instances of automated tests with generated input data, but with property based testing, there is a relatively strong oracle, whereas with fuzzing, the oracle is limited to “did it crash?”

                  This might be slightly different to how the author here uses the terms, though.

                  1. 4

                    Your point about the oracle is the biggest difference. I would expand on that: property-based testing can give you statistical guarantees, which means it tries to sample your program’s input space according to some pre-defined probability distribution. It doesn’t particularly care about things like coverage either (and as far as I understand it, property-based testing should not use feedback — but lines are blurring [1]).

                    Fuzzing, on the other hand, does not particularly care about statistical guarantees (not that you can’t add them, but typically it is not done). All it cares about is “can I exercise interesting code that is likely to invoke interesting behaviors?”. So, while we use coverage as feedback for fuzzing, it is OK to leave aside parts of the program that are not interesting enough.

                    At the end of the day, I would say the similarity is that both are test-generation tools (a category that also includes things like Randoop and EvoSuite, which are neither fuzzers nor property checkers).

                    [1] ArbitCheck: A Highly Automated Property-Based Testing Tool for Java

                    1. 3

                      I used afl fuzzing to find bugs in math libraries, see e.g. [1] (i.e. things like “divide input a by b with two different libraries, see if the results match, otherwise throw an assert error”). So you can get the “strong oracle” with fuzzing. I guess you can’t really draw a sharp line between “fuzzing” and “property-based testing”; it’s just different levels of test conditions. I.e. “doesn’t crash” is also a “property” you can test for.
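                      That differential setup fits in a few lines of Python (the two “libraries” here are just two ways of computing a mean, purely for illustration): feed both the same random input and assert that the results agree.

```python
import math
import random
import statistics

rng = random.Random(42)
for _ in range(1_000):
    data = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(1, 50))]
    a = statistics.fmean(data)  # "library A"
    b = sum(data) / len(data)   # "library B"
    # Differential oracle: both implementations must agree
    # (up to floating-point rounding).
    assert math.isclose(a, b, rel_tol=1e-9, abs_tol=1e-6), (a, b)
```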

                      1. 2

                        The original Twitter thread where he solicited ideas about how to write fuzzable code had a conversation about how PBT and fuzzing relate:

                        1. 1

                          Fuzzing does not limit the oracle to “did it crash?” Other oracles (address sanitizers, for example) are quite common.

                          There’s obviously some overlap between fuzzing and property based testing, but:

                          Fuzzing tends to work on the whole application, or a substantial part of it, at once; PBT is typically applied to a single function, though both can be useful at either scope.

                          Fuzzing tends to run for weeks on multiple CPUs, whereas PBT tends to run alongside unit tests, quickly.

                          Fuzzing (often!) tends to use profile guidance, whereas PBT does not.

                      1. 3

                        Lots of things! Three of the most fun side projects I have ongoing:

                        • a thing that projects files into colored particles inside a 3D cube. Future: dive through pi’s decimals in a VR headset.
                        • a work in progress JIT for BEAM languages that uses the amazing tracing capabilities of the Erlang VM to make decisions and recompile optimized modules & hot-swap them without reboots.
                        • a client (server isn’t FOSS :|) that aims to simplify developer UX greatly when it comes to testing software with QuickCheck / Hypothesis / fuzzers. I want this to become my job eventually!