1. 12

  2. 2

    Round-tripping is one example of why I’ve never seen the value of generative testing: if you can formalise your properties to the extent that you can do generative testing, can’t you encode them right into the structure of your code? Why are you repeating the definition of how a code value corresponds to a serialised value in two separate places that can get out of sync, and then testing to make sure they are in sync - can’t you just define that relation in one place and derive your serialisation/deserialisation from it, so that they always round-trip by construction?

    1. 1

      TL;DR: it is easy to spot errors using a formal spec; it is really hard to generate an entire routine from one.

      Well, basically, there is a huge gap between finding out some properties of a routine and fully specifying an operation.

      Let’s take the article’s example: the author identified one invariant here ( ast == parse(print(ast)) ), but this one property is not enough to implement a language printer… For instance, nothing is said about the character encoding in this predicate (there are a thousand other examples, I just picked one at random for the sake of my argument :) )

      This is basically why full software verification and code generation are non-starters in industry for now. While it is quite easy to find some properties and do a partial verification with them, it is a whole new story when you try to implement something using only predicates. This is why formal methods like Atelier B or VDM still use state machines for the implementation part.

      All you can do with a partial formal specification is:

      • Property-based testing, using the predicates as test cases: you generate tests and check the implementation against them. When a test fails, you fix the bug before the code reaches your production environment. (A sketch follows after this list.)
      • Contract-oriented programming: you add sanity checks at runtime, coupled with some fault-tolerance/error-recovery mechanism. Eiffel and Ada have nice tools for doing that (sadly, I have not found anything comparable for Haskell…)
      • Both of the above.
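
      For the first option, here is a minimal QuickCheck sketch of what I mean, using a made-up toy expression language and a hand-rolled printer/parser pair (Expr, printExpr and parseExpr are hypothetical names, not from the article); the article’s invariant is used directly as the property:

        import Data.List (stripPrefix)
        import Test.QuickCheck

        -- A made-up toy language: nested additions of integer literals.
        data Expr = Lit Int | Add Expr Expr
          deriving (Eq, Show)

        instance Arbitrary Expr where
          arbitrary = sized go
            where
              go 0 = Lit <$> arbitrary
              go n = oneof [ Lit <$> arbitrary
                           , Add <$> go (n `div` 2) <*> go (n `div` 2) ]

        -- Printer: fully parenthesised, so parsing back is unambiguous.
        printExpr :: Expr -> String
        printExpr (Lit n)   = show n
        printExpr (Add a b) = "(" ++ printExpr a ++ "+" ++ printExpr b ++ ")"

        -- Parser: hand-rolled recursive descent over the same grammar.
        parseExpr :: String -> Maybe Expr
        parseExpr s0 = case expr s0 of
          Just (e, "") -> Just e
          _            -> Nothing
          where
            expr ('(' : rest) = do
              (a, rest1) <- expr rest
              rest2      <- stripPrefix "+" rest1
              (b, rest3) <- expr rest2
              rest4      <- stripPrefix ")" rest3
              pure (Add a b, rest4)
            expr s = case reads s of
              [(n, rest)] -> Just (Lit n, rest)
              _           -> Nothing

        -- The invariant from the article ( ast == parse(print(ast)) ) as a property.
        prop_roundTrip :: Expr -> Bool
        prop_roundTrip ast = parseExpr (printExpr ast) == Just ast

        main :: IO ()
        main = quickCheck prop_roundTrip

      Running quickCheck generates random Expr values and checks the invariant on each of them; when it fails, it prints a counterexample you can turn into a regression test.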

      Hope this was helpful; I can provide more concrete examples if needed.

      1. 2

        Encoding is a good simple example. It makes no sense to specify that \u0080 encodes as 0x80 and then, separately, that 0x80 decodes to \u0080. Couldn’t we write some kind of encoding-table data structure that includes the relation \u0080 <-> 0x80 (and inherently enforces that our encoding is prefix-free), and then derive both the encode and decode implementations from this single source of truth? That’s what I’m trying to suggest.
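
        Here is a rough Haskell sketch of the idea with a made-up two-entry table (nothing like real UTF-8, and a real encoding would map to byte sequences), just to show both directions being derived from one place:

          import Data.Tuple (swap)
          import Data.Word (Word8)

          -- Single source of truth: a toy relation between code points and bytes.
          table :: [(Char, Word8)]
          table = [('\x0041', 0x41), ('\x0080', 0x80)]

          -- Both directions are derived from the same table, so they cannot drift apart.
          encodeChar :: Char -> Maybe Word8
          encodeChar c = lookup c table

          decodeByte :: Word8 -> Maybe Char
          decodeByte b = lookup b (map swap table)

        As long as the table is one-to-one, decoding an encoded character gives back the original by construction, with no second definition to keep in sync.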

        1. 1

          Ok, I get your point; I was confused in just the same way for a long time.

          This actually only works if the function you are defining is bijective: in our case, only if the decoder is strictly the inverse of the encoder, which is a false assumption in the case of UTF-8.

          Here’s a counterexample based on the UTF-8 encoding problem.

          Let’s assume that:

          1. Computer A runs a Windows-based OS and encodes a Unicode string to a UTF-8 file.
          2. Computer B runs a Unix-based OS and decodes the UTF-8 file previously written.
          3. Computer B re-encodes the Unicode string it just decoded.

          Now under the hood:

          1. “hello\nworld” -> “\x68\x65\x6C\x6C\x6F\x0D\x0A\x77\x6F\x72\x6C\x64”
          2. “\x68\x65\x6C\x6C\x6F\x0D\x0A\x77\x6F\x72\x6C\x64” -> “hello\nworld”
          3. “hello\nworld” -> “\x68\x65\x6C\x6C\x6F\x0A\x77\x6F\x72\x6C\x64”

          As you can see “\x68\x65\x6C\x6C\x6F\x0D\x0A\x77\x6F\x72\x6C\x64” != “\x68\x65\x6C\x6C\x6F\x0A\x77\x6F\x72\x6C\x64”, still, both are perfectly valid representations of the same string.

          Basically, you have just been screwed by a wonky newline representation and poor OS interoperability. Still, you need to handle that. And here we are talking about a really small language, one composed of just its alphabet, with almost no grammar at all.

          If you were using decoding = encoding⁻¹ (the inverse function of encoding), you would have a bug here.
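
          A caricature of the newline part in Haskell (function names made up), showing why the reader has to accept more than the writer emits:

            -- Windows-style writer: every '\n' goes out as "\r\n".
            encodeWindows :: String -> String
            encodeWindows = concatMap (\c -> if c == '\n' then "\r\n" else [c])

            -- Robust reader: accepts both "\r\n" and bare "\n" as a newline.
            decodeAny :: String -> String
            decodeAny ('\r' : '\n' : rest) = '\n' : decodeAny rest
            decodeAny (c : rest)           = c : decodeAny rest
            decodeAny []                   = []

            -- decodeAny (encodeWindows s) == s (for s without a stray '\r'), yet decodeAny
            -- also accepts bare "\n" input that encodeWindows never produces, so it is
            -- strictly more than the inverse of encodeWindows: the reader handles more
            -- cases than the writer.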

          Let me wrap up my argument.

          • Checking the round trip when designing a printer/parser is important, because it lets you find a lot of errors.
          • Round-trip correctness is not enough in most cases, because your parser/printer still needs to handle weird edge cases beyond it.

          TL;DR: the real world is a poorly designed mess :)

          1. 1

            “hello\nworld” -> “\x68\x65\x6C\x6C\x6F\x0D\x0A\x77\x6F\x72\x6C\x64”

            That’s not what happens when you write your own UTF-8 encoder. You get \x68\x65\x6C\x6C\x6F\x0A\x77\x6F\x72\x6C\x64 everywhere; you only get an \x0D byte if you had an \r codepoint. (Indeed, I’d argue it’s a problem you hit precisely when you use different definitions of your encoder and decoder - Windows’ and Unix’s.)

            (And in any case, it’s not the kind of issue you could catch with this kind of testing, or that you should solve in your UTF-8 encoder/decoder)

            1. 1

              That’s not what happens when you write your own UTF-8 encoder.

              The point of UTF-8 is to provide interoperability; you cannot oblige people to use your particular encoder, so you need to deal with that kind of thing.

              Indeed I’d argue it’s a problem you hit precisely when you use different definitions of your encoder and decoder - windows’ and unix’

              This is my point: the round-trip property does not entirely cover the implementation. The parser often needs to handle more cases than the printer.

              Apparently using UTF-8 as an example was confusing. My point was that the real world is complicated and always has edge cases like this one. This is actually why writing correct parsers is so hard.

              The approach you are mentioning has been implemented under the name of BiYacc; I encourage you to try that software to implement a printer/parser pair for a simple language. I personally used it to implement a parser for this language and went back to parsec because of the limitations I mentioned above (and because of some purely technical problems using BiYacc).

      2. 1

        IME it’s a few things:

        • Oftentimes, for me, it takes building the generator to realize which cases I didn’t think would happen. I could take that experience and just turn it into a check when done.
        • Sometimes your implementation has other requirements, such as performance, that make the code less clear. A property test can be implemented the simple, clear way and used to verify that the clever way is still correct (see the sketch below).
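
        For the second point, something like this hypothetical QuickCheck sketch (names made up): the slow-but-obvious implementation acts as the oracle for the clever one.

          import qualified Data.List as L
          import Test.QuickCheck

          -- Clear-but-slow reference implementation: insertion sort.
          referenceSort :: [Int] -> [Int]
          referenceSort = foldr ins []
            where
              ins x [] = [x]
              ins x (y : ys)
                | x <= y    = x : y : ys
                | otherwise = y : ins x ys

          -- Stand-in for the optimised, less obvious implementation.
          fastSort :: [Int] -> [Int]
          fastSort = L.sort

          -- The property: the clever version agrees with the obvious one.
          prop_agree :: [Int] -> Bool
          prop_agree xs = fastSort xs == referenceSort xs

          main :: IO ()
          main = quickCheck prop_agree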
      3. 1

        Hmm. I wonder if you could achieve the goal in a term-rewriting language and/or logic, since those define the properties plus how they are related via transformations. It would seem doable so long as reversible operators are used. An example is Maude (especially the K framework), which has been used for C semantics among others.

        1. 1

          Great post, thanks for sharing!

          Too bad the author did not include a kind of contract sanity check on top of the monadic parser…
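
          Something along these lines would do it, I think (a hypothetical wrapper that re-checks the article’s invariant at runtime on the value actually parsed):

            -- Hypothetical runtime contract on top of an existing parser/printer pair:
            -- after parsing, re-check the round-trip invariant on the parsed value.
            parseChecked :: Eq ast => (String -> Maybe ast) -> (ast -> String) -> String -> Maybe ast
            parseChecked parse pretty input = do
              ast <- parse input
              if parse (pretty ast) == Just ast
                then Just ast
                else error "sanity check failed: ast /= parse (print ast)"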