1. 24

Blame /u/jamesjporter for the title.

  1. 3

    This type of work is interesting to me, but I’d also like to hear about what I would consider the next step of this process: checking the code against your specification.

    It seems the implementation still leaves room for error, and I’d be curious to see language options where the compiler enforces your invariants, similar to Rust’s borrow checker (as an example).

    1. 6

      Great question! I’m going to give you what might be the worst possible answer and say that I consider “leaves room for error” a sort of worse-is-better feature. Formal verification is a lot harder than formal specification and has a much higher learning curve. “TLA+ can’t guarantee your implementation’s correct” is the price you pay for being able to write good specifications quickly. For my line of work, I think that’s a good tradeoff. On the other hand, if you were writing more serious systems, you’d want to be much more careful.

      1. 5

        I’ll add for anyone following along that the main value of hwayne’s link to the lander project is that it shows all the little invariants & verification conditions generated from requirements down to implementation. These represent a combo of the method’s limitations & the intrinsic complexity of the problem, where every assumption, dependency, etc. in the code is stated explicitly. It’s what you have to mentally juggle and handle correctly to get full correctness. And that’s a very simple system; one can imagine complex systems have even more preconditions, invariants, and postconditions that aren’t readily visible to the user.

        To hwayne: Adding to my comment to blake, I did find a tactic for dealing with some of that. Instead of doing everything in Event-B, a team recently split the work between a high-level design verified in Event-B and code verified by SPARK Examiner. They just feed the verification conditions from one into the other. I think this strategy really needs more exploration, especially with tools like Microsoft’s VCC or Frama-C. Or even TLA+ verifying the algorithm’s interactions, with the ordering invariants fed into such tools to show the code doesn’t break the model. Stuff like that. Here’s the new research:

        https://pdfs.semanticscholar.org/481c/d4d2409115429f4b824f370eb08fa338d67a.pdf
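
        To make that split concrete, here’s a minimal sketch (mine, not from the paper) of feeding a high-level ordering invariant into code-level verification conditions. It assumes the high-level model already proved FIFO ordering for a bounded queue; the ACSL annotations restate that property so a tool like Frama-C can try to prove the C code respects it. The queue type and names are hypothetical.

            /* Hypothetical bounded FIFO queue. The contracts restate an
             * ordering invariant a high-level model would have proved. */
            #include <stddef.h>

            #define CAP 16

            typedef struct {
                int items[CAP];
                size_t head;   /* index of the oldest element */
                size_t count;  /* number of stored elements   */
            } queue;

            /* the last ensures clause pins the new element to the tail,
             * preserving arrival order */
            /*@ requires \valid(q) && q->head < CAP && q->count < CAP;
                assigns q->items[0 .. CAP-1], q->count;
                ensures q->count == \old(q->count) + 1;
                ensures q->items[(\old(q->head) + \old(q->count)) % CAP] == x; */
            void enqueue(queue *q, int x) {
                q->items[(q->head + q->count) % CAP] = x;
                q->count++;
            }

            /* the first ensures clause is the FIFO property itself:
             * the oldest element leaves first */
            /*@ requires \valid(q) && q->head < CAP && q->count > 0 && q->count <= CAP;
                assigns q->head, q->count;
                ensures \result == \old(q->items[q->head]);
                ensures q->count == \old(q->count) - 1; */
            int dequeue(queue *q) {
                int x = q->items[q->head];
                q->head = (q->head + 1) % CAP;
                q->count--;
                return x;
            }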

        1. 3

          For my line of work, I think that’s a good tradeoff. On the other hand, if you were writing more serious systems, you’d want to be much more careful.

          Yeah that point comes across in the post. The process of writing and verifying the specification did force a lot of learning about the problem domain and likely led to a much more accurate implementation.

          1. 2

            That’s one of the classic motivations for formal specification. It’s an explicit description of the problem and/or solution. If done right, it’s meant to aid the user in understanding things better while maybe mechanically catching some problems. Another lesson from high-assurance security was that formal proof was super-hard & caught esoteric errors, but making the specs simple enough for proof caught even more errors. As in, the provers required that people understand everything down to the tiniest detail, expressed in a way where the dumb algorithms could catch problems. That same approach let humans catch problems better, too. So, teams were catching errors just by simplifying their specs, even when going for full proofs didn’t add much.

            The increased understanding from formal specs, and the extra benefits of specs that provers can handle, are two lessons worth remembering and applying in the long term. Those benefits showed up in the vast majority of projects that attempted mathematical engineering of software/systems.

        2. 5

          As hwayne showed with that paper, it’s quite hard to do full, formal verification of low-level code. You will spend a long time just learning how. The easier thing to do is partial verification, where you verify the high-level design or verify some properties of the code. TLA+ is one way to verify properties of a high-level design or model. For low-level code, the easiest approach already in significant, commercial use is Design by Contract with a language supporting automated verification.

          Eiffel introduced it. Their introduction is good:

          http://www.eiffel.com/developers/design_by_contract_in_detail.html
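
          If you haven’t seen contracts before, here’s a minimal sketch of the idea in C (my example, not Eiffel’s; Eiffel spells these as require/ensure clauses and checks them at runtime). The account type and operation are hypothetical.

              #include <assert.h>

              typedef struct { long balance; } account;

              /* Design by Contract emulated with runtime asserts:
               * precondition, postcondition, and invariant checks. */
              void withdraw(account *a, long amount) {
                  /* precondition: positive amount, sufficient funds */
                  assert(amount > 0 && amount <= a->balance);
                  long old_balance = a->balance;
                  a->balance -= amount;
                  /* postcondition: balance dropped by exactly amount */
                  assert(a->balance == old_balance - amount);
                  /* invariant: balance never goes negative */
                  assert(a->balance >= 0);
              }

          Runtime checking like this catches violations during testing; the SPARK/Frama-C route below tries to prove them impossible before the code ever runs.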

          SPARK Ada took it to the next level by being able to automatically verify that your program is free of many common vulnerabilities, plus maintaining some formal properties. Frama-C is a tool that does the same for C, but it’s not as strong as SPARK since C isn’t designed for verification. These kinds of tools can be combined with high-level stuff like TLA+ or the B method, where you feed the properties/invariants you proved in the high-level specs into the specs of the code. The prover will give you a yes or a no, where no means either there’s a flaw or the prover is too dumb to know otherwise. You get at least some assurances. Here are links to some of that (a small example follows them):

          https://frama-c.com/jessie.html

          https://bts.frama-c.com/dokuwiki/lib/exe/fetch.php?media=mantis:frama-c:tutorial:acsl-by-example-neon.pdf

          http://www.spark-2014.org/about

          http://www.anthonyhall.org/c_by_c_secure_system.pdf
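
          And here’s what the prover’s yes-or-no step can look like on the Frama-C side: a tiny sketch with ACSL annotations (my example, not from those links) that a plugin like WP would attempt to discharge statically rather than check at runtime.

              #include <limits.h>

              /* The annotations are ACSL contracts the prover must
               * discharge for every possible input, not just test cases. */

              /* the precondition excludes INT_MIN, whose negation
               * overflows in two's complement */
              /*@ requires x > INT_MIN;
                  assigns \nothing;
                  ensures \result >= 0;
                  ensures \result == x || \result == -x; */
              int abs_int(int x) {
                  return x < 0 ? -x : x;
              }

          If the prover can’t discharge an ensures clause, that’s the “no” above: either a real bug or a limitation of the tool.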

          Note: SPARK GPL is on AdaCore’s website. Also, as evidence of how useful these are, Rod Chapman found an error in the Skein algorithm’s implementation in C just by translating it to SPARK & running the automated prover. The SPARK code was also as portable & high-performing as the C code. :)