1. 5

Abstract: “Many algorithms can be implemented most efficiently with imperative data structures that support destructive update. In this paper we present an approach to automatically generate verified imperative implementations from abstract specifications in Isabelle/HOL. It is based on the Isabelle Refinement Framework, for which a lot of abstract algorithms are already formalized.

Based on Imperative/HOL, which allows to generate verified imperative code, we develop a separation logic framework with automation to make it conveniently usable. On top of this, we develop an imperative collection framework, which provides standard implementations for sets and maps like hash tables and array lists. Finally, we define a refinement calculus to refine abstract (functional) algorithms to imperative ones. Moreover, we have implemented a tool to automate the refinement process, replacing abstract data types by efficient imperative implementations from our collection framework. As a case study, we apply our tool to automatically generate verified imperative implementations of nested depth-first search and Dijkstra’s shortest paths algorithm, which are considerably faster than the corresponding functional implementations. The nested DFS implementation is almost as fast as a C++ implementation of the same algorithm.”

  1. 3

    The main advantage of imperative code over functional is its performance. Being able to generate imperative from functional in a certifiable way might negate a lot of the advantages of coding directly in imperative style: you get the easy analysis and productivity of functional with the efficiency of imperative.

    That’s the dream anyway. I wonder how close this kind of work can get to it.
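
    To make the pattern concrete, here's a toy version of the kind of functional-to-imperative step involved (my own sketch in plain Haskell, nothing to do with the paper's actual Isabelle tooling): a pure specification, an imperative refinement that does destructive updates on a mutable array, and the equivalence obligation that ties them together.

      import Control.Monad (forM_)
      import Control.Monad.ST (ST, runST)
      import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)

      -- Abstract (functional) specification: plain list reversal.
      specReverse :: [Int] -> [Int]
      specReverse = reverse

      -- Imperative refinement: copy into a mutable unboxed array, swap in place.
      -- runST keeps the destructive updates invisible to callers.
      impReverse :: [Int] -> [Int]
      impReverse xs = runST $ do
        let n = length xs
        arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
        forM_ [0 .. n `div` 2 - 1] $ \i -> do
          lo <- readArray arr i
          hi <- readArray arr (n - 1 - i)
          writeArray arr i hi
          writeArray arr (n - 1 - i) lo
        getElems arr

      -- Refinement obligation: impReverse must behave exactly like specReverse.
      -- The paper proves this kind of correspondence; here one could at least
      -- QuickCheck the property \xs -> impReverse xs == specReverse xs.

    The paper's pitch is that the refinement and the proof come out of the tooling automatically rather than being written by hand like this.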

    1. 3

      I think the main trouble with imperative programming models is the problem of temporality and state explosion. The combinatorics completely overwhelm any analysis. The trouble with declarative models is somewhat the opposite: they hide too many details to get non-trivial results that are widely applicable. I think this tension is unavoidable, so instead of trying to work around it, it's better to embrace it and start with general-purpose declarative models that are amenable to being refined into more imperative forms for optimization purposes.

      1. 2

        Yes, I agree with you on saving imperative for the optimization step where possible. As far as the combinatorics, I'll add that there's been excellent work in static/dynamic analysis focusing on interleavings, with clever ways of bounding the combinatorics to get past that for a lot of realistic examples. More of these keep turning up.

        So, my default would be to design it high-level, synthesize an imperative implementation somehow, and apply those tools to it. If it passes, use the imperative form. If it fails, use the non-imperative form. This runs in the background or overnight so it doesn't slow developers down. Work like Lammich's, and folks mixing Haskell with Rust, also suggests a mix can happen where such tooling makes each piece (e.g. a module) go imperative, tries the analysis on each, and integrates only the imperative versions that pass. So, as in Lammich's paper, you get benefits like speedups over purely functional or declarative code. And, in both of these models, modifications still happen on the non-imperative side, so you keep the maintenance benefits of non-imperative programming. What do you think of this?

        Oh yeah, I probably should add that I prefer all software to be efficient by default for the indirect benefits that creates. I'm adding that in case you're wondering why I'd want an imperative optimization on something declarative that's already “fast enough.” Users might like having the option of running their software on a cheaper box or using less electricity.
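
        Here's roughly the shape of that gate, as a toy Haskell sketch. The QuickCheck property stands in for whatever heavier analysis would run overnight, and the names are made up for illustration:

          import Control.Monad (forM_, when)
          import Control.Monad.ST (runST)
          import Data.STRef (modifySTRef', newSTRef, readSTRef)
          import Test.QuickCheck (quickCheck)

          -- Specification and maintained fallback: purely functional.
          specCount :: (Int -> Bool) -> [Int] -> Int
          specCount p = length . filter p

          -- Synthesized imperative candidate: a mutable counter, hidden behind
          -- runST so callers still see a pure function.
          impCount :: (Int -> Bool) -> [Int] -> Int
          impCount p xs = runST $ do
            c <- newSTRef (0 :: Int)
            forM_ xs $ \x -> when (p x) (modifySTRef' c (+ 1))
            readSTRef c

          -- The gate: integrate the imperative version only if it agrees with
          -- the spec on everything thrown at it; otherwise keep the functional one.
          prop_countAgrees :: [Int] -> Bool
          prop_countAgrees xs = impCount even xs == specCount even xs

          main :: IO ()
          main = quickCheck prop_countAgrees

        Modifications keep happening on specCount; impCount only survives as long as it keeps passing the check.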

        1. 2

          “What do you think of this?”

          I completely missed the question, so I'm a bit late with an answer.

          My current answer would be that we don't know how to deal with the state explosion problem when analyzing imperative implementations. I know that when Xavier and co. were working on CompCert they used a mixed strategy for register allocation, which involved specifying and verifying as much as possible in Coq and then leaving the harder parts to verification at compile time:

          … CompCert uses Iterated Register Coalescing (IRC) [George and Appel 1996]. This algorithm is very effective, but it’s also quite complicated. In CompCert, this algorithm is not proven correct. Instead, after generating the graph, the compiler calls an OCaml implementation of IRC and then checks that the coloring is valid. If the checker confirms that the coloring is valid, the compiler uses it and proceeds along. If it does not confirm that it’s valid, compilation is aborted.

          http://gallium.inria.fr/~scherer/gagallium/register-allocation/

          I think this is a fruitful approach that should be used more often: the verification of the hard parts can be staged in a verification pipeline and moved between stages as more pieces are proven correct. In general, I'm in favor of incorporating SAT/SMT solvers into the development pipeline, and I've fruitfully used mixed-integer programs to solve allocation problems, but this hits the same wall all other formal methods hit: a general lack of knowledge and education about useful patterns that any software engineer can apply.
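
          (For the curious, the generic shape of such an allocation model, with a binary variable x_{ij} meaning “item i goes to resource j”, is roughly the following; this is a textbook assignment-style formulation for illustration, not a specific model I've shipped:

            \min \sum_{i,j} c_{ij}\, x_{ij}
            \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
            \sum_{i} r_i\, x_{ij} \le C_j \;\; \forall j, \qquad
            x_{ij} \in \{0, 1\}

          where c_{ij} is the cost of an assignment, r_i the item's demand, and C_j the resource's capacity. An off-the-shelf MIP solver takes it from there.)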

          Fundamentally it is a mindshare problem and it’s unclear to me how to effectively bridge the gap between current practices and “better” ways of building software systems. Mixing Rust and Haskell, or another high-level declarative language, seems to fit the pattern used in CompCert, so in that sense there is definitely precedent.

          1. 2

            That technique is called translation validation; it was created in 1998. The linked work is an example from Necula, who invented Proof-Carrying Code, which is similar. They're all part of an abstract pattern I call Untrusted Generator with Trusted Checker. That works for anything: software, hardware, proof terms, whatever. In all instances, the setup lets us know the untrusted part did its job so long as the smaller, simpler, trusted part works. Leroy did point out that it's sometimes easier to verify the generating part instead. So, we gotta go with the easiest one on a case-by-case basis.
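
            Here's a toy sketch of that pattern in Haskell, using the register-coloring case from the quote upthread (types and names are just for illustration): the checker is the only piece you have to trust or verify, while the generator can be anything.

              import qualified Data.Map.Strict as Map
              import qualified Data.Set as Set

              type Var      = String
              type Register = Int
              type Graph    = Map.Map Var (Set.Set Var)   -- interference edges
              type Coloring = Map.Map Var Register

              -- Trusted checker: small and simple, the only part that must be right.
              -- A coloring is valid if every variable gets an allowed register and
              -- no two interfering variables share one.
              checkColoring :: Set.Set Register -> Graph -> Coloring -> Bool
              checkColoring regs g col = all ok (Map.toList g)
                where
                  ok (v, neighbours) =
                    case Map.lookup v col of
                      Nothing -> False
                      Just r ->
                        r `Set.member` regs
                          && all (\w -> Map.lookup w col /= Just r)
                                 (Set.toList neighbours)

              -- Untrusted generator: any clever, unverified allocator (IRC, an ILP
              -- solver, whatever). Its answer is used only if the checker accepts it;
              -- otherwise we bail out, like CompCert aborting compilation.
              allocate :: (Graph -> Coloring) -> Set.Set Register -> Graph -> Maybe Coloring
              allocate untrusted regs g =
                let col = untrusted g
                in if checkColoring regs g col then Just col else Nothing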

            “I’ve fruitfully used mixed-integer programs to solve allocation problems”

            I’ll try to remember that. Mixed-integer programming is a technique I haven’t used before. Now I know it can do allocation. :)

            “Fundamentally it is a mindshare problem and it’s unclear to me how to effectively bridge the gap between current practices and “better” ways of building software systems.”

            I’ve been pushing for a gradual process that starts with Design by Contract. That link is good for project managers, too. ;) Essentially, you teach the benefits of contracts starting with simple stuff like ranges and ordering. Then, contract-based test generation plus runtime checks with fuzzers. Collect data about how quickly you knock out bugs with so little work. Also, contract-based test generation and fuzzing make most unit tests unnecessary; the remaining ones can focus on just what’s hard to express with contracts. Throw in some TLA+ for distributed stuff, catching problems that take 35 steps to reach, to illustrate how model checking beats testing in such cases. Maybe Cleanroom to illustrate engineered software. Then these little things let the use and benefits of specs sink in, with a percentage of people upgrading to stuff like Haskell or formal methods.
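
            To make "simple stuff like ranges and ordering" concrete, here's the kind of contract I mean, sketched in Haskell with the contract doubling as a generated test. The names and setup are mine for illustration, not from any particular contract tool:

              import Data.List (insert, sort)
              import Test.QuickCheck (quickCheck)

              -- Contract for inserting into a sorted list, written out explicitly:
              --   precondition:  the input list is already sorted (ordering)
              --   postcondition: the output is sorted, one element longer, and contains x
              pre :: [Int] -> Bool
              pre xs = xs == sort xs

              post :: Int -> [Int] -> [Int] -> Bool
              post x xs ys = ys == sort ys && length ys == length xs + 1 && x `elem` ys

              -- Contract-based test generation: random inputs are massaged to satisfy
              -- the precondition, then the postcondition is checked automatically. This
              -- is what makes most hand-written unit tests for insert redundant.
              prop_insertMeetsContract :: Int -> [Int] -> Bool
              prop_insertMeetsContract x raw =
                let xs = sort raw
                in pre xs && post x xs (insert x xs)

              main :: IO ()
              main = quickCheck prop_insertMeetsContract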

            On the formal methods side, I’ve been collecting data on what’s easiest to learn, to teach the benefits. I have two methods that were designed specifically for education and claim to get results. The only problem each had was that students needed prior exposure to first-order logic and set theory to be comfortable with it. I found an easy resource on set theory to give them an intuition for it. I don’t have one on FOL; I’m open to any people have. Eventually, I’ll have a small chain of resources, from the logic introduction to those courses or tools, that can be tested for effectiveness on students. If that works, we tweak it to take them toward the heavier stuff. I also lean toward things like ASMs and PlusCal, since industrial experience shows people learned those more easily than stuff like Coq and HOL. ASMs and the B Method also support code generation.

            So, those are the ideas I’ve been working with.

            1. 1

              Seems like a reasonable strategy.

      2. 3

        I’d slightly qualify that sentence with “an advantage of imperative code over functional is its control over constraints”. It might not be performance you’re looking for, but memory, low power usage, minimal latency, etc. Also, “control” could go either way: it can easily be the case that imperative code is worse for the constraint than the functional code. Quicksort is faster than Haskellsort, but Haskellsort is way faster than Bubblesort.
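
        (By Haskellsort I mean the classic list-comprehension “quicksort” from the tutorials, which allocates fresh lists at every step instead of partitioning in place:

          qsort :: Ord a => [a] -> [a]
          qsort []       = []
          qsort (p : xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

        so it keeps the nice average-case asymptotics but loses the in-place version's small constants.)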

        1. 1

          That’s a great point. How to word it, though? Just “low-level control”, or “control over low-level details of program execution”? I think I’ve seen the latter mentioned as a benefit of the C language.