This original talk / blog post was one of the major reasons that I started playing around with Isabelle. I had taken a look at Coq and the Software Foundations series a few months earlier, and it just didn’t click with me. As you can see in this post, modeling systems in Isabelle doesn’t feel like “math” at all - it feels very much like programming in an ML-family language.
I actually get a huge benefit out of modeling things in Isabelle from this alone! Proving things is great, but being able to formally specify something is the really important skill. I seriously think people would benefit from just writing models and proof skeletons like this, without even finishing the proofs. Learning different proof techniques comes with time - not that I’m an expert at that yet.
For example, in the post, an entire distributed system is simply modeled as a single step_func declaration.
Taking away some of the polymorphic type noise, it becomes:
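(I don’t have the post’s exact Isabelle declaration in front of me, so here’s a rough sketch of that simplified shape in OCaml - the type and constructor names are mine, not the post’s:)

    (* Placeholder types - a real protocol would supply its own. *)
    type proc = int        (* process identifier *)
    type msg = string      (* protocol message *)
    type state = msg list  (* whatever the protocol needs to track *)

    (* An event a process can handle: a message arriving from a peer,
       or some local trigger such as a timeout. *)
    type event =
      | Receive of proc * msg
      | Timeout

    (* The step function: process ID, current state, and an event go in;
       a new state plus messages addressed to recipients come out. *)
    type step_func =
      proc -> state -> event -> state * (proc * msg) list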
A step_func is just a function that takes a process ID, current state, and an event as input, and returns a new state plus a set of messages to send to recipient processes. This is enough to model any distributed system! That’s not that complicated at all.

You may also be thinking, hey - that looks like the Elm Architecture or Redux. And you’d be right - TEA is literally just a functional design pattern of a state machine, and again, this simple definition is enough to reason about all of computation.
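(For comparison, here’s what a minimal TEA-style update function might look like - a hypothetical counter app, again in OCaml. It has the same step-function shape, just without the outgoing messages:)

    (* The Elm Architecture boiled down: message + model -> new model. *)
    type model = int
    type tea_msg = Increment | Decrement

    let update (m : tea_msg) (model : model) : model =
      match m with
      | Increment -> model + 1
      | Decrement -> model - 1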
Anyway. Great post, great tool, happy to see Larry and Martin collaborating, and I always love a good state machine model of a system.
I fully agree that having specifications is wonderful. On my team we do a lot of specification in Alloy. We routinely find design problems in a couple of minutes that would take days to find manually or with POCs.
Alloy is great. Models / specs are a great way to fight the complexity of software. We’ve found that verification effort grows quadratically with code size, which suggests the converse should hold too: smaller programs are quadratically easier to manage (if we agree that verification effort is a proxy for reasoning effort in general).
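(Concretely - if effort ≈ c·n², then a program a tenth the size should take about a hundredth of the effort:)

    effort(n)    = c * n^2
    effort(n/10) = c * (n/10)^2 = (c * n^2) / 100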
I think this matches everyone’s intuition: toy programs are extremely ergonomic and efficient to work with, whereas business-sized applications are unwieldy. It’s not like the behavior of these systems is any different; it’s just spread out across a larger surface area.
For this reason, and many others, I really feel like model-based technology in general is where we should be focusing our energy.
Agreed. I am considering a PhD in distributed systems correctness and have been looking at modeling and friends for this purpose.
May I recommend Data Refinement: Model-Oriented Proof Methods and their Comparison (de Roever and Engelhardt) as a great resource? It was the basis of the functional correctness proof of seL4 - which isn’t a distributed system, but is still an amazing case study.
Of course, TLA+ is also highly associated with models and refinement to implementations.
Thank you SO MUCH for this! I am adding it to my stack now. I am also looking at the problem from a security angle, so runtime checks are of interest too. Very cool links for this topic, I am grateful.
:) It’s a pretty small group of people interested in this stuff, so I love to share anything that I find useful. Cheers.