Many years ago, when I was a TA for my university’s concurrency course, we used a fantastic piece of software called LTSA to model concurrent systems.
The app itself wasn’t amazing, just a fairly basic but robust piece of Java, and it did a very good job of teaching how concurrent systems work.
Essentially it was a text editor (nothing integrated, just a convenience thing; it used a custom language in plain text files) where you could write in descriptions of each of your tasks, and it could prove whether or not you would deadlock, have race conditions, etc - you could even add forward progress assertions.
It did this by taking each of your tasks and merging them together into one single graph, and on that graph it could do all this great proving.
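Roughly, you can picture the composition like this. Here’s a minimal Python sketch (my own toy reconstruction, not LTSA’s actual internals or its language) of interleaving composition with synchronization on shared actions, plus deadlock detection:

    from itertools import chain

    def compose(tasks, inits):
        """tasks: list of dicts mapping state -> {action: next_state}.
        Shared actions synchronize (every task that knows an action must
        be ready to take it); private actions interleave freely."""
        alphabets = [set(chain.from_iterable(t.values())) for t in tasks]
        graph, frontier = {}, [tuple(inits)]
        while frontier:
            state = frontier.pop()
            if state in graph:
                continue
            moves = {}
            for action in set().union(*alphabets):
                owners = [i for i, alpha in enumerate(alphabets) if action in alpha]
                # the action fires only if every task that owns it can take it now
                if all(action in tasks[i][state[i]] for i in owners):
                    nxt = tuple(tasks[i][s][action] if i in owners else s
                                for i, s in enumerate(state))
                    moves[action] = nxt
                    frontier.append(nxt)
            graph[state] = moves
        return graph

    # Two tasks that want the same two handshakes in opposite order:
    t1 = {"0": {"a": "1"}, "1": {"b": "0"}}
    t2 = {"0": {"b": "1"}, "1": {"a": "0"}}
    g = compose([t1, t2], ["0", "0"])
    print([s for s, moves in g.items() if not moves])  # [('0', '0')]: deadlock

States with no outgoing moves are deadlocks; LTSA’s real checks (safety, progress) are richer, but the product-graph idea is the same.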
But in addition to all this neat stuff, it also showed a small bit of status text, including the number of nodes and transitions. I think that text, and the memory usage, gave a really visceral insight into what exponential growth actually means, as you could see how adding one additional task with just a few steps would suddenly blow out your system memory.
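You can feel the blow-up yourself by reusing the toy compose sketch above (again, invented for illustration): n independent k-state tasks give a k^n-state product.

    # n independent 4-state cyclic tasks; disjoint alphabets, so states multiply
    def cycle(i, k=4):
        return {f"{i}.{j}": {f"step{i}.{j}": f"{i}.{(j + 1) % k}"} for j in range(k)}

    for n in (2, 4, 6, 8):
        g = compose([cycle(i) for i in range(n)], [f"{i}.0" for i in range(n)])
        print(n, len(g))  # 16, 256, 4096, 65536 ... i.e. 4**n states

One more task quadruples it; a dozen 10-state tasks and you’re asking for a trillion states.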
I’m kind of sad that there isn’t more work on supporting concurrent programming with this model of correctness, but at the same time a lot of what it did was only possible through finite analysis, and the exponential growth leads to a lot of sadness when you try to model real systems.
That sounds reminiscent of TLA+.
I wonder if this argument holds against defining coverage on equivalence classes of the input space. Any thoughts?
Unfortunately it’s very hard to formally show an equivalence class.
The problem with equivalence classes and testing is that there’s no way to know that the partitions are broken down “properly,” meaning you can easily say elements are equivalent when they shouldn’t be. Partitioning the input space is an optimization to avoid exhaustive testing, but there’s no way to know that optimization is correctness-preserving. It’s a completely subjective activity.
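As a toy example of how a “reasonable” partition can lie to you (names and numbers invented):

    def discount(qty):
        """Spec: 10% off at 100 items or more, otherwise nothing."""
        if qty > 100:   # bug: should be >= 100
            return 0.10
        return 0.0

    # Partition into "small" and "bulk", test one representative of each:
    for rep, expected in [(50, 0.0), (150, 0.10)]:
        assert discount(rep) == expected  # both pass

    print(discount(100))  # 0.0, not 0.10: the partition said 100 ~ 150, but it isn't

The partition was plausible and the representatives passed, yet the “bulk” class still contained a member that behaves differently; nothing in the method itself tells you the classes were drawn correctly.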
That doesn’t mean we shouldn’t try it. In fact, I’ve had this study in mind for a long time - a group of researchers tried input space partitioning on some financial calculations used in Freddie Mac’s software and they had really promising results. And since verification is the hardest problem in computer science, we need all of the empirical results that we can get.
So of course, if you partition the input space, you have a fighting chance of making it manageable. There’s just no way to know if your partitions imply total correctness.