Actually, formal methods only solve a third of our problems: nobody can tell us whether the model we came up with is sensible, and this is often the hardest part to get right.
A project that some of my colleagues work on tried to use some formally verified code a few years ago. The code was, among other things, verified to be memory safe. This included a proof obligation that no object was accessed after it had been deallocated. It turned out after running the code for a little while that this was trivially guaranteed by not freeing anything. Changing this to require that all resources were deallocated at specific points made the verification much harder and reminded me that formal verification behaves a lot like a mythological genie: it will give you exactly what you ask for, not what you want.
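The failure mode in that anecdote can be sketched with a toy model (hypothetical names; a minimal illustration, not the actual verified code): if the allocator never frees anything, the "no use-after-free" property holds vacuously, while the stronger "everything is freed at specific points" property requires a separate check.

```python
# Toy model: the property "no object is accessed after it is freed"
# is trivially satisfied by an allocator that never frees anything.

class Allocator:
    def __init__(self):
        self.live = set()   # handles currently allocated
        self.freed = set()  # handles that have been deallocated

    def alloc(self):
        h = len(self.live) + len(self.freed)  # fresh handle
        self.live.add(h)
        return h

    def free(self, h):
        self.live.discard(h)
        self.freed.add(h)

    def access(self, h):
        # The verified property: accessing a freed handle is an error.
        assert h not in self.freed, "use after free"
        return h in self.live

def all_freed(a):
    # The stronger obligation: at designated points, nothing is live.
    # This is the check that made verification much harder.
    return not a.live

a = Allocator()
h = a.alloc()
a.access(h)  # fine; and if free() is never called, `freed` stays
             # empty forever, so the assertion can never fire.
```

The genie granted exactly the stated property: with no `free()` calls, `access()` is memory-safe by construction, while `all_freed()` would fail at every checkpoint.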
I think a performance modeling tool as good as our correctness modeling tools is a great idea, but I am not sure how much of the correctness model a performance model in such a tool could reuse or share. Models are simpler to analyze because they abstract, but things abstracted away in a correctness model because they aren't relevant to correctness can be very relevant to performance, and vice versa. The worry is that a shared model would be needlessly more complex than either the correctness model or the performance model on its own, and so not worth the trouble.
Ok, I know what TLA+ is. (https://en.wikipedia.org/wiki/TLA%2B)
But, what is P?
Might be this one - https://en.wikipedia.org/wiki/P_(programming_language)
Ah, thanks. I skipped over that at first, but I followed up on it per your suggestion and found this:
– https://p-org.github.io/P/whatisP/