An extract from a paper by von Neumann
Combinatorial effects certainly are a pain in verification today, so that looks on the money. Many system designs have mostly dodged the errors he worries about by using redundancy, voting schemes, and comparisons at the I/O level. A rogue unit just fails and restarts as a whole, with the other units behaving according to design, plus some monitoring to figure out what’s failing. Instead of handling fine-grained errors, most of the logic ignores them, with one spot that’s easier to model handling them. Often there are multiple such mechanisms, but again they’re easier to model than arbitrary software.
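A minimal sketch of the coarse-grained approach described above – triple-modular redundancy with majority voting, where a single misbehaving replica is outvoted and total disagreement fails the whole unit so a supervisor can restart it. All names here are illustrative, not from any particular system:

```python
def vote(replicas, x):
    """Run the same computation on three replicas and take the majority.

    A single faulty replica is outvoted; if all three disagree, we
    fail the whole unit and let a supervisor restart it.
    """
    results = [r(x) for r in replicas]
    for candidate in results:
        if results.count(candidate) >= 2:
            return candidate
    raise RuntimeError("no majority: fail and restart the unit")

# One replica misbehaves; the majority masks the fault.
good = lambda x: x * x
bad = lambda x: x * x + 1  # simulated transient fault
print(vote([good, good, bad], 5))  # -> 25
```

The point of the pattern is exactly what the post says: the voter is a tiny, easy-to-model spot that absorbs errors, so the replicated logic itself never has to reason about them.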
What was your takeaway from this excerpt in light of modern systems and verification efforts?
I think that verification has suffered from too much attention to foundational logic and formalism, which has produced methods that are not useful for humans doing mathematical reasoning.
Thanks. I’m guessing you lean toward the authors of the Social Proofs paper we had on here. I can see why they’d think that. I also wonder if it’s a bad decision or just something inherent to proving things about programs. Maybe it’s always going to be tedious piles of proof terms if you model the concrete thing. The abstracted versions of algorithms might not impose that problem, though. Maybe folks can make better human-centric methods for them.
Von Neumann, in my view, was one of the last polymaths – just an incredible depth and breadth of natural talent, communication skills, and an envy-deserving work ethic. Leaving this world at the age of 53 was an incredible loss to humanity.
He seems to suggest that formal logic, when applied to non-trivial problems, requires a huge number of steps.
Each step, on its own, has a probability of incorrect behavior (e.g. a hardware error – this was clearly a much more pronounced problem in early computer systems).
Therefore the huge number of steps required (driven by combinatorial explosion and the sheer difficulty of optimizing combinatorial calculations), combined with the possibility of small but cumulative errors in each step, creates a significant problem for the applicability of formal methods.
So he was calling for an analytical approach.
When thinking about it, my first reaction is that we have not developed computational methods (or at least I do not know of any) that take a problem and assign some formal classification to it.
For example, in mathematics – given a polynomial, one can use Galois theory to figure out whether its roots can be expressed in radicals, rather than having no way to know until we try…
Galois’ theory not only provides a beautiful answer to this question, but also explains in detail why it is possible to solve equations of degree four or lower in this manner, and why their solutions take the form they do. Further, it gives a conceptually clear, and often practical, means of telling when some particular equation of higher degree can be solved in that manner.
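To make the contrast concrete (my own illustration, not from the excerpt): Galois theory explains why a degree-two equation has a radical formula while the general degree-five equation does not:

```latex
% Degree 2: the Galois group S_2 is solvable, so the roots
% are expressible in radicals:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
% Degree 5: the general quintic has Galois group S_5, which is
% not solvable, so no such radical formula can exist
% (Abel-Ruffini, explained by Galois theory).
```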
But we are lacking this type of classification for programs. That, in turn, leads to domain-unaware methodologies for proving correctness, or to reliance on exhaustive search across all possible data inputs.
So to me, analytical or approximate solutions seem possible (and that seems to be what Dr. von Neumann alluded to), but on the way to getting there we will need some sort of universal categorization theory, and programmer-accessible methods of describing our artifacts, such that the theory can be fed an unambiguous set of specifications.
E.g.: we have to be able to describe an initial state, a set of transformations our program knowingly applies to that state, a set of conditions outside of our control that we know of (e.g. the operating system putting a mobile app on pause), and what the acceptable outcomes are.
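As a toy illustration of those four ingredients – initial state, program transformations, environmental events, acceptable outcomes – here is a hypothetical sketch that exhaustively explores the orderings of two events and checks every outcome, which is essentially the brute-force end of the spectrum mentioned earlier. All names are made up for illustration:

```python
from itertools import product

INITIAL = ("running", 0)  # (app status, saved progress)

def save(state):
    """A transformation our program knowingly applies: save progress."""
    status, _ = state
    return (status, 1)

def os_pause(state):
    """An event outside our control: the OS pauses the app."""
    _, progress = state
    return ("paused", progress)

def acceptable(state):
    """Outcome condition: if the OS paused us, progress must be saved."""
    status, progress = state
    return status != "paused" or progress == 1

# Explore every sequence of two events drawn from the two above.
violations = []
for ordering in product([save, os_pause], repeat=2):
    state = INITIAL
    for step in ordering:
        state = step(state)
    if not acceptable(state):
        violations.append(ordering)

# The only violating run is the one where the OS pauses us twice and
# we never saved -- exactly the kind of case an exhaustive check finds.
print(len(violations))  # -> 1
```

A real specification language would let a prover reason about all such interleavings symbolically rather than enumerating them, but the shape of the description is the same.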
Otherwise even a good theoretical foundation will remain out of reach for us.
I personally hope that, in my lifetime, programming will evolve to the point where every programming language taught in schools and universities allows for ‘in-place’ specification description and formal verification.
Such that we can move away from post-dev testing (which I would call ‘alchemy/astrology’) into verifiable correctness, using tools that are built into our development tool chain.
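For flavor, here is a minimal sketch of what ‘in-place’ specification might feel like: pre- and postconditions written next to the code they govern. The `contract` decorator is invented for illustration and checks at runtime; a real verifier would discharge these conditions statically, before the program ever runs:

```python
from functools import wraps

def contract(pre, post):
    """Attach a precondition on the argument and a postcondition
    relating argument to result; check both on every call."""
    def decorate(f):
        @wraps(f)
        def wrapper(x):
            assert pre(x), f"precondition failed for {x!r}"
            result = f(x)
            assert post(x, result), f"postcondition failed for {x!r}"
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: x >= 0,
          post=lambda x, r: r * r <= x < (r + 1) * (r + 1))
def isqrt(x):
    """Integer square root by simple upward search."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))  # -> 3
```

The specification lives with the implementation, so the tool chain (rather than a separate post-dev test suite) is what establishes correctness.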
Probably worth pointing out, just to add some context, that von Neumann wasn’t just ranting about a distant subfield. He made quite significant contributions to mathematical logic during its “golden age”, alongside Church, Zermelo, Fraenkel, Turing, etc. He actually developed a system of set theory alongside ZF for foundations work. This was really interesting and a nice find @Vyodaiken – thanks for posting.
I think I lack a lot of terminology and background here, so I don’t think I’m grasping it at full depth, but does this abstract boil down to something like “automata have too many steps and are too complex and variable, rendering analysis by formal logic difficult or impossible” – or am I missing the mark entirely?
The short synopsis is: formal logic is a pain in the ass to work with (“refractory” is the way he puts it) and lacks the kinds of powerful approximation/abstraction methods that are used in the physical sciences (differential equations, probability, etc.). Also, formal logic as originally applied to computation ignores feasibility and efficiency (what we now call complexity). So “automata theory” should evolve to use methods more like those of applied math.
In some ways, what he predicted has come to pass since we have a very successful field of analysis of algorithms that makes use of probability and limits and number theory etc.
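One concrete instance of that style of reasoning (my own example, not from the thread) is the standard average-case analysis of randomized quicksort, which leans on probability and asymptotics rather than step-by-step logic:

```latex
% Expected number of comparisons of randomized quicksort on n
% distinct keys: the elements of rank i < j are compared with
% probability 2/(j - i + 1), so by linearity of expectation
\mathbb{E}[C_n] \;=\; \sum_{1 \le i < j \le n} \frac{2}{j - i + 1}
\;=\; 2(n+1)H_n - 4n \;\sim\; 2n \ln n,
\qquad H_n = \sum_{k=1}^{n} \frac{1}{k}.
```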
Thank you very much, so as I suspected I gleaned one facet of it and missed the whole :)