The idea of computing systems predates our current concept of programming languages by some decades. The historical shift gives an interesting view on the interplay between cultures of engineering and science. From Richard Gabriel’s essay on incommensurability:
The real paradigm shift? Systems versus languages. Before 1990, a person interested in programming could work comfortably both in programming languages and in programming systems, but not so easily after. […]
A system is a set of interacting components, though sometimes the interaction is in the realm of ideas — and thus a language can also be a system. But the usual case requires a set of actual, observable behavior. A real set of things doing real stuff. — Even if in a computer.
A language is a system of signs but for the purpose of conveying meaning. A language is words on the page. Grammatical correctness is important to a language. You can see the overlap in meaning. But the difference is clear: systems are about things happening, and languages are about conveying meaning.
For those too young to recognize the name, Gabriel’s the Worse is Better guy. A LISP hacker and researcher roughly contemporary with Alan Kay. I regard him as one of our “elder statesmen”.
This reminds me of the perspective of the Houyhnhnm Computing Chronicles:
http://ngnghm.github.io/blog/2015/08/02/chapter-1-the-way-houyhnhnms-compute/
Whereas Humans view computers as tools below them to which they give orders and that do their bidding, Houyhnhnms view computing as an interaction within a system around them that extends their consciousness. Humans articulate their plans primarily in terms of things: the logical and physical devices they build (sometimes including tools to make more tools), in the lower realms of software and hardware. Houyhnhnms weave their conversations foremost in terms of processes: the interactions they partake in, that they attempt to automate (including these conversations themselves), which always involves wetware first. In short, Humans have computer systems, Houyhnhnms have computing systems.
There are eight or nine chapters, all worth contemplating.
Quagaars, it’s a name I made up! Double-A, actually!
I hadn’t run across that, what a delightful read. Sharing with friends.
If you’d ever run across the TUNES project (2000 - 2008-ish), these Chronicles are meant to be a retrospective: https://ngnghm.github.io/About.html
This looks like as good a place as any to pay my respects to the late Robin Milner’s final work.
The term ‘informatics’ describes our discipline better than ‘computer science’, as it is now concerned with communicating and informing as well as with calculation. The most long-lived and famous informatic structure is the von Neumann machine, which gave rise to an impressive series of languages and theories. But the von Neumann machine treats primarily sequential computing on a single machine, and does not scale up to explain modern informatic behaviour.
If you dig into this stuff, you’ll find it addresses all these issues, and is so much more rigorous than the typical hand-wavy philosophising that it’s a qualitatively different thing altogether. (Yes, some category theory is involved. Deal with it.)
I don’t find Milner’s work illuminating or rigorous in the least. Bigraphs are, like process calculi, based on an ad hoc agglomeration of rules and don’t appear to have any actual deeper mathematical content. Nested graphs are actually complicated and hard to work with, as people have rediscovered many times. The von Neumann machine is not a mathematical model at all - it is an engineering model. Putting category theory on this mishmash doesn’t make it rigorous or insightful.
Thanks for sharing your perspective. The basic issue with any modeling formalism, as I see it, is simply how many additional complications it requires to adequately capture phenomena of interest. An appropriate choice of formalism depends on what those phenomena are.
What you call “ad-hoc”, I think I’d call “axiomatic”. I’m curious as to where you might stand on the lambda calculus. Does it have any “deeper mathematical content” than Turing machines? Is it any more or less useful? (For what?)
As to whether the von Neumann model is a mathematical or an engineering concept, doesn’t it depend on your level of abstraction? I suppose we could ask Johnny himself, if he weren’t long gone. I do seem to recall him being a mathematician, though. I recognize Milner’s use of it as a reference to that famous 1977 Backus lecture.
I think axiomatic is right, but in mathematics the axiomatic approach is useful when you are formalizing a successful project in working mathematics. Peano Arithmetic is interesting because of how much of working number theory it captures. Axiomatizing groups is interesting because it gives a different, more abstract, approach to group theory. But nobody would attempt to e.g. understand something about simple groups by starting with sets or something elementary and piling up axioms. The formalizers are like the busboys - they clean up the mess on the table, but you don’t send them out to replace the chef, waiters, and customers. In CS, people keep trying to define some formal approach and short-circuit the problem of identifying and proving something about the properties of systems. As for LC, I have a similar view. The purpose of LC was to identify a way of formalizing “computable” mathematics in the most elementary setting, with the simplest mechanisms. Why would we want to try to do mathematics in such a setting? What you end up with is a million proofs of trivialities about the formalism: OH BOY, ASSOCIATIVITY!!!
Of course von Neumann was a mathematician, but if you read his paper on “the von Neumann model”, which is an amazing read, you will see that it’s applied mathematics and engineering - a lot of thinking about mercury memory - and it’s really essentially a brilliant write-up of what Eckert/Mauchly etc. were building. Thankfully Wikiquote had the quote from von Neumann I was thinking of, so I didn’t have to rely on my fading memory:
“ I think that it is a relatively good approximation to truth — which is much too complicated to allow anything but approximations — that mathematical ideas originate in empirics. But, once they are conceived, the subject begins to live a peculiar life of its own and is … governed by almost entirely aesthetical motivations. In other words, at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degeneration. Whenever this stage is reached the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.”
Axiomatizing groups is interesting because it gives a different, more abstract, approach to group theory. But nobody would attempt to e.g. understand something about simple groups by starting with sets or something elementary and piling up axioms.
That is, to my ears, quite an odd statement. Now I’m a little curious about how you learned group theory. For me (and this wasn’t really that long ago) it definitely started with the axioms. “Applications” to polynomials and the like came later in the course. Sure this inverts the historical development, but I’d argue there was no such thing as our current concept of “group theory” per se until there were axioms. They sort of fade into the background as one encounters more advanced material, but they’re still there, holding up the fancy stuff. They’re not the bus-tubs, they’re the tables. Without them, no restaurant.
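For reference, the axioms being argued over here are short enough to quote in full; a group is a set G with one binary operation satisfying:

```latex
% The group axioms: a set G with a binary operation * such that
\begin{align*}
(a * b) * c &= a * (b * c)   && \text{for all } a, b, c \in G \quad \text{(associativity)} \\
e * a = a * e &= a           && \text{for some fixed } e \in G \text{ and all } a \in G \quad \text{(identity)} \\
a * a^{-1} = a^{-1} * a &= e && \text{for each } a \in G \quad \text{(inverses)}
\end{align*}
```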
I agree that von Neumann was a rare genius and a shining example of a truly creative applied mathematician. (I mean, except for his politics, which were frankly terrifying.) But how does one do “more or less directly empirical” work in a field where all the objects of study are last year’s gadgets? Johnny also had the considerable advantage of a mostly green field, which will not appear again.
You’ve said a fair amount about what you don’t like, and why. So let me ask you, what do you consider a promising approach to a way out of our current mess? (I trust you do acknowledge that contemporary computing is a mess, although we probably disagree somewhat about what’s wrong and why.)
I learned it with axioms and then once we understood it, we never saw them again. None of the great results of group theory, the ones that make the theory worth studying and even axiomatizing, were produced by someone writing down a bunch of axioms or working within any formalized context at all. The Jordan-Hölder theorem is a theorem of informal mathematics, for example. My point is that until there is a body of interesting mathematical results about computer science, there is no value in inventing mathematical formalisms that may or may not capture those results. The axioms of group theory are used as a simple conceptual tool in teaching. But nobody waited for the formalization before inventing Galois theory or cataloguing the finite simple groups or any other result. Computer scientists tend to invert the order of things because, perhaps, we spend so much time on the syntax of programming languages and because “computability” was part of metamathematics.
I think you are misunderstanding von Neumann’s point, which is that you can’t do physics by e.g. following the Milner plan and proposing “pleasant” mathematical-sounding formalizations of the surface of things. You have to engage in a deep study of how things work, try to develop mathematical truths about that, and then, if you want to, try to package all that up in some formalism. But not the other way around. For something positive, the most productive work in mathematical computer science is the study of algorithms and the complexity of algorithms and things like computational algebra. None of that comes out of axiomatics - one assumes that induction works and that matrix algebra and number theory are sound, and goes on to the hard stuff. I am not at all a fan of Lamport’s temporal whatever-it-is, but at least he’s trying to prove things about actual complicated algorithms.
I don’t think we can really “do informatics” like physicists “do physics” at all, and our attempts to do so just dig us deeper into the muck. See my link to Simon’s book, above.
the most productive work in mathematical computer science is the study of algorithms and complexity of algorithms and things like computational algebra
I trust you’re not so cynical as to measure “productive” by number of papers published, but I’m having a hard time understanding what you mean. I don’t work in complexity, but it doesn’t look very promising from the outside. The more practical side (which I assume you favor) makes too many assumptions about physical machines, and the more theoretical side looks like a paper mill. Modulo materials science and the like, machines are artificial. We build our computers the way we do because of our software, and we build the software the way we do because of our (often informal) theories of computing. Surely you can see the problem with basing our theories on the machines? Many “fundamental results” from the sixties seem increasingly narrow and arbitrary as the economics of building the machines has changed. I like Milner’s bigraph work because he gets beyond that while maintaining an awareness of the social and economic context of computing. I also like Lamport’s TLA+ and similar things for similar reasons.
When I design a computer system, the results of algorithmic complexity provide information that is of great practical importance. I’m curious to know what you think has changed so much. FYI, when I get a pause in my day job, this is what I think is a good direction: http://www.yodaiken.com/papers/modules.pdf
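As a toy illustration (not taken from the linked paper) of the kind of practical information complexity results give: the gap between a quadratic and a linear algorithm for the same job swamps any constant-factor tuning once inputs grow.

```python
# A toy sketch: two ways to check a list for duplicates. Complexity
# analysis predicts the O(n^2) version becomes hopeless long before
# you ever profile it.
import time

def has_dup_quadratic(xs):
    # Compare every pair: roughly n^2 / 2 comparisons.
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_dup_linear(xs):
    # One pass with a set: about n lookups and inserts.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(5_000))   # worst case: no duplicates at all
for f in (has_dup_linear, has_dup_quadratic):
    t0 = time.perf_counter()
    f(data)
    print(f.__name__, round(time.perf_counter() - t0, 3), "seconds")
```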
Good article. One thing that seemed backwards, though, was mapping programs to static analysis and systems to continuous integration. People have been using formal specifications of systems with static analysis (i.e. proofs) going back to the 1960s with Dijkstra. Systems analysis got bigger with Yourdon’s Just Enough Structured Analysis. There were also “executable specifications” that let you run them like you would programs. Similarly, Simula, a combination of system modeling and programming, even inspired the OOP paradigm and C++. These were mostly used with testing, not static analysis.
Continuous integration appears to have come out of the programming community to solve their problems. I’m not sure where exactly it came from. I know in high-assurance security the TCSEC had a requirement of automated handling of software artifacts at the highest levels. It had to manage source (like an SCM), build the system, and run tests. The customers had to be able to do that, too, for independent verification. That’s close to CI even though not exactly it. I’m curious where it was first done and why.
I remember seeing, as far back as the Netscape days, tracking of the build state as passing or not; I wouldn’t be surprised if it predates that.
(Author here)
You’re right the static analysis vs. CI note isn’t well put. The point I’m after is that we can’t verify, build and deploy large systems in one shot - we’re always growing and maintaining a system by replacing and adding parts. So live integration with some incremental consistency checking is more desirable than the ‘verify everything’ approaches used when compiling statically typed binaries.
Yes we can use TLA+ etc to formally verify properties of large systems, but this is not encoded within the system parts being manipulated.
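To make “not encoded within the parts” concrete, a whole-system property in the temporal-logic style TLA+ uses might read as follows; Requested and Served are invented predicates over the system’s behaviour as a whole, not over any single program:

```latex
% A hypothetical whole-system liveness property: every request that is
% ever issued is eventually served, regardless of which programs end up
% handling it along the way.
\Box \, \big( \mathit{Requested}(r) \Rightarrow \Diamond \, \mathit{Served}(r) \big)
```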
Thanks for chiming in. I agree about the always growing and maintaining; they tend to be quite organic in practice. There’s also this paper that takes it further, where they always run degraded. There are also some that run really well, adapt carefully with minimal problems, or (in rare cases) hit five 9’s (e.g. their stuff).
“Yes we can use TLA+ etc to formally verify properties of large systems, but this is not encoded within the system parts being manipulated.”
My comment hinted you could do that. It’s called Design by Contract. I’ve also seen it done for hardware/software systems with Abstract State Machines and several methods for user/environment interaction. You can do it piece by piece, looking at individual behavior and interactions. There’s still stuff to evolve, mess up, and so on. Fixes go to high-level specs that can be checked for consistency and correctness as far as feasible. Programs are generated from them (e.g. SCADE) or checked against them (e.g. SPARK or Frama-C). These methods result in low errors early on, high predictability, and less breakage from changes. That’s regardless of where you apply them. Upfront cost, feature velocity, and talent required can be worse, though, depending on a project’s needs.
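As a minimal sketch of the contract idea in plain Python (the function and its invariants are invented for illustration; Eiffel, SPARK, and Frama-C express the same obligations as annotations that are checked or proved rather than asserted at runtime):

```python
# A minimal, hypothetical Design by Contract sketch: preconditions state
# what the caller must guarantee, postconditions what the component
# guarantees in return.
def transfer(accounts, src, dst, amount):
    # Preconditions: the caller's obligations.
    assert amount > 0, "amount must be positive"
    assert accounts[src] >= amount, "insufficient funds"

    total_before = sum(accounts.values())

    accounts[src] -= amount
    accounts[dst] += amount

    # Postconditions: this component's obligations.
    assert accounts[src] >= 0
    assert sum(accounts.values()) == total_before, "money is conserved"
    return accounts

# A contract violation points at whichever side broke its obligation,
# which is what makes the encoded assumptions and intent useful.
transfer({"a": 100, "b": 5}, "a", "b", 30)
```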
That State Machines paper is fantastic - very interesting stuff! SPARK looks interesting too, but it looks a bit more like a ‘program’ approach. I might be missing something, but the docs talk about verifying properties of the program, while I’m thinking about properties of the running system. So unless the whole system contains exactly one program, you’d need some integration testing around it to verify system behavior. I feel the one-shot verification model has some hard limits in scaling up. The systems I’m mostly thinking about are ones like a PC - with its OS and numerous programs - or a web app implemented as various services. Can we write an entire system as one SPARK program? (Probably; any Turing-complete language will do.) But how do we now add a feature? Consider also that we want to preserve the current state of the system, allow multiple people to work on parts of it concurrently, etc. We end up wanting to update the system in place, and we need a ‘unit of update’. If we break the original program into multiple ones, we could use the smaller programs as the unit of update (this is the de facto model), but what about cross-program verification now?
Contracts are definitely closest to what I imagine point-to-point dynamic verification might be like. The contract can be put around the object/program (the unit of update) inserted into the running system, and it verifies certain properties hold at the coupling point - this could be a bind-time verification check. However, there may be other properties of the system that are indirectly affected, so contracts alone may not be enough. You want to know, when this new unit of update is installed, which desirable properties of the system will fail. This is similar to what a CI might tell you, but it doesn’t have to be test-based; it could be type- or logic-based.
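A hypothetical sketch of such a bind-time check, with all names invented: the running system refuses to wire in a unit of update whose declared contract does not satisfy what the coupling point requires.

```python
# Hypothetical bind-time verification: a unit of update declares a contract,
# and the live system checks it against the slot's requirements before
# wiring the unit in.
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    provides: frozenset     # operations the unit implements
    max_latency_ms: int     # a quantitative guarantee it claims

class System:
    def __init__(self):
        self.units = {}

    def install(self, name, unit, required: Contract):
        offered = unit.contract()
        # Checks at the coupling point, before the unit joins the system.
        if not required.provides <= offered.provides:
            missing = required.provides - offered.provides
            raise ValueError(f"{name} is missing operations: {missing}")
        if offered.max_latency_ms > required.max_latency_ms:
            raise ValueError(f"{name} is too slow for this slot")
        self.units[name] = unit    # only now is it part of the live system

class FastCache:
    def contract(self):
        return Contract(frozenset({"get", "put"}), max_latency_ms=5)

running_system = System()
running_system.install("cache", FastCache(),
                       Contract(frozenset({"get", "put"}), max_latency_ms=10))
```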
We often think of a computer system as containing multiple communicating programs. But one twist is that this may not be the only perspective we want to use to look at the system. E.g. the human body can be seen as made of multiple organs, limbs, etc., but alternatively also as made of the nervous system, circulatory system, etc. Perhaps we want to be able to modify the system through any of these models - we want projectional editing of the system - very different from the single-view construction and installation of programs.
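A small sketch of that multiple-views idea, with made-up part names: one underlying model of the system’s parts, projected two different ways, either of which a projectional editor could in principle let you edit through.

```python
# Hypothetical sketch: one underlying model of system parts, two projections.
parts = [
    {"name": "auth",    "program": "gateway", "concern": "security"},
    {"name": "tls",     "program": "gateway", "concern": "security"},
    {"name": "router",  "program": "gateway", "concern": "traffic"},
    {"name": "metrics", "program": "billing", "concern": "traffic"},
    {"name": "ledger",  "program": "billing", "concern": "money"},
]

def view_by(key):
    # Group the same parts along a chosen axis.
    view = {}
    for p in parts:
        view.setdefault(p[key], []).append(p["name"])
    return view

print(view_by("program"))   # the "communicating programs" decomposition
print(view_by("concern"))   # the "nervous system / circulatory system" style cut
```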
re ASM’s. There are quite a few submissions here on them for further reading. Tools exist like Asmeta and AsmL. It’s been a while since I messed with them, though.
re SPARK. Yes, it’s for program verification. The point was that it handles that part of system verification. The consensus is to use a combination of tools for system verification, since each one might be good or bad at certain things. For example, there are a lot of groups in safety-critical fields experimenting with AADL modeling, temporal logics for timing, generating code from that, and stuff like SPARK for individual programs. Another example is doing a system-level model in Event-B with its tooling (example) that you translate to SPARK to feed formal properties into its tooling. One is better at analyzing models, one at code. Similarly, there’s stuff for environmental interactions (esp. UI), distributed aspects, and so on.
re integration testing. Earlier systems, like the BLACKER VPN, did what you might call composable or interacting state machines. The individual components were done with formal specs of abstract FSM’s. Then the interactions, or integration, were treated as yet another set of the same thing. The ASM model does that really well for all sorts of software. Another thing this allows, which I always encourage, is to leverage the formal specifications (esp. bounds) to automatically generate tests (aka property-based testing) and insert runtime checks for fuzz testing (e.g. AFL). That way, you get mathematical analysis for all inputs on whatever it can handle, guided testing for structural properties of the whole system, and random testing for extra assurance. This kind of thing can also be done with system models if using an executable modeling language.
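A small sketch of deriving generated tests from spec bounds, using Python’s Hypothesis library; the function and its bounds are invented, and the same post-conditions could be left in as runtime asserts for a fuzzer such as AFL to trip.

```python
# Hypothetical example: a spec says the input list has at most 1000 elements
# with values in [0, 255], and the output must be sorted and a permutation of
# the input. Property-based testing generates inputs from those bounds; the
# asserts double as oracles during fuzzing.
from hypothesis import given, strategies as st

def spec_sort(xs):
    assert len(xs) <= 1000, "spec bound violated by caller"
    out = sorted(xs)
    # Post-conditions drawn from the (invented) spec.
    assert all(a <= b for a, b in zip(out, out[1:]))
    assert sorted(out) == sorted(xs)
    return out

@given(st.lists(st.integers(min_value=0, max_value=255), max_size=1000))
def test_spec_sort(xs):
    spec_sort(xs)

if __name__ == "__main__":
    test_spec_sort()
```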
re multiple people on the system. That’s a lot easier with formal specs, esp. contracts. They encode assumptions and intent in a precise way. When things break, it’s easier to figure out why.
re contracts not enough. I don’t advocate one tool for all jobs. Contracts should be used where the requirement is expressible as a contract. If it isn’t, then an informal description with examples should be given. From there, the system might be validated with review against that requirement, tests specifically for it, and so on.
re think of it as… Yeah, that’s really interesting. We just need something more specific to work with. :) Fortunately, there are a lot of abstractions out there to work with. My preference there is a language that supports easily creating DSL’s that let us look at the system from many viewpoints. Anything hard to do in a DSL is done in the underlying, powerful language. Live editing can help here, too. LISP’s, Smalltalk, REBOL/Red, and stuff inspired by them seem to work. On the far other end, I’ve seen modeling languages like Modelica that might be useful for system experiments like this. I haven’t tried them yet, but the listed uses were quite diverse.
Starting with this would make many “programmable” devices more usable.
Programming could be seen as making a wrapper around syscalls and algorithms.