1. 14
    1. 9

      Right from the beginning, and all through the course, we stress that the programmer’s task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification.

      In 8 years of professional software development, I’ve yet to see a single “formal functional specification”.

      1. 16

        I’ve seen several. TLA+ seems quite popular for specifying protocols these days. They aren’t common in ‘move fast and break things’ development, but for places where you actually want things to work reliably they’re quite common. They’ve become a lot more common in the last decade, as complexity for a lot of software has grown beyond the point where informal reasoning works.
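
        To give a flavour of what such a specification looks like: it is essentially a state machine plus invariants, and a model checker exhaustively explores every reachable state. A very rough Python sketch of the same idea (a made-up two-process lock, purely illustrative, not real TLA+):

          # Toy "spec": a state machine plus an invariant, checked by exploring
          # every reachable state (the job a model checker automates). The
          # protocol is a made-up two-process lock, purely for illustration.
          def successors(state):
              pcs, lock = list(state[0]), state[1]
              out = []
              for i in (0, 1):
                  nxt = pcs.copy()
                  if pcs[i] == "idle":                    # request entry
                      nxt[i] = "waiting"
                      out.append((tuple(nxt), lock))
                  elif pcs[i] == "waiting" and lock is None:
                      nxt[i] = "critical"                 # acquire the lock
                      out.append((tuple(nxt), i))
                  elif pcs[i] == "critical":              # release the lock
                      nxt[i] = "idle"
                      out.append((tuple(nxt), None))
              return out

          def check(invariant):
              init = (("idle", "idle"), None)
              seen, todo = {init}, [init]
              while todo:
                  state = todo.pop()
                  assert invariant(state), f"violated in {state}"
                  for nxt in successors(state):
                      if nxt not in seen:
                          seen.add(nxt)
                          todo.append(nxt)
              return len(seen)

          # Invariant: the two processes are never both in the critical section.
          print(check(lambda s: s[0].count("critical") <= 1), "states explored")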

        1. 10

          I’m jealous, that sounds really nice. I’d love to move out of the “b2b saas” world, hah.

          1. 1

            Note that’s just formal specification. Formal verification (proving that the implementation matches the specification) is still far less common. Even without it, having an executable formal specification is useful for being able to fuzz an implementation against a gold model, but verification is a step further that’s still very rare. And verification doesn’t mean that it’s correct, just that it is correct with respect to the properties that you verify. My favourite example here was some of the EverCrypt code that was formally proven to not have any use-after-free bugs, something it accomplished by leaking all allocated memory (the properties were later updated to require all objects associated with state to be freed after specific points).
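
            To make the gold-model idea concrete, the fuzzing loop is just differential testing: drive the implementation and the executable specification with the same random inputs and compare. A minimal Python sketch, where gold_model and fast_impl are hypothetical stand-ins:

              # Differential fuzzing against an executable "gold model".
              # gold_model and fast_impl are placeholders for a reference
              # specification and the optimised implementation under test.
              import random

              def gold_model(xs):
                  return sorted(xs)          # obviously correct, maybe slow

              def fast_impl(xs):
                  return sorted(xs)          # imagine a hand-rolled sort here

              def fuzz(trials=10000, seed=0):
                  rng = random.Random(seed)
                  for _ in range(trials):
                      xs = [rng.randint(-100, 100)
                            for _ in range(rng.randint(0, 32))]
                      if fast_impl(list(xs)) != gold_model(xs):
                          raise AssertionError(f"mismatch on {xs!r}")
                  print(f"{trials} inputs agreed with the gold model")

              fuzz()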

            On the hardware side, we’ve had two different teams formally verify our core. One verified trace equivalence between the core and the ISA specification (for an unbounded number of steps, the core will behave the same way as the ISA model given the same initial memory contents). Another group verified the security properties. There was some overlap in the bugs that they found but both teams found bugs that the others didn’t. The kind of bugs that they did find were very niche. For example, trace equivalence showed that, in one condition, we raised an exception but put the wrong value in the cause register, which would probably have no software impact. The team proving security properties found a side channel that can leak half a bit per 32 bits of memory that you shouldn’t have access to (not really exploitable because you can probe only addresses that you have in the immediate field of a jump target and we don’t allow on-device code generation).

        2. 3

          You may enjoy looking at The Definition of Standard ML, which is the complete specification of the ML programming language in about 100 pages, the majority of which is a formal specification.

          It turns out that producing a formal specification is as much work as, or more than, producing a formal proof that a program meets that specification. But I still think there are a lot of good ideas in Dijkstra’s writing, and formal reasoning is useful even if you don’t use it exactly the way he is suggesting there.

          1. 2

            I like reading old stuff. One of the tricks you have to perform is determining which tidbits of wisdom have stood the test of time, and which haven’t aged particularly well. Throwing out the baby with the bath water isn’t a good solution. Discernment and reflection are good strategies. FWIW I also had the same reaction about this particular topic. I’ve been writing line of business software for ~20 years, and nothing has ever been formally specified. So the quote you referenced shouldn’t apply to all programmers. But as @david_chisnall replied, it probably should apply to some things like: language specifications, data serialization specifications, security algorithms, etc.

          2. 6

            Dijkstra was an erudite polemicist, but had a hard time getting to grips with the reality of computers and computing. He was disliked even by his peers in the formal methods community, even as that community gradually fell out of relevance during the eighties and beyond. The bitter fruit of arrogance. Not a good role model, in my opinion.

            https://en.wikipedia.org/wiki/Edsger_W._Dijkstra#Use_of_technology

            1. 6

              Seems to presume the goal of one’s work should be to be well-liked.

              1. 1

                Not at all! But especially for an academic, the goal of one’s work should be to be influential. As self-assessed (even here in the linked EWD), Dijkstra more or less entirely failed to convince computing practitioners to adopt most of the methods he advocated for.

                1. 1

                  seems to presume that all people have the same ethical compass

                2. 4

                  Yet he has a number of ideas that have had immense impact and stood the test of time, like:

                  • Dijkstra’s algorithm (sketched below)
                  • Semaphores
                  • Structured programming
                  • Predicate transformer semantics

                  A pretty good role model if you are interested in ideas that outlive yourself.
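
                  For the first of those, a minimal sketch of the textbook priority-queue formulation in Python (the example graph is made up):

                    # Dijkstra's shortest-path algorithm over a graph given as
                    # {node: [(neighbour, weight), ...]}, textbook formulation.
                    import heapq

                    def dijkstra(graph, source):
                        dist = {source: 0}
                        heap = [(0, source)]
                        while heap:
                            d, u = heapq.heappop(heap)
                            if d > dist.get(u, float("inf")):
                                continue          # stale queue entry
                            for v, w in graph.get(u, []):
                                nd = d + w
                                if nd < dist.get(v, float("inf")):
                                    dist[v] = nd
                                    heapq.heappush(heap, (nd, v))
                        return dist               # unreachable nodes are absent

                    example = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
                    print(dijkstra(example, "a"))  # {'a': 0, 'b': 1, 'c': 3}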

                  1. 3

                    Semaphores are weird. They are usually taught as a fundamental concurrency primitive, but they only have that privileged position because they were invented very early. In practice they are hardly ever the right thing. As discussed previously … oh and I am happy to find that https://zombo.com/ is back up.
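
                    For what it’s worth, about the only place a counting semaphore still reads naturally is bounding concurrency, and even there a pool or queue usually states the intent more directly. A small, hypothetical Python example:

                      # Classic counting-semaphore use: cap how many threads touch a
                      # scarce resource at once. Hypothetical example; a thread pool
                      # or queue usually expresses the same intent more directly.
                      import threading, time

                      limit = threading.Semaphore(3)   # at most 3 concurrent users

                      def worker(i):
                          with limit:                  # blocks while 3 are inside
                              time.sleep(0.1)          # pretend to use the resource
                              print(f"worker {i} done")

                      threads = [threading.Thread(target=worker, args=(i,))
                                 for i in range(10)]
                      for t in threads: t.start()
                      for t in threads: t.join()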

                    1. 2

                      Compare with, say, Knuth’s work? He’s a pretty nice guy.

                      Or just within FM, compare with Tony Hoare or Robin Milner? Dijkstra was on the scene early, but I don’t think his work was really as influential as some of his less acerbic colleagues.

                      1. 4

                        I’ve had the huge pleasure of working with Tony Hoare and he is absolutely someone I would feel comfortable asking stupid questions. He usually wouldn’t end up answering them, but I’d always learn something interesting from the tangents that he would wander off down (usually far more interesting than the original question). The first time I met him, we were both on a panel at the Psychology of Programming conference. They invited two people to the panel who weren’t presenting papers. I was the one that didn’t have a knighthood and a Turing Award. Tony made me feel very welcome.

                        I didn’t have the opportunity to work with Milner but I was hugely impressed with him when I attended a talk he gave. One of the best pieces of advice I’ve heard came from that talk. He used several historical examples, but his key point was that credit for an invention should not go to the first person to invent something, but to the person who can explain it well enough that no one needs to invent it again. A lot of the people whose ideas I admire, such as Alan Kay, have been happy being someone who was right but ignored by the rest of the world and could then say ‘I told you so’ decades later. That is good for feeling smug, but doesn’t make the world a better place.

                    2. 4

                      His EWD essays strike me as blogging before the web — but no RSS, no self-service subscription, you had to be on his mailing list.

                      I’ve seen a lot of mentions of scientific correspondence in the 1600s and 1700s, but sharing ideas by sending letters seems a lot less prominent in more recent centuries. I suppose there were a lot more ways to share ideas with the rise of the modern university in the 1800s.

                      1. 3

                        What you’ve rightly labeled a polemic was created to combat the expectation that educators use analogies to familiar concepts when teaching novel ones. This war on analogies, not the pens or typewriters or even his lack of popularity (which might have been a symptom rather than a cause), is the problem with the polemic. He was sufficiently fed up with what he considered coddling to, ironically enough, employ an allegory:

                        When King Ferdinand visited the conservative university of Cervera, the Rector proudly reassured the monarch with the words: “Far be from us, Sire, the dangerous novelty of thinking.” Spain’s problems in the century that followed justify my characterization of the shortcoming as “serious”.

                        Setting aside how suspiciously difficult it is to verify this quote, it’s so divorced from historical context as to be disingenuous. He presents all of 18th century Spain—victors and vanquished alike—as the enemy in his personal war between thinking people such as himself and non-thinking people such as them. The truth of the short-lived University of Cervera and its rector, which would be off topic to discuss properly here, was somewhat more complicated than the straw man Dijkstra makes of it.

                        The use of the word “cruelty” in the title seems like a retort to someone who ascribed that label to his teaching style. He was evidently tired of education in the etymological sense of leading someone (who has at least some agency of their own) to knowledge. His job, as he presents it here, was to transmit his genius (relishing the brutality of his own uncharitable, hyperbolic vitriol) and for his students to receive and absorb it. If they didn’t understand, that was their own moral failing.

                        Indeed, not a role model.

                      2. 3

                        One thing underlying this whole essay, and unanswered within it, is the question: what does Dijkstra think computer programs are for?

                        He tells us that all they can do is abstract symbol manipulation, but not what that manipulation might be in service of, except perhaps mathematics itself. Ironically, he rejects the idea of using a computer to exhaustively search the space of his dominoes problem and extract insight from the results.
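
                        (The dominoes problem in the essay is the mutilated chessboard: remove two opposite corners and ask whether dominoes can cover the rest. The colouring argument settles it in a sentence, but the exhaustive search he dismisses is also easy to write, as in this rough Python sketch, shown on a 6x6 board purely so it finishes quickly.)

                          # Brute-force search for a domino tiling of a board with two
                          # opposite corners removed -- the approach the essay dismisses
                          # in favour of the colouring argument. Illustrative sketch.
                          def can_tile(free):
                              if not free:
                                  return True
                              r, c = min(free)                 # first empty square
                              for dr, dc in ((0, 1), (1, 0)):  # horizontal, vertical
                                  other = (r + dr, c + dc)
                                  if other in free:
                                      if can_tile(free - {(r, c), other}):
                                          return True
                              return False

                          n = 6
                          board = {(r, c) for r in range(n) for c in range(n)}
                          mutilated = board - {(0, 0), (n - 1, n - 1)}
                          print(can_tile(frozenset(mutilated)))   # False: no tiling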

                        I think that Dijkstra must have thought of computing as primarily existing for its own sake, with occasional applications occurring serendipitously, much as pure mathematics turns out to have a lot of applications.

                        Incidentally, by treating computing as only dealing with abstract symbol manipulation and denying the importance of intermediate states, Dijkstra completely rules out any use of computers for the control of processes in any kind of dialog with the external world (because the programs must be stateful on some level to do that).

                        1. 2

                          So, use formal methods because Dijkstra disliked metaphors, software engineers, and the Bourbon Dynasty? Methinks if this were a contemporary blogger and not Dijkstra, it would be considered off topic.