This article really gets at a fundamental misunderstanding I feel our whole industry has: Programming is not construction, it is design.
Yeah, houses rarely collapse, but structural engineers don’t expect that their second draft will be taken out of their hands and built. Or that the fundamental requirements of their structure will be modified.
I don’t mean to suggest that programming should behave more like construction. The value of programming is the design. Programming is the act of really thinking through how a process will work. And until those processes are really done and won’t change (which never happens), that design never stops.
At some point it occurred to me that programming can be mapped onto the framework of the scientific method, as below. I later found a similar interpretation in an older article, so I probably wasn’t the first. Specifically, my analogy goes like this:
a program ~ a scientific theory
running the program (incl. via tests) ~ running a scientific experiment
where a program tries to model (some aspects of) some specific problem domain, and a scientific theory in e.g. physics tries to model (some aspects of) the physical world around us.
This seems to fit some known aspects of programming: e.g. that tests can never confirm 100% that a program is correct, but the more of them you run, the more confident you can become. Or that sometimes a program seems to have “hit the jackpot” of being a “good theory”, when new features result in only small tweaks to the program (much as new experimental results fit neatly into a correct scientific theory, perhaps with small adjustments to constants). And wildly different features (wildly new experimental results) may force us to ditch the model and search for a new one - a major architectural refactoring of the code. The new theory may still explain the old one (Einstein subsuming Newton), or break it completely (the way the aether theory was invalidated even though, IIUC, it gave useful results in some areas).
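To make the analogy concrete, here is a minimal sketch in Python (the is_leap function and the chosen years are my own illustration, not anything from the article): each passing test plays the role of a successful experiment that raises confidence in the “theory”, while a single new observation can falsify it and force a revision of the model.

    # "Theory": a leap year is any year divisible by 4.
    def is_leap(year):
        return year % 4 == 0

    # "Experiments": each passing assertion raises confidence in the theory...
    assert is_leap(2024)
    assert not is_leap(2019)
    assert is_leap(2000)

    # ...but no number of passing tests proves it correct. One new observation
    # falsifies the theory: 1900 was not a leap year (century years must also
    # be divisible by 400), so the model has to be revised.
    assert not is_leap(1900)  # fails - the "theory" needs refactoring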
Developers jump to coding not because they are sloppy, but because they have found it to be the most effective tool for sketching, for thinking about the problem and getting quick feedback as they construct their solution.
IME this is not empirically true, and in fact not even close to true. The vast majority of engineers would produce far better work with methodologies like Readme Driven Development than by jumping in and coding, even with a second pass for cleanup.
In fact, most of the time coding is an incredibly poor tool for thought, and it is far more likely that inessential implementation details at the code level will take control of and obscure clear high-level thinking than aid it.
That said, precisely because code is a poor tool for thought, sometimes you have to actually code something before you can see whether your idea will work, and I can see how this phenomenon might make the article’s claim look true. But this, IMO, is not evidence of mastery of a powerful thought tool. It is evidence of how poorly that tool maps onto intuitive ideas you can understand and express with ease in natural language.
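There is a nice comment on Reddit that describes something similar.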
I don’t think Lamport meant to say that coding cannot be used as a way to make sketches. It’s just that those sketches should not be confused with the final implementation.
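Yes.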
I think that those who push for more formalism in programming are not saying that you must map out everything ahead of time. I mean, things change in construction projects all the time even after the plans are approved. It seems to me they are pushing for some serious forethought.
Developers jump to coding not because they are sloppy, but because they have found it to be the most effective tool for sketching, for thinking about the problem and getting quick feedback as they construct their solution.
Sort of, but not really?
I certainly don’t jump to coding to figure out the solution to a problem. Sketching something in (working) code is a waste of effort most of the time, in my experience. It requires far too much precision to get larger ideas across. Pseudocode may be fine, but then, is that “jumping to coding”?
On the other hand, I will make changes to an existing code base to see what will happen so that I can understand the problem environment better. Perhaps this is what the author is getting at. Changing code may be the only reasonable way to map out the behaviour of the environment in which you need to implement an eventual solution. Reasoning about the environment ahead of time without trying something first is often not feasible. I don’t consider this “sketching” a solution, though. I consider it knowledge acquisition that will help out with any sketches of solutions later.
I’ve experienced both of these methods of “sketching”, and I think they both have their place in our work. I recently had to write a fairly complex program in assembly for my processor design class. I worked an entire weekend in the “sketching with code” fashion and wound up with a buggy, failing program. My dad (who was a software engineer for many years) stepped in at this point and helped me take a beat and sketch out in English exactly what I needed to build. In just a couple of hours, I had a complete, working program that was more robust and elegant than my original solution. Designing without code helped me immensely in this and many other cases.
On the other hand, when I want to build a quick project, sketching with code instead of English helps me learn a lot, and quickly. I recently started building a small text editor, and by sketching through code I learned a lot about terminal modes and how the tty subsystem works in UNIX systems. I often use this method in my personal projects; it has helped me grok complex systems with relative ease, and it is just a fun and enjoyable way to learn about different systems. So it seems to me that sketching with something other than code is good for things that need to be robust and work in very specific ways, and sketching with code is good for exploration and learning new things quickly.
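As an illustration of the kind of detail this sort of code sketching surfaces (a generic Python sketch, assuming a POSIX tty; not the commenter’s actual editor): before an editor can react to single keypresses, it has to take the terminal out of its default “cooked” mode, which is exactly the sort of behaviour you only really internalise by poking at it.

    import sys
    import termios
    import tty

    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)       # remember the current ("cooked") settings
    try:
        tty.setraw(fd)                  # raw mode: no line buffering, no echo
        ch = sys.stdin.read(1)          # a single keypress arrives immediately
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # always restore the tty

    print(f"read {ch!r}")

Running something like this once makes it obvious why an editor must restore the terminal state on every exit path, a lesson that is much harder to absorb from documentation alone.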
I have done this for years, but not because whiteboarding, READMEs, formal programs, or any other means is less useful. They each serve a purpose for different contexts and fidelities of the system. Coding just happens to be the highest-fidelity form of expression, which has its trade-offs if you jump right into it for a complex system.