1. 42

  2. 17

    How did André do this, with no tool but a pencil?

    I wish the authors of this and other similar posts (this one about Knuth, ctrl-f “single compilation”) would dig deeper into how this type of thinking functions and how it relates to modern development. Does anyone on Lobsters do this for fun or as an exercise? My closest experience to this is whiteboard interviews, but those rarely involve managing sub-components of a larger design.

    Disclaimer: I have only programmed in the age of relatively fast compiles and feedback cycles so punch card programming has an air of mystery surrounding it.

    1. 13

      Agreed, there’s a lot worth investigating here.

      I did something like it for fun, a very long time ago, but I was a kid, and didn’t finish the effort. I was trying to program on a pocket calculator (the TI-85), during a few weeks I spent physically isolated from other computational devices. The calculator had very bad editing capabilities, so I wound up doing a lot of the work on paper.

      I think it’s realistic that people could impose similar constraints on themselves if sufficiently motivated. I don’t know if I’d recommend it; it’s incredibly tedious by today’s standards.

      1. 2

        Haha, I have exactly the same experience, though I was writing code for the Casio FX9860. I think I may still have my sheets of handwritten code somewhere around here. :)

      2. 13

        He just does the work in his head. It’s like math when you move from using your fingers to your head, or do algebra that way. I would keep things simple, look at the structure, look at how I connected or transformed things (especially whether I followed the rules), plug in values periodically to sanity check, and keep some stuff on paper: either intermediate values too big to hold in my head, or temporary answers.

        In high-assurance security or systems, they teach us to do the same with programs, using formal specifications, simplified structuring, and careful analysis. It started with Dijkstra, with Cleanroom and others building on it. My Cleanroom link especially will help you see how it’s done without heavy math or anything.

        Let’s take a quick example. I had to do a Monty Hall implementation in a hurry after losing programming to a brain injury. I chose FreeBASIC since it was close to pseudocode and memory safe. I used a semi-formal spec of the requirements and initial design, which forced precision. I carefully looked at value ranges, interfaces, and structure. I spotted a moronically-bad decision in the structure that led to a refactor. Rechecked. Good. Then I produced the FreeBASIC implementation directly from the specs, with code that closely corresponded to them, sometimes one-to-one. It worked on the first compile and every run after.
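
        If it helps to make that concrete, here’s a rough sketch of the same game logic in Python rather than FreeBASIC, purely for illustration; the names are mine, but it shows how directly code can mirror a precise spec of the steps: place the car, take the guess, host opens a goat door, optionally switch, tally.

        ```python
        import random

        def play_round(switch: bool) -> bool:
            """One round of Monty Hall. Returns True if the player wins the car."""
            doors = [0, 1, 2]
            car = random.choice(doors)      # spec: prize placed uniformly at random
            guess = random.choice(doors)    # spec: player's initial pick
            # Spec: host opens a door that is neither the guess nor the car.
            opened = random.choice([d for d in doors if d != guess and d != car])
            if switch:
                # Spec: switching means taking the one remaining unopened door.
                guess = next(d for d in doors if d != guess and d != opened)
            return guess == car

        def simulate(rounds: int = 100_000) -> None:
            for switch in (False, True):
                wins = sum(play_round(switch) for _ in range(rounds))
                print(f"switch={switch}: win rate {wins / rounds:.3f}")

        if __name__ == "__main__":
            simulate()   # expect roughly 1/3 when staying, 2/3 when switching
        ```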

        Dijkstra THE Multiprogramming System

        http://uosis.mif.vu.lt/~liutauras/books/Dijkstra%20-%20The%20structure%20of%20the%20THE%20multiprogramming%20system.pdf

        Cleanroom Low-Defect Methodology

        http://infohost.nmt.edu/~al/cseet-paper.html

        1. 2

          nickpsecurity, top-notch response as always.

          EWD has many similar stories, even if they aren’t written with the whimsy and splendor of tvv’s André story. His writings, like the EWD#-series of papers, form a masterpiece in their own right, on a Knuth TAOCP level, and are sometimes more interesting for enthusiasts because of his unique and unusual philosophies.

          “Selected Writings on Computing: A Personal Perspective” should be included with every computer sold as mandatory reading.

          1. 2

            Thank you. :) As far as EWD goes, I’ve read some of them but not all. I should read more some time. He had a great mind. His only problem was a great ego: he’d sometimes dismiss rational arguments just to take time to slam something. I thought he may have done that with Margaret Hamilton’s tool for highly-assured software. That led me to write a counterpoint essay to his, since he likely either didn’t get the method or was doing the ego thing. Stranger still, since it copied his and Hoare’s techniques to a degree.

            His methods of declaring the behavior of functions, decomposing them, and using a hierarchical flow of control with no feedback loops became a cornerstone of designing highly-assured kernels for INFOSEC purposes. The method allowed each component to be tested in isolation. Then, the integration itself was more predictable. It was basically interacting FSMs, like in hardware design and verification. Hoare’s verification conditions combined with Dijkstra’s structuring are still used today for the best assurance of correctness. They mostly stopped hand-coding the stuff when it isn’t performance-critical: the code is produced by an extraction process from the specs. Others still hand-code against specs for efficiency, with equivalence checks against the specs or just partial correctness with conditions similar to EWD’s in THE Multiprogramming System. SPARK is a good example of the latter.
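
            If it helps to picture it, here’s a toy sketch of that structuring style in Python (the layers and names are invented for illustration, not taken from any real kernel): calls only go downward, so there are no feedback loops, each layer can be tested in isolation against a stub of the layer beneath it, and the integration stays predictable.

            ```python
            # Toy illustration of strictly-layered structuring (hypothetical layers):
            # each layer calls only the layer directly below it, never upward.

            class DiskLayer:
                def read_block(self, n: int) -> bytes:
                    return bytes([n % 256]) * 512        # stand-in for real device I/O

            class CacheLayer:
                def __init__(self, disk: DiskLayer):
                    self._disk = disk
                    self._cache: dict[int, bytes] = {}

                def read_block(self, n: int) -> bytes:
                    if n not in self._cache:
                        self._cache[n] = self._disk.read_block(n)   # downward call only
                    return self._cache[n]

            class FileLayer:
                def __init__(self, cache: CacheLayer):
                    self._cache = cache

                def read_file(self, blocks: list[int]) -> bytes:
                    return b"".join(self._cache.read_block(n) for n in blocks)

            # Each layer can be unit-tested alone with a stub below it; integration
            # is then just wiring them together, top to bottom.
            files = FileLayer(CacheLayer(DiskLayer()))
            assert len(files.read_file([0, 1, 2])) == 3 * 512
            ```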

            Btw, if you like reading papers like that, check out Hansen’s, which are great, especially the language reports. Plenty of clarity with a mix of innovation and pragmatism. He invented the nucleus concept of operating systems and some other things, like race-free concurrency in statically-scheduled software. Hoare apparently stole one. Related to this thread, he often wrote his programs in ALGOL on a board, checking them by hand and eye before directly translating them into assembly. They couldn’t build an ALGOL compiler for those machines at the time, but he said the approach caught bugs you wouldn’t see in assembly alone. A clever, interesting researcher who did great work in quite a few areas.

            http://brinch-hansen.net/papers/

        2. 6

          I think there are four prerequisites to accomplishing something like this:

          1. Luck. I don’t doubt there are many stories of people of similar ability working against similar specs with similar techniques who then had to do a dozen rounds of bugfixes after. Those stories are less compelling, though, so we don’t hear them as much.
          2. A simple, well-defined specification. When the target spec is excessively complicated, the likelihood of getting it right the first time goes way down. Similarly, when the target spec is ill-defined (or changes in the middle of the process), the odds of a correct implementation on the first try drop almost to zero.
          3. Enough time to fully design, in detail, prior to implementation.
          4. Extreme patience and attention to detail.

          In most modern dev jobs, I suspect the job itself precludes one through three. Thus, four is almost impossible to evaluate, so I won’t comment on it other than to say that people have decried the ills of the younger generations for literally all of human history and been wrong the whole time. I also suspect three isn’t actually any faster or more likely to yield correct results than iterating on partially designed and implemented code, but I don’t know of any actual research pointing either way.

          1. 4

            I just read Joe Armstrong’s interview in coders at work, and there’s a similar story. Do fast compiles and feedback cycles make us better? It seems we get answers faster with less thinking. And there’s a temptation to stop thinking as soon as we get an answer.

            Perhaps an experiment: add a five minute sleep to the top of your build. Add another sleep to the top of your program. Resist, as much as possible, the temptation to just immediately remove them, but instead take a moment to consider if you really want to start a build or spend another minute reviewing your change. After a week, more or less productive?
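
            For the mechanics, something as dumb as a wrapper around the build command would do. Here’s a rough sketch in Python that assumes the build is plain make; substitute whatever you actually run.

            ```python
            #!/usr/bin/env python3
            # Sketch of the experiment: force a five-minute pause before the build,
            # leaving a window to re-read the change instead of waiting on the compiler.
            # Assumes the real build command is "make"; swap in your own.
            import subprocess
            import sys
            import time

            DELAY_SECONDS = 5 * 60

            print(f"Build starts in {DELAY_SECONDS // 60} minutes. Go re-read your diff.")
            time.sleep(DELAY_SECONDS)
            sys.exit(subprocess.call(["make", *sys.argv[1:]]))
            ```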

            1. 4

              Many times, I’ve gone through several compile/run cycles for a stubborn test failure, finding and fixing bugs each time through—only to discover that due to a misconfiguration I was just running the original binary the whole time! So I know I would find plenty of bugs if I just read the code for a while and didn’t even bother to run it.

              I date from a time when compile cycles were minutes, not seconds, so I already know this…but apparently I forgot.

              1. 3

                “Do fast compiles and feedback cycles make us better? It seems we get answers faster with less thinking.”

                Maybe. I read PeopleWare along with looking at studies on LISP and Smalltalk vs other languages. I saw that productivity went up while defect rates stayed lower than in most 3GLs. The trick was that people were focusing on the solution to their problem more than on trivial crap. The iterations were faster. In PeopleWare and some other resources, they also claimed this increased the use of a focused state of mind they called “flow,” where people were basically in the zone. Any interruptions, even slow compilers, could break them out of it.

                Given that, I’m not going to buy the idea that we’re thinking less with faster feedback. Instead, the evidence indicates we might be thinking more, so long as we aren’t skipping the pause-and-reflect phase (i.e. revision, quality checks). I think most problems come from skipping the latter, since the incentives push that way.

                1. 2

                  To the extent I’m conscious of being in the zone, I’m not compiling. The zone is the code writing period. I probably could do it with paper and pencil.

                  Perhaps I’m particularly bad about this, though I think not necessarily, but I’ve noticed I immediately reduce, if not halt, thinking about a problem when I have a solution that appears to work. If I revisit it, it’s almost certainly out at lunch or “in the shower” or whatever. And from observing others, I really don’t think I’m alone in giving up the thinking once I get something “working”.

                  I’ve been thinking a lot about this this week, and I think I’m going to try my little experiment with introduced delays. Current hypothesis: better results.

                  1. 1

                    You’re describing a common issue that is part of the inspiration for having separate people do code reviews. If it’s just one person, you could build it into version control so that whatever is checked in isn’t final until it’s reviewed a certain amount of time later. Also, if you get in the zone while coding, then dynamic languages might be a big boost for you. Or Wirth-style system languages with a short compile cycle. They compile about as fast as people think. So, more thinking, less compiling. :)

                2. 3

                  The flip side of this is the “I have only proved it correct, not tested it” problem. There is a lot of broken code from (generally very smart) “rationalist” programmers–those who place a high value on understanding what’s happening inside all the relevant “black boxes,” and write code based on that mental model with the goal of initial correctness, with limited testing and experimentation.

                  It’s my opinion that many or most modern systems (especially those that span multiple languages, processes, servers, etc.) are too complicated to yield to this approach, because no model that can fit in one person’s head (no matter how smart) can be sufficiently accurate. Only an empirical approach is capable of reining in the chaos. (Full disclosure: years of Lisping have left me entirely hooked on REPLs and REPL-integrated development environments like SLIME and Cider, where short feedback cycles are taken to the extreme, so perhaps there is an element of self-justification here.)

                  The value in the rationalist approach, I think, is when it can be used to constrain the level of allowed complexity in the first place, as perhaps in this story. In a world of long feedback cycles, there is no choice but to build systems that can fit in your head. But generally speaking we don’t live in that world anymore.

                  1. 2

                    I encountered that problem when I first started in BASIC with 3rd-party code. The solution was to use a subset you can understand, create a model of the behavior it should follow, make tests for that, and run them to be sure it does what it’s supposed to do. One can then compose many such solutions to get similar results, with error handling for the corner cases.
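
                    In today’s terms, a tiny sketch of the idea in Python (using a standard-library function as a stand-in for the 3rd-party code): pin down the subset you rely on as a behavioral model, write tests for that model, and only build on what the tests confirm.

                    ```python
                    # Sketch: treat a partly-understood library function as a black box,
                    # write down a model of the subset of behavior I rely on, and test
                    # that model before composing anything on top of it.
                    import unittest
                    from datetime import datetime

                    class StrptimeSubsetModel(unittest.TestCase):
                        # The subset I allow myself: ISO-style dates, nothing fancier.
                        def test_iso_date(self):
                            parsed = datetime.strptime("2017-06-01", "%Y-%m-%d")
                            self.assertEqual((parsed.year, parsed.month, parsed.day), (2017, 6, 1))

                        # Error handling for the corner cases I care about.
                        def test_garbage_rejected(self):
                            with self.assertRaises(ValueError):
                                datetime.strptime("not a date", "%Y-%m-%d")

                    if __name__ == "__main__":
                        unittest.main()
                    ```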

                    Also, DSLs can do this stuff even in complicated applications. Sun Microsystems had a DSL (DASL?) for three-tier web apps that could easily describe them with all the messy stuff synthesized. The example I saw was 10 kloc of DSL turned into 100+ kloc of Java, JavaScript, and XML. So, we can constrain what’s there to fit our models, and we can synthesize the messy stuff from cleaner tooling.

              2. 7

                It’s less impossible for me to believe if you consider that all other engineering was done this way too. People used to draw designs on paper and refine them until they were pretty sure they really worked, all the parts fit together, the parts could be fabricated on the existing equipment, etc. Then, actually fabricate a prototype. That’s often slow and expensive so you don’t really want to iterate on it a lot. Simulations are much cheaper to iterate on, so this kind of on-paper design of complete machines has fallen out of use, and now people do something more iterative using CAD-type packages and simulation environments, not entirely unlike in programming.

                1. 3

                  One thing that’s not mentioned is what language the code was written in. My guess is assembly language, given that it was an operating system and it was the 60s/70s. If you’ve ever worked with assembly, you may know that it’s actually feasible (and fun!) to write assembly manually. It’s probably the most enjoyable programming language to write by hand.

                  But that depends on what the instruction set architecture is like. Some ISAs are so complicated that you’re better off writing the code and running it to see what it does than trying to reason out the full answer. MIPS and ARM aren’t too bad. x86 and z Series (aka mainframe) can be downright impossible to keep in your head.

                  I’m sure it can be done, but should it be done? Who knows.

                  1. 5

                    The language was Multics PL/I and this is the file:

                    http://multicians.org/vtoc_man.html

                    It seems it did get a bug fix or two, but only after years and years in production. As you can see, the VTOC Manager is not trivial!

                    Somewhat off-topic, but - if you want to play with my live emulated Multics system, it’s available via mosh or ssh dps8@m.trnsz.com

                    Edit: I was just poking around and reading the live output of vtoc_buffer_meters, and I’m always surprised how awesome Multics is. So much is unmatched - to this day.

                    1. 4

                      Anyway, I’m very happy with the response this has gotten here. I posted it because André’s story is by far my favorite story in all of computing; something about it has stuck with me for many years and left me in various states of meditation, awe, and contemplation! I’m not exactly sure how, or if, we should try to replicate the story in the modern era, but I love the discussion here.

                      Maybe the fact that this “can no longer be done” is a symptom and indicates a larger problem with how our craft has evolved.

                      1. 2

                        It’s a great story and thanks for posting it. I’ve never heard it before.

                        I wonder if it’s the efficiency of the technique that’s the key factor here. How long did it take to write and test a solution in the Multics world? Do we write code really early these days simply because it’s faster than using a pencil?

                        1. 2

                          From a video in my history of C essay:

                          https://vimeo.com/132192250#t=328s

                          That’s the backdrop. MULTICS was started around 1965. So, I looked for something about programming in the 1960s to see if there were differences:

                          https://www.slideshare.net/lenbass/programming-in-the-1960s

                          The repairman-terrorist stuff is hilarious lol. I’m submitting it as its own story. Anyway, they clearly had to do a lot of waiting and debugging. So, doing a lot of the work in one’s head allowed them to move faster than the computers. Oh, the ironies of how computing changes over time. :) Additionally, people like Dijkstra using correct-by-construction methods were getting even better results on top of that. They had an extra advantage back then: having time for QA, and an incentive to reduce the cost of failed executions, was the default scenario. Today, it’s more rare given how IT shops run. There are still companies such as Altran/Praxis doing it. They’re rare, too.