1. 33
  1.  

  2. 5

    This is something I’ve done for a while, and I find it to be effective. It’s basically trying to code from a specification. It’s also a good way to find out whether the book is any good at getting its point across.

    I don’t entirely agree with the statement, “Approach #1 [type in each line] is a method that, like a lecture, causes the code to go from the author’s page to the reader’s screen without passing through the heads of either.” I’ve found that when I type things in, I tend to think about them carefully, always asking why each token I’m typing is what it is. Copying and pasting code, though, is truly a bane to understanding.

    1. 4

      I think the method is describing what’s now called deliberate practice.

      The better books will have relevant exercises, different from the material in the book, that test your understanding of the material and stretch it a little: focused exercises that push you somewhat beyond your current ability. That is essentially deliberate practice.

      I mean, people should really do that for themselves, but often you need some domain expertise and guidance to steer you in the right direction. The best teachers I’ve known do this, sometimes naturally, sometimes through experience, and sometimes through deliberate practice.

      1. 5

        The Benjamin Franklin method is a subset of deliberate practice. K. Anders Ericsson, the author mentioned in the post, is actually the father of deliberate practice.

        One of the components of training is rapid feedback. The Benjamin Franklin method specifically solves the problem of how to get feedback in the absence of a teacher.

        1. 2

          Thanks. I wasn’t aware of K. Anders Ericsson. I learned about the ideas and techniques from Daniel Coyle’s books, The Talent Code and The Little Book of Talent. The wonders of myelin!

          Interesting point about rapid feedback - it’s often said that unit tests should run fast so that you can try changes quickly, but I think there’s a large psychological component to that.

      2. 4

        I do something similar, but different in one key way. I keep the original code sample handy, but I type in a subtly different version based on my understanding of the code. What I’m doing is testing my mental model: if this code does $X, then this alteration should produce $Y.

        It’s a great way to crash-course a new tech stack in a day or two, and it’s a really useful way of understanding the documentation for libraries or frameworks. Essentially it’s a simplified version of what you’ll do when referring to the docs in the future.
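
        To make that concrete, here is a minimal sketch of the idea in Python, using a made-up tutorial snippet (the Counter example and the predicted output are my own illustration, not from any particular book): keep the sample’s shape, change one thing, predict the result, then run it to check the prediction.

        ```python
        # Hypothetical tutorial sample: build a word-frequency table.
        from collections import Counter

        words = "the cat sat on the mat".split()
        counts = Counter(words)
        print(counts["the"])          # the tutorial says this prints 2

        # My altered version, typed from my understanding of the docs.
        # Prediction ($Y): most_common(1) returns [("the", 2)].
        print(counts.most_common(1))  # run it and compare against the prediction
        ```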

        1. 4

          I kind of felt like you left me hanging there at the end of a great article. Although I got the concept, I was hoping to see some examples or just people’s experiences. This one has potential to be expanded on, especially if you add tooling like REPLs that can help with it. Also, if solving a problem, a technique from formal verification could help: untrusted generator, trusted checker. The problems used for language learning could be ones whose output is obviously correct or incorrect, with a trusted checker shipping alongside them. Then learners can try to code up a solution, getting some feedback from the IDE, some from the REPL if one exists, and final confirmation from the checker.

          Just brainstorming here.
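
          To make the untrusted-generator/trusted-checker idea concrete, here is a rough sketch in Python under assumptions of my own: my_sort stands in for a learner’s attempt written from memory (the untrusted generator), and check_sorted is the small trusted checker that could ship with the exercise.

          ```python
          import random
          from collections import Counter

          def my_sort(xs):
              # Learner's attempt, written from memory of the book's example
              # (insertion sort). This is the untrusted generator.
              out = list(xs)
              for i in range(1, len(out)):
                  j = i
                  while j > 0 and out[j - 1] > out[j]:
                      out[j - 1], out[j] = out[j], out[j - 1]
                      j -= 1
              return out

          def check_sorted(original, result):
              # Trusted checker: the result must be in order and contain
              # exactly the same elements as the input.
              in_order = all(a <= b for a, b in zip(result, result[1:]))
              same_elements = Counter(original) == Counter(result)
              return in_order and same_elements

          # Random tests give quick feedback without revealing a reference solution.
          for _ in range(100):
              xs = [random.randint(0, 50) for _ in range(random.randint(0, 10))]
              assert check_sorted(xs, my_sort(xs)), f"failed on {xs}"
          print("all checks passed")
          ```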

          1. 3

            Thanks for the thoughts. It is a bit funny to think that, while I’ve practiced this on various articles and chapters, the only full book I’ve used this on was a book on a general-purpose programming language called LaTeX. It was effective, but not a great example to share. (It was not quite as effective as the math version, which left me feeling like I wasted my undergrad.)

            I’m not sure I understand what you’re suggesting for language learning. I can say that this approach already fits somewhat into the “untrusted generator, trusted checker” framework: your brain is the untrusted generator, and the compiler/you running the program is the trusted checker.

            As a side question: do you know much about the “untrusted generator, trusted checker” approach from pre-2006? That idea’s been present for a while in, e.g., theorem-proving tactics, but I think most of the momentum behind that approach comes from my advisor. :)

            1. 1

              Re compiler

              I’m going to hold off on that concern until it takes better shape in my mind so I can word it better.

              Re history/influence

              Originally it was a method I developed for mutually suspicious parties sharing a repo for high-assurance security. I later found a formal version called LCF that long predated my idea; it inspired the style of the major provers. Later I saw Necula’s work on PCC drive a lot of the research around having a complicated component drive a process with evidence produced at each step. Then a paper by Appel and some others made it foundational PCC.

              That’s all I recall off the top of my head, with Necula probably the most influential for doing the work on a real-world language. Most papers cite him on top of the LCF stuff. So, who was/is your advisor, and what role did they play?

              1. 1

                My advisor is Armando Solar-Lezama. He invented CEGIS, the dominant “generate and validate” algorithm for constructing everything from loop invariants to optimized assembly.

                So, by LCF, are you referring to its architecture of using an arbitrary “tactic” program to generate a proof term and then type-checking that proof term?
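
                For anyone following along, here is a toy version of the CEGIS loop in Python, heavily simplified by me: the candidate space is a hard-coded list of expressions and the “verifier” is a brute-force check over a small domain rather than an SMT solver, but the synthesize/verify/counterexample structure is the part that mirrors CEGIS.

                ```python
                from itertools import product

                # Spec we want to synthesize (the synthesizer never looks at it
                # directly; only the verifier may call it).
                def spec(x, y):
                    return max(x, y)

                # Tiny hard-coded candidate space: (name, function) pairs.
                candidates = [
                    ("x", lambda x, y: x),
                    ("y", lambda x, y: y),
                    ("x + y", lambda x, y: x + y),
                    ("(x + y + abs(x - y)) // 2", lambda x, y: (x + y + abs(x - y)) // 2),
                ]

                def synthesize(examples):
                    # Return any candidate consistent with the examples so far.
                    for name, f in candidates:
                        if all(f(x, y) == out for (x, y), out in examples):
                            return name, f
                    return None

                def verify(f):
                    # Brute-force "verifier": return a counterexample, or None if
                    # the candidate matches the spec on the whole (small) domain.
                    for x, y in product(range(-5, 6), repeat=2):
                        if f(x, y) != spec(x, y):
                            return (x, y)
                    return None

                examples = []
                while True:
                    candidate = synthesize(examples)
                    if candidate is None:
                        print("no candidate matches all examples")
                        break
                    name, f = candidate
                    cex = verify(f)
                    if cex is None:
                        print("synthesized:", name)
                        break
                    examples.append((cex, spec(*cex)))  # feed the counterexample back in
                ```

                In the real thing, both the synthesizer and the verifier are constraint solvers over a bounded program space, which is what lets the approach scale past toy candidate lists like this.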

                1. 2

                  “but I think most of the momentum behind that approach comes from my advisor. :)”

                  I doubt it, since I didn’t recognize the name. My memory could be getting faulty here. Let me try to trace it and double-check to make sure I’m not slighting anyone. (Googling stuff.)

                  Ok, yeah, Milner and de Bruijn are where it starts. Milner has the LCF style, which creates a kernel of sorts for a more correct-by-construction process: you can feed it illogical steps and it will catch them, as long as you have defined the valid inference rules. There’s a nice tribute piece explaining the impact.

                  de Bruijn, who created Automath, pushed (see p. 12) for all proof activities to generate steps that a small checker can verify. Although many mathematicians didn’t trust computers, de Bruijn believed the checkers would be so small they’d be easier to get right than the monstrously large proofs users would be handling. Even using mechanical checks is still debated by traditionalists. I’m in the de Bruijn camp, given all the errors I’ve seen in both proofs and software, especially if tedium is involved. The good news for me is that the productive grandmasters of program verification are, too. :)
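
                  To make the kernel idea concrete, here is a minimal sketch in Python rather than the ML the original LCF used (the toy logic and rules are my own illustration, and Python can only enforce the abstraction by convention, not by its type system): theorems can only be produced by the trusted inference-rule functions, so an untrusted tactic can chain them freely, but an illogical step gets rejected.

                  ```python
                  _KERNEL = object()  # capability token: only the kernel rules below hold it

                  class Theorem:
                      # A theorem of a toy implication-only logic. Formulas are strings
                      # (atoms) or ("->", p, q) tuples; construction requires the token.
                      def __init__(self, concl, token):
                          if token is not _KERNEL:
                              raise ValueError("theorems may only be built by the kernel")
                          self.concl = concl
                      def __repr__(self):
                          return f"|- {self.concl}"

                  def axiom_k(p, q):
                      # Axiom schema K: |- p -> (q -> p)
                      return Theorem(("->", p, ("->", q, p)), _KERNEL)

                  def modus_ponens(pq, p):
                      # From |- p -> q and |- p, conclude |- q.
                      c = pq.concl
                      if not (isinstance(c, tuple) and c[0] == "->" and c[1] == p.concl):
                          raise ValueError("modus ponens does not apply")
                      return Theorem(c[2], _KERNEL)

                  # An untrusted "tactic" can only chain kernel rules; a bogus step is caught.
                  k = axiom_k("A", "B")                    # |- A -> (B -> A)
                  try:
                      modus_ponens(k, axiom_k("B", "A"))   # wrong premise: |- B -> (A -> B)
                  except ValueError as err:
                      print("kernel rejected the step:", err)
                  ```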

                  Later, Necula created Proof-Carrying Code, which took the idea into the territory of verifying programs, where the evidence would be proofs of a safety policy (a form of correctness). The neat part was combining the LCF concept, de Bruijn’s criterion, a compiler, and a checker to make it extra useful. The compiler would generate what amounts to a safety proof for the binary code, which would then be checked by a smaller, trusted checker. Appel et al. later made a foundational version, which is the buzzword for a low TCB achieved by doing everything possible in a proof assistant; that one was tiny. Myreen et al., with work like CakeML, continue the foundational tradition, with PCC being an open topic that didn’t go as far. Lots of exploration in mobile code and such, though.

                  And your advisor’s work would be this. Ok, so he’s doing synthesis rather than verification; that’s why I didn’t know his name. Last time I studied synthesis, the people I was reading called it “automatic programming,” on the tail end of a cold time for AI. ;) There was also Koza’s Genetic Programming scoring a lot of hits. The logical stuff wasn’t doing so great, so I just switched to human augmentation with stuff like contracts and verification. I like his idea of sketches, as it sounds similar to what many of us thought about, but never figured out, when doing either portable code in macros or trying to make functional languages efficient. I also recognize StreamIt from when I surveyed ways to parallelize stuff. Impressive that they managed to get performance at or above hand-tuned code in various benchmarks.

                  Neat that you get to work with him. Most of his stuff is outside my expertise, but I saw one really good use of synthesis. It’s going to be on Lobsters in the morning. :)

                  1. 2

                    Koza and automatic programming! Very old-school.

                    You’re probably thinking of StreamIt, not StreamBit. StreamBit is just a tool for optimizing bit-ops. It was but a warmup to his magnum opus, Sketch and CEGIS: https://people.csail.mit.edu/asolar/papers/Solar-LezamaTBSS06.pdf .

                    This sparked the modern renaissance in synthesis and the new wave of interesting work on the generate-and-validate approach.

                    Examples of applications of CEGIS include:

                    https://arxiv.org/pdf/1211.0557.pdf (superoptimization)
                    https://pdfs.semanticscholar.org/19d6/d2b18a540f75bbcfff41402f1d2514b1ec79.pdf (assembly generation)
                    https://arxiv.org/pdf/1201.0979.pdf (multiple applications, including invariant generation)
                    https://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-15.pdf (deobfuscation)
                    http://angelix.io/ (program repair)

                    Sanjit also showed that the older CEGAR (counterexample-guided abstraction refinement) approach to model checking is a special case of CEGIS.

                    This work definitely has a different taste from PCC and LCF. It’s probably because these approaches make much heavier use of search, compared to, say, Coq’s “auto” tactic. Also because I find synthesis way more exciting than verification.

          2. 3

            TLDR: Try to write the code examples without looking at the book.

            1. 8

              TLDR: Read the book normally. For each code sample, read it, close the book, and try to write the code you saw on your own. Rinse and repeat to intuitively pick up the structure.