1.
  1.  

  2.

    Wow, the author puts these on the same level (“fundamental algorithmic techniques”):

    1. Computing values by recursion over inductively-defined structures.
    2. Computing values as least fixed points of monotone functions by bottom-up iteration.

    I think of structural recursion as the fundamental algorithmic technique. It’s like magic! It helps you organize your thinking: it says “the input can only be these cases, and here are the tools available in each case”. It works on lists, trees, and acyclic graphs. If you memoize, it can be as efficient as dynamic programming. When you’re done, the code is very understandable; sometimes it reads almost like a definition.
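
    To make that last point concrete, here's a made-up little example (mine, not the author's): counting paths through a DAG by structural recursion, where a memo table turns the exponential recursion into the usual dynamic-programming answer.

    ;; Sketch: structural recursion over an acyclic graph, plus memoization.
    ;; `graph` is a hash from node to its list of successors; we count the
    ;; distinct paths from `node` to the node 'sink. Assumes #lang racket.
    (define (count-paths graph node)
      (define memo (make-hash))
      (define (go n)
        (if (eq? n 'sink)
            1
            (hash-ref! memo n
                       (lambda ()
                         (for/sum ([m (in-list (hash-ref graph n '()))])
                           (go m))))))
      (go node))

    ;; (count-paths (hash 'a '(b c) 'b '(sink) 'c '(b sink)) 'a)  ; => 3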

    I wonder if this “least fixed point” technique is just as powerful, but works on general graphs?

    I remember reading some intriguing examples on Matt Might’s blog, but not really understanding how define/fix works:

    The nullability procedure appears to be infinitely recursive:

    (define/fix (nullable? l)
      #:bottom #f
      (match l
        [(empty)           #f]
        [(eps)             #t]    
        [(token _ _)       #f]
        [(orp l1 l2)       (or (nullable? l1) (nullable? l2))]
        [(seqp l1 l2)      (and (nullable? l1) (nullable? l2))]
        [(redp l1 _)       (nullable? l1)]))
    

    But define/fix saves it by computing the least fixed point if it detects the function recurring over a cyclic graph instead of a tree. The #:bottom argument specifies where to begin the fixed point computation.

    http://matt.might.net/articles/parsing-with-derivatives/

    This makes me want to understand it better!
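
    Here's my attempt at a stripped-down sketch of what I think define/fix is doing, in plain Racket. This is not Matt Might's actual implementation, and the grammar representation is something I made up: a hash mapping each nonterminal symbol to a right-hand side built from 'eps, (tok ...), (alt ...), (seq ...), or another nonterminal symbol. The point is just the bottom-up iteration: start every nonterminal at the #:bottom value #f and keep re-evaluating until nothing changes.

    ;; Sketch only: nullability as a least fixed point over a possibly cyclic
    ;; grammar (hash of nonterminal -> right-hand side). Assumes #lang racket.
    (define (nullable-table grammar)
      ;; Evaluate one right-hand side against the current table of guesses.
      (define (nullable? rhs table)
        (match rhs
          ['eps           #t]
          [`(tok ,_)      #f]
          [`(alt ,a ,b)   (or  (nullable? a table) (nullable? b table))]
          [`(seq ,a ,b)   (and (nullable? a table) (nullable? b table))]
          [(? symbol? nt) (hash-ref table nt)]))
      ;; Start everything at the bottom element #f, then re-evaluate all
      ;; nonterminals until the table stops changing: the least fixed point.
      (let loop ([table (for/hash ([nt (in-hash-keys grammar)]) (values nt #f))])
        (define next
          (for/hash ([(nt rhs) (in-hash grammar)])
            (values nt (nullable? rhs table))))
        (if (equal? next table) table (loop next))))

    ;; Left-recursive example: S -> S "a" | eps  (S is nullable despite the cycle)
    ;; (nullable-table (hash 'S '(alt (seq S (tok "a")) eps)))  ; => '#hash((S . #t))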

    1.

      I wonder what the applications of this “least fixed points of monotone functions by bottom-up iteration” thing are, outside of compilers/formal methods.
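
      One non-compiler example I can think of (my own sketch, so take it with a grain of salt): reachability in an arbitrary directed graph - say, which machines a fault can propagate to - computed as the least fixed point of “add the successors of everything already in the set”, starting from the bottom element of just the start node.

      ;; Sketch: reachability in a possibly cyclic directed graph as a least
      ;; fixed point. `graph` maps each node to a list of successors.
      ;; Assumes #lang racket (for racket/set).
      (define (reachable graph start)
        (let loop ([seen (set start)])
          (define next
            (for*/fold ([s seen])
                       ([n (in-set seen)]
                        [m (in-list (hash-ref graph n '()))])
              (set-add s m)))
          ;; Keep growing the set until it stops changing.
          (if (equal? next seen) seen (loop next))))

      ;; Cycles are fine:
      ;; (reachable (hash 'a '(b) 'b '(c a) 'c '()) 'a)  ; => (set 'a 'b 'c)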

      1.

        I’ve long thought it would be powerful to design a four-year CS degree program around compilers.

        The first course wouldn’t start until the second year. Maybe in the last semester/quarter, maybe several different compiler courses throughout that second year. Starting in the 2nd year allows concepts to be gently and gradually introduced, while also “weeding out” those who can’t hack it.

        And from then on, students would always be enrolled in some sort of course in compiler design and construction. Throughout their 3rd and 4th year, every semester. Building up deeper knowledge about all those topics a typical single compiler course shreds through at warp speed.

        Imagine someone coming out of that degree program, being able to design and implement compilers as easily as most devs can create a standard reusable library.

        Because most people who get a CS degree only ever take one compiler course - and a lot don’t even have that, apparently. We barely retain what we crammed into our heads in that exhausting, over-packed course. And that’s why we rarely create parsers and DSLs when we’re on the job - much less full-fledged compilers.

        How would our profession be different if this were a standard skill?