1. 71

  2. 19

    This is a fantastic and insightful analysis. You could debate it, of course, but the fact that Lisp emerged when the alternatives were so low-level is amazing; the comparison to calculus is a good one.

    1. 3

      Following the Maxwell equations analogy: if Lisp is analogous to vector calculus, today we have programming language analogues of tensor calculus, differential geometry, de Rham cohomology, etc.

      1. 3

        Which languages are they? Almost all currently-used languages I can think of come from the Algol stable, rather than Lisp; to use the analogy they are built from counting pebbles rather than vector calculus. Clojure and Elixir seem like the only counterexamples I can recall.

        1. 6

          Tensor calculus makes finer distinctions that vector calculus (at least as taught to engineers) totally glosses over: contravariant vs. covariant indices, tensors vs. pseudotensors vs. tensor densities, etc. These distinctions are useful, because they prevent you from wasting time saying things that are geometrically and physically nonsense. Does this remind you of a certain language feature that causes endless online debates, and neither Clojure nor Elixir has?

          If Lisp is analogous to a tool that was good enough to formulate classical electrodynamics, then we today have analogues of tools that are good enough to formulate general relativity, quantum mechanics, etc.
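          To make the index-discipline point concrete, here is a minimal Python sketch (the Vector/Covector classes are illustrative, not from any library): giving the two kinds of object distinct types means a static checker such as mypy rejects a geometrically meaningless pairing before anything runs, while the meaningful pairing still works.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Vector:
    """A 'contravariant' object (illustrative name, not a library type)."""
    components: Tuple[float, ...]

@dataclass(frozen=True)
class Covector:
    """A 'covariant' object: the thing that meaningfully pairs with a Vector."""
    components: Tuple[float, ...]

    def pair(self, v: Vector) -> float:
        # Contracting a covariant index with a contravariant one is the
        # only operation defined; a checker like mypy flags w.pair(w)
        # as a type error, i.e. nonsense, before the program ever runs.
        return sum(a * b for a, b in zip(self.components, v.components))

v = Vector((1.0, 2.0))
w = Covector((3.0, 4.0))
print(w.pair(v))  # 1*3 + 2*4 = 11.0
```

          The point of the sketch is only that the distinction lives in the types, not in the values: both classes hold the same tuple of floats.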

          1. 2

            Does this remind you of a certain language feature that causes endless online debates, and neither Clojure nor Elixir has?

            Wait, I thought Clojure does have contracts

            1. 3

              Runtime-enforced contracts at best distinguish wanted from unwanted meanings. They don’t distinguish the presence from the absence of a meaning (i.e., nonsense). For if there were no meaning, there would be no program to run.

              1. 2

                The way I think of it is that runtime-enforced contracts allow you to fail quickly in response to bad data. Check a POST body against a JSON schema and return 400 if it doesn’t match, and the like. Compile-time “contracts” (i.e. types) allow you to be certain that the data you are handling in a given function or module is valid.

                Runtime contracts could be seen as protecting against defects in the type system. “Integer” may be the most limiting type expressible in a given language, but a contract can say “this value is between 0 and 9.”
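                As a minimal sketch of that last point (the function name is hypothetical): a runtime contract narrowing plain `int` down to a single decimal digit, failing fast on bad data.

```python
def set_digit(value: int) -> int:
    # The annotation can only say "int"; the runtime contract narrows
    # that to a single decimal digit and fails fast on bad data,
    # much like returning 400 on a POST body that fails schema checks.
    if not 0 <= value <= 9:
        raise ValueError(f"expected a digit in 0-9, got {value}")
    return value

print(set_digit(7))  # satisfies the contract
try:
    set_digit(42)    # violates it
except ValueError as err:
    print(err)       # fails fast with a clear error
```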

                1. 1

                  Compile-time “contracts” (i.e. types)

                  “Compile-time contracts” needn’t be types. It could be something you proved on your own the traditional way, i.e., using paper and pen. Why limit yourself to the subset of mathematics that one specific tool (a type checker) can handle?

                  Runtime contracts could be seen as protecting against defects in the type system. “Integer” may be the most limiting type expressible (…)

                  This is a virtue, not a defect. At some point you start hitting diminishing returns when you try to prove everything with types. Other mathematical tools are more flexible and scale better to more complex scenarios.

                  1. 4

                    To be frank, I don’t trust paper-and-pen proofs. You can prove something ten ways from Sunday, and then get a NULL pointer where you don’t expect it and blow up. I think of the Knuth quote, “Beware of bugs in the above code; I have only proved it correct, not tried it.” That’s not to say that verification and proof outside of the code are valueless (certainly not), but if I’m relying on them I would also encode my assumptions in runtime contracts whenever they exceed what the type system can express.
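                    One way to sketch that practice in Python (binary search is just a stand-in example): an assumption the paper proof relies on, but the type system cannot express, gets restated as a runtime contract so a violation blows up at the boundary instead of corrupting the result.

```python
def binary_search(xs, target):
    # Sortedness is assumed by the pen-and-paper correctness proof but
    # is inexpressible as a type here; restating it as a runtime
    # contract makes any violated assumption fail loudly at the call.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), \
        "contract violated: xs must be sorted"
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == target else -1

print(binary_search([1, 3, 5, 7], 5))  # 2
print(binary_search([1, 3, 5, 7], 4))  # -1
```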

                    1. -4

                      To be frank, I don’t trust paper and pen proofs.

                      This is only natural if you don’t know mathematics. The fix is obvious: learn mathematics.

                      1. 9

                        I’ve got a PhD in mathematics and therefore the number of mistakes I’ve made in so-called “proofs” is staggering.

                        1. 6

                          There’s a reason we left pen and paper proofs for mechanically-checked proofs with minimal TCB’s. People were screwing up too much. You should thoroughly read this paper by Gutmann:


                          Key excerpt illustrating the problem:

                          “The problems inherent in relying purely on a correctness proof of code may be illustrated by the following example. In 1969, Peter Naur published a paper containing a very simple 25-line, text-formatting routine which he informally proved correct. When the paper was reviewed in Computing Reviews, the reviewer pointed out a trivial fault in the code which, had the code been run rather than proven correct, would have been quickly detected. Subsequently, three more faults were detected, some of which again would have been quickly noticed if the code had been run on test data.

                          The author of the second paper presented a corrected version of the code and formally proved it correct (Naur’s paper only contained an informal proof). After it had been formally proven correct, three further faults were found which, again, would have been noticed if the code had been run on test data.

                          This episode underscores three important points made earlier. The first is that even something as apparently simple as a 25-line piece of code took some effort (which eventually stretched over a period of five years) to fully analyse. The second point is that, as pointed out by DeMillo et al, the process only worked because it was subject to scrutiny by peers. Had this analysis by outsiders not occurred, it’s quite likely that the code would’ve been left in its original form, with an average of just under one fault for every three lines of code, until someone actually tried to use it. Finally and most importantly, the importance of actually testing the code is shown by the fact that four of the seven defects could have been found immediately simply by running the code on test data.”

                          Mathematics alone won’t help if one is verifying algorithms and/or code. There are many ways to screw up. The correct solution might require a substantial body of proofs that are then thoroughly checked by human minds and/or tools. Most people working alone with pencil and paper are likely making serious mistakes that will end up in the codebase.

                          See also my submission Don’t Trust the Math:


                          1. 2

                            These are different problems. Paper and pencil proofs are to convince people and to understand the principles. Mechanical proofs are for details - but they can be wrong on bigger issues. The big limitation right now is we don’t have good ways of thinking about complex state systems mathematically.

                          2. 6

                            I majored in mathematics and don’t think it obviates the need for careful testing.

                            1. 5

                              “Mathematics” does not govern software. Software and hardware do. There are edge cases that writing a mathematical proof (on paper!) does not cover. Many of them. That’s the meaning of the Knuth quote I included above. Unless you think he doesn’t know math, either.

                              1. 2

                                I’d argue that people govern software - people, in all their irrational, legally mandated requirements that have to be satisfied before they’ll sign the functional acceptance agreement!

                              2. 5

                                Condescension is not knowledge. Real mathematicians know that proofs can be wrong. Real engineers/scientists know that correct proofs may be for mathematical models that miss some aspect of the physics. Being able to use some terms from category theory doesn’t make you qualified to be in either one of those categories.

                  2. 6

                    Lambda functions are fairly widely implemented in most mainstream languages by now (C#, C++ IIRC, Rust, PHP)

                    While they are still mostly Algol-like, I believe that a lot of the things LISP invented/discovered have slowly bled into other languages we use today.
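                    As a small illustration of that bleed-through (plain Python, nothing language-specific assumed): first-class functions and closures, which Lisp pioneered, are now ordinary in mainstream languages.

```python
def make_adder(n):
    # A closure: the lambda captures n from the enclosing scope,
    # an idea Lisp made mainstream long before it reached Algol heirs.
    return lambda x: x + n

add5 = make_adder(5)
print(add5(10))                                # 15
print(list(map(lambda x: x * x, [1, 2, 3])))   # [1, 4, 9]
```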

                    1. 4

                      I believe that a lot of the things LISP invented/discovered have slowly bled into other languages we use today.

                      What did you have in mind? As far as I can tell, most of the languages we use today lack these iconic Lisp traits:

                      • homoiconicity
                      • hygienic macros
                      • interactive development - here I’m referring to the situation where there is a runtime error and the Lisp system stops, lets you fix the code, recompile only the changed parts, and continue, instead of forcing you to fix and restart the whole interaction
                      1. 1

                        Maybe ultra-high-level programming, memory safety, garbage collection, and DSL’s could be on the list of what went mainstream. Looking at the LISP machines, there’s also keeping your whole stack in high-level languages as much as possible; there are containers and VM’s for that now. They were also among the earliest IDE’s with interactive development and debugging. Just a few ideas that popped into my head rather than a solid list, since the lineage of programming languages is wide enough for me to forget or just not know something in a quick post. :)

                    2. 1

                      Odd then that not only has the vast majority of scientific computing bypassed Lisp, but the higher level mathematical tools are mostly built on Algol descended languages.

                  3. 1

                    The other one that came out a few years after McCarthy’s Lisp was META II:


                    Kay liked that language for similar reasons, in terms of DSL building or just expressing transformations, and ended up merging LISP variants and META successors in his later work on the STEPS project.


                  4. 8

                    Is it just me or are the parentheses in his post unbalanced?

                    1. 8

                      It is not just you. The first paragraph has an open parenthesis that never closes.

                      A funny error to see in a post praising Lisp.

                    2. 7

                      It’s actually an answer from the dude himself.

                      1. 4

                        It’s an opinion statement, of course. It’s consistent with his values: introspection rather than abstraction, flexibility rather than correctness (“arrogance in computer science is measured in nano-Dijkstras”), etc.

                          1. 1

                          Not-serious Lisp bashing ahead: Lisp is so great that it has had so little success. Too many parentheses…

                          More seriously: I’ve never practiced Lisp much, but I feel like it is a great language.

                            1. 1

                            Recommend trying Clojure, it’s a modern Lisp that runs on the JVM, Node, and the browser. It’s pretty applicable for doing useful real-world things. :)