1. 19

  2. 16

    (not (true? scots-man))

    1. 14

      This article would have gone down a lot more smoothly if it were a single sentence: “I like Common Lisp and I wish more languages adopted its model of image/REPL-based development.”

      But of course, there’s an extra and implicit step here: the author has identified a certain set of Common Lisp features they particularly like with Lisp itself, and is thus tremendously upset that there are so many languages out there which claim to be Lisps but don’t have those features. The common-sense response, that maybe Common Lisp is very good at being Common Lisp and other Lisps are good at being other Lisps, is abandoned in favor of the conclusion that Common Lisp is good at being a Lisp and all other languages are bad at being a Lisp.

      I do wonder why this happens so much with Lisp. We’re constantly squabbling over whether something satisfies the criteria. The reference to Algol is instructive; while we might all agree that C++ is in the Algol family, we neither claim that it is an Algol, nor would we really care.

      I’m inclined to gesture vaguely towards Wittgenstein; Lisp is a useful category, probably more useful than Algol, because there is a family of languages which bear a family resemblance to each other; each one shares some features with the others, though no two share exactly the same features. So it is a useful distinction. But we are, as ever, deeply uncomfortable with family relations. When we are presented with a category, we insist on identifying the set of features shared by all its members. And we (perhaps very understandably) will start with the set that we, personally, find most beneficial. So for the author it’s image development. For me it’s s-expressions, homoiconicity, and macros. And then we attempt to identify that set with the whole category, and become terribly agitated when there isn’t a good fit.

      1. 4

        I do wonder why this happens so much with Lisp.

        I also wonder about why this is the case with Common Lisp. My understanding is that there were a number of lisps and standardization was meant to unify them. Maybe to some people this standardization process means that it’s the heir to the throne and therefore the one true Lisp of lore. To me, it means the opposite: it’s yet another lisp.

        Am I missing something here?

        1. 7

          You’re right, I think the ‘smug lisp weenies’ generally come from that tradition rather than, say, the scheme one.

          If I had to guess I’d say it’s two things:

          1. Common Lisp comes out of the industrial tradition, as opposed to the academic or hobbyist one. Its practitioners have therefore had the greatest interest in identifying, honing and indexing on the best: the most powerful features, the most powerful libraries, the most powerful practices, because they’ve had the greatest evolutionary pressure to actually produce good code, quickly (though probably not in large groups!).

          2. Given point 1, Common Lisp users for decades did have to constantly justify their choices, and for decades, many (more) of the things that made Common Lisp special were much rarer in industry. First-class/higher-order functions, lambdas, closures, memory safety, syntactic macros: all features which, like image based development, made Common Lisp special, but unlike image based development, were once exotic and are now commonplace. So in a way the greater Lisp family has won—those things (except for macros) are arguably all table stakes for any ‘modern’ language—but the Common Lisp community was forged in the heat of battle. Its discourse is still one predicated on the assumption that it’s a very special (as opposed to the current reality, which is that it’s simply rather special) paradigm, which requires constant justification and differentiation.

          1. 4

            The alternative explanation of course is that Common Lisp truly is unique and special, solely by its technical merits, and all other languages, including other Lisps, are Blub, and those of us who consider ourselves Lisp programmers but nevertheless fail to see that fact are the worst of the heretics, because we’re so close to the truth but still worshiping the wrong god.

            1. 5

              Cue that famous Emo Philips skit about a person trying to talk someone off a ledge by finding common ground in religion.

              1. 3

                The narcissism of small differences, in other words. See also: academic politics.

                1. 3

                  academic politics

                  Well it’s Lisp so it’s on brand.

        2. 9

          So, I’m having a hard time following what the core of the critique here is. It seems like the author:

          • Has beef with Clojure, because it models changes over time differently than CL does
          • Thinks that the core appeal of Lisp is the whole system being in one language, rather than trying to Layer Lisp Atop Other things
          • And then thinks that syntax matters a lot less than the environment (which I kinda follow?)

          I feel like I’m missing a lot of context here

          1. 4

            My understanding is that this is the core of the argument:

            In a similar fashion, I have attempted to illustrate a style of Common Lisp, in which the “language” is in fact an interactive programming system, features subsume multiple programming paradigms, occasionally “beating them at their own game”, and provides excellent efficiency for high-level, readable programs. Such a style is why one sticks around with a language; there is not another language with the described design strategies.

            And the rest is beating down every other model of Lisp for taking only bits of Lisp and still calling it Lisp (“selling Lisp by the pound”), which the author believes defeats the purpose of the whole interactive system thing.

            1. 4

              If that’s the core of their argument, then Factor and Smalltalk would also effectively be LISPs too, no?

              1. 2

                “subsume multiple programming paradigms”

                Can’t speak to Factor. Smalltalk might have it on image-based, readable, and maybe fast. Lisp can then add new paradigms by just adding a library. The language itself morphs to better express the solution, maybe multiple times in a program. People did it for OOP and AOP. Everything in a Rails-like framework might be DSLs in the same ultra-fast base language. Like with Rayon in Rust, the macros might make similar or the same constructs automagically handle goals like parallelism. Lots of flexibility aiming for lots of potential uses.

                If all Smalltalks have that, then they’d be comparable enough, minus the syntax advantages. If not, it’s a tremendous advantage for Lisp, assuming one counts it as an advantage. Detractors would think Smalltalk would be better the second the codebase was in maintenance mode. “No macros = easier-to-understand code,” they say. They’ll argue most uses of macros can be done with library calls easily enough in a dynamic, generic, or whatever language. There are counterpoints.

                Note: a quick look at the Factor page says it’s stack-based. That might cancel multi-paradigm for it, too, if being stack-based leaves it more constrained than Lisp.

                1. 2

                  Factor can support a large number of DSLs, though, which is why I bring it up.

              2. 3

                Which is part of why I don’t describe Clojure as a LISP.

            2. 6

              One nice thing to model is “identity” of data, i.e. that two structures with the same values are different somehow. Object-oriented languages (and procedural languages with some effort) provide intrinsic “identity” by using references to structures; the memory address is used to form identity. Some functional programmers, particularly Clojure programmers, said the equivalent of “fuck it, we don’t need identity, cause that doesn’t model time. With pure functions, one can model time.”

              I don’t really understand this critique. It feels like there are two separate things here: one is the question of an equivalence class for “equal but not identical” objects, and the other is modeling and control of state and side effects. For the first, Clojure can compare memory addresses just like Java can with ==: (identical? [:a] [:a]) returns false because the two vectors have different memory addresses. For the second, Clojure has a rich set of reference types (what Clojure calls “identity”) for explicitly modeling changing state, including vars for the global environment, promises, atoms, and STM refs for changing data, and defrecord and deftype, in addition to all the JVM container types (e.g. AtomicReference) for Weirder Stuff. It’s not at all a “purely” functional approach; you can do that, but I think the default Clojure path leans a lot more towards eagerness and mutability than towards, say, an effects system or monad transformers.
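The value-equality vs. reference-identity distinction above can be sketched directly on the JVM that Clojure runs on. This is a hypothetical illustration in plain Java, not code from the thread: `List.of` stands in for Clojure vectors, and `AtomicReference` approximates what a Clojure atom does (a stable identity whose current value changes over time).

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: value equality vs. reference identity on the JVM,
// the distinction the comment maps to Clojure's (= ...) vs. (identical? ...).
public class IdentityDemo {
    public static void main(String[] args) {
        List<String> a = List.of("a");
        List<String> b = List.of("a");

        // Same values, distinct objects -- like (= [:a] [:a]) vs.
        // (identical? [:a] [:a]) in Clojure.
        System.out.println(a.equals(b)); // true:  structural equality
        System.out.println(a == b);      // false: different memory addresses

        // A Clojure atom is roughly an AtomicReference: one identity
        // whose value is swapped over time, modeling change explicitly.
        AtomicReference<List<String>> state = new AtomicReference<>(a);
        state.set(b); // the identity persists; its current value changes
        System.out.println(state.get() == b); // true
    }
}
```

The design point mirrors the comment: identity is a property of the reference cell, not of the data it currently holds, which is why Clojure can have immutable values and still model change.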