1. 11

  2. 3

    It’s interesting that lexical scope is only an honorable mention. For all the other listed examples, I can think of major, mainstream languages that do not have them. Lexical scope is the only exception, where I can’t think of a mainstream language today that doesn’t default to it.

    Maybe it’s so widespread that it feels obvious – kind of like a couple of other things I feel should be on that list: recursion and pass-by-value. Both were once the subject of debate, but won so resoundingly that nobody even thinks about them today.

    1. 2

      Besides the lexical/dynamic distinction, to me an important part of lexical scope is that the programmer indicates where a new variable should be introduced, and as a result shadowing can happen at this introduction point. CoffeeScript does not respect this part of the deal. (Prolog and Erlang are also kind of weird about it.)
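      As a sketch of that introduction-point idea (in Haskell rather than CoffeeScript, so the binder is explicit): each `let` or lambda marks exactly where a new variable begins, and shadowing can only happen there.

```haskell
-- Each binder marks the introduction point of a new variable;
-- shadowing happens only at such a point, never by accident.
shadowed :: Int
shadowed =
  let x = 1                   -- outer x introduced here
  in (\x -> x + 1) (x * 10)   -- the lambda introduces (and shadows) its own x;
                              -- the argument (x * 10) still sees the outer x
```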

      1. 2

        Emacs Lisp comes to mind, defaulting to dynamic scope. That said, it isn’t mainstream, but a ton of code is written in it.

      2. 1

        Abstraction … Modularization

        Yer double-counting, Harry.

        Not mentioned:

        • non-strictness
        • explicit effects
        • purity
        • typeclasses (not just in Haskell now – they’re what Rust’s traits are based on)

        I could probably think of a few others. It’s a bit strange that the author picks the oldest possible paper for modularization rather than an assortment of historically or educationally interesting papers.

        1. 2

          They are all great points (aren’t “explicit effects” and “purity” in essence the same thing?), but I am not sure that I would trade them for the ones that are proposed in the list.

          An interesting aspect of the items you propose is that they are also less well-understood, I think: explicit effects/purity is fascinating, but the consensus on how best to achieve it, both in theory and in practice, is not yet clear (monads? algebraic effects? should effect inference be done Koka-style or Frank-style?). We have a bit more experience with laziness, but the gap between the theory (where it’s well understood that evaluation order should be a per-type property) and practice (where a language has a global strategy with explicit opt-out markers) also gives me pause.
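          For the laziness point, a tiny Haskell illustration of the “global strategy with explicit opt-out markers”: the unused component of a pair is simply never forced, and you would have to opt in to strictness (with `seq` or a bang pattern) to change that.

```haskell
-- Laziness as the global default: the error is never forced,
-- because nothing ever demands the first component.
lazyPair :: (Int, Int)
lazyPair = (error "never forced", 42)

forcedSnd :: Int
forcedSnd = snd lazyPair
-- Forcing the first component instead, e.g. fst lazyPair `seq` ...,
-- would hit the error; that is the explicit opt-out marker at work.
```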

          You point out that Rust has traits, but that’s a strange narrative. Scala had implicits for a while, and I think that the Coq/Agda/Idris/Isabelle use of type classes is notable as well. They may appear less “mainstream” than Scala or Rust, but they had, for example, the important consequence of demonstrating that coherence is not an issue in a dependent language: as the elaborated instance shows up in the type, any issue caused by locally distinct resolutions will be caught by the type system.

          1. 1

            (aren’t “explicit effects” and “purity” in essence the same thing?)

            I’d like to believe so but some people believe Haskell isn’t pure because we have partial functions :)

            ‘Sides, you could have explicit effects in something like Rust and still not be purely functional, because it’s an imperative programming language. Adding explicit effects to OCaml would be much the same. Y'all hacked some limbs off a dead cow and tacked them onto your lambda calculus, so purity’s a dead letter.

            consensus on how to best achieve it, both in the theory and in the practice, is not yet clear

            Monads work great for me and I have a range of methods at my disposal for working with them. mtl-style (nota bene: not the actual mtl library) has been best-of-all-worlds. I have not been impressed with Eff even when an entire language was catering to it.
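            A minimal sketch of that mtl style (hand-rolled here, not the actual mtl library, with a made-up `MonadLog` effect): the effect is a typeclass constraint on the program, and a concrete monad is chosen only at the edge.

```haskell
-- The effect as a constraint: programs say *what* they need,
-- not which concrete monad provides it.
class Monad m => MonadLog m where
  logLine :: String -> m ()

-- One concrete interpretation: collect log lines purely.
newtype Collect a = Collect { runCollect :: ([String], a) }

instance Functor Collect where
  fmap f (Collect (w, a)) = Collect (w, f a)

instance Applicative Collect where
  pure a = Collect ([], a)
  Collect (w1, f) <*> Collect (w2, a) = Collect (w1 ++ w2, f a)

instance Monad Collect where
  Collect (w1, a) >>= f =
    let Collect (w2, b) = f a in Collect (w1 ++ w2, b)

instance MonadLog Collect where
  logLine s = Collect ([s], ())

-- The program stays abstract in `m`; swap in IO, a test monad, etc.
greet :: MonadLog m => String -> m String
greet name = do
  logLine ("greeting " ++ name)
  pure ("hello, " ++ name)
```

Running it with the pure interpreter gives `runCollect (greet "world") == (["greeting world"], "hello, world")`.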

            You point out that Rust has traits, but that’s a strange narrative. Scala had implicits for a while

            I point out Rust because it disallows orphans entirely.

            Scala had implicits for a while

            Not canonical, so not typeclasses. Implicits are terrible and not equivalent to typeclasses at all. This isn’t hypothetical: implicit scope behavior in Scala bites users all the time.

            Adding an import shouldn’t silently change the behavior of my code at runtime with no warning. That is obscene.

            important consequence of demonstrating that coherence is not an issue in a dependent language

            *shrugs* I’ve only kicked around tutorials in Agda, Idris, and Coq, whereas I work in and write about Haskell. I’d prefer to have the explicit option of making the overloading unique per type across the entire class for the sake of the humans reading and reasoning about the code.
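            That “one implementation per type” invariant, concretely (a toy `Describe` class, not from any library): resolution is by type alone, and a second instance for the same type anywhere in the program is a compile error.

```haskell
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = if b then "yes" else "no"

-- A second `instance Describe Bool` anywhere else in the program is
-- rejected by the compiler; that global uniqueness is exactly the
-- hard invariant being asked for here.
byType :: String
byType = describe True
```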

            1. 1

              I’d like to believe so but some people believe Haskell isn’t pure because we have partial functions :)

              Touché, but the expected reply is that non-termination (or other sources of partiality) should be tracked in the types. There is something not completely clear about this idea, though I don’t personally understand whether it’s non-termination that is special, or just that the current way we track effects is too heavy for this orthodox reply to be really convincing.

              ‘Sides, you could have explicit effects in something like Rust and still not be purely functional because it’s an imperative programming language. Adding explicit effects to OCaml would be much of the same. Y'all hacked some limbs off a dead cow and tacked them onto your lambda

              I don’t see a difference with the do notation. (One minor difference is that the do-notation forces explicit sequencing of evaluation, so it should be compared to these languages in ANF form. Some people think that’s good, some think it’s bad.)
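              The ANF point can be made concrete: do-notation is just sugar for nested `>>=`, so both spellings below name each intermediate result and fix the same sequencing.

```haskell
progDo :: Maybe Int
progDo = do
  x <- Just 2
  y <- Just 3
  pure (x * y)

-- The same program with the sugar removed: each step is named and
-- sequenced explicitly, much like an imperative/ANF rendering.
progBind :: Maybe Int
progBind = Just 2 >>= \x -> Just 3 >>= \y -> pure (x * y)
```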

              for the sake of the humans reading and reasoning about the code.

              Proof-aware languages bring much more powerful ways of reasoning about code (e.g. carrying around properties for the various type-class laws) that are mediated by the type system, and I think they completely subsume the guilty comfort of modularity-breaking determinism. It’s liberating.

              1. 1

                but I don’t personally understand whether it’s non-termination that is special, or just the current way we track effects that are too heavy for this orthodox reply to be really convincing.

                Start concrete, shift into mtl-style as needed: light as a feather, baby.

                I don’t see a difference with the do notation.

                I find it hard to believe you don’t care about the semantics of your programming language.

                Proof-aware languages bring much more powerful ways of reasoning about the code (eg. carrying properties around for the various type-class laws) that are mediated by the type system and I think completely subsume the guilty comfort modularity-breaking determinism. It’s liberating.

                Sometimes I don’t want proofs about a set of implementations over a type, sometimes I want one implementation per type as a hard invariant. Nobody’s going to squander brain budget to make an academic happy. PL work would reach industry more often if y'all understood that.

                1. 1

                  I find it hard to believe you don’t care about the semantics of your programming language.

                  We may be talking about different things, as I don’t follow your reply; I do care. I am just pointing out that “imperative language” means “you write effectful programs in direct style”, and that the do-notation is precisely there to let people write effectful programs in (ANF-)direct style, so it gives the same notational convenience and (if both sides track effects in types) exactly the same reasoning difficulties.