1. 13

  2. 10

    IMO the list has a couple of flaws. For instance:

    (16) Pattern matching — The ability to destructure/deconstruct data based on its structure rather than projecting data out of a single value.

    This is a great feature, and strongly indicative of working in a functional world, but it isn’t core to functional programming. A very minimal scheme is purely functional, but may lack pattern matching.
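    To make the destructuring-vs-projection contrast concrete, here is a quick sketch in Haskell (my example, not the article's):

    ```haskell
    -- Destructuring a pair via pattern matching: the shape of the data
    -- appears directly in the definition.
    swapMatch :: (a, b) -> (b, a)
    swapMatch (x, y) = (y, x)

    -- The same function written by projecting fields out of a single value.
    swapProject :: (a, b) -> (b, a)
    swapProject p = (snd p, fst p)
    ```

    Both are purely functional; only the first needs pattern matching, which is the commenter's point.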

    (17) Lack of subtyping — The language does not have a construct for subtyping, and thus has no common notion of “OO”. In its place, other type constructions are used, herein row types.

    I’m a bit on the fence here. I think subtyping in general is a misfeature. However, I don’t think its presence prevents a language from being functional.

    (19) Type Safety — The language has type safety: the idea that well-typed programs can’t go wrong.

    How does this apply in a language like Scheme or Common Lisp? CL is clearly not purely functional, but minimally typed functional languages exist.

    (26) Programmer defined infix and mixfix operators — The ability to extend the language with new fixity operators as the programmer needs them.

    No. User-defined operators are the devil and I defy anyone to show me a use case that couldn’t be expressed more clearly with normal functions. User-overloaded operators are generally OK, as they allow things like overloading +, *, etc. for linear algebra without having it in the core language. However, I’ve never had a situation where user- or library-defined infix operators were a meaningful win once one considers the cost to readability.
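    The kind of overloading I mean can be sketched with Haskell’s Num class and a hypothetical 2-vector type (componentwise (*) chosen purely for illustration):

    ```haskell
    data V2 = V2 Double Double deriving (Eq, Show)

    -- Overloading (+) and (*) for linear algebra without touching the core
    -- language: just another Num instance.
    instance Num V2 where
      V2 a b + V2 c d = V2 (a + c) (b + d)
      V2 a b * V2 c d = V2 (a * c) (b * d)   -- componentwise, for illustration
      negate (V2 a b) = V2 (negate a) (negate b)
      abs (V2 a b)    = V2 (abs a) (abs b)
      signum (V2 a b) = V2 (signum a) (signum b)
      fromInteger n   = V2 (fromInteger n) (fromInteger n)
    ```

    No new operator symbols are introduced; the existing (+) and (*) just gain instances.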

    This list reads like the author comes primarily from ML-family languages – which is fine. However, the definition he gives conflates features common to ML (whose utility I won’t argue, except for operators) with features that define FP. As someone whose world is Scheme and Lisp, I would love to have some of these features (and do have several: Guile, Racket, Chicken Scheme, and Clojure all have pattern-matching extensions, and Racket and Clojure have typing extensions), but they don’t define FP.

    1. 3

      No. User-defined operators are the devil and I defy anyone to show me a use case that couldn’t be expressed more clearly with normal functions

      Monads and applicatives.

      Although I think you’re drawing an unnecessary distinction. I don’t see why a language should treat the operator (+) differently from (<*>). I am fairly conservative about defining my own operators, though. I think restricting operators the way C++ does produces a lot of awkward, less readable code: because one cannot define new operators, people end up reusing the existing ones for non-obvious things, since they have no choice. The comma operator is a classic example.
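      To make the applicative case concrete, here is the same Maybe computation written both ways (my own sketch):

      ```haskell
      import Control.Applicative (liftA2)

      -- With a named combinator:
      addMaybesNamed :: Maybe Int -> Maybe Int -> Maybe Int
      addMaybesNamed = liftA2 (+)

      -- With applicative operators: reads as ordinary function
      -- application, lifted into Maybe.
      addMaybesOps :: Maybe Int -> Maybe Int -> Maybe Int
      addMaybesOps mx my = (+) <$> mx <*> my
      ```

      The operator version generalizes to any arity without a family of liftA2, liftA3, … combinators, which is the usual argument for (<*>).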

      However, depending on what you mean by “user-overloaded operators”, I disagree with you. If you mean that I can have multiple definitions of (+) in scope, with one chosen based on the types of its operands (à la C++), I think that is bad. However, if you mean that I can rebind (+) in my scope (à la ML and Haskell), I think that is good.
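      The rebinding I have in mind looks like this in Haskell, with a hypothetical clock-arithmetic type:

      ```haskell
      import Prelude hiding ((+))
      import qualified Prelude

      newtype Clock = Clock Int deriving (Eq, Show)

      -- (+) rebound in this module's scope (ML/Haskell style): there is
      -- still exactly one (+) visible here, not C++-style dispatch on
      -- operand types.
      (+) :: Clock -> Clock -> Clock
      Clock a + Clock b = Clock ((a Prelude.+ b) `mod` 12)
      ```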

      1. 1

        I humbly disagree that Monad / Applicative code is improved by the use of operators. Ultimately that is a matter of taste. To each their own, but tangential to the definition of FP.

        I would prefer rebinding to multiple dispatch on operators, but I have to admit the utility of both after the amount of time I’ve spent using NumPy (which allows, for example, matrix + matrix and matrix + scalar, each of which does “the right thing”).

      2. 3

        Right. Also, many of these principles are not specific to functional programming.

        (21) α-conversion — The language has the property of alpha conversion.

        (22) β-reduction — The β-reduction rule plays a central part in the evaluation strategy of the language.

        (23) η-conversion — The η-conversion rule holds.

        So, α-conversion is the renaming of bound variables, i.e. the principle that a function’s semantics don’t change if you rename its arguments, and η-conversion identifies f with λx. f x. I don’t think those traits are specific to functional programming.
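        For what it’s worth, η-conversion shows up routinely in everyday Haskell style; these two definitions (my example) denote the same function:

        ```haskell
        -- η-expanded form: the argument is written out.
        inc1 :: Int -> Int
        inc1 x = succ x

        -- η-reduced form: f and \x -> f x are identified,
        -- so the argument can be elided.
        inc2 :: Int -> Int
        inc2 = succ
        ```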

        I suppose there’s a point to be made about the centrality of β-reduction in functional programming languages, insofar as you have to roll your own in, say, assembly language, and C procedures compile down into assembly code. Structured programming ultimately took something that was hard-core imperative (the machine itself) and imposed a slight bit of functional thinking on it. Now, 60 years later, we have the luxury of programming in languages that are closer to λ-calculus than the machine.

        However, I’ve never had a situation where user- or library-defined infix operators were a meaningful win once one considers the cost to readability.

        I’ve seen it in Haskell, but one has to be careful. It requires a lot of taste.

        For example, the lens package fixes Haskell’s lack of a nice update syntax, and in a type-safe, purely functional way. This is achieved using infix operators. It required a lot of sophistication on the designers' part to get it right.
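        The shape of the idea can be shown with a minimal hand-rolled sketch of lens-style operators in plain Haskell (the real lens package is far more general; the names and fixities below mirror it, but everything here is my own simplification):

        ```haskell
        {-# LANGUAGE RankNTypes #-}
        import Data.Functor.Const (Const (..))
        import Data.Functor.Identity (Identity (..))

        -- A van Laarhoven lens, as popularized by the lens package (simplified).
        type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

        view :: Lens s a -> s -> a
        view l = getConst . l Const

        set :: Lens s a -> a -> s -> s
        set l x = runIdentity . l (const (Identity x))

        infixl 1 &
        infixl 8 ^.
        infixr 4 .~

        (^.) :: s -> Lens s a -> a
        s ^. l = view l s

        (.~) :: Lens s a -> a -> s -> s
        (.~) = set

        (&) :: a -> (a -> b) -> b
        x & f = f x

        data Point = Point { px :: Double, py :: Double } deriving (Eq, Show)

        -- A lens focusing on the x coordinate.
        xL :: Lens Point Double
        xL f (Point x y) = fmap (\x' -> Point x' y) (f x)
        ```

        With these, `Point 1 2 & xL .~ 5` is a pure, type-safe “update” that reads left to right like assignment, which is exactly the syntax gap the operators fill.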

      3. 2

        The whole list is suspect.

        The author should have used an ordered list instead of an unordered list with manually added numbers. Poor markup choices are a red flag in my book.

        (Just kidding. But it does bug me a little.)