  1. 5

    I came to functional programming via Haskell, so I tend to think that if your core language includes unmediated side effects and mutable types then it’s not functional, but there’s a spectrum. A lot of people regard ML as a functional language, because it encourages you to write in a functional style, even though it happily accepts side-effecting imperative code. I have a lot of sympathy with that view.

    I think this problem started with the conflation of ‘has higher-order functions’ and ‘is functional’. A decade or so ago there were a load of people arguing that Python was a functional language because it had higher-order functions and map. Those are certainly things that exist in a functional language, but they’re also things that existed in Smalltalk, which is the archetypal dynamic OO language.

    Most of these arguments really boil down to trying to condense ‘X is a kind of Y’ and ‘Y is a good thing’ to ‘X is a good thing’. Why do people care if Python or Swift is a functional language? Because there are some things that they want to do that are easy in functional languages. Do these languages support those idioms? If so, they capture the part of functional programming that you want and that’s great. But if you want to communicate effectively then you need to say what you mean: Python and Swift (and C++ and Objective-C and Smalltalk and Ruby and…) make it easy to apply arbitrary transforms to a collection by defining the transform on an element.
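
    For instance, the whole of that idiom is a one-liner in Swift, and each of the languages listed has an equivalent:

    let names = ["Python", "Swift", "Ruby"]

    // The transform is defined on a single element; map applies it across the collection.
    let lengths = names.map { $0.count }    // [6, 5, 4]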

    The part I want from functional programming is deep immutability guaranteed by the type system, so I guess I have to keep looking…

    1. 3

      “Functional programming” cannot be defined in terms of language features. It has to be defined socially. How “functional” a language is can be defined by how many other groups of languages its users look down on and consider to be impure.

      The base of all FP beliefs – the fundamental creed, if you will – is to declare that von Neumann was a heretic who blasphemed against the holy purity of computing, and renounce him and all his works. The most fundamental language level is probably McCarthy’s original vision of Lisp.

      The next level up is to make heretics of languages and programmers at the base level. So Schemers, for example, adhere to the idea that the only proper computation is a tail recursive computation, and anathematize anyone who disagrees or who dares to use a non-Scheme Lisp. And the ML family sneers at apostates who have the gall to use untyped lambda calculus as the basis of their languages.

      And from there you climb the hierarchy until you find a place where you’re content that everyone below you is a heretic but everyone above you is just a weird schismatic arguing about pointless doctrinal arcana. And this is fluid over time! Haskell folks now look down on a lot of other groups, but one day there will be another language whose adherents will sigh and point out that only a heretic or, at best, a hopelessly confused dilettante, would ever think Haskell of all things had any capacity for functional programming.

      Once you shift your viewpoint to understand this, it starts making a lot more sense.

      Also, I am absolutely not trolling or being satirical here – this is actually how I approach the question of how to define “functional programming”, and I find it’s a far more practical approach than any other I’ve seen proposed.

      1. 1

        Hey David, I used to follow the Étoilé project pretty closely. Nice to have you around! It would be great to read about your thoughts on the dynamic/static divide some time, since I know you’ve delved deep into both sides.

        1. 2

          Hi. The language that I’m working on now is Verona. We haven’t implemented any of the object model yet (we’ve been working on the concurrency model first, because getting that part of the abstract machine right is hard - I think we know roughly how to do object models well). Static vs dynamic means different things to different people, and most languages are somewhere in the middle. Some slightly random thoughts on where I hope we’re going:

          There’s been a lot of recent work on static type systems with structural and algebraic types (exemplified in Pony). The combination of these and type inference is very powerful. For example, in some made-up pseudocode, imagine writing:

          var a;
          if (x)
          {
              a = foo();
          }
          else
          {
              a = bar();
          }
          a.something();
          

          Now, if foo() returns a Foo and bar() returns a Bar, then the static type of a at the end will be Foo | Bar. If both Foo and Bar implement a something() method, then the call at the end will work. It’s up to the compiler to decide on the most efficient way of implementing that dynamic call: it may define an interface containing only the something() method and do an interface cast (which can be efficient if your runtime uses selector colouring), or it may just check the type of the returned value and dispatch to a direct call based on that.

          A system like this feels a lot like a dynamic language, but has the implementation efficiency and static checking advantages of a static language. You can, for example, change the first line to:

          var a : Foo | Bar;
          

          When you do this, you get static checking if you add another path that uses different types. This composes well with a separation where classes are concrete types, with no subtyping. You can potentially have subclassing, but if A is a subclass of B, that doesn’t mean A is a subtype of B. This means that you cannot assign an A pointer to a variable of type B, but you can have an interface type that describes the methods in B and will accept an A pointer or a B pointer (or anything compatible). This lets the caller decide whether to pay the cost of dynamic dispatch, not the callee. If you want to write code that is generic over A-like things, you can pay some cost; if you want to write code that is specialised to A, then you can do that.
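
          A rough sketch of the caller-chooses idea in Swift (its protocols are nominal rather than the structural interfaces described above, and the names are made up):

          protocol HasSomething { func something() }

          // A and B do not share a superclass; they just both satisfy the interface.
          class A: HasSomething { func something() { print("A.something") } }
          class B: HasSomething { func something() { print("B.something") } }

          // A caller that wants to be generic over A-like things pays for dynamic dispatch...
          func useAnything(_ value: any HasSomething) { value.something() }

          // ...while a caller specialised to A gets a direct call.
          func useA(_ value: A) { value.something() }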

          This becomes more interesting in combination with generics. One of the many painful things in C++ is that you have two completely different syntaxes for writing code that handles different types. You can provide an interface (or, in C++ terminology, an abstract class) and do dynamic dispatch over it in a single version of a function, or you can create a templated function and have a version of your function generated for each type that it’s called with. These have different performance characteristics (and different ABI impacts), but you have to decide which one you want to do very early on. We should be able to write code with generic parameters and decide later whether we want to do run-time dynamic dispatch or compile-time specialisation. Rust and C# have mechanisms for doing this.
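
          In Swift, for example, the two mechanisms end up with almost identical bodies (this sketch is mine, not Verona): the generic version can be specialised per concrete type, the existential version goes through dynamic dispatch, and the choice lives only in the signatures.

          protocol Shape { func area() -> Double }

          // Compile-time specialisation: one body, stamped out per concrete Shape (template-like).
          func totalArea<S: Shape>(_ shapes: [S]) -> Double {
              shapes.reduce(0) { $0 + $1.area() }
          }

          // Run-time dynamic dispatch: one body, every call goes through the existential.
          func totalAreaDynamic(_ shapes: [any Shape]) -> Double {
              shapes.reduce(0) { $0 + $1.area() }
          }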

          I’m increasingly convinced that we don’t want run-time reflection in the language. We instead want a compile-time reflection mechanism that is sufficient for implementing a standard library reflection framework. Reflection is also tricky with modularity. I like Smalltalk / Objective-C reflection and have used them a great deal, but it’s incredibly annoying that you can use reflection to inspect a private field and now it’s part of the ABI that you depend on. It would be great to have uses of reflection be part of the type system and part of the ABI contracts. If something chooses to expose itself to some uses of reflection across a module boundary, that’s a commitment to an ABI contract. If it doesn’t, then it should be opaque to reflection. Most of the uses of reflection I have are actually fine to drive from the reflected object. For example, if an object wants to support serialisation, it’s fine for it to import a serialisation adaptor that uses reflection and it can then expose an interface for multiple serialisers (and guarantee upgrade from prior serialised representations). If a program wants to use reflection to hook up things from a UI builder, it can expose some interfaces to dynamic reflection.
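
          Swift’s Codable has roughly this shape, although it isn’t the compile-time reflection mechanism I’m describing: the type opts in, the conformance is synthesised at compile time rather than discovered through run-time reflection, and what gets serialised is therefore part of the type’s declared contract. A minimal sketch:

          import Foundation

          // The type chooses to be serialisable; nothing can reflect over its fields uninvited.
          struct User: Codable {
              var name: String
              var loginCount: Int
          }

          let blob = try? JSONEncoder().encode(User(name: "alice", loginCount: 3))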

          That brings me to two related things: Most languages today have no concept of an ABI at the source level. One of the reasons C remains popular is that C is not close to the metal, it’s close to the linker. C source-level constructs map very cleanly to things that are exposed as part of your module’s ABI (especially with compiler extensions for things like ELF visibility). It’s relatively easy to tell if a source-level change to a C program will change the public ABI (though only if you can see which structures escape at the boundary).

          For software engineering, I’m increasingly convinced that ‘public’ and ‘package’ are the only two visibility types that make sense: is this part of my public, guaranteed stable, ABI / API, or not? If not, then I may still want to break abstractions within my package and poke at that field, but I’m going to always rebuild that code at the same time, so I don’t care too much about the ABI and if I break the API then the person breaking it will see and fix build failures. I would love to have a language explicitly understand ABI versions and compatibility adaptors though, and have ABI guarantee violations be build failures for a package. If you release v1.0 and then you rebuild in a way that removes or changes the exposed ABI, you get a compile / link failure if you didn’t bump the version to 2.0 (or something like 2.unreleased).
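
          Swift’s access levels have ended up mapping onto this fairly closely (a sketch; the package level needs SwiftPM or a -package-name build flag, and the function names are invented):

          // Part of the module's stable, public API/ABI contract.
          public func stableEntryPoint() {}

          // Visible to other modules built in the same package, and nowhere else.
          package func sharedHelper() {}

          // Swift's narrower default, private to the module.
          internal func implementationDetail() {}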

      2. 5

        This article is from 2014. If one continues to read through the author’s journey, they eventually discover map and the other higher-order functions in Swift, and they also describe their wonder at Haskell.

        As usual, “functional” is being used here to signify Lisps and MLs; it sounds like, in particular, the author expects a functional programming language to be like Haskell.

        Objects and functions have a well-known sort of duality. We can formally model objects as state transformers; they are pure, but become impure when embedded in a particular stateful context. The rest is window dressing. For example, linked lists can be encoded as objects, and the encoding is the same as in traditional C exercises handed to undergraduate students, with a structure holding a pair of pointers, giving the O(1) behavior that the author craves.
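
        A tiny Swift sketch of that reading (Counter is an invented example): the object is a pure state transformer, and impurity only appears once a value is stored in a mutable binding.

        // An object viewed as a pure state transformer: operations return new values.
        struct Counter {
            let value: Int
            func incremented() -> Counter { Counter(value: value + 1) }
        }

        let a = Counter(value: 0)
        let b = a.incremented()     // pure: a is untouched

        var c = Counter(value: 0)
        c = c.incremented()         // "impure" only because c is a mutable binding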

        To some extent, the author’s complaint is about the prelude chosen by the language designers. Like with many other languages, the lowest layers of Swift’s user-facing APIs are built out of Swift itself, and so the degree to which Swift feels “functional” is controlled by the choice of prelude. I don’t work with Swift myself, but under a minute of searching led me to alternative functional Swift preludes.

        As a final note, the author’s criteria don’t really separate functional programming from, say, relational logic programming. Both paradigms orient around immutable built-from-scratch data structures and arrows which send values to other values; the main difference is the choice of category (Cartesian closed vs. compact closed) but it leads to a difference in behavior and experience.

        1. 1

          Good point about the prelude, but on the other hand it makes little sense to separate a language from its ecosystem, especially a language like Swift that was created to fill a very specific niche. For me, it was the nature of the standard data structures that made me feel it’s an uphill struggle to try to stick to the functional idiom in Swift, although some of the other pieces are there. Glancing at the prelude you linked to, it seems they are trying to address that, which is great. If it can co-exist with the standard prelude, I might even try it out sometime.

          Your last point is very interesting to me - to be fair, virtually no-one takes an interest in the similarities and differences between functional and relational programming. Do you have a link for the category theoretic characterization?

          1. 1

            I don’t have a single really good link. The closest I have is this entrypoint to nLab, but that table doesn’t show allegories. The main idea, informally, is that we can lift functions to relations, but we cannot always find a function which expresses a given relation. So, we can lift all of functional programming into relational programming, and we gain the ability to run in reverse, but lose the ability to have exactly one answer. There is more to discover formally; I have yet to figure out how to apply it to programming, but Set and Rel together form a very nice double category, and perhaps this doubling-up of heterogeneous categories is the right explanation for how they are different and yet related.
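
            Concretely, the lifting is just taking the graph of a function (a sketch in standard notation, nothing specific to any one formalism):

            \mathrm{graph}(f) \;=\; \{\, (a, b) \in A \times B \mid b = f(a) \,\}

            Every relation between A and B has a converse relation between B and A, which is where running in reverse comes from; but the converse of a graph is generally not single-valued, which is where having exactly one answer is lost.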

            1. 1

              Thanks, interesting stuff! I posted another paper that you might find interesting (although it doesn’t concern functional programming): https://lobste.rs/s/zvum8i/from_conventional_institution

        2. 4

          The confusion for many is not realizing that most mainstream programming languages have both a major paradigm and one or more minor paradigms. See P. Van Roy and S. Haridi, Concepts, Techniques, and Models of Computer Programming, MIT Press, 2004.

          There are few pure (single paradigm) languages, because as Gerry Sussman (co-author of SICP) says…

          “Remember a real engineer doesn’t want just a religion about how to solve a problem, like object-oriented or functional or imperative or logic programming. This piece of the problem wants to be a functional program, this piece of the program wants to be imperative, this piece wants to be object-oriented, and guess what, this piece wants to be logic feed based and they all want to work together usefully.”

          Consider for example, Java’s main paradigm is object-oriented, but it has both procedural and functional minor paradigms. Swift is the same. That Smalltalk supports procedural programming as a minor paradigm doesn’t mean that we call it a procedural language.
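
          As a trivial illustration of a minor paradigm living alongside the major one, here is the same computation in Swift in both styles (the data is invented):

          let scores = [3, 9, 4, 7]

          // Procedural minor paradigm: explicit loop and mutation.
          var imperativeTotal = 0
          for s in scores where s > 5 { imperativeTotal += s }

          // Functional minor paradigm: the same result from pure transformations.
          let functionalTotal = scores.filter { $0 > 5 }.reduce(0, +)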

          1. 1

            Well, at the time the article was written, Java only supported the functional paradigm as an experimental feature.

          2. 3

            I read this article when it was only a year old, so it’s funny to see it come back now. I wonder how it would look if written a few years later (Swift was in complete flux when the article was written, and didn’t stabilise basic syntax until 2016–2017). flatMap, for one thing, has been part of the language at least since Swift 2, if not earlier. The functionally-inspired Swift users certainly don’t use mutable values unless absolutely necessary, so at least that is a functional way of thinking that has influenced Swift in practice, if not the language as such (which is very multi-paradigmatic and doesn’t strive to be purely functional).
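
            (The nil-dropping overload of flatMap has since been renamed compactMap, in Swift 4.1, but the style is the same.) For example, with immutable values throughout:

            let raw = ["1", "two", "3"]

            // Flattening nested collections with flatMap.
            let flattened = [[1, 2], [3]].flatMap { $0 }      // [1, 2, 3]

            // Dropping failed parses with compactMap (formerly a flatMap overload).
            let numbers = raw.compactMap { Int($0) }          // [1, 3]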