I don’t see anything in this post that I necessarily disagree with, but it calls into question the usefulness of the term “functional programming”, since it’s a particularly hand-wavy term that defies a clean, crisp definition. As used colloquially it’s a vestigial term for a family of languages that share some historical influence on each other, generally evolving out of some sort of λ-calculus (but not necessarily). In 2015, languages that are considered functional-style overlap very little semantically.
There are plenty of other interesting classifiers that have well-defined meanings. Is it statically typed? Is it unityped? Is it pure? What kinds of polymorphism does it support? How does it track effects? Does it use boxing for heap allocation? Does it use runtime tagging? Does it support higher-kinded types? Does it have dependent types? … These questions have definitive answers and lead to much more interesting discussions, in my opinion.
This post boils down to one sentence, presented without any evidence:
Say it long, say it loud, functional programming is about side-effects.
The rest is elaborations on this theme.
Nowhere does he explain why this is fundamental rather than a consequence of something more fundamental. Others have argued that equality is a more fundamental concern, and I agree. Whichever is true, though, I agree with Cyrus Omar’s comment on Twitter:
thinking of “functional” as having a technical, rather than a sociotechnical, meaning is fraught with peril
So any post which claims “FP is this one precise technical thing” is probably wrong, even if I personally agree with it.
I think I enjoy where this author went and cannot disagree on technical detail, but I’m a vocal proponent of tossing away the idea that languages are functional or not. FP is a culture, not a technology, and you can place whatever core values, principles, or practices you want at the center and then go to town trying to say that a language is or is not functional. I just don’t see the merit.
That said, I’d be happy to argue that recognizing that side effects are a thing, and then handling them as the thing that they are, is something very dear to this culture. FP has a long tradition of talking about, and taking seriously, fine distinctions between things, and “values” versus “computation” is a major one. It also has a strong reflection in strictness versus laziness, to be clear.
FP may be a culture, but languages are a culture too. Try writing purely functional code in Java. The stdlib will fight you, your code reviewers will fight you. Heck, your IDE will fight you. Java’s tooling, libraries, and workaday programmers are all part of Java’s culture, and that culture is vehemently anti-pure-functional.
I agree. As long as we’re talking cultural phenomena then it’s vital to note how pervasive they are. To say that FP-as-culture and Java-as-culture are at odds is a powerful and, I believe, irrefutable statement. We can discuss how the use of objects as metaphorical abstractions or to wrap verbs clouds one’s ability to express and work with the leaner tools of “functional” abstraction, for instance.
It’ll just lack the hard-edged appeal of a technical argument. That’s totally fine, but I feel like it just racks up fewer points in today’s FP news zeitgeist. I’d personally rather see more of those arguments than trying to wrap up the whole distinction within one technical trick or language feature.
This is a very strong analysis, and it’d be interesting to take the discussion even further. Side effects are a core part of our job, because (a) some things are semantically stateful, and (b) mutable state is a powerful optimization and harmless when it doesn’t “leak” (e.g. memoizing a function, which uses state for performance but doesn’t dump any complexity upstream). The question is: what is the best way to manage them? IO is a start, but monad transformer stacks get hard to use and probably limit Haskell’s mass appeal. I’m excited by Purescript’s extensible effects and by free monads/interpreters, and by dependent types… but it can get hard to tell which direction is “into the rabbit hole” and which is toward stuff that we could actually push into the mainstream (since the demand for statically-reasonable code is there, and few good programmers love Java or Python or Go, though many use them).
I actually don’t consider the core concepts of Haskell to be that much of an impediment to learning the language. The ideas of lazy computation and IO a representing a stateful action that returns an a are not that hard; what gets difficult is the mounting complexity of doing something larger, like a game, “right”. The first iteration with lots of IOs might work, but this is unsatisfying and leads into the question about what monad transformer stacks should replace the stuff that really doesn’t need to be “IOing”… and that’s the part that takes a long time to grok (and explain to new programmers).
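The progression this comment describes can be sketched with a tiny example: instead of doing everything in IO, the state lives in a StateT layer and IO is reached for only where a real effect happens. This is a minimal sketch using the transformers package; the Game type and tick function are made-up names for illustration, not from the post.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, evalStateT, execStateT, get, modify)

-- Hypothetical game monad: the score lives in StateT, not in IO.
type Game a = StateT Int IO a

tick :: Game ()
tick = do
  modify (+ 1)                           -- pure state update; no IO involved
  n <- get
  lift (putStrLn ("score: " ++ show n))  -- IO only for the real effect

main :: IO ()
main = evalStateT (tick >> tick) 0
```

The “first iteration with lots of IOs” version would give tick the type IO () and thread the score through an IORef; the transformer version makes the non-IO part of the program visible in the types.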
It will be interesting to see how Clojure evolves if an off-JVM Clojure overtakes the existing implementation. The JVM is heavily stateful and makes it possible to do all sorts of janky stuff while nominally “in Clojure”, but an off-JVM Clojure could be much more aggressive in enforcing referential transparency (which is often what we desire when we talk about functional programming.)
The further I get into functional programming, the more I wonder if what really holds Haskell back isn’t just syntax and nomenclature, as superficial as they may seem. Even as an experienced Scala programmer I find Haskell examples quite unreadable at a syntactic level - unparseable even. At a higher level so many functional names are just detached from any referent - I may understand Functor/Monoid/Applicative/Bind/Monad/Kleisli/Kind from simple experience, but even at this level of knowledge I have no idea what a Strength or a Profunctor is (other than the notion that it’ll be somehow related to a Functor) - it never gets any easier. Other names are actively misleading - I’m thinking particularly of “type constructor” or “monad transformer”. Contrast this with the Gang of Four patterns - Strategy or Factory or Mediator immediately evokes something real-world that gives a reasonable intuition for what something like that is supposed to do.
Not to take away from your point, but for interest:
Profunctor is a generalization of Functor which captures behavior similar to a function, in that it has an “input” side and an “output” side. Normal Functors as used in Haskell are “covariant”, which is what makes them “output”-like. Contravariant Functors (seen here) are ones that are “input”-like. Finally, Profunctor sort of glues a covariant and a contravariant functor together. Function arrows (_ -> _) are Profunctors because they’re covariant in their output parameter and contravariant in their input parameter.
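To make the “input side / output side” point concrete, here is a minimal sketch of the class (mirroring the shape of dimap in the profunctors package) together with the function-arrow instance:

```haskell
-- Minimal Profunctor class, shaped like the one in the profunctors package.
class Profunctor p where
  dimap :: (a' -> a) -> (b -> b') -> p a b -> p a' b'

-- Functions are contravariant in their input, covariant in their output:
instance Profunctor (->) where
  dimap pre post f = post . f . pre

-- Adapt a String -> Int function to run on Ints and produce a String:
example :: Int -> String
example = dimap show (\n -> "length: " ++ show n) length
-- example 12345 == "length: 5"
```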
Strong (“strength”) is a refinement of a Profunctor, so named because there’s a category theory concept of “functorial strength” which corresponds to the strength of the Kleisli Profunctor. In particular, a functor is “strong” if it “commutes around” products, so Strength just talks about a Profunctor which “commutes around” products (tuples).
Similarly, “Choice” is a kind of “co-strength” where a Profunctor “commutes around” coproducts. The actual “Costrong” asks for “commuting around” products in the “other direction”.
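For the function-arrow case, the “commuting around” talk becomes very concrete. These standalone definitions mirror what Strong’s first' and Choice’s left' do for plain functions; the names firstArrow/leftArrow are mine, not the library’s:

```haskell
-- "Strong" for (->): slide a function past a product (tuple),
-- leaving the other component untouched.
firstArrow :: (a -> b) -> (a, c) -> (b, c)
firstArrow f (a, c) = (f a, c)

-- "Choice" for (->): slide a function past a coproduct (Either),
-- leaving the other branch untouched.
leftArrow :: (a -> b) -> Either a c -> Either b c
leftArrow f (Left a)  = Left (f a)
leftArrow f (Right c) = Right c
```

So `firstArrow negate (3, "keep")` gives `(-3, "keep")`, and `leftArrow negate` passes any Right value through unchanged.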
So why are all these names really abstract and nonsensical? In my mind, it’s because these concepts are really all “geometric”. I visualize them not with words but instead with directions and “layerings”—so it’s difficult to capture them in such practical terms as Factory or Mediator.
I think both of these posts are great examples of why Haskell, Scala, and similar languages aren’t catching on as fast as they could be. Most developers aren’t math PhDs, so using a ton of advanced math theory and terminology hurts uptake more than it helps.
If I want to learn a language, I probably don’t want to take a big detour into obscure abstract math, especially when there’s not a very clear, very big advantage to doing so.
It’s like the Java “architecture astronauts” all over again, but with math concepts and terminology instead of object oriented concepts.
I guess in my mind the right posture is to be patient and keep sharing. This terminology exists for very real reasons and forms something significantly more permanent than the tools of architecture astronauts (IMO). To try to avoid it feels like sugarcoating something valid, putting the gloves on.
The very hard work of building a suitable onramp to this sort of conceptual world is happening from other angles, and yet is also somewhat outside of the scope of my personal power and interest.
It could be the case that all of this is just going to evaporate the same as Java architecture astronauting. I can’t directly refute that. I’m personally invested in that not being the case, though.
The other answer is that someday we’ll find better terminology which succeeds in not pulling punches and also provides a softer approach for those unfamiliar. We’ll replace “Profunctor” with something carrying the same meaning but less ceremony. I’m totally happy with that as given two equal names I am not one to take sides. I’m just not sure anything has come close to parity.
I find that this is often the case. For example, renaming Functor to Mappable seems like a good idea, but it’s not quite correct. Sets are ostensibly “Mappable” but not Functor, because their semantics depend on equality and there’s no type-agnostic, structure-preserving fmap. And there’s really no good name for Monad (“Flatmappable?”).
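The Set point shows up directly in the type of Data.Set.map, which carries an Ord constraint that fmap’s signature cannot express, and which can merge elements:

```haskell
import qualified Data.Set as Set

-- Set.map :: Ord b => (a -> b) -> Set a -> Set b
-- The Ord b constraint is exactly what fmap's signature forbids, and
-- mapping can shrink the set when the function identifies elements:
shrunk :: Set.Set Int
shrunk = Set.map abs (Set.fromList [-1, 1])
-- Set.size shrunk == 1, though the input set had two elements
```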
Learning one new “mathy” concept isn’t the problem. If it just stopped at monads, I’d think that Haskell is still salable to average programmers. However, it doesn’t stop there. When you find yourself wanting to replicate the nice parts of traditional languages in an idiomatic and sound way, you find yourself getting into lenses/prisms/optics, monad transformers, various language extensions that push Haskell toward being a dependently typed language, row types in Purescript and the lack thereof in standard Haskell, and so on. All of the new concepts are cool, and lend rigor to the nice parts of traditional imperative programming – a lot of this stuff makes it possible to do imperative programming when that style is more useful, right! – but it opens up a very deep rabbit hole.
I’m a huge fan of Haskell, but I think it might be a Banquo language (“Thou shalt get kings, though thou be none”) that will influence the next big language, but not be it. (Scala won’t. It’s riding on a white elephant.) That said, it’s still one of the best languages ready for production right now, and the problems that remain with it are all so hard to solve that it’s likely to retain its superiority for a wide variety of purposes for a long time.
It’s probably the case that Haskell blazes too many trails to spend the time in education and smoothing the progression, but I also don’t really see too many other languages taking up this mantle. Ideas are spreading, to be sure, but it’s rare to see another community go whole hog like this.
You can’t “map” a set while preserving its structure (at least without constraining your function a bit more), so I think Mappable is entirely accurate - after all, its only member is fmap. I’ve had some success with “Flatmappable”, though I know some approaches will put flatMap in a distinct Bind type (I’d be slightly interested to see examples of things that are Bind but not Monad - I guess I can imagine some of the unpointed types might fit?)
Finite maps are obvious examples of Bind. You cannot return a value into a Map since you haven’t got a natural default key (establishing the pointedness you mention) but you clearly can define a bind operation.
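For the curious, the usual bind for finite maps (this follows the shape of the Bind instance for Map in the semigroupoids package) matches keys pointwise, so it never invents keys and collisions cannot arise:

```haskell
import qualified Data.Map as Map

-- For each key k in m, keep whatever the map (f a) holds at that same
-- key k, if anything. No keys are created, so none can collide.
bindMap :: Ord k => Map.Map k a -> (a -> Map.Map k b) -> Map.Map k b
bindMap m f = Map.mapMaybeWithKey (\k a -> Map.lookup k (f a)) m

example :: Map.Map String Int
example = bindMap (Map.fromList [("a", 1), ("b", 2)])
                  (\v -> Map.fromList [("a", v * 10)])
-- Only "a" survives: example == Map.fromList [("a", 10)]
```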
Not that clear to me - how do you form the keys after a flatMap in a way that wouldn’t allow collisions?
The problem with statements like
Say it long, say it loud, functional programming is about side-effects.
is that this is one definition of a functional language, but certainly not the only one. Functional programming doesn’t have a hard definition, which always seems to cloud discussions about it. Give a post a title like “Which Programming Languages Are Functional?” and people come into it with a particular definition which may or may not be the same definition.
What this article really means is “What Programming Languages are built around Pure Functions?” which is a much more precise term.
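A two-line illustration of the “built around pure functions” reading, where the distinction is visible in the types:

```haskell
-- Pure: same input, same output, nothing else happens.
double :: Int -> Int
double x = x * 2

-- Effectful: the side effect is recorded in the type as IO.
doubleLoudly :: Int -> IO Int
doubleLoudly x = do
  putStrLn ("doubling " ++ show x)
  return (x * 2)
```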