The author advocates the heavy use of natural transformations, which (using the type system) statically force you to separate concerns. This removes discipline requirements from the human and shifts them to the compiler.

I haven’t used these in many places, so I can’t speak for the author, but one cool way I’ve seen these used is this: imagine you want the user to provide a function that does something to a tree (e.g. cut off the bottom layer, remove the left side, etc.), but under no circumstances should the function vary its behavior depending on the contents of the tree. This is sometimes important for creating well-behaved algorithms. You could write

foo :: (Tree a -> Tree a) -> [Tree a] -> [Tree a]
foo = map

The problem is that the user could easily supply a function like

foo (fmap (+1))

which is wrong because it inspects the contents of the tree, not just its structure. Instead, if you declare

foo :: (forall a . Tree a -> Tree a) -> [Tree b] -> [Tree b]
foo = map

it is impossible to provide a function that actually inspects the contents of the tree. The function must be a “natural transformation”, which cannot inspect the tree’s contents.
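To make this concrete, here is a minimal runnable sketch; the Tree type and the mirror function are my own illustrations, not from the thread:

```haskell
{-# LANGUAGE RankNTypes #-}

-- An assumed binary tree with values at the leaves.
data Tree a = Leaf a | Node (Tree a) (Tree a)
  deriving (Show, Eq)

instance Functor Tree where
  fmap f (Leaf a)   = Leaf (f a)
  fmap f (Node l r) = Node (fmap f l) (fmap f r)

-- The rank-2 version: the argument must work for *every* element type,
-- so it can only rearrange structure, never touch the elements.
foo :: (forall a. Tree a -> Tree a) -> [Tree b] -> [Tree b]
foo f = map f

-- A purely structural transformation: mirror the tree.
mirror :: Tree a -> Tree a
mirror (Leaf a)   = Leaf a
mirror (Node l r) = Node (mirror r) (mirror l)

ok :: [Tree Int]
ok = foo mirror [Node (Leaf 1) (Leaf 2)]

-- By contrast, this is rejected by the type checker, because
-- fmap (+1) only works at a numeric element type:
-- bad = foo (fmap (+1)) [Node (Leaf 1) (Leaf 2)]
```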

You can prove other cool things like this using universally quantified types (the “forall”); this is a very contrived example. Check out stuff like

https://hackage.haskell.org/package/mmorph-1.1.0/docs/Control-Monad-Morph.html#v:hoist

https://hackage.haskell.org/package/base-4.10.0.0/docs/Control-Monad-ST.html#v:runST

https://hackage.haskell.org/package/free-4.12.4/docs/Control-Monad-Free.html

I’m not sure how you would use this construction all the time, and I don’t know what a language conducive to it would look like. It would probably support impredicative types.

One of the big insights of ML-like languages is that you can have a really, really rich language of “algebraic data”. Essentially, this means building rich types up from first creating “patterns” via “AND”-ing, “OR”-ing, functions, constants, and “holes” and then taking their fixed points with respect to those holes.

Or, as an example, consider the following data

data F a b = Empty | Pair a b

which reads as either Empty OR Pair which is the “hole” named a AND the “hole” named b. You take the fixed point by repeatedly filling a hole with a new copy of the pattern, so here’s one step of taking the fixed point with respect to b

data F a b = Empty | Pair a (F a b)

and if you do this “infinitely many” times, you end up with a linked list.

Functor-oriented programming takes this perspective one step further by noting that while we often like to work at the level of the fixed points of these patterns (“functors”), we usually define functionality inductively, one “layer” at a time. If you write all of the functionality of your program directly at this level then you can just compose things together later with some very, very generic and very, very composable glue.
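A rough sketch of what that glue can look like (my own illustration, not from the article): take the fixed point of the F pattern above, define functionality one layer at a time, and let a single generic fold compose the layers:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The pattern from above, with a hole b for the recursive position.
data F a b = Empty | Pair a b
  deriving Functor

-- Tie the knot: Fix plugs a pattern into its own hole,
-- so Fix (F a) is exactly a linked list of a's.
newtype Fix f = Fix (f (Fix f))

-- Very generic, very composable glue: fold a fixed point
-- by processing one layer at a time.
cata :: Functor f => (f r -> r) -> Fix f -> r
cata alg (Fix layer) = alg (fmap (cata alg) layer)

-- Functionality defined on a single layer...
sumLayer :: F Int Int -> Int
sumLayer Empty      = 0
sumLayer (Pair x s) = x + s

-- ...composed over the whole structure by the glue.
sumList :: Fix (F Int) -> Int
sumList = cata sumLayer

example :: Int
example = sumList (Fix (Pair 1 (Fix (Pair 2 (Fix Empty)))))
```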

The issue is that it’s a big kludge to talk about this “layer at a time” programming in Haskell since Haskell really does want to deal with the fixed points instead. While there are plenty of provisions available to talk about “functor oriented programming”, it’s not the exact aim of Haskell and therefore the UX kind of sucks.

Most importantly, you have to provide a lot of hints to the type system to tell it what you’re doing with regard to that Really General Compositional Glue I mentioned above. This is annoying and sort of unforced. Haskell requires it because the mechanisms for “functor oriented programming” are bolted on as a library… and thus none of it is privileged by the compiler.

Well, a functor is a type with an associated map function. This allows you to take a function from a -> b and map it to M a -> M b, where M is your functor type. A good example is List and its associated map function. That all being said, I haven’t a damn clue what they’re talking about.

As fine a language as Haskell is, it is not actually that suitable for functor oriented programming. The problem is that, under normal circumstances, there is no reduction or equivalence classes at the type level. For example, the identity functor does not transparently disappear during composition, the Compose functor is not transparently associative, and the Swap functor composed with itself does not reduce to the identity functor. To cope with this, one must litter one’s code with newtype wrappers and unwrappers to make all these natural transformations explicit. In principle, these transformations should have no run-time consequences, but when they are used in higher-order ways, unfortunately they sometimes do. Despite the problems, I am not aware of any other practical language that better supports this style of programming.
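A small illustration of that newtype noise, using Identity and Compose from base (the example itself is mine):

```haskell
import Data.Functor.Identity (Identity (..))
import Data.Functor.Compose (Compose (..))

-- Conceptually, Compose Identity f is just f, and Compose is
-- associative, but the type checker sees distinct types, so we
-- must wrap and unwrap by hand.

wrapped :: Compose Identity Maybe Int
wrapped = Compose (Identity (Just 1))

-- Stripping the Identity layer is a no-op at run time,
-- yet it still has to be written out as explicit unwrapping.
unwrapped :: Maybe Int
unwrapped = runIdentity (getCompose wrapped)
```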

Perhaps this would work better in a dependently-typed language, like Idris or Agda, where we can apply functions to types and the type-checker’s unification is performed after beta-normalising such function applications.

For example, there’s a nice pattern where we use a function to ‘calculate a type’, but rather than “building up” a fancy type, we just choose between very simple types. We might have a type isEven n which is only inhabited (“provable”) when n is an even number. That sounds hard, but it’s actually trivial: the isEven function checks whether or not n is even; if it is, we return the Unit type, and if it’s not, we return the Empty type:

isEven : Nat -> Type
isEven Zero = Unit
isEven (Succ Zero) = Empty
isEven (Succ (Succ n)) = isEven n

Now we can use isEven n in our types, e.g. to ensure that we don’t try to halve an odd number (since that wouldn’t give us a Natural number):

halve : (n : Nat) -> isEven n -> Nat
halve Zero _ = Zero
halve (Succ (Succ n)) prf = Succ (halve n prf)

We can call this function as follows:

twentyOne : Nat
twentyOne = halve 42 unit

Here the value unit is a value of type Unit. Since 42 is even, isEven 42 will reduce to Unit, and our argument unit has the correct type. If we gave an odd number, like 43, then isEven 43 would reduce to Empty, unit would have the wrong type, and the call would be rejected by the type checker. In fact, since the Empty type has no values, there’s no argument we could give that would make such a call pass the type checker. Hence isEven n acts as a proof that n is even.

I’m not sure if this would work in general for such ‘functor-oriented programming’, but I can immediately see how to normalise away an Identity functor.
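As a sketch of how that normalisation might look (this is my own approximation in Haskell, using a closed type family in place of a true dependently-typed function on types):

```haskell
{-# LANGUAGE TypeFamilies, UndecidableInstances #-}

import Data.Functor.Identity (Identity)
import Data.Functor.Compose (Compose)
import Data.Kind (Type)

-- Normalise a composition of functors by deleting Identity layers.
-- In a dependently-typed language this would be an ordinary function
-- on types, reduced automatically during type checking.
type family Norm (f :: Type -> Type) :: Type -> Type where
  Norm (Compose Identity g) = Norm g
  Norm (Compose f Identity) = Norm f
  Norm (Compose f g)        = Compose (Norm f) (Norm g)
  Norm f                    = f

-- A witness that the reduction happens: this only type-checks
-- because Norm (Compose Identity Maybe) reduces to Maybe.
ok :: Norm (Compose Identity Maybe) Int -> Maybe Int
ok = id
```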

At the risk of sounding very ignorant, what is Functor Oriented Programming, and what would a language that supported it well look like?

Thanks, an example like that is missing in the article.

For Java programmers, this seems to be equivalent to wildcards?

More like polymorphic methods:

Update: Silly mistake.