Some well-meant criticism: I wish the article had kept going. Since the first two sections are usually what the developer is used to, I thought it was a setup to finally have a comprehensive explanation and example of higher-order types, only to be told that higher types are useful and that's it, end of story.

This is great. Could someone please clarify/expand on:

List has kind Type -> Type.

Is the idea here that the List (constructor? first-order type? operator?) takes one type (the thing we are making a list of, Foo) and returns a new type List Foo?
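That reading can be checked with a hand-rolled list type; a sketch (Haskell's built-in list type is actually spelled [], and older ghci versions print * instead of Type):

```haskell
-- A hand-rolled list type, just so we can inspect its kind:
data List a = Nil | Cons a (List a)

-- In ghci:
--   ghci> :k List
--   List :: Type -> Type    -- List alone still needs an element type
--   ghci> :k List Int
--   List Int :: Type        -- a proper type, with values like Cons 1 Nil
```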

This was short and straight to the point, amazingly written; however, I feel it's a bit too short. Not everyone will understand a kind annotation such as (Type -> Type) -> Type; personally I would have also gone and briefly explained how that format works. Just my two cents.

Nice article. I wanted to expand a little on Foldable and Traversable.

In Haskell at least, Foldable and Traversable don't have kind "(Type -> Type) -> Type"; instead, they have kind "(Type -> Type) -> Constraint". Playing in ghci:

ghci> :k Foldable
Foldable :: (Type -> Type) -> Constraint
ghci> :k Traversable
Traversable :: (Type -> Type) -> Constraint

So, what's that "Constraint" thing, and in what way is it different from "Type"?

"Foldable List" doesn't mean that "Foldable List" as such is a type that can have values. Instead, it says "the type List (of kind Type -> Type) supports the Foldable operations, like length, fold, and so on". That is: "Constraint" is the kind of type-level assertions that say "this type supports the operations of this typeclass". "Foldable List" has kind "Constraint".
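As a sketch of what a Constraint looks like at the use site: in the signature below, Foldable t (kind Constraint) sits to the left of the =>, while t itself has kind Type -> Type:

```haskell
-- 'Foldable t' is a Constraint; 't' is the container type constructor.
howMany :: Foldable t => t a -> Int
howMany = length

-- The same function works for any Foldable:
--   howMany [1, 2, 3]    -- 3
--   howMany (Just 'x')   -- 1
```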

A simpler example, with the Ord typeclass which enables comparisons:

ghci> :k Ord
Ord :: Type -> Constraint

It takes a Type instead of a Type -> Type because Ord is applied to “proper” types like Int or Bool. “Ord Int” has kind “Constraint”.
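A quick sketch of Ord at work; the constraint applies to the proper type a:

```haskell
-- 'Ord a' (kind Constraint) lets us compare values of the proper type a:
smallest :: Ord a => [a] -> a
smallest = minimum

--   smallest [3, 1, 2]   -- 1
--   smallest "banana"    -- 'a'
```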

So, what kind of type would have kind “(Type -> Type) -> Type”? A simple but not very useful example would be a wrapper that is parameterized by a type constructor “Type -> Type” and contains a value of that constructor applied to Int:

ghci> data Wrap f = Wrap (f Int)
ghci> :k Wrap
Wrap :: (Type -> Type) -> Type

That is, Wrap List contains a List of Ints. Wrap Maybe contains a Maybe Int.
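Here is a runnable version of that sketch, using Haskell's built-in [] (the list type constructor) since List is not a standard name:

```haskell
-- Wrap takes a type constructor f (kind Type -> Type) and stores an 'f Int':
data Wrap f = Wrap (f Int)

wrappedList :: Wrap []       -- holds a list of Ints
wrappedList = Wrap [1, 2, 3]

wrappedMaybe :: Wrap Maybe   -- holds a Maybe Int
wrappedMaybe = Wrap (Just 5)
```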

More useful is the kind of monad transformers, used to “augment” a base monad with some extra behaviour:

ghci> import Control.Monad.Trans.Maybe
ghci> :k MaybeT
MaybeT :: (Type -> Type) -> Type -> Type

The first argument is a monad type of kind Type -> Type (like IO for example). The second argument is simply the type of the resulting action:

ghci> :k MaybeT IO
MaybeT IO :: Type -> Type
ghci> :k MaybeT IO Int
MaybeT IO Int :: Type
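For the curious, MaybeT is essentially just a newtype around m (Maybe a). A self-contained sketch mirroring the definition in transformers (minus the instances) shows where those kinds come from:

```haskell
-- Sketch of the core type behind Control.Monad.Trans.Maybe:
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

-- The kinds line up with the ghci session above:
--   MaybeT        :: (Type -> Type) -> Type -> Type
--   MaybeT IO     :: Type -> Type
--   MaybeT IO Int :: Type
lookupAge :: MaybeT IO Int   -- an IO action that may fail to produce an Int
lookupAge = MaybeT (pure (Just 42))
```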

I admit to hand-waving around that question a bit at the end of my post. I wanted to keep the material as accessible and as general as I could, and introducing kinds built on anything other than Type (including Constraint, or e.g. any use of DataKinds) seemed slightly out-of-scope.

Not necessarily. Check out Andrej Bauer’s “Spartan Type Theory” for a simple OCaml implementation. The link to the lecture recording no longer seems to work, but there are some slides too.

This stuff is pretty new to me. Apologies for naive thinking, but I think my next question is along the lines of “what is the benefit of treating types as different to kinds?”.

I think I’m thinking along the lines of older languages having a distinction between values and functions, and a selling point of better/cleaner abstraction being “first class functions”.

What would be the implications of having “first class types/kinds”? That then makes “higher kinded types” more normalised, as “higher order functions” are perhaps today?

There's definitely work in this direction, and there's merit to it. There are a few different spots in the design space this line of thought could take you, some of which result in incredibly powerful systems. I'll draw distinctions between a few different designs:

1. Infinite type hierarchies: this is when you keep types vs. kinds stratified, but the kind layer mirrors the type layer, and has another layer above it (sorts? categories? let's just go with types^3), which has another layer above it (types^4), etc. All of these look identical, but we keep them separated.

2. type :: type - read "'type' has type 'type'." In this system, we flatten all of the layers of the type system, so that they all exist in the same space.

3. Dependent types, where we actually allow types to depend on runtime values.

I don’t know of any system that does just (1). It seems like it would make implementation more complex, and I don’t have a strong intuition of what the impact of it would be on programming.

A lot of concessions in type systems come down to “implementing this is hard;” it’s very easy to trip over undecidable problems trying to add fancy features to type checkers. This is especially true if you want to support type inference.

(1) and (3) are often combined, typically in languages designed as theorem provers, such as Coq and Agda. As it turns out, there's a direct correspondence between type systems and formal logic: a type can be considered a logical proposition, and a value is a proof of that proposition. Tuples are logical "and", functions are logical implication (give me a proof of A, and I can use it to prove B), etc. These more advanced type systems can be used to prove non-trivial mathematical theorems. It's pretty mind-blowing when you first encounter it.
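That correspondence (Curry–Howard) can be sketched even in plain Haskell; each function below is a proof of the proposition its type expresses:

```haskell
-- A pair (a, b) proves "a AND b"; a function a -> b proves "a IMPLIES b".

-- Proof that A AND B implies B AND A:
andCommutes :: (a, b) -> (b, a)
andCommutes (x, y) = (y, x)

-- Modus ponens: from A and (A implies B), conclude B:
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x
```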

These systems do not do (2), because doing so introduces logical paradoxes in the same vein as Russell's Paradox; the stratification prevents this. They also have to impose some restrictions to ensure that programs always halt – otherwise an infinite loop could allow what is essentially circular reasoning.

Idris is an example of a language geared towards actually building software that does (1) and (3). It’s very much a research project.

One snag that Idris 1 hit is that when writing proofs, you often end up using very inefficient representations of things for the sake of making formal reasoning easier, and because you've allowed types to depend on values, that means they sometimes end up being there at runtime when you didn't expect them to. I've heard stories of accidentally-quadratic implementations of various "provably correct" algorithms. Idris 2 addresses this by adding a Rust-ownership-like system to track whether things are actually used, so you can have the types enforce that something is only used at compile time.

The big downsides to all this are:

1. As you can probably imagine, the implementation of this is generally much, much more complex than having a simple one-off special case.

2. Having a super powerful type system increases the learning curve of the language itself.

3. How to actually build software in systems like this is still something of an open research question (which Idris was built to explore).

One thing to note, though, is that if you write the description of your program down in Idris, then you will likely not have a very hard time porting it to whatever compiler ends up being most powerful. The descriptions are very precise, and in that sense you get to approach the future-proofness of mathematics (modulo whether anyone wants to maintain a reader for this data format).

One practical thing that kinds enable is the ability to create different layouts of data in memory. This may be useful in a garbage-collected language that has a uniform representation of types on the heap but also wants to be able to pass values around on the stack, for example; the compiler then needs some way to know which values should not be GC'd. Kinds enable this by distinguishing pointers to heap-allocated values in the former case from, say, a packed 64-bit representation of some struct that is meant to be copied about by value in the latter. I'm just a dilettante on this topic; see this talk about unboxed types for OCaml for an actual expert: https://www.janestreet.com/tech-talks/unboxed-types-for-ocaml/

Others have given specific answers, but I will give a general one:

There is always a tradeoff between expressiveness and analyzability. When something is second-class, that means useful guarantees can be made about it. In the limit, the tradeoff is between type systems which are complete (all well-formed programs are typeable) and consistent (all typeable programs are well-formed). Most type-system designers choose for their type systems to be consistent (which is a very reasonable choice, imo).

That is a case where the balance falls fairly clearly on the side of analyzability over expressiveness. In other cases it is not so clear-cut. And there are other axes to consider, too; in many cases, increasing expressiveness will result in a more complex type system. I believe that e.g. Idris's type system is not even decidable!

I know things like this are usually written in terms of Haskell, etc., but for people familiar with C++, higher-kinded types are template template parameters.

e.g. the >>= compose function has the type:

compose :: m a -> (a -> m b) -> m b

(renamed for clarity)
The C++ equivalent is:
template <template <typename> class M, typename A, typename B>
M<B> compose(M<A>, std::function<M<B>(A)>);

Note I am not saying that the C++ syntax is pretty, just that it's a thing that exists, and is useful. It's just that in C++ you're less likely to use it the way it's used in functional languages.

Something C++ does not have is higher-ranked types, which are another fun one - essentially the ability to pass around a function without binding all (or any) of its type parameters. Interestingly, higher-ranked types cannot be inferred (at least in DHM 15 years ago), and need to be specified explicitly.
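A sketch of a higher-ranked type in Haskell (via the RankNTypes extension); note the explicit signature - inference alone cannot discover the nested forall:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument keeps its own type parameter unbound, so we can apply it
-- at two different types inside one function body:
applyToBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyToBoth f (n, b) = (f n, f b)

--   applyToBoth id (3, True)   -- (3, True)
```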

This is well-written. Nice.

This is well explained thank you!

Apparently I thought so too; immediately after posting it on Mastodon I followed it up with this example:

Yes, I think you got that right.

Thank you.

That's right! `List` is a first-order type, and also called (at least in Haskell) a type constructor (because it constructs types).

That's a clear explanation, great contribution!

Makes me appreciate the conceptual simplicity of dependent types.

Way harder to typecheck in practice, though.

Not too much harder? What complexity are you thinking of?

Thank you for the detailed answer, lots to think about here.
