Kris apparently wrote this in part because I was asking him questions.

I enjoyed this article because it transcends a specific programming language, and gave me what felt like a primitive that I could apply to almost any language.

GADTs are what you get when you add non-syntactic type equalities to ADTs.

Normal equality on types is generally taken to be syntactic equivalence modulo beta-reduction. Beta-reduction is the reduction rule (λx. e) e' → e[x/e'] (substitute e' for x in e), i.e. function application. So for example, the types list a and list a are equivalent. So are the types (λx. list x) a and list a.

Non-syntactic type equality is when the context influences type equalities. Let’s say in our scope we have a variable co with type (a ~ b), and we want to know if list a and list b are equivalent. The answer is yes, because we can use co to rewrite a to b. Types like (a ~ b) are called coercions, and if you have one in scope, that is proof of a particular equality.
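In GHC Haskell this is directly observable: Data.Type.Equality provides a type a :~: b whose only constructor, Refl, is exactly such a proof, and castWith uses it to rewrite. A minimal sketch (the names proof and rewritten are just illustrative):

```haskell
{-# LANGUAGE TypeOperators #-}
import Data.Type.Equality ((:~:) (Refl), castWith)

-- A coercion held as an ordinary value: Refl proves a type
-- equal to itself.
proof :: Int :~: Int
proof = Refl

-- castWith :: (a :~: b) -> a -> b uses the proof to rewrite.
-- Here it rewrites Int to Int, but the same mechanism works
-- for any a :~: b you have in scope.
rewritten :: Int
rewritten = castWith proof 42
```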

Here is an example from the link that @taylorfausak posted:

```haskell
data Expr a where
  I   :: Int  -> Expr Int
  B   :: Bool -> Expr Bool
  Add :: Expr Int -> Expr Int -> Expr Int
  Mul :: Expr Int -> Expr Int -> Expr Int
  Eq  :: Expr Int -> Expr Int -> Expr Bool
```

This is syntactic sugar for the following:

```haskell
data Expr a = I   Int        (a ~ Int)
            | B   Bool       (a ~ Bool)
            | Add (Expr Int) (Expr Int) (a ~ Int)
            | Mul (Expr Int) (Expr Int) (a ~ Int)
            | Eq  (Expr Int) (Expr Int) (a ~ Bool)
```

That is, when you create an I, you have to supply a proof that the type variable a is Int. When you create a B, you have to supply a proof that the type variable a is Bool. Etc. When you perform a case on an Expr a you then get the appropriate coercions in scope for rewriting. This is how the evaluator from the link works:

```haskell
eval :: Expr a -> a
eval (I n)       = n
eval (B b)       = b
eval (Add e1 e2) = eval e1 + eval e2
eval (Mul e1 e2) = eval e1 * eval e2
eval (Eq e1 e2)  = eval e1 == eval e2
```
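A quick usage check, repeating the definitions so the snippet stands alone: the Eq constructor builds an Expr Bool out of Expr Int pieces, and eval returns the right type for each.

```haskell
{-# LANGUAGE GADTs #-}

data Expr a where
  I   :: Int  -> Expr Int
  B   :: Bool -> Expr Bool
  Add :: Expr Int -> Expr Int -> Expr Int
  Mul :: Expr Int -> Expr Int -> Expr Int
  Eq  :: Expr Int -> Expr Int -> Expr Bool

eval :: Expr a -> a
eval (I n)       = n
eval (B b)       = b
eval (Add e1 e2) = eval e1 + eval e2
eval (Mul e1 e2) = eval e1 * eval e2
eval (Eq e1 e2)  = eval e1 == eval e2

-- (1 + 2) == (1 * 3): an Expr Bool built from Expr Int pieces.
example :: Bool
example = eval (Eq (Add (I 1) (I 2)) (Mul (I 1) (I 3)))
```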

When we perform a pattern match and the case is I n, we return the Int n. But we actually needed to return an a, per the type signature! No problem: we have a coercion a ~ Int handy in our scope, and can use it to rewrite, which makes the example type check. The other cases work similarly.

Misc facts:

You can build coercions in languages with higher-kinded polymorphism. a ~ b can be represented using a Leibniz equality ∀f : ⋆ → ⋆. f a → f b.
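A sketch of that encoding in Haskell (the names Equal, refl, and coerceWith are hypothetical, not a library API):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Leibniz equality: a proof that a ~ b is a function that turns
-- any f a into an f b (equal types are interchangeable in every
-- context f).
newtype Equal a b = Equal (forall f. f a -> f b)

-- Reflexivity: every type is equal to itself.
refl :: Equal a a
refl = Equal id

-- Instantiating f at the identity functor lets the proof coerce
-- a plain value.
newtype Identity a = Identity { runIdentity :: a }

coerceWith :: Equal a b -> a -> b
coerceWith (Equal f) = runIdentity . f . Identity
```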

Adding coercions to the type system makes type inference undecidable. This is why you generally have to supply a type signature on a function that uses GADTs.

Proving that two types are equivalent modulo a context is undecidable in the general case. Consider equalities between type functions, or, equivalently, between polymorphic types (a proof that ∀x. e = ∀x. e' isn’t much different from a proof that λx. e = λx. e').

I can’t say I fully grok GADTs, but conceptually they seem similar. This page does a good job explaining the differences between ADTs and GADTs in terms of a concrete example.

The way I understand it, all the “cases” in a regular ADT have the same type. A GADT lets you define the “cases” such that different cases might appear differently to the type system, which allows you to make stronger guarantees (since you can, for example, abstract over subsets of the cases).

For example, in @cmm’s expression example, you can tell the type system that a particular Expr “case” can only be built out of other Expr cases parameterized upon Int, rather than accepting any Expr case (and hoping for the best).

To answer the OP’s question about “not”-types, a Not T isn’t a T but rather a proof that Ts do not exist. Under the Curry–Howard isomorphism, we have:

(A, B) ~ A and B, because the tuple is a proof that an A exists and a B exists.

Union A B ~ A or B, because an instance of the union proves that either an A or a B exists.

A -> B ~ A implies B, because a function that takes A and returns B means that a B exists if an A exists.
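Reading Union A B as Haskell’s Either, each of these “proofs” is just an ordinary value (all names here are hypothetical):

```haskell
-- A pair proves both components exist (conjunction).
both :: (Int, Bool)
both = (1, True)

-- An Either proves at least one side exists (disjunction).
oneOf :: Either Int Bool
oneOf = Left 1

-- A function proves the result exists whenever the argument
-- does (implication).
implies :: Int -> Bool
implies n = n > 0
```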

A Not A is represented by A -> #f where #f is some empty type (i.e. some type that one knows is uninhabited).
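In Haskell the standard empty type is Void from Data.Void; a minimal sketch of the encoding (Not, contradiction, and notVoid are illustrative names):

```haskell
-- absurd :: Void -> a is "from falsehood, anything follows".
import Data.Void (Void, absurd)

-- Not a, encoded as a function into the empty type.
type Not a = a -> Void

-- Holding both an a and a Not a yields anything at all.
contradiction :: a -> Not a -> b
contradiction x notX = absurd (notX x)

-- Void itself is refutable: Not Void is trivially inhabited.
notVoid :: Not Void
notVoid = id
```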

Since this logic is intuitionistic, you don’t have the guarantees that classical logic (in which a thing is true or false) provides. In particular:

there’s no double-negation elimination, because ((A -> #f) -> #f) -> A is uninhabited. There’s no function with that type signature, i.e. no way to generate an A just because you have an (A -> #f) -> #f (a not-not-A).

there’s no law of the excluded middle. In other words, Union A (A -> #f) is not inhabited for all A.

This is because, in intuitionistic logic, things are judged according to provability rather than some notion of platonic truth, so ~(~A) actually means “ ‘A is not provable’ is not provable”, which is not a proof of A.
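The direction that does work intuitionistically is double-negation introduction; a sketch in Haskell (names are illustrative):

```haskell
import Data.Void (Void)

type Not a = a -> Void

-- Double-negation *introduction* is constructive: given an a,
-- refute any claimed refutation of a by applying it to x.
doubleNegIntro :: a -> Not (Not a)
doubleNegIntro x notX = notX x

-- The converse, Not (Not a) -> a, admits no total definition.
```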

While it is true that in a language like Coq, ∀a. a is uninhabited, in most programming languages there is an extremely useful member of that type: the construct that throws an exception (such as error in Haskell).

Also, I’m not sure if I would categorize classical logic as “some notion of platonic truth” given that it has a natural computational interpretation in terms of call/cc.

You’re right that when I call a type “uninhabited”, I’m ignoring error cases and infinite loops (⊥). In Haskell, types that are intuitionistically uninhabited can be inhabited.

For example, (a -> a) -> a isn’t even classically inhabited, but it’s the type signature of fix, which is permissible if you’re OK with infinite loops. If a is an empty type, then the only total a -> a is id, and fix id is ⊥.
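For concreteness, fix from Data.Function has exactly that type, and ordinary recursion can be expressed with it:

```haskell
import Data.Function (fix)

-- fix f is the least fixed point: fix f = f (fix f).
-- Logically (a -> a) -> a is absurd; computationally it is
-- just recursion.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
```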

As for callCC, the type signature is MonadCont m => ((a -> m b) -> m a) -> m a. This looks like Peirce’s law (which is equivalent to excluded middle), but it’s not quite the same. You can have an m a without an a, because (a -> r) -> r doesn’t require an a in order to be inhabited. For example, callCC (\_ -> 5) is a valid m a for any a, even an uninhabited one, because it ignores the a.
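To see why (a -> r) -> r needn’t contain an a, here is a tiny sketch (ignoresK and five are hypothetical names); it even instantiates at the empty type:

```haskell
import Data.Void (Void, absurd)

-- (a -> r) -> r can be inhabited without any a in sight,
-- simply by never calling the continuation:
ignoresK :: (a -> Int) -> Int
ignoresK _ = 5

-- This works even at a = Void, the empty type
-- (absurd :: Void -> a supplies the unused continuation).
five :: Int
five = ignoresK (absurd :: Void -> Int)
```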

I’m not referring to the monadic encoding of call/cc. I’m talking about an approach like the one taken in the POPL ’90 paper A Formulae-as-Types Notion of Control.


I think @ChadSki could write a book with all the questions I’ve asked him over the years…

The URL appears to be incorrect on the header link. It goes to the blog rather than the article, which is at

`http://blog.jenkster.com/2016/06/functional-mumbo-jumbo-adts.html`

instead.

Odd, it pulled the title correctly. I can’t change the URL; hopefully a moderator can jump in for us!

Fixed.

Ah, I see the problem now. My blog wasn’t generating the correct `canonical` link in the header.

Is an ADT the same as a GADT?

For more information, System F with Type Equality Coercions may be useful.

GADTs are Generalised ADTs, so they’re a more flexible version of the same idea.

I’d like to do a follow-up on GADTs, but I want to find a good concrete example first. (One that isn’t a parser!)

Heh… Every example I can think of is a parser. :)


Huh. I learned in college (15 years ago) that ADT stood for Abstract Data Type!

It still does. It’s an overloaded acronym. Wikipedia lists five different computing-related articles for it. https://en.wikipedia.org/wiki/ADT