There’s something interesting here I haven’t quite worked through: increasing correctness via types goes the opposite way from function subtyping! Given

`A :: Int -> Nat`
`B :: Nat -> Int`

Matt shows that B is going to be safer than A. The interesting thing is that A is a subtype of B! Function subtyping is contravariant in the domain and covariant in the range: since `Nat` is a subtype of `Int`, `Int -> Nat` accepts more and promises more. Anywhere a function of type B is expected, a function of type A can be supplied.

IIRC Haskell doesn’t have function subtyping, so this is more academic, but it’s interesting from a PLT perspective.
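Since Haskell has no subtyping, the A-is-a-subtype-of-B claim can still be simulated with an explicit embedding. A sketch (all names here are made up for illustration; `Natural` and `Integer` stand in for Nat and Int):

```haskell
import Numeric.Natural (Natural)

-- Model the subtype relation Nat <: Int as an explicit embedding.
natToInt :: Natural -> Integer
natToInt = toInteger

-- a has the subtype's signature: Int -> Nat.
a :: Integer -> Natural
a = fromInteger . abs

-- Pre- and post-composing the embedding turns any Int -> Nat into a
-- Nat -> Int: contravariant in the domain, covariant in the range.
widen :: (Integer -> Natural) -> (Natural -> Integer)
widen f = natToInt . f . natToInt

-- b has the supertype's signature: Nat -> Int.
b :: Natural -> Integer
b = widen a
```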

[Caveat: hobbyist, armchair Haskell programmer.] There’s some kind of polymorphism subtyping, I’m not sure what it’s called: given `toInt :: X -> Int` you can write `fromAnything = toInt :: (forall a. a) -> Int`.

(The converse is a little harder, because `Int -> (forall a. a)` is the same as `forall a. Int -> a`, but if you define `data Anything = forall a. Anything a` then you can write `fromInt :: Int -> X` and `toAnything = Anything . fromInt :: Int -> Anything`.)

However, good luck invoking the first and using the result of the second… I’m sure there’s a more practical example to be found.
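For the record, here is a compilable version of the sketch above, using `String` as a stand-in for X, and a `toInt` that ignores its argument (which is the only way `fromAnything` is ever callable, since the sole value of type `forall a. a` is bottom):

```haskell
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ExistentialQuantification #-}

-- Stand-in for X -> Int; must be lazy in its argument to survive
-- being fed bottom through fromAnything.
toInt :: String -> Int
toInt _ = 42

-- (forall a. a) instantiates to String, so this typechecks:
fromAnything :: (forall a. a) -> Int
fromAnything x = toInt x

-- Existential wrapper: any value at all can go in.
data Anything = forall a. Anything a

fromInt :: Int -> String
fromInt = show

toAnything :: Int -> Anything
toAnything = Anything . fromInt
```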

Edit: Here’s a better exploration on Stack Overflow, with a link to this paper in the comments, and another comment on HN.

“forwards: the caller of the function is responsible for handling the possible error output
backwards: the caller of the function is required to provide correct inputs”

Both of these are required in high-confidence software. There might be bugs in the code you call.

Completely true, but that wouldn’t really help when writing (Haskell-style) type signatures. Let’s say we’re writing a function with a type like `f :: A -> B`, and we decide “there might be bugs in our code, callers should handle that possibility”, so we change the type to `f :: A -> Maybe B`.

This is a problem, for the reasons the author points out: specifically, because we’re allowing ourselves to write `f x = Nothing`, or (if we’re able to branch on values of type `A`) a combinatorial explosion of other implementations which return `Nothing` for some values and not others.

If we stick to the smaller type (`f :: A -> B`) we prevent ourselves from writing any of those possibilities. That’s a win, regardless of whether there are still bugs in our implementation or in things we call (there usually will be!).
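To make the contrast concrete, here is a hypothetical sketch using `NonEmpty` from base: shrink the domain rather than enlarge the range.

```haskell
import Data.List.NonEmpty (NonEmpty (..))

-- Enlarged range: the implementation is free to return Nothing
-- whenever it likes, and every caller must handle that case.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Smaller domain instead: a NonEmpty list has no empty case, so
-- neither a bogus `const Nothing` implementation nor the caller's
-- Nothing branch can exist.
headNE :: NonEmpty a -> a
headNE (x :| _) = x
```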

Failures are still possible; in Haskell these might be asynchronous exceptions (like out-of-memory), `unsafePerformIO`, or of course `undefined`. Haskell doesn’t track/enforce these (except for Safe Haskell, which forbids `unsafePerformIO`), so callers must still handle them, as you say. I’m not sure what the best way to track/enforce this would be, but “enlarging” types doesn’t seem like the right way IMHO; so (for now) that seems to be orthogonal to the author’s arguments.
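For instance, here is a sketch (hypothetical names) of the kind of failure a signature never mentions, and how a caller can still catch it at the boundary:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Well-typed, yet still bottom: the signature promises an Int,
-- but nothing stops the body from being undefined.
f :: Int -> Int
f _ = undefined

-- Catch at the boundary, since the type gave no warning.
runSafely :: Int -> IO (Either SomeException Int)
runSafely x = try (evaluate (f x))
```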

Learning Ada, this all seems obvious. Ada has nice support for subtyping and other things that allow for restricting the domain of subprograms.

I’m more surprised that this is, apparently, uncommon in Haskell and that Haskell would be prey to bugs such as large integers becoming negative, or am I misunderstanding?

The author’s not writing about integer wrapping, where `2 ^ (some power of 2) + 1` evaluates to a negative number; he’s diagnosing places where parameter types or return types allow negative integers when they shouldn’t (an `Integer` type instead of a `Natural` type).
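The distinction in Haskell terms, as a sketch (`replicateN` is a made-up example):

```haskell
import Numeric.Natural (Natural)

-- Fixed-width wrapping: this is NOT the article's topic.
wrapped :: Int
wrapped = maxBound + 1   -- a large Int wraps around to a negative number

-- The article's topic: pick a type whose values can't be negative.
-- Natural makes a "negative count" unrepresentable in the signature.
replicateN :: Natural -> a -> [a]
replicateN 0 _ = []
replicateN n x = x : replicateN (n - 1) x
```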

I’m wondering if this is the standard way to do dependent-type-style programming in Haskell; stuff like `nonZero` and `nonEmpty` is the bread and butter of this pattern. Also, he bashes the Option type, only to then use it himself instead of providing the proof.
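The `nonZero` pattern mentioned above usually looks something like this smart-constructor sketch (names are illustrative, and unlike Coq the "proof" is only a one-time runtime check):

```haskell
-- The newtype carries the guarantee; the unsafe constructor would
-- normally be hidden behind a module boundary.
newtype NonZero = UnsafeNonZero Integer
  deriving (Eq, Show)

-- The predicate is checked exactly once, at the boundary.
nonZero :: Integer -> Maybe NonZero
nonZero 0 = Nothing
nonZero n = Just (UnsafeNonZero n)

-- Downstream code can no longer be handed a zero, so no Maybe
-- is needed inside it.
safeDiv :: Integer -> NonZero -> Integer
safeDiv x (UnsafeNonZero d) = x `div` d
```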

In Coq I would use the `exist` constructor to build the function’s return value, so that you can encode the return value, the return type, the predicate the value satisfies, and a proof that the return value is a witness.

That all makes sense. Appreciate the reply.

`exist` works; it reduces the size of the range to only those values for which a compatible witness exists.

This kind of stuff can be done to some degree with Liquid Haskell, but it’s far from mainstream.

You can also do some amount of Coq-like dependently-typed programming in Haskell, if you’re willing to pay the tax of a much less expressive language.
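For example, a minimal Liquid Haskell sketch; the `{-@ ... @-}` annotations are checked by the external `liquid` tool, while GHC sees them as plain comments (`safeSub` is a made-up example):

```haskell
-- Refinement type alias: the non-negative Ints.
{-@ type Nat = {v:Int | 0 <= v} @-}

-- Subtraction that provably cannot go negative: the second argument
-- is refined to be at most the first.
{-@ safeSub :: x:Nat -> {y:Int | 0 <= y && y <= x} -> Nat @-}
safeSub :: Int -> Int -> Int
safeSub x y = x - y
```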