Okay, I’ll bite, despite the click-baity title (A scam? Seriously? Who’s attempting to deceive anyone? Is there money to be gained fraudulently here?)

The value in Curry-Howard, like so many things, is not at the extremes - like the example programs here that only prove vague and uninteresting things, or the complex proofs that don’t give you much computational behaviour - but somewhere in the middle. That is, programs that express some lightweight invariant that needs to be maintained, so there is value in expressing it in the type: that way, you have a proof that the invariant is maintained. Here’s an example from the Idris 2 implementation:

eval : {free, vars : _} ->
       Env Term free -> LocalEnv free vars ->
       Term (vars ++ free) -> Stack free -> Core (NF free)

This type says that we have an environment of free variables free, some additional bound local variables vars, and the process of evaluating an expression gives us a normal form which doesn’t refer to the bound variables any more, because they’ve been substituted in. And I know that’s true, because the type says so, and the type checker agrees. Variable scoping is easy to get wrong, so I’ve decided it’s important to prove it’s right throughout.

Here’s another example, more contrived but I think easier to understand:

uncompress : RLE xs -> Singleton xs

That’s the type of a program which takes a compressed, run-length encoded representation of a list xs, and gives back the uncompressed representation. It’s both a program which does the uncompression, and a proof that the uncompressed list is the same as the original. Or, maybe you could read it as: “For all run-length encoded lists, there is a way to reconstruct the original list, and this is it.”
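For readers without Idris at hand, the round trip can at least be sketched in plain Python (compress and uncompress are hypothetical helper names here, not the Idris functions). The crucial difference is that the assertion below checks the invariant for one input at run time, whereas the Idris type proves it for all inputs:

```python
from itertools import groupby

def compress(xs):
    # Run-length encode: ["a", "a", "b"] -> [(2, "a"), (1, "b")]
    return [(len(list(g)), k) for k, g in groupby(xs)]

def uncompress(rle):
    # Invert compress by repeating each element `count` times
    return [k for count, k in rle for _ in range(count)]

xs = ["a", "a", "b", "b", "b", "c"]
assert uncompress(compress(xs)) == xs  # checked per input, not proved
```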

Is Curry-Howard overrated? Probably. I’d even have read the article if that’s what the title said, and I’d probably have largely agreed with it. But even so, it’s a useful way to understand what’s going on in languages with more expressive type systems.

The internet is already full of articles with clickbaity titles that take an extreme and infer some general negative conclusion from it. Let’s not have more of them.

The value in Curry-Howard, like so many things, is not at the extremes

Yup. It allows us to reason about our code in the same way we can reason about mathematical proofs. That’s about it. That’s both extremely powerful and useless, depending on your environment and what you’re trying to do.

I think a lot of this gets back to what another commenter said: tooling and language choice. The program is not the proof, but the running of the program can work just like a proof if you set it up correctly. That’s an extremely cool observation. Not such a good essay.

Yup. It allows us to reason about our code in the same way we can reason about mathematical proofs. That’s about it. That’s both extremely powerful and useless, depending on your environment and what you’re trying to do.

But why do you need C-H to do this reasoning? C-H is a much more narrow result, specifically proving a duality between proof and program. Verification of invariants does not require C-H.

Author here. I should have titled it “overhyped” or “overrated” indeed (in the conclusion I do acknowledge the clickbaitiness, but oh well ^^). I do love type-driven programming, even from my lowly OCaml vantage point, and Idris is pretty cool. The eval example is impressive.

Excerpt which hopefully isn’t too far off:

Idris could possibly have some interesting proofs that are also real programs… if it were designed to be a proof assistant and not a beefed-up programming language.

What is the program corresponding to a proof of “there exists an infinite number of primes”? If I run this program, what input does it take, and what do I get as an output?

I know very little in this space, but this is such a Coq program, one that includes the classic “[product of primes < n] + 1” proof. It seems pretty clear to me that, walking through this proof with a machine, you could slightly modify the execution to provide you with an example.

A program that would successfully generate a list of infinite primes definitely seems to me like a proof of the infinity of primes, in a constructivist sense.

How would you know for sure it successfully generated a list of infinite primes without proving it?

By generating a list that will always provide a next bigger prime, without a stopping condition. It does not need to include all primes. Not too hard: https://en.wikipedia.org/wiki/Euclid%27s_theorem#Proof_by_construction

So what I thought you were saying was that a naive program that produced an infinite list of prime numbers would itself be a proof of the infinitude of the primes. Of course, something like a simple program that takes the next number n and checks that it has no factors between 1 and itself would not be good enough. If there were a finite number of primes, then such a program would just run forever without halting or producing any more primes once it had listed them all. The existence of such a program would not be a proof of the infinitude of the primes.

Your wikipedia link is interesting, but it relies entirely on the fact that two successive numbers have no factors in common (it also doesn’t produce the next bigger prime, but rather just a prime that is bigger than previously included primes). What would be the program to prove that?

And at this stage, we’re finding exactly what the article argues: except in very specific circumstances, the interesting programs relating to prime numbers do not correspond to interesting proofs about prime numbers, and the interesting proofs about prime numbers do not correspond to interesting programs about them.

it also doesn’t produce the next bigger prime, but rather just a prime that is bigger than previously included primes

Proving infinite primes is not the same as generating a list of all primes, it’s the same as generating a list of some primes.

I think primes are a pretty bad example in general for this. Things like red/black trees or quicksort work way better. An FP implementation of quicksort is the proof of it working! There’s a real beauty in that.

That’s simply a fact. gcd(n,n+m) is a divisor of m. That can only be 1. You don’t need a separate proof program for that. Just represent the numbers in your prime sequence as prime factors. That’s how you’re generating them in the first place. 2, 2+1, (2)x(2+1)+1, (2)x(2+1)x((2)x(2+1)+1)+1 etc.
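That construction can be sketched directly (a minimal Python sketch; note the terms themselves need not all be prime, but each one is coprime to every predecessor, so each contributes at least one prime factor not seen before):

```python
from math import gcd

def coprime_sequence(n):
    # First n terms of 2, 3, 7, 43, ...: each new term is
    # (product of all previous terms) + 1, hence coprime to each of them.
    terms = [2]
    while len(terms) < n:
        product = 1
        for t in terms:
            product *= t
        terms.append(product + 1)
    return terms

seq = coprime_sequence(5)  # [2, 3, 7, 43, 1807]
assert all(gcd(a, b) == 1 for i, a in enumerate(seq) for b in seq[i + 1:])
```

(1807 = 13 × 139 is not prime, but both of its prime factors are new, which is all the infinitude argument needs.)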

Well at one point, just like proofs rely on an axiomatic base, you can look at your program’s text and say “while True will go on forever”.

You might think “aha! I have to write a proof for the program that is itself meant to be a proof”, but I think that if you’re relying on stuff like primitive recursion, it’s more a systematic application of rules than “extra work”.

And … I mean, any machine will run out of space eventually when you hit the first prime not storable in RAM or whatever.

My concern was more that a simple program, e.g. something based on checking that no smaller numbers were factors, might appear to be a program that would generate an infinite sequence of prime numbers, but actually, if there were a finite number of primes, it would just stop providing numbers without halting.

You clearly can’t explicitly test an infinity of outputs, so you need to be sure that your program is actually proving what you think it’s proving.

And as the article points out, none of this is really the correspondence that the CHC is talking about, where the program is a proof of a formula that the type of the program encodes.

I wonder whether choice of programming language affects perception here. When I use jq or awk, for example, it feels very obvious that I’m constructing a filter from inputs to outputs. As requested, here is a non-trivial jq program; Curry-Howard says that it takes a presentation of a category and constructs the Knuth-Bendix completion.

If we have a countably infinite number of objects, then we can index them with the natural numbers. The Curry-Howard way to interpret “there is an infinite number of objects with property X” is with a program that maps natural numbers to objects and any extra witnesses required for the property. If there is an infinite number of primes, then Curry-Howard gives us a program which takes a natural number and returns the indexed member of the sequence [2, 3, 5, 7, …] along with a witness of primality.
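As a hedged sketch of that reading (hypothetical function name, and trial division standing in for a real primality certificate), such a program might look like:

```python
def nth_prime_with_witness(n):
    # Map a natural number n to the n-th prime (0-indexed), together with
    # a crude "witness": the trial divisors shown not to divide it.
    count, candidate = -1, 1
    witness = []
    while count < n:
        candidate += 1
        divisors = list(range(2, int(candidate ** 0.5) + 1))
        if all(candidate % d != 0 for d in divisors):
            count += 1
            witness = divisors
    return candidate, witness

assert nth_prime_with_witness(3) == (7, [2])  # sequence 2, 3, 5, 7, ...
```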

Let’s walk through a garden of constructive proofs in elementary logic. We’ll consider what Curry-Howard says about a few examples. And yes, like the post says, many of these are boring.

The praeclarum theorema (Leibniz’s statement that ((A → B) ∧ (C → D)) → ((A ∧ C) → (B ∧ D))) is the lambda term λ (f, g). λ (x, y). (f x, g y). Curry-Howard says that because we only used lambda calculus with products, this statement is true in any CCC, and hints that maybe we only need products. Curry-Howard also lets me read that lambda term directly from a proof of the theorem.
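The computational half of that term transcribes directly into (untyped) Python: from a pair of functions, build a function on pairs.

```python
def praeclarum(f, g):
    # λ (f, g). λ (x, y). (f x, g y): given A -> B and C -> D,
    # produce a function from (A, C) pairs to (B, D) pairs.
    return lambda x, y: (f(x), g(y))

inc_and_double = praeclarum(lambda a: a + 1, lambda c: c * 2)
assert inc_and_double(1, 3) == (2, 6)
```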

The fact that equality of natural numbers is decidable (proof) can be used to produce programs which take two natural numbers and return a disjoint union whose left side indicates that the inputs are equal and whose right side indicates that they are unequal.

The fact that the axiom of choice implies excluded middle, Diaconescu’s theorem, gives a program (proof). The axiom of choice is passed in as a polymorphic choice function, and the program takes an input type, looks up the choice function, and builds a program which decides equality on that type. The choice functions themselves take a type and a value of that type, which is a constructive proof that the type is inhabited, and return another value of the same type along with a witness which shows that the two values give an ordered pair. (This is confusing to everybody, and Metamath has an explanation of this encoding.)

But beyond all of that, Curry-Howard lets us consider the final entry on this table of correspondences, which I’ve discussed before here: We can generate programming languages for interesting mathematical objects of study. Choose a synthetic domain of objects and receive a lambda calculus.

I’m unclear about the relevance of any of this. The OP talks about programs and proofs. How does discussing the axiom of choice and the decidability of equality of natural numbers matter?

The proofs I chose are strongly related to computability via the BHK interpretation. But they can also just be seen as four arbitrary logical statements, one of which happens to be the one used in the original article.

I am not sure how to cross the bridge any further. Given any constructive proof, there is a program whose closure is the proof’s context, whose inputs are the proof’s hypotheses, whose subroutines are the proof’s subproofs, and whose outputs are the proof’s conclusions. Consider this (untested) Python code:

def isEqual(x, y):
    if x == 0 and y == 0:
        return True
    elif x > 0 and y > 0:
        return isEqual(x - 1, y - 1)
    else:
        return False

Apply a solid dab of native type theory and we can see that this is the same control flow that we would have in a logical proof of the equality of two natural numbers. Either they’re both zero and they’re equal, or they’re both successors and they’re equal if their predecessors are equal, or one and not the other are zero and they’re not equal. (Note that Python’s native type theory requires us to presume that x and y are natural numbers and not anything else.)

I’m a bit confused by the jq program. I’m no categorician so 1/3rd of CH escapes me anyway, it seems; but what would be the type of this jq program? What is it a proof of?

If you’re saying that it relies on CH to build a completion of a category, that’s cool, but it still doesn’t refute my point, I think. I have an OCaml theorem prover, in the dozens of klocs, which implements a more complicated version of KB completion, and its type is still unit (or set expr -> set expr if you look at the main loop).

jq programs are generally endomorphisms on JSON data. Recall that programs don’t have the one single type, but a family of types which are related by their own little Boolean algebra. It is up to our choice of logical formalism to decide which types go where. I could say that my program is a map from categories to graphs, for example, and it would not be any more or less true; it is up to me to explain how the incoming JSON data can be arranged to present a category, not for the data itself to innately be a category in some way.

My point is that CH is not something upon which we rely, but something which happens to arise within these mathematical structures, even if we didn’t want it. And as for practical applications, I think that using CH to produce small domain-specific programming languages is quite practical.

Recall that programs don’t have the one single type

That’s debatable, I guess :-). OCaml certainly has principal types (for the core language anyway) and principal type inference outside extensions like GADTs. When you add a layer of boolean interpretation (which is definitely useful! 100% agree!) then you kind of segue from “writing a program” to “writing a program and verifying properties about it”, which as often as not is done with different mechanisms such as Hoare triples, with classical logic, etc. Idris is probably the closest, in the real world, to the CH style of writing programs and verifying actual properties about them directly by well-typedness; but it’s not the only approach.

Isn’t the point of Curry-Howard to provide a mapping of interesting logics to potentially useful type systems, and of interesting type systems to potentially useful logics? No one has ever tried to sell me on the idea that the programs themselves should somehow be useful as both at the same time.

There’s some work being done on creating “intrinsically typed” programs; that is, programs that typecheck only if their output is correct. I recently saw this POPL paper, titled Intrinsically Typed Compilation with Nameless Labels. The authors used the dependently typed language Agda to implement a compiler that is guaranteed to produce assembly code (for a stack-based abstract machine) that is type-correct. They did so by encoding what they called a “proof relevant separation logic”, which was made possible by the power of dependent types. Interestingly, they specifically did not use Prop, but Set: the proof that a piece of code does not use any external labels (or that it does use some number of external labels) was also used to compute the actual jump location. Here is a value that is both interesting as, well, a value (because it determines the location of the label to which a jump needs to be made) and as a proof (for verifying that the compiler is not generating incorrect code).

This was just a paper I picked at random when I was checking out this year’s POPL. If a randomly chosen paper provides an interesting use of Curry-Howard, surely we can extrapolate that there are plenty of other interesting uses of the correspondence?

There are other, less avant garde instances where proofs are useful for programs. For instance, when I was writing a typechecker, I was able to write a proof that my equality comparison for types was indeed correct, and thereby convince the type checker that two values are of the same type. My comparison function contained both the information that things are equal or not equal, and served as a proof to convince the typechecker. When I wrote a termination proof for an advent of code problem, I could use this proof to compute the final outcome of the program: the proof is constructive, and the witness is useful.

So I’m not convinced. I think Curry-Howard has a use, and is certainly not a scam. And certainly it has educational use too, but that is a whole other can of worms. And there are the other comments on this post.

I have mixed feelings about this post. I strongly agree that Curry-Howard is not overpowered magic. On the other hand, features like type holes are bringing Haskell closer to proof assistants. The tactics plugin for the Haskell language server adds some magical “produce code from type signature” abilities. I’ve written a minimal example that shows how to click your way from a type signature to working code for a small function.

In the case where you have imported many functions designed to work together, getting the computer to string together those pieces to match the input and output is really useful and saves me time in my daily tasks.

I’ve been learning Dafny and LiquidHaskell recently and a very little bit of Agda. Dafny and LiquidHaskell make it very clear that programs are indeed proofs, and LiquidHaskell makes it very clear that types are indeed theorems. That said, the examples in this post are, as the author says, mostly proofs of uninteresting theorems.

I’m coming at this from the perspective of “I have to write a proof, but I only know how to program” and slowly discovering the bits of the Curry-Howard Isomorphism that are useful to me toward my goal. The pieces fit, but the tools by which we can experience it are rough at best. I think the author just needs to spend more time with it rather than ranting into the ether.

[Edit: Functional Pearl: Theorem Proving for All is a very short read showing how to express pre- and post-conditions on values as LiquidHaskell types. After that, it goes into how to express arbitrary properties by writing a literal proof in Haskell. What’s going on behind the scenes, which is a little bit hard to see, is that there’s a context of facts built up inside every function, and the facts are handed off to Z3, which turns its propositional-logic gears and spits out “yes” or “no”. In the proof examples, what you’re doing is manually introducing the facts necessary to prove the theorem.]

I think there’s a confusion here: Curry-Howard is a statement about running programs as proofs, not about programs as such as proofs. Of course we can’t have a programming language that expresses first-order logic at the syntactic level; the compilation of the program would be undecidable.

Is it? My understanding is that it’s “the program is a proof of its type”, not a “running this program gives you a witness” kind of thing (like reflection in Coq could give you).

There are languages designed for writing verified code, like Dafny or Why3, and they do embed first-order logic; it’s simply in separate code annotations that express Hoare-logic invariants. Compilation remains decidable, but verification/proving is obviously not.

This blog post is so weird. By definition, a proof encoded in a dependently typed language is a program.

Curry-Howard allows us to map between logic systems and type systems. This is extremely powerful and allows you to do things like derive linear types from linear logic.

Here’s why Curry-Howard is important to programmers even with concrete types. When we express the properties we want at the type level, we have a mathematical guarantee that things can’t go wrong. That’s the whole point for us–that things can’t go wrong. For example, consider this code (OCaml syntax):

type person
type name
val name : string -> name option
val person : id:int -> name -> person

This code guarantees that a person value can be constructed with any int value as an ID, but only with a valid name value, which can be constructed only by following the rules encoded in the name function. The types enforce that these logical rules must be followed. If you have a name value, you have a proof that it was constructed correctly. And if you have a person value, you also have a proof that it was constructed correctly.
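The same discipline can be sketched in Python (a loose analogue only: Python can’t actually hide the raw constructor the way an abstract type does, so the guarantee is a convention rather than a type-checked proof; Name, make_name, and Person are hypothetical names, not from the OCaml snippet):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Name:
    value: str

def make_name(s: str) -> Optional[Name]:
    # The intended sole entry point: reject invalid input with None,
    # so (by convention) any Name in circulation passed this check.
    stripped = s.strip()
    return Name(stripped) if stripped else None

@dataclass(frozen=True)
class Person:
    id: int
    name: Name  # holding a Name is evidence it passed validation

assert make_name("   ") is None
assert make_name("Ada").value == "Ada"
```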

Simple, everyday code that gives you mathematical proofs that it will work correctly. That’s why C-H is important.

Why do you need C-H for this? You can constrain input to sets just fine and make assertions over the domain and range of pure functions well enough without invoking C-H. TLA+ after all models algorithms using set theoretic constructions. So what is C-H giving you here?

It’s giving me a simple model for thinking about my code, using a system (static types) that is mainstream and widely available. If I set up my types correctly, then the programs I construct using them can be provably correct. This is cool because this tool is accessible to working developers today.

Sure, and I use a ton of these in OCaml! But this is clearly on the “program” side of things. These are not proofs of anything a mathematician would call a theorem. They’re just statements about the program, which help in understanding it and allow the compiler to remove a ton of runtime checks; but nothing more. Trying to coerce OCaml into proving non-trivial stuff with its types gives you Michelson, whose design is heavily informed by what you can and can’t do with GADTs. If you want better proofs for OCaml, use Why3 or the upcoming gospel from the same team.

The fact that you can have a type system at all is a consequence of C-H. Of course standard programming languages’ type systems make lousy theorem provers, but so what? Similarly, the theorems that arise from “regular” programs are not mathematically interesting, but again, so what? How does this make C-H a scam?

In the end C-H is more important to mathematicians than programmers, as it allows for things like Coq to exist. I find the idea of C-H not being important very bizarre. Before C-H, the idea that mathematics could be formalized at all was a hypothesis. C-H shows not only that it is true, but also shows you how to do it. It’s one of the most profound mathematical discoveries of the 20th century.

CH is not necessary for the formalization of mathematics. Look up the HOL family which is not based on terms-as-proofs at all, or Mizar (which predates a lot of the type theory-based proof assistants, afaik).

I changed the title because it was exaggerated and I regretted how clickbaity it was, but still, your answer is the kind that prompted me to write this in the first place. CH is not the one and only way, and yet some people seem to think it is required for any form of theorem proving.

Okay, I’ll bite, despite the click-baity title (A scam? Seriously? Who’s attempting to deceive anyone? Is there money to be gained fraudulently here?)

The value in Curry-Howard, like so many things, is not at the extremes - like the example programs here that only prove vague and uninteresting things, or the complex proofs that don’t give you much computational behaviour - but somewhere in the middle. That is, programs that express some lightweight invariant that needs to be maintained, so there is value in expressing it in the type meaning that you have a proof that it is maintained. Here’s an example from the Idris 2 implementation:

This type says that we have an environment of free variables

`free`

, some additional bound local variables`vars`

, and the process of evaluating an expression gives us a normal form which doesn’t refer to the bound variables any more, because they’ve been substituted in. And I know that’s true, because the type says so, and the type checker agrees. Variable scoping is easy to get wrong, so I’ve decided it’s important to prove it’s right throughout.Here’s another example, more contrived but I think easier to understand:

That’s the type of a program which takes a compressed, run-length encoded representation of a list

`xs`

, and gives back the uncompressed representation. It’s both a program which does the uncompression, and a proof that the uncompressed list is the same as the original. Or, maybe you could read it as: “For all run-length encoded lists, there is a way to reconstruct the original list, and this is it.”Is Curry-Howard overrated? Probably. I’d even have read the article if that’s what the title said, and I’d probably have largely agreed with it. But even so, it’s a useful way to understand what’s going on in languages with more expressive type systems.

The internet is already full of articles with clickbaity titles that take an extreme and infer some general negative conclusion from it. Let’s not have more of them.

Yup. It allows us to reason about our code in the same way we can reason about mathematical proofs. That’s about it. That’s both extremely powerful and useless, depending on your environment and what you’re trying to do.

I think a lot of this gets back to what another commenter said: tooling and language choice. The program is not the proof, but the running of the program can work just like a proof if you set it up correctly. That’s an extremely cool observation. Not such a good essay.

But why do you need C-H to do this reasoning? C-H is a much more narrow result, specifically proving a duality between proof and program. Verification of invariants does not require C-H.

Author here. I should have titled it “over hyped” or “overrated” indeed (in the conclusion I do acknowledge the click baitiness, but oh well ^^). I do love type driven programming, even from my lowly OCaml vantage point, and Idris is pretty cool. The

`eval`

example is impressive.Excerpt which hopefully isn’t too far off:

I know very little in this space, but this is such a Coq program, that includes the classic “[product of primes < n] + 1” proof. It seems pretty clear to me that walking through this proof with a machine, you could slightly modify the execution to provide you with an example.

A program that would successfully generate

a listof infinite primes definitely seems to me like a proof of the infinity of primes, in a constructivist sense.How would you know for sure it successfully generated a list of infinite primes without proving it?

By generating a list that will always provide a next bigger prime, without a stopping condition. It does not need to include all primes. Not too hard: https://en.wikipedia.org/wiki/Euclid%27s_theorem#Proof_by_construction

So what I thought you were saying was that a naive program that produced an infinite list of prime numbers would itself be a proof of the infinitude of the primes. Of course, something like a simple program that takes the next number n and check it has no factors between it and 1 would not be good enough. If there were a finite number of primes, then such a program would just run forever without halting or producing any more primes once it had listed them all. The existence of such a program would not be a proof of the infinitude of the primes.

Your wikipedia link is interesting, but it relies entirely on the fact that two successive numbers have no factors in common (it also doesn’t produce the next bigger prime, but rather just

aprime that is bigger than previously included primes). What would be the program to prove that?And at this stage, we’re finding exactly what the article argues - except in very specific circumstances the interesting programs relating to prime numbers, do not correspond to interesting proofs about prime numbers, and the interesting proofs about prime numbers do not correspond to interesting programs about them.

Proving infinite primes is not the same as generating a list of all primes, it’s the same as generating a list of some primes.

I think primes are a pretty bad example in general for this. Things like red/black trees or quicksort work way better. An FP implementation of quicksort is the proof of it working! There’s a real beauty in that

That’s simply a fact. gcd(n,n+m) is a divisor of m. That can only be 1. You don’t need a separate proof program for that. Just represent the numbers in your prime sequence as prime factors. That’s how you’re generating them in the first place. 2, 2+1, (2)x(2+1)+1, (2)x(2+1)x((2)x(2+1)+1)+1 etc.

Well at one point, just like proofs rely on an axiomatic base, you can look at your programs text and say “while True will go on forever”

You might think “aha! I have to write a proof for the program that is itself meant to be a proof” but I think that if you’re relying on stuff like primitive recursion it’s more of a systematic application of rules more than “extra work”

And … I mean, any machine will run out of space eventually when you hit the first prime not storable in RAM or whatever.

My concern was more that a simple program, e.g. something based on checking that no smaller numbers were factors, might

appearto be a program that would generate an infinite sequence of prime numbers, but actually, if there were a finite number of primes, it would just stop providing numbers without halting.You clearly can’t explicitly test an infinity of outputs, so you need to be sure that your program is actually proving what you think it’s proving.

And as the article points out, none of this is really the correspondance that the CHC is talking about, where the program is a proof of a formula that the type of the program encodes.

I wonder whether choice of programming language affects perception here. When I use jq or awk, for example, it feels very obvious that I’m constructing a filter from inputs to outputs. As requested, here is a non-trivial jq program; Curry-Howard says that it takes a presentation of a category and constructs the Knuth-Bendix completion.

If we have a countably infinite number of objects, then we can index them with the natural numbers. The Curry-Howard way to interpret “there is an infinite number of objects with property X” is with a program that maps natural numbers to objects and any extra witnesses required for the property. If there is an infinite number of primes, then Curry-Howard gives us a program which takes a natural number and returns the indexed member of the sequence [2, 3, 5, 7, …] along with a witness of primality.

Let’s walk through a garden of constructive proofs in elementary logic. We’ll consider what Curry-Howard says about a few examples. And yes, like the post says, many of these are boring.

`λ (f, g). λ (x, y). (f x, g y)`

. Curry-Howard says that because we only used lambda calculus with products, this statement is true in any CCC, and hints that maybe we only need products. Curry-Howard also lets me read that lambda term directly from a proof of the theorem.But beyond all of that, Curry-Howard lets us consider the final entry on this table of correspondences, which I’ve discussed before here: We can generate programming languages for interesting mathematical objects of study. Choose a synthetic domain of objects and receive a lambda calculus.

I’m unclear about the relevance of any of this. The OP talks about programs and proofs. How does discussing axiom of choice and the decidability of equality of natural numbers matter?

The proofs I chose are strongly related to computability via the BHK interpretation. But they can also just be seen as four arbitrary logical statements, one of which happens to be the one used in the original article.

I am not sure how to cross the bridge any further. Given any constructive proof, there is a program whose closure is the proof’s context, whose inputs are the proof’s hypotheses, whose subroutines are the proof’s subproofs, and whose outputs are the proof’s conclusions. Consider this (untested) Python code:

Apply a solid dab of native type theory and we can see that this is the same control flow that we would have in a logical proof of the equality of two natural numbers. Either they’re both zero and they’re equal, or they’re both successors and they’re equal if their predecessors are equal, or one and not the other are zero and they’re not equal. (Note that Python’s native type theory requires us to presume that

`x`

and`y`

are natural numbers and not anything else.)I’m a bit confused by the jq program. I’m no categorician so 1/3rd of CH escapes me anyway, it seems; but what would be the type of this jq program? What is it a proof of?

If you’re saying that it relies on CH to build a completion of a category, that’s cool, but it still doesn’t refuse my point I think. I have an OCaml theorem prover, in the dozens of klocs, which implements a more complicated version of KB completion and its type is still

`unit`

(or`set expr -> set expr`

if you look at the main loop).jq programs are generally endomorphisms on JSON data. Recall that programs don’t have

theone single type, but a family of types which are related by their own little Boolean algebra. It is up to our choice of logical formalism to decide which types go where. I could say that my program is a map from categories to graphs, for example, and it would not be any more or less true; it is up to me to explain how the incoming JSON data can be arranged to present a category, not for the data itself to innatelybea category in some way.My point is that CH is not something upon which rely, but something which happens to arise within these mathematical structures, even if we didn’t want it. And as for practical applications, I think that using CH to produce domain-specific small programming languages is quite practical.

That’s debatable, I guess :-). OCaml certainly has principal types (for the core language, anyway) and principal type inference outside of extensions like GADTs. When you add a layer of Boolean interpretation (which is definitely useful! 100% agree!) then you kind of segue from “writing a program” to “writing a program and verifying properties about it”, which as often as not is done with different mechanisms such as Hoare triples, with classical logic, etc. Idris is probably the closest, in the real world, to the CH style of writing programs and verifying actual properties about them directly by well-typedness; but it’s not the only approach.

Isn’t the point of Curry-Howard to provide a mapping of interesting logics to potentially useful type systems, and of interesting type systems to potentially useful logics? No one has ever tried to sell me on the idea that the programs themselves should somehow be useful as both at the same time.

There’s some work being done on creating “intrinsically typed” programs; that is, programs that typecheck only if their output is correct. I recently saw this POPL paper, titled Intrinsically Typed Compilation with Nameless Labels. The authors used the dependently typed language Agda to implement a compiler that is guaranteed to produce assembly code (for a stack-based abstract machine) that is type-correct. They did so by encoding what they called a “proof-relevant separation logic”, which was made possible by the power of dependent types. Interestingly, they specifically did not use `Prop`, but `Set`: the proof that a piece of code does not use any external labels (or that it does use some number of external labels) was also used to compute the actual jump location. Here is a value that is interesting both as, well, a value (because it determines the location of the label to which a jump needs to be made) and as a proof (for verifying that the compiler is not generating incorrect code).

This was just a paper I picked at random when I was checking out this year’s POPL. Surely, if the first paper I happened to read provides an interesting use of Curry-Howard, it’s fair to extrapolate that there are other interesting uses of the correspondence?
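The same phenomenon, a value that is simultaneously data and proof, exists at small scale in OCaml: a GADT equality witness can be matched on at runtime, and matching it also lets the compiler unify two types. A minimal sketch (the `ty` representations below are illustrative):

```ocaml
(* A value of type [(a, b) eq] exists only when [a] and [b] are the
   same type, so matching on [Refl] is a proof the compiler can use. *)
type (_, _) eq = Refl : ('a, 'a) eq

(* Illustrative runtime representations of two types. *)
type _ ty =
  | TInt : int ty
  | TBool : bool ty

(* Decidable equality on representations: on success the result is
   not merely [true] but a proof usable for type-safe casts. *)
let eq_ty : type a b. a ty -> b ty -> (a, b) eq option =
  fun t1 t2 ->
    match t1, t2 with
    | TInt, TInt -> Some Refl
    | TBool, TBool -> Some Refl
    | TInt, TBool -> None
    | TBool, TInt -> None

(* Using the proof: once the witness is in hand, the cast is free. *)
let cast : type a b. (a, b) eq -> a -> b =
  fun eq x -> match eq with Refl -> x
```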

There are other, less avant-garde instances where proofs are useful for programs. For instance, when I was writing a typechecker, I was able to write a proof that my equality comparison for types was indeed correct, and thereby convince the type checker that two values are of the same type. My comparison function contained both the information that things are equal or not equal, and served as a proof to convince the typechecker. When I wrote a termination proof for an Advent of Code problem, I could use this proof to compute the final outcome of the program: the proof is constructive, and the witness is useful.

So I’m not convinced. I think Curry-Howard has a use, and is certainly not a scam. And certainly it has educational use too, but that is a whole other can of worms.

And there are the other comments on this post.

I have mixed feelings about this post. I strongly agree that Curry-Howard is not overpowered magic. On the other hand, features like typed holes are bringing Haskell closer to proof assistants. The tactics plugin for the Haskell language server adds some magical “produce code from type signature” abilities. I’ve written a minimal example that shows how to click your way from a type signature to working code for a small function.

In the case where you have imported many functions designed to work together, getting the computer to string together those pieces to match the input and output is really useful and saves me time in my daily tasks.

I’ve been learning Dafny and LiquidHaskell recently and a very little bit of Agda. Dafny and LiquidHaskell make it very clear that programs are indeed proofs, and LiquidHaskell makes it very clear that types are indeed theorems. That said, the examples in this post are, as the author says, mostly proofs of uninteresting theorems.

I’m coming at this from the perspective of “I have to write a proof, but I only know how to program” and slowly discovering the bits of the Curry-Howard Isomorphism that are useful to me toward my goal. The pieces fit, but the tools by which we can experience it are rough at best. I think the author just needs to spend more time with it rather than ranting into the ether.

[Edit: Functional Pearl: Theorem Proving for All is a very short read showing how to express pre- and post-conditions on values as LiquidHaskell types. After that, it goes into how to express arbitrary properties by writing a literal proof in Haskell. What’s going on behind the scenes, which is a little bit hard to see, is that there’s a context of facts built up inside every function, and the facts are handed off to Z3, which turns its propositional-logic gears and spits out “yes” or “no”. In the proof examples, what you’re doing is manually introducing the facts necessary to prove the theorem.]

I think there’s a confusion here: Curry-Howard is a statement about running programs as proofs, not about programs as such as proofs. Of course we can’t have a programming language that expresses first-order logic at the syntactic level; the compilation of the program would be undecidable.

We have heaps of languages whose compilation is undecidable! Any language with macros. C++ templates. Java generics. TypeScript’s type system…

Is it? My understanding is that it’s “the program is a proof of its type”, not a “running this program gives you a witness” kind of thing (like reflection in Coq could give you).

There are languages designed for writing verified code, like Dafny or Why3, and they do embed first-order logic; it’s simply in separate code annotations that express Hoare-logic invariants. Compilation remains decidable, but verification/proving obviously is not.

This blog post is so weird. Definitionally, a proof encoded in a dependently typed language is a program.

Curry-Howard allows us to map between logic systems and type systems. This is extremely powerful and allows you to do things like derive linear types from linear logic.

Here’s why Curry-Howard is important to programmers even with concrete types. When we express the properties we want at the type level, we have a mathematical guarantee that things can’t go wrong. That’s the whole point for us: that things can’t go wrong.

For example, consider (in OCaml syntax) a `person` record paired with a `name` smart-constructor function. Such code guarantees that a `person` value can be constructed with any `int` value as an ID, but only with a valid `name` value, which can be constructed only by following the rules encoded in the `name` function. The types enforce that these logical rules must be followed. If you have a `name` value, you have a proof that it was constructed correctly. And if you have a `person` value, you also have a proof that it was constructed correctly.

Simple, everyday code that gives you mathematical proofs that it will work correctly. That’s why C-H is important.
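A minimal sketch of that pattern; the module layout and the validation rule (a name must be non-empty) are illustrative assumptions:

```ocaml
(* The [name] constructor is hidden behind the signature, so the only
   way to obtain a [name] value is through the [name] smart
   constructor, which enforces the validation rule. *)
module Person : sig
  type name
  type person = { id : int; name : name }
  val name : string -> name option
  val to_string : name -> string
end = struct
  type name = Name of string
  type person = { id : int; name : name }

  (* Illustrative rule: a name must be non-empty. *)
  let name s = if String.length s > 0 then Some (Name s) else None
  let to_string (Name s) = s
end
```

Outside the module, merely holding a value of type `Person.name` is evidence that the check in `name` succeeded.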

Why do you need C-H for this? You can constrain input to sets just fine and make assertions over the domain and range of pure functions well enough without invoking C-H. TLA+ after all models algorithms using set theoretic constructions. So what is C-H giving you here?

It’s giving me a simple model for thinking about my code, using a system (static types) that is mainstream and widely available. If I set up my types correctly, then the programs I construct using them can be provably correct. This is cool because this tool is accessible to working developers today.

Sure, and I use a ton of these in OCaml! But this is clearly on the “program” side of things. These are not proofs of anything a mathematician would call a theorem. They’re just statements about the program, which help in understanding it and allow the compiler to remove a ton of runtime checks; but nothing more. Trying to coerce OCaml into proving non-trivial stuff with its types gives you Michelson, whose design is heavily informed by what you can and can’t do with GADTs. If you want better proofs for OCaml, use Why3 or the upcoming Gospel from the same team.

The fact that you can have a type system at all is a consequence of C-H. Of course standard programming languages’ type systems make lousy theorem provers, but so what? Similarly, the theorems that arise from “regular” programs are not mathematically interesting, but again, so what? How does this make C-H a scam?

In the end C-H is more important to mathematicians than programmers, as it allows for things like Coq to exist. I find the idea of C-H not being important very bizarre. Before C-H, the idea that mathematics could be formalized at all was a hypothesis. C-H shows not only that it is true, but also shows you how to do it. It’s one of the most profound mathematical discoveries of the 20th century.

(just seeing this now).

CH is not necessary for the formalization of mathematics. Look up the HOL family, which is not based on terms-as-proofs at all, or Mizar (which predates a lot of the type-theory-based proof assistants, afaik).

I changed the title because it was exaggerated and I regretted how clickbaity it was, but still, your answer is the kind that prompted me to write this in the first place. CH is not the one and only way, and yet some people seem to think it is required for any form of theorem proving.