This is over my head. Can someone explain it to me? Most likely, slowly, with a minimum of big words.
I joke, but not really. I was good with mathematical concepts but never rigorously learned the symbols, so I regularly struggle to move through texts that are heavy in mathematical notation. I can do it, but it takes forever.
I haven’t read the paper yet past the abstract, but I’m familiar with the ideas laid out there at least so I’ll give it a go.
It’s been repeatedly discovered that there’s a remarkably tight, deep correspondence between logics and programming. It’s a bit mysterious to say that because (a) it requires a typed language to see most of the features of the correspondence and (b) that type system has to be really good for the features to be interesting. In particular, I’m most familiar with Haskell’s type system, and while it’s powerful it’s still quite insufficient to demonstrate these ideas. To really see them you should study Agda or Coq, so I’ll immediately drop some good references for learning a bit of Coq:

Benjamin Pierce’s Software Foundations
Adam Chlipala’s Certified Programming with Dependent Types
To dive in a bit, we want to correspond types with theorems and programs with proofs. To see this play out (though this is an unbelievably boring example) we might write this program in Haskell
foo :: Int -> Int -- line 1
foo x = x + 1 -- line 2
and read line 1 as saying “we suppose a program exists with the type Int -> Int” and read line 2 as saying “here it is, a program of type Int -> Int”. If we were to interpret that logically we’d say line 1 says “We suppose there exists some method of showing that if you have an Int, you can provide an Int” and line 2 says “Here, we exhibit a proof that this (line 1) is true”. Dreadfully boring example, again, so really you want a type system powerful enough to write something like
foo :: (a :: Natural) -> a + a == 2 * a
which could be read as the theorem that “if you have a natural number, a, then a + a equals 2a” and we assume that the system which could express this type is also sufficiently powerful to express a program which inhabits this type.
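In a proof assistant this really is a provable theorem. As a sketch (in Lean rather than Agda or Coq, purely for brevity; the theorem name is mine, and I’m assuming the built-in linear-arithmetic tactic):

```lean
-- "If you have a natural number a, then a + a equals 2a" — the type is
-- the theorem, and the tactic script below is the program inhabiting it.
theorem add_self_eq_two_mul (a : Nat) : a + a = 2 * a := by
  omega
```

The point is that the proof is a first-class program: if the tactic fails to produce one, the theorem simply doesn’t typecheck.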
And that’s exactly the idea. It turns out that the laws of sensible type systems are exactly such that you can build so powerful a system and have it behave such that you can find programs for a type only when that type represents a true proposition.
Here’s another example, this time functional Haskell
impossible :: forall a . forall b . a -> b
impossible = _ -- a typed hole; no honest program can fill it
It’s impossible to find a program with the same type as impossible—this is universal coercion and if it existed it’d mean your type system was completely untrustworthy (*cough* i.e. every other type system ’round *cough*). Read as a theorem it’s also absurd: “It’s possible to prove anything you want, given that I provide you anything at all”. Such a system of logic would be meaningless.
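To be fair, Haskell itself lets you cheat here: because Haskell is partial, you can “inhabit” this type with non-termination or a runtime error, just never with an honest value. A sketch (the names are mine):

```haskell
-- Both of these typecheck at the absurd type a -> b, but only by
-- looping forever or crashing — neither ever delivers an actual b.
coerceByLooping :: a -> b
coerceByLooping x = coerceByLooping x  -- never returns

coerceByCrashing :: a -> b
coerceByCrashing _ = undefined         -- explodes if ever forced
```

This is exactly why Haskell, read as a logic, is inconsistent — which is what the fast-and-loose reasoning mentioned below works around.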
We can write a true example in Haskell, though it’ll once again be boring
id :: forall a . a -> a
id x = x
This function, id, is obviously inhabited and thus its type represents a true theorem. That theorem is obvious: “If I tell you how to prove something, you can tell me how to prove that same thing” and any logical system which could not deduce it would be fatally flawed.
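A slightly less boring true theorem we can state in plain Haskell: under the correspondence, pairs are conjunction and functions are implication, so currying witnesses the logical equivalence between (A ∧ B → C) and (A → B → C). A sketch (the names are mine):

```haskell
-- (A ∧ B → C) implies (A → B → C): one direction of the equivalence
curryProof :: ((a, b) -> c) -> a -> b -> c
curryProof f x y = f (x, y)

-- (A → B → C) implies (A ∧ B → C): the other direction
uncurryProof :: (a -> b -> c) -> (a, b) -> c
uncurryProof f (x, y) = f x y

-- Modus ponens: from A → B and A, conclude B
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x
```

Each definition is simultaneously an ordinary program and a proof that its type, read as a proposition, is true.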
And that’s the correspondence in a nutshell—any sensible type system asks for the same things a sensible logical system does. We kept inventing new sensible type systems and finding the corresponding, sensible logical system, and vice versa. It means that all of this research does double duty and that there’s something mysterious and special going on in the field of new type systems.
And to work around the untrustworthiness of Haskell’s logic we often use something called Fast and Loose Reasoning. We basically try to use a total subset of Haskell for reasoning and let our programs be embedded in the world of partiality.
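For instance (my example, not from the paper): the law that reversing a list twice gives it back is a theorem about the total fragment — finite, fully defined lists — but fails in full Haskell for infinite lists, where the left-hand side never produces anything.

```haskell
-- Holds for every total, finite list:
roundTrip :: Eq a => [a] -> Bool
roundTrip xs = reverse (reverse xs) == xs

-- But in full Haskell, reverse (reverse [1..]) is bottom: reversing an
-- infinite list never yields even its first element. Fast and loose
-- reasoning says: prove the law in the total fragment, apply it with care.
```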
Thanks for this comment. Really great!