
My suspicion is that computer scientists have gotten types all wrong - and have latched onto type theory from formal logic without understanding why formal logic failed at its larger goals.

Why do you say that? I’d say computer scientists have been developing types in an interesting way that directly accounts for the flaws in formalism and profits from them. CS forces the “open logical system extended through learning/communication” perspective which avoids the failings of Formalism.

I don’t know what you are quoting, but types as sets or as some byzantine formalism (such as the Hindley-Milner mess) doesn’t solve any problem as far as I can tell. In actual programming, types and polymorphism are simple and come from the mapping between finite bit sequences and values. Why one would need to drag in a failed project from metamathematics is something I don’t get.

You are making too big of a deal of the connection to set theory. Nobody is “dragging in a failed project”, it is mostly a historical curiosity at this point. Most research in type theory involves systems that are unsound when interpreted as a logic. Even the people today interested in using type theory to formalize mathematics are using it as a way to replace set theory, with massive implications for machine verification of proofs.

I don’t understand how you can call Hindley-Milner a “byzantine formalism.” I get the feeling that you are unfamiliar with the sequent notation commonly used to describe the precise behavior of a type system. Hindley-Milner is the simplest type system that can elide type annotations and has the principal type property (each expression has a “best type”). Is type inference esoteric? Is querying your IDE for the type of an expression esoteric?

types as sets … doesn’t solve any problem as far as I can tell.

In actual programming, types and polymorphism are simple and come from the mapping between finite bit sequences and values.

So I guess you could say that a type is a set of finite bit sequences?

Most research in type theory involves systems that are unsound when interpreted as a logic.

Really? It is definitely not my field, but I can’t imagine what value could be obtained by a logical system in which False is a theorem. Certainly Milner claims his scheme to be sound. Maybe I don’t know what you mean by “interpreted as a logic”.

I still don’t know what problem is being solved. I see what Hindley is trying to do, but he’s interested in combinatory logic which is something that does not interest me all that much (I’m not saying it is not an interesting topic, just not to me). What I don’t get is what utility comes from applying such a generic approach to programming languages.

I don’t see a type as a set of finite bit sequences, since the interpretation of the same bit sequences into mathematical objects can be different. For example, if we look at the set of K-bit sequences, we can have a mapping into Z_{2^K}, and a mapping of symbolic operations into operations, e.g. integer(“a*b”) => integer(a) times integer(b) mod 2^K, or something. We could also map to a subset of the reals with a NaN value if we are using the same bit sequences as floating point. To me, the hard parts of type theory are figuring out how to bound errors, deal with overflows, work with limited representations, handle not-a-number results, etc.
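The point that one bit pattern supports many interpretations can be made concrete in a few lines of Python (a sketch; the constant below is just an illustrative 32-bit pattern):

```python
import struct

# One 32-bit pattern, two interpretations (illustrative example value).
bits = 0x40490FDB

# Interpretation 1: an element of Z_{2^32}.
as_int = bits % 2**32

# Interpretation 2: the same bits read as an IEEE 754 single-precision float.
as_float = struct.unpack(">f", struct.pack(">I", bits))[0]

print(as_int)    # 1078530011
print(as_float)  # 3.1415927410125732 (the float32 nearest to pi)

# On the Z_{2^K} view, "overflow" is just modular wrap-around:
print((2**31 + 2**31) % 2**32)  # 0
```

Which interpretation is “the” meaning of the bits is exactly what a type assignment decides; the bounding-errors and overflow questions live in the gap between the mathematical object and its finite representation.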

“Research in type theory” comes in two camps: the PL-ish and the mathematical. There’s a lot of value in a type system that proves False on the PL-ish side, as that is the type of a value for which you have no information, and any program with a non-productive infinite loop must take that type. You can excise general recursion, but now we’re heading toward the mathematical side of type theory.

Even when False is a theorem you can still get a lot of value out of the fact that structural reflection on the rules leading to a type judgement can give you a lot of information about the dynamic semantics of the proof/program. In this case, False just confers the information I said above: a totally non-productive program which will not, even in principle, ever produce information.

I’m not the one suggesting that types are sets of finite bit sequences (I’d argue pretty heavily that they’re nothing close to that at all!) but I would suggest that the practical problems you’re taking aim at w.r.t. type theory are very interesting and totally a real part of the PL-ish community of studying type theory.

They’re a bit weird from a mathematical perspective because there isn’t as much of a foundational motivation for why one should spend time trying to model overflows, but even that’s pretty interesting. I’d love to see type theories which would let you talk about Reals (perhaps computational, intuitionistic Reals) and Doubles together and relate what properties can be “ported” from one to the other as a basis for discussing how finite approximations to algorithms designed over the Reals can succeed or fail.

Why type theories as opposed to something else? Because type theories essentially specialize in information that can be obtained statically. Contrast this with information that can only be obtained “dynamically”, by running a program. In a world where non-termination is a real, ever-present concern (more generally, there is a real “cost of learning” even in the most theoretical sense) what type theories present is a measure of what “free” information is available.

A PL designed with a type system begins to optimize its design to offer more “free” information all the time. This tendency doesn’t always help directly with concerns which are structured to be dynamic problems (“will this algorithm ever OOB this array?”) but it does offer an opportunity to start cracking away at those problems by expressing more information about the system in question statically.

Abstract interpretation (which, caveat emptor, I don’t really understand that well) seems like another dimension in this whole picture. It offers information which isn’t “free” (completely static) but offers opportunity to start accessing information about a problem without paying the “entire cost” of running it.
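As a rough illustration of that middle ground, here is a toy abstract interpreter over the interval domain in Python (the names and the example are mine, not drawn from any real analyzer):

```python
# Abstract interpretation sketch: compute bounds on a program's result
# without running it on any concrete input. Values are intervals (lo, hi).

def add(a, b):
    # Abstract addition: endpoints add.
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # Abstract multiplication: extremes come from some pair of endpoints.
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Suppose all we know statically is that x lies in [-3, 5];
# analyze x*x + 10 without executing it.
x = (-3, 5)
result = add(mul(x, x), (10, 10))
print(result)  # (-5, 35)
```

The answer is sound but imprecise (the true range of x*x + 10 is [10, 35], since x*x can’t be negative): we bought partial information about the program without paying the full cost of running it, which is exactly the trade being described.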

I’m not the one suggesting that types are sets of finite bit sequences (I’d argue pretty heavily that they’re nothing close to that at all!)

I was primarily pointing out what I believed to be a contradiction, but I’ll defend this viewpoint anyway. Let’s say we are compiling to x86-64 and our compiler is ignoring SSE registers. Then every type, including arrow types (which are often thought of as primitive), ultimately boils down to “int64”. Parametricity prevents us from inspecting these physical representations, which preserves not only the compiler’s freedom to choose the physical representation but also memory safety. If we are compiling to a typed assembly language, this may very well be the actual way it works.

Sorry, didn’t mean to come off critically. I totally understand an idea that types are sets of bit sequences—I just didn’t want to defend it right there!

They’re a bit weird from a mathematical perspective because there isn’t as much of a foundational motivation for why one should spend time trying to model overflows, but even that’s pretty interesting.

So my suggestion is that not only is there little foundational motivation for modelling overflows, but that there is little engineering motivation for studying foundations.

I don’t get your claims about False as a theorem. If False=True, then we can prove that e.g. the type of a function is X and also Y and also anything else. Unsound deduction produces no information at all. Are you sure you don’t mean “incomplete” rather than “unsound”?

I can see your perspective, but I think there’s a lot of engineering motivation in studying foundations. I spend more time trying to model situations and build and test expectations about those models than I do working with algorithms. For me, having a system to introduce ideas and give them logical support is great and quite practical.

The heart of what makes False useful is that while it does implode your propositions, it cannot be practically realized. A generic coercion function `forall a b. a -> b` can be derived from False, but canonical inspection of the proof “takes forever”. So, your logic works for all “productive programs” and doesn’t require you to toss out general recursion.

Then, you examine proofs assuming that you don’t have False and see what you get. This is still meaningful information even though it can in principle be violated. It’s useful because that matches the logic we use when programming anyway: maybe there’s an infinite loop here, but I don’t think there is, so let’s assume there’s not and see what we get. Then at runtime we find we’re wrong. Oh no!
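For what it’s worth, the shape of that “generic coercion” survives even in an untyped setting; a Python sketch (illustrative only — Python enforces nothing statically here, but the structural point carries over from the typed case):

```python
def coerce(x):
    # Structurally this promises "anything in, anything out"
    # (the forall a b. a -> b shape), but the only way to keep that
    # promise is to never produce a value: inspecting the "proof"
    # by running it takes forever.
    return coerce(x)

try:
    coerce("anything")
except RecursionError:
    # Python gives up before "forever", but no value ever appeared.
    print("no value was ever produced")
```

That is the sense in which False “cannot be practically realized”: the inhabitant exists on paper, but it is exactly the non-productive program.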

Logics exist that let you avoid this song and dance, but they require a lot more of the programmer. It’s not always a practical tradeoff.

[Comment removed by author]

I think it’s a typical formal methods result: mostly typesetting. If there are any insights in it, they eluded me.

[Comment removed by author]

Probably my fault. I’m interested in how we treat types in programming languages and wondering whether the combinatoric/lambda-calc/russell approach gets at the interesting problems or if there is a more productive approach in applied math, not in metamathematics.

In particular, I don’t really see the value of Hindley-Milner for programming languages. But my ideas are just in the asking-questions stage.

[Comment removed by author]

I am just not that convinced that safe automatic deduction of type signatures is such a difficult or important issue.

I think reasonable people can disagree on what degree of type inference is necessary. However, not difficult? Maybe you just mean that HM isn’t difficult, which it shouldn’t be to anyone with a cursory background in PL. But in the limit, type inference is automated theorem proving, which is one of computing’s oldest and thorniest problems. The simplest of tweaks to HM can change its complexity from DEXPTIME to undecidable.
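To make “eliding annotations” concrete, here is a minimal Python sketch of unification-based inference over the monomorphic core (variables, lambdas, application; no let-polymorphism and no occurs check, so this is a fragment of HM, and all names are mine):

```python
# Types are tuples: ("var", n) or ("arrow", domain, codomain).
import itertools

fresh = itertools.count()

def new_var():
    return ("var", next(fresh))

def walk(t, subst):
    # Chase a variable through the substitution to its current binding.
    while t[0] == "var" and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    # Solve the equation t1 = t2, extending subst. (Occurs check omitted.)
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if t1[0] == "var":
        return {**subst, t1: t2}
    if t2[0] == "var":
        return {**subst, t2: t1}
    if t1[0] == "arrow" and t2[0] == "arrow":
        subst = unify(t1[1], t2[1], subst)
        return unify(t1[2], t2[2], subst)
    raise TypeError(f"cannot unify {t1} and {t2}")

def infer(term, env, subst):
    # Terms: ("ref", name) | ("lam", name, body) | ("app", fn, arg).
    if term[0] == "ref":
        return env[term[1]], subst
    if term[0] == "lam":
        a = new_var()
        body_t, subst = infer(term[2], {**env, term[1]: a}, subst)
        return ("arrow", a, body_t), subst
    if term[0] == "app":
        fn_t, subst = infer(term[1], env, subst)
        arg_t, subst = infer(term[2], env, subst)
        r = new_var()
        subst = unify(fn_t, ("arrow", arg_t, r), subst)
        return r, subst

def resolve(t, subst):
    # Apply the final substitution throughout a type.
    t = walk(t, subst)
    if t[0] == "arrow":
        return ("arrow", resolve(t[1], subst), resolve(t[2], subst))
    return t

# Infer the type of λf. λx. f x with no annotations: (a -> b) -> a -> b.
t, s = infer(("lam", "f", ("lam", "x",
              ("app", ("ref", "f"), ("ref", "x")))), {}, {})
print(resolve(t, s))
```

Adding let-polymorphism on top of this gives Hindley-Milner proper; it is the further extensions beyond that where the complexity blow-ups and undecidability mentioned above creep in.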

Not quoting, relating a fuzzy idea from the Intuitionist’s school of logical foundations.

types and polymorphism are simple and come from the mapping between finite bit sequences and values

I don’t think I can agree with that statement. At the very least I’d like to understand what you mean by “value” here. I’d probably also be interested in better understanding how you’re defining “type” and “polymorphism”.

But maybe the better place to start is: I can agree with you that formalists failed to achieve some of their stated ends, but in what way do you feel that type theory as applied to PL is depending upon those successes or continuing along failed routes?

They failed all of their stated goals. No? Will try your harder questions later.

They “failed” their ultimate Program in that what Hilbert hoped to achieve was proven impossible. This is a bit different from their “goals” however. The goal was ultimately to achieve greater understanding of the challenge of formalizing mathematics and logic and to create common understanding and shared language around the most challenging fundamental aspects of it. Hilbert expressed a philosophical goal by suggesting how this would play out and was shown to be wrong in his guess. That doesn’t mean that the larger mathematical work was a failure at all.

The intense study of these problems led to things like types and type theory which embrace a different philosophical foundation than Hilbert (or, really, are quite foundationally agnostic) and have been widely successful as a language for discussing mathematics. Instead of showing, as they hoped, that there could be an ultimate final formalization of algorithms for constructing mathematical proofs of any variety they instead showed that intense study of formalization and foundation proves the need for continuous growth of mathematical thought. It forces the issue of an open and growing system and ends up being a Full Employment Theorem for mathematicians.

Type theory here is just the technology for letting Mathematical Foundations reach that stage of its evolution. Even if we as of yet hadn’t found a way past the sticky point Gödel left us in we’d still have to call type theory successful as a technology.

I’m not in a position to argue that in terms of mathematics as a whole, but I don’t see any interesting results in CS that are not self-referential.

Should computer scientists care why formal logic failed at the goals mathematicians gave it?

What matters to us is if it helps create more reliable software. If mathematicians set their goals higher and failed to meet them, that’s orthogonal.

I’m guessing types come from the first time lack of types caused a bug in some asm code. “Oh rats, that wasn’t a byte, that was a word”. Then C came along and fixed the lack of type safety in asm and fixed all type safety problems for ever. Then C++ came along and fixed the lack of type safety in C.

Did you read the post? There’s a lot more history.