I don’t think these kind of articles are productive since they use different meanings of the same terms. That can be a good thing but not in debates where one seeks agreement. My hypothesis is that there’s at least two definitions of typed/untyped at play:

The author and people on that side who are looking at it in a formal way based on academically-accepted definitions of the words.

Most of the people saying the programs are untyped probably mean they’re not “explicitly” typed. They don’t require extra typing, limiting structure of their programs, and so on.

If so, then No. 2 is a popular definition of “untyped.” It’s academically incorrect, but the masses are on board with it. These articles might correct a misuse of the definition, but they probably won’t convert the No. 2 crowd to type systems, since they don’t address what that crowd actually likes about “untyped” languages. Projects that did address it are Strongtalk and Typed Racket, which tried to combine the benefits of RAD-style languages with stronger typing.

I don’t think the article is trying to convince people to use static typing. What it says is that we should talk about different approaches to enforcing invariants about programs. These can come in the form of static types, contracts, tests, and so on. The dichotomy of static and dynamic typing is naive, and it distracts from the actual purpose of having a specification.

I like that interpretation and goal. Plus, raising awareness of the concept and importance of invariants is good in itself, especially since people who grasp it will build more reliable stuff. Maybe even upgrade to high assurance. :)

The author says:

“When I’m hacking, I write in ‘untyped’ languages. I write programs in Racket, scripts in bash, plugins and tools in JavaScript, papers in LaTeX, build systems in Makefiles, and so on.”

So I don’t think “converting” people is the goal of this article.

I don’t know if I buy the proof. Sure, you can say that a binary with undefined behavior is not a program because a program consists only of defined expressions, but these binaries exist and people work with them every day. If anything, this means that there are untyped or partially typed binaries out there. At the end of the day, I think we only really care about that code artifact, because that is what runs.

This article has no type. At least it is not sound. I think serious writing about type theory starts with studying the aptly named untyped lambda calculus. Terms in the untyped lambda calculus have no type.

That’s one way to look at it. From another perspective, the terms of the “untyped” (some would say “uni-typed”) lambda calculus have only one type. That’s how TAPL presents it, for example. I consider that serious enough.
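For instance, the uni-typed view can be sketched in Haskell (a hypothetical illustration of the idea, not TAPL’s actual code): embed every untyped term into a single type `D`, so the “untyped” language is typed with exactly one type, and bad applications become runtime errors.

```haskell
-- Sketch of "untyped is uni-typed": every term of the untyped language
-- gets the single type D. Tagging numbers with Num lets us observe
-- results; applying a non-function is the usual dynamic "type error".
data D = Fn (D -> D) | Num Int

app :: D -> D -> D
app (Fn f) x = f x
app (Num _) _ = error "runtime error: applied a non-function"

-- Church numeral two: \f -> \x -> f (f x), well-typed at D
two :: D
two = Fn (\f -> Fn (\x -> app f (app f x)))

-- Decode a Church numeral by feeding it successor and zero
decode :: D -> Int
decode n = case app (app n (Fn succD)) (Num 0) of
  Num k -> k
  Fn _  -> error "not a numeral"
  where
    succD (Num k) = Num (k + 1)
    succD _       = error "succ applied to a non-number"

main :: IO ()
main = print (decode two)  -- prints 2
```

Here `decode two` evaluates to 2, while something like `app (Num 0) (Num 1)` type-checks at `D` but fails at run time, which is exactly the unityped account of a dynamic type error.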

But I would agree that studying the differences between Church’s 1936 “untyped” and 1940 “simply typed” lambda calculi is quite illustrative.

It seems utterly pointless to talk about a type system where everything has one and the same type. The only point of a type system is to distinguish between different types. Otherwise you just . . . don’t have a type system.

Sure, as long as you’re only talking about that one “untyped” language and its properties, there’s no point. But when you want to compare languages and their features, it’s all of a sudden very helpful.

This happens all the time in the history of math, by the way: as a theory develops and gets elaborated, something previously thought of as distinct gets recognized as a degenerate special case of something else. For example, the natural numbers are included in the integers when we recognize the (not necessarily intuitive!) concept of negation, which arises out of subtraction, which in turn is desirable as a way of undoing addition.

It can be argued that there’s no point to a type system; it’s just a formal system, whether intended to be or not. It’s useful to know what these formal systems represent.

It’s actually a very useful perspective for understanding the semantics of such languages: Dana Scott’s reflexive objects are essentially just a codification of the fact that untyped can be seen as unityped.

Reminds me of that Guy Steele talk where he notices that type theorists primarily use an ad-hoc, unspecified, dynamically typed language to describe the denotational semantics of their language.

Yes, that is a good one! But the point is that the metalanguage is supposed to be smaller than the language and can be trusted… I think N. G. de Bruijn discusses this issue in depth.

a+ expert troll

Pseudo-mathematical proof. Stark, unnuanced conclusion.

I think the author is missing the point and tackling a strawman. As far as I can tell, the real “debate” between statically and dynamically typed languages is whether the compiler uses type assertions to find errors and enable optimizations, and whether the coder considers those valuable enough to offset the extra keyboard typing.

As Wolfram pointed out, what academic computer scientists mean by “type theory” doesn’t correspond at all to mathematical usage, which is much more flexible and rich. Academic computer science confuses metamathematical ideas with mathematical practice.

Untyped programs do exist. The author is just looking in the wrong direction: down toward the hardware, when he should be looking up toward abstract programming languages.

Here’s an example of an “untyped” program: the `id` function from Haskell.

`(λx.x) : (∀a. a -> a)`

It works for every value you pass in, and that value ranges over an infinitely large domain. The type in the program says that for every type, you can take a value of that type and get back a value of the same type.

If you let go of the idea that a program must represent a valid proof of some logical formula, then you can describe a large variety of programs. The validity of the program is determined only when the domain is selected. That’s how you get runtime errors and dynamic typing.
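That “validity is determined when the domain is selected” story can be sketched with a hypothetical tagged-value type (the names here are mine, purely for illustration): an operation is only validated at run time, once a concrete domain member shows up.

```haskell
-- Hypothetical sketch: values carry their domain tag, and an operation
-- like addition is only checked at run time. Picking the wrong domain
-- member yields a dynamic type error rather than a compile-time one.
data Value = VInt Int | VBool Bool
  deriving (Eq, Show)

addV :: Value -> Value -> Either String Value
addV (VInt a) (VInt b) = Right (VInt (a + b))
addV _        _        = Left "runtime type error: addV expects two ints"

main :: IO ()
main = do
  print (addV (VInt 1) (VInt 2))      -- Right (VInt 3)
  print (addV (VInt 1) (VBool True))  -- Left "runtime type error: ..."
```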

The original post parrots Robert Harper’s dumbest ideas. It’s equivalent to taking my 2014 post “Dynamic typing is good” and using it as the basis for refuting the use of type systems.

It appears that the motive for writing the post came from a short discussion with somebody else. He couldn’t reach agreement on the subject, so he decided to write a very assertive blog post to drive his point home.

The Haskell `id` function is typed (it is polymorphic, but it has a universal type). I’m afraid I don’t understand what definition of “typed” you are using here; could you clarify a little?

I’m not sure what connection the proofs-as-programs idea has here; it merely states that some interesting type systems happen to mirror some interesting proof theories. You can trivially go from a type system to a logic, though the result is rarely worth studying. Going in the opposite direction is usually more interesting, but seems irrelevant here.

Type systems not only mirror proof theories; they are isomorphic. And the isomorphism is very useful because theorems about programs can be stated in the same language as the programs themselves, as seen in languages such as Coq, Idris, Agda, Epigram… It gives verification powers to languages, and also gives implementation laboratories for logics.
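A minimal Haskell illustration of the correspondence (the names are mine, and Haskell is far weaker than Coq or Agda for this): a proposition is rendered as a type, and a total program of that type is its proof.

```haskell
-- Propositions as types, proofs as programs (a toy sketch; in Haskell
-- the correspondence is only approximate, since e.g. a nonterminating
-- term would inhabit, and thus "prove", any type).

-- (A AND B) implies (B AND A): conjunction is a pair; the proof swaps it.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Modus ponens: from A -> B and A, conclude B.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

main :: IO ()
main = print (andComm (True, "q"))  -- prints ("q",True)
```

In Coq or Agda the same idea scales to real theorems, because totality is enforced and the logic is sound.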

I avoided the use of the word “isomorphic” because it’s not clear in what category such an isomorphism should live. But yes, it is a very important symmetry!

Isomorphic as in a one-to-one mapping back and forth, so that you can always recover the original information/object after going from one representation to the other.

Looks like a type to me… but that’s just Haskell’s type system. Are you trying to make a more general claim?

The more general claim is that it’s idiotic to continue the static vs. dynamic typing debate because it is a false dilemma. And it’s harmful to propagate it.

If you take the role of a dynamic typing zealot, it means you’ve discarded the study of logic and proof theory that could enhance your usual ways of reasoning about programming. You don’t need to stop using dynamic languages to reap the benefits, but once you’ve figured it out, you’ll want to use something like Prolog, Coq, or Haskell more often.

If you go and become a static typing zealot, then you commit a fallacy of another sort: putting some particular set of deduction rules on a pedestal and discarding all methods of reasoning outside whatever framework you’ve decided to use. Just like the author asserts that everything, including assembly language and C, has to have a well-defined type, even if the type is useless because it fails to convey what the program is trying to solve. There are tons of logical systems and frameworks, and nobody has decreed that we should stick to intuitionistic logic.

Effectively, both stances discard the study of logic in one form or another. That harms scientific progress and the movement from theory into practice.