Also see the comment (use case) from Mike Stay: https://golem.ph.utexas.edu/category/2021/02/native_type_theory.html#c059431
This looks interesting, but it’s a bit beyond me. Any chance we could get a slightly dumbed-down explanation, with a bit less category theory jargon?
I’ll give it a try. The main idea is to treat a programming language not according to its advertised type system, but according to its actual computational rules. Because every programming language has such rules, the type theory derived from them is a valid logic for reasoning about the behavior of programs, even if the language is traditionally considered untyped.
For example, let’s take a snippet of Python 3.
x = 1; x + 2
This snippet is closed; it doesn’t refer to any outer names. By the rules of Python, we could remove the first statement and instead bind x in an outer scope, with the primitive value 1 already assigned. So, in a context where x = 1, we have:
x = 1
x + 2
Now, again by the rules of Python, we can substitute 1 for x in our snippet, rewriting the syntax:
1 + 2
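This substitution step can even be mechanized. As a sketch of my own (not anything from the paper), Python’s standard ast module lets us rewrite the syntax tree of the snippet, replacing reads of the name x with the literal 1:

```python
import ast

# A closed snippet: bind x, then use it.
source = "x = 1; x + 2"
assign, expr = ast.parse(source).body  # the two statements

class Substitute(ast.NodeTransformer):
    """Replace reads of a given name with a constant value."""
    def __init__(self, name, value):
        self.name, self.value = name, value

    def visit_Name(self, node):
        if node.id == self.name and isinstance(node.ctx, ast.Load):
            return ast.copy_location(ast.Constant(self.value), node)
        return node

# Substitute 1 for x in `x + 2`, yielding the rewritten syntax `1 + 2`.
rewritten = ast.fix_missing_locations(Substitute("x", 1).visit(expr))
print(ast.unparse(rewritten))        # 1 + 2
print(eval(ast.unparse(rewritten)))  # 3
```

The point is that the rewrite happens purely on syntax, exactly as in the hand-worked example above; evaluation of the result is a separate step.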
This might seem like ordinary big-step semantics. But we also inherit type information from the term constructors of the language. For Python 3, we know that 1 is not just a value, but has a type int of integers. This applies to operators, too; we know that if x and y are of type int, then so is x + y. Or, in a phrasing closer to type theory:
x : int, y : int |- x + y : int
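That judgement can be turned into a toy checker. The following is a minimal sketch of my own (not the paper’s construction): infer int for integer literals, look names up in a typing context, and give x + y type int exactly when both operands have it:

```python
import ast

def infer(node, ctx):
    """Infer a type for a tiny fragment of Python:
    int literals, names drawn from a typing context, and + on ints."""
    if isinstance(node, ast.Constant) and isinstance(node.value, int):
        return "int"                  # |- 1 : int
    if isinstance(node, ast.Name):
        return ctx[node.id]           # x : int |- x : int
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        if infer(node.left, ctx) == infer(node.right, ctx) == "int":
            return "int"              # x : int, y : int |- x + y : int
    raise TypeError("untypable term")

expr = ast.parse("x + y", mode="eval").body
print(infer(expr, {"x": "int", "y": "int"}))  # int
```

Nothing here is specific to Python’s advertised types; the checker only uses the term constructors of the language, which is the spirit of the construction.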
The paper’s main contribution is a seamless synthesis of these two ways of approaching the behaviors of programming languages.