Saying that a Turing-complete type system “has bugs” is only fair if there’s a specification that type checking must terminate. For example, dependent type systems have non-termination as a possibility, but it’s rarely an issue in practice. You’re very unlikely to end up in a bad case just because you decided that you wanted to keep track of vector sizes in the types.
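That benign use case — tracking vector sizes in types — looks roughly like this in Haskell with GADTs. A sketch only; the names Vec, vhead, and the Nat kind here are my own illustration, not from the comment above:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers for indexing vectors by length.
data Nat = Z | S Nat

-- A list whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total head: the empty-vector case is ruled out by the type,
-- so no runtime check is needed.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```

Nothing in this kind of indexing tempts the checker into non-termination; the type-level recursion is structural.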
Also, type systems without dependent types or Turing-complete capabilities are still vulnerable to pathological code that takes infeasibly long to compile. For example, this bit of Haskell:
ghci> id id id id id id id id id id id id id id id id id... id $ 5
with about 30 ids, will build up an exponentially large type while solving the system of equations: the innermost id is Int -> Int, the second innermost is (Int -> Int) -> (Int -> Int), the third is ((Int -> Int) -> (Int -> Int)) -> ((Int -> Int) -> (Int -> Int)). So the type of your outer id, if you have 30 of them, is a tree with 2^30 nodes.
You can also bork a type checker in fewer characters with a construction like this:
let f0 x = (x, x); f1 x = f0 (f0 x); f2 x = f1 (f1 x); f3 x = f2 (f2 x);
    f4 x = f3 (f3 x); f5 x = f4 (f4 x); f6 x = f5 (f5 x)
This creates a massive type:

f0 :: t -> (t, t)
f1 :: t -> ((t, t), (t, t))
f2 :: t -> ((((t, t), (t, t)), ((t, t), (t, t))), (((t, t), (t, t)), ((t, t), (t, t))))
f3 :: t -> ... tuple with 2^2^3 = 256 t's
fN :: t -> ... tuple with 2^2^N t's

As bad as this may seem, I have never seen it come up in real-world programming.
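You can watch the doubling from inside the program. A small sketch — typeSize is a helper name I made up, and it measures the printed form of the type rather than counting nodes:

```haskell
import Data.Typeable (Typeable, typeOf)

-- The same construction as above: each f_i's result type is
-- roughly twice the size of the previous one.
f0 x = (x, x)
f1 x = f0 (f0 x)
f2 x = f1 (f1 x)

-- Length of the rendered type, as a crude proxy for its size.
typeSize :: Typeable a => a -> Int
typeSize = length . show . typeOf
```

Comparing typeSize (f0 ()), typeSize (f1 ()), and typeSize (f2 ()) shows the growth directly.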
Turing completeness is equivalent to enabling unbounded recursion. Without Turing completeness, you’re usually limited to primitive (or bounded) recursion. For example, for simple enough type systems one can prove that type checking terminates (although if the perversions above are used, it can take a very long time). This is theoretically important but not necessarily meaningful in practice, because unbounded recursion and primitive recursion from, say, 1 to 2^100 can both take longer than any of us will be alive.

Sad not to have the actual line at hand, but here is Martin Odersky on the compile time of a single line of Scala: https://twitter.com/headius/status/630779433373163521
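The bounded-versus-unbounded distinction can be sketched in a few lines. These are illustrative names of my own: the first recursion is bounded by its Int argument and always terminates, while the second has no known termination proof:

```haskell
-- Bounded (primitive-style) recursion: recurses exactly n times,
-- so termination is guaranteed by the decreasing counter.
foldNat :: b -> (b -> b) -> Int -> b
foldNat z s n = if n <= 0 then z else s (foldNat z s (n - 1))

-- General recursion: counts Collatz steps to reach 1. Whether this
-- terminates for every positive input is an open problem.
collatzSteps :: Int -> Int
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)
```

A total language would accept foldNat but reject collatzSteps as written, since no argument is structurally decreasing.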
Also, the fact that many systems turn out to be accidentally Turing-complete shows how little practical effect the property has. If it were of practical significance in that setting, implementors would notice immediately. Examples: https://www.gwern.net/Turing-complete
Obviously, if you want proofs, it’s the worst thing to have.
General recursion is the key concept. That’s what leads to the ubiquity of Turing completeness.
Primitive recursion (i.e. “do this 200 times” or even “do this N! times” where N is the size of the problem) isn’t an issue with regard to termination, but some questions require more. For example, “Is T a theorem of (first-order or more powerful) system A?” In the positive case, you have to find a proof, but that might take arbitrarily long. In the negative case, it might be that T is not a theorem but that no proof of T or ~T exists, i.e. it’s undecidable. The set of theorems is Σ₁, or recursively enumerable: in principle, we will list all of the theorems given infinite time, but we can never rule out all the non-theorems. (Note that “non-theorem” means unprovable, not “false”. Per Gödel, any consistent system strong enough for arithmetic has sentences T such that both T and ~T are non-theorems.)
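A sketch of what “recursively enumerable” buys you, assuming the theorems can be produced as a (possibly infinite) list — semiDecide is a name I made up:

```haskell
-- Answers True if t eventually appears in the enumeration.
-- If t never appears in an infinite enumeration, this simply runs
-- forever: there is no general way to answer False.
semiDecide :: Eq a => [a] -> a -> Bool
semiDecide enumeration t = go enumeration
  where
    go (x : xs)
      | x == t    = True
      | otherwise = go xs
    go [] = False  -- only reachable for finite enumerations
```

For example, semiDecide (map (^2) [1..]) 49 returns True once it reaches 49, but asking the same question about 50 never returns.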
There are plenty of real-world problems that, when stated most generally and simply, require general recursion. For example, “I want to find a route, by road, from New York to Seattle.” You can view a path as a proof that such a route exists, and the search as an unbounded recursion on a graph. In practice, we can use a lot of knowledge about the system both to make the problem tractable (e.g., apply A* search) and to place a bound on it (e.g., exclude paths that exceed 20,000 miles). But without that knowledge about the system, we’d be stuck with general recursion. So if you’re building infrastructure and don’t know who the user will be or what bounds make sense, general recursion is a natural attractor… and that’s not necessarily a bad thing.
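Bounding the search as described can be sketched like this. The toy road graph and the fuel bound are my own invention; the point is that the bound restores termination even though the graph has cycles:

```haskell
import qualified Data.Map as M

type Graph = M.Map String [String]

-- A tiny cyclic road graph (made-up connectivity for illustration).
roads :: Graph
roads = M.fromList
  [ ("NYC",       ["Cleveland"])
  , ("Cleveland", ["Chicago", "NYC"])
  , ("Chicago",   ["Seattle", "Cleveland"])
  , ("Seattle",   [])
  ]

-- All routes from `from` to `to` using at most `fuel` hops.
-- Without the fuel bound, the NYC<->Cleveland cycle would make
-- this search recurse forever.
routes :: Int -> String -> String -> [[String]]
routes 0 _ _ = []
routes fuel from to
  | from == to = [[to]]
  | otherwise  = [ from : rest
                 | next <- M.findWithDefault [] from roads
                 , rest <- routes (fuel - 1) next to
                 ]
```

The fuel parameter plays the role of the 20,000-mile bound: it turns a generally recursive search into a primitively recursive one.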