Algebraic datatypes are an extremely simple and beautiful thing; they’re just unfamiliar. One of the things I’m excited about Pyret potentially doing is making them a standard part of an introduction to programming course.
Algebraic datatypes are awesome! Pyret’s also super cool. Yeah I don’t think they’re particularly complicated to explain, but their actual implementation isn’t that simple when it comes to type inference, pattern matching, and recursive datatypes.
Ah, my mistake, sorry. I didn’t realise you meant in terms of implementation; I thought you were picking on Haskellers for calling unfamiliar language features “simple”.
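For readers who haven’t met the term, here is a minimal Haskell sketch of the kind of recursive algebraic datatype and pattern matching being discussed (the names are illustrative, not taken from Pyret or the post):

    -- A recursive algebraic datatype: an expression is either a literal
    -- or the sum of two sub-expressions.
    data Expr
      = Lit Int
      | Add Expr Expr

    -- Pattern matching mirrors the constructors; recursion in the function
    -- mirrors recursion in the type.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add l r) = eval l + eval r

    -- eval (Add (Lit 1) (Add (Lit 2) (Lit 3))) == 6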
I think it’s a bit incomplete to talk about how Dependent Haskell is the future of formal verification (not software development), but only compare it to Agda and not any of the actual formal verification languages used in production.
Incidentally, is there a big software project that’s been done in Agda? I know Isabelle has seL4 and Coq has CompCert, but I’m drawing a blank on an Agda example.
CakeML is another example, from the HOL side, that pairs with CompCert quite nicely. They’re both compilers.
As far as Agda goes, I don’t know of anything big. I found a pretty-printer and a simple compiler. The publications list might have something if you dare to read all that. ;) Mostly it looks like PLT and math stuff rather than realistic software.
Or, indeed, to compare it to any other dependently typed languages :)
On one hand, he did mention another dependently typed language:
There are experiments in this direction, for instance Idris, but Idris is a language with eager (strict) evaluation and its implementation isn’t particularly stable at the moment.
Then again,
Is Dependently-typed Haskell’s implementation notably more stable than Idris’s?
I recall Idris having very good reasons for not being lazy ;)
That’s not even all of the reasons :).
It would be perfectly reasonable to write a lazy back end for Idris. There are plenty of back ends already and I’ve deliberately made it easy to make more if people want to try stuff out. It would probably involve rewriting a lot of library code to take proper advantage of it, though.
My main worry about this sort of discussion is that, cool as it is to see it happening, for me Haskell’s approach to dependent types does a good job of making them look a lot more complicated than they really need to be. I think the concept is a fairly simple one - types are first class in the same way that functions are - but either we’re not doing as good a job of explaining this as we need to be, or it’s not really as simple as I like to think after all…
I used to think it wasn’t simple until the logical connection with proofs started making sense but that was only after years of fighting with the ideas. My most recent foray is with “Type Theory and Formal Proof” + “Software Foundations” and it’s going a lot better.
Now I don’t know why it all seemed complicated. I think the main blocker was getting caught up in the categorical stuff and not realizing that’s just one angle of attack, and maybe not even a very good angle at that.
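To make “types are first class in the same way that functions are” a bit more concrete, here is a small hypothetical sketch in today’s GHC Haskell (which, arguably, also shows some of the encoding overhead the commenter above is worried about): a function on types, plus a term whose result type is computed from a type-level value. All names are illustrative.

    {-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

    -- A singleton ties a runtime value to a type-level Bool.
    data SBool (b :: Bool) where
      STrue  :: SBool 'True
      SFalse :: SBool 'False

    -- A function that takes types and returns a type, just as ordinary
    -- functions take and return values.
    type family If (b :: Bool) t e where
      If 'True  t e = t
      If 'False t e = e

    -- The result type of pick is computed by If from the (type-level
    -- image of the) first argument.
    pick :: SBool b -> t -> e -> If b t e
    pick STrue  x _ = x
    pick SFalse _ y = y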
There’s a whole lot that I don’t like about this post.
First of all, it doesn’t properly answer the question in the title. The objective of the blog post seems to be something else entirely; it looks like a mix of a provocation, an analysis, and a project announcement.
Btw, the ‘unityped’ claim about dynamic typing is probably wrong, because it confuses the language with the language implementation. When the program is being interpreted, the variable cells it uses are fully defined and form a single type. But you could use a different interpreter, with a different sort of variable cell structure, that allows different values to be passed in. It’s more like universal quantification at every variable, with every function being polymorphic in an unrestricted manner.
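For concreteness, the ‘unityped’ reading being pushed back on here usually means something like the following (a hypothetical sketch of an interpreter’s value representation, not any particular implementation):

    -- From one interpreter's point of view, every value of the dynamic
    -- language inhabits a single big sum type...
    data Value
      = VNum  Double
      | VStr  String
      | VBool Bool
      | VFun  (Value -> Value)

    -- ...and every function of the interpreted language is, internally,
    -- just a function Value -> Value.
    apply :: Value -> Value -> Value
    apply (VFun f) v = f v
    apply _        _ = error "not a function"

The comment’s point is that this describes one implementation’s representation, not the language itself.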
The attraction of dynamically typed languages is that you need to describe only the bare minimum to get a running program. You won’t know if it keeps working afterwards, but often you’re just happy if it runs even once.
I also don’t like the “tests cannot replace types - we need both!” remark. In a proper setting, the types are formulas that describe the problem and the desired solution to it. The program is a formal proof that there’s a solution to the problem. In such a setting you don’t need tests! You only need to verify that the problem described in the type is exactly the problem you need solved.
The point of types is not “confidence in code correctness”. The point is certainty. In such a setting, tests would be useless by definition.
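A miniature of the propositions-as-types reading being invoked here, in plain Haskell (a toy sketch; the argument above needs the richer formulas that dependent types allow):

    -- Read each type as a logical formula and the program as its proof.

    -- "A and B implies A"
    projL :: (a, b) -> a
    projL (x, _) = x

    -- "A implies (B implies A)"
    weaken :: a -> b -> a
    weaken x _ = x

    -- "(A implies B) and (B implies C) implies (A implies C)";
    -- the proof is function composition.
    chain :: (a -> b, b -> c) -> (a -> c)
    chain (f, g) = g . f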
The argument against Rust is also annoying. The practical work on Rust has progressed and has helped type theorists address the problems that arise when using types to reason about mutable constructs. This is just as valuable and important as the study of dependent types. I would prefer to use types to reason about mutable constructs as well.
I like the confidence about making dependent typing practical. But that feels like silly confidence. Shouldn’t a proper dependent array lookup type look more like (xs : [a] -> i : Int -> a) {0 < i ∧ i < len xs}?
This is indeed simple, but it sure doesn’t feel like it when you put it in an unexplained run-on like this ;)
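To unpack that signature a little: one hypothetical way to express the same idea in GHC Haskell is to move the bounds side condition into the type of the index itself, so the constraint is carried by the argument rather than attached as a separate predicate (an illustrative sketch, not the blog post’s code):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Natural numbers, usable at the type level via DataKinds.
    data Nat = Z | S Nat

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Fin n: numbers that are provably smaller than n.
    data Fin (n :: Nat) where
      FZ :: Fin ('S n)
      FS :: Fin n -> Fin ('S n)

    -- Total lookup: an out-of-bounds index cannot even be written down,
    -- so there is no separate bounds predicate and no runtime check.
    index :: Fin n -> Vec n a -> a
    index FZ     (VCons x _)  = x
    index (FS i) (VCons _ xs) = index i xs

Refinement-style signatures like the one in the comment above, where a predicate is attached to an ordinary Int, are the other common way of writing the same contract.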
The downside is that, while safe, memory management in Rust is still manual and non-trivial
In what sense is it more “manual” than garbage collected languages? RAII means you don’t worry about memory management details unless you’re specifically writing a low-level container like a vector.
You have to explicitly say whether a variable is shared or exclusively owned by a binding. In a GC-ed language, the runtime just figures it out for you.
Which actually makes it very similar to a statically/dynamically typed program, now that I think about it.
None of those things are simple :/