There seem to be a few different dimensions along which one can consider the question:
There are indeed pros and cons to all approaches. For instance, with the second, we obviously want as much to be done automatically as possible, but there are decidability issues (to say nothing of efficiency!). Should a system be conservative and predictable? Should the user be able to define their own heuristics? How should we allow the user to guide the proof search? All of these are open research questions. They are also the sort of scientific questions which are the hardest to answer :)
Untyped programs do exist. The author is just looking in the wrong direction: he’s looking down toward hardware when he should be looking up toward abstract programming languages.
Here’s an example of an “untyped” program; it’s the ‘id’ function from Haskell:

(λx.x) : (∀a. a -> a)
It works for every value you pass in, and those values range over an infinitely large domain. The type in the program says that for every type, you can pass in a value of that type and get back a value of the same type.
If you loosen the requirement that a program must represent a valid proof of some logical formula, then you can describe a large variety of programs. The validity of the program is determined only when the domain is selected. That’s how you get runtime errors and dynamic typing.
The original post is a parroting of Robert Harper’s dumbest ideas. It’s equivalent to taking my 2014 “Dynamic typing is good” post and using it as the basis for refuting the use of type systems.
It appears that the motive for writing the post came from a short discussion with somebody else. He hadn’t reached agreement on the subject, so he decided to write a very assertive blog post to drive his point home.
The Haskell id function is typed (it is polymorphic, but it has a universal type).
I’m afraid I don’t understand what definition of “typed” you are using here; could you clarify a little?
I’m not sure what connection the proofs-as-programs idea has here; it merely states that some interesting type systems happen to mirror some interesting proof theories. You can trivially go from a type system to a logic, though the result is rarely worth studying. Going in the opposite direction is usually more interesting, but seems irrelevant here.
Type systems not only mirror proof theories; they are isomorphic. And the isomorphism is very useful because theorems about programs can be stated in the same language as the programs themselves, as seen in languages such as Coq, Idris, Agda, Epigram… It gives verification powers to languages, and also gives implementation laboratories for logics.
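As a tiny illustration of the correspondence (a sketch in Haskell rather than the proof assistants named above, with made-up names), a term inhabiting a type doubles as a proof of the corresponding propositional formula:

```haskell
-- Propositions as types: pairs model conjunction, arrows model implication.

-- A proof of (A ∧ B) → A: projection out of a conjunction.
andElim1 :: (a, b) -> a
andElim1 (x, _) = x

-- A proof of (A → B) → (B → C) → (A → C): composition models transitivity.
cut :: (a -> b) -> (b -> c) -> (a -> c)
cut f g = g . f

-- A proof of A → (B → A): the K axiom.
axiomK :: a -> b -> a
axiomK x _ = x
```

In Coq or Agda the same idea scales from propositional tautologies up to full theorems about programs.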
I avoided the use of the word “isomorphic” because it’s not clear in what category such an isomorphism should live. But yes, it is a very important symmetry!
Isomorphic as in a one-to-one mapping from each to the other, so that you can always recover the original information/object after going from one representation to the other.
GHCi, version 8.0.1
λ :t id
id :: a -> a
Looks like a type to me… but that’s just Haskell’s type system. Are you trying to make a more general claim?
The more general claim is that it’s idiotic to continue the static vs. dynamic typing debate because it is a false dilemma. And it’s harmful to propagate it.
If you take the role of a dynamic typing zealot, it means you’ve discarded the study of logic and proof theory that could enhance your usual ways of reasoning about programming. You don’t need to stop using dynamic languages to reap the benefits, but once you’ve figured it out you want to use something like Prolog, Coq or Haskell more often.
If you go and become a static typing zealot, then you commit a fallacy of your own. It involves putting some particular set of deduction rules on a pedestal and discarding all methods of reasoning outside whatever framework you’ve decided to use. Just like the author asserts that everything, including assembly language and C, has to have a well-defined type, even if the type is useless because it fails to convey what the program is trying to solve. There are tons of logical systems and frameworks, and nobody has decreed that we should stick to intuitionistic logic.
Effectively, both stances discard the study of logic in one form or another. It literally harms scientific progress and the motion from theory into practice.
This article has no type. At least it is not sound. I think serious writing about type theory starts with studying the aptly named untyped lambda calculus. Terms in the untyped lambda calculus have no type.
That’s one way to look at it. From another perspective, the terms of the “untyped” (some would say “uni-typed”) lambda calculus have only one type. That’s how TAPL presents it, for example. I consider that serious enough.
But I would agree that studying the differences between Church’s 1936 “untyped” and 1940 “simply typed” lambda calculi is quite illustrative.
It seems utterly pointless to talk about a type system where everything has one and the same type. The only point of a type system is to distinguish between different types. Otherwise you just… don’t have a type system.
Sure, as long as you’re only talking about that one “untyped” language and its properties, there’s no point. But when you want to compare languages and their features, it’s all of a sudden very helpful.
This happens all the time in the history of math, by the way: as a theory develops and gets elaborated, something previously thought of as distinct gets recognized as a degenerate special case of something else. For example, natural numbers are included in the integers when we recognize the (not necessarily intuitive!) concept of negation, which arises out of subtraction, which in turn is desirable as a way of undoing addition.
It can be argued that there’s no point to a type system; it’s just a formal system, whether intended to be or not. It’s useful to know what these formal systems represent.
It’s actually a very useful perspective for understanding the semantics of such languages: Dana Scott’s reflexive objects are essentially just a codification of the fact that untyped can be seen as unityped.
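A quick Haskell sketch of the unityped view (an illustration only, not Scott’s actual domain-theoretic construction): a single type D that is, up to the constructor, its own function space lets every untyped term be typed at D, even self-application:

```haskell
-- D is a "reflexive object": D is isomorphic to its own function space D -> D.
newtype D = D { unD :: D -> D }

-- Application at the single type D.
apply :: D -> D -> D
apply = unD

-- λx. x, typed at D.
identityD :: D
identityD = D (\x -> x)

-- λx. x x, the classic term rejected by simple types, is fine at type D.
delta :: D
delta = D (\x -> apply x x)
```

Everything inhabits the one type D, which is exactly the “unityped” reading of the untyped calculus.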
Reminds me of that Guy Steele talk where he notices that type theorists primarily use an ad-hoc, unspecified, dynamically typed language to describe the denotational semantics of their language.
Yes, that is a good one! But the point is that the metalanguage is supposed to be smaller than the language and can be trusted… I think N.G. de Bruijn discusses this issue in depth.
Aside from research, I’m working my way through “Algebraic Theories: A Categorical Introduction to Universal Algebra” and a bunch of papers related to describing syntax & models of syntax in category theory.
For a rare programming interlude, I am debating whether I should add an elaborator to nbe-for-mltt or create a separate repo to do this.
Hey my research is kind of relevant here: Iris is a great logic for reasoning about lock free algorithms if you want to try playing around with them. See for instance Clausen’s lock-free hashtable or the lock-free concurrent stack with helping that I helped verify :)
While I agree with the analysis in this article it strikes me that these were precisely the caveats that “effectively” was meant to capture.
Well, I just finished coding up an implementation and writing a tutorial on higher-order unification, so I guess I’ll have to shamelessly self-promote that ;) It’s also my last two weeks in Denmark working at Aarhus University, so I’m finishing writing up the technical reports for the work I’ve done this summer. Explaining math is hard.
In my spare time I’m now fidgeting with an implementation of “typical ambiguity”, the algorithm used by Coq and others to implement universe polymorphism/simulate the experience of Type : Type. I’m also trying to get through Streicher’s paper on fibered category theory which is an excellent overview of the subject but very dense. If anyone wants to read this with me and chat about it I’d love the company!
Been a while since I posted to one of these threads. I’m at the University of Aarhus nowadays, working on Iris for the summer, so this week I have a lot of writing to do.
But I actually want to do some additional writing so I figured I’d ask, does anyone have any topics in type theory or just functional programming that they would be interested in seeing a blog post about?
Aren’t Böhm trees already a graph model of the lambda calculus?
Different sort of graph: rather than the graph of a term (a Böhm tree), I’m referring to the graph of the function that a lambda term represents. Böhm trees are another way to give a nice semantics to the lambda calculus, though.
Github says: “Sorry, this file is invalid so it cannot be displayed.”
Excitingly, it works for me… I love computers. Is it corrupted if you download it?
No, the download link delivers a working PDF.
It works now. There must have been a glitch with Github.
Also, as long as I’m posting things, I have been in the mood to write recently and was wondering if anyone had suggestions for introductory type theory concepts they’re curious about? Maybe even stuff to do with just functional programming?
The next “what are you working on this week” thread on Monday morning would be a great place to get attention to this question.
A lot of this is past the limits of my knowledge of math, but one part jumped out at me:
The unfortunate bit of this model is that it’s not extensional.
Could you unpack why this is? I’ve just learned what “extensional” means, and this is one of those really satisfying moments when there’s a term with a rigorous definition for a half-shaped concept that’s been rolling around in the back of my head. In this case, what about the definition of the model means it’s not extensional?
The basic idea is that in this model we can distinguish between the denotations of f and g even if f(x) = g(x) for all x. This is because f and g end up coded as collections of finite fragments of their graphs, and there’s no guarantee that distinct collections will produce distinct functions.
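To make that concrete with a toy Haskell illustration (a caricature of the model, not its actual definition): think of a denotation as a finite set of input/output pairs. Two distinct codes can then have the same pointwise behavior:

```haskell
-- A "code": a finite fragment of a function's graph.
type Code = [(Int, Int)]

-- Decoding a code back into a (partial) function.
decode :: Code -> Int -> Maybe Int
decode code x = lookup x code

-- Two distinct codes...
c1, c2 :: Code
c1 = [(0, 0), (1, 1)]
c2 = [(1, 1), (0, 0)]
-- ...with the same extension: decode c1 and decode c2 agree on every input,
-- yet c1 /= c2, so equality of codes is finer than pointwise equality.
```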
I’m at a new job working at Aarhus University in Denmark, so this week I’m spending a lot of time reading work related to my project. Most anything from http://iris-project.org/ has made its way onto my queue and is pretty excellent. For fun I’m also reading
Perhaps this is my unfamiliarity with Haskell, but what if we ask the reverse question. What does the following code do?

fizzBuzz :: (Functor f, Foldable t)
         => t (Integer -> Maybe String)
         -> f Integer
         -> f String
fizzBuzz rules = fmap (fromMaybe <$> show <*> ruleSet)
  where ruleSet = fold rules
I would have a harder time saying “oh, that prints fizz and buzz for multiples of 3 and 5”.
I think that snippet is incomplete, because you need to invoke it with the proper ruleset.
That snippet doesn’t contain the numbers 3 or 5, or the strings “fizz” and “buzz” (except in the variable name fizzBuzz).
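For what it’s worth, a rule set that recovers the classic behavior might look like this (my guess at what the article intends; the name classicRules is made up):

```haskell
import Data.Foldable (fold)
import Data.Maybe (fromMaybe)

-- The snippet from the post, completed with its `where` clause.
fizzBuzz :: (Functor f, Foldable t)
         => t (Integer -> Maybe String)
         -> f Integer
         -> f String
fizzBuzz rules = fmap (fromMaybe <$> show <*> ruleSet)
  where ruleSet = fold rules

-- The 3s and 5s live here, outside the quoted snippet.
classicRules :: [Integer -> Maybe String]
classicRules =
  [ \i -> if i `mod` 3 == 0 then Just "fizz" else Nothing
  , \i -> if i `mod` 5 == 0 then Just "buzz" else Nothing
  ]
```

Then `fizzBuzz classicRules [1..15]` ends in "fizzbuzz": the Monoid instance for Maybe concatenates the Just "fizz" and Just "buzz" results at multiples of 15.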
It’s not terribly legible because most of the work is hidden in all of this type information, which is informing GHC of which type class instances to use. I would agree that the use of fold (which relies on the monoid instance for functions) and the applicative tricks used to avoid writing \i -> fromMaybe (show i) (ruleSet i) are potentially overkill. The fold is a grayer area, though, because by using the monoid instance we may get a nice performance boost depending on which t we instantiate this function with: the Foldable instance for t may take advantage of being allowed to reassociate the combining function.
there sure are a lot of Haskell dudes who confuse over-golfing with elegance.
While this post is… overkill in many respects, the general trick of using foldMap to accumulate a collection of functions into a function which collects all the outputs is pretty useful: I just used it in a compiler I’m writing to snarf up all the variables mentioned in part of the control flow graph!
Better still, when I switched to an actual graph representation (as opposed to an edge-set) the code didn’t break because it’s all generic over the container.
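The trick looks something like this (a made-up miniature, not the actual compiler code):

```haskell
import qualified Data.Set as Set

-- A toy expression language.
data Expr = Var String | App Expr Expr

-- The variables of a single expression.
exprVars :: Expr -> Set.Set String
exprVars (Var x)   = Set.singleton x
exprVars (App f a) = exprVars f <> exprVars a

-- foldMap runs exprVars over any container of expressions and unions the
-- results, so swapping the container type doesn't break this code.
allVars :: Foldable t => t Expr -> Set.Set String
allVars = foldMap exprVars
```

Because allVars only assumes Foldable, moving from an edge-set to a richer graph representation is a matter of giving the new container a Foldable instance.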
I always worry about upvoting posts like this because they add to this “wow, what a lack of perspective on the engineering these FP folks have” impression but small abstractions occasionally scale up.
I think the point you just made is the thing he’s missing in the article. He’s excited about the things to come, but it appears as though he’s excited about what he just wrote, which is very similar to enterprise fizzbuzz, gross. https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition
Lobste.rs needs an April Fools' badge. :)
I was excited before I saw the satire tag :(
At least there was a satire badge!
Totally shameless self promotion but if we’re talking small compilers, this compiler for a functional language to C and this type inferencer may interest folks.
I’ve switched most of my own code from Haskell to SML and OCaml this last year and I have to agree, it’s a very nice world to live in :) I can’t say that it’s all perfect but I’m willing to overlook the annoyances for simple, clean, modular functional programming.
Out of curiosity, why did you switch? Personally I hear more stories about people migrating towards Haskell not away from it.
Well, part of the reason I switched was just to try something new. It felt like I’d spent a long time writing the same sort of hobby projects in Haskell and I felt like I was reaching a point of diminishing returns with what it was teaching me. Haskell has a huge and vibrant culture, but Haskell is not all of functional programming.

The biggest change was switching from expressing my abstractions by slotting them into the existing ones in Haskell (finding the applicatives in my program, factoring things into some stack of monads, making my DSL a monoid) to using what ML supports: modules! My programs now make use of far simpler tools locally but have a much richer structure in the large because they’re decomposed better into modules, functors and signatures. I think this makes them a lot more readable and I’m far happier with the results. I sometimes miss the fancier aspects of Haskell, but I never ended up missing the sugary bits like do-notation or type classes (much to my surprise).

Honestly, the biggest thing I miss is having lots of vocal, smart people building software in the same language I am. The communities are far smaller/quieter so there is a missing sense of communal pride and camaraderie. Also I have to write more utils.

Haskell is still my third favorite language though :)
That was a great story, thanks for sharing! What are your number 1 and number 2 languages?
as someone who “committed” to ocaml without ever really exploring SML i’d be interested in hearing if there were ever times you felt SML was a better choice for the program at hand.
SML is arguably a nicer language than Haskell or OCaml just because it’s a small core with relatively few corners that make no sense. I like to think of it as the intersection of the sane parts of most functional languages even though historically it predates most of them. It’s got a nice module language and the expression language is minimal but “enough” for everything that I want. I’ve also written a few compilers for ML so I know the language well enough that I don’t get surprised by what’s in there and I can hold most of the language in my head and regularly exercise all of it. I also like the fact that I have multiple implementations and an exceedingly well defined language, stability is arguably a practical reason but also it just appeals to me aesthetically.
This means that it’s great whenever I’m writing something that has essentially zero dependencies. Things like simple compilers and example programs (particularly for teaching, since there are fewer sharp edges to the language) are quite pleasant to write in SML because I don’t have to fight with the language. On the other hand, the second I need that one library because I want to snarf some JSON and generate sexps on the other end, or even to do simple things like produce nice colorized or unicode output, I’m basically dead in the water. There’s some tragic irony in SML being wonderful to build abstractions in while there’s almost no one to share them with! The lack of ecosystem, build tools, package management, etc., means that it’s hard to build things which depend on other things. As soon as I’m writing something that isn’t self-contained, it’s Haskell or OCaml time.
TLDR: SML when able, OCaml/Haskell when not
SML is arguably a nicer language than Haskell or OCaml just because it’s a small core with relatively few corners that make no sense.
Yeah. My go-to analogy is SML is related to Haskell the way C is related to Java. SML and C are both self-contained languages that can fit inside your head. Java and Haskell are both very rich environments with many bells and whistles and libraries.
(side note: the fact that SML has complete formal semantics is also pretty amazing. It’s not immediately useful to me, but it is to some, and it’s a unique and impressive achievement.)
The downsides to that are exactly as you describe: lack of libraries, lack of community. There has been some effort in the past few years to change that. https://github.com/standardml is one of the centers of that, which came out of the #sml channel on freenode. It’s still pretty meager, though.
The most useful bit to me of the SML semantics isn’t so much the formal part (though that’s nice), but that it at least has a very well-defined semantics, and it’s tested in practice by having multiple independent implementations (SML/NJ, MLton, Poly/ML, etc.). OCaml, by contrast, is one of the defined-by-its-only-implementation style languages. There are some advantages to that, such as easier experimentation with new features. But I like the multi-implementation approach. (This is also one of the things I like about Common Lisp, which has an even broader range of independent implementations with different characteristics.)
thanks, that’s about what i suspected. it is, as you say, sad; there are a lot of ML dialects out there (AliceML and Mythryl looked especially interesting, and MLton’s support for writing libraries that can be called from other languages sounded intriguing) but what i’m mostly interested in is writing desktop apps, and for both CLI and GUI apps it seems to be OCaml or nothing. i experiment with F# every so often, as being more promising for cross-platform GUIs, but i’ve never actually gotten it working on linux.
Yep! I try to steer clear of the tiny pockets of SML dialects because it seems like a bit of a hellscape of abandoned languages but there have been a lot of small fragmented pushes in interesting directions there.
I’d like to finish this write-up talking about understanding classical logic computationally. I think there’s an interesting argument to be made for intuitionism here, so hopefully that’ll come out well.
Programming wise I’ve got some feature I want to add to JonPRL’s tactic system (making reduce/unfold better behaved) so hopefully I’ll be writing that this week :)