While I agree with the analysis in this article it strikes me that these were precisely the caveats that “effectively” was meant to capture.
Well I just finished coding up an implementation and writing a tutorial on higher-order unification, so I guess I’ll have to shamelessly self-promote that ;) It’s also my last two weeks in Denmark working at Aarhus University, so I’m finishing writing up the technical reports for the work I’ve done this summer. Explaining math is hard.
In my spare time I’m now fidgeting with an implementation of “typical ambiguity”, the algorithm used by Coq and others to implement universe polymorphism/simulate the experience of Type : Type. I’m also trying to get through Streicher’s paper on fibered category theory which is an excellent overview of the subject but very dense. If anyone wants to read this with me and chat about it I’d love the company!
Been a while since I posted to one of these threads. I’m at the University of Aarhus nowadays, working in Iris for the summer, so this week I have a lot of writing to do.
But I actually want to do some additional writing so I figured I’d ask, does anyone have any topics in type theory or just functional programming that they would be interested in seeing a blog post about?
Different sort of graph: rather than a graph of a term (its Böhm tree), I’m referring to the graph of the function that a lambda term represents. Böhm trees are another way to give a nice semantics to the lambda calculus, though.
The goal is that by using Gödel numbering we can encode a continuous function P(N) -> P(N) as an element of P(N). If we used finite sets we would instead have P(P_fin(N)), which would not give us the desired retraction exhibiting [D -> D] as a retract of D that we need in order to model the lambda calculus.
Also, as long as I’m posting things, I have been in the mood to write recently and was wondering if anyone had suggestions for introductory type theory concepts they’re curious about? Maybe even stuff to do with just functional programming?
The next “what are you working on this week” thread on Monday morning would be a great place to get attention to this question.
A lot of this is past the limits of my knowledge of math, but one part jumped out at me:
The unfortunate bit of this model is that it’s not extensional.
Could you unpack why this is? I’ve just learned what “extensional” means, and this is one of those really satisfying moments when there’s a term with rigorous definition for a half-shaped concept that’s been rolling around in the back of my head. In this case, what about the definition of the model means it’s not extensional?
The basic idea is that in this model we can distinguish between the denotations of f and g even if f(x) = g(x) for all x. This is because f and g ended up coded as the collection of finite fragments of their graphs and there’s no guarantee that distinct collections will produce distinct functions.
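If it helps to see it concretely, here is a toy Haskell caricature (the names and the representation are mine, not the actual model):

```haskell
-- Toy illustration only (not the actual graph model): code a function
-- as a list of finite graph fragments, i.e. finite sets of
-- input/output pairs.
type Fragment = [(Int, Int)]
type Code     = [Fragment]

-- Two syntactically distinct codes...
codeF, codeG :: Code
codeF = [[(0, 0)], [(1, 1)]]   -- the graph split into two fragments
codeG = [[(0, 0), (1, 1)]]     -- the same graph as one fragment

-- ...that decode to the same partial function.
decode :: Code -> Int -> Maybe Int
decode code x = lookup x (concat code)
```

decode codeF and decode codeG agree on every input, yet codeF /= codeG: the model can tell the codes apart even though the functions they denote are equal.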
I’m at a new job working at Aarhus University in Denmark, so this week I’m spending a lot of time reading work related to my project. Most anything from http://iris-project.org/ has made its way onto my queue and is pretty excellent. For fun I’m also reading
Perhaps this is my unfamiliarity with Haskell, but what if we ask the reverse question: what does the following code do?
fizzBuzz :: (Functor f, Foldable t)
         => t (Integer -> Maybe String)
         -> f Integer
         -> f String
fizzBuzz rules = fmap (fromMaybe <$> show <*> ruleSet)
  where
    ruleSet = fold rules
I would have a harder time saying “oh, that prints fizz and buzz for multiples of 3 and 5”.
I think that snippet is incomplete, because you need to invoke it with the proper ruleset.
That snippet doesn’t contain the numbers 3 or 5, or the strings “fizz” and “buzz” (except in the variable name fizzBuzz).
It’s not terribly legible because most of the work is hidden in the type information, which tells GHC which type class instances to use. I would agree that using fold (which relies on the monoid instance for functions) and the applicative tricks to avoid writing \i -> fromMaybe (show i) (ruleSet i) is potentially overkill. The fold is a grayer area, though: by using the monoid instance we may get a nice performance boost depending on which t we instantiate this function with, since a particular t may exploit the freedom to reassociate the combining function.
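For completeness, here’s a self-contained version with the imports and a ruleset filled in (the rule definitions are my guess at what the original article used):

```haskell
import Data.Foldable (fold)
import Data.Maybe (fromMaybe)

fizzBuzz :: (Functor f, Foldable t)
         => t (Integer -> Maybe String)
         -> f Integer
         -> f String
fizzBuzz rules = fmap (fromMaybe <$> show <*> ruleSet)
  where
    ruleSet = fold rules

-- One rule per divisor; the Maybe and String monoids splice
-- "fizz" and "buzz" into "fizzbuzz" at multiples of 15.
rule :: Integer -> String -> Integer -> Maybe String
rule d s i = if i `mod` d == 0 then Just s else Nothing

demo :: [String]
demo = fizzBuzz [rule 3 "fizz", rule 5 "buzz"] [1 .. 15]
```

Running through demo, numbers with no matching rule fall through fromMaybe to show, and 15 picks up both rules at once.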
While this post is… overkill in many respects, the general trick of using foldMap to accumulate a collection of functions into a single function that collects all of their outputs is pretty useful: I just used it in a compiler I’m writing to snarf up all the variables mentioned in part of the control flow graph!
Better still, when I switched to an actual graph representation (as opposed to an edge-set) the code didn’t break because it’s all generic over the container.
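A minimal sketch of the trick, with made-up names (the real compiler code is more involved):

```haskell
import Data.Foldable (fold)
import Data.Set (Set)
import qualified Data.Set as Set

-- The trick: each extractor maps a node to the variables it mentions,
-- and fold glues them into one extractor via the monoid on functions
-- (pointwise) combined with the monoid on Set (union).
collectAll :: (Foldable t, Ord b) => t (a -> Set b) -> a -> Set b
collectAll = fold

-- A made-up node type standing in for a piece of a control flow graph.
data Node = Node { defs :: [String], uses :: [String] }

vars :: Node -> Set String
vars = collectAll [Set.fromList . defs, Set.fromList . uses]
```

Because collectAll is generic over the container of extractors, swapping the list for another Foldable (as with the edge-set to graph change above) doesn’t touch this code at all.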
I always worry about upvoting posts like this because they add to the “wow, what a lack of perspective on engineering these FP folks have” impression, but small abstractions occasionally scale up.
I think the point you just made is the thing he’s missing in the article. He’s excited about the things to come, but it appears as though he’s excited about what he just wrote, which is very similar to enterprise fizzbuzz, gross. https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition
Totally shameless self promotion but if we’re talking small compilers, this compiler for a functional language to C and this type inferencer may interest folks.
I’ve switched most of my own code from Haskell to SML and OCaml this last year and I have to agree, it’s a very nice world to live in :) I can’t say that it’s all perfect but I’m willing to overlook the annoyances for simple, clean, modular functional programming.
Out of curiosity, why did you switch? Personally I hear more stories about people migrating towards Haskell not away from it.
Well part of the reason I switched was just to try something new. It felt like I’d spent a long time writing the same sort of hobby projects in Haskell and I felt like I was reaching a point of diminishing returns with what it was teaching me. Haskell has a huge and vibrant culture but Haskell is not all of functional programming.
The biggest change was switching from expressing my abstractions by slotting them into Haskell’s existing ones (finding the applicatives in my program, factoring things into some stack of monads, making my DSL a monoid) to using what ML supports: modules! My programs now use far simpler tools locally but have a much richer structure in the large because they’re better decomposed into modules, functors and signatures. I think this makes them a lot more readable and I’m far happier with the results. I sometimes miss the fancier aspects of Haskell, but I never ended up missing the sugary bits like do-notation or type classes (much to my surprise).
Honestly, the biggest thing I miss is having lots of vocal, smart people building software in the same language I am. The communities are far smaller/quieter so there is a missing sense of communal pride and camaraderie. Also I have to write more utils.
Haskell is still my third favorite language though :)
as someone who “committed” to ocaml without ever really exploring SML i’d be interested in hearing if there were ever times you felt SML was a better choice for the program at hand.
SML is arguably a nicer language than Haskell or OCaml just because it’s a small core with relatively few corners that make no sense. I like to think of it as the intersection of the sane parts of most functional languages, even though historically it predates most of them. It’s got a nice module language, and the expression language is minimal but “enough” for everything I want. I’ve also written a few compilers for ML, so I know the language well enough that I don’t get surprised by what’s in there; I can hold most of the language in my head and regularly exercise all of it. I also like having multiple implementations and an exceedingly well-defined language: stability is arguably a practical reason, but it also just appeals to me aesthetically.
This means SML is great whenever I’m writing something with essentially zero dependencies. Things like simple compilers and example programs (particularly for teaching, since there are fewer sharp edges of the language) are quite pleasant to write in SML because I don’t have to fight the language. On the other hand, the second I need that one library because I want to snarf some JSON and generate sexps on the other end, or even simple things like nice colorized or Unicode output, I’m basically dead in the water. There’s a tragic irony in SML being wonderful for building abstractions when there’s almost no one to share them with! The lack of ecosystem, build tools, package management, etc. means it’s hard to build things which depend on other things. As soon as I’m writing something that isn’t self-contained, it’s Haskell or OCaml time.
TLDR: SML when able, OCaml/Haskell when not
SML is arguably a nicer language than Haskell or OCaml just because it’s a small core with relatively few corners that make no sense.
Yeah. My go-to analogy is SML is related to Haskell the way C is related to Java. SML and C are both self-contained languages that can fit inside your head. Java and Haskell are both very rich environments with many bells and whistles and libraries.
(side note: the fact that SML has complete formal semantics is also pretty amazing. It’s not immediately useful to me, but it is to some, and it’s a unique and impressive achievement.)
The downsides to that are exactly as you describe: lack of libraries, lack of community. There has been some effort in the past few years to change that. https://github.com/standardml is one of the centers of that, which came out of the #sml channel on freenode. It’s still pretty meager, though.
To me, the most useful bit of the SML semantics isn’t so much the formal part (though that’s nice) but that the language at least has a very well-defined semantics, tested in practice by having multiple independent implementations (SML/NJ, MLton, Poly/ML, etc.). OCaml, by contrast, is one of the defined-by-its-only-implementation languages. There are some advantages to that, such as easier experimentation with new features, but I like the multi-implementation approach. (This is also one of the things I like about Common Lisp, which has an even broader range of independent implementations with different characteristics.)
thanks, that’s about what i suspected. it is, as you say, sad; there are a lot of ML dialects out there (AliceML and Mythryl looked especially interesting, and MLton’s support for writing libraries that can be called from other languages sounded intriguing) but what i’m mostly interested in is writing desktop apps, and for both CLI and GUI apps it seems to be OCaml or nothing. i experiment with F# every so often, as it seems more promising for cross-platform GUIs, but I’ve never actually gotten it working on linux.
Yep! I try to steer clear of the tiny pockets of SML dialects because it seems like a bit of a hellscape of abandoned languages but there have been a lot of small fragmented pushes in interesting directions there.
I’d like to finish this write up talking about understanding classical logic computationally. I think there’s an interesting argument to be made for intuitionism here so hopefully that’ll come out well.
Programming-wise I’ve got a feature I want to add to JonPRL’s tactic system (making reduce/unfold better behaved), so hopefully I’ll be writing that this week :)
What makes this interesting? The answer is obviously yes, and the SO answer doesn’t seem to offer any great insight.
It’s only an interesting question because of the confused nature of definitions for strongly/weakly typed. Most of the arguments on the matter that I’ve seen are just people refusing to align definitions before debating this.
Invariably, one hacker’s strong typing is another hacker’s weak typing. Look at them trying to argue that Perl’s types also don’t change, but are merely overloads.
Learning how (ultra)metric spaces work so I can read some papers on realizability. Lately my research project has meant that I’m studying a lot of really cool math so I’m quite happy. I’m also writing a bit on the distinction between formal and computational type theories.
Outside of CS and math, I’ve been cooking a lot and now have to eat my way through the ungodly amount of bread I baked this weekend.
Hm, what interesting ultrametrics are there besides the p-adic? I don’t really know of any applications to logic/computery stuff.
It really is curious how CS people seem to learn a completely different kind of maths than maths students learn.
One interesting one is on Term × (N ∪ {∞}): the idea is that we pair a term with the number of steps it needs to compute, and then say that the distance corresponds to how many steps the two computations are indistinguishable for. This is handwavy, but see “A Metric Model of PCF” for a more precise treatment.
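From memory (so take the details with a grain of salt and check the paper), the metric has roughly this shape:

```latex
% Distance between two computations x and y: the longer they agree, the closer.
d(x, y) = 2^{-\sup\{\,k \in \mathbb{N} \;\mid\; x \text{ and } y \text{ are indistinguishable for } k \text{ steps}\,\}}
```

with the convention that the distance is 0 when they agree forever. The ultrametric inequality d(x, z) ≤ max(d(x, y), d(y, z)) then falls out: if x agrees with y for j steps and y agrees with z for k steps, then x agrees with z for at least min(j, k) steps.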
Also yeah, my understanding of more advanced math is essentially what’s required for logic/programming languages so it’s a very synthetic/constructive approach to algebra/category theory. I think it’s nice though :)
Oh god, too much logic and set theory, I’m running away back to the safe confines of zeta functions and integral transforms. I might try again to read this later. I didn’t even know what “PCF” meant.
You have a funny definition of “safe” from my perspective :) I suppose we’re starting a type theory reading list so if you wanted to know what pcf is…
We should talk format and expectations as well: weekly reading goals, reviews, co-created study guide, forums, chat rooms, meeting times: what do you think produces the best study group?
Opinion: each week one person ought to take responsibility for being “the expert” on the chapter. They should feel comfortable fielding questions, guiding discussions, and so on. This “expert” should probably cycle, but at the paper club I’m in, this prevents weekly meetings where there’s no discussion because no one knows what’s happening.
We could also reach out to the selected author (who most likely has better things to do, but still, costs nothing to ask!).
Having a regular meeting is definitely a huge motivator for those of us who have trouble dedicating ten hours a week. :) Timezones are going to play a large role in scheduling that, as they always do…
Types and Programming Languages. Kind of the standard text, comprehensive, has (sketches of) solutions to exercises in back. No free access.
Bob Harper’s Practical Foundations of Programming Languages is an alternative to TAPL. There is a draft for free online currently (warning, large PDF).
Hey my research is kind of relevant here: Iris is a great logic for reasoning about lock free algorithms if you want to try playing around with them. See for instance Clausen’s lock-free hashtable or the lock-free concurrent stack with helping that I helped verify :)