1. 15

  2. 12

    I had to chuckle at two parts of the article:

    Haskell, at its core, is simple: it is just a polymorphic lambda calculus with lazy evaluation plus algebraic data types and type classes.

    Well… he’s not that wrong, but if you don’t happen to know all the jargon…

    Memory safe languages form two groups: the ones that rely on a garbage collector, and Rust.

    Well, a group of 1 is a group, you’re not wrong.

    1. 11

      I’m a bit skeptical when an enthusiastic blockchain development outfit proclaims something to be the future. Has any progress been made since this article was posted a year back?

      That said, the article itself provides a good overview of the space and the issues with the various approaches.

      1. 5

        Has any progress been made since this article was posted a year back?

        Yes, most notably:

        • Standalone kind signatures, a new feature that allows one to write kind signatures for data types, classes, type synonyms, and type families (a short example follows this list).
        • Consistent implicit quantification rules for type variables in type and kind positions.
        • New lexical rules for (!) and (~), a new parser architecture with a late validation step, and numerous other changes to the parser. This will be used to unify the grammars for types and terms.
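
        For a concrete feel of the first item, here is a minimal sketch of a standalone kind signature (MyMaybe is an illustrative stand-in, not something from GHC itself):

            {-# LANGUAGE StandaloneKindSignatures #-}

            import Data.Kind (Type)

            -- The kind of the type constructor is declared separately,
            -- analogous to a standalone type signature for a term.
            type MyMaybe :: Type -> Type
            data MyMaybe a = MyNothing | MyJust a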

        Also, plenty of internal refactorings and minor improvements. You can scroll through my commits and see for yourself: https://github.com/ghc/ghc/commits?author=int-index

        enthusiastic blockchain development outfit proclaims something to be the future

        For what it’s worth, I’m not a blockchain enthusiast at all; I’m a dependent types enthusiast. Serokell funds 100% of my work on GHC, and that work is unrelated to the fact that we also have clients interested in blockchain solutions and experts in that area. My job is compiler engineering and my interest is language design, so those are the topics I cover in the article.

        That said, the article itself provides a good overview of the space and the issues with the various approaches.

        Thank you.

      2. 7

        The present of Dependent Haskell looks like this (paper).

        1. 3

          Some things are a bit unclear to me about constructions like the ones in that “Glimpse of Dependent Haskell”:

          We’ve got three numeric types:

          • data N: inductively defined natural numbers
          • data Fin n: inductively defined bounded natural numbers
          • Int: regular built-in integers

          I don’t really see how we can bridge between these at run-time. E.g., lookupS requires a Fin n argument, and the call only typechecks if it’s a valid index. But type-checking happens at compile time, so if we read a list from somewhere and an index from somewhere else at runtime, I expect it would be impossible to even make use of lookupS. Is that correct?

          In other words, (how) is it possible to get runtime values to the type level? I’m quite confused…

          1. 6

            It’s always a weird trick. You never really get runtime values at compile time; instead you get code which magics up the properties you need. That code is composed of two pieces: one which performs a runtime verification of a property and produces a fancy type, and one which relates those fancy types together into a verified algorithm.

            In the case of indexing a list of known length, you may have a type Vec n Int for some n. You can write code which reflects the compile-time n down to runtime and evaluates a check inBounds :: Int -> Maybe (Fin n). This can be applied to integers received at runtime and yields either Nothing or an integer carrying the property. The code path following a successful result can be shown to be safe.
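
            A minimal sketch of that shape, with hand-rolled Nat/Fin/Vec types and an illustrative SNat singleton for the length (the names are assumptions for the example, not the paper’s exact definitions):

                {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

                data Nat = Z | S Nat

                -- Bounded naturals: Fin n has exactly n inhabitants.
                data Fin (n :: Nat) where
                  FZ :: Fin ('S n)
                  FS :: Fin n -> Fin ('S n)

                -- Length-indexed vectors.
                data Vec (n :: Nat) a where
                  VNil  :: Vec 'Z a
                  VCons :: a -> Vec n a -> Vec ('S n) a

                -- A runtime witness (singleton) of the compile-time length.
                data SNat (n :: Nat) where
                  SZ :: SNat 'Z
                  SS :: SNat n -> SNat ('S n)

                -- The runtime check: turn a plain Int read from the outside
                -- world into evidence that it is a valid index, or fail.
                inBounds :: SNat n -> Int -> Maybe (Fin n)
                inBounds SZ     _ = Nothing
                inBounds (SS _) 0 = Just FZ
                inBounds (SS m) i
                  | i > 0     = FS <$> inBounds m (i - 1)
                  | otherwise = Nothing

                -- With the evidence in hand, lookup is total: no bounds
                -- check, no Maybe.
                lookupV :: Vec n a -> Fin n -> a
                lookupV (VCons x _)  FZ     = x
                lookupV (VCons _ xs) (FS i) = lookupV xs i

            The Fin n only ever comes into existence after a successful runtime check, so the type of lookupV records a property that really was verified.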

          2. 2

            I’m more convinced by the HN commenter who says that refinement types, not dependent types, might be the future. This blog post feels a bit tone-deaf, as though we’re supposed to accept that Haskell is the one true language and that the only thing keeping it from perfection is that its types aren’t strong enough (yet). It’s probably more off-putting than not to people outside the FP community.

            1. 6

              What are Haskellers meant to do with this feedback? It comes up often, and it seems it’s always either that Haskellers should admit that all programming languages are the same and the choice doesn’t matter, or that if Haskellers recognise language features that provide genuine leverage they shouldn’t talk about them, for fear of offending the egos of those who don’t understand this stuff yet.

              1. 1

                Well, certainly something to take away from this feedback is to maybe not claim you know what the future of software development will be like? I see the tone-deafness here, and I would count myself among the “Haskellers”. While those “bruised ego” reactions exist, this doesn’t seem to be one of them.

                1. 2

                  I generally agree with your scepticism around claims like this, and I can’t say personally I would have made the same claim. At the same time — and I’d ask you to interpret my words charitably now, and not as an apology for the post’s hyperbole — it is indeed difficult to strike a balance between selling a novel approach to drive adoption and presenting the same in an understated and likely less biased way. Humans are emotional creatures after all.

                  1. 3

                    In this case the author probably believes that dependent types in Haskell is indeed “the future of software development,” so it’s useful to have people who credibly criticize this claim, like pron on Hacker News:

                    The problem is that dependent types (and the very idea of “correctness by construction”) suffer from fundamental, serious limitations – both in theory and in practice – that the author does not seem to be aware of. He also does not seem to be aware of all the other approaches for software verification and correctness (sound static analysis, model checking, concolic testing) that have so far shown greater promise, and formal methods research focuses on them rather than on dependent types (the approach that scales least and suffers from the most limitations); in fact, the last time I spoke with a dependent types researcher, he told me that now they’re trying to weaken the soundness of their type systems (sort of gradual-dependent-types) to try and fight the problems types suffer from. I’m not saying dependent types don’t have some uses, or that they won’t change enough so they could play a central role in software correctness, but thinking that they’re the future of software correctness (in their current form) shows a complete lack of familiarity with this field.

                    And a comment further down with more substantial criticism of dependent types for program verification.

                    1. 3

                      I agree with everyone criticising everything, but as long as we’re arguing about what the future holds, I’m reluctant to take anyone’s claim as more than conjecture.

                      1. 2

                        I agree with this criticism. Dependent types feel like an all-or-nothing approach, whereas contracts (or refinement types, which are pretty similar) allow you to pick between runtime verification, property testing, abstract interpretation, or full verification. With dependent types you need to specify all that you want to verify upfront, and ensure all the properties are valid even as you’re writing the first prototype of your code.
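
                        As a tiny illustration of that sliding scale (the function is made up for the example), the same precondition can start life as an ordinary runtime check, and a refinement-type tool could later discharge it statically without restructuring the code around an indexed type:

                            -- Runtime-verified contract: callers get a Maybe instead of a
                            -- crash, and nothing else in the program has to change.
                            lookupChecked :: [a] -> Int -> Maybe a
                            lookupChecked xs i
                              | i >= 0 && i < length xs = Just (xs !! i)
                              | otherwise               = Nothing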

              2. 2

                Previous discussions of this post can be found at r/Haskell and HN.

                1. 1

                  I tend to find lazy evaluation directly antithetical to performance. Am I mistaken?

                  1. 7

                    I don’t think that lazy evaluation is antithetical to performance, but it is hard to get right in a language. I think Haskell and Rust get lazy evaluation right most of the time in their respective domains (Haskell: everywhere; Rust: iterators and async/await), but it can be a bit tricky to debug for folks not expecting lazy evaluation. It does help reduce how much work needs to be done at runtime, and in some cases, dramatically reduce how many allocations are needed for a given unit of work.
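
                    As a small made-up example of the “less work” point: a binding under a where clause is only computed if the branch that uses it is taken, with no restructuring needed.

                        report :: [Int] -> String
                        report xs
                          | null xs   = "empty input"
                          | otherwise = "mean = " ++ show mean
                          where
                            -- Never evaluated when the list is empty; in a strict
                            -- language this binding would either run eagerly or have
                            -- to be moved into the branch by hand.
                            mean = fromIntegral (sum xs) / fromIntegral (length xs) :: Double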

                    1. 2

                      I think Haskell and Rust get lazy evaluation right most of the time in their respective domains (Haskell: everywhere; Rust: iterators and async/await)

                      A lot of modern languages (Rust, Python, Clojure, Elixir, even Java) have gotten far just by making lazy iterators first-class. To me this seems like a reasonable enough compromise; I guess I’m still confused as to what we gain by making laziness more pervasive and consequently harder to get right. Of course, pervasive laziness doesn’t make sense in any of the languages I mentioned since none of them enforce functional purity, but it doesn’t immediately follow in my mind that pure should mean lazy.

                      I’d be interested in seeing a good example of Haskell code that becomes more expressive/performant than its equivalent in another language thanks to laziness features that aren’t lists.

                      1. 4

                        I’d be interested in seeing a good example of Haskell code that becomes more expressive/performant than its equivalent in another language thanks to laziness features that aren’t lists.

                        IIRC, the second half of the paper Why Functional Programming Matters has a few examples of lazy evaluation as a means for better function composition, i.e., building more complex/interesting stuff from simpler parts. Bear in mind that some of these examples rely on generating infinite data structures, like all the possibilities of a board game, and then computing things over them, reducing them, pruning them, and finally only evaluating what is needed.

                        Even if the examples are not all that convincing, I highly encourage anyone who hasn’t already to read that paper. I found the explanations of the core concepts of functional programming very clear and inspiring, and the approach of focusing purely on the code and its behavior instead of accessory stuff like types (I think the paper doesn’t even mention them) quite enlightening compared to some more modern approaches to FP :)
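
                        A rough paraphrase of the paper’s game-tree idea (not its exact code): the tree of positions is written as if it were fully built, and pruning plus evaluation only ever force the part that is actually inspected.

                            data Tree a = Node a [Tree a]

                            -- The (conceptually infinite) tree of positions reachable
                            -- from a start position; laziness means none of it exists
                            -- until someone looks at it.
                            gameTree :: (pos -> [pos]) -> pos -> Tree pos
                            gameTree moves p = Node p (map (gameTree moves) (moves p))

                            -- Cut the tree off at depth n: the only part ever built.
                            prune :: Int -> Tree a -> Tree a
                            prune 0 (Node x _)  = Node x []
                            prune n (Node x ts) = Node x (map (prune (n - 1)) ts)

                            mapTree :: (a -> b) -> Tree a -> Tree b
                            mapTree f (Node x ts) = Node (f x) (map (mapTree f) ts)

                            -- Minimax over the pruned, statically scored tree.
                            maximise, minimise :: Tree Int -> Int
                            maximise (Node x []) = x
                            maximise (Node _ ts) = maximum (map minimise ts)
                            minimise (Node x []) = x
                            minimise (Node _ ts) = minimum (map maximise ts)

                            -- Each stage is written independently; laziness glues them
                            -- into a single bounded traversal.
                            evaluatePosition :: (pos -> [pos]) -> (pos -> Int) -> pos -> Int
                            evaluatePosition moves score =
                              maximise . mapTree score . prune 5 . gameTree moves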

                        1. 2

                          Bookmarked the paper for later. Thanks!

                        2. 4

                          I’d be interested in seeing a good example of Haskell code that becomes more expressive/performant than its equivalent in another language thanks to laziness features that aren’t lists.

                          I’ve barely done any Haskell so take this with a grain of salt, but one benefit I’ve been told is that there’s no need to build separate “streaming” APIs. Everything is streaming by default thanks to laziness: network interaction, parsing, decompression, etc.
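
                          A minimal sketch of that (the file name is made up): readFile returns its contents lazily, so the counting below happens incrementally rather than after loading the whole file. The usual caveat is that many Haskellers now prefer explicit streaming libraries such as conduit or pipes, since lazy I/O makes resource lifetimes harder to reason about.

                              import Data.List (isInfixOf)

                              -- Count the lines mentioning "ERROR" without holding the
                              -- whole file in memory: lines, filter and length all consume
                              -- the lazily read string on demand.
                              main :: IO ()
                              main = do
                                contents <- readFile "access.log"  -- illustrative file name
                                print (length (filter ("ERROR" `isInfixOf`) (lines contents)))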

                          1. 1

                            I’ve seen talks in which SPJ refers to laziness as “the hair shirt”: the metaphor here is that he finds it uncomfortable but having it in the language makes it easier for the community to stick with the functional purity convention.

                        3. 5

                          Yes, you are mistaken. Laziness often buys you performance: you do less work, even with simple/naive implementations. You also get composition of these implementations; e.g., quickselect is take n . quicksort.
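
                          For instance, with a naive lazy quicksort (a sketch, not a tuned implementation):

                              quicksort :: Ord a => [a] -> [a]
                              quicksort []     = []
                              quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                                                 ++ [p]
                                                 ++ quicksort [x | x <- xs, x >= p]

                              -- The n smallest elements. Because (++) and the recursive
                              -- calls are lazy, only as much of the sort as take demands is
                              -- ever performed, so this does substantially less work than
                              -- sorting the whole list.
                              smallestN :: Ord a => Int -> [a] -> [a]
                              smallestN n = take n . quicksort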

                          Want to explain why you think it’s directly antithetical?

                          1. 1

                            How true is that if you’re using an actually decent quicksort (one that works in place on an array, not the toy example)? Does take n still benefit from laziness? I think providing an actual function for that (like C++ does) would be more predictable and faster in the end.

                            1. 2

                              In-place and lazy?

                          2. 2

                            People will say you are and then write 10,000 word posts on how to make it fast (by making it more strict) and how to detect memory leaks that are impossible to reason about.
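
                            The canonical small-scale example of what those posts are about, a lazily accumulating fold and its strict fix:

                                import Data.List (foldl')

                                -- The classic leak: foldl builds a chain of unevaluated (+)
                                -- thunks as long as the list before anything gets added up.
                                leakySum :: [Int] -> Int
                                leakySum = foldl (+) 0

                                -- The standard fix: a left fold that forces the accumulator
                                -- at each step.
                                strictSum :: [Int] -> Int
                                strictSum = foldl' (+) 0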

                            1. 1

                              Often the problem is that something in the program is already strict, for various reasons, so you have to start giving in, and that unravels into “make X strict, make Y strict, make Z strict, alright you can stop there, OK don’t touch anything”.

                              1. 4

                                I think there’s a reasonable argument to be made that most code should be eagerly evaluated by default with some exceptions made for lazy evaluation. There’s a good reason why Idris and PureScript opted for eager evaluation as the default.

                                1. 2

                                  As someone who has worked on PureScript and used it quite a bit professionally: strictness is entirely a concession to the JavaScript runtime. There are plenty of times you have to put up with code duplication because PureScript is strict. It’s not that strict evaluation is a good default; PureScript’s goal is to have simple JS output, so it has to be strict.

                                  1. 1

                                    Thanks for your insights! I also found Idris’ answer to why it isn’t lazily evaluated to be interesting as well: https://github.com/idris-lang/Idris-dev/wiki/Unofficial-FAQ#why-isnt-idris-lazy.