1. 37
  1.  

  2. 29

    Some things I really like about this:

    • Listing all the points at once, and then going into detail gives a good overview.
    • Acknowledging when languages offer similar benefits
    • Each bullet point includes a “click here to learn more” link

    All very good for pedagogy; I’m going to have to steal them.

    And a couple of nitpicks:

    Pure functions are a joy to test and debug … their correctness is composable

    This is technically untrue: their type-safety is composable, and their purity is composable, but their correctness is not. Correctness is when the behavior matches the spec, and when you call f . g you presumably have a different intention in mind than just “f’s spec” and “g’s spec”. An example of this is min = (head . sort).
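
    To make that concrete, here is a minimal sketch (min' is my name for it): head and sort each meet their own specs, but the composition answers to a new spec (“return the minimum”) that needs its own argument, and is e.g. partial on [].

    ```haskell
    import Data.List (sort)

    -- Each piece is correct w.r.t. its own spec, but the composition
    -- carries a *new* spec ("return the minimum"), whose correctness
    -- must be argued separately -- e.g. it fails on the empty list.
    min' :: Ord a => [a] -> a
    min' = head . sort
    ```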

    Now we are 140 times faster than Python, and within 20% of the highly optimized NumPy implementation:

    The thing is, NumPy is not “optimized C”. NumPy’s convolve is just a skin over the correlate2 function, which is general-purpose and has a lot of safety checks. So we’re actually comparing handrolled, special-purpose Haskell to “enterprise-grade”, general-purpose Python/C. We saw a similar problem with the “rewriting wc” meme.

    Despite these nitpicks, this is still a very good post. It also made me realize I’m not really the target audience for this anymore, as I kept going “I thought everyone knew about laziness!” Turns out no, I’m just in a bubble :P

    1. 4

      I really dislike the increased use of metaprogramming in Haskell. It’s often used to derive things like JSON codecs, WHICH IS YOUR API. You can no longer change your internal data structures without (often accidentally) changing your public API.

      1. 9

        This depends on several things.

        • The stability guarantees that your API provides. If the consumers on the other end of the wire are defined in the same codebase (e.g. it’s a GHCJS front-end), then it’s ok to change the API accidentally.

        • You can derive codecs for types that represent a specific version of your API, and they can be entirely separate from your internal data structures that can evolve rapidly. (This implies writing conversion functions between them). In this case these API types are basically a schema for your API, which isn’t a bad thing.
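
        A minimal sketch of that pattern (all names are hypothetical, and Show/Read stand in for a real JSON codec):

        ```haskell
        -- Wire type: a frozen schema for API v1; codecs derive from it.
        data UserV1 = UserV1 { v1Name :: String, v1Age :: Int }
          deriving (Show, Read, Eq)

        -- Internal type: free to evolve without touching the wire format.
        data User = User { userName :: String, userAge :: Int, userTags :: [String] }

        -- The conversion function is where drift between the two shapes surfaces.
        toV1 :: User -> UserV1
        toV1 u = UserV1 (userName u) (userAge u)

        -- Show/Read as a stand-in for a real JSON codec.
        encodeV1 :: UserV1 -> String
        encodeV1 = show

        decodeV1 :: String -> UserV1
        decodeV1 = read
        ```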

        That said, I agree that generally, it’s better not to derive the API. In fact, I even wrote a library for defining JSON APIs with bidirectional codecs, and it has no TH generators by design :-) https://github.com/monadfix/jijo

        1. 3

          jijo looks super cool. I’ve wanted to see more explorations in this area, thank you!

        2. 3

          Isn’t that what DTOs are for? Clearly there is a gap between what you send/receive and the business logic you wish to keep track of in your data structure, right?

          1. 2
            1. That should only become a serious problem if your organization’s structure makes it hard to upgrade all participants at once, for example when running a public service. For an internal service where you can afford to schedule a maintenance window, I’d say it’s a nice time saver.

            2. You can always derive codecs for your external data types and then convert them to the internal representation. Haskell is very handy for this kind of conversion, and the types should help you manage the drift between the internal and external shapes of the data.

            1. 2

              If you’re writing a conversion function between two data structures, why not just write a codec? It seems like you gain nothing if you have to write the conversion function manually, unless you simply prefer writing conversion functions to writing codecs.

            2. 2

              This is why I handroll serialisers and use ban-instance to forbid instances on core data structures. (I use a newtype V1 a = V1 a to describe the version of my serialisation, and hang instances off that.)
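
              A sketch of that newtype trick, with a hypothetical Serialise class standing in for a real serialisation library:

              ```haskell
              {-# LANGUAGE FlexibleInstances #-}

              newtype V1 a = V1 a

              -- Hypothetical stand-in for a real serialisation class.
              class Serialise a where
                serialise :: a -> String

              data Core = Core Int  -- core type: deliberately has no instance

              -- Instances hang off the version wrapper, not the core type,
              -- so the wire format can't change by accident.
              instance Serialise (V1 Core) where
                serialise (V1 (Core n)) = "v1:" ++ show n
              ```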

            3. 4

              I’m a bit confused about the and and all functions.

              and [] = False

              and (x : xs) = x && and xs

              all f = and . map f

              If the base case for and is False, then shouldn’t it always return False? Or have I lost my mind?

              1. 2

                Yes, that’s definitely a bug: the identity element of the (&&) monoid is True.
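
                For the record, the corrected definitions (primed names so they don’t clash with the Prelude):

                ```haskell
                -- True is the identity of (&&), so the empty list yields True.
                and' :: [Bool] -> Bool
                and' []       = True
                and' (x : xs) = x && and' xs

                all' :: (a -> Bool) -> [a] -> Bool
                all' f = and' . map f
                ```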

                1. 1

                  Oh, thank God. I often have to squint very hard at Haskell code so I just assumed I wasn’t getting something.

                2. 2

                  Thanks! Fixed.

                  1. 1

                    Happy to be part of the process! :)

                    BTW, I really enjoyed your article and appreciated the extra links. More to chew on!

                3. 3

                  I would not use this example to justify laziness. Unfortunately, laziness really is a double-edged sword: you can easily create code that is hard to debug and that makes your node run out of memory. I’m not sure what the exact situation is in Haskell, but in Clojure you can run into this problem easily:

                  https://stackoverflow.com/questions/2214258/holding-onto-the-head-of-a-sequence

                  Is Haskell free from these sorts of issues?

                  1. 1

                    I’ve had a laziness issue in a small piece of code in production where I was building a lazy map, adding new elements to it and never really forcing it whole. So thunks accumulated.

                    It was not that hard to debug. Definitely not harder than illegal memory access hunting in C or accidentally retained memory in Python.
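
                    A toy reproduction of that kind of issue, assuming the containers package that ships with GHC (the lazy insertWith stacks an unevaluated (+) thunk per insert; the strict variant forces each value as it goes in):

                    ```haskell
                    import Data.List (foldl')
                    import qualified Data.Map.Lazy as ML
                    import qualified Data.Map.Strict as MS

                    -- Lazy map: each insertWith adds another (+) thunk to the value.
                    lazyTotal :: [Int] -> Int
                    lazyTotal xs = ML.findWithDefault 0 () (foldl' step ML.empty xs)
                      where step m x = ML.insertWith (+) () x m

                    -- Strict map: values are forced on insert, so no thunk chain builds up.
                    strictTotal :: [Int] -> Int
                    strictTotal xs = MS.findWithDefault 0 () (foldl' step MS.empty xs)
                      where step m x = MS.insertWith (+) () x m
                    ```

                    Both compute the same sum; only the strict version keeps the heap flat as the input grows.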

                    1. 1

                      Nice. I wasn’t trying to make that argument; C is pretty much a minefield. Haskell has nice debugging capabilities, so it might be easier to debug these issues than in Clojure, I guess.

                      1. 2

                        It is definitely not all roses in the Haskell land either.

                  2. 3

                    Some day I will learn Haskell / GHC. Haskell is the only language I have given up on more than a few times. If someone asks what languages I know, my answer is all of them except Haskell. I think the time is now, and this coronacation is the perfect storm for me to learn Haskell for real. Then I too will be one of those great and far-away functional (for real) programmers. I will ascend to the heavens, where there is no need of side effects or the unknowable, inevitable ticking of time.

                    1. 1

                      It wasn’t until I learned Bluespec SystemVerilog that I understood how much I hate Haskell syntax. It turns out that even SystemVerilog syntax makes it a much more pleasant language to use, and I actually like the language semantics when I’m not spending all of my time fighting the syntax.

                    2. 2

                      Very well written article. I heard a lot of financial institutions are using Haskell in analytics and trading systems.

                      1. 1

                        My experience is that if you want a program that reads data, processes it, and outputs the result, Haskell is an amazing fit. Like pandoc!

                        If, on the other hand, the program is mainly centered around state and user interaction, like a game or a text editor, Haskell needs a lot of convincing.

                        It’s not impossible to view the user interaction as input and the resulting pixels as output, but in my experience, other languages are a better fit for this.

                        Haskell is great for individual programmers to express clever ideas and give a feeling of completeness and correctness, though.

                        Haskell, a solid 5 out of 7.

                        1. 2

                          Do you have any 7 out of 7 languages?

                          1. 2

                            Unfortunately, no. In my opinion, many programming languages are 5 out of 7, and it all depends on the language being a good fit for what it’s going to be used for.