1. 20

  2. 8

    I like that the post is brief and to-the-point. How does one arrive at the “final encoding” form? Should I write my program to follow “initial encoding” first, and then rewrite it once I know the span of information required?

    As an aside, I really like the first form that makes generous use of information-as-types. Adding a new constructor to the Command type will highlight non-exhaustive pattern matches, and I can simply let GHC babysit me through the refactor.
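
    To make that concrete, here’s a sketch of the shape I mean (Command, render, and run are my own invented names, not necessarily the article’s):

```haskell
-- Initial encoding: commands are plain data, interpreted separately.
-- Names here are invented for illustration, not taken from the article.
data Command
  = SayHello String
  | Add Int Int
  deriving (Eq, Show)

-- A pure interpreter. Add a new constructor to Command and GHC
-- (with -Wincomplete-patterns) flags the missing case here.
render :: Command -> String
render (SayHello name) = "Hello, " ++ name
render (Add x y)       = show (x + y)

run :: Command -> IO ()
run = putStrLn . render
```

    Because the interpreter is a total pattern match over the ADT, the compiler walks you through every site that needs updating.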

    1. 5

      I really dislike that this mixes the concerns of command parsing and command handling. For me, one of the main advantages of Haskell is that it allows you to express your program model as an ADT, so that you can do verification (by testing or by proof) based on the type-space of that ADT.

      This eliminates the ADT in favour of OOP-style objects that encapsulate behaviour behind a shared interface (IO ()), which, you know, is sometimes the best tool for the job, but if that’s the case you might as well write it in a more suitable language.
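
      Concretely, the shape I’m objecting to is roughly this (names invented for illustration, not the article’s exact code):

```haskell
-- Final encoding into IO (): a command *is* its behaviour.
-- Sketch only; the point is the shape, not the exact code.
type Command = IO ()

sayHello :: String -> Command
sayHello name = putStrLn ("Hello, " ++ name)

add :: Int -> Int -> Command
add x y = print (x + y)

-- A Command built this way is opaque: you can execute it, but you
-- can no longer pattern-match on it to inspect, test, or verify it.
```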

      1. 1

        What would be more suitable? I find Haskell is very good at this kind of thing.

        1. 1

          Well, you can write OOP Haskell if you like; I suppose it is a general-purpose language, after all. But I mean, you could do it in Python, TypeScript, C#, Java… anything you like, really

          1. 2

            But none of those other options have the nice type system that Haskell does. Or the nice separation of pure/impure. Or the nice concurrency system from GHC. Etc etc

            1. 4

              Yeah, don’t worry about the ivory tower people and just write it the way you want to. Haskell has had a culture of 1^👨🚢 that is not entirely productive and often intimidating for newcomers. The style of programming cyberia advocates is really nice for a variety of reasons, but it is also a bit dry and boilerplate-y when you haven’t arrived there yet.

              1. 1


                Ahh, one-upmanship. Took me a while to get that.

              2. 2

                OK but my point is if you’re just calling everything IO (), then you’re getting none of the benefits of the type system and everything is impure anyway, so why bother?

        2. 4

          The article says:

          A “final encoding” is one where you encode information by how you intend to use it

          This tends to simplify your program if you know in advance how the information will be used

          It’s worth noting that “knowing what you’ll do in advance of writing the program” is not an inherent property of a final encoding; a final encoding can still be generic over how the data is used.

          That’s the essence of tagless final style. I have some examples at http://catern.com/tfs.html
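
          Roughly, the idea looks like this (a sketch with my own names, not necessarily what’s on that page): the syntax is a typeclass, and each way of using the data is an instance.

```haskell
-- Tagless final sketch: illustrative names only.
class CommandSyn repr where
  sayHello :: String -> repr
  add      :: Int -> Int -> repr

-- One interpreter actually runs commands...
newtype Run = Run { runIO :: IO () }

instance CommandSyn Run where
  sayHello name = Run (putStrLn ("Hello, " ++ name))
  add x y       = Run (print (x + y))

-- ...another pretty-prints the very same term, no rewriting required.
newtype Pretty = Pretty { pretty :: String }

instance CommandSyn Pretty where
  sayHello name = Pretty ("sayHello " ++ show name)
  add x y       = Pretty ("add " ++ show x ++ " " ++ show y)

-- A program written once, generic over its interpretation.
program :: CommandSyn repr => repr
program = add 1 2
```

          So the final encoding doesn’t have to commit to one use up front: `runIO (program :: Run)` executes it, `pretty (program :: Pretty)` prints it.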

          1. 2

            That link returns a 403 for me

            1. 2

              Thanks, fixed.

          2. 3

            One disadvantage, especially when you are writing something interpreter-like, is that it’s a lot harder to perform any sort of whole-program static analysis and optimisation on your code. For example, suppose you added set and get commands to add variables to your little language. In the first encoding you could look at the parse tree up front and verify that you were never calling get on a variable that hadn’t been set first. In the second encoding you would have to wait for it to crash at runtime.
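
            To sketch what I mean (hypothetical set/get commands, not from the article), with the data encoding the check is just a walk over the tree:

```haskell
-- Hypothetical Set/Get commands in the initial (data) encoding, plus
-- an up-front check: every Get must be preceded by a Set of the
-- same variable.
data Cmd
  = Set String Int
  | Get String
  deriving (Eq, Show)

checkVars :: [Cmd] -> Either String ()
checkVars = go []
  where
    go _ [] = Right ()
    go seen (Set v _ : rest) = go (v : seen) rest
    go seen (Get v : rest)
      | v `elem` seen = go seen rest
      | otherwise     = Left ("unset variable: " ++ v)
```

            In the IO encoding there is no tree to walk, so the first you hear of the bug is the crash at runtime.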