1. 21
  1.  

  2. 9

    Statically-typed functional languages are amazing. They have a property that exists almost nowhere else: if your program compiles, it works.

    Ahem…

    1. 4

      It’s true only if the author has a very distorted notion of what “it works” means.

      Let’s say you wanted a program that sorts a list. What’s the type signature? It needs a comparison function and a list to sort as input. The result would be a sorted list.

      Now if we do Haskell we can’t represent sorted lists (probably some dude will come along to say there’s an extension for this), but hey, it’s okay. We can accept that it works if it compiles.

      sort :: (a -> a -> Bool) -> [a] -> [a]
      sort lt list = []
      

      Since it compiles, it works. But you didn’t get the list sorted? Well that’s because you were unable to tell the type system that the list must be sorted. But it works since it compiles. The program perfectly matches the definition in the type after all.

      If the user complains too much, you can supply the second program:

      sort lt list = sort lt list
      

      Since it compiles, it works. Perfectly. It’s a perfectly correct program, because we say so.
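
      Even the cheapest testing exposes both “working” programs. Here’s a hand-rolled sketch of such a check (no QuickCheck, just a few fixed samples; bogusSort and sortsCorrectly are hypothetical names):

      ```haskell
      import Data.List (sort)

      -- The "sort" from above: typechecks, ignores its arguments.
      bogusSort :: (a -> a -> Bool) -> [a] -> [a]
      bogusSort _ _ = []

      -- A tiny hand-rolled property: agree with Data.List.sort on samples.
      -- (The second program, sort lt list = sort lt list, would hang here.)
      sortsCorrectly :: ((Int -> Int -> Bool) -> [Int] -> [Int]) -> Bool
      sortsCorrectly f = all (\xs -> f (<) xs == sort xs) samples
        where samples = [[], [1], [3,1,2], [5,4,3,2,1]]
      ```

      sortsCorrectly bogusSort evaluates to False, which is exactly the information the type signature failed to carry.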

      1. 0

        Now if we do Haskell we can’t represent sorted lists

        Are you sure?

        1. 7

          Yes. We’ll consider Haskell 2010. In this example, the type you’re referring to would be defined as:

          newtype SortedList a = SortedList [a] deriving (Eq, Ord)
          

          This clearly does not encode any sorted requirement in the type. Only smart constructors and programmer discipline can keep that list sorted.
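
          In code, that discipline might look like the following sketch (fromList and insertSorted are hypothetical names):

          ```haskell
          import Data.List (insert, sort)

          newtype SortedList a = SortedList [a] deriving (Eq, Ord, Show)

          -- Hypothetical smart constructors: the invariant lives in these
          -- functions, not in the type.
          fromList :: Ord a => [a] -> SortedList a
          fromList = SortedList . sort

          insertSorted :: Ord a => a -> SortedList a -> SortedList a
          insertSorted x (SortedList xs) = SortedList (insert x xs)

          toList :: SortedList a -> [a]
          toList (SortedList xs) = xs
          ```

          Export only these three and the invariant holds; export the SortedList constructor and anyone can write SortedList [3,1,2], and you’re back to programmer discipline.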

          Here are some list functions. They all have the same signature, but different behaviors:

          id :: [a] -> [a]
          return . head :: [a] -> [a]
          reverse :: [a] -> [a]
          const [] :: [a] -> [a]
          init :: [a] -> [a]
          (++ []) :: [a] -> [a]
          

          Nothing in the type system can offer salvation here. Nothing surrounding the type system, either; the relevant free theorem concerns map, which doesn’t appear here. Haskell’s type system doesn’t cover:

          • Partial functions, like head
          • Shape-preserving element-agnostic permutations, like reversing a list without looking inside it
          • Linearity and naturality, ensuring that arguments are used
          • Shape-preserving non-linear transformations, like init
          • Operations which normalize to id but predictably and reliably incur runtime costs, like appending an empty list to the end of a singly-linked list
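
          Applying those functions to one sample input makes the gap concrete; every one of them typechecks at [a] -> [a]:

          ```haskell
          -- The six functions from the list above, at a concrete element type.
          candidates :: [[Int] -> [Int]]
          candidates = [id, return . head, reverse, const [], init, (++ [])]

          -- map ($ [1,2,3]) candidates
          --   == [[1,2,3], [1], [3,2,1], [], [1,2], [1,2,3]]
          ```

          Six inhabitants of the same type, five distinct behaviors (id and (++ []) agree only on the result, not on the cost), and the type checker is equally happy with each.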
          1. 0

            Of course you are totally correct, but I’d like to make the argument in practical terms. If you want to encode a sorted list in the type signature, you can, at least in practice.

            This is like the inverse of the argument that you can do FP in JavaScript. Theoretically you can, but in practical terms it just doesn’t work.

      2. 2

        This is effectively true for pure functional code if you unit-test it. Once you add side-effects, it’s harder to be sure.

        1. 3

          Unit testing helps, yes, but that’s not what the author is implying.

          1. 2

            This isn’t true either. Consider a complex simulation encoded as a pure function. A unit test just shows it gives the expected output for a given input, not that it correctly solves the problem for all inputs.

            This is also true for property-based testing (PBT). Just pick more complex pure functions.
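
            As a concrete (hypothetical) sketch: here’s a median whose odd-length unit tests all pass while the function is still wrong.

            ```haskell
            import Data.List (sort)

            -- Buggy median: picks the middle element, which is only correct
            -- for odd-length lists.
            median :: [Double] -> Double
            median xs = sort xs !! (length xs `div` 2)

            -- Unit tests on odd-length inputs all pass:
            --   median [1,2,3] == 2.0
            --   median [9,1,5] == 5.0
            -- For even-length input the two middle elements should be
            -- averaged: median [1,2,3,4] gives 3.0, not the expected 2.5.
            ```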

            1. 3

              Indeed. That’s why I said it “helps”: you are still up against test coverage.

              You can unit test and statically analyse all day, but you can always insert bugs that won’t be caught.

          2. 1

            Author here. Good point. What I was referring to was the observation that many people make when using Haskell/OCaml/Elm, that once they’ve got it compiling it usually (or maybe just often?) works. I definitely overstated that.

            1. 1

              Well, you won’t get any type errors ;)

          3. 12

            The lack of static types in dynamic languages allows you to prototype quicker and iterate faster. It also makes your code more readable.

            In my experience, this is completely false.

            Purely anecdotally, I have been able to bring businesses to market much more quickly in Haskell than I could ever manage in Ruby. That’s also why I choose the tools that I choose.

            I’m getting a real sense of Middle Ground logical fallacy from the outset.

            1. 4

              Also purely anecdotally, even after not really using it for 5 years, I’m still a lot faster prototyping stuff in PHP than I am in anything else, and I’m amazed how much faster some people in Rails are. (I was especially slow in Go for web stuff, fwiw.)

              You could say I’ve simply never gotten as good with another language, but I doubt it. I’m also not sure what you’re doing in Haskell; maybe it’s better for your use case than Ruby?

              1. 2

                I do fairly typical web SaaS stuff, which would otherwise have been done in Rails. It’s just cheaper for me (and my team) to do it in Haskell than it would have been in Ruby.

                1. 1

                  Yesod or anything else?

                  1. 1

                    Yeah it’s Yesod. I haven’t tried any of the other frameworks, but I’m pretty happy with what I have.

              2. 3

                I can’t help but wonder about the role of the false-cause fallacy in these debates on static vs. dynamic languages. Without discounting your anecdotal experience (and acknowledging my own preference for Haskell over Ruby), Ruby and Haskell are so different from one another that I think it becomes difficult to isolate the type system as the greatest factor in one’s speed of development.

                That said, I also disagree with the claim: I personally find dynamic languages less readable. As for prototyping, I do find dynamic languages a bit quicker, but only in the very short term. Once the program is greater than 100 lines or so, I find that static types actually help me build quicker.

                1. 3

                  However, purely empirically speaking, your claims are completely unsubstantiated. The most recent research available is the replication of the large-scale GitHub study. The relative effects of language choice on code quality are less than 1%; the main finding is that language choice hardly matters at all.

                  On the other hand, we can easily measure the effects of factors like sleep, overwork, and happiness on code quality. If static typing were an actual factor, we’d see exactly the same kinds of effects.

                  There’s nothing wrong with enjoying static typing, but there’s simply no evidence that it plays any role past personal preference. Stop treating type discipline like a religion.

                  1. 1

                    Hello old friend. Good to see you’re still eager to engage me in exactly the same debate you’ve been trolling me with for the past couple of years.

                    1. 2

                      You keep using that word. I do not think it means what you think it means.

                2. 5

                  The railroad-oriented programming favored by the author is flawed for error handling. The problem is that error handling often has to happen while obtaining resources. For example, here’s the SDL initialization flow:

                  1. Initialize SDL library
                  2. Obtain a widget handle
                  3. Obtain rendering context
                  4. On failure, release obtained resources.
                  5. On success, start the event loop

                  Now if we examine these structures, they form railroad pieces that do not fit together. Their “success” sides look like this:

                  1. nothing -> SDL
                  2. SDL -> SDL * Widget
                  3. ….

                  The failure side looks like this:

                  1. failure -> failure
                  2. failure * SDL -> failure * SDL

                  To compose the first two steps, we would have to compose the failure arrows of steps 1 and 2. That would require satisfying the type constraint failure <: failure * SDL. Therefore this common case in error handling cannot feasibly be implemented in the ROP style.

                  This is the exact same major flaw that exception handling has: it delivers dangling file handles and unreleased memory addresses, plus all the other merry issues characteristic of procedural structured programming.
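
                  A minimal sketch of that mismatch in Haskell, with Either as the two-track type (every name here is a stand-in, not a real SDL binding):

                  ```haskell
                  data Err    = Err String  deriving (Eq, Show)
                  data SDL    = SDL         deriving (Eq, Show)
                  data Widget = Widget      deriving (Eq, Show)

                  -- Step 1: nothing -> SDL
                  initSDL :: Either Err SDL
                  initSDL = Right SDL

                  -- Step 2: SDL -> SDL * Widget. Its failure side is bare Err:
                  -- the SDL handle acquired in step 1 is dropped on the floor.
                  createWidget :: SDL -> Either Err (SDL, Widget)
                  createWidget _ = Left (Err "no display")

                  pipeline :: Either Err (SDL, Widget)
                  pipeline = initSDL >>= createWidget
                  ```

                  pipeline evaluates to Left (Err "no display"); the failure track carries no SDL, so there is nothing left to release the resource with.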

                  1. 1

                    Could you go into which styles of error handling do not have this flaw?

                    1. 2

                      Systems based on linear logic could recognize that you have to release the resources, but you need a way to attach a function to the success -> failure switch that releases the resource on failure.

                      Linear Haskell and session-typing systems may be able to achieve this in the future. But it requires that you reason about resources using the type system, or otherwise have some method to track resources in a dynamically typed environment, e.g. a notion of ownership for objects.

                    2. 1

                      This fits the “defer” operation in Go much better. Or even “goto” in C.

                    3. 4

                      Nice post! I was confused about Dark at first, but it’s starting to make more sense. Generally I’m skeptical of integrated languages and editors, but I can see where they can get some wins with that approach.

                      I’m also skeptical of clean-slate / commercial languages, if only because even a meager standard library takes tremendous effort to build.

                      But choosing HTTP backends as the initial use case is smart because you don’t need as much of a standard library. In that sense, it’s like a shell. :) The shell is also a lot of effort to build, but it’s feasible for one person because a lot of the “library” is in external processes.

                      (In contrast, my work with the Python interpreter has shown that if you want feature parity for even the basic set of objects, leaving aside the standard lib, it’s a huge effort. Which is why I find MicroPython really impressive.)


                      I also liked this part – it seems like a balanced analysis of static vs. dynamic.

                      There’s little I find more frustrating than discovering in step 2 that the type changes I diligently propagated in step 1 were wrong and that I wasted the last hour.

                      I usually ignore most things on the Internet about static vs. dynamic, because people are usually extrapolating from small projects, or small parts of a big project (e.g. maintaining code they didn’t design). But this post seems well-informed and realistic.

                      1. 4

                        Thanks!

                        But choosing HTTP backends as the initial use case is smart because you don’t need as much of a standard library.

                        Yeah, that was kind of the point at which I realized that Dark might actually be feasible. Since we just want to write programs that receive data (over HTTP), process it, store it, and send it, we don’t need lots of the other stuff that current languages/stdlibs have to deal with: unix, containers, machines, filesystems.

                        1. 1

                          Does the Dark language currently have concurrency or a design for it? I’d be interested in what you came up with for that problem, which seems very relevant to HTTP backends.

                          For example, someone might want to write a backend that scrapes data from several Web APIs and compares them, and handles concurrency / slow APIs / timeouts / failure.

                          It seems like you’re influenced by Elm and OCaml. Would concurrency be based on the constructs in either of those languages or something more novel? I know OCaml has async libraries but I haven’t used them, and I’m not sure if people like them.


                          As a historical note, Python 3’s asyncio was influenced by Guido’s work on a similar problem, making concurrent database queries in Google App Engine:

                          https://pyvideo.org/europython-2012/ndb-the-new-data-store-library-for-google-app-en.html

                          Python did have a long history with async, e.g. Twisted, Tornado, and even “Medusa”/asyncore if anybody remembers that. And I remember Guido synthesizing a lot of experience from those projects. But they were always outside the language, and Python 3 changed drastically, with many new language features, after his work on NDB.

                          1. 1

                            We haven’t really looked at concurrency yet. We have background jobs but that only solves a small fraction of use cases.

                        2. 3

                          Indeed, HTTP backends are a great choice for an initial use case! You get the UI for free by implementing a simple templating engine, you don’t need to worry too much about hardware compatibility because you’re just listening to standard network traffic and responding to it, etc.

                        3. 3

                          As somebody constantly railing against what I see as huge problems in our processes, this seems relevant to my interests :)

                          1. 2

                            These are all arguments for and against static typing, not for and against FP.