1. 39
    1. 27

      I have tried and failed several times over the years to pick up Haskell. It wasn’t until last year when I started using the Haskell Language Server with VS Code that it all finally clicked! Compared to just using :t in ghci, HLS massively accelerated my ability to tinker with the language. I can hover to see the type (and the specialized type) of anything! Type holes! Instant error messages when something is wrong! I don’t need to constantly :load my modules when making changes! It’s fantastic!
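As a tiny illustration of the hover feature: in a sketch like the one below (names invented for this comment), HLS shows that fmap is specialized to Maybe at this use site, the kind of thing that used to require a :t round-trip in ghci.

```haskell
-- Hypothetical example: hovering over `fmap` here, HLS shows the
-- specialized type (a -> b) -> Maybe a -> Maybe b rather than the
-- fully general Functor signature.
doubleInside :: Maybe Int -> Maybe Int
doubleInside = fmap (* 2)

main :: IO ()
main = print (doubleInside (Just 21))
```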

      The other thing that helped was reading through the page on Haskell denotational semantics.

      1. 3

        Yes, HLS is great. Type on mouse-over (or type-at-point in emacs land) & immediate error messages make for a really powerful combination when writing Haskell.

      2. 1

        Sounds like what I had with Emacs, until they decided to break all existing tooling in favour of LSP/HIE…

        1. 7

          HLS speaks LSP, so you should have no problem getting it set up in Emacs.


          It’s at the point now on Neovim that I can just add a single line of configuration to enable interoperation with a new language server.

    2. 11

      I’m not sure to what extent this is still the case, but 5 or so years ago when I was more actively interested in Haskell, I was frustrated by the prevalence of academics in the Haskell community. People seemed to want to intentionally complicate things such that a thesis explaining them is necessary. It seemed like everyone was building abstractions or trying to understand abstractions and nobody had any time left to build applications.

      1. 3

        I’m building lots of fun command line things if you like learning from small useful projects.

      2. 2

        When I was in University a half a decade ago, I was interested in Haskell.

        But I was interested in it to actually make things, and I think I felt some of this pain: building abstractions on top of abstractions was pretty much the defining trait of the web programming model that many of the libraries had built up. You almost needed a degree in math.

    3. 6

      That is exactly how much time it takes with Haskell if you are surrounded by other engineers who use it, but if an engineer learns Haskell on their own, they usually get stuck.

      This is how I feel about a ton of things in computing. It’s also why SO is so popular, I think. Very often I “get” everything (to the extent I need to), but I have one specific question, or one error message, that I need resolved before I can feel productive. Or I have built up a personal mental model and I want to run it by someone to make sure it is correct (enough).

      There’s nothing quite like having access to someone who knows more than you to accelerate learning.

      1. 4

        Heads up for anyone trying to learn anything FP-related, I found the FP Slack super useful for this when learning Haskell.

      2. 3

        There’s access to the knowledge, but there’s also a lot of culture in programming languages that you learn much more quickly by contact: all the “best practise” cargo-cult stuff that you only learn from questions answered with “we usually solve that by…”, “don’t use feature X”, and “everybody uses ____ library”.

    4. 3

      I should revisit Haskell at some point. I discovered when I learned Bluespec SystemVerilog that I really, really hate Haskell syntax. BSV is a hardware specification language that was originally a DSL in Haskell and was then modified to use SystemVerilog syntax while retaining the Haskell semantics. SystemVerilog has Algol-like syntax, which is pretty awful, yet not quite bad enough to make me hate the language (I use C++ regularly, so my tolerance for bad syntax is pretty high). I discovered through BSV that I really like the semantics of Haskell. It’s the only language I’ve ever tried to learn and been so put off by the syntax that I wasn’t able to get to the underlying abstractions (and I say this as someone who has written nontrivial amounts of both Prolog and Erlang).

    5. 3

      Cool article, thank you for writing. It is not easy to make Haskell approachable, and I think this article does a good job of that. I figured I’d share a non-Haskeller’s immediate, unfiltered thoughts in the interest of keeping the discussion and learning going.

      To me, the do notation example (vs Functor/Applicative) is more readable because there are fewer symbols, and combinations of symbols, to parse. To the point where in the do notation, I can infer what it does without having to learn how it does it.

      do notation example
      -- assuming NoBuffering mode
      getName :: IO String
      getName = do
        putStr "Name: "
        name <- getLine
        putStr "Surname: "
        surname <- getLine
        return $ name <> " " <> surname

      There are 3 symbols to learn or infer: <-, $, and <>.

      My guesses are <- binds, <> concatenates, and $ does something string-related or maybe apply-related
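Those guesses are essentially right. A minimal pure sketch (names invented for this comment) showing $ and <> in isolation:

```haskell
-- ($) is low-precedence function application (f $ x = f x), typically
-- used to avoid parentheses; (<>) is Semigroup append, which for
-- String means concatenation. (<-) only exists inside do blocks,
-- where it binds the result of an action, as in the example above.
fullName :: String -> String -> String
fullName name surname = name <> " " <> surname

main :: IO ()
main = putStrLn $ fullName "Ada" "Lovelace"
```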

      Functor / Applicative methods
      getName :: IO String
      getName = (<>) <$> get "Name: " <*> ((' ' :) <$> get "Surname: ")
        where get s = putStr s >> getLine

      There are ~5 symbols to learn or infer: (<>), <$>, <*>, (' ' :), and >>

      Lots more questions to unpack in the second one

      • why do we need the parens after <*>?
      • why is there the : in (' ' :)?
      • what other ways are there to write get s that help me understand how >> works?
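For what it’s worth, rough answers to all three, with a runnable sketch (made-up strings):

```haskell
-- * (' ' :) is a section of the list cons operator (:): String is
--   just [Char], so it prepends a space character.
-- * (>>) sequences two actions and keeps only the second result, so
--   `putStr s >> getLine` prints the prompt, then reads a line.
-- * The parens after <*> are needed because <$> and <*> are both
--   infixl 4, so without them the expression associates to the left
--   and no longer type-checks.
main :: IO ()
main = do
  putStrLn (' ' : "indented")         -- list cons on a String
  result <- putStr "" >> pure "kept"  -- (>>) discards the first result
  putStrLn result
```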

      After reading the first example, I’d expect the second example to look something closer to:

      unholy mess of imperative thinking
      getName :: IO String
      getName = (<>) <$> get "Name: " <> ' ' <> <$> get "Surname: "
          get s = putStr s >> getLine
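That guessed version doesn’t parse (the run of <> ' ' <> <$> leaves operators without operands), but the imperative instinct behind it works fine if you lift an ordinary two-argument function instead. A sketch with an invented joinNames helper:

```haskell
-- joinNames is a plain pure function; (<$>) and (<*>) lift it over
-- the two IO actions, so no do block is needed.
joinNames :: String -> String -> String
joinNames name surname = name <> " " <> surname

getName :: IO String
getName = joinNames <$> get "Name: " <*> get "Surname: "
  where get s = putStr s >> getLine

main :: IO ()
main = putStrLn (joinNames "Grace" "Hopper")
```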
    6. 1

      Reading each Haskell line can take twice longer, but because of much smaller overall size it is at least twice faster to write and to read than the equivalent logic in TypeScript.

      Where does the “2” come from? How can we measure if Haskell takes 0.2, 2 or 200 times longer to read?