1. 20
  1.  

  2. 3

    I largely agree with Liskov’s attitude towards monads: I always feel like monads hide data in a weird secondary interface that you have to work around with secondary syntax.

    1. 17

      Only a fraction of monads are concerned with managing data at all. It may seem weird if you haven’t used them much, but probably only because most people don’t realize how monstrous the semantics of most imperative languages with “not weird” data management are. The type of monad you’re talking about (like State) is a semantically and syntactically simple and convenient way of doing stuff to data.

      Let’s take State as an example. If you want to represent a function that takes some arguments, takes a state, modifies the state, and returns a value, you can do it like this:

      doStuff :: args -> state -> (state, result)
      doStuff = ...
      

      Now, if you want to do a bunch of these things in sequence, you have to take the state output of one function, feed it into the next function, etc. etc. and it gets annoying fast. So instead, we just have the State monad, which is conceptually

      type State state result = state -> (state, result)
      

      So we can write this as

      doStuff :: args -> State state result 
      

      And then we can build functions for working with “State s a” values, which are just functions that take a state and return a state and a result. So instead of manually feeding the state output into the next function, you can just use the monadic bind operator (>>=) which does it for you. This lets us write things like

      doItTwice = do
          doStuff 1
          doStuff 2
      

      Which gets desugared using (>>=).
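
      Roughly, that desugared form is just the binds written out by hand; and for the conceptual State synonym above, (>>=) itself is nothing more than the “feed the state through” plumbing we started with. A hedged sketch (bindState is a made-up name standing in for the real library’s Monad instance):

      -- the do-block above is shorthand for
      doItTwice = doStuff 1 >>= \_ -> doStuff 2

      -- and bind for the conceptual State synonym is just the manual threading
      bindState :: State s a -> (a -> State s b) -> State s b
      bindState m k = \s0 ->
          let (s1, a) = m s0   -- run the first step, get the new state
          in  (k a) s1         -- feed that state into the next step
      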

      This isn’t some weird detour people choose to take for no reason; this is actually the most straightforward way to do something like this in a language that has simple, equational semantics.

      1. 7

        So you take “state”, which is implicit in other programming languages, and then: 1. make state explicit; 2. find that explicit state is awkward to pass around; and 3. use a mechanism to hide state in an opaque data structure.

        1. 10

          Correct. And if you think that sounds excessive, that’s a perfectly reasonable opinion!

          1. 6

            Yes, and you get the best of both:

            1. Convenience
            2. Abstraction
            1. 1

              The advantage, as I understand it, is that you no longer have the ability to change shared state from anywhere in a program, which makes the program easier to reason about. This seems like a huge benefit for anyone who has dealt with a spaghetti of cascading state transitions happening everywhere in a codebase (or even a single file).

              1. 3

                But if you are carrying around opaque state objects, you’ve simply made it harder to understand.

                1. 4

                  It eliminates the problem where what looks like a simple accessor method turns out to be messing with some hidden global state. If you have nine functions that don’t touch state and one that does, passing an opaque state object to the latter makes your code easier to understand because you can now be confident that the functions that don’t touch state don’t touch state.

                  And if all your functions mess with global state, you already had a problem. Using monads just makes it harder for you to paper over it.
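
                  To make that concrete, here’s a small hedged sketch (the names are made up, and it assumes the usual State type from the mtl library): the signatures alone tell you which function can touch the state and which can’t.

                  import Control.Monad.State (State, modify)
                  import Data.Char (toUpper)

                  -- No access to any state; the type guarantees it.
                  shout :: String -> String
                  shout = map toUpper

                  -- Visibly stateful; callers see it right in the signature.
                  bumpCounter :: State Int ()
                  bumpCounter = modify (+ 1)
                  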

                  1. 1

                    Do you mean that the monad is opaque in that it is an abstraction the programmer needs to understand to use it? The state itself isn’t opaque (as I understand it). On the one hand you’re restricted with what you can do. On the other you can still use mutation-like code in localized areas where these kinds of mutations are easier to reason about.

                    I could understand the argument that the advantage of using a state monad is outweighed by the complexity of using an additional abstraction. I have no opinion one way or the other about that. But your argument seems to be that by using state monads to mimic state mutation you don’t get any benefits whatsoever, since the same code could be replaced with code in another language that does actual state mutation. That seems overstated: while it’s technically true that an imperative program could replace a Haskell program, converting the code also removes the strongly-typed restrictions on state transitions that you get in Haskell, and lets programmers start introducing transitions of shared state in a totally unchecked way (something that causes serious problems in many codebases).

                  2. 2

                    Hm, Rust solves that by explicitly stating mutability and sharing in the interface.

                    Monads kind of implicitly allow that, if you know what the Monad at hand does.

                    I’m not arguing either way, but Haskell always takes the high road of abstraction with any problem and if you don’t understand the abstraction, you become lost.

                2. 4

                  I appreciate you taking the time to write that explanation, but I understand the use cases of monads - I still find it a weird detour.

                  “Instead of manually feeding the state output into the next function” – this is hiding lexical+data binding in some weird interface (namely, your type system). Why not just use something like and-let*? Then you can tie together functions that don’t share monadic types, but do share partial arguments in order.

                  (and-let* ((res1 (func1 state))
                             (res2 (func2 state res1)))
                    (func3 res1 res2))
                  

                  I bet if you’re mildly familiar with a let expression, you can guess what’s going on. In the monadic example, I need to inspect the type of each function to understand the control flow of my program. That’s what appears weird to me. Please note I don’t think I’m going to convince you, and that’s fine; I just want to publish my thoughts.

                  1. 4

                    State is a very, very limited case of what Monads (and transformers) are capable of. I used it as a pedagogical example specifically because it’s relatively easy to manipulate even without a concept of monads. However, there are several more considerations.

                    First, what you wrote is confusing and verbose. Monads via do-notation are much clearer syntactically. (Subjective, obviously)

                    Second, the trick you’ve proposed doesn’t generalize to more complex monads, such as

                    ExceptT Exception (StateT AppState (ReaderT Config m))
                    

                    Which would allow you to have implicit configuration passing, managed state, and value-level exception handling, parametrized over some other arbitrary monad (which could be STM, IO, or whatever else you want).
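
                    To give a flavour of what that buys you, here’s a rough, hedged sketch assuming the mtl library; Config, AppState, AppError (standing in for the Exception above), and the function names are all made up for illustration:

                    import Control.Monad (when)
                    import Control.Monad.Except (ExceptT, runExceptT, throwError)
                    import Control.Monad.Reader (ReaderT, runReaderT, ask)
                    import Control.Monad.State (StateT, runStateT, get, put)

                    data Config   = Config { retryLimit :: Int }
                    data AppState = AppState { attempts :: Int }
                    data AppError = TooManyAttempts deriving Show

                    type App m = ExceptT AppError (StateT AppState (ReaderT Config m))

                    -- Reads the config, bumps a counter in the state, and throws a
                    -- value-level exception once the configured limit is exceeded.
                    attempt :: Monad m => App m ()
                    attempt = do
                        cfg <- ask
                        st  <- get
                        let n = attempts st + 1
                        put st { attempts = n }
                        when (n > retryLimit cfg) (throwError TooManyAttempts)

                    -- Peel the layers off in the opposite order they were stacked.
                    runApp :: Monad m => Config -> AppState -> App m a -> m (Either AppError a, AppState)
                    runApp cfg st act = runReaderT (runStateT (runExceptT act) st) cfg
                    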

                    1. 3

                      “First, what you wrote is confusing and verbose. Monads via do-notation are much clearer syntactically. (Subjective, obviously)”

                      I can actually accept either side of this particular style preference, but I don’t find either one a great argument for monads as a concept (and was therefore pretty confused when I first found it as a pedagogic example). To me, it mostly just takes you into the weeds of syntactic sugar. And there are a lot of ways to refine syntactic sugar, most of which have nothing to do with type systems.

                      There’s an old debate from the ‘90s about whether Perl makes things clearer or more confusing when it threads a magical implicit $_ state through various commonly used functions (I guess this debate is probably even older, but I first encountered it with Perl). It avoids the verbose explicit state passing, but at the expense of state passing happening without you seeing it. Some people strongly prefer explicit state-passing instead of the implicit $_ as a way of making it clearer exactly what state is being passed along which paths. The first time I ran across a monad tutorial using this approach, it read to me exactly like this Perl debate. As if the tutorial were written by someone who liked Perl’s implicit state passing, and was trying to sell Haskell as Perl on steroids, a language perfected for magical implicit state passing, except now in all contexts! But Haskell people mostly seem to hate Perl, so I’m not sure this was the intent.

                      1. 5

                        The reason it’s a debate is that you want visibility but also conciseness, and the two are in tension. IMO monads are a great synthesis that gets you the best parts of both: almost as concise as completely implicit state, almost as visible as completely explicit state.

                3. 6

                  My favorite description of monads is “decorators for function composition”. From that viewpoint, they’re a useful time-saving convention.

                  I consider that an accurate description; I have one place to put the code for composing a bunch of operations. That code can do a pile of useful things for me.
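
                  One way to see the “decorator” reading, sketched with Maybe (safeDiv and pipeline are made-up names): Kleisli composition (>=>) glues the two functions together just like (.) would, and the monad supplies the extra behaviour in between, here the short-circuiting on failure.

                  import Control.Monad ((>=>))

                  safeDiv :: Int -> Int -> Maybe Int
                  safeDiv _ 0 = Nothing
                  safeDiv x y = Just (x `div` y)

                  -- Composed like ordinary functions; the Maybe layer handles
                  -- the failure plumbing between the two steps.
                  pipeline :: Int -> Maybe Int
                  pipeline = safeDiv 100 >=> safeDiv 10
                  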

                  1. 3

                    That’s the wrong way to think about it. The behaviour that monads capture is present in all other languages, just hidden. Monads are one way to make the sequencing of functions explicit. They don’t hide anything; they reveal what imperative languages hide.

                    1. 3

                      “That’s the wrong way to think about it.”

                      Nah, I like my way. I’ll let Barbara make her own points though.

                      The sequencing of functions can also be managed easily in languages that don’t default to lazy evaluation, as long as they have the concept of function application and closures, e.g. and-let* in Scheme.

                    2. 2

                      Monad is largely a design pattern. I know that “design pattern” is a stigmatized phrase, but design patterns don’t have to be terrible. It’s the sloppiness endemic to corporate software engineering, and the historical limitations of certain popular languages relative to the purposes to which they were put, that resulted in the infamous “Gang of Four” mess, but design patterns recur in all aspects of life and aren’t always bad.

                      For individual monads, the fact that these types are monads is not very interesting. It’s fairly obvious. Thus, when you’re beginning to learn Haskell and not especially concerned yet with code reuse, it probably doesn’t seem to add much to talk about Monad, a type class which begins to show its value when you want to write code that’s agnostic of whether you’re doing a State s or IO or ST s action.
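
                      For instance, a hedged sketch of that kind of reuse (retryUntil is a made-up name): this one function works unchanged whether the action lives in State s, IO, ST s, or any other monad.

                      -- Repeats a monadic action until its result satisfies the
                      -- predicate; nothing here cares which monad it runs in.
                      retryUntil :: Monad m => (a -> Bool) -> m a -> m a
                      retryUntil done act = do
                          x <- act
                          if done x then pure x else retryUntil done act
                      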

                      1. 1

                        I don’t know that I disagree with you at all here, but I also don’t understand why you would dislike that situation. As I read it, you’ve just described abstraction/indirection and Monadic internal languages are definitely a technique for doing that. If your goal is to avoid abstraction generally then I can see an argument for avoiding Monads very clearly.

                        On the other hand, if you’re trying to make a more refined argument about monads being a bridge too far in the process of abstraction then I think there’s also an argument to be made there… but I don’t see it yet.

                      2. 2

                        That was a great post. They look like they’re easy to use. Also looks like they do what one person told me: force a series of functions/expressions to happen in specific order. That’s the recurring pattern I see in the examples.

                        1. 9

                          Careful; that’s a practically OK but limited intuition. For example, that’s not a good description of the Logic monad in the article. Lots of monads that let you write imperative-looking code (like Maybe) don’t actually force anything to happen in any particular order. They’re still lazy, so the computation happens as it’s needed. There’s also this (contrived) example designed to blow that intuition out of the water: https://lukepalmer.wordpress.com/2008/08/10/mindfuck-the-reverse-state-monad/

                          The reality is that it’s hard to predict in advance what sorts of things can be conveniently represented by a monad. Some common varieties are “magical” monads that take advantage of compiler-specific features (IO, ST, STM). These are concerned with side effects and mutability. Some common varieties aren’t “magical” at all and are basically just syntactic sugar around what would otherwise be annoying to write out by hand (State, Reader, Writer). Some are designed to facilitate nice, clean exception handling (Either, Maybe). These simple monads and their respective monad transformers (which allow you to combine monads together) are used to build most of the monads you’re likely to run into on a regular basis.
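
                          As a small, hedged illustration of the laziness point (lazyDemo is a made-up name): the binds below do run in order over the Maybe structure, but the bound value itself is never forced, so the undefined is harmless.

                          lazyDemo :: Maybe Int
                          lazyDemo = do
                              x <- Just 1
                              y <- Just undefined   -- bound, but its value is never demanded
                              pure (x + 1)          -- evaluates to Just 2
                          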

                          1. 1

                            “don’t actually force anything to happen in any particular order. They’re still lazy, so the computation happens as it’s needed. ”

                            You got me there. Didn’t expect that at all from the examples. Thanks for the tip. As far as the reverse-state monad goes, that looks like it will cause me headaches. I’m going to avoid it for now.

                            1. 4

                              It’s definitely just a trick. I don’t think anyone has used it in a serious way.

                              1. 2

                                There’s also the Free monad which lets you build computations sequentially but supply the actual order and sequence separately with an interpreter.
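
                                A rough sketch of that idea, assuming the “free” package (the LogF instruction set, greet, and runIO are made up): the do-block only builds a description of the steps, and the interpreter handed to foldFree decides what each instruction actually does.

                                {-# LANGUAGE DeriveFunctor #-}

                                import Control.Monad.Free (Free, foldFree, liftF)

                                -- One constructor per instruction; the last field is the continuation.
                                data LogF next
                                    = Say String next
                                    | AskName (String -> next)
                                    deriving Functor

                                say :: String -> Free LogF ()
                                say msg = liftF (Say msg ())

                                askName :: Free LogF String
                                askName = liftF (AskName id)

                                -- Just a description of steps; nothing happens yet.
                                greet :: Free LogF ()
                                greet = do
                                    name <- askName
                                    say ("hello, " ++ name)

                                -- The interpreter gives the instructions their meaning.
                                runIO :: LogF a -> IO a
                                runIO (Say msg next) = putStrLn msg >> pure next
                                runIO (AskName k)    = k <$> getLine

                                main :: IO ()
                                main = foldFree runIO greet
                                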

                            2. 7

                              Something that I’ve noticed in logic is that systems become more complicated, as a general rule, when you add time. Now you have to think about an ordering, partial or total depending on your model, and you’ve added a lot of context to every statement. You no longer have P, which is platonically true or false; you have P at time t_0. And if you’re in a distributed system where time is not a total ordering, it’s even more fun.

                              Monads take the timeless nature of functional programming (a limitation for real-world programs) and add time back in a way that allows control over what “time” and “the world” are.

                              I think that Elm has the right intuition for many cases by renaming the bind operator (>>=) to andThen. This may be unsatisfying for some cases such as List, because list comprehensions are purely functional and timeless, but it probably makes the motivation for that operation a bit cleaner. You could also rename return to always, although this further marries one’s conception of Monad to a notion of time, which is probably not completely appropriate for every case.
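
                              For what it’s worth, if you like that framing, the renames are trivial to mirror in Haskell (purely hypothetical aliases):

                              -- Hypothetical aliases mirroring Elm's naming.
                              andThen :: Monad m => m a -> (a -> m b) -> m b
                              andThen = (>>=)

                              always :: Monad m => a -> m a
                              always = pure
                              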

                            3. 1

                              The motivation behind talking about monads is to not be specific. Saying “monad” allows us to write functions and reuse them over many different data structures. A list of example instances isn’t a great resource for showing what’s common between them and what a monad is for.