Hi all! I finally decided to write the monad tutorial of my dreams, using Rust and property-based testing as the vehicle for understanding monads.
A lot of monad explanations focus on either the formalism or an (IMHO) excessive level of casualness. While those have their place, they tend to distract from what I think is an important and profound design pattern. My explanation has:
no analogies to food items or anything else
no Haskell
no category theory
Instead, this post talks about:
real-world problems
production-grade Rust
with performance numbers, including a very pretty log-log cdf!
There’s also a small Rust program to accompany the post.
Hi, it’s commendable that you set out to bridge communities and languages with this post, but, as with other TCS/algebraic concepts, monads tend to get conflated with their implementation in this or that concrete language. However, you make some pretty strong claims in the post, and some downright inaccurate ones.
The thing we care about is composition of computations that might have effects. As such, bind/flatMap are generalizations of the pure function composition “dot” operator.
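Concretely, in Rust (a toy sketch; the function names are mine, purely illustrative):

    // Pure composition vs. composition of effectful (Option-returning)
    // functions via and_then.
    fn half(x: i32) -> Option<i32> {
        if x % 2 == 0 { Some(x / 2) } else { None }
    }

    fn decrement(x: i32) -> Option<i32> {
        x.checked_sub(1) // None only on overflow
    }

    fn main() {
        // Pure "dot" composition: g . f
        let f = |x: i32| x + 1;
        let g = |x: i32| x * 2;
        let composed = |x: i32| g(f(x));
        assert_eq!(composed(3), 8);

        // Kleisli-style composition: and_then chains the effectful steps.
        let run = |x: i32| half(x).and_then(decrement).and_then(half);
        assert_eq!(run(10), Some(2)); // 10 -> 5 -> 4 -> 2
        assert_eq!(run(7), None);     // first step fails; chain short-circuits
    }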
This is the essence of monadic composition: powerful, unconstrained, and fundamentally unpredictable.
Not really. Lists and Optional computations can be composed with monadic bind, but would you say composition of programs that return these values is unpredictable?
Some of this misunderstanding is a historical accident; Haskell lets you talk about effect types that do not involve I/O, so it proved to be a good test bench for this concept.
monadic composition is Turing-complete
This is a property of the language, not of the composition operator.
Thank you for the feedback! For context, I studied some category theory in graduate school and I can also talk about natural transformations and endofunctors, but I made a deliberate decision to avoid all of that. There are a large number of developers who would benefit from recognition of monadic patterns in their daily professional lives, but have been traumatized by overly technical monad tutorials – that’s the audience for my post. If you’re not in that audience, there are plenty of other monad tutorials for you :)
Even in the appendix, I talk about the monad laws in terms of their application to strategies with the concrete Just and prop_flat_map. That’s a deliberate pedagogical decision too. I enjoy working in abstract domains, I just think it really helps to be motivated by practical concerns before going deeper into them.
but, as with other TCS/algebraic concepts, monads tend to get conflated with their implementation in this or that concrete language.
This is absolutely true. In my post I was trying to focus on why monadic bind is of deep interest and not just a Haskell/FP curiosity — why working programmers should at least notice it whenever it pops up, no matter what environment or language they are in. (Speaking personally, in one of my first projects at Oxide, very far away from grad school, I had the opportunity to add monadic bind to a system, but deliberately chose not to do so based on this understanding.)
I think of it as similar to group theory, where you can either introduce groups formally as a set with an attached operation that has certain properties, or motivate them through an “implementation” in terms of symmetries (maybe even through the specific example of solving the quintic). I personally prefer the latter because it gives me really good reasons for why I should care about them, and also helps explain things like why group theory is central to physics. In my experience teaching people, this preference is shared by most of them. There’s always time later to understand the most general concept.
Not really. Lists and Optional computations can be composed with monadic bind, but would you say composition of programs that return these values is unpredictable?
Absolutely, relative to fmap in each of the monads. I mention an example at the end with lists, where with bind you don’t know the size of the resulting list upfront – for Rust this has performance implications while collecting into a vector, since you can’t allocate the full capacity upfront (though that’s a tiny difference compared to the exponential behavior of test case shrinking). Even with optionals, fmap means a Some always stays a Some, while bind can turn a Some into a None. Relatively speaking that’s quite unpredictable.
The extent to which monadic bind’s unpredictability matters is specific to each monad (it matters less for simpler monads like optionals, and more for strategies, futures, and build systems). But in all cases it is unpredictable relative to functor (or applicative functor) composition within that monad.
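To make the Option case concrete (a tiny sketch, my own example rather than one from the post):

    fn main() {
        let x: Option<i32> = Some(4);

        // Functor-style composition: the shape is preserved.
        // A Some is always still a Some after map.
        assert_eq!(x.map(|n| n * 2), Some(8));

        // Monadic bind: the callback decides the shape of the result.
        // A Some can come out as a None.
        assert_eq!(x.and_then(|n| if n > 5 { Some(n) } else { None }), None);
    }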
This is a property of the language, not of the composition operator.
This is true. What I was trying to say here is that in a typical Turing-complete environment, the Turing completeness means that introspecting the lambda is impossible in general. (And this is true in non-Turing-complete environments too – for example, in primitive recursive/bounded loop environments it’s still quite unpredictable.) I’ll try and reword this tomorrow.
I appreciate the thoughtful approach. I learn much better starting with the specific and moving to the general than starting with the abstract.
I wrapped up Georgia Tech OMSCS, and one of my favorite classes was Knowledge-Based AI, which focused on how humans learn things. (The AI there is classical, not LLMs.) One core takeaway for me was the general/specific learning dichotomy.
A really interesting learning approach was called “version spaces” which acted like bidirectional search by building general and specific info at the same time. Basically, people need both models to fully absorb a concept, but how individuals learn best varies.
All that to say: thanks again, I think it takes a lot of work and effort to make something approachable and I appreciate your post.
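You may enjoy this post: https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/

This three-stage learning process is very common, in my experience, and I feel like general/specific is similar to intuitive/rigorous.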
There are a large number of developers who would benefit from recognition of monadic patterns in their daily professional lives, but have been traumatized by overly technical monad tutorials
When intersected with “Rust developer”, do you think that’s still a large group…? If someone finds monad tutorials to be “overly technical” then they’re never going to make it past JavaScript, much less discover systems programming languages like Rust.
I’m one of the fools who once upon a time tried to use Haskell for systems programming, and of all of the Rust/C/C++ developers I’ve met, their primary difficulty with monads is that all the documentation was non-technical (i.e. prose) and did the normal academic math thing of using five different words to identify the same concept in different contexts.
This article is a new way of writing a confusing explanation of monads, which is that it starts off by diving deep into an obscure testing strategy that’s pretty much only used by people really into functional programming, and then slowly works its way back into the shallow waters of monads, then dives right into a discussion on how function composition is Turing-complete[0] and how exemplar reduction in property-based testing can have unpredictable runtime performance. You’ve got, like, three different articles there!
If you want someone who can’t spell “GHC” to understand monads, there’s only three types you need:
Lists (Rust’s Vec, C++ std::vector) with flat_map
Maybe (Option, std::optional or local equivalent) with and_then
Either (Result, std::expected or local equivalent) with and_then
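For reference, here’s what those three look like in Rust (the example data is made up):

    fn main() {
        // Lists: flat_map maps each element to zero or more elements,
        // then flattens the result.
        let words = vec!["hello world", "monads"];
        let tokens: Vec<&str> = words.iter().flat_map(|s| s.split(' ')).collect();
        assert_eq!(tokens, vec!["hello", "world", "monads"]);

        // Maybe: and_then chains computations that may produce nothing.
        let n: Option<i32> = Some(10);
        assert_eq!(n.and_then(|v| if v > 0 { Some(v * 2) } else { None }), Some(20));

        // Either: and_then chains computations that may fail with a reason.
        let parsed: Result<i32, std::num::ParseIntError> = "42".parse();
        assert_eq!(parsed.and_then(|v| Ok(v * 2)), Ok(84));
    }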
Don’t go off into the weeds about property testing; teach only one alien concept at a time. Those three types have super-simple implementations of (>>=), they’re universally familiar to everyone who’s not been frozen in Arctic sea ice since 1995, and it’s easy to go from ? to do-notation if you want to demystify the smug “monads are programmable semicolons” in-joke while you’re at it.
Then, once your reader is comfortable with the concept of nested inline functions as a control flow primitive, you can link them to your follow-up article about the performance implications of monadic combinators or whatever.
[0] The article acts as if this is a surprising and meaningful revelation, so I might be misunderstanding what’s actually being discussed, but when you say “monadic composition is Turing-complete” you mean something like “bind :: (a -> T b) -> T a -> T b is Turing-complete”, yes? I feel like among people who know what “Turing-complete” means, the knowledge of a Turing machine’s equivalence to function composition is well-known.
When intersected with “Rust developer”, do you think that’s still a large group…? If someone finds monad tutorials to be “overly technical” then they’re never going to make it past JavaScript, much less discover systems programming languages like Rust.
Yes, it’s a large group. The number of Rust people that come from Haskell or other FP languages is tiny.
This article is a new way of writing a confusing explanation of monads, which is that it starts off by diving deep into an obscure testing strategy that’s pretty much only used by people really into functional programming
Property-based testing is quite widely used by Rust projects! It’s not the most common way to test software, but it’s far from obscure. It’s also really effective at finding bugs in systems code. I haven’t done any professional work in FP languages, but I’ve used PBT extensively.
I even picked a very systems-y example, writing a production-grade sort function that’s resilient to comparator misbehavior. This is the kind of thing Rust developers enjoy.
and then slowly works its way back into the shallow waters of monads, then dives right into a discussion on how function composition is Turing-complete[0] and how exemplar reduction in property-based testing can have unpredictable runtime performance. You’ve got, like, three different articles there!
Yes, deliberately so. This kind of progressive complexity enhancement, with a tutorial for A also being a tutorial for B in disguise, is the style I teach in. It’s one I’ve practiced and refined over a decade. It doesn’t work for everyone (what does?) but it reaches a lot of people who bounce off other explanations.
If you want someone who can’t spell “GHC” to understand monads, there’s only three types you need
With respect, I quite emphatically disagree. I personally believe this approach is a colossal mistake, and I’m far from the only one to believe this (I’ve had a number of offline conversations about this in the past, and after I published this post a professor reached out to me privately about this as well). I deliberately picked PBT as one of the simplest examples of monadic bind being weird and fascinating.
With respect, I quite emphatically disagree. I personally believe this approach is a colossal mistake, and I’m far from the only one to believe this (I’ve had a number of offline conversations about this in the past, and after I published this post a professor reached out to me privately about this as well). I deliberately picked PBT as one of the simplest examples of monadic bind being weird and fascinating.
The problem with only using List, Option, and (perhaps to a lesser extent) Either as examples is that they’re “containers” (List is commonly understood; Option can be understood as “a list with length < 2”; Either can be understood as Option whose “empty” case provides a “reason” (e.g. an error message)). Containers come with all sorts of intuitions that make interfaces like Monad less appealing. For example, what’s the point of y = x.flat_map(f) compared to ordinary, everyday, first-order code like for (element : x) { y += f(element); }?[0]
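Here’s that comparison rendered in Rust (my transcription of the pseudocode; f is made up):

    // flat_map vs. an ordinary first-order loop; f maps one element
    // to several.
    fn f(n: i32) -> Vec<i32> {
        vec![n, n * 10]
    }

    fn main() {
        let x = vec![1, 2, 3];

        // "Container" intuition: just loop and accumulate.
        let mut looped = Vec::new();
        for element in &x {
            looped.extend(f(*element));
        }

        // Monadic bind on lists: the same thing as one expression.
        let bound: Vec<i32> = x.iter().flat_map(|n| f(*n)).collect();

        assert_eq!(looped, bound); // [1, 10, 2, 20, 3, 30]
    }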
List (and Option) are definitely good anchors for our understanding, e.g. as sanity checks when reading a generic type signature or a commutative diagram; but we also need examples that aren’t “containers”, to show why these interfaces aren’t just some weird alternative to “normal” looping. A set of examples which aren’t “containers”, but may still be familiar, are things which “generate” values; e.g. parsers, random generators, IO, etc.[1]. Those are situations where our intuition probably isn’t “just loop”[2], so other interfaces can get a chance[3]. The fact that Monad & friends apply to both “containers” and “generators” is then a nice justification for their existence. Once we’re happy that this is a reasonable interface, we can go further into the weeds by deriving some examples purely from the interface, to get a better feel for what it does/doesn’t say, in general (e.g. a “container” that’s always empty, i.e. a parameterised unit type; or a Delay type, which captures general recursion; etc.).
[0] Indeed, Scala’s for is syntactic sugar for monadic operations, similar to Haskell’s do. Although that does require extra concepts like yield and <-, which don’t appear in e.g. a Java for-loop; and may require understanding of monad-like-things (I can’t say for sure, since I first tried Scala after many years of Haskell programming!).
[1] Parsers and random generators are actually very closely related, e.g. we can think of a random generator as a parser that’s been given a random (but valid) input. Parser combinators are the most obvious way to understand what it means for parsers to be monads: they used to be quite academic, but I think are now common enough to be a motivating example for “why care”, even for those who tend to use other approaches. Random generators like those in QuickCheck (i.e. built using composition) seem much less common than e.g. only generating random ints, and operating on those directly; which makes them a less motivating example up-front. However, passing around PRNGs may be a good motivation for monadic state, and combining this with monadic parsing to get QuickCheck-style generators might be a nice climax :)
[2] We can do equivalent things with loops if we’re comfortable with yield; but (a) that’s a smaller subset of those that are comfortable with for, and (b) that rabbit hole leads more towards delimited continuations, algebraic effects, etc. which are alternatives to monads that are just as fascinating to think about :)
[3] I think the intuition for “generators” motivates the idea of composition. For “containers”, composition is merely an optimisation: we can just do multiple passes instead. Whereas for “generators”, it feels like we “don’t have the values yet”, so we need some way to plug them together before they’re “run”. IO is not a good motivator here, since imperative languages automatically compose statements for us. It seems better to come back to it after grokking parsers, random generators, etc.; even then it might be better to first describe an abstraction like a “task queue”, and only later introduce IO as a sort of “task queue for the whole language”.
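To ground the “generators” idea, here is a minimal Rust sketch of a QuickCheck-style generator with a monadic bind. All names here are invented, and the PRNG is a deliberately crude LCG just to keep the sketch self-contained:

    struct Rng(u64);

    impl Rng {
        fn next_u32(&mut self) -> u32 {
            // Tiny LCG step, purely for illustration; not a good PRNG.
            self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1);
            (self.0 >> 32) as u32
        }
    }

    // A generator is "a value we don't have yet": a function from a PRNG
    // to a value. Box<dyn Fn> keeps the sketch simple.
    struct Gen<T>(Box<dyn Fn(&mut Rng) -> T>);

    impl<T: 'static> Gen<T> {
        fn new(f: impl Fn(&mut Rng) -> T + 'static) -> Self {
            Gen(Box::new(f))
        }

        // Monadic bind: run self, then let the result pick the next generator.
        fn and_then<U: 'static>(self, f: impl Fn(T) -> Gen<U> + 'static) -> Gen<U> {
            Gen::new(move |rng| {
                let t = (self.0)(rng);
                (f(t).0)(rng)
            })
        }

        fn run(&self, rng: &mut Rng) -> T {
            (self.0)(rng)
        }
    }

    fn u32_below(n: u32) -> Gen<u32> {
        Gen::new(move |rng| rng.next_u32() % n)
    }

    fn main() {
        // "Length first, then that many elements": the classic case where
        // the next generator depends on a generated value, i.e. bind.
        let vec_gen = u32_below(5).and_then(|len| {
            Gen::new(move |rng| {
                (0..len).map(|_| rng.next_u32() % 100).collect::<Vec<u32>>()
            })
        });
        let mut rng = Rng(42);
        println!("{:?}", vec_gen.run(&mut rng));
    }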
List (and Option) are definitely good anchors for our understanding, e.g. as sanity checks when reading a generic type signature or a commutative diagram; but we also need examples that aren’t “containers”, to show why these interfaces aren’t just some weird alternative to “normal” looping. A set of examples which aren’t “containers”, but may still be familiar, are things which “generate” values; e.g. parsers, random generators, IO, etc.
Thanks for the thoughtful reply. I really love this way of looking at it, as well as your point later on that IO is not a good monad to introduce people to, because it is implicit in imperative code.
For me, there were two things that made monads really click:
Build Systems à la Carte, which draws a distinction between monadic and applicative build systems – this one first made me realize that monads are a general system property much more than a Haskell curiosity
The PBT example in the post, where generation isn’t hugely affected by monadic composition, but shrinking is
If you want someone who can’t spell “GHC” to understand monads, there’s only three types you need:
Lists
Maybe
Either
I dunno if it’s because I learned monads when they had only recently been discovered as the solution to purely functional IO, but I expect that many programmers who have vaguely heard about monads know that they are how you do IO in Haskell. So I think a practical monad tutorial should try to bridge the gap between monads as pure algebra (lists, maybe, either) and how they are useful for impure programming.
(I have noticed in several cases that many programmers find it hard to get to grips with very abstract ideas, if the ideas don’t come with concrete practical examples of how the abstraction is used in practice and what benefits the abstraction provides. This is especially true the less complicated the abstraction is, which is why monads are so troublesome.)
The problem with list/either/maybe is that they are all parametric types, so all the monad is able to do is faff around with the layout of the values. It’s hard for a tutorial to illustrate what benefit you get from an explicit monad as opposed to less polymorphic list/either/maybe combinators.
So I think a monad tutorial should show an example of something more imperative such as the state monad. That allows you to show monads in use with functions that do practical things with the container type, and how the monad sequences those functions. (Perhaps also emphasise Either as the exception monad.) It’s then only a small step to monadic IO.
ISTM that now that many languages have features like promises, there’s more relevant common knowledge among imperative programmers than there used to be. This might be an easier on-ramp than what the initial discoverers had to struggle through. A Promise<Promise<a>> can be flattened into a Promise<a>, and you can write a version of bind. Thinking of bind as “map + join” also helps avoid the “but I don’t have an a so how can I run my a -> m b function?” that I struggled with when understanding monads as they applied to things other than concrete data structures.
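In Rust, with Option standing in for Promise, “bind = map + join” fits in a few lines (a sketch, my own example):

    fn main() {
        let x: Option<i32> = Some(3);
        let f = |n: i32| if n > 0 { Some(n * 2) } else { None };

        // Mapping an a -> m b function over m a nests the type: m (m b)...
        let nested: Option<Option<i32>> = x.map(f);

        // ...and join (Option::flatten) removes one layer, recovering bind.
        assert_eq!(nested.flatten(), x.and_then(f));
        assert_eq!(x.and_then(f), Some(6));
    }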
Dealing with your footnote, even as someone fairly familiar with function composition, I wouldn’t immediately notice that “bind :: (a -> T b) -> T a -> T b” qualifies as function composition while “fmap :: (a -> b) -> T a -> T b” does not. Sure, if I sat down and wrote it out, it would become clear quickly, but leaving this as an exercise for the reader is just poor pedagogy.
Would it be clearer if you considered composeM :: (a -> T b) -> (b -> T c) -> (a -> T c)? Because you can write it in terms of bind and vice-versa, provided you also have pure. (Final parens are redundant but added for clarity.)
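Specialized to Option in Rust, a sketch of that composeM (names are mine):

    // Kleisli composition: feed the Some value of f's output into g,
    // short-circuiting on None.
    fn compose_m<A, B, C>(
        f: impl Fn(A) -> Option<B>,
        g: impl Fn(B) -> Option<C>,
    ) -> impl Fn(A) -> Option<C> {
        move |a| f(a).and_then(|b| g(b))
    }

    fn main() {
        let parse = |s: &str| s.parse::<i32>().ok();
        let recip = |n: i32| if n != 0 { Some(100 / n) } else { None };

        let pipeline = compose_m(parse, recip);
        assert_eq!(pipeline("4"), Some(25));
        assert_eq!(pipeline("0"), None);  // second step rejected the value
        assert_eq!(pipeline("hi"), None); // first step rejected the value
    }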
Yep, thank you for clarifying. But I still think that not preserving the number of elements of a container is not the same thing as being unpredictable. For example, there are things for which you can define monadic bind (e.g. functions, https://hackage.haskell.org/package/base-4.12.0.0/docs/src/GHC.Base.html#line-828 ), for which binding means piling applications on top of each other.
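For instance, that bind for functions, (m >>= f) r = f (m r) r in the linked source, can even be sketched in Rust (a toy example, names mine), where binding runs both functions against the same argument:

    // "Reader" bind: run m on the environment, then let f use the result
    // and read the same environment again.
    fn bind<E: Copy, A, B>(
        m: impl Fn(E) -> A,
        f: impl Fn(A) -> Box<dyn Fn(E) -> B>,
    ) -> impl Fn(E) -> B {
        move |e| f(m(e))(e)
    }

    fn main() {
        let double = |cfg: i32| cfg * 2;
        let add_env = |n: i32| -> Box<dyn Fn(i32) -> i32> {
            Box::new(move |cfg| n + cfg)
        };
        let combined = bind(double, add_env);
        assert_eq!(combined(10), 30); // double(10) = 20, then 20 + 10
    }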
Do you think it’s perverse that, when I first read a Rust tutorial, I was perplexed about not putting semicolons at the end of a block, until I decided that semicolons are just monadic bind? (I don’t think I got around to writing any Rust.)
It is true that semicolons are a monadic bind, but I also think that lens confuses more than it enlightens :)
It’s the sunglasses from They Live but they show you the monads underlying all computation
Wayfarers by Leibniz?