This was my first time doing a public talk. I tried to summarize a pretty big topic in a short time, I hope the details didn’t get too lost. Any feedback would be much appreciated!

Thanks for the talk, it was interesting! Here’s some feedback about how the presentation felt for me:

The first section (what monads are and how they are everywhere) was very clear, but a little slow.

I don’t understand why algebraic effects are all that desirable yet

I don’t understand how we get from “annotate what errors you throw” to “bundling effects together” or “effect inference”. In fact, I don’t know what bundling effects together means in the context of exceptions; it sounds like effects are actually something quite different from exceptions?

It felt like you were tackling the usability of monads quite a lot, but then didn’t talk about that for algebraic effects beyond IDE support. It also felt notable that the extra features of e.g. Haskell’s `do`, LINQ, or Elixir’s `with` expression didn’t come into it, since they seem like big usability benefits.
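For context, Haskell’s `do` is sugar over `>>=` (bind); a minimal sketch of that desugaring shows the usability gap being referred to. (The `safeDiv` helper below is made up for illustration.)

```haskell
-- A small partial operation to chain.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Explicit binds: every step is a lambda naming the previous result.
calc1 :: Int -> Int -> Int -> Maybe Int
calc1 a b c = safeDiv a b >>= \x -> safeDiv x c >>= \y -> Just (y + 1)

-- do-notation: the same pipeline, but it reads top-to-bottom.
calc2 :: Int -> Int -> Int -> Maybe Int
calc2 a b c = do
  x <- safeDiv a b
  y <- safeDiv x c
  pure (y + 1)
```

Both versions short-circuit to `Nothing` on division by zero; the difference is purely one of readability, which is exactly the kind of sugar LINQ and `with` also provide.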

In the Q&A you talked about “subtype inference”, which sounds a lot like the structural typing in TypeScript, and similar to the structural typing in Go and OCaml.

Thanks for the feedback! Those are very valid points.

I didn’t talk that much about the usability of algebraic effects because, right now, there isn’t much. Of course the entire comparison is unfair, since monads are well explored while algebraic effects aren’t. And since I wanted to keep this somewhat short and understandable, a lot of cool things about monads didn’t get mentioned.

Excellent talk, great introduction to monads & algebraic effects!

You touched on the fact that before monads, Haskell used streams for all IO, and mentioned that this API was quite weird/annoying. Other than your talk, I’ve only seen some very high-level descriptions of this online (which I remember coming to roughly the same conclusion). Do you happen to know what actually made this API so weird/annoying?

In the current codebase I am working on, we use RxJS observables for all app state & communication between functions/modules (in a similar pattern to Cycle.js, if you are familiar with that). In this world you have functions that transform source streams into sink streams, which I think is a really nice pattern. Dataflow is 100% explicit, i.e. you can tell what external resources a function can access just by looking at which streams are passed in as parameters.

Having worked with Haskell previously, I actually prefer this style of IO: it’s much easier to combine multiple input streams from different sources (i.e. “read effects”) using stream operators than by composing multiple monads (you touched on this with monad transformers).
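The source-to-sink pattern described here can be sketched with plain lazy lists standing in for observables (all names below are made up for illustration):

```haskell
-- Two "source streams" and one "sink stream". The function's
-- parameters are the only external resources it can touch, so the
-- dataflow is visible in the type signature.
data Click = Click { clickX :: Int, clickY :: Int }
newtype Tick = Tick Int  -- e.g. a timer stream

-- Combine two input streams into one output stream of log lines.
render :: [Click] -> [Tick] -> [String]
render clicks ticks = zipWith describe clicks ticks
  where
    describe (Click x y) (Tick t) =
      "t=" ++ show t ++ ": click at (" ++ show x ++ "," ++ show y ++ ")"
```

Combining the two sources is just `zipWith`; with monads the analogous composition would already need a transformer stack or similar machinery.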

On your blog, I saw that you mention Pointless, which takes a similar approach: its equivalent of `main` (`output`) is a lazy list of commands. Elm also has a similar idea for IO, with ports.

So yeah, I’m wondering if you had any insight into why Haskell moved away from this. Are we just re-inventing the wheel badly?

I don’t, or at least not more than you already have. I’ve never tried this myself, and have mostly just repeated what I’ve read in the HOPL paper on Haskell, in section 7: https://dl.acm.org/doi/10.1145/1238844.1238856
Also relevant is the video from the HOPL conference, starting at around minute 16. There it’s called a “prolonged embarrassment”.

I also read through the relevant sections in the Haskell Report 1.0 and 1.3. They are surprisingly short! But still, the monad version is simpler and shorter.
I do feel like stream-based effects might not have gotten a fair try in early Haskell, though. The “continuation-based IO” feels like the wrong abstraction. While I was sometimes annoyed by Elm’s ports, they are absolutely practical and very far from a “prolonged embarrassment”. It also feels like these effects-as-streams fit UIs very well, especially web UIs, where you (mostly) don’t have parallelism.
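For a rough idea of what the pre-monad style looked like, here is a toy sketch in the spirit of Haskell 1.0’s `type Dialogue = [Response] -> [Request]` (simplified; these are not the exact historical constructors):

```haskell
-- A toy version of pre-monad "stream IO": the whole program is a pure
-- function from the runtime's responses to the program's requests.
data Request  = PutLine String | GetLine   deriving (Eq, Show)
data Response = Ok | Line String           deriving (Eq, Show)

-- Ask for a name, then greet. The program must emit each request
-- *before* inspecting the matching response (note the conses before
-- the case), and keep the two lists aligned by hand; getting that
-- alignment wrong was easy to do.
program :: [Response] -> [Request]
program resps =
  PutLine "name?" :
  GetLine :
  case resps of
    (Ok : Line name : _) -> [PutLine ("hello " ++ name)]
    _                    -> []
```

Here `program [Ok, Line "ada"]` produces `[PutLine "name?", GetLine, PutLine "hello ada"]`; laziness is what lets the runtime interleave producing responses with consuming requests.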

This was a great presentation. Thank you for giving a brief mention to the algebraic laws; many monad explainers omit them entirely.
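For reference, the laws in question, spot-checked on `Maybe` (checks at a few values, of course, not a proof):

```haskell
-- The three monad laws, spot-checked for Maybe at specific points.

leftId :: Int -> Bool
leftId a = (pure a >>= f) == f a                           -- left identity
  where f x = Just (x + 1)

rightId :: Maybe Int -> Bool
rightId m = (m >>= pure) == m                              -- right identity

assoc :: Maybe Int -> Bool
assoc m = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))   -- associativity
  where f x = if x > 0 then Just (x * 2) else Nothing
        g x = Just (x - 1)
```

These are the equations that justify refactorings like flattening nested `do` blocks; a “monad” that breaks them will behave surprisingly under exactly those refactorings.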

Note that continuations themselves, for any particular continuation monad, cannot be encoded as an algebraic effect. (The underlying reason is quite technical: algebraic effects need to be carried by ranked endofunctors, but the underlying endofunctors of continuation monads are unranked. See “Combining algebraic effects with continuations” for an exploration of the consequences.) I feel that this directly answers the question that frames the second half of your talk: no, it is not bad that monads appear everywhere. There are good reasons to believe that they naturally occur when talking about computation, and an implementation of at least one monad is necessary for a runtime which supports user-defined algebraic effects.
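For the curious, the continuation monad being referred to is the standard one; a minimal self-contained definition (equivalent to `Cont` from the `transformers` package):

```haskell
-- A computation is a function that takes "the rest of the program"
-- (a -> r) and produces the final result r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure a = Cont (\k -> k a)
  Cont cf <*> Cont ca = Cont (\k -> cf (\f -> ca (k . f)))

instance Monad (Cont r) where
  Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))
```

The result type `r` is baked into the functor, which is roughly why these monads fall outside what a ranked algebraic-effect signature can express.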

Great talk! Very clear explanation of monads and their drawbacks w.r.t. composition.

Also, CubiML looks very interesting! The biunification seems to be very similar to something I attempted but ultimately abandoned. I might have to take another look at it, now.

For anyone wanting to read the paper without an ACM Digital Library subscription, here’s a freely available copy hosted by Microsoft’s research blog.

And the evolution of the Haskell I/O system is summarized with code examples on p. 24 of the PDF.

Great talk! I couldn’t find any info on the last language you mentioned (QBML?), do you have any links?

It’s spelled “CubiML”: https://github.com/Storyyeller/cubiml-demo

The best resources are these great blog posts: https://blog.polybdenum.com/2020/07/04/subtype-inference-by-example-part-1-introducing-cubiml.html

Totally unrelated but you have come to a very Leibnizian conclusion.

That’s interesting. How so?

A joke.

https://en.wikipedia.org/wiki/Monadology

I know Leibniz would agree with “monads are everywhere”. I was hoping to hear something about Leibniz thinking “Maybe that’s bad”; that would be news to me. :)

That would definitely be exciting :)