1. 20

  2. 5

    This reminds me deeply of the quote: “A monad is just a monoid in the category of endofunctors, what’s the problem?” The post tries to be approachable, and the solution is very enticing, but unfortunately it is out of reach for most programmers (myself included, at least without a lot of effort).

    1. 8

      I don’t think the post tries to be approachable, which is fine. Not every article is for everybody. Someone should, however, translate it for people who just want to know when and how to use it, and include a link to this article as a reference.

      1. 2

        Yes, exactly. :)

        1. 2

          Someone like that should just port it to Python, or cringe, JavaScript.

          1. 2

            You jest, but have you seen Raganwald’s blog? It’s terrific (in both senses of the word)!

        2. 5

          In context, that quote was not meant to be taken seriously.

          1. 5

            Sort of briefly, you can think of a semiring as talking about path traversal. Each element of a semiring a, b, c, … is a “step” in the path and then a * b means “a then b” and a + b means “a or b, non-deterministically”.

            “Starring” adds the notion of “reflexive, transitive closure”, which means “all of the points you can reach by taking this path 0 or more times” (“plussing” is the same, except it requires at least 1 step). This lets you “complete” a graph walk.

            For instance, let’s say you’ve got a set S and you want to know if it’s entirely reachable from a single point x by doing step p over and over again. You’d ask if S = star(p)(x). What about checking to see that there are no loops? That’s forall x in S . x not in plus(p)(x).
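            A minimal Python sketch of the above (the relation-as-dict encoding and the names are mine, not the article’s): a step p is a dict from a node to the set of nodes one application of p can reach, so “then” is relational composition, “or” is union, and star/plus are the closures just described.

```python
def then(a, b):
    # "a then b": relational composition - take a step of a, then a step of b
    return {x: {z for y in ys for z in b.get(y, set())} for x, ys in a.items()}

def alt(a, b):
    # "a or b": non-deterministic choice - the union of the two step relations
    return {x: a.get(x, set()) | b.get(x, set()) for x in a.keys() | b.keys()}

def star(p):
    # reflexive-transitive closure: everything reachable in 0 or more steps of p
    def reach(x):
        seen, frontier = {x}, {x}
        while frontier:
            frontier = {y for f in frontier for y in p.get(f, set())} - seen
            seen |= frontier
        return seen
    return reach

def plus(p):
    # transitive closure: everything reachable in 1 or more steps of p
    return lambda x: {z for y in p.get(x, set()) for z in star(p)(y)}

p = {'a': {'b'}, 'b': {'c'}}
S = {'a', 'b', 'c'}
assert star(p)('a') == S                           # S entirely reachable from 'a'
assert all(x not in plus(p)(x) for x in S)         # and there are no loops
assert 'a' in plus({'a': {'b'}, 'b': {'a'}})('a')  # this relation, though, loops
```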

            So, obviously, “path following” makes good sense for graph algorithms. It also does in parsing, since parser combinators show us that parsing is made of a bunch of fundamental “elemental” parsers composed with “next” and “or”.
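            That parser-combinator view fits the same shape; here is a toy encoding of my own (not the article’s code), where a parser maps an input string to the set of possible remaining inputs, so the non-determinism is literal:

```python
def lit(c):
    # "elemental" parser: consume the single character c, or fail (empty set)
    return lambda s: {s[1:]} if s.startswith(c) else set()

def seq(p, q):
    # "p next q": run q on every remainder p can leave behind
    return lambda s: {r2 for r1 in p(s) for r2 in q(r1)}

def alt(p, q):
    # "p or q": try both, keep every outcome
    return lambda s: p(s) | q(s)

def many(p):
    # the star: apply p zero or more times, collecting every reachable remainder
    def mp(s):
        seen, frontier = {s}, {s}
        while frontier:
            frontier = {r for f in frontier for r in p(f)} - seen
            seen |= frontier
        return seen
    return mp

assert seq(lit('a'), lit('b'))('ab') == {''}   # "a then b" consumes "ab" fully
assert alt(lit('a'), lit('b'))('b') == {''}    # "a or b" accepts either
assert '' in many(lit('a'))('aaa')             # star: "aaa" is fully matched
```

            Note that `many` is literally the same fixed-point loop as the graph walk, with remainders of the input playing the role of nodes.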

            And that’s pretty much what’s going on here. Once we’ve got transitive closure we can “see” lots of different problems as just being “find the transitive closure of X when you see it as being made of ‘then’s like this and ‘or’s like that”.

            1. 3

              It took me a sec to understand “a step in the path” to mean “a direction you can walk”, and not “a particular spot you might have walked to”.

              Thank you, this is very helpful.

              Edit to add: You mention parser combinators, which usually implement LL grammars or some variant thereof. There’s also a pretty direct mapping from the combinators to nondeterministic finite automata which, if you draw them out, makes your “path” metaphor very concrete. Hopefully I won’t forget what a semiring is now!

              Transitive closures also come up in LR parse-table generation, although whether that’s for the same underlying reasons as in LL isn’t obvious to me. So, yes, this is probably a very broadly useful algorithm. :)

            2. 4

              Once, while trying to explain monads to a relative, and despairing of finding my own words that they could understand, I gave them that quote. They had a very limited programming background, and a doctorate in mathematics.

              The quote cleared things up immediately.

              I definitely see this article as being on the math side of that profound contextual divide. I don’t have the background knowledge to make sense of it. Which is disappointing to me, because the part about computing transitive closures is tantalizing - that comes up a lot in parsing, and the terminology in the article suggests the author is aware of this application.

              Honestly, part of what makes it impenetrable to me may be its presentation as literate Haskell. The goal of literate programming is to make code accessible to a larger audience, but it hasn’t done that here - with more prose and less code, I wouldn’t have to keep stopping to check the definition of a semiring. :(

              1. 4

                > Which is disappointing to me, because the part about computing transitive closures is tantalizing - that comes up a lot in parsing, and the terminology in the article suggests the author is aware of this application.

                Yes! I want a simpler way to do this type of stuff. I long for a simpler way!

                > The goal of literate programming is to make code accessible to a larger audience, but it hasn’t done that here

                I respectfully disagree. If even one person was able to understand this as a result of it being a literate program, then it is more accessible to a larger audience, even if it’s an amphitheater of 1. (trollface)

                More seriously, however, I believe that if I understood the domain of semigroups and Kleene algebras (I might naively guess these relate to operations defined on formal languages, simply based on the name) a bit more, and discovered this file in my code base, I’d have an easy time understanding what was going on. In that regard, this literate program would have been successful.

                It’s very much like a technical paper. The author assumes the reader has some context, because if they don’t, the author can’t usually report about the research in less than 90 pages. It just sucks that the necessary context continues to grow and grow as we discover more and more interesting things…

                1. 4

                  That’s absolutely fair. Honestly, my personal plan is to study more math. (Somebody here suggested abstract algebra as a useful place to start, a while ago.) It sounds like we’re in agreement that it’s frustrating meanwhile.

                  1. [Comment removed by author]

                    1. 1

                      > me? :)

                      Probably. :)

                      I’d definitely be excited about that, whether as an article or a talk.

                      1. 1

                        > i really want to write up an article/talk on “abstract algebra for software engineers” as i think abstract algebra could be very easily motivated to folks with a pre-existing computery background.

                        I would love this.

                  2. 4

                    > The quote cleared things up immediately.

                    Really? I find that rather doubtful. You still need to know what structure to endow endofunctors with, and endofunctors of which category? And what are the arrows in the endofunctor category? What kind of natural transformations? Just stating that quip doesn’t completely define a monad.

                    I assume the actual conversation went a bit differently than you describe it now.

                    Also, what kind of PhD does your relative have? Most math PhDs I know only dabble in category theory, and virtually all of my acquaintances are more interested in homological algebra (e.g. abelian categories) than purely abstract category theory. There’s a wide cultural gap between mathematicians and Haskellers.

                    1. 4

                      Yes, in full honesty I then had to explain how endofunctors are routinely used in programming (though they’re never called that, which I also said). I know just enough about what the word means to clarify that, and the relative definitely felt afterward that they understood what a monad is, at least in the way that they wanted to. But I’ve often observed that “that makes sense” means something different to mathematicians than it does to me, and of course it doesn’t mean that they learned anything about the practical aspects of factoring code to separate concerns cleanly.

                      It was still a lot more productive than the conversation that preceded it. (“What’s a statement? What is the significance of control flow to paradigms of programming? What is a paradigm of programming?”)

                      They had spent a couple years on category theory, apparently. I don’t know enough to understand any explanation they’d give more specific than that, so I’ve never asked. I accept your assertion that most mathematicians would still find it opaque; I think I got lucky.

                      I’m amused to be relating this story in such detail. :) I meant it as a throw-away remark to bring up the topic of people having different contexts. But it is true all the same.

                2. 3

                  Does law 8 (also law 7) really need to be explicitly stated? Can’t it be derived from laws 1 & 2? Also, why isn’t a . b = b . a defined?