The writer pattern is cool for several reasons. As the author mentions, the writer monad can (for balanced-ish data structures) help break the problem into balanced pieces and improve complexity. A purely compositional view (rather than a stateful / mutating one) is more elegant and easier to reason about. With listen, one can introspect the intermediate pieces of the monoid as they are built (which you could do anyway if you weren’t using Writer, but it’s still nice).
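A minimal sketch of what I mean (the Tree type and collect function are my own toy example, not the post’s):

```haskell
import Control.Monad.Writer  -- mtl, a GHC boot library

-- Toy example: collect leaf values of a tree into a list (a monoid).
data Tree = Leaf Int | Branch Tree Tree

collect :: Tree -> Writer [Int] ()
collect (Leaf n)     = tell [n]           -- emit one piece of the monoid
collect (Branch l r) = do
  (_, leftLog) <- listen (collect l)      -- introspect the left subtree's
  collect r                               -- log; it still flows through

exampleTree :: Tree
exampleTree = Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3))
-- execWriter (collect exampleTree) accumulates [1, 2, 3]
```

Here listen hands you the left subtree’s partial log without removing it from the final result, which is the introspection I’m referring to.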
I recently ran into this when I was defining control flow graphs while proving things about static analysis in Agda. My first thought was to use stateful mutation (“add a node, connect it to another node” etc.). My graphs were dependently typed, though, which highlighted one more problem with the mutating approach: it’s much harder to write a well-typed stateful, mutating function than one based on monoids! In the end, I ended up defining two monoidal operations on my graphs, and building them inductively like the author describes.
However, the monoid instance for graphs (which concatenates edge and node lists) is a good performance counter-example. A linear traversal (like one done by a stateful operation) could append one edge / one node at a time to the lists. However, when building up inductively, the concatenation in the monoid instance, combined with the purity of the data structures, requires numerous re-traversals of the lists to create concatenated copies. This suggests to me that even with a balanced inductive traversal, the total work might jump from the O(n) of a single mutable pass to O(n log n), which is slower than the mutable version.
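To make the cost concrete, here’s a sketch of the kind of monoid instance I mean (a made-up graph-as-lists type, not the post’s code or my Agda one):

```haskell
-- Hypothetical graph represented as plain node and edge lists.
data Graph = Graph { nodes :: [Int], edges :: [(Int, Int)] }

instance Semigroup Graph where
  -- (++) copies its entire left argument, so each combine re-traverses
  -- everything accumulated so far on the left-hand side.
  Graph n1 e1 <> Graph n2 e2 = Graph (n1 ++ n2) (e1 ++ e2)

instance Monoid Graph where
  mempty = Graph [] []

singleNode :: Int -> Graph
singleNode n = Graph [n] []
```

A left-nested fold of singletons degrades to quadratic appends; a balanced combine keeps each element to O(log n) copies, which is where the O(n log n) total comes from. A difference-list or Data.Sequence representation would avoid the copying, but that’s a change to the monoid, not to the pattern.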
I’m in good faith trying to follow the author’s larger point, so do not take this as a refutation but rather as a question to make sure I followed the post:
In a more mutate-y language, is their solution the same as having a state accumulator object passed around, where e.g. putNode is this.nodeSet.insert(node)?
If that is the case, what insight does the monoid+Writer framing offer beyond that?
Is it that the traversals of the Trees are separable and having this state accumulation interface is sufficient, without needing any read operations? Or is it something simpler like how they get the implementations of the accumulations for free via the ‘deriving’s?
(Rereading, I guess it is maybe the first one? From the post, “The insight here is that everything we needed to collect was monoidal”?)
Both of the things you said are true, but neither is the real point.
The author, like most programmers, is interested in replacing lots of specialized, explicit code with a smaller, easier-to-reason-about slice of specialized code that implements an interface accepted by a more general-purpose, battle-tested chunk of code that is someone else’s job to make work right.
The real value is not the Monoid itself, but instead how it plugs into other standard interfaces and language constructs. By implementing Monoid and using the Writer monad, you can eliminate all the explicit code involved in threading the result state in and out of functions and combining partial results.
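As an illustration of the threading that gets eliminated (a toy example of my own, not the author’s code):

```haskell
import Control.Monad.Writer  -- mtl, a GHC boot library

data Tree = Leaf Int | Branch Tree Tree

-- Explicit version: every function must accept and return the accumulator.
collectExplicit :: Tree -> [Int] -> [Int]
collectExplicit (Leaf n)     acc = acc ++ [n]
collectExplicit (Branch l r) acc = collectExplicit r (collectExplicit l acc)

-- Writer version: the Monoid instance and the monad combine partial
-- results; no accumulator appears in the signature or the body.
collectW :: Tree -> Writer [Int] ()
collectW (Leaf n)     = tell [n]
collectW (Branch l r) = collectW l >> collectW r
```

Both produce the same list; the difference is that the plumbing in the first version is hand-written at every call site, while in the second it lives once in the Monoid and Monad instances.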
So I read this article the same way I would read an article detailing how one would model a solution by implementing their language’s Iterable interface, or how their code could be improved by leaning on their Dependency Injection container and implementing a Java Bean.