I like it! Some quick comments:
> For example, a structure with a commutativity law … permits an implementation for ⊕ or integer multiplication but rejects matrix multiplication.

There’s a good intuition here. Matrices are representations of linear functions on vector spaces, and matrix multiplication is just function composition, which is noncommutative.
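(To make that concrete, a minimal Haskell illustration using the standard Endo wrapper, which makes functions a monoid under composition; inc and dbl are made-up names:)

```haskell
import Data.Monoid (Endo (..))

-- Endo makes functions a -> a a monoid under composition:
-- associative (so a lawful monoid) but visibly noncommutative.
inc, dbl :: Endo Int
inc = Endo (+ 1)
dbl = Endo (* 2)

-- appEndo (inc <> dbl) 1 == 3   -- (+1) . (*2)
-- appEndo (dbl <> inc) 1 == 4   -- (*2) . (+1)
```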
> closed⁵ binary⁵ associative operation that acts on that type

These both point to the same footnote.
> we typically use <> (or even +) to denote this operation.

Is this that common? I thought people reserved + for operations that are both associative and commutative.

> The power of an identity is that there always exists a default or a known empty. Monoids let us “drop the option”:

There’s a small tradeoff here: since empty ∈ T, there’s no way of knowing whether the operation was invalid or if it was a valid operation that resolved to empty.
Depending on what you’re doing, you may want to keep those separate.
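(A minimal Haskell sketch of both halves of that tradeoff; combineMay and combineAll are made-up names, and combineAll is just the standard mconcat:)

```haskell
import Data.Monoid (Sum (..))

-- With only a Semigroup, combining a possibly-empty list must
-- keep the option around:
combineMay :: Semigroup a => [a] -> Maybe a
combineMay []       = Nothing
combineMay (x : xs) = Just (foldl (<>) x xs)

-- A Monoid's identity lets us drop it:
combineAll :: Monoid a => [a] -> a
combineAll = foldr (<>) mempty   -- i.e. mconcat

-- The tradeoff: an empty input and a valid input that happens to
-- resolve to empty are indistinguishable in the result.
-- combineAll ([]     :: [Sum Int]) == Sum 0
-- combineAll [Sum 0, Sum 0]        == Sum 0
```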
Everybody talks about semigroups and monoids but nobody talks about poor ol’ groups! (I mean there are reasons for that, groups are much less common in practice, but still. Groups are cool.)
Thanks!
I didn’t know that + typically implies commutativity. I’ll update the post!
I find that groups don’t materialize too often when I’m writing software. The only time I came across an interesting group: styling (think atomic CSS classes), e.g. StartButton := FinishButton <> (-Round) <> Green. You can model it with a pair of sets of your base styles acting like a sort of “free commutative group”. The only issue with this is that (-Round) only really has meaning as a way to annihilate the roundness of some style; it can’t really stand on its own. Regardless, I have used the styling group on a web project, and I have to say being able to subtract out styles is useful!
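(A sketch of one way to make those group laws exact: swap the pair of sets for a Map with integer counts, i.e. the free abelian group on class names. Style, invert, and cls are made-up names:)

```haskell
import qualified Data.Map.Strict as Map

-- Hypothetical encoding: each class name carries an integer count;
-- negative counts are pending subtractions, like (-Round).
newtype Style = Style (Map.Map String Int)

instance Semigroup Style where
  Style a <> Style b = Style (Map.filter (/= 0) (Map.unionWith (+) a b))

instance Monoid Style where
  mempty = Style Map.empty

-- Every style has an inverse: s <> invert s == mempty.
invert :: Style -> Style
invert (Style m) = Style (Map.map negate m)

cls :: String -> Style
cls c = Style (Map.singleton c 1)

-- startButton = finishButton <> invert (cls "Round") <> cls "Green"
```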
Offhandedly (read: easily subject to counterexamples) I’m wondering if truly isolated transactions could be modeled as a group, regardless of whether there would be any gains in doing so. Unlike the general case of state transitions in a random-access machine, the context of a transaction has reversion to the prior state as a default, so within an uncommitted transaction every operation has an inverse in the form of “cancel this statement.”
And, embarrassingly, on immediate consideration, associativity can’t be guaranteed, so scratch that!
A.k.a., the semipredicate problem.
+1
I still don’t get it. What’s the point of associativity? How is it related to programming? Is it just fancy syntactic sugar?
And monoids: what’s the purpose of returning Nothing? The pipeline isn’t broken, great. But what’s the point? With exceptions you at least get a stack trace.
Associativity turns trees into sequences. Monoids are essentially things that can be modeled as being “in sequence” where composition is putting one sequence in front of the other.
Without associativity, what you’re left with can have arbitrary branching. That’s very useful when you want it, but it’s much more complex and it’s much harder to say anything about “all branching things” because they’re so much more variable.
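(A small sketch of that claim; Tree, flatten, and foldTree are made-up names:)

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)

-- The sequence of leaves, ignoring the branching structure.
flatten :: Tree a -> [a]
flatten (Leaf x)     = [x]
flatten (Branch l r) = flatten l ++ flatten r

foldTree :: Semigroup a => Tree a -> a
foldTree (Leaf x)     = x
foldTree (Branch l r) = foldTree l <> foldTree r

-- Associativity is exactly the guarantee that the branching doesn't
-- matter: for every tree t,
--   foldTree t == foldr1 (<>) (flatten t)
-- (flatten never returns an empty list, so foldr1 is safe here).
```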
Not applicable to real life.
I disagree. It’s just not a “thing”. It’s a shape. In programming awareness of common shapes helps you to write better code. It’s hard to say much more than that without diving into a larger example.
I recommend Brent Yorgey’s “Monoids: Theme and Variations” as a set of examples.
This is a good explanation of what they are, but the more important question (imho) is why they’re useful.
What real-world use cases do they help solve?
If an operation is associative, then it’s trivially parallelizable. If an operation has an identity element, then you can turn the operation into a no-op, which is handy for pipelining.
Thanks, that’s what I meant. But are there other examples? Because aiding parallelization is the only use-case I ever see for it, and it’s already somewhat covered by the map-reduce pattern.
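(One dependency-free sketch of the reduction that associativity licenses, with mempty as the harmless no-op; this chunksOf is hand-rolled, not the one from the split package:)

```haskell
-- Split a list into chunks of at most n elements (assumes n > 0).
chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = take n xs : chunksOf n (drop n xs)

-- Because <> is associative, each chunk can be reduced independently
-- (e.g. on its own core) and the partial results combined in any
-- grouping; mempty makes an empty chunk a harmless no-op.
reduceChunked :: Monoid a => Int -> [a] -> a
reduceChunked n = mconcat . map mconcat . chunksOf n
```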
What a cool idea!
Reminds me of Monoids: Theme and Variations (Functional Pearl).
Yes! Lists are great for this because you can treat them as having 0 or more results.
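(A tiny sketch of the 0-or-more-results pattern with the list monoid; lookupAll is a made-up name:)

```haskell
-- The list monoid: [] is "no results", (<>) is concatenation, so
-- failure and success compose with no special cases.
lookupAll :: Eq k => k -> [(k, v)] -> [v]
lookupAll k = foldMap (\(k', v) -> if k' == k then [v] else [])

-- lookupAll 2 [(1,"a"), (2,"b"), (2,"c")] == ["b","c"]
-- lookupAll 9 [(1,"a")]                   == []
```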