Not Haskell specific, but the Haskell style of architecture shows up a lot in distributed systems. There is a framework from Twitter called Summingbird where the base unit of computation is a monoid, as described in this post. This is convenient because you can scale a monoid from one machine to 1000. You can also take one pipeline that is a monoid and smash it together with another one, and you still get a monoid out the other side, so all the nice properties still apply.
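A minimal sketch of that composition property in Haskell (the `Stats` type and `observe` function are made up for illustration): pairing two monoidal aggregations gives a new monoid for free, and because `(<>)` is associative, partial results computed on different machines can be merged in any grouping.

```haskell
import Data.Monoid (Sum(..), getSum)

-- Two aggregations whose results are monoids: a count and a running total.
-- A pair of monoids is itself a monoid; (<>) combines componentwise.
type Stats = (Sum Int, Sum Int)  -- (count, total)

-- Map one input event to its monoidal summary.
observe :: Int -> Stats
observe x = (Sum 1, Sum x)

main :: IO ()
main = do
  -- Each "machine" folds its own shard of the data...
  let machineA = foldMap observe [1, 2, 3]
      machineB = foldMap observe [4, 5]
      -- ...and the partial results merge with the same operation.
      combined = machineA <> machineB
  print (getSum (fst combined), getSum (snd combined))  -- (5, 15)
```

The same `foldMap`/`(<>)` pattern works whether the fold runs in one thread or is sharded across a cluster, which is the scaling argument above in miniature.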
Another case is resolvable datatypes in distributed systems, such as CRDTs. These types form a lattice, and lattices compose: if you make a new type that contains two other lattices, the new type is also a lattice, and you can resolve it by recursively applying the component resolution functions. You don't have to invent a new resolution concept for the composite type; the pieces compose and the result keeps the property of being resolvable.
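Here is a sketch of that idea in Haskell. The `Lattice` class, `MaxInt`, and `GCounter` types are illustrative names, not from any particular library (a real G-Counter would look similar, keyed by replica ID); the point is the last instance, where a pair of lattices is resolved by merging each component.

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- A join-semilattice: merge is associative, commutative, and idempotent,
-- so replicas can exchange and merge state in any order.
class Lattice a where
  merge :: a -> a -> a

-- Taking the max of two Ints is a lattice.
newtype MaxInt = MaxInt Int deriving (Show, Eq)
instance Lattice MaxInt where
  merge (MaxInt a) (MaxInt b) = MaxInt (max a b)

-- A grow-only counter (G-Counter CRDT): one count per replica,
-- merged by taking the per-replica maximum.
newtype GCounter = GCounter (Map String Int) deriving (Show, Eq)
instance Lattice GCounter where
  merge (GCounter a) (GCounter b) = GCounter (Map.unionWith max a b)

-- Composition: a pair of lattices is a lattice, resolved by
-- recursively applying each component's merge.
instance (Lattice a, Lattice b) => Lattice (a, b) where
  merge (a1, b1) (a2, b2) = (merge a1 a2, merge b1 b2)

main :: IO ()
main = do
  let r1 = (MaxInt 3, GCounter (Map.fromList [("a", 2)]))
      r2 = (MaxInt 5, GCounter (Map.fromList [("a", 1), ("b", 4)]))
  print (merge r1 r2)
```

No new resolution logic was written for the pair: the `(a, b)` instance is derived entirely from its parts, which is the composability claim above.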
I understand the theory behind this, but can someone give a concrete example of how this applies in a Haskell application?