The first homomorphism theorem states:

If h is a list homomorphism, then there is an associative operator ⊙ with identity e and a function f such that
`f xs = reduce ⊙ e (map f xs)`

I suspect the f on the LHS should be an h. In Haskell, we might replace `reduce ⊙ e` with a call to `fold :: (Monoid m, Foldable t) => t m -> m`, at which point every list homomorphism can be expressed as a call to `foldMap :: (Monoid m, Foldable t) => (a -> m) -> t a -> m`.
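As a small illustration of that factoring (my own sketch, not from the post): `length` is a list homomorphism, since `length (xs ++ ys) = length xs + length ys`. It factors exactly as the theorem describes, with f mapping each element into the `Sum` monoid (so ⊙ is `+` and e is `0`), and `foldMap` doing both steps at once:

```haskell
import Data.Monoid (Sum (..))

-- length as a list homomorphism: map every element to Sum 1,
-- then combine with the monoid's (<>), i.e. (+) with identity 0.
lengthH :: [a] -> Int
lengthH = getSum . foldMap (const (Sum 1))
```

Because the monoid operation is associative, the list can be split anywhere, the halves reduced independently, and the partial results combined, which is what makes this factoring the basis for parallelisation.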

We will not depend on our “lists” having the behaviour of linked lists, and using linked lists would actually inhibit parallelisation.

It is possible to parallelize operations on linked lists, so long as you use an appropriate representation of the data structure! List ranking, which is essentially prefix sum, can be parallelized by using an array pool representation of linked lists. This isn’t my blog post but it seemed better to link to it instead of an academic paper: https://downey.io/blog/helman-jaja-list-ranking-explained/
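To make the idea concrete, here is a sketch (mine, not from the linked post) of the classic pointer-jumping approach to list ranking, which is simpler than the Helman–JáJá algorithm the post describes. The list lives in a pool keyed by node id: `next` maps each node to its successor, with the terminal node pointing to itself. Each round doubles the distance every pointer covers, so the updates within a round are independent and could run in parallel; `listRank` is a hypothetical name and this version runs the rounds sequentially:

```haskell
import qualified Data.IntMap.Strict as IM

-- Pointer-jumping list ranking (Wyllie's algorithm), sequential sketch.
-- Input:  next-pointer map; the terminal node maps to itself.
-- Output: each node's distance (in links) to the end of the list.
listRank :: IM.IntMap Int -> IM.IntMap Int
listRank next0 = go rank0 next0
  where
    -- Initially every node is 1 hop from its successor, except the terminal.
    rank0 = IM.mapWithKey (\i s -> if i == s then 0 else 1 :: Int) next0
    go rank nxt =
      let -- Invariant: rank!i is the number of hops from i to nxt!i,
          -- so jumping adds the successor's count and doubles the pointer.
          rank' = IM.mapWithKey (\i _ -> rank IM.! i + rank IM.! (nxt IM.! i)) nxt
          nxt'  = IM.map (nxt IM.!) nxt
      in if nxt' == nxt then rank' else go rank' nxt'
```

Each round halves the number of remaining hops, so the whole thing finishes in O(log n) rounds; Helman–JáJá improves on this in practice by splitting the list into sublists to reduce total work.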
