1. 10
  1. 4

    OP says: “It’s thus known as the ‘cofree comonad’, though I can’t claim to really have any idea what the algebraic notion of ‘cofreeness’ captures, exactly.”

    Algebraically, I have no idea, but it seems to capture something similar to Producers in pipes, which work like streams. A Cofree f a is a stream that produces values of type a, but one has to look inside f to see whether there’s a termination value (and an f with no terminating case makes only infinite streams possible).
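
    To make that concrete, here’s a minimal sketch (Cofree is defined inline so it stands alone; the free package spells it the same way in Control.Comonad.Cofree):

    -- A Cofree f a always has a head value; whether and how it
    -- continues is up to f.
    data Cofree f a = a :< f (Cofree f a)

    -- With f = Maybe we get possibly-terminating streams: Nothing is
    -- the termination value, Just keeps producing.
    countdown :: Int -> Cofree Maybe Int
    countdown 0 = 0 :< Nothing
    countdown n = n :< Just (countdown (n - 1))

    -- With a functor like Identity there is no terminating case, so
    -- only infinite streams are possible.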

    1. 2

      It’s the type of trees with nodes of type a and branches shaped by f. Or the type of state machines with output values a and signals f. This second description is far more in the flavor of Cofree, since it generates codata.
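
      Concretely, the state-machine reading is an unfold; this is coiter from Control.Comonad.Cofree, restated here (with Cofree itself) so the sketch stands alone:

      data Cofree f a = a :< f (Cofree f a)

      -- Run a machine from a start state: each state is emitted as a
      -- node value, and psi gives the f-shaped transitions out of it.
      coiter :: Functor f => (a -> f a) -> a -> Cofree f a
      coiter psi a = a :< fmap (coiter psi) (psi a)

      -- e.g. coiter (\n -> if n <= 0 then Nothing else Just (n - 1)) 5
      -- is a countdown machine that halts at zero.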

      1. 1

        Yeah I was thinking about a notion of generic algebraic ‘cofreeness’ as it compares to algebraic ‘freeness’, if such a thing even exists.

        So: I can have the intuition that a free foo is a sort of template foo that satisfies only the minimum constraints required to be a foo, but I have no idea what a cofree bar says about the characteristics of bar.
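
        For whatever it’s worth, the standard concrete case of that “template” intuition is the free monoid on a, which is just [a]: it satisfies the monoid laws and no extra equations. A sketch (interpret is an illustrative name; it’s foldMap in base):

        -- Any map a -> m into an actual monoid m extends uniquely to a
        -- monoid homomorphism out of the free one.
        interpret :: Monoid m => (a -> m) -> [a] -> m
        interpret f = mconcat . map f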

        1. 4

          The way that I think of them is that “freeing” and “co-freeing” are processes mapping between structures (“functors”). At the core you have the idea of “forgetting”, where one might, e.g., “forget” a monad into a mere functor or “forget” a comonad into a mere functor. In this case, “freeing” and “co-freeing” are the processes which undo “forgetting” as best they can.

          The reason one must “free” some things and “co-free” other things comes down to whether we’re dealing with data or codata. Essentially, depending on the “stance” data has with respect to the infinite, we might need to restore forgotten properties in different ways. The end result is similar in practice, though.
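
          In Haskell terms, a sketch of both directions (the shapes below match Free and Cofree from the free package):

          -- "Freeing" a functor f into a monad: layers of f plus a way
          -- to stop (Pure) is exactly enough structure for return and
          -- join, and nothing more.
          data Free f a = Pure a | Free (f (Free f a))

          -- "Co-freeing" a functor f into a comonad: a value at every
          -- node makes extract total, and the f-branching gives
          -- duplicate its shape.
          data Cofree f a = a :< f (Cofree f a)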

          1. 1

            That helps enormously! Thanks.

            1. 1

              Np!

            2. 1

              That’s a great answer.

              For a tangential question: is there a reference you’d suggest for learning more about these category-theoretic concepts (data vs. codata, free vs. cofree)? I’d love to learn more.

              1. 2

                It’s sort of a Gordian knot of references you need in order to catch all of these concepts, sadly. The Type Theory Study Group (https://groups.google.com/forum/#!forum/type-theory-study-group) is working through PFPL, which will eventually cover some of the pieces by accurately describing recursive data types, so that’s a start. The notions of freeing and co-freeing are probably best tackled by just diving deep into some category theory.

                There are lots of CT references people toss around, but I’m just going to say “do whatever you can to get to the point where you can read MacLane”. It’s really sort of a night-and-day difference as a reference.

                1. 1

                  Very helpful. Thanks!

          2. 1

            That’s a better explanation than what I gave, since it’s more general. (For some reason I was fixated on zero or one branch point, not general f.) Thanks.

            I should amend the above to “One instance of Cofree f a is…”.

            Out of curiosity, how would you explain codata?

            1. 3

              Codata and data are one and the same in Haskell, so we have to instead consider a strict language.

              Now, data is what you get when you think of values which arise from a finite construction process. A (finite) tree or (finite) list is such a thing because each arises by slapping on new “layers” finitely many times.

              Codata is weird at first. It arises when you consider something which may only be finitely analyzed. This opens the door to talking about things which are infinite in extent. We can at least consider their structure, knowing, as I said, that we’ll never run into a never-ending process, because we’re only allowed to examine them finitely. A state machine is a good example: if you “unfold” its states then they may go on forever (e.g., if there’s a loop in the transition graph), but this “foreverness” is not very scary since we know we’ll only ever “observe” so many transitions.

              The distinction thus matters when you want to apply analyses to things we can naively conceive of but that run into trouble during formalization. For instance, it’s rather difficult to say when two state machines are identical (note that I’m not asking whether two finite presentations of state machines are identical, but two completely general state machines). Instead of being able to deconstruct them to nothing and get a notion of structural equality, we have to talk about something spacey like “they cannot be distinguished by finite observation”, a notion formalized as “bisimulation”.
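
              A tiny Haskell illustration (laziness lets us write the infinite unfolding directly; the two-state machine here is made up):

              data S = A | B deriving Show

              -- A loop in the transition graph: A steps to B, B steps
              -- back to A, so the unfolded trace never ends.
              step :: S -> S
              step A = B
              step B = A

              trace :: S -> [S]
              trace s = s : trace (step s)

              -- Any finite observation is still harmless:
              -- take 4 (trace A) == [A,B,A,B]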

              1. 3

                It’s probably worth stating explicitly, too, that we have a strong bias toward data, since induction works on it. It may be hard to even conceive of a state machine except via its finite presentation, as that’s how they’re always, well, presented.

                Tools like abstract data types and function spaces can be helpful for thinking about codata.
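
                For example (a sketch; Stream is a made-up name, not a library type): presenting a stream as a function space means you never hold the infinite object whole; you only ever answer finitely many queries of it.

                newtype Stream a = Stream { at :: Integer -> a }

                -- The stream of naturals, as the identity query.
                nats :: Stream Integer
                nats = Stream id          -- at nats n == n

                -- Drop the head by shifting every query by one.
                shift :: Stream a -> Stream a
                shift (Stream f) = Stream (f . (+ 1))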

        2. 2

          btw - Cute equivalence: Mu Maybe ~ Nat

          1. 3

            Very nice, kudos.

            -- Fix and cata come from recursion-schemes; depending on the
            -- version, Fix may need to be imported from Data.Fix instead.
            import Data.Functor.Foldable

            -- Fix Maybe: zero or more Just layers closed off by one Nothing.
            type Cute = Fix Maybe

            -- Successor: wrap one more Just layer.
            just :: Cute -> Cute
            just = Fix . Just

            -- Zero: the terminating layer.
            nothing :: Cute
            nothing = Fix Nothing

            -- Fold the layers away, counting every constructor (the final
            -- Nothing included), so cutesum (just^n nothing) == n + 1.
            cutesum :: Cute -> Int
            cutesum = cata alg where
              alg (Just x) = succ x
              alg Nothing  = 1

            foo :: Int
            foo = cutesum bar where   -- foo == 5
              bar = just (just (just (just nothing)))