1. 19
  1.  

  2. 4

    I couldn’t have possibly built this without Software Transactional Memory.

    My Haskell is pretty rough, but that doesn’t look all that different from Go’s CSP. That’s not a complaint, but I’m curious whether anybody would care to compare and contrast the two approaches.

    My intuition (or at least comfort level) is that it’s easier to reason about one process collecting state and drawing when it wants rather than trying to ensure it always sees a consistent view of the world.
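
    To make that concrete, here’s roughly the shape I have in mind, sketched in Haskell rather than Go so it lines up with the article. The `DisplayEvent` type and the pane workers are made up for illustration, not taken from the author’s code; the point is just that the workers only send messages and one thread collects state and owns the drawing.

    ```haskell
    import qualified Data.Map.Strict as Map
    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forever)

    -- Hypothetical update variants; the real program would have its own.
    data DisplayEvent
      = PaneOutput Int String   -- pane id, text to append
      | PaneClosed Int          -- pane id

    -- Workers only ever send messages; they never touch the screen.
    paneWorker :: Chan DisplayEvent -> Int -> IO ()
    paneWorker events paneId = forever $ do
      threadDelay 1000000
      writeChan events (PaneOutput paneId "tick ")

    -- One thread collects the state and draws whenever it wants.
    displayLoop :: Chan DisplayEvent -> Map.Map Int String -> IO ()
    displayLoop events panes = do
      ev <- readChan events
      let panes' = case ev of
            PaneOutput k s -> Map.insertWith (flip (++)) k s panes
            PaneClosed k   -> Map.delete k panes
      print (Map.toList panes')  -- stand-in for real terminal drawing
      displayLoop events panes'

    main :: IO ()
    main = do
      events <- newChan
      mapM_ (forkIO . paneWorker events) [1, 2, 3]
      displayLoop events Map.empty
    ```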

    1. 3

      It’s hard to tell exactly how the STM composition he shows is working, but it looks to me like the sum type of possible changes is built there in the main display thread. In a CSP-style system, there would be something in each of the threads constructing a value of one of those variants and sending it to the main display thread.

      It seems like the threads in this example just directly operate on some shared data structure. The display thread notices the changes by watching the structure and translates them into one of the display update variants, rather than getting a message and doing updates itself.

      The trouble with a purely message-based update system in Haskell, when each of the threads needs to read the shared state as well as update it, is that without some sort of mutable reference like STM you either have to push the new state out to each of the threads as a message, or have every thread apply each state-updating message itself. Those are also valid ways of doing it, but in some ways it’s simpler for each mutator thread to just encapsulate its updates to the state in an STM transaction. In that scenario, the thread responsible for updating the display just watches for mutations of the shared state and translates them into screen updates, rather than having to collect update requests describing data structure manipulations, perform them, and push the results out to all the other threads as well as the screen.
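
      For comparison, here’s a minimal sketch of that shared-state arrangement, assuming a made-up `World` type standing in for whatever the program actually shares. The mutators just run `atomically` and never hear about drawing; the display thread uses STM’s `retry` to block until the state differs from what it last drew, then translates the difference into screen updates.

      ```haskell
      import Control.Concurrent (forkIO, threadDelay)
      import Control.Concurrent.STM
        (TVar, atomically, modifyTVar', newTVarIO, readTVar, retry)
      import Control.Monad (forever)

      -- Made-up shared state; the real program shares something richer.
      newtype World = World { tickCount :: Int } deriving Eq

      -- A mutator thread: it just runs a transaction against the shared
      -- state and knows nothing about drawing.
      ticker :: TVar World -> IO ()
      ticker world = forever $ do
        threadDelay 1000000
        atomically $ modifyTVar' world (\w -> World (tickCount w + 1))

      -- The display thread blocks (via retry) until the state differs
      -- from what it last drew, then redraws from the new state.
      displayLoop :: TVar World -> World -> IO ()
      displayLoop world lastDrawn = do
        current <- atomically $ do
          w <- readTVar world
          if w == lastDrawn then retry else pure w
        putStrLn ("redraw, ticks = " ++ show (tickCount current))
        displayLoop world current

      main :: IO ()
      main = do
        world <- newTVarIO (World 0)
        _ <- forkIO (ticker world)
        displayLoop world (World 0)
      ```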

      Describing arbitrary data structure updates via messages also leads to a fairly complex message structure, and possibly even a full embedded DSL for describing them. Haskell is well-equipped to define and interpret that kind of thing, but it’s a lot of heavy machinery for simply doing standard updates to a shared bit of data.
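
      To illustrate what I mean, this is the kind of vocabulary you end up inventing; the `Update` type and `applyUpdate` interpreter here are hypothetical, just to show the shape of it.

      ```haskell
      import qualified Data.Map.Strict as Map

      -- Made-up shared state: pane contents keyed by pane id.
      type Panes = Map.Map Int String

      -- The sort of message vocabulary you need once every update has to
      -- travel as a description of a data structure manipulation.
      data Update
        = SetPane Int String
        | AppendToPane Int String
        | RemovePane Int
        | Sequenced [Update]   -- compound edits need their own combinator

      -- ...and an interpreter that whoever holds the state has to run.
      applyUpdate :: Update -> Panes -> Panes
      applyUpdate (SetPane k s)      = Map.insert k s
      applyUpdate (AppendToPane k s) = Map.adjust (++ s) k
      applyUpdate (RemovePane k)     = Map.delete k
      applyUpdate (Sequenced us)     = \panes -> foldl (flip applyUpdate) panes us
      ```

      It’s manageable, but it amounts to re-exposing the data structure’s API as a message protocol.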

      So, I think that when individual threads interact with a central thread via a fairly regimented protocol, the CSP approach is probably more efficient and easier to understand. But when the threads all share a fairly complex piece of state that any of them might read or update arbitrarily, the CSP model starts to look less attractive than an explicit shared-state model like STM. I think this particular ‘terminal multiplexing’ example could easily go either way, but some of the author’s early design decisions (or perhaps ones baked into an underlying library) probably made the STM approach the more comfortable one.