
    I think concurrency is pretty well established, but I’ve seen a couple of different descriptions of parallelism. Imagine we have a set of actions that can happen in our code, and exactly one action happens per clock tick. That’s simple sequential programming. If one of several possible actions can happen per clock tick, that’s nondeterministic programming.

    If you have two clocks, each with its own set of actions, and either can tick at any point, that’s concurrency. Then there are a couple of ways to define parallelism:

    1. At least one clock, but several actions can happen per tick
    2. At least two clocks, but they can tick simultaneously

    I think there are a couple of subtle differences in what counts and what doesn’t, but the two definitions are close enough for most practical purposes.
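
    As a rough illustration of the two definitions in Go (the tick function and the GOMAXPROCS call are my own sketch, not something from the thread): the two goroutines below are concurrent either way, and whether their ticks can actually coincide in time depends on how many OS threads the runtime is allowed to use.

    ```go
    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // tick plays the role of one "clock": each loop iteration is one action.
    func tick(name string, wg *sync.WaitGroup) {
        defer wg.Done()
        for i := 0; i < 3; i++ {
            fmt.Println(name, "tick", i)
        }
    }

    func main() {
        // With a single processor the two clocks merely interleave (concurrency
        // without parallelism); with more than one, ticks may overlap in time.
        runtime.GOMAXPROCS(1)

        var wg sync.WaitGroup
        wg.Add(2)
        go tick("A", &wg)
        go tick("B", &wg)
        wg.Wait()
    }
    ```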


      Is it:

      • operations in a program are implicitly sequenced by their syntactic order (execute this line, then the next)
      • it is not obvious whether that ordering is logically necessary (e.g. due to a data dependency) or not (this is just the order in which I expressed the things that need to happen)
      • concurrency is the explicit relaxation of that implicit syntactic ordering, leaving only the true dependencies (go foo(); go bar(ch); go baz(ch))

      The benefits of this are increased clarity and possibly modularity, since the real dependencies are expressed explicitly.

      Parallelism is something you can do to take additional advantage of that concurrency. You can’t just take a normal program and execute every statement simultaneously, since you’d run into the implicit dependencies. But you can take a program marked up for concurrency and execute the independent parts of it in parallel.
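
      To make that concrete, here is a minimal, self-contained version of the go foo(); go bar(ch); go baz(ch) shape from above; foo, bar and baz are just placeholders of my own. The channel is the only true dependency left, so a runtime with more than one processor is free to run the independent parts in parallel.

      ```go
      package main

      import (
          "fmt"
          "sync"
      )

      func foo(wg *sync.WaitGroup) {
          defer wg.Done()
          fmt.Println("foo: independent work") // depends on nothing else
      }

      func bar(ch chan<- int, wg *sync.WaitGroup) {
          defer wg.Done()
          ch <- 42 // produce the value baz needs
      }

      func baz(ch <-chan int, wg *sync.WaitGroup) {
          defer wg.Done()
          fmt.Println("baz got:", <-ch) // the only true ordering: after bar's send
      }

      func main() {
          ch := make(chan int)
          var wg sync.WaitGroup
          wg.Add(3)
          // The syntactic order of these statements no longer implies an execution
          // order; only the channel between bar and baz does.
          go foo(&wg)
          go bar(ch, &wg)
          go baz(ch, &wg)
          wg.Wait()
      }
      ```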