I think concurrency is pretty well established, but I’ve seen a couple of different descriptions of parallelism. Imagine we have a set of actions that can happen in our code, and one action happens per clock tick. That’s simple sequential programming. If any one of several possible actions can happen per clock tick, that’s nondeterministic programming.
If you have two clocks, each with a set of actions, and either can tick at any point, that’s concurrency. Then there are a couple of ways to define parallelism: either both clocks can actually tick at the same instant, so two actions genuinely happen at once; or parallelism is what you get when you execute a concurrent program so that its independent parts run at the same time.
I think there are a couple of subtle differences in what counts under each definition, but they’re close enough for most practical purposes.
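To make the two-clocks picture concrete, here’s a small sketch (my own illustration, not from any particular framework) that enumerates every way two clocks’ actions can interleave. Each clock’s actions keep their own order, but either clock may tick next — that nondeterministic choice of schedule is exactly what the concurrency definition above describes.

```python
# Two "clocks", each with its own ordered sequence of actions.
clock_a = ["a1", "a2", "a3"]
clock_b = ["b1", "b2"]

def interleavings(a, b):
    """All possible execution orders of two concurrent clocks:
    each clock's actions stay in order, but at every tick either
    clock may be the one that fires."""
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    # Either clock A ticks first, or clock B does.
    return [[a[0]] + rest for rest in interleavings(a[1:], b)] + \
           [[b[0]] + rest for rest in interleavings(a, b[1:])]

schedules = interleavings(clock_a, clock_b)
print(len(schedules))  # 10 distinct executions of this tiny concurrent program
```

Even with just five actions total there are ten valid schedules, which is why reasoning about a concurrent program means reasoning about all of its interleavings, not just one.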
The benefit of marking up concurrency explicitly is increased clarity, and possibly modularity, because the code states its real dependencies directly.
Parallelism is something you can do to take additional advantage of the concurrency. You can’t just take a normal program and execute every statement simultaneously, because you’d violate the implicit dependencies between statements. But you can take a program marked up for concurrency and execute the independent parts of it in parallel.
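As a minimal sketch of that last point, here’s a hypothetical three-step pipeline (the function names `load_a`, `load_b`, and `combine` are mine, purely for illustration). The only real dependency is that `combine` needs both loads to finish; since that dependency is explicit, a runtime like Python’s `concurrent.futures` is free to run the two independent loads at the same time.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pipeline: load_a and load_b are independent of each
# other; only combine depends on both of their results.
def load_a():
    return [1, 2, 3]

def load_b():
    return [10, 20, 30]

def combine(xs, ys):
    return sum(x + y for x, y in zip(xs, ys))

# Because the dependency structure is explicit, the two independent
# loads can be submitted together and executed in parallel; combine
# simply waits on both results.
with ThreadPoolExecutor() as pool:
    fa = pool.submit(load_a)
    fb = pool.submit(load_b)
    result = combine(fa.result(), fb.result())

print(result)  # 66
```

Note that nothing forces parallel execution here: the same marked-up program is equally correct when a scheduler runs the loads one after the other, which is the sense in which parallelism is an optimization layered on top of concurrency rather than a change to the program’s meaning.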