1. 22
  1.  

    1. [Comment removed by author]

      1. 9

        Because he’s making up new meanings for words that already exist and trying to convince people that his definitions are better than the ones that people have been using for hundreds of years.

        1. 1

          While that’s true, maybe the more relevant aspect is that for the first half-century or more of computer programming, there wasn’t a clear distinction. I do think that these new definitions have caught on, though, so we should use them.

          1. 2

            Applying the literal definition of concurrency (things actually happening at the same time) to computer programs is something that’s only made sense recently, because it used to be literally impossible for a CPU to do more than one thing at a time. That doesn’t mean the literal definition of concurrency is some new, questionable, or unclear thing.

            1. 4

              Well, in the early 50s there was already competition between “parallel” and “serial” computers, but the difference there was how the bits in a machine word were treated when you were, e.g., adding two words together. Parallel computers were much faster but also more expensive. Multicore shared-memory computers date from at least the 1960s: the CDC 6600 (1965) had a “peripheral processing unit” that time-sliced among 10 threads, context-switching every cycle, each with its own separate memory, in addition to having access to the central processing unit’s memory. And of course any computer with a DMA peripheral (DMA dates from the 1960s, if not the late 1950s) is in some sense doing more than one thing at a time. Symmetric multiprocessing dates from, I think, the late 1960s. Meanwhile, what we now know as “concurrency” was already in play: even before the CDC 6600’s PPUs, “time sharing” (what we now know as having multiple processes) was an active research topic.

              So parallelism and concurrency have been around in more or less their current form for half a century. Nevertheless, I think the distinction Pike is drawing between the two terms' meanings is much more recent, like the last decade or two; but it does seem to have become accepted.

              1. 1

                A current example with a little discussion of this usage, C. Scott Ananian talking about Rust:

                Rust is also slow because it is not built to be parallel. The language is concurrent, but this is a word game: in the past few years the terms have been redefined such that “concurrent” is (roughly) non-blocking cooperative multitasking (such as is implemented by node.js and GNU Pth), and “parallel” is reserved for actually doing more than one thing simultaneously (whether on separate CPUs or separate cores of a single CPU). Rust’s memory model doesn’t help: there is no shared memory, and ownership types make fork/join parallelism difficult.
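
                A minimal Go sketch of that distinction (hypothetical code, not from Ananian or the talk): pinned to a single OS thread, the two goroutines below are concurrent, in that their steps interleave cooperatively, but never parallel; allowing more OS threads would let the same structure actually run simultaneously.

                ```go
                package main

                import (
                    "fmt"
                    "runtime"
                    "sync"
                )

                func main() {
                    // One OS thread: goroutines are multiplexed cooperatively,
                    // so this is concurrency without parallelism.
                    runtime.GOMAXPROCS(1)

                    var wg sync.WaitGroup
                    for _, name := range []string{"A", "B"} {
                        wg.Add(1)
                        go func(name string) {
                            defer wg.Done()
                            for i := 0; i < 3; i++ {
                                fmt.Println(name, i)
                                runtime.Gosched() // yield so the other goroutine can run
                            }
                        }(name)
                    }
                    wg.Wait()
                }
                ```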

              2. 1

                Concurrency was around long before multiple cores were: user input, file input, displaying output, punching cards, etc.

                It’s not a recent concept.

            2. [Comment removed by author]

              1. 2

                You might want to remove your downvote; review my longer comment above.

                1. 2

                  I stand corrected.

                  1. 2

                    You, sir, are a gentleman.

            3. 6

              I don’t know that it’s due to confusion. There are a lot of self-taught people who might be running into this for the first time. There will always be a new generation of programmers coming up who just haven’t learned it yet. We’ve all got to start somewhere and learn this at some point, and not everyone’s been programming since they were 15. I do like seeing it crop up; it’s a good video to learn from.

              1. 3

                I actually had to keep rereading the explanation of parallelism vs. concurrency. I got it when I read it, but after some time I found myself trying to explain it and failing, and my rule of thumb is that if I can’t explain something easily, I don’t understand it well enough, so I go back and read it again. When I want to implement concurrency, I still have to look up good ways to do that, of course.

                1. 2

                  I don’t think there’s really much confusion. It just helps a lot to define the terminology at some point and go on from there. And that’s what I think Rob Pike does very well in this presentation: showing the abstract programming model of concurrency, and how an implementation (in this case, Go) can achieve parallelism by leveraging fundamental properties of the abstract model.
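
                  A rough illustration of that point (a hypothetical sketch, not code from the talk): the program below is expressed as concurrent workers pulling jobs from a channel, and whether it also runs in parallel is left to the runtime, here set with GOMAXPROCS.

                  ```go
                  package main

                  import (
                      "fmt"
                      "runtime"
                      "sync"
                  )

                  func main() {
                      // The concurrent structure is the same either way; with GOMAXPROCS(1)
                      // it merely interleaves, with more threads it can also run in parallel.
                      runtime.GOMAXPROCS(runtime.NumCPU())

                      jobs := make(chan int)
                      var wg sync.WaitGroup

                      for w := 0; w < 4; w++ {
                          wg.Add(1)
                          go func(id int) {
                              defer wg.Done()
                              for j := range jobs {
                                  fmt.Printf("worker %d handled job %d\n", id, j)
                              }
                          }(w)
                      }

                      for j := 0; j < 8; j++ {
                          jobs <- j
                      }
                      close(jobs)
                      wg.Wait()
                  }
                  ```

                  The composition into goroutines and channels is the concurrency; the number of OS threads underneath is the parallelism.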