1. 14
  1.  

  2. 6

    I didn’t really understand the point of this post. If you ignore all the details, go can be as fast as if? What does that even mean, though? It’s not like go is a single piece of work; it creates something that will run until it’s done. I’m not sure calling go “control flow” entirely makes sense either. It’s a non-deterministic implicit context switch. I’ve never heard such a thing called control flow, but maybe I’m being narrow-minded.

    1. 2

      Green threads serve the same purpose as classic control statements.

      Imagine an object that reads data from a socket on one side and spits out messages on the other side.

      You can implement it as a state machine, using a bunch of ifs, whiles and so on.

      Or you can do the same thing using a separate green thread.

      The two are, from a computational point of view, the same thing. However, the latter is much more elegant and readable. Still, people often prefer the former for performance reasons.
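
      For illustration, here’s a minimal Go sketch of the green-thread version (Message, readMessages and the framing are made up for the example): the reader runs in its own goroutine, and the parser’s “state” is just local variables and the position in the loop rather than an explicit state enum.

          package reader

          import "net"

          // Message is a placeholder for whatever the protocol actually produces.
          type Message struct {
              Payload []byte
          }

          // readMessages owns the connection. The parsing state lives in local
          // variables and the loop position, not in an explicit state machine
          // updated by ifs and whiles.
          func readMessages(conn net.Conn) <-chan Message {
              out := make(chan Message)
              go func() {
                  defer close(out)
                  buf := make([]byte, 4096)
                  for {
                      n, err := conn.Read(buf)
                      if err != nil {
                          return // connection closed or failed
                      }
                      // A real protocol would accumulate bytes until a whole frame
                      // is available; here each read is treated as one message.
                      out <- Message{Payload: append([]byte(nil), buf[:n]...)}
                  }
              }()
              return out
          }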

      1. 4

        By this definition, what isn’t control flow? An OS thread fulfills the same requirement. What makes it unclear to me that preemptive threads are control flow is that their context switches are implicit, whereas control flow generally involves explicit control over the flow of the program. The Wikipedia page on it, at least, suggests along the same lines.

        1. 1

          Switching between OS threads is much more expensive than a conditional jump.

          Switching between green threads should be O(1).

          1. 2

            Switching between green threads may have less overhead than for OS threads, depending on how much machine state is preserved, but I don’t understand the O(1) part of your assertion. O(1) in what?

            1. 2

              OS threads should be O(1) for context switches as well, but that’s just the asymptotics; it doesn’t say anything about how long the operation actually takes.

              Also, the actual cost doesn’t have anything to do with my point.

          2. 2

            My understanding, and I could be off the mark because I’m only coming at this with a small amount of book knowledge, is that the performance problems of context switches come not so much from stack allocation as from cache misses.

            When it comes to vanilla branching, one outcome is usually more common than the other, so you also get the benefit of branch prediction and speculative execution. While coroutines can unroll into a predictable pattern, it seems more common that they’re used to model unpredictable environments in a sane way (e.g. select statements).

            To me, it seems that one of the differences is that performance is measured in a different way. Busy-waiting isn’t a problem if your device will only do one task at a time (i.e. no operating system, no competing processes) and you don’t care about power consumption, but only want the fastest results possible. However, it’s a bad practice to busy-wait in a program that’s going to be competing for CPU cycles with other processes.
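
            To make the busy-wait trade-off concrete, here’s a toy Go sketch (the sleep stands in for real work, and the shape of it is only illustrative). Whether the spin loop or the blocking receive is preferable depends entirely on whether anything else wants the core:

                package main

                import (
                    "sync/atomic"
                    "time"
                )

                func main() {
                    var done atomic.Bool
                    resultCh := make(chan int, 1)

                    go func() {
                        time.Sleep(10 * time.Millisecond) // stand-in for the real work
                        resultCh <- 42
                        done.Store(true)
                    }()

                    // Busy-waiting: burns a whole core until the flag flips. Fastest
                    // possible reaction on a dedicated device, wasteful when other
                    // processes (or goroutines) are competing for CPU time.
                    for !done.Load() {
                    }

                    // Blocking: the goroutine is parked by the scheduler and the
                    // thread is free for other work until a value arrives.
                    <-resultCh
                }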

        2. 6

          Y'all should look at how green threads and an even cheaper primitive, sparks, work in the GHC runtime system.

          Not totally up to date, but gets some of the details across: http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/multicore-ghc.pdf

          Also: http://www.cse.chalmers.se/edu/year/2015/course/pfp/lectures/Frolov14.pdf

          Note this particular slide.

          1. 2

            As a concrete example, the libco coroutine library can switch contexts in about 30ns on my Core i5-3570. Because it’s a coroutine library, it doesn’t have a scheduler (the caller specifies which coroutine to switch to) and it’s still slower than if, but it’s within the ballpark, I think.
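
            For a comparable feel in Go (this isn’t libco, just a rough stand-in I’m sketching), a channel ping-pong between two goroutines measures two scheduler-mediated switches plus channel overhead per round trip, so the derived per-switch figure is an upper bound and will vary by machine and Go version:

                package main

                import (
                    "fmt"
                    "time"
                )

                func main() {
                    const rounds = 1000000
                    ping := make(chan struct{})
                    pong := make(chan struct{})

                    go func() {
                        for range ping {
                            pong <- struct{}{}
                        }
                    }()

                    start := time.Now()
                    for i := 0; i < rounds; i++ {
                        ping <- struct{}{}
                        <-pong
                    }
                    elapsed := time.Since(start)
                    close(ping)

                    fmt.Printf("%v per round trip (two switches plus channel ops)\n",
                        elapsed/time.Duration(rounds))
                }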

            1. 1

              Am I missing something or is he calling continuations by a different name?

              1. 3

                I think the difference between a continuation and a green thread is that the former is accessible from the language, while the latter is implicit.

              2. 1

                One thing that comes up in this is the question of why there isn’t a separate construct from go that can only launch green threads. I’m gonna take a go (ha) at giving some reasons:

                • The team’s bar for adding language features is high. On the one hand I have things I wish they’d add, too. On the other, it’s kind of a relief to me personally that Go isn’t playing Katamari Damacy with features and paradigms like C++ or Swift seem to be.

                • Green+OS threading together can be a feature. Maybe you’re using goroutines to, say, run a bunch of network requests at once. CPU parallelism wasn’t your goal there. But even then you might happen to benefit from parallelism if, say, the goroutines you launched parse responses in parallel on different cores, or your parent goroutine’s “home” core is busy when it’s ready to resume work but it can be migrated to another core. Concurrency isn’t just parallelism, but parallelism can still help your concurrent code (see the sketch after this list).

                • Taking that a bit further, even when parallelism has no benefit, it’s often break-even or a tolerable cost. It’s a pretty specific sort of task where it’s too costly to pay the µsecs for goroutine launch and channel communication (sometimes less with buffering/batching), yet too painful to code within a single goroutine without green threads. The team has worked a lot on minimizing goroutine costs, like recently making the scheduler move goroutines between OS threads only when it has to. Gotta see what goroutines are costing your actual program before you decide they’re too costly.

                • To get the full benefit of green threads you’d need to do more than add a statement. To reduce the need for locking, you’d need to prevent the channel or generator that communicates with the green thread from being accessed (through channel sends, shared data, etc.) by more than one goroutine. To get coders using green threads correctly, you’d need to teach them the distinctive stuff about green threads, like where they’re preempted. You’d also need to explain when to choose a goroutine versus a pure green thread. I’ve found that explaining when to use pointers versus values is tricky enough, even though value/reference is already familiar to a lot of folks, so I definitely wouldn’t downplay the teaching cost.
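
                To make the “network requests at once” bullet concrete, here’s a toy sketch (fetchSizes and the URLs are made up for the example); the goroutines exist to overlap the waiting, and any parallel parsing of responses on different cores is a free bonus:

                    package main

                    import (
                        "fmt"
                        "io"
                        "net/http"
                        "sync"
                    )

                    // fetchSizes fetches several URLs concurrently and records the
                    // body size of each; failed requests are simply skipped here.
                    func fetchSizes(urls []string) map[string]int {
                        var (
                            mu    sync.Mutex
                            wg    sync.WaitGroup
                            sizes = make(map[string]int)
                        )
                        for _, url := range urls {
                            wg.Add(1)
                            go func(url string) {
                                defer wg.Done()
                                resp, err := http.Get(url)
                                if err != nil {
                                    return
                                }
                                defer resp.Body.Close()
                                body, err := io.ReadAll(resp.Body)
                                if err != nil {
                                    return
                                }
                                mu.Lock()
                                sizes[url] = len(body)
                                mu.Unlock()
                            }(url)
                        }
                        wg.Wait()
                        return sizes
                    }

                    func main() {
                        fmt.Println(fetchSizes([]string{"https://example.com", "https://example.org"}))
                    }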

                So, like, are coroutine mechanisms nice to have? Sure; for example, I’ve used yield in Python now and then to make handling huge data streams feel more natural, and I could imagine using them in Go for iterators, state machines, etc. But I don’t think they’ll show up in Go, and though the reasons might not be obvious at first, I think there are reasons.
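
                For what it’s worth, the closest thing you can do today is fake yield with a goroutine and a channel; a minimal sketch (naturals is made up for the example), with the caveat that every value pays for a channel operation and a scheduler switch, which is exactly the kind of cost a dedicated green-thread or coroutine construct could avoid:

                    package main

                    import "fmt"

                    // naturals plays the role of a Python-style generator: the
                    // goroutine is the “coroutine” and each send is a “yield”.
                    func naturals(limit int) <-chan int {
                        out := make(chan int)
                        go func() {
                            defer close(out)
                            for i := 0; i < limit; i++ {
                                out <- i
                            }
                        }()
                        return out
                    }

                    func main() {
                        for n := range naturals(5) {
                            fmt.Println(n)
                        }
                    }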