Boost.Asio (the async I/O and networking library) calls these strands. I’ve worked extensively with Boost.Asio, and I’ve concluded that strands are not worth it unless extreme performance requirements force you to use them. You just don’t want yet another scheduler in your pipeline; in particular, it’s difficult to keep track of what happens when, and why. In C++ you have the additional problem of lifetimes (when is the object deleted?), which std::shared_ptr alleviates, but then suddenly your objects stay alive because some strand somewhere still holds a std::shared_ptr to them, and it’s very difficult to keep an overview of lifetimes and memory management. You trade a whole lot of inconvenience for a tiny bit of performance.
The article spends a lot of space on how to save/restore context and reset the stack. While that is interesting, using a proper stackless coroutine construct such as the one provided in C++20 would be much more efficient, and would waste much less space than the non-portable stackful implementation. That, I guess, is also why so many languages chose an infectious async keyword rather than a stackful implementation (with Go probably the only notable exception?).
What I’d really be interested in from the M:N discussion are topics like: 1. prioritization API design; 2. structured synchronization implementation; 3. observability (since stack traces are useless at that point).
I agree, C++20 coroutines are the elephant in the article. I kept waiting for the author to mention them.
I can imagine that implementing a complex program based on the fibers described here would require careful tuning of stack sizes. You’d need pretty exhaustive testing, or tricky static analysis, to ensure that fibers never overflowed their stacks. (Unless you have tons of RAM available and don’t care if you’re wasting stack space.)
Go gets away without this because it has growable stacks, which have proven very difficult to get right — initially they used stack segments, which hurt performance if a tight loop kept falling off the end of a segment. Now they just move the stack to a larger heap block, which involves carefully relocating all pointers to stack-based data.
Given all that, I get why async/await is the model that most languages seem to be converging on (C++, JS, Rust, Nim, I know I’m forgetting some others…)
* “entierly” -> “entirely”