I think the takeaway from these is that concurrency should be done in a structure, with defined entrance, suspension/cancellation, and exit points. If your code might exit anywhere, it’s too hard to reason about.
I made a Go library that tries to provide sufficient primitives to express the structures you want without needing raw go statements: https://github.com/earthboundkid/flowmatic
Another way to view this is that the runtime scope of an async task should be tied to a lexical scope by default, just as RAII is for construction/destruction. A recent realisation: Erlang does the opposite by default and thus needs OTP to restore it, as does Go. In Erlang’s case there’s at least a good reason – garbage collection is, by design, separate for each async task.

Trio is the cleanest, most pleasant and predictable concurrency API I have used in any language.
Another way to view this is that the runtime scope of an async task should be tied to a lexical scope by default
That’s a good way to put it. I wonder if it’s practical to enforce this at the language level or if it’s necessary to punt it into a library that covers up the “go-statements” (thread spawn etc.) that exist under the covers.
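For the library route, here is a minimal sketch of what “covering up the go-statements” can look like in Python. The helper names are made up and it uses plain threads rather than any real library’s API; the idea is that the only way to spawn is through a handle confined to the block, and the block refuses to exit until everything it spawned has finished.

    import threading
    from contextlib import contextmanager

    @contextmanager
    def task_scope():
        # Hypothetical helper: owns every thread spawned through it.
        threads = []

        def spawn(fn, *args):
            t = threading.Thread(target=fn, args=args)
            t.start()
            threads.append(t)

        try:
            yield spawn            # callers only ever see spawn(), never a raw thread
        finally:
            for t in threads:      # runtime scope ends with the lexical scope
                t.join()

    def worker(name):
        print("hello from", name)

    with task_scope() as spawn:
        spawn(worker, "a")
        spawn(worker, "b")
    # both workers are guaranteed to have finished here
    print("all workers done")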
It used to be that people would argue that we need GOTO because it’s more flexible than a function call, but it turns out you really don’t need it that often, and if it seems like you do, you can just fake it with a loop and a state variable of some kind, like:
    state = state1
    while true:
        if state == state1:
            do something
            state = state2; continue  // equivalent to GOTO state2
        if state == state2:
            do something else
        // etc.
Are lexically scoped tasks sufficient to simulate any lexically scoped spaghetti code you might need to write? I wonder.
Are lexically scoped tasks sufficient to simulate any lexically scoped spaghetti code you might need to write? I wonder.
I don’t think they are. A side benefit of tying tasks to lexical scopes is that lexical scopes form a tree, so your code automatically falls into a tree structure thanks to that (no loops, each node has exactly one parent). Goto-based code, OTOH, forms a general graph, which is just another word for spaghetti.
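To make the tree concrete, here is a sketch using Trio’s actual nursery API (the task names and structure are invented): each async with block is a node, its start_soon children hang off it, and a nested nursery is just a child node, so the shape can only ever be a tree.

    import trio

    async def leaf(label):
        await trio.sleep(0.1)
        print("leaf", label, "done")

    async def branch(label):
        # This nursery is a child node of whichever scope called branch().
        async with trio.open_nursery() as nursery:
            nursery.start_soon(leaf, label + ".1")
            nursery.start_soon(leaf, label + ".2")
        print("branch", label, "done")   # only runs after both leaves finish

    async def main():
        # Root of the tree: every task below hangs off this scope.
        async with trio.open_nursery() as nursery:
            nursery.start_soon(branch, "a")
            nursery.start_soon(branch, "b")
        print("root done")

    trio.run(main)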
Another way to view this is that the runtime scope of an async task should be tied to a lexical scope by default
Interesting.
I use mainly Scala at work, with a pure-functional programming style. This makes it so that lexical scope and runtime scope are always completely separate for effects. In other words, lexical scope doesn’t matter for semantics at all.
It has some disadvantages, such as being more explicit/verbose in cases where other languages don’t need to be, but on the other hand it removes all of the problems described in the article.
However, these problems don’t arise in Trio, because of its unique approach to concurrency. Trio’s nursery system means that child tasks are always integrated into the call stack, which effectively becomes a call tree.
The use of the word “unique” here bothered me - this exact same system, with the exact same terminology, is popping up in lots of places…
…but, that said, looking up the date and author of this blog post, he’s one of the original authors that everyone else cites, so I guess at the time it was written it was fairly unique. I do find this post pretty digestible anyway; perhaps that is why I’m seeing it pop up in more places since it was written, lol.
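For readers who haven’t used Trio, here is a hedged sketch of what “integrated into the call stack” means in practice. The nursery calls are Trio’s real API; the task bodies are made up, and exactly how the exception is wrapped depends on the Trio version. The point is that a crash in one child cancels its sibling and then propagates out of the block, just as it would propagate out of an ordinary function call.

    import trio

    async def fails_fast():
        raise ValueError("boom")

    async def runs_forever():
        await trio.sleep(3600)     # cancellable sleep

    async def main():
        try:
            async with trio.open_nursery() as nursery:
                nursery.start_soon(fails_fast)
                nursery.start_soon(runs_forever)
            # never reached: the failure surfaces here like a normal call would
        except Exception as exc:
            # Depending on the Trio version, exc is the ValueError itself or an
            # ExceptionGroup wrapping it; either way the sibling was cancelled
            # before the nursery block was allowed to exit.
            print("caught:", repr(exc))

    trio.run(main)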
This should be read together with https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/.
Goto-based code, OTOH, forms a general graph, which is just another word for spaghetti.
A graph can be formed within a single nursery. Just pass it to the functions!
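A small sketch of that point (Trio’s real nursery API, with made-up task names): once the nursery object is passed around, any task holding it can spawn further siblings, so the spawning relationships no longer have to mirror the lexical nesting, while the lifetime of every task is still bounded by the single enclosing scope.

    import trio

    async def node(name, nursery, depth):
        print("running", name)
        if depth < 2:
            # A child spawning more work into the same nursery it was given.
            nursery.start_soon(node, name + ".x", nursery, depth + 1)
            nursery.start_soon(node, name + ".y", nursery, depth + 1)

    async def main():
        async with trio.open_nursery() as nursery:
            nursery.start_soon(node, "root", nursery, 0)
        print("every task spawned anywhere above has finished")

    trio.run(main)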
I wonder if it’s practical to enforce this at the language level or if it’s necessary to punt it into a library that covers up the “go-statements” (thread spawn etc.) that exist under the covers.
Doesn’t Zig kind of do this at the language level? Or at least it tries to, last time I checked.
E.g. Zig gets by without function colors because you write suspend/resume points yourself.
The post definitely had a big effect on how I think about structuring concurrent code.