1. 11
  1.  

  2. 5

    async/await doesn’t introduce concurrency, i.e. the execution of multiple tasks at the same time.

    I am not sure I would have used the term ‘concurrency’ here. The execution of multiple tasks at the same time is called ‘parallelism’, and that is what I believe the author had in mind. Concurrency does not automatically imply parallelism; confusing the two is a classic mistake. You can have concurrency on a single-core machine: hardware interrupts are the classical source of concurrency there. But for parallelism, you need a multi-core machine.
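
    To make the distinction concrete, here is a minimal Swift sketch (not from the article; the names are made up): both calls are funnelled through the main actor, i.e. a single thread, yet their steps can interleave. That is concurrency without any parallelism.

    ```swift
    // main.swift, Swift 5.7 or later.
    // `work` is confined to the main actor, so its body always runs on the
    // main thread; the two child tasks can still interleave at suspension points.
    @MainActor
    func work(_ name: String) async {
        for step in 1...3 {
            print("\(name) step \(step)")
            await Task.yield()   // suspension point: lets the other task run
        }
    }

    @MainActor
    func demo() async {
        async let a: Void = work("A")   // structured child task
        async let b: Void = work("B")
        await a
        await b
    }

    await demo()
    ```

    Drop the @MainActor annotations and the runtime is free to schedule the child tasks on its thread pool, which is where parallelism can come in.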

    1. 3

      The Swift Evolution proposal that this post is describing is pretty easy to read too and has more about the motivations. I’ve encountered several of these problems using unstructured tasks in async Rust and had to write code to link spawned tasks together into logical “tasks”. Very cool that it will be baked into Swift!

      1. 2

        Interesting article. Do we have similar things in other languages?

        I am relatively familiar with the concept of promises, and I have done quite a bit with observables.

        1. 3

          There are structured concurrency libraries for other languages (C, Python, Swift, maybe Kotlin are the mature implementations I know about).

          The originator of the “structured concurrency” label summarized progress since their seminal post back in 2018, but I think it’s come a lot farther since then: https://250bpm.com/blog:137/

          It’s linked from the article, but “Notes on structured concurrency” is probably the best summary of the idea yet written: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/

          1. 3

            Do we have similar things in other languages?

            This article and the Swift proposal are very, very close to how Ada’s built-ins handle concurrency.

            You don’t deal with threads and promises; the language provides tasks, which are active and execute concurrently, and protected objects, which are passive, provide mutual exclusion, and allow complicated guard conditions on shared data. Tasks are like threads, but have procedure-like things (called “entries”) that you can write and call, and which block until that task “accepts” them. You can implement the concurrency elements common in other languages (like promises) using these features, but you usually don’t need to.

            Execution doesn’t proceed out of a block until all tasks declared in the scope are complete, unless those tasks are allocated on the heap. You can declare one-off or reusable tasks even within functions, where they can share regular state. A task doesn’t just have to accept a single “entry”: queueing and selection of one of many entries is built in, and this select block also supports timeouts, delays, and proceeding if no entry is available. For long-running tasks that might not complete on time, there’s also a feature called “asynchronous transfer of control”, which aborts a computation if it exceeds a time threshold. Standard library functions provide pinning of tasks to CPUs, prioritization, and control over which CPUs a task runs on using “dispatching domains”.
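
            For comparison, here is roughly how the Swift proposal the article covers expresses the same scope-owns-its-tasks property. This is a minimal, illustrative sketch (the function name, IDs, and sleep are made up, not taken from the article): withTaskGroup does not return until every child task added to the group has finished.

            ```swift
            // A minimal Swift analogue of the block-scoped behaviour described above:
            // the task group's scope does not exit until all of its child tasks have
            // completed. The names and the sleep are placeholders for real work.
            func fetchAll(_ ids: [Int]) async -> [String] {
                await withTaskGroup(of: String.self) { group in
                    for id in ids {
                        group.addTask {
                            try? await Task.sleep(nanoseconds: 100_000_000)  // stand-in for real work
                            return "result \(id)"
                        }
                    }
                    var results: [String] = []
                    for await result in group {      // collect each child as it finishes
                        results.append(result)
                    }
                    return results
                    // Reaching this point means every child task has finished.
                }
            }
            ```

            The same guarantee applies to async let bindings: a child task that is still running when its scope ends is cancelled and awaited before execution proceeds.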

            I’ve spent days of my life debugging async/await in other languages, but I feel like the Ada concurrency built-ins help describe the intent of what you’re trying to accomplish in a very natural way.