1. 72
  1.  

  2. 23

    Every single time I see a warning from python asyncio that a future wasn’t awaited, or a warning from node that a promise wasn’t used, I think that we got await backwards.

    This feels like it’s the right way to do it. Awaiting should have been implicit, and taking a task and doing abstract work with it should have been the special case.

    1. 9

      It always blows my mind that it’s not a hard error to use a promise without having awaited it. JavaScript will silently let you write if (promise) { missiles.launch() }, even though all promises evaluate to true. At least Python spits out a warning, but it has the same underlying broken behavior.

      1. 4

        That particular error is mainly due to the lack of a type system and “truthy” casts rather than anything await-specific. Any object would evaluate to true too (e.g. writing if (authorization) instead of if (authorization.canLaunchMissiles)). OTOH Rust also needs explicit .await, but won’t compile if future {} or if some_struct {}.

        With implicit await I’d be worried about mutexes. Rust has both sync and async versions of a mutex, each with very different uses. What does a “colorblind” mutex do? And how do you guarantee that code in your critical section doesn’t become automagically async, risking a deadlock?
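        In Rust terms, roughly (a sketch only; it assumes the tokio crate for the async mutex, and the function names are made up):

        ```rust
        use std::sync::Mutex as SyncMutex;
        use tokio::sync::Mutex as AsyncMutex; // assumes the tokio crate

        async fn with_async_mutex(m: &AsyncMutex<u32>) {
            let mut guard = m.lock().await; // yields to the executor while waiting
            *guard += 1;
            some_io().await; // suspending while holding this guard is expected here
        }

        async fn with_sync_mutex(m: &SyncMutex<u32>) {
            let mut guard = m.lock().unwrap(); // blocks the whole OS thread while waiting
            *guard += 1;
            // If an await were inserted here *implicitly*, the task could suspend while
            // still holding the lock; another task on the same thread that then tries
            // to lock it would block forever: a deadlock the author never wrote.
            some_io().await;
        }

        async fn some_io() {} // stand-in for real async work

        #[tokio::main]
        async fn main() {
            let a = AsyncMutex::new(0);
            let s = SyncMutex::new(0);
            with_async_mutex(&a).await;
            with_sync_mutex(&s).await;
        }
        ```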

        1. 2

          In Zig, the expectation will presumably be that you either support both modes or you compile-time error on the one you don’t support. Problem solved.

          Note: it’s not clear how you would achieve the same thing in Rust, if it would even be possible.

          1. 3

            Rust gives you a warning at compile time if a Future is not awaited, at least. It’s actually the same infrastructure they use to warn about unused Result values (which equate to unchecked errors).
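            A quick sketch of that, for anyone curious (function names made up):

            ```rust
            // Future (the return of an async fn) and Result are both #[must_use],
            // so dropping either without using it trips the same unused-value lint.
            async fn ping() {}

            fn read_config() -> Result<String, std::io::Error> {
                std::fs::read_to_string("config.toml")
            }

            fn main() {
                ping();        // warning: unused implementer of `Future` that must be used
                read_config(); // warning: unused `Result` that must be used
            }
            ```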

      2. 4

        This is more like a coroutine-based system like Go has rather than an event-based concurrency system like JavaScript.

        1. 18

          Yes that’s exactly right. Go + manual memory management + runtime in userland = zig evented I/O mode.

          1. 3

            This is perhaps explained in the video, but does switching to io_mode = .evented effectively make every function async? It’s clear that plain function calls are transparently converted to await async call(). To do so, the calling function must also be async, correct? At least, if the example given truly matches the equivalent Python given.

            1. 5

              It doesn’t make every function async automatically, but if you are hinting that it can propagate to many functions, yes that is an accurate observation. Any suspend point makes a function async, and await is a suspend point, as well as calling an async function without using the async keyword.

              1. 1

                Yes that’s what I meant, thank you!

          2. 3

            This is more like a coroutine-based system like Go has rather than an event-based concurrency system like JavaScript.

            These days, a lot of languages are adding some sort of async/await functionality for dealing with I/O and/or concurrency.

            I have not understood why doing async/await is preferable to putting the focus on cheap contexts and composable communication and synchronization primitives, e.g. Go’s goroutines and channels. To me, a future that you can await, be it constructed through async or manually, looks a whole lot like a suspended thread with its associated stack.

            No matter whether you’re doing traditional threads or async/await, the compiler needs to generate code that manipulates the two key data structures of an independent unit of execution: a program counter to keep track of where you are executing, and a list of call frames recording where you came from. Threads implement this through the PC register and call frames on a stack*. Async/await implementations remember all the continuations they have, which contain the equivalent information to the PC register. Continuations continue in other continuations, building the equivalent of a list of call frames recording where you came from.

            My assumption is that threads and async/await have the same expressive power. It also seems that they implement conceptually equivalent data structures. What is the reason for having both in a language? Am I missing something?

            * I am not including, for instance, general-purpose processor registers here. For a context switch, they can be saved to the stack if needed. Depending on the ABI and the cooperativity of your threading implementation, all registers might already be saved to the stack at the time of the context switch anyway.
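            To make that equivalence concrete in Rust terms, here is roughly how the “continuation” side stores the same PC-plus-frame information. The enum is hand-written and purely illustrative, not real compiler output:

            ```rust
            use std::future::{ready, Ready};

            // The async fn this stands in for (one suspend point):
            async fn fetch_and_double(url: &str) -> usize {
                let body = fake_get(url).await; // suspend point: becomes a state below
                body.len() * 2
            }

            fn fake_get(_url: &str) -> Ready<String> {
                ready(String::new()) // stand-in for real I/O
            }

            // Hand-written analogue of the generated state machine: the variant is the
            // saved "program counter", the fields are the saved call frame.
            #[allow(dead_code)]
            enum FetchAndDouble<'a> {
                Start { url: &'a str },               // not started yet: frame = the argument
                AwaitingGet { inner: Ready<String> }, // parked at the .await: frame = inner future
                Done,                                 // returned
            }

            fn main() {
                let _fut = fetch_and_double("https://example.com"); // inert until polled
            }
            ```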
            1. 3

              Rust 0.x had this model, and dropped it because:

              • coroutines require a runtime, and Rust doesn’t want to be tied to one. Rust aims to be usable everywhere (including drivers, kernels, plugins, embedded hardware, etc.)

              • coroutines need small stacks and ability to swap stacks on context switches. This adds overhead and complexity. Segmented stacks have very bad performance cliffs. Movable stacks require a GC. OTOH async/await compiled to a state machine is like a stack of guaranteed fixed size, and that stack can be heap-allocated and easily handled like any other struct (see the sketch after this list).

              • It’s incompatible with C and every other language which wants a single large stack. Foreign functions will not (or don’t want to be bothered to) cooperate with the green thread runtime. This is why in Go calling out to C code has a cost, while in Rust it’s zero cost and Rust can even inline functions across languages.

              • Rust values predictability and control. When blocking code becomes magically non-blocking there are a lot of implicit behaviors and code running behind the scenes that you don’t directly control.
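              On the state-machine point, a small sketch of what “a stack of guaranteed fixed size” means in practice (the function is made up):

              ```rust
              async fn worker() -> u64 {
                  let buf = [0u8; 64]; // locals that live across an .await become struct fields
                  std::future::ready(()).await;
                  buf.len() as u64
              }

              fn main() {
                  let fut = worker();
                  // The entire "stack" of this task is one value, sized at compile time:
                  println!("worker()'s future is {} bytes", std::mem::size_of_val(&fut));
                  // ...and it can be heap-allocated and handled like any other struct:
                  let _boxed: std::pin::Pin<Box<dyn std::future::Future<Output = u64>>> = Box::pin(fut);
              }
              ```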

              1. 1

                coroutines need small stacks and ability to swap stacks on context switches. This adds overhead and complexity. Segmented stacks have very bad performance cliffs. Movable stacks require a GC.

                You don’t need small stacks, just 64-bit and overcommit. Allocate an X MiB stack for each thread and rely on the kernel to do the obvious correct thing of only committing the pages you actually use.

                Of course, if you need to support 32-bit systems or badly designed operating systems this doesn’t work, but it’s a very reasonable design trade-off to consider.
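                A sketch of that approach with plain OS threads (std::thread::Builder lets you pick the reserved stack size; on a 64-bit OS with overcommit, only the touched pages actually get committed):

                ```rust
                use std::thread;

                fn main() {
                    let handle = thread::Builder::new()
                        .stack_size(8 * 1024 * 1024) // reserve 8 MiB; mostly untouched, so mostly uncommitted
                        .spawn(|| {
                            // Only the pages this closure actually touches get backed by physical memory.
                            println!("hello from a thread with a big, lazily committed stack");
                        })
                        .expect("failed to spawn thread");
                    handle.join().unwrap();
                }
                ```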

                1. 2

                  Fair point. That’s a clever trick, although you may need to call the kernel for every new coroutine, which makes them heavier.

                  It’s not for Rust though: it supports even 16-bit CPUs, platforms without an OS, and microcontrollers without MMU.

                  1. 1

                    It’s a viable approach in C, yet C supports all those things too. Is the difference that Rust wants a unified approach for all platforms from 16-micros to supercomputers, unlike the fragmented land of C? If that is the case then I suppose that’s a reasonable trade-off.

                    1. 1

                      Yeah, Rust takes seriously the ability to run without any OS, without even a memory allocator, without even Rust’s own standard library. That makes it favor solutions without any “magic”.

                      async fn only generates structs that don’t do anything, and you have to bring your own runtime executor to actually run them. That lets people choose whether they want some clever multi-threaded runtime, or maybe just a fixed-size buffer that runs things in a loop, or, like Dropbox did, a special executor that fuzzes the order of execution to catch high-level race conditions.
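                      A minimal sketch of that “bring your own executor” point, using the futures crate’s bundled executor rather than a full runtime (function names made up):

                      ```rust
                      use futures::executor::block_on; // assumes the `futures` crate

                      async fn fetch_number() -> u32 {
                          // An async fn just returns an inert state-machine struct;
                          // nothing runs until some executor polls it.
                          42
                      }

                      fn main() {
                          let fut = fetch_number(); // nothing has executed yet
                          let n = block_on(fut);    // the chosen executor drives it to completion
                          println!("{}", n);
                      }
                      ```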

        2. 11

          It’s addressed in the article, but for those who don’t know, “colorblind” in this context refers to a blog post: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ which discusses how async/await/promises/callbacks all bifurcate the library ecosystem of a language.

          I remember when rust switched from green threads to no runtime, folks basically asked for zig’s behaviour and were told it was infeasible. (See comment here: https://lobste.rs/s/bfsxsl/ocaml_4_03_will_if_all_goes_well_support#c_whrcmk )

          I would be very interested in understanding what constraints of rust don’t apply to zig or what the rust folks got wrong. Is it just that it was a runtime vs compile-time switch?

          1. 7

            It took years for Rust developers to find Pin<T> and develop safe futures, along with async/await around them, support for which became stable less than a year ago. I can imagine why it was not obvious that this was possible back in 2015.

            1. 15

              Rust also has compile-time proven safety, which is an incredible accomplishment, but also limits the design of the language. Meanwhile Zig’s safety is not required to be perfect, and some safety features are runtime checks. So Zig as a language has more design options than Rust does when it comes to getting “creative” with functions and memory layout.

              1. 3

                Definitely agree. Rust is tackling a hard problem and it pushes some (non-accidental) complexity into the language. The reward is less debugging at runtime. Some safety can still be pushed back to runtime, though, e.g. in RefCell.
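                For instance, RefCell enforces the same aliasing rules the borrow checker normally proves at compile time, but with a runtime counter (sketch):

                ```rust
                use std::cell::RefCell;

                fn main() {
                    let cell = RefCell::new(vec![1, 2, 3]);

                    {
                        let first = cell.borrow(); // shared borrow, checked at runtime
                        println!("first = {}", first[0]);
                    } // shared borrow released here

                    cell.borrow_mut().push(4); // fine: no other borrow is live

                    let _reader = cell.borrow();
                    // cell.borrow_mut(); // would panic at runtime: already borrowed
                }
                ```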

                I think both Zig and Rust have their place, and both have decent answers to async now; they cater to different aesthetics/sensibilities, is all.

          2. 4

            I think I’m missing something here. The major problem with the function colour approach is that sync-coloured functions can’t be switched out of, forcing you and everyone who calls you to know whether you’re async or not. I can’t tell if that problem is being solved here, or if function colours still exist and it’s just applying type inference to save the programmer from specifying the colour of most functions.

            If it’s the latter, presumably the same problems are going to exist with things like protocol libraries having a colour even though the same code would work either way.
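            For reference, the bifurcation looks like this in Rust terms (a sketch; it pulls in the futures crate just to cross the boundary, and the names are made up):

            ```rust
            async fn fetch() -> u32 { 42 }

            fn sync_caller() -> u32 {
                // fetch().await // error: `await` is only allowed inside `async` functions and blocks
                // A sync caller must either become async itself (recolouring *its* callers too)
                // or pull in an executor just to cross the colour boundary:
                futures::executor::block_on(fetch()) // assumes the `futures` crate
            }

            fn main() {
                println!("{}", sync_caller());
            }
            ```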

            1. 0

              While Zig is very innovative, it tries to be a small and simple language. Zig takes a lot of inspiration from the simplicity of C […]

              This doesn’t really instill confidence in me.

              I generally avoid languages whose designer believes that C has any redeemable properties, and it has worked quite well for me.

              you tend to have two options in imperative programming languages: callbacks or async/await […] the downside is that now everything has to be based on callbacks and nested closures

              Can someone tell me what this is about? I have been perfectly fine using Future/Task APIs and haven’t seen this.

              1. 20

                I generally avoid languages whose designer believes that C has any redeemable properties, and it has worked quite well for me.

                Then Zig is probably not for you (and I don’t mean that in an aggressive way). I do recommend reading the docs and maybe watching a video from Andrew Kelley to get a feeling, but if you don’t see any virtue in C, then you risk not seeing any virtue in Zig either.

                1. 1

                  What’s wrong with C?

                2. 1

                  Very interesting design decision. I’ve seen it previously implemented in JavaScript through generators and wonder why other languages didn’t take this route (e.g. Rust)?