1. 89
  1.  

    1. 41

      I thank the author for writing this so that I don’t have to. My conclusion so far is that a lot of Rust’s special sauce comes from its assumptions that lots of stuff can go on the stack, nothing moves in memory, and the compiler can always figure out where in the program a particular allocation is freed. Coroutines (or async calls, or processes, they’re mostly the same thing) have a lot of special sauce that comes from the fact that you can treat control flow as data, moving it to arbitrary places at runtime and taking a function’s local context/stack frame with it. I’ve tried, and I haven’t yet come up with a way to make coroutines that work well with a borrow checker: they end up either cripplingly hampered by how much stuff you must Box/Pin/Arc around them, or cripplingly hampered in what control flow is actually possible with them, requiring a gigantic pile of ceremonial code to accomplish basic things like “create a new coro from inside an existing one”.

      So I think that, as the author says, having dictatorial, static control over memory and having control flow that hops stack frames are pretty conflicting goals.

      (It also annoys me that async and “non-blocking I/O” have become synonymous. 99% of the time when you use something that is async, non-blocking I/O is your actual goal, and all the async fooforah covers that up. You can write non-blocking I/O entirely with normal functions, and it’s not even that hard. It’s just a state machine. The hard part then becomes modifying and composing state machines into more complicated ones – that is where the value of async/coroutines/etc shows itself.)
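      The “it’s just a state machine” point above can be sketched without any async machinery. Here is a minimal hand-rolled example in Go; the state names and readiness events are illustrative, not from any real event loop:

      ```go
      package main

      import "fmt"

      // States for a toy "read a request, then write a reply" connection.
      type connState int

      const (
      	wantRead connState = iota
      	wantWrite
      	done
      )

      type conn struct {
      	state connState
      	buf   string
      }

      // step advances the machine by one transition per readiness event,
      // never blocking: exactly the shape an epoll/kqueue loop would drive.
      func (c *conn) step(event string) {
      	switch c.state {
      	case wantRead:
      		if event == "readable" {
      			c.buf = "request"
      			c.state = wantWrite
      		}
      	case wantWrite:
      		if event == "writable" {
      			c.buf = "reply"
      			c.state = done
      		}
      	}
      }

      func main() {
      	c := &conn{state: wantRead}
      	c.step("readable") // the socket became readable
      	c.step("writable") // the socket became writable
      	fmt.Println(c.state == done, c.buf)
      }
      ```

      The hard part the comment mentions, composing such machines, is what async/await generates for you.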

      1. 22

        nothing moves in memory

        You mean everything can move? Pin had to be invented as a workaround to give futures a stable address.

        haven’t yet come up with a way to make coroutines that work well with a borrow checker that aren’t cripplingly hampered by how much stuff you must Box/Pin/Arc around them

        I’m absolutely surprised by that, because I have many thousands of lines of complex async code with very little use of the heap (so much so that I’ve run into bugs in rustc caused by too many nested futures without boxing :)

        Arc has been awful for callback-based futures, but with .await it’s never needed, unless you insist on frequently using multi-threaded spawn. But you have concurrent streams, join_all, join sets, and many more constructs that work with borrowed data.

        Given the OP also rants about 'static, I suspect you’re both using spawn way too much.

        1. 2

          You mean everything can move?

          Yep, my bad. Got my comparisons backwards.

      2. 11

        That’s where I think Go really outshines Rust. Thanks to green threads, all I/O code in Go looks blocking, and reads like simple, imperative code, but is actually translated to non-blocking under the hood.

        1. 51

          I think this is where Rust really outshines Go. In Rust, all of these problems are exposed as things in the type system that cause the compiler to shout at you. In Go, they’re undefined behaviour that causes weird nondeterminism at run time.

          If you capture a pointer to an object in a goroutine and modify it in another, that’s UB in Go unless you have some locking around it. The compiler will accept it because Go has no notion of transfer of ownership at the type level and so there’s no way it can tell the safe cases apart. If the object contains a slice as a field and both goroutines assign to it, you can end up with the bounds of one and the base of another, and now some unrelated code that writes through that slice will corrupt memory.
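          A hedged sketch of that scenario in Go (the struct and field names are illustrative): the locking discipline below is exactly what the compiler cannot require. Delete the mutex and `go run -race` flags the slice-field writes as a data race at run time, not compile time.

          ```go
          package main

          import (
          	"fmt"
          	"sync"
          )

          // shared mirrors the scenario above: a struct with a slice field
          // that two goroutines assign to. The mutex is the discipline Go's
          // type system cannot enforce.
          type shared struct {
          	mu   sync.Mutex
          	data []int
          }

          func (s *shared) set(v []int) {
          	s.mu.Lock()
          	defer s.mu.Unlock()
          	s.data = v
          }

          func (s *shared) get() []int {
          	s.mu.Lock()
          	defer s.mu.Unlock()
          	return s.data
          }

          func main() {
          	s := &shared{}
          	var wg sync.WaitGroup
          	for i := 0; i < 2; i++ {
          		i := i
          		wg.Add(1)
          		go func() {
          			defer wg.Done()
          			s.set([]int{i, i}) // both goroutines assign the slice field
          		}()
          	}
          	wg.Wait()
          	fmt.Println(len(s.get())) // always 2: one write won whole, no torn slice header
          }
          ```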

          If you capture a pointer in an async context in Rust and try to modify it outside that context, the compiler tells you the type doesn’t implement the Send trait and refuses to compile. You fix the bug at build time. It might be hard, but it’s easier than tracking down a heisenbug caused by a data race.

          1. 6

            go build -race

            go test -race

            1. 18

              This is like replying to “Rust’s borrow checker helps prevents leaks at compile time” with valgrind --leak-check=yes ./a.out.

          2. 3

            You’re comparing apples and oranges. GP is talking about non-blocking I/O, i.e. concurrency, while you are talking about goroutines, i.e. multicore parallelism. We are literally on a thread about an OP which talks about all the headaches Rust’s async system causes for developers.

          3. 2

            I see what you mean. As someone who programs Go primarily, I do sometimes wish the compiler could help me more with ensuring race-free code. OTOH, my other languages are (untyped) JavaScript and Python, where there are no types at all but it mostly works out. With Go, I think of it as mildly typed: the types apply in some places, but you end up going to dynamic mode pretty often and it’s usually fine.

            1. 4

              Just a nit, but neither js nor python is untyped. Python is strongly dynamically typed and JS is weakly dynamically typed.

              1. 4

                I mean that I wasn’t using static typing add ons like mypy or TypeScript.

              2. 3

                I think that that would be considered “untyped”, meaning “no static types”. AFAIK “dynamic types” are not really a thing, pedantically - those are just values.

                1. 3

                  If you want to be fancy you can call them “unityped” instead of “untyped”, i.e. there is one (static) type which is a discriminated union, and dynamic “types” are the values of the discriminator.

                2. 1

                  This feels pretty down in the weeds, but unless I’m completely misunderstanding… the difference between static and dynamic is whether the type of data referenced by a variable can be determined/constrained at compile time vs at runtime. Whereas strong vs weak has to do with whether the type of data referenced by a variable is influenced at runtime by its context or not.

                  In python a string value will always be a string regardless of the context it is referenced in, whereas in js’s weakly typed runtime a value’s type may be coerced based on the context in which it is dereferenced from a variable.

                  in python operator overloading can sometimes make some types “feel” weak, but that is sort of an implementation detail.

                  >>> "1" + 2
                  Traceback (most recent call last):
                    File "<stdin>", line 1, in <module>
                  TypeError: can only concatenate str (not "int") to str
                  >>> 1 + "2"
                  Traceback (most recent call last):
                    File "<stdin>", line 1, in <module>
                  TypeError: unsupported operand type(s) for +: 'int' and 'str'
                  >>>
                  >>> type(1)
                  <class 'int'>
                  >>> type("1")
                  <class 'str'>
                  >>>
                  

                  in js things get type coerced leading to a whole different weird set of bugs

                  > 1 + 2
                  3
                  > "1" + 2
                  '12'
                  > 1 + "2"
                  '12'
                  > typeof("1" + 2)
                  'string'
                  > typeof(1 + "2")
                  'string'
                  

                  Under the hood are they “types” in the way that you mean? I dunno, that seems like an academic concern outside of the semantic convention of terms like “strongly dynamically typed” having an accepted and useful meaning.


                  Mea culpa!

                  I am only just now, as I write this, realizing that “untyped” and “dynamically typed” are synonyms. And my original reply sounds really dumb because of it.

                  I suppose that people began using the phrase “dynamically typed” because the “strong/weak” distinction sounds sort of odd when discussing types if you have already said that the language in question is “untyped”.

                  I’ve left my original reply above the fold because, well I typed it and I don’t want to delete it now ;)

        2. 11

          Rust’s async code also “looks blocking” like simple imperative code. You add .await to “block” on I/O calls, but that is a detail that doesn’t reorganize the source code, like explicit state machines or callbacks would.

          I’d say golang code is often less imperative than async/await code. This is because when you want to run things concurrently, you need to start spawning goroutines and use channels, and then you have multiple disjoint pieces of code running and interacting, more like callback-based async, rather than a single contiguous imperative-looking program.

          Note that Go doesn’t actually translate the code to non-blocking (i.e. a stackless state machine). Go has runtime machinery that intercepts known blocking syscalls, and switches stacks of threads. Rust used to have the exact same implementation, but it made interoperability with C too expensive and risky, so Rust removed green threads before v1.0.

          1. 4

            You add .await to “block” on I/O calls, but that is a detail that doesn’t reorganize the source code, like explicit state machines or callbacks would.

            That’s not really true, e.g. adding .await in a fn used in an iterator chain.

            1. 1

              Yes, that’s fair. This is clunky in Rust, although solvable, e.g. futures::stream::from_iter lets you use .await in iterators.

    2. 30

      I think Rust’s async is fine.

      On the technical side, for what it does, it’s actually pretty good. Async calls and .await don’t need to do heap allocations. The Future trait is relatively simple (API surface of C++ coroutines seems much larger IMHO). Ability to use futures with a custom executor makes it possible to use them, efficiently, on top of other languages’ runtimes, or for clever applications like fuzzing protocols deterministically.

      Usability-wise, like many things in Rust, they have a steep frustrating learning curve, and a couple more usability issues than other Rust features. But once you learn how to use them, they’re fine. They don’t have any inherent terrible gotchas. Cancellation at any await point is the closest to a gotcha, but you can get away with that since Rust uses destructors and guards so much. In contrast with old node-style callbacks, which had issues no matter how experienced you were: subtlety of immediate vs deferred callbacks, risk of forgetting to call a callback on some code path or when an exception is thrown, need to deal with possibility of having a callback called more than once, etc.

      The famous color article implied that async should be an implementation detail to hide, but that’s merely a particular design choice, which chooses implicit magic over clarity and guarantees. Rust doesn’t hide error propagation. Rust doesn’t hide heap allocations, and Rust doesn’t hide await points. This is a feature, because Rust is used for low-level code where it can be absolutely necessary to have locks, threads, and syscalls behave in specific predictable ways. Having them sometimes invisibly replaced with something else entirely would be as frightening as having UB.

      1. 17

        async Rust is fine for what it accomplishes, but is generally harder to use than normal Rust, and normal Rust was already challenging for many developers. Which would be fine if async Rust were only used where warranted, but it actually became the default and dominant dialect of Rust.

        1. 6

          [Async Rust] actually became the default and dominant dialect of Rust.

          Is it? I’m just dabbling in Rust and not very experienced by any means, but at least so far i’ve gotten the impression that async stuff is mostly found in some “leaf” and specialized crates, and that more “core” crates like regex or serde are just normal non-async Rust.

          I never got the impression of async Rust being dominant, much less the default. For which i’m grateful really, as i too share the impression that the async stuff in Rust is quite complicated and difficult to grok, and i would prefer to avoid it if possible.

      2. 8

        The famous color article implied that async should be an implementation detail to hide, but that’s merely a particular design choice, which chooses implicit magic over clarity and guarantees.

        async/await is the callee telling the caller how it wants to be executed/scheduled.

        Languages with no async/await let the caller decide how the callee should be executed/scheduled.

        In Erlang/Elixir, any function can be given to spawn(), or Task.start(). In Go, any function can be given to go ....

        It’s up to the caller to determine how the function should be scheduled. async/await tends to contaminate your whole code base. I’ve seen many Python and some Rust+tokio codebases where everything up to main() was an async function. At that point, can we just get rid of the keyword?

        1. 13

          You’ve given examples of higher-level languages with a fat runtime. This includes golang, which requires special care around FFI.

          This “implicit async is fine, you don’t need the await syntax” is another variant of “implicit allocations/GC are fine, you don’t need lifetime syntax”.

          True in most cases, but Rust is just not that kind of language, on purpose. It’s designed to “contaminate” codebases with very explicit code, and give low-level control and predictable performance of everything.

          1. 7

            You’ve given examples of higher-level languages with a fat runtime.

            Aren’t we considering tokio and all the other third-party libraries that you need to bring in in order to schedule async code, a fat runtime?

            The comparison with lifetimes is unfair. Here, we are adding the async keyword to almost every function, and the await keyword to almost every function call. With lifetimes, there are far more options. But if we were just adding ‘static to every reference as a lifetime, then yes, I would ask if it’s really needed.

            1. 3

              You don’t need Tokio to drive async code. You can have it for your ELF binary, and use the JS runtime when compiling to WASM.

              1. 3

                So, replacing the fat runtime that is tokio with the fat runtime that is JS. How does that dismiss the examples of high-level languages I gave?

                1. 7

                  The thing that makes Rust different from e.g. Go is being able to choose your runtime. And it doesn’t even need to be fat; there are embedded runtimes and libraries such as smol.

            2. 3

              No, I wouldn’t put tokio in the same category. With something like Erlang your whole program lives in Erlang’s environment. It’s opaque, and sits between you and the OS, so it’s not a surprise when it inserts “magic” into that interaction.

              But Rust is for systems programming. It’s not supposed to intervene where you don’t ask it to. Things that Rust does are transparent, and you’re free to program for the underlying OS.

              In case of Rust you use tokio as a library. Tokio is dominant, but not necessary. I’ve worked on projects that used custom async executors. You can use async in embedded programming. You can use async in the kernel. It’s a builder for state machines, not a VM.

              In relation to lifetimes I did not mean just the amount of tokens you add to the source code, but the fact that Rust requires you to be precise and explicit about details that could have been abstracted away (with a GC). Rust’s types have “colors” in many ways: owned or borrowed, shared or mutable, copy or non-copy. Rust could have had “colorless” types: everything shared mutable copyable, but chose to have all this complexity and syntax for a reason.

              Rust could ship a green thread runtime in libstd (again). It could hide the async and await keywords and insert glue code automatically. It would be nicer syntactically. It would be easier to use. It would be a good tradeoff for the majority of programs — just like the benefits of adding a GC. But Rust chooses to be low-level and offer transparency, predictability, and control instead. In low-level programming, knowing a function’s mode of execution is as important as knowing whether it mutates or frees memory.

              1. 3

                But Rust is for systems programming.

                So we are told, but the home page talks about applications like command-line tools and network services. Seemingly the same kind of ‘systems programming’ that Rob Pike was saying Go is good for.

                1. 3

                  All of coreutils are in C. NFS, sshd, Samba, ntpd are in C. Nginx, Apache are in C. Rust wants to be where C is.

                  There is a large overlap with Golang, and not every Rust program had to be in Rust, like not every C program had to be in C. But Rust is sticking to being low-level enough to be able to replace C everywhere.

                  1. 2

                    All those tools are written in C because they want to be able to run wherever you can run a C compiler. Tools that are written in Rust most often don’t care about having that level of portability.

                    1. 1

                      Rust vs C platform support is a complex topic. Rust tools generally have better portability across contemporary platforms. Windows often has native support, whereas in POSIX-centric C projects that’s typically “MS sucks, use WSL?” (and I don’t count having to build Linux binaries in a Linux VM as portability to Windows).

                      Rust’s platform support is not too bad these days. I think it covers everything that Debian supports except IA64 and SH4. Rust also has some support for s390x, AVR, m68k, CUDA, BPF, Haiku, VxWorks, the PlayStation 1, the 3DS, and the Apple Watch.

                      Rust doesn’t support platforms that LLVM/clang can’t find maintainers for, and these are generally pretty niche and/or obsolete. I think C projects support targets like Itanium or Dreamcast more out of tradition and a sense of pride, rather than any real-world need for them.

                      GCC backend (rustc_codegen_gcc) for Rust is in the works, so eventually Rust won’t be limited by LLVM’s platform support.

        2. 8

          Erlang doesn’t allow local state in functions. You need async/await to be able to comprehend how and when local state can change.

          Go doesn’t have a good excuse.

        3. 3

          Languages with no async/await let the caller decide how the callee should be executed/scheduled.

          In Go, everything is a goroutine, so in a sense everything is already contaminated with async/await. Code that blocks is automatically an ‘await point’, unless you use the go keyword. So I don’t think the semantics around caller/callee are any different from async Rust with e.g. tokio::main as an entry point. The difference is you have to manually mark await points. However, you do get better control, e.g. tokio::select! can act on any async future, not just channel reads.

          1. 1

            However, you do get better control, e.g. tokio::select! can act on any async future, not just channel reads.

            Select in Rust is not without its problems - it can trigger a panic and adds cancellation-safety issues.

    3. 19

      I’m not a fan of what feels like needless hostility (confrontational tone?) in the article, and was expecting to hate it going in, but it does make some good points.

      There’s an important distinction between a future—which does nothing until awaited—and a task, which spawns work in the runtime’s thread pool… returning a future that marks its completion.

      I feel like this point in particular does not get attention when talking about async in languages and took me a long while to get the mental model for.
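      That future-vs-task distinction can be sketched in Go terms (a loose analogy, not how Rust futures are implemented): a “future” is like a closure that does nothing until driven, while a “task” starts running the moment it is spawned and hands back a completion handle.

      ```go
      package main

      import "fmt"

      func main() {
      	// The "future" in this analogy: a closure that does nothing
      	// until it is driven (like a Rust future before it is polled).
      	ran := false
      	future := func() int { ran = true; return 42 }
      	fmt.Println("ran eagerly?", ran) // false: nothing has happened yet

      	// The "task": spawning starts the work immediately, and we get
      	// back a completion handle (here, a channel) to wait on.
      	completion := make(chan int, 1)
      	go func() { completion <- future() }()
      	fmt.Println("result:", <-completion) // 42
      	fmt.Println("ran after waiting?", ran)
      }
      ```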

      To whatever challenges teaching Rust has, async adds a whole new set.

      I disagree with this opinion. In any language with native async infrastructure built-in, I’ve had to learn how it works pretty intimately to effectively use it. My worst experiences have been with Python’s asyncio while the easiest was probably F#.

      1. 7

        I disagree with this opinion. In any language with native async infrastructure built-in, I’ve had to learn how it works pretty intimately to effectively use it.

        I don’t think you’re disagreeing? The article is essentially saying that you have to learn async along with the rest of the language, and you are also saying that you had to learn async with the rest of the language.

        1. 10

          I think the difference is they’re making it sound like some uniquely difficult thing in Rust, and I disagree that it’s some Rust-only problem.

          1. 5

            It’s an async/await problem.

            In languages with concurrency and no async/await (erlang, elixir, go, …), the choice of the scheduling model of your code is determined at the call site. The callee should not care about how it is executed.

            1. 6

              In go:

              x := fetch(...)
              go fetch(...)
              

              In Rust:

              let x = fetch(...).await;
              tokio::spawn(async { fetch(...).await; });
              

              You have the same amount of control of scheduling. If you’re referring to being unable to call an async method from a sync context, this is technically also true in Go, but since everything runs in a goroutine everything is always an async context.

              What makes Rust harder is the semantics around moving and borrowing, but also the “different concrete type for each async expression” nature of the generated state machines. For example, this is easy in Go, but painful in Rust:

              handlers[e] = func(...) ...
              // later
              x := handlers[event]
              go x()
              
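              A runnable version of that sketch, with illustrative handler names: in Go, every func of the same signature shares one type, so storing and spawning them is trivial, while each Rust async block is a distinct anonymous type, which is why the equivalent usually needs boxing.

              ```go
              package main

              import "fmt"

              // dispatch stores handlers in a map keyed by event name,
              // launches the chosen one in a goroutine, and waits for it.
              func dispatch(event string) string {
              	out := make(chan string, 1)
              	handlers := map[string]func(){
              		"click": func() { out <- "clicked" },
              		"close": func() { out <- "closed" },
              	}
              	x := handlers[event]
              	go x() // like `go x()` in the sketch above
              	return <-out
              }

              func main() {
              	fmt.Println(dispatch("click"))
              }
              ```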
      2. 2

        may i ask what you used to figure out how to think about concurrency in F# ?

        1. 3

          A lot of experience getting C#/F# async interop working and the .NET documentation/ecosystem is pretty great these days.

          https://learn.microsoft.com/en-us/dotnet/fsharp/tutorials/async

          In F# 6, they made the C# async primitives work seamlessly so you’re no longer wrapping anything in tasks to wait on it on the F# side.

    4. 10

      Do people find CSP to be a good concurrency model? Channels are a significant improvement over mutexes, but I find that simple tasks like cancellation and scoping worker exits become very difficult in practice. It’s a similar situation to dealing with objects and a large amount of tangled unexaminable internal state vs referentially-transparent function calls. The async/await ergonomics (even in managed languages!) are terrible, but “structured concurrency” seems to be a preferable paradigm for writing correct software in practice.

      1. 4

        CSP is great, but largely orthogonal to async. One can do CSP with async I/O, blocking I/O, a mix, etc. Cancellation with CSP is often very simple: when the channel disconnects, the thread/task finishes. In more complex situations an extra shared atomic flag, some timeouts on I/O, etc. might be needed. But yeah - ease of cancellation is a solid benefit of async.
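        A minimal sketch of that cancellation pattern in Go (the names are illustrative): the worker loops over its input channel and simply returns when the channel is closed and drained; closing the channel is the entire cancellation protocol.

        ```go
        package main

        import (
        	"fmt"
        	"sync"
        )

        // doubleAll spawns one worker that reads jobs until the channel
        // is closed, then sums the worker's results.
        func doubleAll(inputs []int) int {
        	jobs := make(chan int, len(inputs))
        	results := make(chan int, len(inputs))
        	var wg sync.WaitGroup
        	wg.Add(1)
        	go func() {
        		defer wg.Done()
        		for j := range jobs { // loop ends when jobs is closed and drained
        			results <- j * 2
        		}
        	}()
        	for _, j := range inputs {
        		jobs <- j
        	}
        	close(jobs) // "cancel": the worker notices the close and returns
        	wg.Wait()
        	close(results)
        	sum := 0
        	for r := range results {
        		sum += r
        	}
        	return sum
        }

        func main() {
        	fmt.Println(doubleAll([]int{1, 2, 3})) // 12
        }
        ```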

      2. 1

        I really enjoy csp. I wish go had sum types and function overloading. I suspect this is a bad antipattern, but when I want optional function arguments I make function-specific structs with *blah types to essentially pass optional arguments.

        1. 2

          Elixir is gonna get set-theoretic types. You can already have some kind of ADT; you just use pattern matching at runtime, instead of the compiler yelling at you at build time.

          I even made this library a while ago to help clean up some long and hard to read with expressions.

    5. 9

      there’s no thread::scope equivalent to help us bound futures’ lifetimes to anything short of “forever”.

      …and there are deep architectural reasons:

      And unlike launching raw threads, where you might have to deal with these annoyances in a handful of functions, this happens constantly due to async’s viral nature. Since any function that calls an async function must itself be async,7 you need to solve this problem everywhere, all the time.

      There’s interest from the async WG in making it possible to make functions generic over async-ness to solve this, so that you can publish libraries that are written in async Rust, but can have their awaits become synchronous no-ops if a downstream dependency with no need for async wants to depend on them.

      Announcing the Keyword Generics Initiative

      Granted, I imagine we’ll also need more standard library interfaces to ensure that stuff doesn’t have to depend on Tokio anyway, but that was always a long-term goal.

    6. 5

      The function coloring problem sucks, and arguably it sucks harder in Rust because of explicit lifetime management and the lack of GC (Gc/Arc as library types aren’t as ergonomic as a built-in GC).

      But I think that async is the best compromise we are going to get in the near term. It’s so much less painful than explicitly managing callbacks. My anecdotal experience is that whenever I rewrite something using callbacks to something with await/yield/async keywords, I find and fix at least one major bug.

      My naive hope would be that we start making threads and context switching more lightweight. It’s not a complete fantasy. There’s some evidence that we can get something closer to coroutine-like performance with things like the switchto call Google added to their Linux kernel: https://www.phoronix.com/news/Google-User-Thread-Futex-Swap

      There are also lower-level stack-switching primitives (either by manipulating the stack pointer directly or using setcontext).

      In the short term I just use Go and accept the runtime overhead in return for being able to avoid the function coloring problem and also get access to a decent integrated epoll event loop.

    7. 4

      I’m surprised the author says ‘in Go and Haskell you don’t think about these things’ as a positive. I can’t speak to Haskell, but in Go I’m aghast at how goroutines can be snuck into the call stack somewhere unwittingly. Also, cleaning them up sometimes involves a song and dance that’s not obvious - I’ve been surprised a few times to find goroutine leaks in code that I’d never have expected to be making async calls anyway.

      1. 1

        I’ve been surprised a few times to find goroutine leaks in code that I’d never have expected to be making async calls anyway

        Goroutines aren’t just for async. In fact, I can’t really even call that the default. They’re the concurrency primitive in the language. I most often use them for “going wide” when I’m cpu bound (but that may be because it comes up semi-frequently in the domain I work in). The concurrency, in this case, enables parallelism.

        The only way you’d be able to emulate the future based async behavior we’re talking about would be to return a channel (or something wrapping a channel) from the function itself, and read from the channel (or utilization of a wrapper method that reads from the channel) to block. Which is actually a lot more song and dance in go than in languages that support this stuff at the language level. Including the footgun where if you forget to read from the channel, you’re gonna end up leaking a goroutine.
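        A sketch of that channel-as-future pattern, with an illustrative fetchAsync standing in for real work. The buffer of 1 is the detail that softens the leak footgun: with an unbuffered channel, a caller that never reads would leak the goroutine forever, whereas here the goroutine can deliver its value and exit regardless.

        ```go
        package main

        import "fmt"

        // fetchAsync returns a channel that plays the role of a future:
        // the work starts immediately in a goroutine, and reading from
        // the channel is the "await".
        func fetchAsync(n int) <-chan int {
        	out := make(chan int, 1) // buffered so the goroutine never blocks on delivery
        	go func() {
        		out <- n * n // pretend this was an I/O call
        	}()
        	return out
        }

        func main() {
        	fut := fetchAsync(7) // work starts immediately, like a spawned task
        	// ... do other things ...
        	fmt.Println(<-fut) // block until the result arrives
        }
        ```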

    8. 3

      An anecdotal counter-example: personally, the first Rust tutorial I ever read, 6 months ago, was the tokio tutorial. I found it easy to understand, and it explained the rationale for a bunch of choices in async simply. There were a few rough edges in there that mainly came from me not having previously grokked the more fundamental Rust idioms, but I’d count that as a success for async.

    9. 2

      I do feel like async is a thing that is… fine to exist, but I’ve always ended up doing concurrency “by hand” to get better latency. I wonder how much async code out there suffers from not just manually spinning up n threads and having a bit more granular concurrency controls.

    10. 2

      I felt old when I first read an algorithms book replacing “O(n)” with “O(n,p)”, as “analyzing algorithms on one processor is a waste of everyone’s time”.

      The tools and vocabulary of async, parallelism, concurrency, coroutines, threads, and thread pools have always felt like an unstructured agglomeration of descriptions of specific solutions to specific problems. The same concepts arise when talking about jobs split among machines, but with a separate vocabulary. We can do better.

      I expect the pressure to do better will start with more interviewing questions aimed at testing if people understand concurrency, much as interview questions now test if a person understands pointers.

    11. 2

      Here’s my summary notes of the posts/this discussion, is this accurate?:

      Rust’s async features are thought to be a flexible, performant solution to the desire for green threading. However, what Rust itself provides are just building blocks that you can use to build async systems in libraries, and the most common such library is tokio, and in order to use tokio the type signatures of the functions that you want to run async-ly, and any functions that call them, have to have certain properties. For this reason, a lot of OTHER libraries have type signatures compatible with tokio’s requirements. This makes the ecosystem harder to learn, because in order to learn those other libraries, you have to learn a little bit about this async stuff even when you don’t need async yourself. Comparisons are made with other languages which have easier-to-learn green threading, but at the expense of being less flexible, or less performant, or less safe. A debate has arisen between people who think that this additional ecosystem complexity is not worth it (because it complicates life for those who don’t need to use green threads, or who need green threads but don’t need something as flexible and performant as this solution), and those who think that it is (because a lot of important projects do need green threads, and this is a flexible, performant solution for them).