
  2. 13

    Rust seems to do very well with the balance between its core team and the community driving the project. Posts like this give the impression that they’re getting wide discussions and covering a lot of use cases, but everything still manages to feel solid and well planned out in the language features and tooling.

    Of the other languages I use, C/C++ appears to have quite a disconnected community and the language itself gets driven by a tight group of people. On the other end of the spectrum, Perl has quite an interactive community and the language seems to be full of quirks that maybe weren’t thought through too much.

    Maybe this is a naive way of seeing things from the outside looking in. It would be nice to know whether people within these communities see things the same way.

    1. 12

      I feel like this misses the mark at a basic level: I don’t want to write async rust.

      I want to write concurrent rust and not have to worry about how many or which executors are in the resulting program. I want to write concurrent rust and not worry about which parts of the standard library I can’t use now. I want to write concurrent rust and not accidentally create an ordering between two concurrent functions.

      1. 19

        I feel like those wants don’t align with Rust’s principles. Specifically, Rust has a principle of making anything that comes with a cost explicit: it doesn’t automatically allocate, it doesn’t automatically take references, it doesn’t automatically lock things, and so on. What you’re suggesting sounds like making transformations that come with costs implicit. That’s a reasonable tradeoff in many languages, but not in Rust.

        1. 12

          Sure. This initiative seems really great for people who end up choosing to use async Rust specifically because they need it for their high-performance application, and it sounds like they’ll really get a huge benefit out of this sort of work!

          But I feel like a lot of people don’t actually want to use async Rust, and just get forced into it by the general ecosystem inertia (“I want to use crate X, but crate X is async, so guess I’m async now (or I’m using block_on, but that still requires importing tokio).”). These people (hi, I’m one of them!) are going to be difficult to win over, because they don’t actually want to care about async Rust; they just want to write code (for which async Rust is always going to be net harder than writing sync Rust, IMHO).
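          For what it’s worth, you don’t strictly need tokio for the trivial `block_on` case: a toy executor over `std::task` is enough for futures that don’t need real wakeups. This is only a sketch of the idea (a busy-polling no-op waker), not something to use where futures actually park on I/O:

          ```rust
          use std::future::Future;
          use std::pin::Pin;
          use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

          fn noop_raw_waker() -> RawWaker {
              fn clone(_: *const ()) -> RawWaker { noop_raw_waker() }
              fn noop(_: *const ()) {}
              static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
              RawWaker::new(std::ptr::null(), &VTABLE)
          }

          /// Busy-polls a future to completion with a no-op waker.
          /// Fine for toy futures; real I/O futures rely on being woken.
          fn block_on<F: Future>(mut fut: F) -> F::Output {
              // SAFETY: the vtable functions do nothing, so the waker contract is trivially upheld.
              let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
              let mut cx = Context::from_waker(&waker);
              // SAFETY: `fut` is shadowed here and never moved again.
              let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
              loop {
                  match fut.as_mut().poll(&mut cx) {
                      Poll::Ready(v) => return v,
                      Poll::Pending => std::thread::yield_now(), // spin until ready
                  }
              }
          }

          async fn add(a: i32, b: i32) -> i32 { a + b }

          fn main() {
              let sum = block_on(add(3, 4));
              assert_eq!(sum, 7);
              println!("{sum}");
          }
          ```

          Of course, any crate whose futures genuinely wait on sockets or timers still needs a real executor, which is exactly the inertia being described.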

          1. 5

            I think you’re describing a desire for the Rust ecosystem, whereas the proposal in the OP is about the language. I’ve also been there, wanting to use library X only to find out it’s async. This, to me, isn’t a language problem; it’s that someone (myself included) hasn’t written the library I want in a sync context.

            I don’t believe anything in the proposal directly relates to the situation you described.

            1. 1

              That’s a very fair point! :)

        2. 10

          As someone who mostly works on distributed systems (and writing them in Rust since 2014), the additional error classes, ergonomic pain and significant throughput degradations make async rust seem pretty inappropriate for any well-considered distributed system. Do a simple echo throughput benchmark if you want to see what I’m talking about. Nobody does these comparisons, they just jump on the train.
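          For anyone who wants to try the comparison, here’s a minimal sketch of the sync side of such a benchmark: a thread-per-connection echo server over loopback using only `std::net`. A real benchmark would loop the client round-trip many times and measure throughput; this just shows the baseline shape (the port and buffer size are arbitrary choices, not from the comment):

          ```rust
          use std::io::{Read, Write};
          use std::net::{TcpListener, TcpStream};
          use std::thread;

          fn main() -> std::io::Result<()> {
              // Bind to an ephemeral loopback port so the example is self-contained.
              let listener = TcpListener::bind("127.0.0.1:0")?;
              let addr = listener.local_addr()?;

              // Thread-per-connection echo server: the sync baseline to benchmark against.
              thread::spawn(move || {
                  for stream in listener.incoming().flatten() {
                      thread::spawn(move || {
                          let mut stream = stream;
                          let mut buf = [0u8; 4096];
                          loop {
                              match stream.read(&mut buf) {
                                  Ok(0) | Err(_) => break, // peer closed or error
                                  Ok(n) => {
                                      if stream.write_all(&buf[..n]).is_err() {
                                          break;
                                      }
                                  }
                              }
                          }
                      });
                  }
              });

              // One client round-trip; a benchmark would time many of these.
              let mut client = TcpStream::connect(addr)?;
              client.write_all(b"ping")?;
              let mut reply = [0u8; 4];
              client.read_exact(&mut reply)?;
              assert_eq!(&reply, b"ping");
              Ok(())
          }
          ```

          The async equivalent (e.g. on tokio) has the same observable behavior, which is what makes echo a clean apples-to-apples throughput test.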

          That’s because async rust is not about performance or any quantitative improvement. It’s about the social momentum. It’s just a shame that it’s momentum that is a strict loss in terms of productivity and performance and correctness for pretty much anything other than maybe embedded workloads where async is more ergonomic than writing lots of low-level callbacks and threads are unavailable.

          The only time I felt that async rust was a good choice for a distributed system was when I had a teammate who built a lot of things for the async community and I wanted to give him a place to try that stuff out - a social reason, not a technical one. It made testing significantly more frustrating because fault-injected delays had to rely on all kinds of weird executor hacks that were difficult to interleave, compared to a simple randomized sleep on a thread. It was shockingly difficult to prioritize writes over reads over accepts for properly handling messaging and backpressure. This is scheduling 101, and it’s impossible in async rust without extremely gross hacks. It’s going to be refactored back to sync soon, and I’m looking forward to reclaiming the throughput, testability, and engineering ergonomics that were lost by going async.
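          To make the “scheduling 101” point concrete, when you own the event loop, prioritizing writes over reads over accepts can be as simple as draining a priority queue. This is a hypothetical sketch (the `Event` type and its ordering are illustrative, not from any real codebase):

          ```rust
          use std::collections::BinaryHeap;

          // Hypothetical event kinds. Derived Ord follows declaration order,
          // so Accept < Read < Write; a max-heap therefore pops Write first.
          #[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
          enum Event {
              Accept,
              Read,
              Write,
          }

          fn main() {
              let mut queue = BinaryHeap::new();
              queue.push(Event::Accept);
              queue.push(Event::Write);
              queue.push(Event::Read);

              // Pending writes are drained first, relieving backpressure before
              // taking on more reads, and accepting new connections last.
              assert_eq!(queue.pop(), Some(Event::Write));
              assert_eq!(queue.pop(), Some(Event::Read));
              assert_eq!(queue.pop(), Some(Event::Accept));
          }
          ```

          With a stock async executor there is no equivalent knob: tasks are polled in whatever order the runtime chooses, which is presumably the gap being complained about.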

          And no, the throughput and reliability issues will never be fixed in the future, because they simply cannot be fixed. Doing more work (userspace scheduling) trades efficiency for higher utilization, but there is only a narrow window of latency in which that utilization can even theoretically help. This is conceivably the case for some load balancers, but a real load balancer would be making a big mistake by relying on a rust async executor due to the aforementioned inability to prioritize for achieving reasonable QoS. Ergonomics may be improved a bit more over time, but that’s making an inappropriate approach easier to use.

          1. 4

            Do a simple echo throughput benchmark if you want to see what I’m talking about. Nobody does these comparisons, they just jump on the train.

            The web benchmarks for async Rust web frameworks often place near the top, which is what made me think the async runtimes work well enough. But I’ve never done an echo test. It would be pretty interesting to build a few PoCs distinguishing between threaded and async execution models for both IO- and CPU-heavy work to see what’s actually happening. The AWS Rust libraries also make use of Tokio, so I don’t think it’s that bad? But I don’t have numbers on hand, so I can’t make a statement either way.

            And no, the throughput and reliability issues will never be fixed in the future, because they simply cannot be fixed. Doing more work (userspace scheduling) trades efficiency for higher utilization, but there is only a narrow window of latency in which that utilization can even theoretically help. This is conceivably the case for some load balancers, but a real load balancer would be making a big mistake by relying on a rust async executor due to the aforementioned inability to prioritize for achieving reasonable QoS. Ergonomics may be improved a bit more over time, but that’s making an inappropriate approach easier to use.

            I would never use any async execution model for a load balancer that I didn’t both explicitly understand and have access to every knob to turn. There’s a reason nginx and haproxy remain so popular as load balancers, and it’s their auditability and the myriad of knobs available for fine-grained tuning. I think async execution is more for the increasingly common situation where I care about the extra utilization I’m leaving on the table due to frequent IO waits (whether that’s because of disk, network, user input, etc.), but not enough to get down into the nitty-gritty and poke for priority inversion or any of the host of other scheduler issues that could occur.