1. 51
  1. 14

    A nice article.

    Here is an observation to chew on:

    The same language that encourages you to pass data by references, for instance &str instead of String, also encourages you to split up your code into small tasks that have to clone every piece of data they send to one another. The same language that provides a complex lifetimes system to avoid the need for a garbage collector now also encourages you to put everything in an Arc.

    According to the idea of zero-cost abstractions, the programmer should split code into tasks only when they want these tasks to run in parallel, whereas the general feeling of asynchronous Rust leans towards spawning hundreds or thousands of tasks, even when it is known ahead of time that the logic of the program will not allow some of these tasks to run at the same time.

    I don’t think that this is really a problem, but I do feel like there are now two languages in one: a lower-level language for CPU-only operations, which uses references and precisely tracks ownership of everything, and a higher-level language for I/O, which solves every problem by cloning data or putting it in Arcs.
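    A minimal sketch of the two styles side by side. It uses std threads rather than an async runtime, but the same `'static` bound that pushes spawned futures toward `Arc` applies here:

    ```rust
    use std::sync::Arc;
    use std::thread;

    // "Lower-level" style: borrow with &str, no allocation, ownership tracked precisely.
    fn count_words(s: &str) -> usize {
        s.split_whitespace().count()
    }

    fn main() {
        let text = String::from("the quick brown fox");
        assert_eq!(count_words(&text), 4);

        // "Higher-level" task style: spawned tasks need 'static data,
        // so we reach for Arc::clone instead of a plain reference.
        let shared = Arc::new(text);
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || count_words(&shared))
            })
            .collect();
        for h in handles {
            assert_eq!(h.join().unwrap(), 4);
        }
        println!("ok");
    }
    ```

    The first function is the language the borrow checker rewards; the second block is the language the `spawn` signature forces on you.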

    1. 1

      Wow. That is a fascinating observation. “To chew on” is exactly my takeaway from reading that. Thank you for underlining those points.

    2. 9

      Some of these problem situations are language-agnostic and non-trivial, like deadlocks. An evergreen example is cancellation of asynchronous actions. Even when it’s possible to cancel a long-running action, the cancellation itself is an asynchronous action, and there’s an infinite regress which should be familiar to anybody who has wondered why their home computer is so frozen that they cannot use a three-finger salute to cancel whatever process has caused it to lock up.
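      The regress can be sketched with plain threads (the same shape applies to async tasks): *requesting* cancellation is instantaneous, but the cancellation *completing* is itself an action you have to wait on.

      ```rust
      use std::sync::atomic::{AtomicBool, Ordering};
      use std::sync::Arc;
      use std::thread;
      use std::time::Duration;

      fn main() {
          let cancel = Arc::new(AtomicBool::new(false));
          let flag = Arc::clone(&cancel);

          let worker = thread::spawn(move || {
              let mut steps = 0u32;
              // The worker only notices cancellation at its next check...
              while !flag.load(Ordering::Relaxed) {
                  thread::sleep(Duration::from_millis(10)); // simulated work
                  steps += 1;
              }
              steps
          });

          thread::sleep(Duration::from_millis(50));
          cancel.store(true, Ordering::SeqCst); // "cancel" returns immediately...
          // ...but the cancellation itself completes asynchronously,
          // and we still have to wait for it.
          let steps = worker.join().unwrap();
          assert!(steps >= 1);
          println!("worker stopped after {steps} steps");
      }
      ```

      Waiting on that `join` is itself a long-running action you might want to cancel, and the regress begins.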

      1. 18

        The most frustrating thing to me about async rust is that it will be permanently traumatic in the rust ecosystem (and over time more and more traumatic for the greater programming community as a whole) until it becomes irrelevant. It will not become irrelevant for a long time because it is a fundamentally ideological and social pursuit, rather than one that you can convince people to stop hurting themselves with simply because it’s measurably worse in so many ways.

        Nobody consciously thinks that non-async rust is too easy. But when people learn rust, they tend to go through a tremendous amount of struggle as they come to terms with the borrow checker, ownership, Send/Sync, macros, the list goes on and on. We experience a lot of compiler errors and we learn to just acknowledge that painful compilation is part of the process. But one of the most sorely lacking aspects of most people’s rust education is a sense of taste around how much pain is actually not necessary.

        When people start using async, they often need to rely on the community in some way to compensate for one of the problems of the ecosystem that they encounter. There are so many problems that these compensating social groups become thriving places. This is how most knowledge communities spring up, not just in programming languages but in science too. Humans band together when our experienced problems or beliefs about the future significantly overlap. People make friends and feel a social connection while alleviating their async problems and shared faith that it will all get better in the future. The pain they experience as part of async is associated with the positive social connections or status they have, and it stops being pain. It’s an affirmation of that social standing.

        Fire together, wire together. Pain becomes painless, despite the productivity hit, bugs, negative perf impacts of a priority-blind scheduler, etc… not going away. You start seeing posts being popular with headlines like “Yeah, Async Rust is fragmentary, but it doesn’t matter” because to that author, the significant downsides don’t feel like pain due to the socialization they have undergone.

        When metrics are presented that show basic things like how the latency-throughput decisions made by every executor are far from optimal for some extremely common workloads (do a simple high-throughput echo benchmark where you have single clients being very chatty and see how much slower all of the async systems are compared to boring thread-per-busy-client), the same social defense cognitive function that makes one numb to the compiler errors also makes one numb to evidence that their tribe’s foundation isn’t actually some general-purpose solution to all things networked.
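        For anyone who wants to try the comparison the parent describes, here is a minimal thread-per-busy-client echo server in plain std Rust, with a single chatty client doing many small round trips. It is a sketch of the baseline to benchmark against, not a measurement:

        ```rust
        use std::io::{Read, Write};
        use std::net::{SocketAddr, TcpListener, TcpStream};
        use std::thread;

        // Spawn a thread-per-client echo server; returns the bound address.
        fn start_echo_server() -> SocketAddr {
            let listener = TcpListener::bind("127.0.0.1:0").unwrap();
            let addr = listener.local_addr().unwrap();
            thread::spawn(move || {
                for stream in listener.incoming() {
                    let mut stream = stream.unwrap();
                    // One dedicated thread per connection: a chatty client
                    // keeps this thread hot, with no executor hop per read.
                    thread::spawn(move || {
                        let mut buf = [0u8; 1024];
                        loop {
                            match stream.read(&mut buf) {
                                Ok(0) | Err(_) => break,
                                Ok(n) => {
                                    if stream.write_all(&buf[..n]).is_err() {
                                        break;
                                    }
                                }
                            }
                        }
                    });
                }
            });
            addr
        }

        fn main() {
            let addr = start_echo_server();
            let mut client = TcpStream::connect(addr).unwrap();
            // A "very chatty" single client: many small round trips.
            for i in 0..100u32 {
                let msg = i.to_string();
                client.write_all(msg.as_bytes()).unwrap();
                let mut buf = vec![0u8; msg.len()];
                client.read_exact(&mut buf).unwrap();
                assert_eq!(buf, msg.as_bytes());
            }
            println!("100 echoes ok");
        }
        ```

        Swap the server side for your async runtime of choice and time both loops; the per-round-trip latency difference is the effect being described.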

        This kind of thing happens in all communities; programming languages and rust are not special here. But the reason this is going to keep being traumatic is these socially-driven defensive blind spots, which have been so clearly on display the past few weeks with the uptick in coverage around async rust: the denial of its fundamental flaws, the performative allegiance to the async community, some well-formed and some ham-fisted rebuttals, and a perpetual denial of the legitimacy of synchronous solutions that are usually a less hampered choice.

        But it should hopefully be clear that this is not about some logical decision that people make to actually improve their productivity, the reliability of their code, or their performance (in fact all of these take hits while using async); the driving force is the social connections people form while trying to compensate for its durable brokenness.

        Almost all async proponents I see writing about it seem perpetually driven by the idea that it will all be fixed in the future, despite the simple fact that when you add complexity, you increase resource usage and the effort required to understand what your machine is actually doing. Sometimes that resource usage increase lets you avoid a very small hiccup caused by TLB flushes, but this becomes impossible to distinguish from noise once a system needs to perform the trivial computational effort of deserializing a request or terminating TLS.

        When the conscious core value that a group of humans build their self-worth and social connections around is dependent not on some interesting extrinsic problem or measurable solution, but rather simply NOT sync, then you have a situation where that group of people will fail to take advantage of the significant (and durably superior) reliability, ergonomic, throughput, and parallelism advantages of simply not sticking an additional scheduler (or 3 or 5 or however many are needed due to legacy reasons) in the critical path of some workload. But these disadvantages become invisible to the subcommunity that exists in the first place to resolve the problems of async. There is no comparable if-statement community or SQLite community (other than a quiet consortium of satisfied donor companies who want to remain satisfied) because they don’t require social interaction to use on a basic level.

        1. 5

          Is it just me, or is Rust’s async much harder than Erlang’s?

          1. 18

            Rust’s async has to deal with some very hard constraints. I linked this in another thread today, but here’s Graydon Hoare, the original creator of Rust, talking about some of the history. (Edit: replaced link to the thread with a link to a collection, as Twitter’s threading breaks the thread in the middle, making it easy to miss the second half)

            1. 1

              Great details. So ultimately C compat won. Good to know.

            2. 7

              That’s hardly a surprise, erlang is a much higher-level (and slower) language.

              1. 3

                Erlang is better overall for low-latency tasks. It is worse for high-throughput tasks. Async Rust is also worse than non-async Rust for high-throughput tasks. “Slower” on its own does not describe any actually measurable characteristic.

                1. 1

                  I think he was probably thinking about numerical calculation performance, which is irrelevant to the topic of async communication.

                  1. 1

                    I was thinking about general CPU-bound tasks, yes (not just numerics). Rust isn’t just targeting distributed services :-)

            3. 0

              Programming language design is first and foremost an “artistic” activity, not a technical one

              I guess the author is trying to describe the design process, but — wow, I certainly hope this isn’t the case!

              1. 9

                Well, there’s programming language theory, where many concepts are scientifically rigorous. There may be set goals/requirements which are strictly technical in nature.

                But programming languages as a whole are a human-computer interface. The human part is messy and subjective. It involves selecting features that are “intuitive”, syntax that is “readable”, solutions that are “simple”, and these are fuzzy concepts involving many trade-offs. So navigating these is an “art” in the sense that you design for subjective human appeal.

                1. 2

                  I wholeheartedly agree programming languages are user interfaces and can’t be boiled down to proofs, but that’s unrelated to being scientific. Science is built on experiments, not proofs. Fitts’s law is a perfectly good piece of science. Readable syntax can be found by user testing, there’s nothing subjective about it.

                  1. 1

                    Readability varies according to many factors, including past coding experiences, preferences, spoken and written language experience, as well as some common aspects of human nature and how our eyes and brains work. This is why I would say that readability is partly subjective.

                    Of course you can do a survey that controls for many factors, including many subjective ones.

                    But evaluating such a survey tends to involve subjective questions; i.e. “what factors are more and less important and why?”

                    So, in a nutshell, many aspects of perceived readability are subjective, and designing for readability is also partly subjective.

                    1. 1

                      Yes, readability is multidimensional, that’s a different claim from being subjective.

                      1. 2

                        The importance of each dimension is subjective. The same design may be “explicit” in a good way for some people and “noisy boilerplate” for others. Some people like short keywords and even sigils; others think they make a language “write-only”. Some people think statement separators are unnecessary noise that make the compiler nitpick about semicolons; others will argue they’re required for the language to have clear structure, and so on.

                        If you drop into a programming language conference and announce “I claim that {python|lisp|c|apl|haskell} syntax is objectively superior” you’ll be booed off the stage, unless your claim matches the conference’s chosen language :)

                        1. 1

                          Yes, multidimensional and subjective.

                          As I wrote above, preferences (part of readability) are subjective.

                    2. 2

                      Aside: programming languages are not natural languages, and while the term is widely accepted, I think “syntax” is misleading. It’s closer to orthography than syntax. And we can say Italian orthography is better than English orthography; that’s not subjective. We can even say why: it’s a constraint of the alphabet. The Latin alphabet was, understandably, designed to write Latin, not English, and Italian is closer to Latin than English is. That’s why the Latin alphabet has five vowel letters! Not because there are five vowels (hell no), but because Latin had five vowels. English, whether British or American, has more than ten.

                      From this perspective, I think there is a lot to learn from APL. APL gave up on the constraint of ASCII and designed an orthography specifically for writing APL. It is an objectively better way to write APL. Yes, there are trade-offs, but “syntax” (meaning orthography) can be judged as good or bad.