1. 25

  2. 9

    Author here. With this post, I wanted to dispel the notion that an expert by definition knows everything. Also, Rust’s pace of change can be quite enjoyable, because for the most part new changes make things easier, more powerful, or both.

    1. 5

      While I really enjoy Rust, and believe that it is a language for the future, I am also concerned about the pace that Rust is developing at. I fear that people who only occasionally work with the language are scared away by the rapid changes in idiomatic style and ecosystem (which web server library should I use this month?), and I also fear that Rust will become bloated with so many features that you never truly understand the language. One of the reasons I like Standard ML so much is that it is simple enough that you can learn the whole language in a few days, while still being powerful enough that you can express anything succinctly.

      I have the utmost respect for the Rust developers, and I’ll probably continue to use the language; I just hope that the developers are mindful that not everyone will see constant change in the language as a strength.

      1. 8

        The example you picked, web servers, is in flux precisely because async/await was just recently stabilized. If you’re using Rust for writing web servers, then you should absolutely be prepared for churn and rapid evolution, like any good early adopter.

        If you picked a different topic, like, say, automatic serialization, then it’d be fair to say that it has been quite stable without much change for years at this point.

        Language features are added at a more rapid pace than in, say, Go, but it’s still not that often. Since the stabilization of the ? operator, what’s considered “idiomatic” has largely remained unchanged.

        1. 5

          I think Rust is becoming simpler over time. There’s a recurring pattern of:

          Q: Why doesn’t this code compile!?

          A: Complex reasons.

          …new rust comes out…

          A: It compiles now!

          So for outsiders it may seem like Rust just keeps adding and adding stuff, but the insider perspective is that Rust takes stuff that was postponed before 1.0, left unfinished, clunky, or difficult to use, and finishes it.

          I used to have to write lots of error handling boilerplate. Now it’s mostly a single token. I used to have to strategically place extra braces to fit inflexible borrow checking rules. Now I don’t have to. I used to need to wrangle with differences between ref and & regularly. Now it’s pretty much gone. Async code used to require a lot of move closures, Arc wrapping, nesting Either wrappers. Now it’s almost as easy as sync code.
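
          To make the error-handling point concrete, here’s a minimal before/after sketch (the file-reading function and its name are invented for illustration):

          ```rust
          use std::fs::File;
          use std::io::{self, Read};

          // Before `?`, propagating errors meant matching on every fallible call:
          fn read_config_old(path: &str) -> Result<String, io::Error> {
              let mut file = match File::open(path) {
                  Ok(f) => f,
                  Err(e) => return Err(e),
              };
              let mut contents = String::new();
              match file.read_to_string(&mut contents) {
                  Ok(_) => Ok(contents),
                  Err(e) => Err(e),
              }
          }

          // With `?`, each propagation point is a single token:
          fn read_config_new(path: &str) -> Result<String, io::Error> {
              let mut contents = String::new();
              File::open(path)?.read_to_string(&mut contents)?;
              Ok(contents)
          }
          ```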

          1. 3

            I agree that using the language has become simpler over time. I think I’m more worried that understanding the language is becoming harder. You definitely need to wrap your head around more concepts in order to have a full understanding of the language now than you did when 1.0 came out. Yes, that made some things more verbose and clunky, but it also meant that all code was explicit and easy to follow.

            1. 6

              Yes, that made some things more verbose and clunky, but it also meant that all code was explicit and easy to follow.

              Rust has a much more mature approach to explicitness than “tedious noisy syntax == easy”: https://boats.gitlab.io/blog/post/2017-12-27-things-explicit-is-not

              For example, Go’s error handling is famously much more “explicit” than Rust’s, but it doesn’t make code easier to follow. It adds code that distracts. It adds code that needs to be checked for correctness. Rust’s terse error handling is more robust: it can’t be ignored or forgotten as easily, and it catches more types of issues at compile time.

              There is a nuanced relationship between the simplicity of a language and the ease of understanding programs written in it, so I generally disagree with the implied conclusion that bigger, more featureful languages are more difficult to grasp. (simple = consisting of few parts) != (simple = easy to understand). Brainfuck is the extreme example of this. Another example is C, which is usually considered small and simple, but “is this a valid C program free of UB?” Whooo boy. That’s a difficult question.

              I’m afraid that modern C++ has been such a shining beacon of ever-growing unfixable accidental complexity that it has put a stain on the mere notion of language evolution. I think C++ is an outlier, and languages aren’t doomed to end up being C++. PHP, JS, and Python have been evolving, and (ignoring Python’s migration logistics) they ended up being much better than what they started with.

              1. 2

                Hm, that kinda started a thought in my mind that Rust might be getting into “hard to write, easy to read” territory, as a kinda polar opposite to how I consider Perl to be “easy to write, hard to read”.

                1. 1

                  I don’t know a lot about Go, but from what I do know, I wouldn’t call Go’s error handling more “explicit” than Rust’s. It’s definitely noisier, but it’s also very weak.

                  That being said, I agree with most of what you’re saying.

            2. 3

              Putting my library team hat aside, yes, I like Standard ML for the same reasons. But it’s effectively a “dead” language outside of academia. OCaml might be a fairer measuring stick in terms of comparing apples to apples. Then again, OCaml has been around for a long time, so I suppose there’s no truly fair comparison.

              So I guess, here’s a question for you: what would you do differently? What would you give up?

              1. 3

                So I guess, here’s a question for you: what would you do differently? What would you give up?

                I don’t know, and I certainly don’t know that what the Rust team (you included, thank you for all your hard work) is doing is wrong either.

                I think there’s a case to be made for a Rust--: essentially C with algebraic data types, pattern matching, and the borrow checker. Perhaps also traits. But something that could be standardized and implemented by other compilers, perhaps even formalized in a proper spec (disregarding RustBelt for now).
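
                For a sense of scale, the subset being described is already roughly this slice of today’s Rust (a minimal sketch; Shape and area are made-up names):

                ```rust
                // An algebraic data type: each variant carries its own data.
                enum Shape {
                    Circle { radius: f64 },
                    Rect { w: f64, h: f64 },
                }

                // Pattern matching must cover every variant, checked at compile time.
                fn area(s: &Shape) -> f64 {
                    match s {
                        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
                        Shape::Rect { w, h } => w * h,
                    }
                }

                fn main() {
                    let shapes = vec![
                        Shape::Circle { radius: 1.0 },
                        Shape::Rect { w: 2.0, h: 3.0 },
                    ];
                    // `&shapes` is a shared borrow; the borrow checker rejects code that
                    // mutates or frees `shapes` while this loop is still reading from it.
                    for s in &shapes {
                        println!("area = {}", area(s));
                    }
                }
                ```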

                I realize that much of the complexity in Rust comes from the fact that Rust is trying to do everything in the right way: If we have borrow checking, it should also cover concurrent code; if we have concurrent code, we should try to use an asynchronous programming model, because that’s the way these things are done nowadays; if we have traits, we should also be able to have lists of trait objects; if we have trait objects, the syntax shouldn’t hamper us; if we have algebraic data types, we should be able to use monadic return types (Option, Result) without too much syntactic overhead; and so on. Every time we encounter a problem because of some recently added feature, we introduce a new feature to handle that. All of the steps make sense, but the end result is increased complexity and a bigger language. But perhaps that is necessary.

                1. 4

                  Thanks for the response. My thinking is similar.

                  With respect to a spec, FYI, Ferrous Systems is starting down this path with Sealed Rust.

                  1. 4

                    I heavily use (and teach) concurrency with Rust, but I almost never use Rust’s async functionality. It’s an abstraction that both slows down implementations and makes codebases worse for humans for many things other than load-balancer-like workloads, where the CPU cost per kernel process-scheduling decision is very, very low. None of the publicly available async schedulers pay any attention to modern scheduler theory, and they lock you into suboptimal latency-throughput trade-offs.

                    Async is a sharp knife that should really only be used for load-balancer-like workloads.

                    1. 1

                      I’m interested in hearing more about this, if you have the time. I’m sure you’ve got sufficient experience with these trade-offs in building Sled, but I’ve found the publicly available schedulers in Rust to be excellent mechanisms to structure concurrent programs and tasks—better than plain threads, in my experience.

                      1. 8

                        I think it’s a beautiful way to compile certain simple state machines. But they have made a lot of decisions that make it difficult to actually build a high-quality scheduler that can run those state machines, due to needing to shoehorn everything into the Poll interface.

                        For instance, it’s generally well accepted that for request-response workloads you want to prioritize work in this order:

                        • first run things that are ready to write, as they signify work that is finished
                        • then run things that are ready to read, as they represent work that has already been accepted and whose latency clock is ticking
                        • only accept new connections up to a desired queue depth chosen from your latency/throughput position. If you care about latency above everything, never accept unless all writes and reads are serviced and blocked. If you care about throughput above all else, you want to oversubscribe and accept a lot more work to reduce how often your system bottoms out and has no work to do. If latency is a priority, though, you don’t want to accept work that you’re not servicing, and you want a smaller TCP backlog that will fill up and provide backpressure for your load balancer so it can do its job.
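
                        Roughly what that ordering looks like outside of any particular event loop (a sketch only; Readiness, schedule, and target_depth are invented names, not taken from any real scheduler):

                        ```rust
                        // Hypothetical readiness events delivered by epoll/kqueue/etc.
                        #[derive(PartialEq, Eq, PartialOrd, Ord)]
                        enum Readiness {
                            Writable(u64), // a finished response waiting to be flushed
                            Readable(u64), // an accepted request waiting to be parsed
                            Acceptable,    // the listener has pending connections
                        }

                        fn schedule(mut events: Vec<Readiness>, in_flight: usize, target_depth: usize) {
                            // The derived enum order gives Writable < Readable < Acceptable,
                            // i.e. the priority above: finish work, then service accepted
                            // work, then (maybe) take on more.
                            events.sort();
                            for event in events {
                                match event {
                                    Readiness::Writable(conn) => println!("flush response on conn {}", conn),
                                    Readiness::Readable(conn) => println!("read request on conn {}", conn),
                                    // Only accept while below the queue depth chosen for the
                                    // latency/throughput trade-off; otherwise leave connections
                                    // in the TCP backlog so it backpressures the load balancer.
                                    Readiness::Acceptable if in_flight < target_depth => println!("accept"),
                                    Readiness::Acceptable => { /* let the backlog fill up */ }
                                }
                            }
                        }
                        ```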

                        With Rust, it’s impossible to write a general-purpose scheduler that does the right thing here, because there is no way to tell a Context / Waker that your task is blocked due to a write-, read-, or accept-based readiness event. You have to write a custom scheduler that is aware of the priorities of your workload, and you have to write your own Future that feeds that information back through something like a thread-local variable, out-of-band.
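
                        A sketch of that out-of-band workaround, assuming you also control the executor that polls the task (BlockReason, BLOCKED_ON, and WriteReady are invented names):

                        ```rust
                        use std::cell::Cell;
                        use std::future::Future;
                        use std::pin::Pin;
                        use std::task::{Context, Poll};

                        // What the task is currently waiting on. Poll::Pending alone can't
                        // carry this, so it gets smuggled out through a thread local.
                        #[derive(Clone, Copy)]
                        enum BlockReason { Write, Read, Accept }

                        thread_local! {
                            static BLOCKED_ON: Cell<Option<BlockReason>> = Cell::new(None);
                        }

                        // An invented future that waits for a socket to become writable.
                        struct WriteReady { ready: bool }

                        impl Future for WriteReady {
                            type Output = ();

                            fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
                                if self.ready {
                                    Poll::Ready(())
                                } else {
                                    // Tell the (custom) scheduler *why* we're pending, out-of-band.
                                    BLOCKED_ON.with(|cell| cell.set(Some(BlockReason::Write)));
                                    // A real implementation would register cx.waker() with the
                                    // reactor here before returning.
                                    let _ = cx;
                                    Poll::Pending
                                }
                            }
                        }

                        // A custom executor would poll the task, then check BLOCKED_ON to decide
                        // which priority queue (write/read/accept) the task belongs in.
                        ```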

                        There are very few reasons to use an async task. Memory usage is not one of them, because async tasks usually compile to about the same size as a large stack anyway, easily taking up megabytes of space. It’s the most pessimistic stack. Having it precompiled and existing separately from something that runs on a stack also has cache implications that are sometimes pretty negative compared to a stack that is always hot in cache.

                        The one time you could benefit from async tasks is when you are doing almost zero compute per request, because only then do context switches become measurable compared to the actual workload. However, this effect is distorted by the way that people tend to run microbenchmarks which don’t do any real work, making it seem like the proportion of CPU budget consumed by context switches is large; for anything de/serializing JSON, the context switch is already noise in comparison.

                        Then there are the human costs. These dramatically improved with async/await, but the impact of blocking is high. All of the non-blocking libraries try to look as similar to the blocking std APIs as possible for usability, but this is also a hazard: it increases the chances that the blocking std version will be used, causing your perf to tank because the code is no longer concurrent. Compared to threads, there are more implementation details in flux that will change performance over time. With threads, you can reason about the system because it’s familiar; it’s the same one many people have been working against for their entire careers as systems engineers. The need to use profiling tools at every dev stage rises dramatically any time you use async stuff because of the various performance pitfalls it introduces.
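
                        As a concrete example of that hazard, assuming a tokio 1.x style runtime (the handle_request_* functions are made-up names):

                        ```rust
                        use std::time::Duration;

                        // Looks harmless, but std::thread::sleep blocks the whole executor
                        // thread, stalling every other task scheduled on it.
                        async fn handle_request_blocking() {
                            std::thread::sleep(Duration::from_millis(100)); // hazard: blocks the thread
                        }

                        // The non-blocking version reads almost identically, which is exactly
                        // why the blocking one is so easy to reach for by accident.
                        async fn handle_request_async() {
                            tokio::time::sleep(Duration::from_millis(100)).await; // yields to the scheduler
                        }
                        ```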

                        An async task is a worse thread in almost every way, I find. They remove control, introduce naive scheduling decisions, bloat memory use, pollute cache, make compilation slower, increase uncertainty, lower throughput due to shoehorning everything through Poll, increase latency by making it difficult to avoid oversubscribing your system, etc etc etc… They are only really good for load balancers that don’t do any CPU work anyway, which is exactly what the company behind tokio does. async-std got hijacked by Ferrous to try to sell more Rust trainings, because it introduces so many hazards that you need to be educated to avoid them. Those two businesses may not be aligned with yours.

                        1. 2

                          Thanks for taking the time to write this all up! I’m still processing it. I’ll try to dig into each point on my own. What you said is pretty valuable!