1. 19
  1. 3

    The code looks kinda gross; being able to assume green threads/fibers would let you write something a lot simpler to deal with and manage.

    Shame about libgreen being removed:

    http://stackoverflow.com/questions/29791031/what-happened-to-libgreen

    Like, we’ve already seen this done right with Erlang. Why reinvent wheels?

    Supporting only native threads never ends well. It means that you’ll eventually have different libraries fighting over how to abstract them, how to divide tasks across them, how not to starve each other, etc.

    It’s far better for the language to just say, “Yo, here’s an abstract thread of execution; we’ll do the rest for you.”

    1. 7

      “We’ll do the rest for you” is contrary to the goals of a systems programming language.

      (And green threads in Rust didn’t have any real advantage over 1:1 threads anyway)

      1. 13

        Once again, it’s time to share the best article about why having the language not take care of this abstraction for you is a huge deal: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

        There are very few PL articles I’ve read in the last few years that have had such an impact on me. tl;dr: punting on fibers (or whatever) effectively partitions a language’s code into two distinct classes.
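
        For a concrete taste of the split, here’s a minimal sketch in today’s async Rust (which postdates this thread, and is used purely for illustration); `block_on` comes from the third-party `futures` crate:

        ```rust
        // Sync ("blue") and async ("red") functions don't compose freely.

        async fn fetch() -> u32 {
            42 // pretend this awaits some I/O
        }

        fn sync_caller() -> u32 {
            // fetch().await  // error: `await` is only allowed inside `async` code
            // Bridging the two colors means dragging in an executor:
            futures::executor::block_on(fetch())
        }

        fn main() {
            println!("{}", sync_caller());
        }
        ```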

        1. 3

          I think that argument makes sense, but in a systems language you can’t just worry about the interface; you also have to worry about things like overhead. And to do green threads, you need a runtime, which many Rust programs don’t need, don’t want, or can’t use. So by building that in, you’d end up partitioning the language in two as well: runtime and no-runtime. You’ve just moved the problem.

          1. 3

            Unfortunately, if you don’t move the problem, it stays where it is: every single programmer who wishes to use concurrency with your language poorly implementing half-understood hacks and libraries copy-pasted from Stack Overflow, as their fathers did before them, and their fathers before them, lo, for the last several decades.

            Rust is, absolutely, free to ignore that, and I understand all the stated reasons. The difference this time is that it’s 2015: every sensible processor has 4 cores, many sensible computers have 16 or 24 or 48 processors, and those numbers are only going to go up. So while, again, it’s a totally valid and sensible decision, and good luck with it, the original Rust design was visionary in an exciting way, and one can’t help but be disappointed at the missed opportunity.

            1. 4

              I don’t see how this isn’t moving the problem. It would just mean that no-runtime users, one of our core audiences, would also end up having to deal with the analogous situation.

              Furthermore, those who don’t like or can’t use the One True Concurrency Solution would be shut out from the language entirely.

              1. 2

                As I said, it’s definitely moving the problem, but it’s one that has to be moved at some point. Wish it had been Rust. For all the post-facto rationalization, M:N is going to win.

                As far as the other assertions go, I’m going to have to respectfully disagree. There are many ways to skin both cats with low or zero overhead, either at compile time or at runtime.

            2. 2

              To me it seems like, as long as all code is written in a synchronous style and expects fiber-like behavior (Go provides a thread-pinning interface for library writers who need their OS thread pinned), whether or not there actually is a runtime could be a completely swappable compile-time decision, and then there’s no longer a problem (a rough sketch of what I mean is below).

              But it does, unfortunately, seem to be a decision that needs to be made globally (all libraries need to expect that exactly the same behavior could happen).

              EDIT: sorry for the edits, I said some words wrong the first time I hit post.
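
              Purely hypothetically (nothing like this exists in Rust, and `green_runtime` here is made up), the choice could be a cargo feature swapping the backend behind a single `spawn`:

              ```rust
              // Everyone codes against this one synchronous-looking API;
              // a compile-time flag decides what actually backs it.

              #[cfg(feature = "green")]
              fn spawn<F>(f: F)
              where
                  F: FnOnce() + Send + 'static,
              {
                  green_runtime::spawn(f); // hypothetical M:N scheduler
              }

              #[cfg(not(feature = "green"))]
              fn spawn<F>(f: F)
              where
                  F: FnOnce() + Send + 'static,
              {
                  std::thread::spawn(f); // plain 1:1 OS thread
              }
              ```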

              1. 4

                This is exactly what we tried to do with libnative/libgreen, and it didn’t work, leading to libgreen’s removal.

                1. 1

                  Oh I didn’t realize that it was supposed to be pluggable. I’m super curious about why this didn’t work; is there something I can go read that explains why? Thanks!

                  1. 2

                    https://github.com/rust-lang/rfcs/blob/master/text/0230-remove-runtime.md is the RFC that led to its removal; it’s what you’re gonna want to read.

          2. 2

            I’m not sure I agree with you here.

            Every systems programming language does some amount of “the rest” for its users; for example, C handles setting things up on the stack. The C++ standard library handles setting up containers and dealing with their memory (albeit very primitively). Rust has threads, while C/C++ for quite a long time did not have them as part of the core language.

            Would you like to claim that Ada is not a systems programming language? It “does the rest for you” in terms of its task scheduling, to the best of my understanding.

            1. 8

              Every systems programming language does some amount of “the rest” for its users; for example, C handles setting things up on the stack. The C++ standard library handles setting up containers and dealing with their memory (albeit very primitively). Rust has threads, while C/C++ for quite a long time did not have them as part of the core language.

              I’m confused: in those two sentences you’re jumping pretty randomly between what the stdlib in those languages does and what the core language does.

              Rust does not have threads in the core language. They are a stdlib concern, as are channels.

              What Rust has in core, and that’s the novel approach, is type-system support that works across all concurrency models (Send and Sync being the most important).
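
              For example (`run_somewhere` is a made-up stand-in, not a std API): any scheduler, 1:1 or M:N, can get the same compile-time guarantees by asking for the same bounds `std::thread::spawn` uses:

              ```rust
              use std::rc::Rc;
              use std::sync::Arc;
              use std::thread;

              // A stand-in for any scheduler's entry point: the `Send + 'static`
              // bound is what makes the guarantee portable across models.
              fn run_somewhere<F>(f: F)
              where
                  F: FnOnce() + Send + 'static,
              {
                  thread::spawn(f); // could just as well enqueue onto an M:N scheduler
              }

              fn main() {
                  let shared = Arc::new(42); // Arc<i32> is Send + Sync
                  run_somewhere(move || println!("{}", shared));

                  let local = Rc::new(42); // Rc<i32> is not Send
                  // run_somewhere(move || println!("{}", local)); // compile error
                  println!("{}", local);

                  thread::sleep(std::time::Duration::from_millis(50)); // let the thread print
              }
              ```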

              1. 1

                In 2015 I don’t think it’s unreasonable to consider the standard library of a language to be part of the core language.

                1. 5

                  Even in 2016 I would completely disagree.

                  1. 6

                    The conversation here is about a systems language, which is precisely the context where the distinction is really important, since one of the defining features is the ability to replace the standard library (and the runtime, if there is one) with one’s own implementation.

                    If one is developing something higher up the stack, blurring the distinction between language and library is pretty reasonable since most decisions about what to use are going to be based on what they provide collectively. But for systems programming, not so much.

                    1. 4

                      The conversation here is about a systems language, which is precisely the context where the distinction is really important

                      Bingo! :-) Being able to work with Rust without std is a key use case: https://github.com/rust-lang/rfcs/blob/master/text/1184-stabilize-no_std.md
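
                      A minimal sketch of what such a crate looks like: it’s restricted to `core` (no heap, no threads, no I/O), which is exactly why a mandatory runtime was a non-starter for this audience.

                      ```rust
                      #![no_std] // only `core` is available: no heap, no threads, no I/O

                      use core::sync::atomic::{AtomicUsize, Ordering};

                      static TICKS: AtomicUsize = AtomicUsize::new(0);

                      // Something an interrupt handler on a microcontroller might call.
                      pub fn tick() -> usize {
                          TICKS.fetch_add(1, Ordering::Relaxed)
                      }
                      ```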

          3. 1

            Don’t fear 1:1 threads. We’re not living in the C10K world anymore; our systems can handle a lot more threads than they used to. Some of your favorite systems are probably running just fine on Varnish at high request rates with one thread per session. Memory is cheap; I don’t care that your M:N concurrency primitive takes 50 bytes, it’s probably fucking with the cache of another concurrency primitive because it wasn’t taking ENOUGH space :P If space is your concern (embedded), then you’re probably happy that Rust ditched that unnecessary abstraction anyway.

            Some problems need extra sympathy. Rust won’t get in your way.
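
            A minimal sketch of that thread-per-session shape (the numbers are mine; tune to taste): `std::thread::Builder` even lets you shrink the main per-thread cost, the stack.

            ```rust
            use std::thread;

            fn main() -> std::io::Result<()> {
                let mut handles = Vec::new();
                for session in 0..10_000 {
                    let handle = thread::Builder::new()
                        .stack_size(64 * 1024) // 64 KiB instead of the multi-MB default
                        .spawn(move || {
                            // handle one session synchronously here
                            let _ = session;
                        })?;
                    handles.push(handle);
                }
                for handle in handles {
                    handle.join().unwrap();
                }
                Ok(())
            }
            ```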

            1. 5

              Do you have any source of information to back up what you’ve said? In my experience, highly scalable systems almost always tend towards a few threads (maybe a bit more than the number of cores) with an event loop in each one handling many connections per thread. The reason isn’t memory, but rather that context switching between threads, and crossing between user space and kernel space so often, is too expensive. The general wisdom seems to be that if you’re in the 10k or 100k range, one thread per connection is untenable, especially in the microservice world where one incoming connection can spawn tens to hundreds of parallel outgoing connections. It would be interesting if this were no longer true.
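
              For reference, the shape I mean, sketched against the `mio` crate’s present-day API (the 2015-era API differed), with all real connection handling omitted:

              ```rust
              use mio::net::TcpListener;
              use mio::{Events, Interest, Poll, Token};

              const LISTENER: Token = Token(0);

              fn main() -> std::io::Result<()> {
                  let mut poll = Poll::new()?;
                  let mut events = Events::with_capacity(1024);
                  let mut listener = TcpListener::bind("127.0.0.1:9000".parse().unwrap())?;
                  poll.registry()
                      .register(&mut listener, LISTENER, Interest::READABLE)?;

                  // One OS thread multiplexing many connections; run one of
                  // these loops per core.
                  loop {
                      poll.poll(&mut events, None)?;
                      for event in events.iter() {
                          match event.token() {
                              LISTENER => {
                                  let (_conn, _addr) = listener.accept()?;
                                  // register _conn with its own Token here
                              }
                              _token => {
                                  // a previously registered connection is ready
                              }
                          }
                      }
                  }
              }
              ```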

              1. 3

                This video describes roughly how many requests happen per Google Search query. One machine ends up sending requests to about 100 more, which end up sending to even more. So to handle 1,000 requests per machine concurrently, you’d need 100,000 threads at 1:1. Of course, not everyone is Google, but more and more places have to handle more and more requests per node, and have more and more services they must interact with.

                cc @icefall @steveklabnik

                https://youtu.be/QBu2Ae8-8LM?t=1233

                1. 1

                  You might be interested in the HN comments, which talk about this: https://news.ycombinator.com/item?id=10225903

                  1. 8

                    I did not read the whole thread, but someone did an experiment that backed up my claim that OS threads are insufficient at scale, for some value of scale.

                    https://news.ycombinator.com/item?id=10232131

                    In the test, the user was able to run 2 million goroutines and the runtime still managed to get work done. With OS threads, however, the system fell apart around 270,000 threads, with the kernel OOMing. At 200,000 OS threads, the per-thread context-switching cost was also an order of magnitude worse than Go’s at 2 million goroutines.

                    This matches my experience and intuition. I didn’t see any indication that this analysis was wrong, with pcwalton even confirming the results.

                    Of course, the question is: where is the point at which you care? IME (so YMMV), the problem with going 1:1 on connections is that once you hit the point where it no longer works, it’s awfully painful.
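
                    A rough sketch, if you want to poke at the OS-thread side of that experiment yourself (careful: this deliberately pushes the machine until spawning fails, or until the OOM killer steps in first):

                    ```rust
                    use std::thread;

                    fn main() {
                        let mut handles = Vec::new();
                        loop {
                            // Each thread just parks, so the cost measured is
                            // per-thread overhead, not work.
                            match thread::Builder::new().spawn(|| thread::park()) {
                                Ok(handle) => handles.push(handle),
                                Err(e) => {
                                    println!("spawn failed after {} threads: {}", handles.len(), e);
                                    break;
                                }
                            }
                        }
                        // Unpark everything so the process can exit cleanly.
                        for handle in handles {
                            handle.thread().unpark();
                            handle.join().unwrap();
                        }
                    }
                    ```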

                    1. 3

                      Yup, the reason I linked to the thread is that the discussion is more nuanced than this: one thread per connection applies in certain cases, but that doesn’t mean it’s just inherently superior. For example, https://news.ycombinator.com/item?id=10232119

                      1. 2

                        Thanks for the link to the thread.

                        I wonder, what if Go and Erlang just made a new OS thread per language-level thread/process? Would be interesting to see.

                        1. 1

                          Well, stack_size * thread_count is a limiting factor, unless you can get stack_size down to zero.
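
                          Back-of-the-envelope (my numbers, not from the HN thread): at Linux’s default 8 MB thread stacks, 270,000 OS threads reserve on the order of 2 TB of address space, while 2 million goroutines at the few-KB initial stacks Go used at the time come to single-digit GB. Only touched stack pages cost physical RAM, but the kernel’s per-thread bookkeeping is real either way.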

                    2. 1

                      Haven’t read this yet but I came to post a previous comment thread w/ pcwalton on the subject: https://news.ycombinator.com/item?id=10013009