1. 31

  2. 3

    I wonder how a webserver in Rust using futures would cope.

    1. 3

      Do you have a recommendation for a Rust webserver to include in the benchmark?

      1. 2

        Take a look here: https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=fortune&l=xhnr73-f

        Languages included: Erlang, Elixir, Go, Java, Javascript, Rust.

        Rust’s actix / actix-web is inhumanly fast.

      2. 2

        Very useful benchmark, thanks for the analysis!

        I’m curious about the behaviour under overload – how would the Erlang servers manage to keep latency constant? Do they reject requests with 503 or similar? Is this tracked by the test setup?

        Also, I’d love to see how the servers would do if the test continued after that hour while keeping request rate constant.

        1. 4

          I can really only speak about Erlang here, but I will give a quick rundown of what cowboy, and presumably mochiweb, are doing and how the BEAM will help keep latency low.

          When a request comes in to cowboy, it follows this procedure (with a bit of hand-waving):

          1. Accept the socket using a central socket acceptor (central here meaning there is only one, because only a single process can listen on a given port)
          2. Start an Erlang process to handle the actual work of the request
          3. On the new process, do the work of the given endpoint (in the case of this test, it just sleeps for ~100ms)
          4. The new process has access to the socket (or port, in Erlang terms) and sends the response.

          This is roughly what all web servers do.
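          The same accept-then-spawn shape can be sketched in Go, where `net/http` plays the role cowboy plays here: one listener accepts sockets and each request is handled on its own lightweight task. This is an illustrative analogy, not cowboy's actual code; the port number and the "hello" body are made up for the example.

          ```go
          package main

          import (
          	"fmt"
          	"io"
          	"net/http"
          	"time"
          )

          // handler does the "work" of the endpoint: like the benchmark
          // endpoint described above, it just sleeps ~100ms and responds.
          func handler(w http.ResponseWriter, r *http.Request) {
          	time.Sleep(100 * time.Millisecond)
          	fmt.Fprint(w, "hello")
          }

          func main() {
          	// Step 1: a single listener accepts sockets. Steps 2-4:
          	// net/http runs each request on its own goroutine, analogous
          	// to cowboy spawning an Erlang process per request, and that
          	// goroutine writes the response back on the socket.
          	srv := &http.Server{Addr: "127.0.0.1:8099", Handler: http.HandlerFunc(handler)}
          	go srv.ListenAndServe()
          	time.Sleep(200 * time.Millisecond) // crude wait for the listener to come up

          	resp, err := http.Get("http://127.0.0.1:8099/")
          	if err != nil {
          		panic(err)
          	}
          	body, _ := io.ReadAll(resp.Body)
          	resp.Body.Close()
          	fmt.Println(string(body)) // prints "hello"
          }
          ```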

          The specific part of Erlang that helps here with low response times, even under load, is that Erlang uses a preemptive scheduler. What this means is that if you have N cores, you can have up to N active processes doing work at any given time. The special part, though, is that after some number of reductions (1 reduction is roughly equal to 1 function call), the scheduler will stop that process and start/resume a different process (assuming you have more than N processes trying to do work). The whole point of this is so that each process will get roughly the same amount of CPU time as one another.

          So in this test when the sleep is called (or if it were an actual database call), the scheduler is actually taking this process, throwing it to the end of the scheduling queue, and starting to do work on another process. So when the processes start to “wake up” from their sleep, they are scheduled and just immediately send off their response.
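          That "sleeping costs nothing" behaviour is easy to demonstrate. Go's scheduler differs from the BEAM's (as the talk below discusses), but it parks sleeping tasks the same way, so the effect is the same for this particular point: a thousand concurrent 100ms sleeps finish in roughly 100ms of wall time, not 100 seconds, because a sleeping task occupies no scheduler until it is due to wake up and send its response.

          ```go
          package main

          import (
          	"fmt"
          	"sync"
          	"time"
          )

          func main() {
          	const n = 1000
          	var wg sync.WaitGroup
          	start := time.Now()
          	for i := 0; i < n; i++ {
          		wg.Add(1)
          		go func() {
          			defer wg.Done()
          			// A sleeping task is parked by the runtime, as described
          			// above: it is taken off the scheduler entirely until
          			// its wake-up time, when it is rescheduled to respond.
          			time.Sleep(100 * time.Millisecond)
          		}()
          	}
          	wg.Wait()
          	// All n sleeps ran concurrently, so total wall time is ~100ms.
          	fmt.Println(time.Since(start) < 2*time.Second) // prints "true"
          }
          ```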

          There is actually an interesting talk by Saša Jurić on the difference between preemptive scheduling (Erlang/Elixir) and cooperative scheduling (golang) and how it can affect the performance of your application. It is very obviously shown from the perspective of an Erlang/Elixir developer, but it is interesting to watch nonetheless.

          1. 1

            This is actually a very good question! While no 503s were observed in the test, there were timeouts; I've updated the article with a corresponding graph.

            We will add a sustained phase to future tests to see what happens at a constant rate of requests.