1. 12

  2. 2

    In short, the alternative to green threads presented is one real thread per connection, and not much is said about the criticism of HTTP as an RPC mechanism.
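
    For concreteness, a minimal sketch of what that thread-per-connection model looks like, assuming plain POSIX sockets and std::thread; the port and the canned response here are made up for illustration:

    ```cpp
    // Hypothetical sketch: a thread-per-connection accept loop (Linux/POSIX).
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    static void handle_client(int fd) {
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf));  // naively read the request
        if (n > 0) {
            const char resp[] =
                "HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok";
            write(fd, resp, sizeof(resp) - 1);   // send a canned response
        }
        close(fd);
    }

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);             // hypothetical port
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(srv, 128);
        for (;;) {
            int client = accept(srv, nullptr, nullptr);
            if (client < 0) continue;
            std::thread(handle_client, client).detach();  // one real thread per connection
        }
    }
    ```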

    1. 2

      I strongly like the spirit of this. However, benchmarking is hard. It is very easy to set out to measure A, actually measure B, and then very convincingly conclude C (borrowing the phrase from Brendan Gregg). Before attributing any value to these results, I would like to see some basic investigation into what the limiting factors are, how well it uses the various caches, how it handles coordinated omission, and so on. This is really basic stuff that should be part of any benchmark for it to count as valid.

      I know the author admits the fallibility of their method, but it’s still not ethical to say “here are some random numbers that might mean something” under the pretense of a correct process for observing them.

      1. 1

        Quick question, what is $http_server_library?

        To be fair, Python is slow as hell and no one expects it to be fast. Between 2011 and 2017 there was this weird push to make fast Python servers using uvloop, like Falcon, Sanic, and other projects that don’t make sense from a performance perspective, since doing anything beyond a hello world with them destroys that performance.

        Using a server written in Python or any other slow language is implicitly stating that you prefer some other quality, say developer friendliness, over performance. That’s OK and all, but you can’t assume that your system is better than anything else except for that quality.

        I use an asynchronous, single-threaded C++ server that I can run multiple processes of, all listening on the same port. They talk to FoundationDB asynchronously, because the FDB client is hooked into the event loop and makes batched calls. They call other services asynchronously through libcurl, because it’s also hooked into the event loop. I get ~380,000 requests per second on a VM on my laptop with 4 cores, benchmarked with wrk on the same VM.
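
        The “multiple processes on the same port” part is just SO_REUSEPORT (Linux 3.9+) on each process’s listening socket. A rough sketch of that setup, not my actual code, with the port and backlog picked arbitrarily:

        ```cpp
        // Rough sketch: each worker process creates its own listening socket with
        // SO_REUSEPORT set, so the kernel spreads incoming connections across all
        // the processes bound to the same port (Linux 3.9+).
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdint>
        #include <cstdio>

        int make_reuseport_listener(uint16_t port) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int one = 1;
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
            setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));  // the key bit
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
                perror("bind");
                return -1;
            }
            listen(fd, 512);
            return fd;  // hand this off to the process's event loop (epoll etc.)
        }

        int main() {
            // Run one copy of this per core; every copy binds the same port.
            int fd = make_reuseport_listener(8080);  // port is arbitrary here
            if (fd < 0) return 1;
            // ... register fd with the event loop and accept() on readiness ...
            pause();  // placeholder so the sketch stays self-contained
        }
        ```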

        Is it developer friendly? Not as friendly as Django or RoR, but with spdlog and proper logging it’s easy enough to track everything while building on what I have. I could probably run all of Discord’s text-based systems off of this setup with as many or fewer servers than they have.
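
        To give a sense of the spdlog side, this is roughly the level of ceremony involved; the logger name, pattern, and messages below are made up:

        ```cpp
        // Rough sketch of per-request logging with spdlog.
        #include <spdlog/spdlog.h>
        #include <spdlog/sinks/basic_file_sink.h>

        int main() {
            auto logger = spdlog::basic_logger_mt("server", "server.log");  // file-backed logger
            spdlog::set_default_logger(logger);
            spdlog::set_level(spdlog::level::info);
            spdlog::set_pattern("[%Y-%m-%d %H:%M:%S.%e] [%l] %v");

            // The kind of per-request tracking I mean (values invented for the example):
            spdlog::info("accepted connection fd={}", 42);
            spdlog::info("GET {} -> {} in {}us", "/health", 200, 87);
            spdlog::warn("upstream {} slow: {}ms", "foundationdb", 12);
        }
        ```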

        1. 1

          Final note: I didn’t mention what language or what library I used for this. To make my point, a friend did a similar test and got similar numbers with a different language/library. Then, just to make things even more ridiculous, he shoved it onto his Windows box and ran it from there. Even then, it did a stupid amount of traffic without breaking a sweat.

          I think the point of this was that, for the most part, you’re unlikely to need complex tooling to get decent performance out of any tool. You might not need Puma on top of a Ruby server; you might not need gunicorn/gevent on a Python server. Ruby or Python by itself would likely be “good enough” without needing to tune it or pull in external libraries.

          1. 1

            I’d be interested in hearing more about this setup. I’ve started dabbling in FDB so seeing some more advanced usage would be great. What can you share wrt code and experiences?

          2. 1

            I don’t have the repos (“google3” and “fbcode”) available to me any more.

            Are these the actual names of Google’s and Facebook’s monorepos?

            1. 3

              I can confirm that Google’s is google3; there was a google2 at least 15 years ago that used make as a build system, and I can only assume that there was a plain google at some point. No idea about Facebook.

              1. 1

                An easy search online returned this: https://github.com/angular/google3

                so I’d assume so.