1. 15

  2. 6

    Reminds me of that time when some of my coworkers were spending a lot of time on optimising string concatenation for building (rather complex) SQL queries. Even for a simple query, 1 ms is a good time, and the 50 ns you’re saving is … nothing. And these queries definitely took longer than 1 ms.
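    As a rough illustration (a sketch, not their actual code — the column and query names are made up), the gap between the naive and “optimised” ways of building such a query is on the order of microseconds, far below the millisecond-plus cost of actually running it:

    ```python
    # Sketch: += loop vs str.join for assembling a WHERE clause.
    # Both typically take tens of microseconds; the query itself takes >1 ms.
    import timeit

    fragments = [f"col{i} = :p{i}" for i in range(50)]

    def concat_plus():
        # The "slow" version: repeated string concatenation.
        sql = "SELECT * FROM t WHERE "
        for f in fragments:
            sql += f + " AND "
        return sql

    def concat_join():
        # The "optimised" version: a single join.
        return "SELECT * FROM t WHERE " + " AND ".join(fragments)

    plus_us = timeit.timeit(concat_plus, number=10_000) / 10_000 * 1e6
    join_us = timeit.timeit(concat_join, number=10_000) / 10_000 * 1e6
    print(f"+= loop: {plus_us:.1f} us per build")
    print(f"join:    {join_us:.1f} us per build")
    ```

    Either way the build cost is noise next to the round trip to the database.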

    It was one of those times when someone says something so profoundly off-base that you start to doubt your entire worldview.

    There’s still value in fast HTTP frameworks (or fast string concatenation), for example if you’re mainly serving static content or something else with little overhead, but those aren’t most applications. The “Worker crashed on signal SIGSEGV” bug reports on that japronto project don’t exactly inspire confidence though.

    1. 5

      These benchmarks looked suspiciously as though they didn’t even use enough workers for the sync frameworks - the graph of CPU usage from GNOME System Monitor never shows more than a couple of CPUs at 100% usage, which is deeply suspicious.

      Looking through his code on GitHub, I was not able to find anywhere that he increases the worker count above 1 for either sync or async, so his numbers are all wrong.
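      For reference, the usual way to give a sync framework real concurrency is simply to raise the process count; gunicorn’s documentation suggests roughly (2 × cores) + 1 as a starting point. A sketch (the app module name here is hypothetical):

      ```shell
      # Hypothetical WSGI app "app:app"; the worker count follows gunicorn's
      # documented (2 x num_cores) + 1 rule of thumb for sync workers.
      gunicorn app:app --worker-class sync --workers $((2 * $(nproc) + 1))
      ```

      Benchmarking with `--workers 1` leaves all but one core idle for the sync frameworks, which matches the CPU graphs above.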

      If you compare 1 async worker to 1 sync worker, you are implicitly allowing the async worker unlimited concurrency until it saturates a CPU, while allowing the sync worker no concurrency at all, since it blocks for IO.

      You might find that async performs better for certain tasks, but if you find that async trounces sync Python on an embarrassingly parallel task (serving web requests), then you need to look at whether you’ve made a mistake in your measurements - as I think this person has. I spent an enormous amount of time on my own benchmarks on this subject last year and discussed the problem with worker counts and concurrency there.

      1. 1

        That’s a compelling article you’ve written, thanks. I wonder if part of the appeal of async is that you don’t need to tune your number of processes as much. I imagine that different loads would cause larger shifts in the optimum number of processes for sync frameworks than for async ones. But maybe that doesn’t matter? How much worse do the sync frameworks get when you provide too many processes?

        1. 1

          Glad you liked it!

          I think that the fact that you don’t need to tune worker counts under async is probably the key reason why they come out better in most benchmarks. In practice, though, I think they are much, much more fragile. I’ve seen firsthand production situations like the one in “Rachel by the Bay”’s article - very fragile services and weird, empirically discovered deployment rules like “don’t let CPU go over 35% or you’ll get random timeouts”.

          > How much worse do the sync frameworks get when you provide too many processes?

          Each process carries the memory cost of your entire application image (your code + libraries + several MB for the interpreter), because sadly in Python that cannot be shared. So increased memory would be a cost of having too many processes. However, request processing time doesn’t really suffer from too many workers as far as I’ve seen, even when you have 5 or 6 times the number required.
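          To put rough numbers on that, a back-of-envelope sketch (all figures here are illustrative assumptions, not measurements of any particular app):

          ```python
          # Each extra sync worker duplicates the whole application image,
          # since CPython processes can't share it. Assumed sizes below.
          interpreter_mb = 10   # interpreter + stdlib, rough assumption
          app_image_mb = 140    # app code + imported libraries, assumption
          workers = 17          # e.g. (2 * 8 cores) + 1

          total_mb = workers * (interpreter_mb + app_image_mb)
          print(f"{total_mb} MB for {workers} workers")  # 17 * 150 = 2550 MB
          ```

          So over-provisioning workers costs memory roughly linearly, but not latency.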

          1. 1

            Rachel’s article is great, too. Very clear about the probable causes of the issues.

            Yeah, I guess I would expect sync processes not to affect processing time if they don’t wake up until the server has something for them (so long as the server has sufficient memory for working processes).

            I’m writing a web server at the moment in a language that can use OS threads (Julia), so I don’t imagine that much of this will apply (I think you get all the benefits of the sync setup if you just start a new thread for every request), but it’s still interesting.

      2. 1

        Hmm, it seems somewhat weird to do these benchmarks with Flask+Sanic, as Quart is the closest asyncio-powered spiritual successor to Flask.