1. 6

  2. 14

    Wow, that is a lot of work to stack a benchmark in favor of “python”.

    1. Have most of the critical code in C
    2. Use a feature of the HTTP/1.x spec that is unused in practice, and ignore the more modern standard (HTTP/2)
    3. Cripple the competition by single-threading it
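
    To make point 2 concrete, here is a minimal sketch (plain Python, nothing Japronto-specific) of what HTTP/1.1 pipelining looks like on the wire: the client writes many requests back-to-back before reading a single response, which lets a server batch its parsing and coalesce its response writes.

```python
def pipelined_payload(path: str, host: str, n: int) -> bytes:
    """Concatenate n GET requests into one buffer, as a pipelining client would."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "\r\n"
    ).encode("ascii")
    return request * n

# 24 pipelined requests per connection, matching the wrk setup in the article
payload = pipelined_payload("/", "localhost", 24)
```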
    1. 3

      There should be another flag (among offtopic, already posted and spam) for such content: misleading.

    2. 9

      Besides delaying writes for pipelined clients, there are several other techniques employed in the code. Japronto is written almost entirely in C.

      Not really much Python here. The author is benchmarking C code. It’s fast; colour me surprised.

      1. 5

        This framework benchmarking is pointless. Apps are never that simple. There should be a basic app boilerplate that they can use for benchmarks: maybe something that hits a PG database and Redis, sorts a few items, etc.
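
        A sketch of the kind of boilerplate I mean (all names hypothetical; `asyncio.sleep` stands in for real Postgres/Redis round trips):

```python
import asyncio

async def fake_pg_query(latency_s: float = 0.002):
    # Hypothetical stand-in for a Postgres round trip.
    await asyncio.sleep(latency_s)
    return [{"id": i, "score": (i * 37) % 10} for i in range(20)]

async def fake_redis_get(latency_s: float = 0.001):
    # Hypothetical stand-in for a Redis lookup.
    await asyncio.sleep(latency_s)
    return "cached-session"

async def handler():
    rows = await fake_pg_query()
    session = await fake_redis_get()
    rows.sort(key=lambda r: r["score"])  # the "sorts a few items" step
    return {"session": session, "rows": rows}

result = asyncio.run(handler())
```

        Even this tiny amount of I/O and CPU work would dwarf the per-request parsing cost that an echo benchmark measures.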

        1. 3

          The red flag for me here is:

          > Servers were load tested using wrk with 1 thread, 100 connections and 24 simultaneous (pipelined) requests per connection […] HTTP pipelining is crucial here since it’s one of the optimizations that Japronto takes into account when executing requests.

          If you look at the graph, Japronto is roughly 24 times faster than Go, which suggests that most of the speedup comes from the pipelining. This makes the benchmark less representative of real-world loads. For one, most browsers disable pipelining. It also means that you’re less likely to see a significant improvement if you do anything besides simple GETs. I’d like to see a benchmark that involves a database, and see what happens to the improvements.
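
          A back-of-the-envelope model of why that is (toy numbers of my own, not from the article): pipelining amortizes one network round trip over the whole batch, so when per-request server work is tiny the throughput scales almost linearly with pipeline depth, but as soon as real work (e.g. a database query) dominates, the win collapses.

```python
def requests_per_second(rtt_s: float, work_s: float, depth: int) -> float:
    """Toy model: one round trip (rtt_s) amortized over `depth` pipelined
    requests, each costing work_s of server-side work."""
    return depth / (rtt_s + depth * work_s)

# Echo handler: negligible per-request work, so depth 24 gives ~24x depth 1.
echo_gain = (requests_per_second(0.001, 0.000001, 24)
             / requests_per_second(0.001, 0.000001, 1))

# Handler with 2 ms of database work per request: the gain mostly vanishes.
db_gain = (requests_per_second(0.001, 0.002, 24)
           / requests_per_second(0.001, 0.002, 1))
```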

          1. 1

            Also, AFAIK net/http is not tuned aggressively for benchmarks; for that they should have used fasthttp.

          2. 2

            Benchmarks are too… simple:

            https://github.com/squeaky-pl/japronto/blob/master/benchmarks/japronto/micro.py
            https://github.com/squeaky-pl/japronto/blob/master/benchmarks/golang/micro.go

            and also: ‘To be fair all the contestants (including Go) were running single worker process’

            1. 3

              I mean, they did state as much in the text. And with only one graph, and that stark a difference, it’s obvious to any reader that they stacked the deck in their favor.

              I am certainly curious to see more robust (or at least more interesting) benchmarks, but I doubt I’d have that curiosity if their one benchmark hadn’t overpromised in that way.

              1. 2

                I’m not saying that this is a senseless lib or that they lie about performance. But the given benchmarks are too simple IMO. For example, benchmarking their lib against the GitHub routes would be much more interesting: https://github.com/julienschmidt/go-http-routing-benchmark/blob/master/github_test.go Or adding a sleep for each request, to see how it behaves when it’s not a simple echo.
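
                A quick sketch of what adding a sleep would expose (plain asyncio, not Japronto’s API): once each request carries real latency, throughput is determined by how well the server schedules concurrent requests, not by raw parsing speed, and an echo benchmark never exercises that at all.

```python
import asyncio
import time

async def slow_echo():
    await asyncio.sleep(0.01)  # simulated per-request backend latency
    return "ok"

async def serial(n: int):
    # One request at a time: latency adds up linearly.
    for _ in range(n):
        await slow_echo()

async def concurrent(n: int):
    # All requests in flight at once: latencies overlap.
    await asyncio.gather(*(slow_echo() for _ in range(n)))

t0 = time.perf_counter()
asyncio.run(serial(10))
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
asyncio.run(concurrent(10))
t_concurrent = time.perf_counter() - t0
```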