This framework benchmarking is pointless. Apps are never that simple. There should be a basic app boilerplate that they can use for benchmarks. Maybe something that hits a PG database and Redis, sorts a few items, etc.
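A sketch of what such a boilerplate workload could look like. All names here are hypothetical, and the `asyncio.sleep` calls are stand-ins for real Postgres and Redis round trips; a real benchmark harness would hit actual backing services:

```python
import asyncio
import json

async def check_cache(key):
    # Stand-in for a Redis GET (~0.5 ms of I/O wait); always misses here.
    await asyncio.sleep(0.0005)
    return None

async def fetch_items_from_db():
    # Stand-in for a Postgres query (~1 ms of I/O wait).
    await asyncio.sleep(0.001)
    return [{"id": 3, "name": "c"}, {"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

async def handle_request():
    cached = await check_cache("items")
    if cached is not None:
        return cached
    items = await fetch_items_from_db()
    items.sort(key=lambda item: item["id"])  # the "sorts a few items" part
    return json.dumps(items)

if __name__ == "__main__":
    print(asyncio.run(handle_request()))
```

The point of a shared workload like this is that every framework under test would pay the same I/O and serialization costs, so the measured difference reflects framework overhead rather than echo-loop speed.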
The red flag for me here is:

“Servers were load tested using wrk with 1 thread, 100 connections and 24 simultaneous (pipelined) requests per connection […] HTTP pipelining is crucial here since it’s one of the optimizations that Japronto takes into account when executing requests.”
If you look at the graph, Japronto is roughly 24 times faster than Go, which suggests that most of the speedup comes from the pipelining. This makes the benchmark less representative of real-world loads. For one, most browsers disable pipelining. It also means you’re less likely to see a significant improvement if you do anything besides simple GETs. I’d like to see a benchmark that involves a database, to see what happens to the improvements.
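For context, pipelining means the client sends many requests back-to-back without waiting for responses, so one read on the server can yield a whole batch, which can then be answered with a single write. A toy sketch (not Japronto’s actual parser) of why that amortizes per-request overhead:

```python
# Toy illustration of HTTP pipelining on the server side: with 24 pipelined
# requests per connection, a single recv() can contain many requests, and the
# server can parse them as a batch and reply with a single send(), amortizing
# syscall and parsing overhead across the whole batch.

RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"

def handle_buffer(buf: bytes) -> bytes:
    """Split a buffer of pipelined GET requests and build one batched reply."""
    # Naive parse: each bodiless request ends with a blank line.
    requests = [r for r in buf.split(b"\r\n\r\n") if r]
    return RESPONSE * len(requests)

# Three pipelined GETs arriving in a single read:
buf = b"GET / HTTP/1.1\r\nHost: example\r\n\r\n" * 3
reply = handle_buffer(buf)
```

A client that waits for each response before sending the next request (as browsers effectively do) never hands the server such a batch, which is why the gap should shrink without pipelining.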
I mean, they did state as much in the text. And with only one graph, and that stark a difference, it’s obvious to any reader that they stacked the deck in their favor.
I am certainly curious to see more robust (or at least more interesting) benchmarks, but I doubt I’d have that curiosity if their one benchmark hadn’t overpromised in that way.
I’m not saying that this is a senseless lib or that they lie about performance.
But the given benchmarks are too simple IMO.
For example, benchmarking their lib against the GitHub routes would be much more interesting:
https://github.com/julienschmidt/go-http-routing-benchmark/blob/master/github_test.go
Or adding sleeps to each request, to see how it behaves when it isn’t just a simple echo.
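The sleep suggestion is easy to sketch: once each handler waits on simulated I/O, throughput is bounded by concurrency rather than parsing speed, which should compress the gap an echo benchmark shows. A minimal asyncio sketch, not tied to any of the benchmarked frameworks:

```python
import asyncio
import time

async def handler():
    # Simulate a 10 ms backend call (DB query, upstream API, etc.).
    await asyncio.sleep(0.01)
    return b"hello"

async def run_benchmark(n_requests: int) -> float:
    start = time.perf_counter()
    await asyncio.gather(*(handler() for _ in range(n_requests)))
    return time.perf_counter() - start

elapsed = asyncio.run(run_benchmark(100))
# 100 concurrent 10 ms requests finish in roughly 10 ms of wall time, not
# 1 second: the event loop overlaps the waits. Once handlers spend their
# time waiting on I/O like this, raw request-parsing speed matters far less
# than it does in an echo benchmark.
```

Any framework with a decent event loop would post similar numbers here, which is exactly why this kind of workload is a more honest comparison than an echo server.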
Wow, that is a lot of work to stack a benchmark in favor of “python”.
There should be another flag (alongside offtopic, already posted and spam) for content like this: misleading.
Not really much Python here. The author is benchmarking C code. It’s fast, colour me surprised.
Also, AFAIK net/http is not tuned aggressively for benchmarks; for that they should have used fasthttp.
Benchmarks are too… simple:
https://github.com/squeaky-pl/japronto/blob/master/benchmarks/japronto/micro.py
https://github.com/squeaky-pl/japronto/blob/master/benchmarks/golang/micro.go
and also: ‘To be fair all the contestants (including Go) were running single worker process’