1. 25

  2. 15

    I liked his idea of using framework delay as a metric he tracks between releases; it’s not particularly significant on its own, but you want to keep an eye on which direction it’s moving.

    1. 7

      These things add up: web framework overhead, ORM overhead, random for {} loops for giggles. Performance is a feature; own it.

      1. 7

        From my experience doing performance consulting as a day job: What finally kills you is the missing database index on the other end.

        If push comes to shove, you can always replace a hot API spot with a specialized server that is fully optimised.

      2. 5

        I don’t get the author’s point. A few microseconds times thousands is going to make a difference. Also, serving static responses is a good way to exercise the critical path of a web framework, as the request processing time is an epsilon. The only bad thing with benchmarks is that they focus on a subset of the use cases, but why not create another benchmark that gives a different perspective?

        1. 7

          > I don’t get the author’s point. A few microseconds times thousands is going to make a difference. Also, serving static responses is a good way to exercise the critical path of a web framework, as the request processing time is an epsilon.

          It isn’t. A web framework isn’t just an HTTP router; it’s much more than that: a way to assemble a complex program around a request/response cycle. Accepting and routing HTTP is rarely the critical path of any framework.

          For example, many web frameworks run a sizeable stack of attack mitigations on every request (path traversal checks, CSRF protection, HTML escaping on the output buffer). They are on by default because it’s frankly dangerous to turn them off. Surely such a framework is beaten hands down by a “basic” framework that does none of that.
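
          To make that concrete, here is a minimal net/http sketch of the kind of per-request chain being described. The middleware is a crude stand-in written for illustration, not any real framework’s mitigation code:

          ```go
          package main

          import (
              "html/template"
              "net/http"
              "strings"
          )

          // rejectPathTraversal refuses any request whose path contains "..",
          // before routing even happens.
          func rejectPathTraversal(next http.Handler) http.Handler {
              return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                  if strings.Contains(r.URL.Path, "..") {
                      http.Error(w, "bad path", http.StatusBadRequest)
                      return
                  }
                  next.ServeHTTP(w, r)
              })
          }

          // requireCSRFToken is a stand-in for CSRF protection: state-changing
          // requests must carry a token (real frameworks check it against a
          // session-bound secret).
          func requireCSRFToken(next http.Handler) http.Handler {
              return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                  if r.Method != http.MethodGet && r.Header.Get("X-CSRF-Token") == "" {
                      http.Error(w, "missing CSRF token", http.StatusForbidden)
                      return
                  }
                  next.ServeHTTP(w, r)
              })
          }

          var page = template.Must(template.New("page").Parse("<p>Hello, {{.}}</p>"))

          func main() {
              mux := http.NewServeMux()
              mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
                  // html/template escapes the output on every request, another
                  // per-request cost that a "basic" framework skips.
                  page.Execute(w, r.URL.Query().Get("name"))
              })

              // Every request pays for the whole chain, even a hello-world one.
              http.ListenAndServe(":8080", rejectPathTraversal(requireCSRFToken(mux)))
          }
          ```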

          A few milliseconds times thousands also makes no difference in an environment where requests are independent, because they can be trivially distributed; fleets of web servers are not unusual. Obviously, there’s a point where optimising that becomes financially worthwhile, but at that point rewriting the hot paths is also feasible.

          For static responses, no web framework is appropriate (because that’s what Nginx is there for).

          > The only bad thing with benchmarks is that they focus on a subset of the use cases, but why not create another benchmark that gives a different perspective?

          Because it’s infeasible to build reasonably large, comparable apps in every framework just to benchmark them.

          1. 2

            I wish the article had contained your explanation: it makes it much clearer why the author criticizes the use of requests/second as a key metric and focuses on overhead instead. At the end of the day, the goal is to interpret a request and send a response, and it’s good to know what time budget you have per request to compute that response. Since receiving the request and sending the response are the minimal set of operations, their cost sets a ceiling on performance.
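
            As a back-of-the-envelope illustration of that budget (the figures below are invented for the example, not taken from the article):

            ```go
            package main

            import "fmt"

            func main() {
                // Hypothetical figures, purely for illustration.
                const targetRPSPerCore = 10_000.0 // requests per second on a single core
                const frameworkOverheadUs = 20.0  // fixed framework cost per request, in µs

                budgetUs := 1e6 / targetRPSPerCore // total time budget per request
                leftForAppUs := budgetUs - frameworkOverheadUs

                fmt.Printf("per-request budget: %.0f µs\n", budgetUs)     // 100 µs
                fmt.Printf("left for the app:   %.0f µs\n", leftForAppUs) // 80 µs
            }
            ```

            Whatever the framework’s fixed cost is, it comes straight out of that per-request budget, which is the overhead framing described above.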

            1. 1

              Attack mitigation should have little impact on performance.

              1. 1

                Sure, but on a “hello-world-as-many-req-per-second-we’re-shaving-off-nanoseconds” benchmark, it’s surprisingly significant.
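
                A rough sketch of why it shows up there: in a hello-world benchmark the handler does almost nothing, so any fixed per-request check is a visible slice of the total. The check below is a made-up stand-in, not any real framework’s code; save it as overhead_test.go and run go test -bench=. :

                ```go
                package overhead

                import (
                    "net/http"
                    "net/http/httptest"
                    "strings"
                    "testing"
                )

                // hello is the whole "application" in a hello-world benchmark.
                var hello = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                    w.Write([]byte("hello"))
                })

                // mitigated adds a stand-in security check of the kind a
                // batteries-included framework runs on every request.
                var mitigated = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                    if strings.Contains(r.URL.Path, "..") ||
                        (r.Method != http.MethodGet && r.Header.Get("X-CSRF-Token") == "") {
                        http.Error(w, "rejected", http.StatusForbidden)
                        return
                    }
                    hello.ServeHTTP(w, r)
                })

                func run(b *testing.B, h http.Handler) {
                    req := httptest.NewRequest(http.MethodGet, "/hello", nil)
                    for i := 0; i < b.N; i++ {
                        h.ServeHTTP(httptest.NewRecorder(), req)
                    }
                }

                func BenchmarkHello(b *testing.B)     { run(b, hello) }
                func BenchmarkMitigated(b *testing.B) { run(b, mitigated) }
                ```

                Both are fast in absolute terms; the point is only that fixed checks become a measurable fraction when there is nothing else to measure.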

                1. 1

                  fair enough

          2. 3

            I also don’t get the idea of the “requests per second” benchmarks. Why is “requests per second” a good thing to measure without mentioning what that “request” is doing?

            1. 2

              I see similar behaviour from some web framework developers in Go.

              Fasthttp comes to mind: it’s obviously the fastest, but it also has a lot of incompatibilities and lacks HTTP/2.

              I’ve yet to find anyone with a problem case where Fasthttp versus literally any other library would have a measurable or meaningful impact on performance.

              Just pick the framework you like most and that gets the app up and running fastest; then you’re set. Don’t worry about benchmarks at this layer until they become an actual problem.
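
              To illustrate the incompatibility point with a hello-world sketch (the only assumption here is the published import path github.com/valyala/fasthttp):

              ```go
              package main

              import (
                  "fmt"
                  "net/http"

                  "github.com/valyala/fasthttp"
              )

              // net/http: the handler shape the rest of the Go ecosystem
              // (middleware, routers, http.Handler-based libraries) builds on.
              func stdHello(w http.ResponseWriter, r *http.Request) {
                  fmt.Fprint(w, "hello")
              }

              // fasthttp: a different handler signature, so http.Handler-based
              // middleware and routers don't plug in directly.
              func fastHello(ctx *fasthttp.RequestCtx) {
                  fmt.Fprint(ctx, "hello")
              }

              func main() {
                  go fasthttp.ListenAndServe(":8081", fastHello)
                  http.ListenAndServe(":8080", http.HandlerFunc(stdHello))
              }
              ```

              Switching between the two means rewriting every handler and giving up the http.Handler ecosystem, for a difference that rarely shows up outside hello-world benchmarks.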