1. 3

  2. 2

    I think HAProxy is an amazing tool, but I take issue with this project for (some of) the same reasons I take issue with projects like Caddy.

    Too many eggs in one basket; in other words, a tool that tries to do too many things.

    1. 1

      Interesting. Wonder why this isn’t being upstreamed to HAProxy, but maybe there are too many major changes to the codebase for Willy to be comfortable with.

      I’m a little skeptical of the claims of being faster than Nginx and Varnish without formal benchmarks and specific configurations being listed. It’s not uncommon for someone to do a basic benchmark of Varnish and not have it configured optimally. There are more settings than just the VCL.
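
      If it helps, the full runtime parameter set can be captured alongside the VCL so a write-up shows the complete configuration; a minimal sketch, assuming a default local instance reachable by varnishadm:

      ```sh
      # Dump every varnishd runtime parameter and its current value, so the
      # benchmark write-up covers more than just the VCL:
      varnishadm param.show
      ```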

      1. 2

        Hi, here is the benchmark https://github.com/jiangwenyuan/nuster/wiki/Performance-benchmark:-nuster-vs-nginx-vs-varnish

        It includes the hardware, software, system parameters, config files and such. Any parameter tuning suggestions are welcome.

        1. 2

          Can you ensure that you’re using the critbit hashing algorithm? That wasn’t listed, and it’s possible the package/distro you’re on still has it set to “classic” by default. Can you also test against Varnish 5 so all software is at its current major release branch?
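
          For reference, one way to check and to force this; a rough sketch where the listen address, backend and storage size are only placeholders for the actual benchmark setup:

          ```sh
          # Inspect the running varnishd command line for an explicit -h option;
          # if none is passed, the compiled-in default applies (critbit since 3.0),
          # but a distro init script may still pass -h classic:
          ps -o args= -C varnishd

          # Start varnishd with critbit spelled out explicitly:
          varnishd -a :6081 -b 127.0.0.1:8080 -h critbit -s malloc,1G
          ```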

          Is the backend webserver sending any Cache-Control headers for this content? Varnish will obey those regardless of your beresp.ttl setting unless you forcibly remove that header. So can you verify that Varnish is getting 100% hits and not hit_for_pass? Otherwise Varnish is accepting many connections in a burst and then making a single request to the backend, which is far from optimal compared to serving everything from cache instantly.
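
          Two quick checks that would make the numbers easier to trust (URL, port and paths are placeholders, adjust to the benchmark hosts):

          ```sh
          # See whether the origin sends headers that affect caching decisions
          # (Cache-Control, Expires, Vary and Set-Cookie all matter in default VCL):
          curl -sI http://127.0.0.1:8080/ | grep -Ei 'cache-control|expires|vary|set-cookie'

          # After a benchmark run, confirm the traffic was pure cache hits rather
          # than hit-for-pass or misses:
          varnishstat -1 | grep -E 'MAIN\.cache_hit |MAIN\.cache_hitpass|MAIN\.cache_miss'
          ```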

          1. 2

            If you look at the check-http-response-header-and-data-size section, you can see that there are no cache-related headers. I’m sure that 100% of requests go to Varnish (there are no logs on the backend server except the initial request).

            And it is critbit; maybe I should test against Varnish 5 :)

            Do you have any other config suggestions, like thread pools?

            1. 2

              When tuning Varnish, think about the expected traffic. The most important thread setting is the number of cache-worker threads. You may configure thread_pool_min and thread_pool_max. These parameters are per thread pool.

              Although the Varnish threading model allows you to use multiple thread pools, we recommend that you do not modify this parameter. Based on our experience and tests, we have seen that 2 thread pools are enough; in other words, the performance of Varnish does not increase when adding more than 2 pools.

              Defaults in 4.0+ are a thread_pool_min of 100 (100 free worker threads) and a thread_pool_max of 5000 (5000 concurrent worker threads).

              As this is a synthetic benchmark not replicating real-world scenarios, the question is: how many concurrent connections do you want? You could start the service with a thread_pool_min of… 5000 and a thread_pool_max of 10000 so you are instantly able to handle more responses, if you wanted (at the cost of more memory).
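
              As a sketch of what that would look like (listen address, backend and storage flags are placeholders; only the -p options matter here):

              ```sh
              # Check the current thread pool settings of a running instance:
              varnishadm param.show thread_pool_min
              varnishadm param.show thread_pool_max

              # Start varnishd pre-warmed for a high-concurrency synthetic load:
              varnishd -a :6081 -b 127.0.0.1:8080 -s malloc,1G \
                  -p thread_pool_min=5000 -p thread_pool_max=10000
              ```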

              Is this a single-socket / single-CPU (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz) server? If you have multiple physical CPUs, you should make sure the process affinity is set to pin it to a single CPU socket, or you will pay a performance penalty for accessing memory directly connected to the remote CPU socket.
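
              On a multi-socket box, one way to do that is to start the whole process under numactl; a rough sketch, assuming socket 0 is NUMA node 0 and with placeholder varnishd flags:

              ```sh
              # Pin varnishd's CPUs and memory allocations to NUMA node 0 so cache
              # hits never pay the cross-socket memory access penalty:
              numactl --cpunodebind=0 --membind=0 \
                  varnishd -F -a :6081 -b 127.0.0.1:8080 -s malloc,1G
              ```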

              1. 1

                Hi, I’ve tested against Varnish 4.1.8 and 5.2.1, with and without -p thread_pool_min=5000 -p thread_pool_max=100000; it does not make a difference.

                1. 1

                  Let me do the benchmark again.

                  It’s a 12-core CPU; I tested with both 1 core and 12 cores.