1. 3

    Looks like the result at the top is pulled from the “min” column because it’s more favorable to nuster :)

    1. 3

      I’m not sure how to interpret the requests per second. They are all over the place on nginx but the average is pretty close to nuster.

      1. 1

        Not the min: those numbers come from “finished in 29.51s, 338924.15 req/s, 48.81MB/s” and “finished in 90.56s, 110419.16 req/s, 15.62MB/s”.
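
        The figures are also consistent with both runs pushing the same total request count; a quick check, assuming req/s here is simply total requests divided by elapsed time:

        ```shell
        # total requests = elapsed seconds * req/s; both runs come out to ~10M
        awk 'BEGIN { printf "%.0f\n", 29.51 * 338924.15 }'   # 10001652
        awk 'BEGIN { printf "%.0f\n", 90.56 * 110419.16 }'   # 9999559
        ```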

      1. 2

        Working on a big feature of the open source cache server nuster

        1. 1

          What is that feature?

          1. 1

            active cache

        1. 3

          Debugging several rare bugs in the open source cache server nuster, reported by the community

          1. 4

            I’m still working on the open source project nuster, adding a major feature.

            1. 2

              Working on the open source project nuster, a cache server based on HAProxy.

              https://github.com/jiangwenyuan/nuster

              Updated to HAProxy v1.8.8; finally, HTTP/2 support

              1. 4

                Working on the open source project nuster, a cache server based on HAProxy.

                Updated to HAProxy v1.7.10, changed config directives, removed the share on|off mode, and refactored code for the upcoming HAProxy v1.8 merge.

                Finished the v1.7.x version; will move on to HAProxy v1.8.x (v1.8.8)

                1. 1

                  Like this, but it is a little bit slow :(

                  1. 1

                    Do you only ever use du on one and only one directory? If not, does it take less time using du to look into multiple directories?

                  1. 3

                    Working on open source cache server nuster, migrating to HAProxy v1.8

                    1. 6

                      Working on open source cache server nuster, migrating to HAProxy v1.8

                      1. 5

                        Released nuster v1.7.9.9, a caching proxy server.

                        Added cache stats functionality, fixed a security bug.

                        1. 3

                          Working on the cache server nuster: fixed a security bug that allowed bypassing the ACL check, and implementing cache stats functionality.

                          When cache stats is done, nuster will be a fully functional cache server. After that, I will start to refactor some code (mainly renaming cache-related variables/functions that share names with HAProxy v1.8), and then import HAProxy 1.8.

                          1. 1

                            How will this compare to the cache built into HAProxy 1.8?

                            Have you considered extending/copying the process that syncs stick tables to peers, to also sync cached data to peers (so a failover event doesn’t then necessitate a heap of unneeded backend requests)?

                            1. 2

                              The cache introduced in haproxy 1.8 has many limitations. For example, only small responses can be cached: the limit defaults to 16KB, as defined by the global parameter tune.bufsize, while nuster can cache responses of any size.

                              Also, the haproxy 1.8 cache only works for 200 responses to GET requests, while nuster can cache any HTTP status code as well as POST/PUT requests.

                              Also, the haproxy 1.8 cache can only use the host and URI as the key, while nuster can use headers, cookies, and query parameters as the key too.

                              And nuster has PURGE functionality, runtime disable/enable, stats, and so on; it is fully functional compared to Varnish (HTTPS only available in the Plus version), Squid (slow), or nginx (cache purging only available in the Plus version).

                              I’m going to work on the sync (maybe using stick tables) and persistence after importing v1.8.
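
                              For reference, a minimal config sketch; the directive names (cache on, cache-rule) are recalled from the current nuster README, may differ between versions, and are illustrative only:

                              ```
                              global
                                  cache on data-size 200m
                              defaults
                                  mode http
                              frontend web
                                  bind *:8080
                                  default_backend app
                              backend app
                                  cache on
                                  cache-rule all ttl 120
                                  server s1 127.0.0.1:8081
                              ```

                              A catch-all rule like this caches every response for 120 seconds; real configs would usually attach an ACL filter and a key to each cache-rule.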

                              1. 1

                                Thanks for the detailed explanation,

                                From what I read, I’m not sure the “fully functional compared to varnish” claim is accurate.

                                You don’t have a config language like VCL, or an equivalent of vmods, do you? And I’m guessing none of the extras like compression handling, ESI processing, etc.

                                I’m not saying there isn’t a use case for your project; I just don’t see it as being as flexible as a regular haproxy+varnish pairing.

                          1. 6

                            Working on the cache server nuster, adding the following features:

                            • Purge cache by path (curl -X PURGE -H "path: PATH")
                            • Purge cache by regex (curl -X PURGE -H "regex: REGEX")
                            • Purge cache by host (curl -X PURGE -H "x-host: HOST")
                            • And purge by any combination of the above
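
                            A combined purge might look like this (hypothetical host and regex values, and assuming nuster listens on 127.0.0.1:8080):

                            ```shell
                            # purge every cached .css object for one host: regex + x-host combined
                            curl -X PURGE -H "x-host: example.com" -H "regex: \.css$" http://127.0.0.1:8080/
                            ```
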
                            1. 6

                              Working on the web cache proxy server nuster, added the following features and tests:

                              • Purge all cache (curl -X PURGE -H "name: *")
                              • Purge the cache belonging to a proxy (curl -X PURGE -H "name: proxy-name")
                              • Purge the cache belonging to a cache-rule (curl -X PURGE -H "name: cache-rule-name")
                              1. 5

                                Working on the web cache proxy server nuster, implementing a feature to update cache TTL at runtime via an API

                                1. 1

                                  So what does Nuster do that HAProxy doesn’t?

                                  1. 2

                                    nuster adds caching ability to HAProxy: nuster = haproxy + cache. It is also different from the cache feature introduced in haproxy v1.8, which has many limitations.

                                  1. 7

                                    Still working on the open source project, web cache server nuster: added support for disabling/enabling the cache by name at runtime. Will release this week.

                                    1. 3

                                      Working on the open source project, web cache server nuster, developing more powerful cache-purging features (purge cache by tag/name/regex), disabling/enabling the cache on the fly, etc.

                                      1. 7

                                        Working on open source project nuster, fixed a bug, released a new version.

                                        1. 1

                                          Interesting. Wonder why this isn’t being upstreamed to HAProxy, but maybe there are too many major changes to the codebase for Willy to be comfortable with.

                                          I’m a little skeptical of the claims of being faster than Nginx and Varnish without formal benchmarks and specific configurations being listed. It’s not uncommon for someone to do a basic benchmark of Varnish and not have it configured optimally. There are more settings than just the VCL.

                                          1. 2

                                            Hi, here is the benchmark https://github.com/jiangwenyuan/nuster/wiki/Performance-benchmark:-nuster-vs-nginx-vs-varnish

                                            It includes hardware, software, system parameters, config files, and so on. Any parameter-tuning suggestions are welcome.

                                            1. 2

                                              Can you ensure that you’re using the critbit hashing algorithm? It wasn’t listed, and it’s possible the package/distro you’re on still has this set to “classic” by default. Also, can you test against Varnish 5 so all software is at its current major release branch?

                                              Is the backend webserver sending any Cache-Control headers for this content? Varnish will obey those regardless of your beresp.ttl setting unless you forcibly remove the header. So can you verify that Varnish is getting 100% hits and not hit_for_pass? Otherwise Varnish is accepting many connections in a burst and then making a single request to the backend, which is far from optimal compared to serving everything from cache instantly.
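
                                              To check both on a running instance, something along these lines should work (note the hash algorithm is a varnishd startup option, -h, not a runtime parameter):

                                              ```shell
                                              # look for -h on the varnishd command line; no -h means the
                                              # compiled-in default, which is critbit in recent versions
                                              ps -o args= -C varnishd | grep -o '\-h [a-z]*'
                                              # thread limits on the running instance
                                              varnishadm param.show thread_pool_min
                                              varnishadm param.show thread_pool_max
                                              # hits vs. hit-for-pass: a nonzero cache_hitpass means objects
                                              # were marked uncacheable and requests are reaching the backend
                                              varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_hitpass
                                              ```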

                                              1. 2

                                                If you look at the check-http-response-header-and-data-size section, you can see that there are no cache-related headers. I’m sure that requests go 100% to varnish (there’s no log on the backend server except the initial request).

                                                And it is critbit; maybe I should test against varnish 5 :)

                                                Do you have any other config suggestions, like thread pools?

                                                1. 2

                                                  When tuning Varnish, think about the expected traffic. The most important thread setting is the number of cache-worker threads; you may configure thread_pool_min and thread_pool_max. These parameters are per thread pool.

                                                  Although the Varnish threading model allows you to use multiple thread pools, we recommend that you do not modify this parameter. Based on our experience and tests, we have seen that two thread pools are enough; in other words, the performance of Varnish does not increase when adding more than two pools.

                                                  Defaults in 4.0+ are a thread_pool_min of 100 (100 free worker threads) and a thread_pool_max of 5000 (5000 concurrent worker threads).

                                                  As this is a synthetic benchmark not replicating real-world scenarios, the question is: how many concurrent connections do you want? You could start the service with a thread_pool_min of, say, 5000 and a thread_pool_max of 10000 so you can instantly handle more responses, if you wanted (at the cost of more memory).

                                                  Is this a single-socket / single-CPU (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz) server? If you have multiple physical CPUs, you should make sure the process affinity is set to pin it to a single CPU socket, or you will pay a performance penalty for accessing memory directly connected to the remote CPU socket.
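
                                                  On a multi-socket box the pinning could be done at startup, for example (assuming numactl is installed and node 0 is the target socket; paths and ports here are illustrative):

                                                  ```shell
                                                  # restrict varnishd's CPUs and memory allocations to NUMA node 0
                                                  numactl --cpunodebind=0 --membind=0 \
                                                      varnishd -a :6081 -f /etc/varnish/default.vcl -h critbit
                                                  ```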

                                                  1. 1

                                                    Hi, I’ve tested against varnish 4.1.8 and 5.2.1, with and without -p thread_pool_min=5000 -p thread_pool_max=100000; it does not make a difference

                                                    1. 1

                                                      Let me do the benchmark again.

                                                      It’s a 12-core CPU; I tested with both 1 core and 12 cores.