1.

Mailing list post includes a PDF link to a networking performance comparison of DragonFlyBSD, FreeBSD, and Linux.

Interesting comment by Matthew Dillon on DragonFlyDigest about it too.

  1.

    Is it odd that FreeBSD is so low? I was under the impression the network stack was a strong point in FreeBSD; is that not the case, or are these graphs showing something else?

    1.

      Isn’t this an HTTP benchmark? I’m not certain this is a network stack bottleneck. There are recent benchmarks showing FreeBSD excelling in packets per second.

      1.

        It seemed odd to me too, but then again, I have been hearing that DragonFly has done a lot of good performance improvement work, so I’m not sure.

        I do seem to recall reading something about nginx and reuseport not working on FreeBSD; the PDF calls this out specifically. I wonder how much impact that has on the results.
        I don’t recall seeing it documented in the PDF, but I wonder whether sendfile was used for the static files.
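
        For context, sendfile(2) lets the kernel send a file’s contents straight from the page cache to the socket, skipping the usual read()/write() round trip through userspace, which can matter a lot in a static-file benchmark. Here’s a rough sketch of the Linux variant (FreeBSD’s sendfile(2) has a different signature, with header/trailer support; “client_fd” here is assumed to be an already-accepted connection):

        ```c
        /* Rough sketch: serve a static file over an accepted socket with
         * Linux sendfile(2). The kernel copies file -> socket directly,
         * with no userspace buffer. FreeBSD's sendfile(2) differs (it
         * takes a struct sf_hdtr for headers/trailers, among other args). */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int serve_file(int client_fd, const char *path)
        {
            int file_fd = open(path, O_RDONLY);
            if (file_fd < 0) { perror("open"); return -1; }

            struct stat st;
            if (fstat(file_fd, &st) < 0) { perror("fstat"); close(file_fd); return -1; }

            off_t offset = 0;
            while (offset < st.st_size) {
                /* sendfile advances offset by the number of bytes it sent. */
                ssize_t n = sendfile(client_fd, file_fd, &offset, st.st_size - offset);
                if (n <= 0) { perror("sendfile"); close(file_fd); return -1; }
            }
            close(file_fd);
            return 0;
        }
        ```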

        1.

          According to the nginx 1.9.1 release notes, SO_REUSEPORT has a big impact on performance, on Linux at least:

          “I compared three NGINX configurations: the default (equivalent to accept_mutex on), with accept_mutex off, and with reuseport. As shown in the figure, reuseport increases requests per second by 2 to 3 times, and reduces both latency and the standard deviation for latency.”

          There’s a discussion about it not working with FreeBSD on the nginx forums.
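
          For anyone curious what reuseport changes at the socket level: with SO_REUSEPORT, each worker opens and binds its own listening socket on the same port, and the kernel spreads incoming connections across those sockets, so the workers no longer contend on a single shared socket. A minimal sketch, assuming forked workers and an arbitrary port 8080 (my illustration, not nginx’s actual code):

          ```c
          /* Minimal sketch of the SO_REUSEPORT pattern: several workers,
           * each with its own listening socket on the same port. The
           * kernel load-balances incoming connections among them.
           * Illustration only -- not nginx's actual implementation. */
          #include <arpa/inet.h>
          #include <netinet/in.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          #include <sys/socket.h>
          #include <unistd.h>

          static int make_listener(uint16_t port)
          {
              int fd = socket(AF_INET, SOCK_STREAM, 0);
              if (fd < 0) { perror("socket"); exit(1); }

              int one = 1;
              /* Must be set before bind(); lets many sockets share the port. */
              if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
                  perror("setsockopt(SO_REUSEPORT)");
                  exit(1);
              }

              struct sockaddr_in addr;
              memset(&addr, 0, sizeof(addr));
              addr.sin_family = AF_INET;
              addr.sin_addr.s_addr = htonl(INADDR_ANY);
              addr.sin_port = htons(port);

              if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); exit(1); }
              if (listen(fd, 128) < 0) { perror("listen"); exit(1); }
              return fd;
          }

          int main(void)
          {
              for (int i = 0; i < 4; i++) {      /* four workers, arbitrarily */
                  if (fork() == 0) {
                      int fd = make_listener(8080);
                      for (;;) {
                          int c = accept(fd, NULL, NULL);
                          if (c >= 0) close(c);  /* a real server would handle the request */
                      }
                  }
              }
              for (;;) pause();                  /* parent just waits */
          }
          ```

          The alternative, which accept_mutex mediates, is one listening socket shared by every worker, with the workers taking turns accepting on it; that shared-socket contention is what reuseport removes.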

      2.

        Interesting comparison. Performance tests are always tricky: there’s the perennial issue of ensuring all the systems are optimally tuned, particularly as, in this case, FreeBSD was used pretty much “out of the box”. I’d be curious to see a response from the FreeBSD team.

        Still, it’s impressive just how far the (relatively small) DragonFlyBSD team have been able to push the FreeBSD 4.x architecture they started from.

        1.

          From my limited understanding, it all centers on one architectural change that makes all of this possible, which is pretty cool.

          1.

            I don’t believe it is as simple as an architectural design change. There are many decisions and tradeoffs all over the code base that contribute to the overall result. Even with a great architecture at the base, I would expect years of work to get all the details right. And new devices that each need a different driver come out all the time, and each new driver brings a new set of bugs all over again.