
  2. 1

    I haven’t personally done benchmarks, but I thought I read that running rsync over SSH was a lot slower than using the rsync protocol, since the latter has no encryption overhead. Did you see differently?

    1. 3

      Yeah, I can definitely believe that SSH can become the bottleneck, or at least pose significant overhead, in many setups. But in my tests, unencrypted rsync daemon mode was even slower in some cases, for some reason!

      I ran some tests with my network storage PC, downloading a 50 GB file of zeros from my workstation, both connected via a 10 Gbit/s link (the commands are sketched further down):

      • curl -v -o /dev/null reaches ≈1000 MB/s — maximum achievable on this 10 Gbit/s link
      • ssh midna.lan cat > /dev/null reaches ≈368 MB/s — SSH overhead
      • rsync (writing to tmpfs) via SSH reaches ≈321 MB/s
      • rsync (writing to tmpfs) unencrypted reaches ≈337 MB/s
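
      For anyone who wants to reproduce this, the setup boils down to roughly the commands below. This is only a sketch: midna.lan is the workstation acting as the source, and the file path, tmpfs mount point, and rsync module name are placeholders rather than the exact ones I used.

      ```sh
      # Create the 50 GB test file of zeros on the workstation:
      truncate -s 50G /srv/data/zero.img

      # Baseline HTTP download, discarding the data (assumes some HTTP server
      # on the workstation serves /srv/data):
      curl -v -o /dev/null http://midna.lan/zero.img

      # Raw SSH throughput, no rsync and no disk writes on the receiving side:
      ssh midna.lan cat /srv/data/zero.img > /dev/null

      # rsync over SSH, writing into a tmpfs mount:
      rsync -P midna.lan:/srv/data/zero.img /mnt/tmpfs/

      # rsync in unencrypted daemon mode (assumes an rsync daemon on the
      # workstation exporting /srv/data as a module named "data"):
      rsync -P rsync://midna.lan/data/zero.img /mnt/tmpfs/
      ```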

      But, once you write to disk, throughput drops even further:

      • scp (writing to disk) reaches ≈280 MB/s — SSH+disk overhead
      • rsync (writing to disk) via SSH reaches ≈213 MB/s — rsync overhead
      • rsync (writing to disk) unencrypted reaches ≈199 MB/s (!) not sure why this is slower
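
      The disk-backed numbers are the same transfers, just pointed at the storage array instead of tmpfs, plus scp. Again a sketch, with /srv/storage standing in for the disk-backed destination:

      ```sh
      # scp over SSH, writing to disk:
      scp midna.lan:/srv/data/zero.img /srv/storage/

      # rsync over SSH, writing to disk:
      rsync -P midna.lan:/srv/data/zero.img /srv/storage/

      # rsync daemon mode, unencrypted, writing to disk:
      rsync -P rsync://midna.lan/data/zero.img /srv/storage/
      ```
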
      1. 1

        Since you have a really fast network, I’m wondering if compression is to blame. IIRC, compression is disabled by default in both programs, but distributions might change that; a quick way to check is sketched below.
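
        Something like the following would rule it out, reusing the host and placeholder paths from the parent comment; it only shows where compression could sneak in, not a claim about the actual setup:

        ```sh
        # Show the effective OpenSSH client config for the host; unless a distro
        # or user config turned compression on, this should print "compression no":
        ssh -G midna.lan | grep -i compression

        # Force SSH compression off explicitly for one rsync run:
        rsync -P -e 'ssh -o Compression=no' midna.lan:/srv/data/zero.img /srv/storage/

        # rsync itself only compresses when -z/--compress is passed, so also check
        # for aliases or wrapper scripts that might add it:
        type rsync
        alias rsync 2>/dev/null || true
        ```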