I haven’t personally done benchmarks, but I thought I read that using the SSH protocol with rsync was a lot slower than using the rsync protocol, since the latter avoids the encryption overhead. Did you see differently?
Yeah, I can definitely believe that SSH can become the bottleneck, or pose a significant overhead, in many setups. But, in my tests, using unencrypted rsync daemon mode was even slower for some reason!
I ran some tests with my network storage PC, downloading a 50GB zero file from my workstation (both connected via a 10 Gbit/s link):
curl -v -o /dev/null reaches ≈1000 MB/s — maximum achievable on this 10 Gbit/s link
ssh midna.lan cat > /dev/null reaches ≈368 MB/s — SSH overhead
rsync (writing to tmpfs) via SSH reaches ≈321 MB/s
rsync (writing to tmpfs) unencrypted reaches ≈337 MB/s

But, once you write to disk, throughput drops even further:
scp (writing to disk) reaches ≈280 MB/s — SSH+disk overhead
rsync (writing to disk) via SSH reaches ≈213 MB/s — rsync overhead
rsync (writing to disk) unencrypted reaches ≈199 MB/s (!) — not sure why this is slower

As you have a really fast network, I’m wondering if compression is to blame? IIRC, in both programs compression is disabled by default, but distributions might change it.