1. 28
    1. 8

      I'm surprised at how performant sshfs is. I always thought of it as a short-term mounting option for when you infrequently need to access a folder and don't want to go through the bother of setting up a full remote filesystem, but that if it was something you wanted permanently mounted, you should really go with something higher-performing. Happy to be shown to be wrong.
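
      (For anyone who hasn't tried it as a permanent mount: a long-lived sshfs mount is just something like the below; the host and paths are made up, and the keepalive options are only what I'd reach for first.)

          sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
              user@fileserver:/srv/data /mnt/data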

      1. 5

        I think the issues with sshfs are less around performance per se, and more around it not having all of the complex facilities that NFS and SMB provide for safe access (under some conditions) from multiple machines; e.g., leasing and cache-coherence management, close-to-open consistency, etc.
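
        (Concretely, on Linux that trade-off shows up in the NFS mount options; the values here are only illustrative:)

            # relax attribute caching: faster, weaker coherence between clients
            mount -t nfs -o nocto,actimeo=60 server:/export /mnt/fast

            # strict coherence for multiple writers: noticeably slower
            mount -t nfs -o noac,lookupcache=none server:/export /mnt/safe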

        1. 1

          I've never ever had good experiences with sshfs for anything but short-term mounting, but that was for remote access, not local access. Though as I recall, it was kinda unreliable even when performance was good.

        2. 4

          Generally, I am surprised that the impact of encryption is so different across protocols. OK, if SMB encryption only uses one CPU at most, I get it. Maybe it is something related to packet fragmentation or similar: many small encrypted packets probably have more overhead than fewer bigger ones, while the penalty is smaller for plaintext.

          I wonder how NFS over WireGuard would perform…
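
          (If anyone wants to benchmark it: with a standard two-peer WireGuard config already in /etc/wireguard/wg0.conf, the test setup is only a couple of commands; the addresses and paths are placeholders.)

              wg-quick up wg0                            # bring up the tunnel
              mount -t nfs 10.8.0.1:/export /mnt/bench   # 10.8.0.1 = server's WireGuard address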

          1. 4

            I do unencrypted NFS over Nebula (https://github.com/slackhq/nebula). I've never benchmarked it, but it's plenty fast enough for my needs. Nebula doesn't use WireGuard, but it does use the same Noise Protocol Framework, so the crypto is roughly the same.
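
            (The gist of such a setup, with made-up addresses: restrict the export to the Nebula overlay subnet and let Nebula's own firewall gate the NFS port.)

                # /etc/exports: only the overlay network (192.168.100.0/24 here) may mount
                /srv/data 192.168.100.0/24(rw,sync,no_subtree_check)

                # and in the server's Nebula config, under firewall.inbound:
                #   - port: 2049
                #     proto: tcp
                #     host: any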

            1. 1

              But Nebula runs in user space?

              1. 1

                Yes, doesn't WireGuard?

                1. 2

                  Not normally on Linux (or Windows?), no?

                  Not that user space has to be slow. I'm unsure of the current status, but Tailscale did a lot of work on improving user-space performance:

                  https://tailscale.com/blog/throughput-improvements

                  But one of the initial selling points of wg was that it ran in the kernel.

                  1. 1

                    Ah, I obviously don't use WireGuard :)

                    The bonus of Nebula is that it will do routing. Say you have three machines: one at the office, one at home, and your laptop. As your laptop moves between home and the office, Nebula routes traffic between the laptop and, say, the office machine across the office network, so you get maximum throughput whenever you are on that network. As you move outside the office network, it shifts to using the Internet instead as needed, still giving you connectivity, just more slowly.

                    https://arstechnica.com/gadgets/2019/12/how-to-set-up-your-own-nebula-mesh-vpn-step-by-step/

                    1. 1

                      Netbird will do a p2p mesh using kernel-space WireGuard. Kernel-space Linux networking is usually an order of magnitude faster than the same stack in user space.

                      1. 1

                        I don’t disagree, but Netbird looks like they want your money: https://netbird.io/. Nebula is MIT licensed, so I’ll stick with Nebula for now. If I ever get desperate for performance I’ll look into something else.

            2. 1

              > I wonder how NFS over WireGuard would perform…

              Yeah, that would be really interesting.

            3. 2

              I would be interested to see a comparison on a higher-latency connection, say a laptop accessing the server over a VPN around 50 ms away. In my experience NFS seemed to have huge issues with this, but I haven't explored it deeply.
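
              (If anyone does benchmark this, the knobs I'd reach for first are larger transfer sizes and multiple TCP connections; the values below are just starting points, and nconnect needs Linux 5.3+.)

                  mount -t nfs -o rsize=1048576,wsize=1048576,nconnect=8 server:/export /mnt/nfs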

              Also, maybe it's just me, but setting up NFS has always been extremely painful. I don't know about you guys, but I don't have LDAP in place to get consistent UIDs across my personal machines. Or there are useless error messages, or timeouts when mounting fails for one of ten reasons. Kerberos seems like a configuration nightmare; Tailscale with fixed and trusted IPs was easier. But then you can't convince systemd to mount only after Tailscale is connected, because Tailscale doesn't expose a target or anything for this.

              And NFS never seems to want to error out, instead freezing your system, despite setting various timeouts to low values. Good luck umounting (lazy? force?) if your shell touches the mount from the prompt or tab-completion logic. I guess this makes sense for servers and such, but not for my laptop.
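
              (For what it's worth, the workaround I'd try for both problems is a soft mount behind a systemd automount, so nothing blocks at boot and a dead server eventually returns I/O errors instead of hanging; this fstab line is a sketch, not something I've battle-tested.)

                  # /etc/fstab: mount on first access, give up after ~30s, error out instead of hanging
                  server:/export  /mnt/nfs  nfs  noauto,x-systemd.automount,x-systemd.mount-timeout=30,soft,timeo=50,retrans=2,_netdev  0  0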

              SMB seems to have its own weird issues, like a directory listing (for one particular directory) taking three minutes to load from a Windows client.

              Sorry for the rant, but this is a pet peeve of mine. Maybe I’m just unlucky, but I’ve always had so many issues with this. Perhaps I should try sshfs again.