1. 15
  1. 5

    This is an OpenWrt image built around Speedify; the latter, I believe, has been discussed here before.

    (I clicked hoping for something new not dependent on such a 3rd party service and was disappointed.)

    1. 1

      Sorry, fixed the title. I couldn’t find any past discussions here.

      1. 1

        Thanks for explaining and I’m definitely not disparaging your work. This looks very helpful for people at a certain scale or with certain needs - thanks for making it public and freely available.

        WRT alternatives, the first thing that comes to mind: I imagine a modern solution would use MPTCP at the gateway and call it a day, while a more compatible solution would have an open-source client/server architecture that tags and forwards all traffic over both connections to a remote VM and reassembles it on the other end (roughly like the sketch below).
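
        To make that second idea concrete, here’s a rough sketch of the client half in Python. It’s my own toy, not anyone’s actual implementation, and all addresses are placeholders: tag each packet with a sequence number, stripe across one UDP socket per WAN, and let the server reassemble by sequence number.

        ```python
        # Toy client half of a tag-and-forward bonder. All addresses are
        # placeholders; a real server would reorder/reassemble by `seq`.
        import itertools
        import socket
        import struct

        SERVER = ("203.0.113.10", 4000)   # hypothetical remote VM

        # One UDP socket per WAN; binding to each WAN's local address pins the path.
        wan_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        wan_a.bind(("192.0.2.2", 0))      # placeholder WAN A source address
        wan_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        wan_b.bind(("198.51.100.2", 0))   # placeholder WAN B source address

        paths = itertools.cycle([wan_a, wan_b])
        seq = 0

        def send(payload: bytes) -> None:
            """Prefix an 8-byte sequence number and round-robin across both WANs."""
            global seq
            next(paths).sendto(struct.pack("!Q", seq) + payload, SERVER)
            seq += 1
        ```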

        1. 5

          It’s nowhere near easy to implement with different links.

          MPTCP is for TCP aggregation; it’s not a tunnel.
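
          To illustrate the “not a tunnel” point: on Linux 5.6+ the application just opens an ordinary stream socket and the kernel negotiates the subflows, with no tunnel interface anywhere. A hedged sketch (it falls back to plain TCP if the peer or a middlebox strips the MPTCP options):

          ```python
          # MPTCP from the application's point of view: one ordinary stream
          # socket, protocol 262 (IPPROTO_MPTCP). Requires Linux 5.6+.
          import socket

          IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)  # constant added in Python 3.10

          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
          s.connect(("example.org", 80))  # kernel falls back to plain TCP if MPTCP is blocked
          s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
          print(s.recv(200))
          s.close()
          ```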

          There is a great project by the name of OpenMPTCProuter (OMR) that makes use of it. Bonding is done via a TCP proxy, for TCP only, as a side effect of having MPTCP enabled per WAN, so clients end up behind a proxy. UDP and other non-TCP flows are second-class citizens: a TCP VPN is used to route those flows over the proxy. MPTCP also requires an ISP that doesn’t filter TCP headers, and such ISPs are uncommon; OMR solves this with yet another VPN per WAN, which decreases performance and causes bufferbloat (VPN -> MPTCP -> proxy -> TCP VPN for non-TCP, i.e. TCP over TCP over TCP).

          OMR is excellent for bonding over a gigabit, as tested by a few contributors, but it’s not designed for lossy networks.

          There is no seamless failover, and one WAN going lossy will disrupt connectivity. MPTCP doesn’t work if the client cold-starts with the master interface down: you can have 3 WANs up and 1 down that happens to be the master, and experience a loss of connectivity on reboot or power-up.

          Residential-grade internet is not reliable enough for MPTCP, let alone combining cellular with wired, unless one of the links is very reliable and set as master. This is not an issue in data centers. Apple uses their own MPTCP “fork” to work around this; it’s mainly used for data migration in video calls.

          The more WANs you aggregate, the lower the reliability, hence why Speedify includes a basic SD-WAN; other solutions are the equivalent of RAID 0 storage in terms of reliability. There are channels for different streams: an aggregation/bonding channel and a mirroring/redundant channel, with application-aware routing of each stream to a channel. The redundant channel duplicates data across WANs, while the aggregation channel bonds across them. With video calls, thin streams, and large downloads on a low-quality network, Speedify will route the calls and other sensitive thin streams to the redundant channel while aggregating the bulk downloads, since bulk downloads can handle packet loss and small disruptions and have no real-time requirements. This works with “Streaming mode” enabled; there is more info in the README.md page. (A toy sketch of the two-channel idea follows below.)

          You can combine links with different speeds and latencies, though it takes time for the speed to ramp up when adding more than 3 WANs, as Speedify weighs them and tracks history for a quality rating score per WAN. A bad connection will be removed from aggregation but is still used in the redundant channel. Speedify seems to have a huge reordering buffer, so saturating each WAN will cause a small, fixed amount of bufferbloat. There is FEC when encryption is enabled, which is a bonus for lossy wireless connections.
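
          Here’s the toy sketch mentioned above of the two-channel routing policy as I understand it (my own reading, not Speedify’s actual code or flow classifier; the flow kinds and WAN handles are made-up placeholders):

          ```python
          # Toy two-channel policy: loss-sensitive thin streams are mirrored
          # over every WAN, bulk flows are striped across WANs.
          from itertools import cycle

          WANS = ["wan_a", "wan_b"]   # placeholder link handles
          stripe = cycle(WANS)

          def route(packet: bytes, flow_kind: str) -> list[tuple[str, bytes]]:
              if flow_kind in ("voip", "video_call", "dns"):
                  # redundant channel: one copy per WAN, receiver keeps the first
                  return [(wan, packet) for wan in WANS]
              # aggregation channel: each packet rides exactly one WAN
              return [(next(stripe), packet)]
          ```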

          Edit: Regarding OSS alternatives, there is a project called “Engarde” that is middleware for WireGuard that duplicates data for redundancy only (the receive side of that idea is sketched below). Glorytun does aggregation only, without reordering packets (bad for TCP across different links) and with no automatic weighting yet. MLVPN doesn’t have a functional reordering buffer, has no auto-weighting, and does aggregation only. Vtrunkd is abandoned and unfinished, as it moved to commercial; it’s very similar to Speedify, offering both redundancy and aggregation.
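
          The receive side of that Engarde-style duplication is the simple part. A sketch of the idea (mine, not Engarde’s code) is just first-copy-wins dedup by sequence number:

          ```python
          # First-copy-wins dedup for a redundancy-only link: every packet
          # arrives once per WAN; keep the first copy, drop the rest.
          seen: set[int] = set()

          def deliver(seq: int, payload: bytes) -> bytes | None:
              if seq in seen:
                  return None       # duplicate that arrived via the slower WAN
              seen.add(seq)
              # a real implementation would age old entries out of `seen`
              return payload
          ```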

          Auto weighting and reordering are not needed for combining two equal gateways from the same ISP on identical hardware; you can get away with bonding two OpenVPN clients tied to each WAN interface in pfSense, or with Linux Ethernet bonding if the jitter and latency are very low. MLVPN was designed for equal latencies, but with a large tolerance, and has failover.

          1. 1

            ZeroTier also seems to support multipath.

            Also, OMR can apparently use UDP as a transport.

            If MPTCP is not supported, OpenMPTCProuter can also use Multi-link VPN (MLVPN) or Glorytun UDP with multipath support. ref: https://github.com/Ysurac/openmptcprouter/wiki

            No idea how well they work though.

            1. 1

              Using Glorytun UDP only or MLVPN only in OMR disables MPTCP and the Shadowsocks proxy. MPTCP is TCP only; all aggregation solutions get called “multipath”.

              I mentioned Glorytun and MLVPN in the previous comment for details.

              OMR supports a VPN per WAN to forward MPTCP, using WireGuard (the default since 0.58) or OpenVPN, both of which cause large overhead and bufferbloat. Unfortunately, this is not clearly documented.

              Since 0.58, OMR has been using the master branch of Glorytun without the correct parameters in the post-tracking script, and an unofficial, dormant fork of MLVPN that isn’t functional, instead of the updated fork called Ubond or simply the official MLVPN. There is no easy way to downgrade to older versions on the server side.

              Zerotier multipath is for equal links only, no buffering or auto* weighing.

              1. 1

                > Zerotier multipath is for equal links only, no buffering or weighing.

                Sure seems like it has weighting?

                1. 2

                  Sorry, “auto weighting”, typo. Anyway, I can’t wait until they actually implement these features, as right now they’re only documented; the basic multipath function barely works. Even then, automatic detection and buffer-length tuning are necessary for dynamic links like cellular and residential internet, and that is missing in ZeroTier. A reordering buffer is needed for non-equal links (a rough sketch of one follows below); UDP and QUIC will run fine without it. Flow-based mode is load balancing, not packet aggregation, hence why they suggest it for TCP streams. I haven’t tried the dev-multipath testing branch yet: https://github.com/zerotier/ZeroTierOne/issues/1412
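
                  For reference, the reordering buffer I mean is roughly this (my own toy, not ZeroTier or ubond code): hold out-of-order packets briefly and release them in sequence, so TCP above never sees the reordering.

                  ```python
                  # Toy reordering buffer: release packets in sequence, or give up
                  # on a gap once the head packet has waited past HOLD_SECONDS.
                  # Picking HOLD_SECONDS is the hard part on dynamic links.
                  import heapq
                  import time

                  HOLD_SECONDS = 0.05
                  buf: list[tuple[int, float, bytes]] = []
                  next_seq = 0

                  def push(seq: int, payload: bytes) -> None:
                      heapq.heappush(buf, (seq, time.monotonic(), payload))

                  def pop_ready() -> list[bytes]:
                      global next_seq
                      out, now = [], time.monotonic()
                      while buf:
                          seq, arrived, payload = buf[0]
                          if seq == next_seq or now - arrived > HOLD_SECONDS:
                              heapq.heappop(buf)
                              next_seq = seq + 1
                              out.append(payload)
                          else:
                              break
                      return out
                  ```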

                  Check out ubond: it has a working auto-weighting and reordering buffer in the master branch as of today, but it lacks any documentation, so I followed the MLVPN docs. I’ll eventually add it after some polishing, and use a few major VPS providers’ APIs for one-click server deployment (requiring a prepaid balance and an account) for self-hosting, to maintain the “no CLI” approach. That’s still much less simple than a 3rd-party service, however.

                  1. 2

                    Thanks for the info!