1. 14

  2. 7

    It is a bad idea, but it works for me quite well, so ¯\_(ツ)_/¯

    1. 3

      It basically depends on your network and your application's traffic characteristics. While it's considered bad practice, it is still done in the real world when there is no other way. For more information I recommend reading RFC 8229: https://tools.ietf.org/html/rfc8229#section-12.1

      1. 3

        I understand that it may be problematic, but it is for me the only way to get a reliable VPN from the public internet into my home. It is a contraption of openvpn on tcp 443 + an ssh tunnel, but it works.
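
        Roughly, one plausible shape of such a contraption (hostnames, addresses, and ports here are placeholders, and the layers could be combined differently):

```shell
# Server at home: OpenVPN listening on TCP 443 (placeholder names/addresses).
openvpn --proto tcp-server --port 443 --dev tun --ifconfig 10.8.0.1 10.8.0.2

# Client: SSH tunnel to the house, then run OpenVPN through the tunnel.
ssh -N -L 1194:127.0.0.1:443 me@home.example.org &
openvpn --proto tcp-client --remote 127.0.0.1 1194 --dev tun --ifconfig 10.8.0.2 10.8.0.1
```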

        1. 1

          Hey, I am not judging. I have done the same in the past. It should just be your last resort if nothing else works :) I posted RFC 8229 so people understand that it can be done, but also what risks are involved.

      2. 2

        Same here. I am using OpenConnect VPN, and in UDP mode I always get some weird packet drops (maybe an MTU issue?). But TCP mode works just fine, with acceptable latency and much better stability.

      3. 3

        VPNs should use UDP. Or TCP (dis/re)assembly as sshuttle does.

        1. 1

          I wonder if you could partially alleviate this, for links without high variance in throughput, by throttling the upper PPP connection to a little less than the expected throughput of the lower IP link?

          Maybe a better idea: switch on ECN bits on the packets just before you put them into PPP, any time the lower TCP connection says its send window is full? I think that might make connections in the upper TCP stack respond to congestion seen by the lower TCP stack much faster.

          If the outer TCP stack notified you when it detected a lost packet, perhaps you could start setting ECN bits even sooner. That would require adding more features to the sockets API though, which I guess is a tall order.
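
          As a sketch of the marking step (a hypothetical helper, assuming you have the raw IPv4 packet bytes in hand): per RFC 3168, ECN lives in the two low bits of the old ToS byte, and 0b11 means "Congestion Experienced", so marking a packet before it goes into the tunnel is just bit-twiddling plus a header checksum fix:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    # Standard one's-complement sum over 16-bit big-endian words.
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def set_ecn_ce(packet: bytes) -> bytes:
    """Mark an IPv4 packet as Congestion Experienced (ECN = 0b11)."""
    ihl = (packet[0] & 0x0F) * 4                # header length in bytes
    hdr = bytearray(packet[:ihl])
    hdr[1] |= 0x03                              # set ECN bits to CE
    hdr[10:12] = b"\x00\x00"                    # zero checksum field
    hdr[10:12] = struct.pack("!H", ipv4_checksum(bytes(hdr)))
    return bytes(hdr) + packet[ihl:]
```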

          1. 2

            The Yggdrasil approach is to use a very high MTU (to try to reduce the number of control messages) and to drop packets if the upper TCP layer sends them faster than the lower TCP layer can send them.
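
            That drop-on-backpressure behavior is easy to approximate with a non-blocking socket (a generic sketch, not Yggdrasil's actual code):

```python
import socket

def forward_or_drop(lower: socket.socket, packet: bytes) -> bool:
    """Queue `packet` on the lower TCP link, or drop it when the send
    buffer is full. Assumes `lower` is non-blocking; a real tunnel would
    also need framing and partial-send handling."""
    try:
        sent = lower.send(packet)
    except BlockingIOError:
        return False               # lower link is saturated: drop the packet
    return sent == len(packet)     # a partial send also counts as a loss here
```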

            1. 1

              Interesting! Does the larger MTU change the granularity at which clients get feedback (or don’t) about how well their connections are going?

              Dropping packets is perfectly reasonable; it's just that ECN seems more elegant because it is nominally supposed to have the same effect on the client's send rate as a drop, but doesn't involve throwing away a perfectly good packet that may have already traversed several hops, consuming bandwidth along the way.