1. 8

  1. 4

    Under “How It Works” you say that it can speed up uploads by up to 4x. But then you say the upload could take 15 minutes instead of 2.5 hours, which is a 10x speedup (150 minutes ÷ 15 minutes = 10). Likewise, “upload a YouTube video in 5 minutes instead of 50” is also a 10x speedup, not 4x?

    Am I missing something?

    1. 3

      How does this work? The “How It Works” section is too high-level for lobste.rs, but if there’s a paper or something else more technical I’d love to read it.

      1. 5

        Check out the blog post here. If you have other questions let me know and I’ll do my best!

        1. 3

          This is what you should have posted to begin with.

          Does this only improve performance when implemented on the sender’s side? I.e., will this improve the download speed of HarryPotter.exe if I use SuperTCP on my Mac?

          1. 1

            It tries to better utilize the upstream bandwidth available to the system. If you put it on a server, some client might be able to download HarryPotter.exe from that server faster, but if it is installed only on the client it won’t help with downloads.

            Hope that helps.

      2. 2

        At first I saw the “How It Works” section and thought “Hey! They are going to ruin the connections for the rest of us!” But then I read the blog post, and the claim is that it won’t increase congestion. I guess it does that by providing a smarter congestion algorithm that uses more information than just packet loss to detect congestion.

        I guess the main idea is: don’t slow down when you see packet loss if other signals point to no congestion. And that seems really cool.
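
        If I’m reading it right, the loss-handling logic might look roughly like this Python sketch (all names and thresholds here are my guesses, not anything from the post):

        ```python
        from statistics import mean

        def looks_like_congestion(recent_rtts, min_rtt, loss_burst):
            """Guess whether a loss event came from a full queue.

            recent_rtts: recent RTT samples (seconds)
            min_rtt: lowest RTT seen on the path (~propagation delay)
            loss_burst: consecutive packets lost in this event
            """
            queuing_delay = mean(recent_rtts) - min_rtt
            # A burst of losses, or RTT well above the path minimum,
            # suggests queue overflow; a lone loss with quiet RTTs
            # looks more like random (e.g. wireless) loss.
            return loss_burst >= 3 or queuing_delay > 0.5 * min_rtt

        def on_loss(cwnd, recent_rtts, min_rtt, loss_burst):
            if looks_like_congestion(recent_rtts, min_rtt, loss_burst):
                return cwnd // 2  # real congestion: back off like normal TCP
            return cwnd           # random loss: hold the window
        ```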

        1. 1

          Good summary! We definitely think so too. ;)

          1. 1

            Sent you a message about trying the beta.

        2. 2

          I read the article. It seems like the protocol you’ve implemented takes the following inputs:

          - the inter-arrival time of acknowledgements
          - changes in round-trip time, including the minimum, the moving average, and the variance
          - the number of consecutive packets lost per loss event
          - the historical instantaneous buffer bloat at the time of loss events
          - the estimated link capacity and the ideal bandwidth-delay product needed to fill it

          and acts more aggressively based on this information. Is that correct?

          If you’re acting more aggressively, won’t that inherently increase congestion when an additional draw on network resources becomes active? You’re going to incur losses either way, but you’ll refuse to back off as aggressively as normal TCP does.
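
          Just to check my understanding of those inputs, here is how I’d picture the per-flow state in Python (every field name below is my own invention, purely for illustration):

          ```python
          from dataclasses import dataclass, field

          @dataclass
          class FlowSignals:
              ack_gap: float = 0.0        # smoothed ACK inter-arrival time (s)
              rtt_min: float = 0.0        # path propagation delay estimate (s)
              rtt_avg: float = 0.0        # moving average of RTT (s)
              rtt_var: float = 0.0        # variance of RTT samples
              loss_burst: int = 0         # consecutive losses in the current event
              bloat_at_loss: list = field(default_factory=list)  # queuing delay at past losses
              link_capacity: float = 0.0  # estimated bottleneck rate (bytes/s)

              def ideal_bdp(self) -> float:
                  # Bandwidth-delay product: just enough data in flight
                  # to fill the link at the path’s base RTT, and no more.
                  return self.link_capacity * self.rtt_min
          ```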

          I’ve never tried RDP + BitTorrent.

          In order for your SuperTCP to work, though, it would have to understand the application layer in order to separate flows; otherwise, wouldn’t it simply weigh RDP as a single TCP flow while BitTorrent opens many TCP flows? It is unclear how your service achieves this. It seems it would need to be an OS patch, or implemented in your router, which would then do TCP termination for local clients. This is how a lot of the WAN acceleration proxies work (Riverbed, etc.).

          1. 2

            Correct. However, our protocol will be more aggressive only if there are no signs of congestion. If other flows become active, SuperTCP will indeed back off and share the resources. This works best if the other flows also use SuperTCP. If many normal TCP flows are contending for the same bottleneck, congestion is inevitable, in which case all flows may experience losses and back off accordingly.

            Furthermore, while normal TCP waits for congestion loss to occur before backing off, SuperTCP will actually slow down upon early signs of congestion, decreasing the chance of queue overflow. Even without application-level traffic shaping, this latter feature can improve the QoE of RDP sessions, as the extra latency from buffer bloat is minimized.
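
            Roughly speaking, the early backoff works like the sketch below. This is a simplified illustration (the 25% threshold and the names are made up for this comment), not our actual implementation:

            ```python
            def on_ack(cwnd, rtt, rtt_min, bdp):
                """Simplified illustration: back off before loss occurs."""
                queuing_delay = rtt - rtt_min
                if queuing_delay > 0.25 * rtt_min:
                    # A standing queue is building: ease off before it
                    # overflows, which also keeps latency low for RDP.
                    return max(bdp, cwnd * 0.9)
                return cwnd + 1  # no congestion signs: keep probing
            ```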