I read the article. It seems like the protocol you’ve implemented takes
Inputs:
- the inter-arrival time of acknowledgements,
- changes in round-trip time, including the minimum, the moving average, and the variance,
- the number of consecutive packets lost per loss event,
- the instantaneous buffer bloat recorded at past loss events,
- the estimated link capacity and the ideal bandwidth-delay product needed to fill it,
and acts more aggressively based on this information, is that correct?
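One of the listed inputs, the bandwidth-delay product, follows directly from the other two estimates. As a minimal sketch (the function name and units are illustrative, not SuperTCP's actual API), the BDP is just the estimated capacity times the minimum RTT:

```python
def bdp_bytes(link_capacity_bps: float, min_rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the link."""
    return link_capacity_bps / 8.0 * min_rtt_s

# Example: a 100 Mbit/s link with a 40 ms minimum RTT needs
# 100e6 / 8 * 0.040 = 500,000 bytes (~500 KB) in flight to stay full.
print(bdp_bytes(100e6, 0.040))
```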
If you’re acting more aggressively, won’t that inherently increase congestion when an additional draw on network resources becomes active? You’re bound to incur losses, but you’ll refuse to back off as aggressively as normal TCP.
I’ve never tried RDP + BitTorrent.
For your SuperTCP to work, though, it would have to understand the application layer in order to separate flows; otherwise wouldn’t it simply treat RDP as a single TCP flow while BitTorrent opens many TCP flows? It is unclear how your service achieves this. It seems it would need to be an OS patch, or be implemented in your router, which would then perform TCP termination for local clients. That is how many WAN acceleration proxies (Riverbed etc.) work.
Correct. However, our protocol is more aggressive only when there are no signs of congestion. If other flows become active, SuperTCP will indeed back off and share the resources. This works best if the other flows also use SuperTCP.
If many normal TCP flows contend for the same bottleneck, congestion is inevitable, in which case all flows may experience losses and back off accordingly.
Furthermore, while normal TCP waits for a congestion loss before backing off, SuperTCP slows down at early signs of congestion, decreasing the chance of queue overflow. Even without application-level traffic shaping, this feature can improve the QoE of RDP sessions, since the extra latency from buffer bloat is minimized.
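The idea of backing off at early signs of congestion, before any loss, can be sketched with a delay-based rule in the spirit of what is described above (the threshold and decrease factor below are assumed values for illustration, not SuperTCP's actual algorithm): when the smoothed RTT rises well above the minimum RTT, the bottleneck queue is building, so the sender shrinks its congestion window instead of waiting for the queue to overflow.

```python
QUEUE_THRESHOLD = 1.25  # back off once RTT exceeds min RTT by 25% (assumed value)
BACKOFF = 0.9           # gentle multiplicative decrease (assumed value)

def on_rtt_sample(cwnd: float, srtt: float, min_rtt: float) -> float:
    """Return the new congestion window (in segments) after an RTT sample."""
    if srtt > QUEUE_THRESHOLD * min_rtt:
        # Queue is building at the bottleneck: slow down before a loss occurs.
        return max(1.0, cwnd * BACKOFF)
    # No sign of congestion: grow roughly one segment per round trip.
    return cwnd + 1.0 / cwnd
```

A loss-based sender would keep growing until the queue overflows; this rule instead trades a little throughput for a short queue, which is exactly what keeps an interactive RDP session responsive.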