This article is very well done!

Regarding QUIC itself, though, here’s a radical idea: Instead of creating a new protocol that mixes different OSI layers and complicates things by several orders of magnitude, why not simply stop making bloated websites that pull in dozens of stylesheets and scripts? I know that QUIC shaves a few milliseconds off the RTT, but today’s latency and lagginess come from other sources.

I don’t expect most modern web developers to stop and think about this, given how occupied they are with learning a new JavaScript framework every four months. The modern web seriously needs reform.
> Regarding QUIC itself, though, here’s a radical idea: Instead of creating a new protocol that mixes different OSI layers and complicates things by several orders of magnitude, why not simply stop making bloated websites that pull in dozens of stylesheets and scripts?
QUIC isn’t just for HTTP/3. It’s a lighter-weight protocol than TLS + TCP that can be used for anything that needs end-to-end encryption. Like SCTP, it provides multiple streams, so anything that wants to send independent data streams could benefit from it. It would be great for SSH, for example: things like X11 forwarding would run on a separate QUIC stream from the TTY stream, so a dropped packet on the higher-bandwidth X11 stream wouldn’t increase latency on the TTY. SSH already multiplexes channels like this, but it does so over TCP and therefore suffers from head-of-line blocking: a dropped packet causes a retransmit that blocks all SSH channels running over the same connection.
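To make the stream-independence point concrete, here is a minimal sketch of an SSH-like client over QUIC using the Go quic-go library. The address, ALPN string, and stream roles are invented for illustration; this is not a real SSH-over-QUIC implementation, just the shape of one.

```go
package main

import (
	"context"
	"crypto/tls"
	"log"

	"github.com/quic-go/quic-go"
)

func main() {
	ctx := context.Background()

	// One QUIC connection replaces the separate TCP and TLS layers.
	// (A real client would configure certificate verification; the
	// ALPN value below is made up for this sketch.)
	conn, err := quic.DialAddr(ctx, "example.com:2222",
		&tls.Config{NextProtos: []string{"ssh-quic-demo"}}, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Two independent streams over the same connection: packet loss
	// on the bulk x11 stream is recovered without ever stalling the
	// interactive tty stream, unlike SSH channels over one TCP pipe.
	tty, err := conn.OpenStreamSync(ctx)
	if err != nil {
		log.Fatal(err)
	}
	x11, err := conn.OpenStreamSync(ctx)
	if err != nil {
		log.Fatal(err)
	}

	tty.Write([]byte("interactive keystrokes"))
	x11.Write([]byte("bulk X11 frames"))
}
```

Both streams share one handshake and one congestion controller; only loss recovery and delivery order are per-stream.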
Good luck funneling the QUIC-over-UDP SSH stream through any half-decent firewall, and you can always multiplex streams in other ways. Surely there are performance advantages to QUIC, but at what cost? It’s reinventing the wheel (TCP), only much more complicated. The complex state handling will just lead to more bugs, but it’s another toy for web developers to play with. Instead of striving for more simplicity, we are yet again heading towards more complexity.
Any half-decent firewall is either going to see the QUIC traffic and not be able to inspect its contents, or it’s going to be a bank-style setup with interception (man-in-the-middle) certificates, because it’s very important for them to enforce which connections are taking place.
It’s more complicated than TCP, sure, but motorized transport is also more complicated than walking, and we accept that complication because it buys a compelling speed advantage.
It’s also not really a toy for web developers? My experience with HTTP/2 was that for a couple of years it was basically just a checkbox on a CDN or a single line in a server config, and then it became something I didn’t think about at all.
I agree that it sucks that the ship has seemingly sailed on simpler sites, but saving round trips is valuable when loading a smaller site off a bad network connection, like you might find in rural or underserved places and also major airports.
If you have multiple logical data streams in your protocol then you have the complex state handling anyway, just at the application level rather than the protocol level.
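To illustrate, here is a sketch of the per-stream bookkeeping a hand-rolled multiplexer over a single TCP connection ends up carrying anyway; the frame format and all names are invented for the example.

```go
package muxdemo

import (
	"bytes"
	"encoding/binary"
	"io"
)

// mux holds the state any application-level multiplexer needs:
// which stream IDs exist and what data is buffered for each.
type mux struct {
	streams map[uint32]*bytes.Buffer
}

func newMux() *mux {
	return &mux{streams: make(map[uint32]*bytes.Buffer)}
}

// demux reads frames of the form [id:4][len:2][payload] from one
// underlying connection and routes each payload to its stream.
func (m *mux) demux(conn io.Reader) error {
	var hdr [6]byte
	for {
		if _, err := io.ReadFull(conn, hdr[:]); err != nil {
			return err
		}
		id := binary.BigEndian.Uint32(hdr[0:4])
		n := binary.BigEndian.Uint16(hdr[4:6])

		payload := make([]byte, n)
		if _, err := io.ReadFull(conn, payload); err != nil {
			return err
		}

		buf, ok := m.streams[id]
		if !ok {
			buf = new(bytes.Buffer) // implicit stream open
			m.streams[id] = buf
		}
		buf.Write(payload)
	}
}
```

And this omits the hard parts (flow control, backpressure, stream close and reset), which is exactly the state QUIC standardizes at the transport level.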
Chill!! There are other reasons to use this. For example, I’m working on a reverse-tunnel-as-a-service project similar to ngrok or PageKite, aimed at self-hosting and publishing stuff on the fly. Currently its tunnel uses yamux over TLS. This is fine, but I noticed that it multiplies the initial connection latency quite a bit. When the tunnel server is nearby it’s fine, but someone from Sweden was testing the system and for them the initial page-load lag was noticeable. I know that ultimately having servers in every region is the only “best” solution, but after looking at the yamux code I believe that by using QUIC for my tunnel instead, I can cut the painful extra latency in half. It’s nice that its internal “logical” connections don’t add extra latency to establish!!
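For context on where those round trips go, here is a rough sketch (not from the commenter’s project; it assumes TLS 1.3, the hashicorp/yamux and quic-go libraries, and a made-up ALPN string) comparing the two dial paths:

```go
package dialdemo

import (
	"context"
	"crypto/tls"

	"github.com/hashicorp/yamux"
	"github.com/quic-go/quic-go"
)

func dialYamuxTLS(addr string) (*yamux.Session, error) {
	// ~1 RTT for the TCP handshake, then ~1 RTT for TLS 1.3,
	// before any application byte can move.
	conn, err := tls.Dial("tcp", addr, &tls.Config{})
	if err != nil {
		return nil, err
	}
	// yamux then multiplexes logical streams over that single TLS
	// connection; both handshakes above must finish first.
	return yamux.Client(conn, nil)
}

func dialQUIC(ctx context.Context, addr string) (quic.Connection, error) {
	// ~1 RTT total: QUIC folds the transport and TLS 1.3 handshakes
	// together, and opening further streams costs 0 extra RTTs.
	return quic.DialAddr(ctx, addr,
		&tls.Config{NextProtos: []string{"tunnel-demo"}}, nil)
}
```

So on a 150 ms path to a far-away tunnel server, roughly 300 ms of handshaking could drop to roughly 150 ms, which matches the “cut it in half” estimate above.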
> The modern web seriously needs reform.
I agree!! This is why I’m working on a tool to make it easier for everyone to serve from home or from a phone.
I’m the author of this page. Let me know if you spot any problems with it; I’m always looking to improve the wording or fix up the pages, especially anything inaccurate. (I still need to go through my TLS 1.2 page: it’s the first one I wrote, and I’ve learned a lot since then.)

I’m not a mathematician or cryptographer, so I have plenty of blind spots. I want to keep the wording readable for a layman developer, so I’ll usually favor plain language over domain-specific terms as long as the page stays accurate.