non-native speaker here - is “eating the world” considered a good thing?
It’s a bit context-dependent, but I don’t think it necessarily implies a value judgement in either direction.
I mean it doesn’t sound like it?
The term “… is eating the world” originates, I believe, from Marc Andreessen, the literally egg-headed VC whose claim to fame is to have been involved with starting Netscape. He wrote a blog post a few years back titled “Software is eating the world”. For some reason people took this to be a good thing.
Doesn’t tend to be a good thing when I say it, but as others have mentioned there’s some weird context behind it.
Huh? Surely HTTP/2 just requires a stream-oriented protocol?
I suspect that it’s more a matter of ‘can’t take advantage of QUIC’ than ‘can’t work with QUIC’. In particular, QUIC allows multiple streams to be multiplexed across the same connection without head-of-line blocking, which means you want to layer HTTP differently (connect, send requests on new streams in parallel, share any client authentication across all streams).
HTTP/2 header compression (HPACK) depends on the order in which messages are multiplexed onto the TLS stream. But in HTTP/3, each HTTP message is a separate QUIC stream, and there is no ordering guarantee between streams. So HTTP/3 needs a different header compression scheme (QPACK).
Fun fact: Rust’s #1 HTTP client library by @kornel’s ranking, ureq, only supports up to HTTP/1.1. (ureq is designed for simplicity.)

I think HTTP/1.1 only counts as simpler than HTTP/3 if you’re looking at just the top layer of the stack. TCP plus TLS plus HTTP/1.1 looks more complex than UDP plus QUIC plus HTTP/3.
But that only works if you’re only looking at a single stack. Most computers connected to the Internet today already have a TCP/IP stack, and most of those that use HTTP already have support for TLS. So you don’t save any complexity by using UDP and QUIC: you just add QUIC to the list of things to worry about.
This isn’t an argument against HTTP/3 or QUIC, I just wanted to point out that most things can be either simpler or more complex depending on which parts of the system you consider.
That’s definitely true for desktop computers. I’ve recently been looking at QUIC for IoT devices and it looks as if you can probably get away with less code and a lower RAM requirement with QUIC than TCP + TLS. The same is probably true of unikernel server things.
But QUIC is way more complicated than TCP + TLS: it has to multiplex streams, the crypto has to cope with out-of-order delivery, it has multipath support, …
Depends on how much of both you implement. The multipath bits of QUIC are more complex. But the mismatch between TCP segment and TLS record sizes adds a lot of complexity, and that’s not optional: you have to handle the fact that the TLS layer may need to buffer an arbitrary amount of state before it has a complete TLS record and can validate the MAC and safely pass the data to the next layer. For memory-constrained systems, implementing that safely is hard. In contrast, each datagram in a QUIC stream is an atomic unit and can be safely decrypted and then discarded. The only buffering that you need is to handle packet drops / out-of-order delivery of packets within a stream, and you can always bound that (and just drop any packets that arrive after the first one the network drops).
I’m fairly certain that the maximum length of a TLS record isn’t arbitrary; it’s 16 KiB (2^14 bytes of plaintext). This isn’t trivial for a small embedded system, but it’s not massive.
X.509 certificates, on the other hand, I think are technically allowed to have records of length up to 2^2048… cough.
I suspect it’s only ranked higher because it has a >= 1.0 version number. reqwest is more popular, 3.3M downloads vs. 742k, but has not reached 1.0 yet.
Nice QUIC summary!
Fun to see this page being served by HTTP2 :)