1. 24
  1. 7

    QUIC being hard to parse by router hardware is a feature, not a bug. IIRC (and I may not) this is why encryption was originally introduced in the protocol. I believe that it wasn’t until TLS 1.3 started maturing that it was integrated into QUIC to also provide strong security guarantees, but to be honest I’m really unsure on this point and I’m too lazy to Google at the moment. Maybe someone else can tell us?

    In any case, the reason QUIC being hard for routers to parse is a feature is that it ensures protocol agility. I don’t know the details, but there are things that could in theory be done to improve TCP’s performance yet in practice cannot be, because routers and other middleboxes parse the TCP headers and then break when they encounter the tweaked protocol. QUIC’s encryption ensures that middleboxes are largely unable to do this, so the protocol can continue evolving into the future.

    1. 2

      Google QUIC used a custom crypto and security protocol. IETF QUIC always used TLS 1.3.

      1. 2

        While there are definite benefits to it, like improved security from avoiding all attacks that modify packet metadata, it also means you can’t easily implement “sticky sessions”, for example, i.e. keeping the client connected to the same server for the whole duration of the connection. So yeah, it’s always a convenience/security tradeoff, isn’t it…

        1. 2

          I am not really a QUIC expert but I don’t really understand the issue here. The Connection ID is in the public header, so what prevents a load balancer from implementing sticky sessions?

          1. 2

            Oh, I’m far from an expert too. You’re right: if the router understands QUIC, it will be able to route sticky sessions. If it only understands UDP (as is the case with all currently deployed routers), it won’t be able to, since the source port and even the IP can change within a single session. But that’s a “real-world” limitation, not a limitation of the protocol itself.
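            For what it’s worth, the routing key a QUIC-aware balancer needs (the Destination Connection ID) sits in the unencrypted part of every packet, per the version-independent invariants of RFC 8999. Here’s a rough sketch of CID-based backend selection; the backend pool and the fixed short-header CID length are made up, and real deployments (e.g. the QUIC-LB draft) encode a server ID into server-chosen CIDs rather than hashing:

            ```python
            import hashlib

            BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical server pool
            SHORT_HEADER_CID_LEN = 8  # assumed length agreed with our servers

            def destination_cid(packet: bytes) -> bytes:
                """Extract the Destination Connection ID from a QUIC packet.

                Per RFC 8999: long header = flags(1) | version(4) |
                dcid_len(1) | dcid | ...; short header = flags(1) | dcid,
                with the DCID length known out of band.
                """
                if packet[0] & 0x80:  # high bit set: long header
                    dcid_len = packet[5]
                    return packet[6:6 + dcid_len]
                return packet[1:1 + SHORT_HEADER_CID_LEN]

            def pick_backend(packet: bytes) -> str:
                # Same CID -> same backend, independent of the UDP 4-tuple.
                digest = hashlib.sha256(destination_cid(packet)).digest()
                return BACKENDS[digest[0] % len(BACKENDS)]
            ```

            The point is that, unlike the source IP/port, the CID stays stable while the client keeps using it, so routing on it survives address changes; migration to a new path uses fresh CIDs, which is why the QUIC-LB approach encodes a stable server ID into every CID it issues.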

            1. 5

              What kind of router are you thinking of?

              A home router that can’t route back UDP to the google chrome application is just going to force google to downgrade to TCP.

              A BGP-peered router has no need to deal with sticky sessions: They don’t even understand UDP.

              A load balancer for the QUIC server is going to understand QUIC.

              A corporate router that wants to filter traffic is just going to block QUIC, mandate trust of a corporate CA and force downgrade to HTTPS. They don’t really give two shits about making things better for Google, and users don’t care about QUIC (i.e. it offers no benefits to users) so they’re not going to complain about not having it.

              1. 2

                You should take a look at QUIC’s preferred address transport parameter (which is similar to MPTCP’s option). This allows the client to stick with a particular server without the load balancer being QUIC-aware.

        2. 6

          Two of the most critical changes introduced were … HTTP pipelining.

          I’m not sure I’d agree with that statement. Pipelining was a bit of a failure and basically not seen in the wild.

          While beneficial in theory, this feature is rarely seen in practice, since it requires a server to understand the structure of the HTML it serves, which is rarely the case.

          This is not why push isn’t used. It has to do with the complexity of predicting what data the client will definitely need, the need to keep buffers shallow so the server can respond to client changes, the need for the client to accurately cancel requests, and the often very high cost of getting it wrong and over pushing.

          A bit of a nitpick: 0-rtt concerns are orthogonal to quic. You can run quic as 1-rtt, and you can run http1 or 2 with a 0-rtt TLS 1.3 handshake (and TFO if you want to remove the transport layer round trip).

          Two other things I’ll mention that I think are interesting: while both 2 and 3 do header compression, there were significant changes between HPACK and QPACK. HPACK header compression has head-of-line blocking of its own, which was irrelevant when running over TCP but made it a bad fit for running over QUIC. And one of the other big advantages of QUIC is finer-grained control over the state timers that are baked into TCP - for example, losing the initial SYN is extremely painful, more so than losing packets in an established session, and QUIC gives you the ability to be more aggressive about retrying that initial loss (which helps with bringing in tail latencies on lossy networks).

          1. 1

            I’m not sure I’d agree with that statement. Pipelining was a bit of a failure and basically not seen in the wild.

            HTTP/1.1 pipelining definitely exists in the wild and has extremely clear performance benefits. To the degree that you can visibly identify pipelining on certain types of websites like image galleries.

            1. 1

              It looks like it does have more adoption on mobile than I gave it credit for, so my original statement was too strong.

              I do still consider it mostly a failed stopgap on the way to HTTP/2 though, and I wouldn’t consider the benefits extremely clear - especially given that all major non-mobile browsers I’m aware of have either turned it off by default or removed it.

              1. 1

                Given that buggy servers and proxies prevented widespread adoption in desktop browsers, I suppose I’m forced to agree the technology ultimately failed. But I’m definitely salty about it.

          2. 4

            Nice article, simple and pleasant to read.

            1. 1

              Thanks. Was researching the topic of HTTP3, so I thought other people might be interested in an overview.

              1. 1

                I have only seen HTTP/2 push used for Apple’s APNS protocol.

              2. 2

                The thing that’s a problem with QUIC is that you’ll have a hard time (like with HTTP/2) getting it running between an application and your reverse proxy. So it’s user <http/3> server <http/1.1> application for 90% of what people tend to run?

                Got a Nextcloud behind a global nginx reverse proxy? Great, now everything between the user and the nginx is streamed and QUIC, but not so between nginx and Nextcloud. And where are your localhost certificates coming from?

                1. 6

                  The reason NGINX gives for not implementing HTTP/2 for upstream (proxied) traffic is that it wouldn’t improve performance on the low-latency networks normally used in this type of setup. Not sure if this would change for HTTP/3 though.

                  1. 1

                    Ah thanks, didn’t know this was the case. I thought streaming / avoiding TCP would give you the same improvements locally.

                  2. 5

                    HTTP/1 turned out to be good enough for upstream communication. You lose only two things: H/2 push and detailed multiplexing/prioritization.

                    However, H/2 push seems to be dead. Browser implementations are too weird and fragile to be useful. Experiments in automating H/2 push at CDNs turned out to give “meh” results (lots of conditions must be just right for a perf benefit, but a less-than-ideal guess makes it a net negative).

                    Prioritization and multiplexing can be done at the edge. H/2 server can by itself decide how to prioritize and mix upstream H/1 connections, and this can be done well enough with heuristics.

                    So I expect this trend to continue. You can use H/1 server-side for simplicity, and H/3 for the last mile to get through the slowest links.

                    1. 3

                      I tried to deploy http/2 push in a way that improves performance and it’s just hard, even though this was within the context of a server that understood and optimized HTML. Here’s how it typically goes:

                      I want to push js/css to the browser. But what file? Maybe I can configure my server to push /app.js, but my build system uses cache busters so now I need to integrate some sort of asset db into my web server config. What if the homepage, search, and product team all have different incompatible systems? Assuming that problem is solved, what happens if app-$SHA.js is already in the browser cache?

                      For certain websites you start looking at heuristics. Like, if you cookie a user and a request comes in for a page without a cookie, you can probably assume it’s their first visit and that they have a cold cache. But without some sort of asset db for your versioned assets, you have to examine the response to see what assets are referenced. Now you might have to add a layer of gzip/brotli decoding and buffering.

                      It’s hard.
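                      To make that cookie heuristic concrete, here’s roughly what the decision logic looks like (the asset db, cookie name, and file names are made up for illustration):

                      ```python
                      # Hypothetical push decision for a server that optimizes HTML.
                      # ASSET_DB stands in for the versioned-asset database discussed above.
                      ASSET_DB = {
                          "app.js": "/static/app-3f9a2c.js",   # cache-busted URLs
                          "app.css": "/static/app-77b01d.css",
                      }

                      def assets_to_push(request_cookies: dict) -> list:
                          """Push versioned assets only on a likely first visit.

                          No session cookie -> probably a cold cache, push everything.
                          Otherwise push nothing: re-pushing bytes the client already
                          has cached is a net loss.
                          """
                          if "session" in request_cookies:
                              return []
                          return sorted(ASSET_DB.values())
                      ```

                      Even this toy version shows the trouble: the guess is only as good as the cookie signal, and a wrong guess means pushing assets the browser already has.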

                      1. 3

                        Indeed. There has been a proposal for a “cache digest” that the browser would send to signal what it has in its cache: https://calendar.perfplanet.com/2016/cache-digests-http2-server-push/

                        but it doesn’t seem to be going anywhere. It’s more complexity, potentially yet another tracking vector, and it’s still only a 1-RTT win in the best case.

                      2. 2

                        Interesting to hear push is dead. Kinda like that tbh.

                      3. 4

                        HTTP/3 is really catered towards client-facing edge servers, especially ones talking to mobile clients. There might be a future where it makes sense for traffic within a datacenter or between services, but I’m skeptical. In any case, that will probably be a while, because more work needs to be done to bring QUIC’s server-side CPU footprint down before you’d want to try shoving it everywhere.

                        Generally I think HTTP/2 is the right choice for that sort of revproxy to server communication. You can use it without multiplexing and basically get HTTP1 with compressed headers.

                        1. 2

                          Not familiar with Nextcloud - would nginx connect to it over the public internet? Because on reliable internal networks HTTP/2 and 3 give diminishing returns (packet loss is much less of an issue).

                        2. 2

                          Nice review. One thing not covered is the whole ‘how to detect QUIC compatibility’ on initial connection. For example there is talk of using DNS for this.

                            1. 0

                              Last time I checked, the server had to send an alt-svc header via HTTP/2 or HTTP/1. For reference, the HTTP/2 upgrade happens via TLS NPN or ALPN.
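                              Concretely, the advertisement looks something like this (values illustrative):

                              ```
                              # Response header telling the client HTTP/3 is available
                              # on UDP port 443 for the next 24 hours (RFC 7838):
                              Alt-Svc: h3=":443"; ma=86400

                              # The DNS route instead uses an HTTPS resource record:
                              example.com.  300  IN  HTTPS  1 . alpn="h3,h2"
                              ```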

                              1. 1

                                That’s how it works, but DNS is also an option.

                            2. 1

                              Something I wonder about QUIC is whether it could have other interesting applications than serving HTTP. It seems to be optimized for things like stateless requests (or “streams”). There are quite a few protocols and libraries out there utilizing UDP, but also “re-inventing” (or picking) some features and guarantees from TCP. However, these often exist fairly isolated, sometimes in single language communities, frameworks or tools.

                              If QUIC gets widely implemented (and there are already quite a few implementations, though currently they’re often tied to HTTP/3 implementations), it might be interesting in other use cases as well. Maybe it’s a bit too early to ask, but has anyone played with that? I wonder whether there are situations where this makes sense, or whether in such situations one would always go for HTTP/3 on top of QUIC.

                              1. 1

                                This is a good question! Most of the focus has been on getting HTTP/3 out the door, but there is talk of “what application protocol is next for QUIC” - it is expected that QUIC will support multiple protocols on top of it. DNS-over-QUIC (which is different from DNS over HTTP/3) is the one I’m aware of that’s furthest along.

                                or whether one would in such situations then always go for HTTP/3 on top of QUIC

                                In some ways I think HTTP/3 is “the minimal sane application transport you’d build on top of QUIC anyways”, so while you can build new protocols on QUIC directly, I think a lot of currently separate application protocols should instead interface with HTTP/3 with small modifications. Two examples: for a bidirectional streaming or pubsub use case like WebSockets or MQTT, HTTP/2 and 3 can support it with very minimal changes (https://tools.ietf.org/id/draft-xie-bidirectional-messaging-01.html). For video streaming protocols there’s an interesting opportunity to take advantage of selective reliability - e.g. making keyframes/I-frames lossless and P/B-frames lossy. There’s no real reason for that to happen at the QUIC layer versus the HTTP/3 layer once APIs exposing delivery reliability are created.