1. 29
  1. 6

    Related article that explains some of the concepts, etc in HTTP/3 - https://www.smashingmagazine.com/2021/08/http3-core-concepts-part1/

    1. 1

      My key takeaway from this is that TCP+TLS is still faster for high throughput (without flaky connections), and HTTP/3’s optimizations are only relevant if you need to help people with very unstable connections; even then the actual results can vary a lot. For my basic nginx reverse proxy setup it’s kinda irrelevant, and I’m hesitant to open UDP ports for it. If Debian ships nginx with http/3 I’ll probably enable it; until then it doesn’t seem to perform that well in nginx or apache.
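      For reference, once an nginx build with HTTP/3 lands (nginx 1.25+ built against a QUIC-capable TLS library), enabling it looks roughly like the sketch below. The certificate paths are placeholders, and the directives are worth double-checking against the current nginx docs:

      ```nginx
      server {
          # Keep the TCP listener for HTTP/1.1 and HTTP/2 clients.
          listen 443 ssl;
          http2 on;

          # HTTP/3 means opening UDP 443 as well.
          listen 443 quic reuseport;
          http3 on;

          ssl_certificate     /etc/ssl/example.pem;   # hypothetical paths
          ssl_certificate_key /etc/ssl/example.key;

          # Advertise HTTP/3 so browsers know to try QUIC on later requests.
          add_header Alt-Svc 'h3=":443"; ma=86400';
      }
      ```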

      1. 2

        It’s a bit more subtle than that, though regardless, if you’re not interested in being on the bleeding edge of the space, I too would wait until nginx or apache ship http/3.

        A couple points, in no particular order:

        • A lot of the overhead for small web pages or AJAX requests, especially over TCP+TLS, is the ~3 round trips needed to establish the TLS stream. Assume a conservative TCP packet size of ~1280 bytes (a conservative MTU of 1320 bytes yields a TCP MSS of 1280 bytes). An HTTP request and response pair for a small blog post then easily fits in 2-3 packets (1 packet for the request and 1-2 for the response), and an AJAX request/response is usually 2 packets. The entire HTTP interaction over plain TCP for the AJAX request is then 1.5 RTT (TCP establishment) + 2 RTT = 3.5 RTT, while TCP+TLS takes 3 RTT (TCP+TLS establishment) + 2 RTT = 5 RTT, i.e. ~43% overhead just for TLS establishment. If page weight is high (or requests are being pipelined), connection establishment matters proportionally less. TCP Fast Open and TLS False Start can get establishment down to 1 RTT, and TLS 1.3 supports 0-RTT establishment, though that mode is tricky (replay concerns). A default QUIC handshake establishes the transport and TLS together in 1 RTT, and 0-RTT modes are available for QUIC too.

        • “Flaky” connections are more common than you might think. The internet is mostly designed around maximizing throughput, and around after-work or after-school hours you’ll see congestion on lots of routers as everyone starts using bandwidth-intensive multimedia services. And if you’re ever on cafe/airport wifi, a building’s free wifi, or just far from an AP, you’ll be hit with flakiness and dropped packets. QUIC can improve “reliability” dramatically in these situations.

        • Multimedia is especially impacted by head-of-line (HoL) blocking. Dropping a packet or two while streaming video is fine for stream quality, but over TCP it can make the stream stutter and stall while your connection waits for the lost packet to be retransmitted and ACKed. Worse, when an ACK isn’t received, packets get resent, adding delay and congesting the network further in a negative spiral. This is one common answer to “Why is Netflix slow after work?”, and fixing it can improve experiences broadly.

        • QUIC supports using a connection ID to maintain a persistent connection even when IP endpoints change. If you walk from one part of a building to another with a different WiFi SSID, come back and plug into your desk’s Ethernet, or a NAT mapping silently changes under you, your existing connections stay established instead of dropping and having to reconnect.
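        The round-trip arithmetic in the first bullet can be sketched in a few lines. This is a toy model, not a measurement; the 2-RTT figure for the request itself is taken straight from the bullet above:

        ```python
        # Back-of-the-envelope RTT math (illustrative only; real handshakes
        # vary with TCP Fast Open, TLS False Start, TLS 1.3 0-RTT, etc.).
        def total_rtt(handshake_rtt: float, data_rtt: float = 2.0) -> float:
            """Total round trips: connection establishment plus the request itself."""
            return handshake_rtt + data_rtt

        plain_tcp = total_rtt(1.5)  # 1.5 + 2 = 3.5 RTT
        tcp_tls = total_rtt(3.0)    # 3.0 + 2 = 5.0 RTT
        overhead = (tcp_tls - plain_tcp) / plain_tcp  # ~0.43, i.e. ~43% slower
        ```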
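        The connection-ID point can be illustrated with a toy demultiplexer (not a real QUIC stack; the names and state layout here are made up): a server that keys connections on an ID survives the client’s address changing, where a TCP-style 4-tuple lookup would treat it as a brand-new connection.

        ```python
        # Toy sketch: demultiplexing by connection ID instead of (ip, port).
        class QuicLikeServer:
            def __init__(self):
                self.connections = {}  # connection ID -> session state

            def accept(self, conn_id: bytes, addr: tuple) -> None:
                self.connections[conn_id] = {"addr": addr, "streams": {}}

            def receive(self, conn_id: bytes, addr: tuple, payload: bytes) -> dict:
                conn = self.connections[conn_id]  # keyed on ID, not on address
                conn["addr"] = addr               # path migration: just update
                return conn

        server = QuicLikeServer()
        server.accept(conn_id=b"\x01\x02", addr=("192.0.2.10", 50000))
        # Client roams from WiFi to Ethernet; same connection ID, new address:
        conn = server.receive(b"\x01\x02", ("198.51.100.7", 61000), b"ping")
        ```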

        There’s other stuff too, but the above points are some examples of the fat that can be trimmed on the net by moving to HTTP/3. Though personally I’m more excited by being able to use QUIC for non-HTTP traffic, and even using QUIC through p2p-webtransport so we can send/receive non-HTTP traffic directly from the browser. Happy to talk more about this stuff as I’m super excited for QUIC.

        1. 1

          I’ve actually read all 3 articles. Still it seems like a lot of overhead for diminishing returns for now. I think the biggest change is that we can replace parts and iterate on the protocol much faster now. (By choosing the only other possibly non-blocked protocol, UDP.) I fear for the DDoS resistance when looking at some of the overhead all the new compression, first-packet optimization and ID re-use adds on top (while actually storing multiple IDs for changing them on interface / ISP change, so more stuff to store in memory).

          1. 1

            I think the biggest change is that we can replace parts and iterate on the protocol much faster now.

            By having HTTP go over QUIC, QUIC gets to essentially play chicken with ossified middleboxes: “Support this or web traffic won’t work.” But because QUIC is so general-purpose, we can also push other traffic over it. It’s exciting to think that we can send arbitrary traffic over what looks like regular web traffic (though folks do that today over TLS sockets on port 443).

            I fear for the DDoS resistance when looking at some of the overhead all the new compression, first-packet optimization and ID re-use adds on top (while actually storing multiple IDs for changing them on interface / ISP change, so more stuff to store in memory)

            I’m hopeful that connection IDs offer a new way to throttle/block for DDoS too, but yeah, it’s something to keep in mind as HTTP/3 rolls out.

    2. 2

      QPACK uses separate unidirectional streams to modify and track field table state, while encoded field sections refer to the state of the table without modifying it.

      I’m gonna need to see this before I fully understand it.

      1. 2

        QPACK is defined in RFC 9204. It uses two unidirectional QUIC streams, an encoder->decoder stream and a decoder->encoder stream. The gory details are in the RFC, and it seemed relatively straightforward to me. This page has a bunch of QUIC and HTTP/3 implementations along with some pure QPACK implementations if you’re curious.
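        A toy model of that split may help (purely illustrative, not RFC 9204’s actual wire encoding): instructions carried on the encoder stream mutate the shared dynamic table, while an encoded field section on a request stream only references entries by index and never modifies them.

        ```python
        # Toy model of QPACK's encoder-stream / field-section split.
        class QpackDynamicTable:
            def __init__(self):
                self.entries = []  # index -> (name, value)

            def encoder_stream_insert(self, name: str, value: str) -> int:
                # Carried on the unidirectional encoder stream: mutates table state.
                self.entries.append((name, value))
                return len(self.entries) - 1

            def decode_field_section(self, indices: list) -> list:
                # Carried on a request stream: reads table state, never mutates it.
                return [self.entries[i] for i in indices]

        table = QpackDynamicTable()
        i = table.encoder_stream_insert(":authority", "example.com")
        j = table.encoder_stream_insert("user-agent", "demo/1.0")
        headers = table.decode_field_section([i, j])
        ```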

        1. 2

          Wow this is so cool, I really appreciate you giving me this level of information. You’re very kind to do so!

      2. 1

        This document describes a mapping of HTTP semantics over QUIC.

        I’ve been looking forward to this for a while now.