1. 30
  1.  

  2. 18

    I don’t think we should change the protocols and force every library in every language on every platform to update mountains of code to support a new protocol just so my browser can download Javascript trackers and crappy Javascript frameworks faster.

    1. 17

      I’m excited for HTTP/3 because it will allow me to get lower-latency video streaming support for my private stream server.

      1. 15

        Well, just like with HTTP/1 and /2, the old protocols are very likely to be supported for a very long while. So you’re not forced to update.

        1. 12

          It’s still change just for the sake of allowing people to build even more bloated websites.

          Making HTTP more efficient isn’t going to mean websites load faster, it means people are going to stuff even more tracking and malware and bloat into the same space. It’s very, very much like building bigger wider roads with more lanes: it doesn’t alleviate congestion, it just encourages more traffic.

          1. 27

            I don’t think that’s entirely true: HTTP/3 does address some problems that we have with TCP and HTTP on modern network connections. I encounter those problems every day at work; it’s just background noise, but it annoys users and sysadmins.

            1. 14

              As I understand that video, HTTP/3 is not a new protocol, but rather “HTTP/2 over QUIC”, where QUIC is a replacement for TCP. QUIC can be useful for a lot of other applications, too.

              People do a lot of stuff to work around limitations, like “bundling” files, image sprites, serving assets from different domains, etc., and browsers work around them with parallel requests. So it saves work, too.

              Whether you like it or not, there are many WebApps like Slack, GitHub, Email clients, etc. etc. that will benefit from this. Chucking all of that in the “tracking and malware”-bin is horribly simplistic at best.

              Even a simple site like Lobsters or a basic news site will benefit; most websites contain at least a few resources (CSS, some JS, maybe some images) and just setting up one connection instead of a whole bunch seems like a better solution.
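              As a rough sketch of the “one connection instead of a whole bunch” point, here’s a toy round-trip count for fetching a page’s resources. The handshake costs (three RTTs for TCP plus TLS 1.2, one for QUIC) and the one-RTT-per-request model are simplifying assumptions for illustration, not measurements.

```python
# Toy model: per-connection setup cost plus one round trip per
# request. Real stacks pipeline and multiplex requests, so treat
# this as an illustration of setup overhead, not a benchmark.
def total_rtts(resources, connections, handshake_rtts):
    """Round trips to set up `connections` and fetch `resources`."""
    return connections * handshake_rtts + resources

# 30 resources over six HTTP/1.1+TLS connections vs. one QUIC connection
http1 = total_rtts(30, connections=6, handshake_rtts=3)  # 18 + 30 = 48
quic = total_rtts(30, connections=1, handshake_rtts=1)   #  1 + 30 = 31
```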

              1. 8

                Don’t you think that people are going to stuff even more bloat anyway, even if everybody downgrades to HTTP/1?

                1. 6

                  I don’t know that people will drive less if you make the roads smaller. But they won’t drive as much if you don’t make the roads bigger in the first place. They’ll drive less if you provide bike lanes, though.

                  In an ideal world AMP would be like bike lanes: special lanes for very efficient websites that don’t drag a whole lot of harmful crap around with them. Instead they’re more like special proprietary lanes on special proprietary roads for special proprietary electric scooters all vertically integrated by one company.

            2. 9

              The old protocols over TCP provide terrible experiences on poor networks. Almost unusable for anything dynamic/interactive.

              1. 1

                TCP is specifically designed and optimised for poor networks. The worst networks today are orders of magnitude better than the networks that were around when TCP was designed.

                1. 13

                  There are certainly types of poor networks that are ubiquitous today that TCP was not designed for.

                  For instance, Wifi networks drop packets due to environmental factors not linked to congestion. TCP data rate control is built on the assumption that packets are dropped when the network is congested. As a result, considerable available bandwidth goes unused. This can qualify as a terrible experience, especially from a latency point of view.

                  If your IP address changes often, say in a mobile network, you lose your connection all the time. Seeing that connection == session for many applications, this is terrible.

                  Also, many applications build their own multiplexing on top of TCP, which, constrained by head-of-line blocking, leads to buffer bloat and a slow, terrible experience.
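                  The congestion-misreading point can be sketched with a toy AIMD loop (the numbers and the model are made up for illustration; real TCP congestion control is far more sophisticated): the sender can’t tell radio loss from congestion loss, so random WiFi drops keep halving its window.

```python
import random

def aimd_avg_window(loss_rate, rounds=10000, seed=1):
    """Toy AIMD: grow the window by 1 each round, halve it on any
    loss. Loss-based control treats every drop as congestion, so
    random (non-congestion) WiFi loss still shrinks the window."""
    rng = random.Random(seed)
    cwnd, total = 1.0, 0.0
    for _ in range(rounds):
        if rng.random() < loss_rate:
            cwnd = max(1.0, cwnd / 2)
        else:
            cwnd += 1.0
        total += cwnd
    return total / rounds

wired = aimd_avg_window(0.0001)  # nearly loss-free link
wifi = aimd_avg_window(0.01)     # 1% loss from interference
```

Even though neither loss rate reflects actual congestion, the 1% link ends up with a far smaller average window, i.e. unused bandwidth.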

                  1. 5

                    Related to this:

                    https://eng.uber.com/employing-quic-protocol/

                    Mobile networks are a prime target for optimizing latency and minimizing round trips.

                  2. 1

                    It was designed when latency didn’t matter. Now it does matter. Three-way handshakes and ACKs are killing us.
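                    As a back-of-the-envelope example (assuming a 100 ms round trip and the textbook handshake counts), the setup cost before the first response byte looks roughly like this:

```python
RTT_MS = 100  # assumed round-trip time on a high-latency link

# Milliseconds of setup + request before the first response byte:
tcp_tls12 = (1 + 2 + 1) * RTT_MS  # TCP handshake + TLS 1.2 + request
tcp_tls13 = (1 + 1 + 1) * RTT_MS  # TLS 1.3 cuts the TLS part to one RTT
quic_1rtt = (1 + 1) * RTT_MS      # QUIC combines transport + TLS setup
quic_0rtt = 1 * RTT_MS            # 0-RTT resumption: request in first flight
```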

                    1. 1

                      It seems to me that every reasonable website I use is fine with those tiny inefficiencies because they’re generally efficient anyway, while bloated malware-filled tracking javascript-bloated nightmare websites are going to be bad either way.

                      Who is this actually helping?

                      1. [Comment removed by author]

                        1. 0

                          Leave the moderation to the moderators. My opinions are pretty widely held and agreed with on this issue. Degrading them as ‘hot takes’ is unkind low effort trolling.

                          If you have a genuinely constructive comment to make I suggest you make it. If you don’t I suggest you stay quiet.

                          1. 1

                            I do not deny that there are issues with tracking and malware, but if you think we are going to regress to an era without a rich web, you are out of your gourd. There is no future where the web uses fewer requests. The number of images and supporting files like JavaScript will only increase. JavaScript may be replaced in the future with something equally capable, but that still will not change the outcome in any appreciable way.

                2. 6

                  Without even talking about HTTP/3, it seems that any application that uses a TCP or UDP connection could benefit from using QUIC: web applications, yes, but also video games, streaming, P2P, etc.

                  Daniel Stenberg also mentioned that QUIC would improve things for clients with a bad internet connection, because a packet loss on one stream does not affect the others, making the overall connection more resilient.

                  I do agree it could and will be used to serve even more bloated websites, but that is not the only purpose of these RFCs.

                3. 4

                  I’d love to see some benchmarks comparing HTTP/1, HTTP/2 and HTTP/3. Does anyone know where I can find them?

                  1. 11

                    A visual comparison between HTTP/1 and HTTP/2 can be found here :)

                    1. 3

                      It’s only one use case though, the very one that HTTP/2 tries to solve. Basically you are “benchmarking” the fact that browsers limit the number of parallel TCP/HTTP connections. Now, instead of creating many TCP connections, you multiplex streams over a single connection.

                      One might ask how that would compare for individual requests, for serial requests using HTTP/1.1 keep-alive, or with HTTP/1.1 pipelining.
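                      For a rough intuition of how those cases differ, here’s a toy round-trip model (made-up numbers, ignoring bandwidth, pipelining quirks, and slow start):

```python
import math

def fetch_rtts(n, parallel):
    """Toy: one RTT of connection setup (done concurrently), then
    each request costs one RTT, spread over `parallel` lanes."""
    return 1 + math.ceil(n / parallel)

serial_keepalive = fetch_rtts(24, parallel=1)   # one reused connection
six_connections = fetch_rtts(24, parallel=6)    # typical browser limit
multiplexed = fetch_rtts(24, parallel=24)       # HTTP/2-style streams
```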

                      1. 1

                        True, if you find one I am very interested!

                  2. 2

                    Are there any benchmarks that actually compare between HTTP/1, HTTP/2 and HTTP/3?

                    1. 3

                      The talk says there are no reliable numbers yet.

                      The number of round trips can be way down, which helps a lot with latency.

                      CPU use is currently higher because of unoptimised UDP stacks and other issues. CPU use throughout the internet will be higher too, because there isn’t dedicated hardware for QUIC routing or QUIC TLS yet.

                    2. 1

                      I guess this finally proves TCP is too bloated (or, to put it differently, the price we have to pay for correctness and reliable delivery at the protocol level is too high) and UDP-like protocols are best suited for communicating over unreliable networks.

                      1. 14

                        Not really, more that TCP enforces a level of correctness that many applications don’t need. If you’re using telnet or SSH, you probably want strict in-order delivery of everything, and that’s what TCP gets you. With HTTP though, you generally say “I want to get these 10 things from point A to point B, but as long as they all get there correctly in the end I don’t really care what order they’re in”, which gives you much more wiggle room for reordering and resending lost pieces. QUIC is able to take advantage of that wiggle room.
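                        That wiggle room can be sketched like this (a toy model, not QUIC’s actual loss recovery): one lost packet stalls everything on a single strictly ordered stream, but only one stream when ordering is per stream.

```python
# Ten packets alternating between streams A and B; packet 3 is lost.
packets = [(seq, "AB"[seq % 2]) for seq in range(10)]
lost = {3}

# TCP-style: one global byte order, so nothing past the first hole
# can be delivered until the retransmission arrives.
tcp_delivered = [p for p in packets if p[0] < min(lost)]

# QUIC-style: ordering is per stream, so only the stream that lost
# a packet stalls while the other keeps delivering.
def per_stream_delivered(packets, lost):
    out, stalled = [], set()
    for seq, stream in packets:
        if stream in stalled:
            continue
        if seq in lost:
            stalled.add(stream)
            continue
        out.append((seq, stream))
    return out

quic_like = per_stream_delivered(packets, lost)
```

With one loss, the single ordered stream delivers only the three packets before the hole, while per-stream ordering still delivers everything on the unaffected stream.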

                        1. 4

                          If you’re using telnet or SSH, you probably want strict in-order delivery of everything

                          Tell the mosh people about that.

                          1. 2

                            Hence the “probably”. ;-) Thanks for the Cool New Thing To Investigate!

                        2. 8

                          QUIC is a TCP-like protocol that uses UDP instead of raw IP, because routers don’t understand anything else. For QUIC, UDP is simply overhead for legacy interoperability.

                          1. 3

                            Both TCP and UDP have design mistakes: they assume IP addresses identify clients, TCP adds handshake latency, and neither has built-in encryption. We can’t fix those anytime soon because some networking hardware is incompatible.

                            New protocols (mosh, wireguard, QUIC) use UDP datagrams mostly just as proxies for IP frames.

                            1. 2

                              That’s not a mistake in UDP. Those solutions don’t belong at that layer, otherwise you would have to replace your networking hardware every few years.

                              1. 1

                                Which of the two claims do you think are not mistakes in UDP? I can see an argument for encryption, but I’m fairly sure that using a connection id would have been a good idea.

                                1. 1

                                  Both.

                                  Connection IDs: a connection ID requires you to pre-establish routing (meaning an extra RTT before the first UDP packet arrives, and now there are two code paths instead of one), and requires all intermediate boxes to remember routes for all active connections (drastically increasing RAM costs; run out of RAM and you need to re-establish routing).

                                  Encryption doesn’t belong in UDP either, in particular because encryption schemes need to be upgraded on a different schedule than switches (I have an 11-year-old gigabit switch under my desk).

                                  1. 1

                                    I don’t agree with either of those premises.

                                    The switches don’t need to understand the crypto (though they could maybe understand the MAC, to drop bad traffic early), and for a connection ID you don’t need an extra round trip or intermediate boxes remembering routes: you just send one as part of the protocol.

                                    If the server receives an authenticated packet from a different IP with the same connection id, then it just sends to that address in the future instead.
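                                    That scheme can be sketched in a few lines (the names and structure here are hypothetical; real QUIC also validates the new path before migrating):

```python
class Server:
    """Toy connection-ID routing: replies follow whatever address
    last sent an authenticated packet for that connection ID."""

    def __init__(self):
        self.peers = {}  # connection id -> last known (ip, port)

    def on_packet(self, conn_id, src_addr, authenticated):
        if authenticated:
            self.peers[conn_id] = src_addr  # migrate if it changed
        return self.peers.get(conn_id)

srv = Server()
srv.on_packet("c1", ("203.0.113.5", 4433), authenticated=True)
# Client's IP changes (e.g. WiFi to mobile); same connection ID:
addr = srv.on_packet("c1", ("198.51.100.9", 6000), authenticated=True)
```

No intermediate box needs to remember anything; only the server updates its own mapping.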

                                    1. 1

                                      If the server receives an authenticated packet from a different IP with the same connection id, then it just sends to that address in the future instead.

                                      UDP doesn’t have connections, so I’m unclear on how this is better, and it adds an extra header to every packet.

                                      I could understand the claim “There should be a standard layer between UDP and TCP adding support for crypto”, and/or “TCP should support session continuation across client IP changes”.

                                      Are those close to what you’re arguing for? Otherwise I don’t think I have understood you very well, sorry.