Wasn’t Server Push added to the spec by Google? It’s one of those micro-optimizations that only make sense when you have Google-level traffic.
I think that describes all of HTTP/2.
Pretty much. The HTTP specs have been more or less taken over by Google, which is adding features/functionality according to what it wants/needs. Which is sad to me because the wire protocol has become far less debuggable and explorable than it used to be – I remember the days of doing telnet <host> 80 and typing in a raw HTTP request to learn how it worked (and doing the same to learn how email worked by putting together the HELO, etc.). With later HTTP versions you need tooling to generate even basic requests for you, since it’s no longer a plain-text protocol.
This isn’t true, at least for HTTP/3 and QUIC, both of which have been worked on by far more than just Google. (QUIC has actually morphed significantly from the original Google version.)
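For anyone who never did the telnet exercise described above, here is roughly what it amounts to, sketched in Python with example.com as a stand-in host; the whole exchange is readable text you could just as easily have typed by hand:

    # Send a hand-written HTTP/1.1 request over a plain TCP socket and read the reply.
    import socket

    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # Status line, headers, and body all come back as readable text.
    print(response.decode("iso-8859-1"))

With HTTP/2 and HTTP/3 the same exchange uses binary framing (and, for HTTP/3, QUIC over UDP), which is why tooling is needed just to form a basic request.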
Hey I used a telnet mail 143 earlier this week!
I had the same reservations with HTTP/2. Implemented it anyway because Google said it was good for speed and SEO, discovered the speed gains were dubious, pre-load never actually helped, and it didn’t seem to improve SEO.
Ask me about AMP.
AMP and Dart are proof that things don’t just succeed because Google pushes them. They also need to clear some minimum bar of quality, or else they’ll be rejected by the internet no matter how much Google pushes them.
The nice thing about AMP is that I can just ignore it. However, my browser will be lugging around a useless layer of HTTP/2 support for a couple decades because it was a “standard”.
HTTP/2 will likely remain in use for decades.
HTTP/3 is great but won’t replace HTTP/2, simply because it’s not always feasible to use anything non-TCP. Some network admins block UDP, there will always be hosting environments that can’t do anything other than TCP for reasons, and so on…
For https: over HTTP/1.0 or HTTP/1.1, you can always do openssl s_client -connect www.google.com:443 and it works just like telnet did for port 80. For HTTP/2+, yes, you need specialized clients.
Edit: for clarity
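Same idea as the plain-socket sketch above, just wrapped in TLS; this is roughly what openssl s_client is doing for you before you start typing the request:

    # Hand-written HTTP/1.1 over TLS, the programmatic cousin of openssl s_client.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("www.google.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="www.google.com") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n")
            response = b""
            while chunk := tls.recv(4096):
                response += chunk

    print(response.split(b"\r\n")[0])  # the status line, e.g. b'HTTP/1.1 200 OK'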
HTTP/2 has at least one very useful consequence for everybody: we don’t have to optimize for the number of simultaneous connections anymore. Removing code that was trying to cleverly bundle extra data onto unrelated requests has made a huge impact on the maintainability of our code at $WORK.
HTTP/3, though, is exactly what you say. Google making everyone’s life more complicated because they settled on a stupid business model, and drowned clients in ad code.
HTTP/3 is great for video delivery performance.
Actually it improves load times for just about everything, but losing TCP’s head-of-line blocking and replacing TCP’s slow-start and loss-recovery mechanisms with more suitable ones makes a real difference to the quality/latency/buffering-probability tradeoff for DASH and HLS, especially on mobile or otherwise unreliable connections.
I had a lot of people attacking my web library for being a dinosaur because I didn’t implement this… and after looking at it, it struck me as completely useless and not worth spending time on, so I just kept refusing and/or procrastinating on any implementation at all.
…I’m feeling pretty vindicated now.
Interesting, so we’re back to either polling or websockets for realtime data on websites.
That’s SSE; HTTP/2 push was primarily for pre-emptively sending assets to the client before they were requested.
Ahh, looks like I mixed the two up!
Don’t forget SSE, and whatever they’ll add in the next Chromium sprint…
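For anyone mixing the two up like above: SSE is just a long-lived HTTP response with Content-Type text/event-stream that the server keeps appending "data:" lines to, which the browser exposes through EventSource. A minimal sketch, with made-up tick messages:

    # Minimal Server-Sent Events endpoint using only the standard library.
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SSEHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/event-stream")
            self.send_header("Cache-Control", "no-cache")
            self.end_headers()
            for i in range(5):  # stream a few events, then let the connection close
                self.wfile.write(f"data: tick {i}\n\n".encode())
                self.wfile.flush()
                time.sleep(1)

    HTTPServer(("127.0.0.1", 8000), SSEHandler).serve_forever()

On the browser side, new EventSource("/") pointed at this handler would fire one message event per data: line.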
I’m sad this didn’t work out. I spent quite a lot of time recently looking at the browser waterfalls and optimising a known service. It turns out that the loading time is pretty evenly split into 3 parts: waiting for the page itself, waiting for the content needed for DOMContentLoaded, and waiting for the rest. But I know immediately what to send for the second part – there’s no real reason for the browser to wait for the HTML before downloading the ~10 resources that are definitely going to be needed.
I almost miss the plain PHP pages where you could flush the <head> with preloads included before you render the rest of the content. I hope we get some mechanism to do an equivalent in the future.
…why do you not have that now? HTTP/1.1 with standard HTML fully supports that; PHP did nothing special. There’s even a new HTTP header you can send if your HTML can’t do the job.
That’s not how almost any Web framework in the wild works. The standard these days is to generate the data, then hand it over to the template engine. There are escape hatches for streaming, of course, but if you want to stay integrated with the framework and the templating, I have no idea how you’d achieve that in Rails, for example.
You still can flush the headers early. This has never changed, and never needed the PUSH feature.
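A bare-bones sketch of what that looks like outside any framework, with the slow page-generation work simulated by a sleep; the browser can start fetching the preloaded assets while the server is still producing the body. The asset names are made up for illustration:

    # Flush the response headers and the <head> (with preload links) before the slow work.
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    HEAD = (
        b"<!doctype html><html><head>"
        b'<link rel="preload" href="/app.css" as="style">'
        b'<link rel="preload" href="/app.js" as="script">'
        b"</head><body>"
    )

    class EarlyFlushHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(HEAD)   # the browser can start fetching /app.css and /app.js now
            self.wfile.flush()
            time.sleep(2)            # stand-in for the slow work of building the page
            self.wfile.write(b"<h1>Hello</h1></body></html>")

    HTTPServer(("127.0.0.1", 8000), EarlyFlushHandler).serve_forever()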
There’s also a new 103 Early Hints spec that allows sending headers to the browser early without having to commit to an HTTP status for the final resource.
The most interesting implementation of this I recall is vulcain, although I never had the opportunity to use it.
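To make the 103 Early Hints idea mentioned above concrete, here is a rough sketch of the exchange: an informational 103 response carrying a Link preload header, followed later by the real response. It writes the 103 to the socket by hand, since the stdlib server has no built-in notion of informational responses, and the Link target is made up:

    # Send a 103 Early Hints informational response before the final 200.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EarlyHintsHandler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"

        def do_GET(self):
            # Informational response: the browser can start preloading immediately,
            # before the server has committed to a final status for the page.
            self.wfile.write(
                b"HTTP/1.1 103 Early Hints\r\n"
                b"Link: </app.css>; rel=preload; as=style\r\n"
                b"\r\n"
            )
            self.wfile.flush()

            body = b"<!doctype html><html>...</html>"
            self.send_response(200)  # the real status, decided later
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 8000), EarlyHintsHandler).serve_forever()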