1. 58

  2. 29

    Such a long article when the main reason the web is slow is right in the domain name. /s

    1. 6

      The part about HTTP protocols seems interesting enough, but everything after that is basically “yes we know our data and what we extracted from it is deeply flawed but here it is anyway”.

      The detection of used libraries: You might think it will undercount consistently (in a way that does not introduce too much skew), but I’d expect jQuery to be much more likely to show up in globals than other libraries, particularly React, which, judging from gut feeling, is more likely to be found on a site that uses webpack. The fact that none of the popular component frameworks are in the list suggests this mostly crawled news sites and stuff like that.
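
      To illustrate (this is a guess at how such detection might work, not the article’s actual code): probing well-known globals finds a classic script-tag jQuery include easily, but a React app bundled with webpack typically exposes nothing on window, so it slips through.

      ```typescript
      // Guess at a global-probing detector (not the article's actual code).
      // Libraries pulled in through a bundler usually never touch `window`,
      // so they go uncounted.
      const GLOBAL_PROBES: Record<string, string> = {
        jquery: "jQuery", // a classic <script> include exposes window.jQuery
        react: "React",   // only present for UMD builds, not webpack bundles
        vue: "Vue",
        angular: "angular",
      };

      function detectLibraries(win: Window): string[] {
        return Object.entries(GLOBAL_PROBES)
          .filter(([, globalName]) => (win as any)[globalName] !== undefined)
          .map(([lib]) => lib);
      }
      ```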

      This linear regression thing: What does a negative regression coefficient mean then? That my site becomes faster when I add Zendesk? They put up a disclaimer saying “correlation does not equal causation”, then go on to suggest causation anyway by saying jQuery makes everything slower.

      I commend the effort but I don’t think the results here tell me anything except “linear regression can be used to tie two random numbers together to make a graph that looks like it says something”.

      1. 4

        This linear regression thing: What does a negative regression coefficient mean then? That my site becomes faster when I add Zendesk?

        What that’s saying is that pages with Zendesk JS are also likely to be faster than average, which could very well be the case, since a “support” page is fairly lightweight and the Zendesk JS is smart and asynchronous (I’m not sure whether that’s true or not).

        Likewise re: jQuery, you could probably say that folks who care about render times are also likely to not use jQuery. Not that jQuery itself is necessarily bad – though you can certainly build some Lovecraftian horrors with it.
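
        As a toy illustration (made-up numbers, not the article’s data), a one-variable least-squares fit will happily assign a negative coefficient to a script that mostly shows up on lightweight pages:

        ```typescript
        // Toy data: pages with the "support widget" script happen to be light,
        // pages without it happen to be heavy media pages.
        type Page = { hasWidget: number; loadMs: number };

        const pages: Page[] = [
          { hasWidget: 1, loadMs: 900 },
          { hasWidget: 1, loadMs: 1000 },
          { hasWidget: 0, loadMs: 2500 },
          { hasWidget: 0, loadMs: 3000 },
        ];

        // Ordinary least squares slope for loadMs ~ hasWidget.
        const meanX = pages.reduce((s, p) => s + p.hasWidget, 0) / pages.length;
        const meanY = pages.reduce((s, p) => s + p.loadMs, 0) / pages.length;
        const slope =
          pages.reduce((s, p) => s + (p.hasWidget - meanX) * (p.loadMs - meanY), 0) /
          pages.reduce((s, p) => s + (p.hasWidget - meanX) ** 2, 0);

        console.log(slope); // negative: "adding the widget makes pages faster" -- it doesn't
        ```

        The confounder (overall page weight) is doing all the work; the coefficient says nothing about what the script itself costs.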

        1. 3

          Just to be clear: I understand what the data actually says, I’m criticizing their choice to frame it as a useful guide when removing dependencies, which they do right at the end of the blogpost.

      2. 3

        Was this done with warm or cold cache? Ideally it’d be done with and without.

        1. 7

          They argue that modern browsers do cache isolation, which to me is a fair enough argument not to bother with warm caches:

          There’s a handful of scripts that are linked on a large portion of web sites. This means we can expect these resources to be in cache, right? Not any more: Since Chrome 86, resources requested from different domains will not share a cache. Firefox is planning to implement the same. Safari has been splitting its cache like this for years.
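
          In other words (an illustrative model, not a real browser API), the cache key effectively becomes (top-level site, resource URL), so the same CDN copy of jQuery is cached once per site that embeds it:

          ```typescript
          // Illustrative model of a partitioned HTTP cache (not a real browser API).
          type CacheKey = string;

          function partitionedKey(topLevelSite: string, resourceUrl: string): CacheKey {
            return `${topLevelSite}|${resourceUrl}`;
          }

          const cache = new Map<CacheKey, string>();
          const jquery = "https://cdn.example/jquery-3.6.0.min.js";

          // Cached while visiting site-a.example…
          cache.set(partitionedKey("https://site-a.example", jquery), "…bytes…");

          // …but the same URL requested from site-b.example is a miss, so it re-downloads.
          console.log(cache.has(partitionedKey("https://site-b.example", jquery))); // false
          ```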

          1. 1

            Depends on what fraction of traffic is repeat visitors. The second time you visit the site, you are not downloading jQuery again.

          2. 2

            Ooh it would be interesting to get both for each site. You could get statistics on what the loading gap is between returning & first-time users.

          3. 3

            They needed to render a million pages just to get the idea that it’s too much JavaScript?

            Instead of rendering web pages on the server side (PHP/Python/…), they are ‘rendered’ on the client using JavaScript…

            Instead of using plain CSS, the JavaScript generates that CSS on the client…

            A freshly started browser takes 500 MB of RAM, and 8 GB is currently a reasonable minimum for a computer that can browse the Internet with more than 3 tabs open.

            It’s fucking insane.

            1. 2

              Love it or hate it, the reason users like Google’s AMP is that the normal web has become slow and bulky to use.

              1. 41

                Where can we read about those users that like Google’s AMP?

                1. 14

                  AMP is a cancer on the modern web that spreads to everyone who copy-pastes AMP URLs. It’s mostly downside from both a user experience and technical perspective.

                  Downsides to normal users:

                  • URLs don’t mean anything anymore, so sharing pages with friends frequently involves copy-pasting a huge AMP URL. Google frames this as an upside, that URLs shouldn’t mean anything anymore and that you should trust Google on this. The actual user experience of sharing an AMP URL with someone who then clicks that behemoth of a URL on desktop begs to differ.
                  • More Google tracking and opportunities to serve ads
                  • More opportunities for phishing and related attacks

                  Downsides to the web:

                  • More centralization of the web in Google
                  • Fewer incentives to actually fix pageload times

                  Upsides:

                  • Faster pageloads by virtue of the request going through Google
                  • Reader mode! Wait, Firefox already does this client-side.

                  I’ve frequently thought about writing a little tool for integration into some chatbots to remove everything extraneous from pasted URLs, starting with AMP and possibly including garbage like the fbclid parameter for Facebook tracking, Google Analytics spyware query params, the new Chrome deep-linking URL fragments, imgur trying to serve you something other than an image, and so on. Unfortunately, this trend of bait-and-switch on URLs to serve more ads has become depressingly common. If anyone can point me to something that already does this, I’d love to hear about it.
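
                  For the curious, a minimal sketch of that kind of cleanup (the parameter blocklist and the AMP-unwrapping rule here are illustrative guesses; real rule sets are much larger):

                  ```typescript
                  // Minimal URL-cleaning sketch; the blocklist and AMP handling are illustrative.
                  const TRACKING_PARAMS = ["fbclid", "gclid", "utm_source", "utm_medium", "utm_campaign"];

                  function cleanUrl(raw: string): string {
                    const url = new URL(raw);

                    // Unwrap Google AMP cache URLs like
                    // https://www.google.com/amp/s/example.com/article -> https://example.com/article
                    const ampMatch = url.pathname.match(/^\/amp\/(?:s\/)?(.+)$/);
                    if (url.hostname.endsWith("google.com") && ampMatch) {
                      return cleanUrl(`https://${ampMatch[1]}`);
                    }

                    // Strip known tracking query parameters.
                    for (const param of TRACKING_PARAMS) url.searchParams.delete(param);

                    // Drop Chrome's text-fragment deep links (#:~:text=...).
                    if (url.hash.startsWith("#:~:")) url.hash = "";

                    return url.toString();
                  }

                  console.log(cleanUrl("https://www.google.com/amp/s/example.com/post?fbclid=abc123"));
                  // -> https://example.com/post
                  ```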

                  Edit: Found ClearURLs. Perfect.

                  1. 13

                    Users don’t “like” AMP. Users like fast, snappy content; it is the basest trick of marketing and developer evangelism that has conflated the two in an attempt to further fill the GOOG moat.

                  2. 1

                    There are clearly causal factors the model has no chance of discovering, and confounding variables are an issue… We need to be cautious when mapping what the model says onto conclusions about reality.

                    It might be tempting to draw conclusions about jQuery, for example, but the data supports no causal link. The analytics and ad network requests, on the other hand, tell us something we already knew: There is no such thing as a free lunch. The original sin of the web was to claim that there was.