1. 9
  1.  

  2. 2

    I don’t see real discussion of how much it matters… It should be possible to estimate how much it matters by formulating a hypothesis and then testing it.

    Here’s an example hypothesis: Excessive load time causes people to abort loading/interacting with the page.

    If this is correct and some, but only some, of the load times for that site are excessive, then two things should vary with geography:

    • the share of browsers that load images (and other supplemental page resources)
    • the share of browsers that follow links, as indicated (imperfectly) by the Referer header

    Graphing these against estimated RTT per user should give a reasonable estimate of how strongly load time affects success (assuming success means what the hypothesis implies).
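
    A minimal sketch of that kind of test, assuming a hypothetical parsed access log and a per-client RTT estimate (e.g. from a GeoIP lookup), might look like this:

    ```python
    from collections import defaultdict

    # Hypothetical inputs: requested paths per client, plus an estimated RTT
    # per client (e.g. derived from a GeoIP lookup).
    requests_by_client = {
        "203.0.113.5": ["/post.html", "/img/photo.jpg"],
        "198.51.100.7": ["/post.html"],               # no image fetch: aborted?
    }
    estimated_rtt_ms = {"203.0.113.5": 40, "198.51.100.7": 210}

    def is_image(path: str) -> bool:
        return path.endswith((".jpg", ".png", ".gif", ".webp"))

    # Bucket clients by estimated RTT and compute the share that fetched images.
    buckets = defaultdict(lambda: [0, 0])   # rtt bucket -> [fetched images, total]
    for client, paths in requests_by_client.items():
        bucket = estimated_rtt_ms[client] // 50 * 50   # 50 ms wide buckets
        buckets[bucket][1] += 1
        if any(is_image(p) for p in paths):
            buckets[bucket][0] += 1

    for bucket, (fetched, total) in sorted(buckets.items()):
        print(f"RTT {bucket}-{bucket + 49} ms: {fetched / total:.0%} of visits fetched images")
    ```

    The same bucketing could be applied to the share of visits whose Referer points at another page on the site, to approximate link-following.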

    1. 2

      I suppose I’m trying to view how much it matters as a kind of opportunity cost:

      If I moved it to somewhere in New Jersey, and spent more, users would definitely save time in aggregate: half of roundtrips would complete in 75ms rather than 105ms, a saving of roughly 30%. Over several roundtrips that would probably mount up to around a sixth of a second off the average first-time page load, which is not too bad.
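
      Spelling that out (the per-roundtrip latencies are the ones above; the roundtrip count per first-time load is the five discussed further down):

      ```python
      old_rtt_ms, new_rtt_ms = 105, 75
      saving_per_roundtrip_ms = old_rtt_ms - new_rtt_ms      # 30 ms, roughly 30% of 105 ms
      roundtrips_per_first_load = 5                          # assumed: DNS + TCP + 2x TLS + HTTP
      total_saving_s = saving_per_roundtrip_ms * roundtrips_per_first_load / 1000
      print(f"~{total_saving_s:.2f} s saved per first-time page load")   # ~0.15 s
      ```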

      Your idea is really good, but I think I would struggle to find many people who got frustrated with my site’s loading time and closed the tab. It just isn’t complicated enough! I think you’d need a site that takes 10 seconds or so to load (as Medium does for me sometimes…)

      1. 1

        I quite agree that you’d struggle to find people who are dissatisfied with a 0.1s RTT, and I’d go further and say that answers the question of how much it matters.

        1. 1

          That would be relevant if anything could be achieved in a single roundtrip. Sadly, nothing much can be:

          It’s a bit worse than just [the time taken for one roundtrip]. Depending on what a user is doing they may end up making a number of those roundtrips. To download a web page usually requires five full roundtrips: one to resolve the domain name via DNS, one to establish the TCP connection, two more to set up an encrypted session with TLS and one, finally, for the page you wanted in the first place.

          It’s hard to imagine a more basic site than mine, for which the total difference (I reckon) is about 0.2s. For other sites, with meaningful request chaining or lots of CORS preflighting to do, that value will increase. And this is on top of your request processing time, which all comes out of your notional user-experience “budget” for page load time (commonly agreed to be, what, about 1s for the user to feel that it’s instant?).
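
          As a rough sketch, those phases can be timed separately with Python’s standard library (the host is a placeholder; a real browser caches, parallelises and keeps connections alive, so this only approximates a cold first load):

          ```python
          import socket, ssl, time

          host, port = "example.com", 443   # placeholder host

          def timed(label, fn):
              t0 = time.monotonic()
              result = fn()
              print(f"{label}: {(time.monotonic() - t0) * 1000:.0f} ms")
              return result

          # DNS lookup (one roundtrip to the resolver, unless cached)
          addr = timed("DNS", lambda: socket.getaddrinfo(host, port)[0][4][0])

          # TCP handshake (one roundtrip)
          sock = timed("TCP connect", lambda: socket.create_connection((addr, port)))

          # TLS handshake (one or two roundtrips, depending on the TLS version)
          ctx = ssl.create_default_context()
          tls = timed("TLS handshake", lambda: ctx.wrap_socket(sock, server_hostname=host))

          # HTTP request/response (one more roundtrip before the first byte)
          def fetch():
              tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
              return tls.recv(1024)

          timed("HTTP first byte", fetch)
          tls.close()
          ```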

          1. 1

            I’ve heard about systems that do many requests and therefore end up waiting n×RTT. (I remember a support request long ago from someone who wrote “a tight loop around” an RPC.) But you’re the first person I’ve encountered who seems to think the problem is the RTT and ignore n.

            BTW, regarding “nothing can be done” to get below “usually five full roundtrips”: I tried just now with a very distant web site that uses TLS 1.3 and several-day DNS TTLs, and saw around two roundtrips’ worth of latency (compared with an ICMP ping).
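
            For comparison, here’s the roundtrip accounting under the two scenarios; the “warm” figures are my assumption about one way the roughly two measured roundtrips could come about (cached DNS answer, plus TLS 1.3 resumption with the request sent as 0-RTT early data):

            ```python
            # Roundtrips before the first response byte, under two scenarios.
            cold = {"DNS": 1, "TCP handshake": 1, "TLS 1.2 handshake": 2, "HTTP request": 1}
            warm = {"DNS (cached)": 0, "TCP handshake": 1, "TLS 1.3 0-RTT + HTTP request": 1}

            for name, phases in (("cold", cold), ("warm", warm)):
                print(f"{name}: {sum(phases.values())} roundtrips")   # 5 vs. 2
            ```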

            1. 1

              But you’re the first person I’ve encountered who seems to think the problem is the RTT and ignore n.

              This is not the case, and I think I’m out, because for some reason you seem to be deliberately misinterpreting my comments.

              1. 1

                Sorry about that. I did wonder (hence the several-day delay).

                FWIW I formed that impression because your posting focused entirely on the RTT and used phrasing like “nothing much can be” about the number of round trips.