1. 14
  1. 4

    On Firefox 92 on my Linux machine, getElementById gets an average of 7ms, while querySelector gets 62ms. Chromium 93 on the same machine does significantly worse, getting around 44ms and 206ms respectively.

    Holy shit, those numbers are extremely high. I understand how querySelector can be slow, but I would’ve expected getElementById to be much closer to a hash table lookup (tens to hundreds of nanoseconds maybe?)

    1. 3

      This creates 100,000 elements, and then selects them all in a loop. It does this 105 times, and ignores the first 5 results

      Divide the numbers by ~~10 million~~ 100 thousand and it seems more reasonable

      1. 1

        Why? The numbers are per function call. That means Chrome takes 44ms to do a getElementById when there are 100,000 elements, which is way longer than I expected.

        Where are you getting the 10 million from btw? I know it’s 100,000 * 100, but the post makes it clear that the numbers are the average time per call, not the sum of all the 100 calls, so I don’t understand why you would multiply by 100.

        1. 2

          You’re right, I shouldn’t multiply by 100. It doesn’t do a single getElementById, it does 100,000 getElementById calls.

          44ms / 100000 = 440 ns per element.

          2M DOM lookups per second seems pretty reasonable.
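          Spelling out that arithmetic with the thread’s numbers (just a sanity check, nothing from the benchmark source):

          ```javascript
          // Chromium's 44 ms reading covers the whole loop of 100,000
          // getElementById calls, so per-call cost = total / element count.
          const elements = 100000;
          const measuredMs = 44; // Chromium getElementById number from above

          const perCallNs = Math.round((measuredMs / elements) * 1e6);
          console.log(perCallNs + " ns per call"); // 440 ns per call

          const lookupsPerSecond = Math.round(1e9 / perCallNs);
          console.log(lookupsPerSecond); // ≈ 2.27 million lookups/second
          ```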

          1. 4

            Ok I think I know where the confusion is coming from.

            The blog post says this:

            This creates 100,000 elements, and then selects them all in a loop. It then plots a histogram of the average time per function call (in milliseconds) for each run.

            I interpreted it as plotting the average time per call to the document.getElementById and document.querySelector functions. I assume you interpreted it as plotting the average time per call to the function which selects 100,000 elements.

            I think the blog post could be clearer here, since it’s ambiguous. But looking at the source code, it seems like the histograms are actually for 100,000 getElementById/querySelector function calls, not for individual getElementById/querySelector calls. And yeah, when the numbers are divided by 100,000, they’re much more in line with what I would expect.

            1. 1

              On The Orange Site people are having the same misunderstanding, so it’s definitely too ambiguous!

    2. 1

      This is incredible. An optimization where querySelector(sel) routes to getElementById when sel[0] == “#” seems like something I would expect browsers to do …

      1. 3

        What about querySelector("#t1 .c1")? Or <div id="foo:bar">? Or <div id="foo\bar">? You have to pay the costs of parsing, no matter what.

        1. 1

          If the selector is anything but a raw id you couldn’t have used getElementById anyway, but recognizing “this is just an id” should be much cheaper than parsing full CSS expressions.

          I’m not sure what the exact point of your funky-id examples was, but both are invalid so even if a browser chooses to accept them, seems fine if you pay a performance penalty for using invalid IDs.
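          Something like this is what I mean by “recognizing this is just an id” (a hypothetical check, not taken from any actual engine, and deliberately conservative about which ids it accepts):

          ```javascript
          // Fast-path check: is this selector '#' followed by a simple ident?
          // Anything else (descendant combinators, escapes, attribute
          // selectors, ...) falls through to the full CSS parser.
          function isPlainIdSelector(sel) {
            return /^#[A-Za-z_][A-Za-z0-9_-]*$/.test(sel);
          }

          isPlainIdSelector("#t1");        // true  → could route to getElementById
          isPlainIdSelector("#t1 .c1");    // false → needs the full selector engine
          isPlainIdSelector("#foo\\:bar"); // false → escaped ids fall through too
          ```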

          1. 4

            My point is that the browser needs to tokenize and parse the entire query string before it can decide to shortcut into getElementById. The parsing is the expensive part.

            Those examples are both valid IDs, straight out of MDN. How you escape them is different between querySelector and getElementById.
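            Concretely, the same id needs different spellings in the two APIs. escapeForSelector below is a simplified stand-in for the browser’s CSS.escape, which handles more cases:

            ```javascript
            // getElementById takes the raw id; querySelector takes a CSS
            // selector, so CSS-special characters in the id must be escaped.
            function escapeForSelector(id) {
              // Backslash-escape anything outside a simple ident character set.
              return id.replace(/([^A-Za-z0-9_-])/g, "\\$1");
            }

            // getElementById("foo:bar")  — raw id, works as-is
            // querySelector("#foo:bar")  — ":bar" is read as a pseudo-class,
            //                              not as part of the id
            console.log(escapeForSelector("foo:bar")); // "foo\:bar"
            ```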