
  1. 11

    This article, in a nutshell, is the good, the bad, and the ugly of the web application platform:

    Good: Browsers can implement whatever the heck they like, with amazing, innovative results

    Bad: Other browsers don’t support these things out of the gate, so it is left as an exercise for the developer (or they just give up and support only one browser, a highly popular approach in the late 90s and early 00s)

    Ugly: Stuff just breaks all the time and there is no accountability. Standards are skirted and ignored. Committees are formed, deemed too slow or unfriendly to innovation, and rival committees spring up (W3C vs. WHATWG, etc.).

    If you’ve read this far, I hope you weren’t expecting a proposal to fix it. ;)

    1. 9

      That is why I never use $MOST_POPULAR_BROWSER_ENGINE. I hope it will help avoid monopolisation.

      1. 3

        Well, to be honest, WHATWG was the right thing to do to jolt the W3C out of its self-induced XHTML 2 coma.

        1. 2

          I was an XHTML 2 believer back when I thought the W3C was still a real thing. Now they just whitewash stuff vendors have already shipped.

          1. 11

            If you rephrase it as “they standardize proven implementations”, it becomes more palatable. And history actually shows that it’s the only way to produce useful, usable standards. You can’t invent a spherical XHTML in a vacuum and expect it to be implementable.

      2. 6

        As best I can reconstruct, the problem the Chrome team was dealing with was that Chrome had to delay starting to scroll until touch event handlers had run, because a handler could always call preventDefault() to disable scrolling, but 1) 80% of touch event handlers weren’t actually calling preventDefault(), 2) this added ~150ms latency before scroll started in a sample case, and doubled 95th-percentile latency in their field tests, and 3) even before their passive option was added, there was a CSS alternative for preventing scroll-on-touch with no JS required at all. (They also argue that browsers already have timeouts for these handlers, so they weren’t entirely reliable in the first place.) So the post author’s app probably used touchstart to prevent scrolling, and then Chrome broke that.
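
        To make that concrete, here’s a minimal sketch of the patterns involved (the helper names and the `.map-area` selector are made up for illustration):

        ```ts
        // Hypothetical helpers, invented for this sketch.
        function shouldLockScroll(e: TouchEvent): boolean {
          return (e.target as Element | null)?.closest('.map-area') != null;
        }
        function logTouchPosition(e: TouchEvent): void {
          console.log(e.touches[0].clientX, e.touches[0].clientY);
        }

        // Before the intervention: the browser can't start scrolling until this
        // handler returns, because it might call preventDefault().
        document.addEventListener('touchstart', (e) => {
          if (shouldLockScroll(e)) e.preventDefault(); // cancels the scroll
        });

        // The opt-in: a passive listener promises never to cancel, so scrolling
        // can start immediately; any preventDefault() inside it is silently ignored.
        document.addEventListener('touchmove', logTouchPosition, { passive: true });

        // The CSS alternative, no JS required:
        //   .map-area { touch-action: none; }  /* disable browser touch scrolling */
        ```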

        If the Chrome folks wanted to do this and were willing to go to a lot more effort to mitigate breakage, maybe they could have effectively tried to patch up the damage after a passive touchstart handler called preventDefault: jump back to the old scroll position, log a complaint to the console, and treat this handler (or all handlers) as active for subsequent events. It would be very complicated (both for Chrome hackers to code up and for Web devs to understand; consider that arbitrary DOM changes can happen between the scroll starting and the preventDefault!), lead to a janky scroll-and-snap-back sequence for users, and still break for more complex cases, but it would likely avoid entirely breaking some previously-usable pages.
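
        Something in the spirit of that mitigation can even be sketched in userland, which shows how hairy it gets (everything here is invented: a real passive listener can’t observe its own swallowed preventDefault(), so this handler has to signal intent through its return value instead):

        ```ts
        // Userland approximation of the "patch up the damage" idea. The handler
        // returns true to signal that it wanted to cancel scrolling.
        type IntentHandler = (e: TouchEvent) => boolean;

        function addSelfHealingTouchstart(el: HTMLElement, handler: IntentHandler): void {
          const blocking = (e: TouchEvent) => { if (handler(e)) e.preventDefault(); };
          const passive = (e: TouchEvent) => {
            if (!handler(e)) return; // no cancel intended, nothing to repair
            const { scrollLeft, scrollTop } = el;
            // Scrolling starts after we return, so snap back on the first scroll event...
            el.addEventListener('scroll', () => el.scrollTo(scrollLeft, scrollTop), { once: true });
            console.warn('passive touchstart tried to cancel; treating it as blocking from now on');
            // ...and treat the handler as blocking for subsequent events.
            el.removeEventListener('touchstart', passive);
            el.addEventListener('touchstart', blocking);
          };
          el.addEventListener('touchstart', passive, { passive: true });
        }
        ```

        Even this toy version makes the janky scroll-and-snap-back visible, and it ignores the hard part (arbitrary DOM changes between the scroll starting and the attempted preventDefault).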

        I get the value of working pretty hard to maintain backwards compatibility, and it’s possible the Chrome folks moved a little fast here. They did actually take the time to gather data (which they do act on sometimes: a GitHub comment provides some general background on when they’ve backed out interventions), but the ‘scrolling intervention’ page reports that only 80% of listeners were effectively passive, and a potential 20% breakage rate seems high.

        For what it’s worth, I’m not absolutist about targeted changes that might break some standards-compliant pages; as a dev I see compat impact from changes that are outside the scope of standards (sunsetting Flash, autofill changes, tracking protection, things popular extensions do) and from bug fixes. It seems like there’s always a cost-benefit balance, and you need to see how things are actually used out on the Web rather than reasoning from first principles about which changes are OK. And the APIs being as old and quirky as they are, there are places where you really could subtly tweak some API, break almost nobody, and improve things for almost everyone; the common category seems to be wanting to make something async, which helps nearly every page but can break some uses of it.

        It’s tricky to figure out if a change is worth it, and again, they might have gotten it wrong here, but I wouldn’t want “you can never break anything” to be the lesson from this.

        1. 4

          The current generation of standards makers and browser vendors, especially Google, does not care about forward or backward compatibility. Look at what happened to <ISINDEX> support, for example. I still can’t use my favourite Latin–English dictionary website from the 1990s, because that element has now been removed from the spec and from all browsers (on Google’s initiative).

          If you thought Google might be better than Microsoft as a >50% browser market share holder, changes like this should give you pause. If you use Chrome, consider switching to Firefox or Safari.

          1. 3

            I use Firefox on desktop and mobile, but I’ve found that it’s often necessary to switch back to Chrome for some websites, which are either way too slow or plain broken.

            This is a pity: more and more developers target Chrome only, do all their optimisation for it, and just assume it will work in other browsers. Chrome is basically the new IE, and Google knows that well. Now they can ship whatever change optimises google.com and YouTube, and too bad if the rest of the web and other browsers break as a result.

            1. 2

              Sounds like our pain trying to get our code to work with WebRTC. Things were (still are?) changing so fast, and at such a different rate in each browser, that it became impossible to keep up, even using a shim.
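
              For anyone who missed that era, the churn looked roughly like this: a sketch of the prefix juggling shims such as adapter.js did (the prefixed names are real historical APIs, but their exact semantics varied by browser version, which was the whole problem):

              ```ts
              // Old browsers exposed RTCPeerConnection under vendor prefixes.
              const PeerConnection: typeof RTCPeerConnection =
                (window as any).RTCPeerConnection ||
                (window as any).webkitRTCPeerConnection || // old Chrome
                (window as any).mozRTCPeerConnection;      // old Firefox

              // getUserMedia moved namespaces *and* switched from callbacks to promises.
              function getUserMedia(constraints: MediaStreamConstraints): Promise<MediaStream> {
                if (navigator.mediaDevices?.getUserMedia) {
                  return navigator.mediaDevices.getUserMedia(constraints); // modern, promise-based
                }
                const legacy = (navigator as any).getUserMedia ||
                               (navigator as any).webkitGetUserMedia ||
                               (navigator as any).mozGetUserMedia;
                return new Promise((resolve, reject) =>
                  legacy.call(navigator, constraints, resolve, reject));
              }
              ```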