1. 26
  1.  

  2. 12

    My understanding from Twitter is that they object because they all use the whatwg spec, and the w3c spec was just a fork maintained for some weird copyright/licensing reasons, which are no longer valid.

    1. 5

      w3c uses github to discuss things?

      1. 3

        Mailing lists, github, meetings, bugzilla.

      2. 16

        TL;DR:

        • In 2004, Apple, Mozilla and Opera were becoming increasingly concerned about the W3C’s direction with XHTML, its lack of interest in HTML, and its apparent disregard for the needs of real-world web developers, and created the WHATWG as a way to gain control over web standards
        • they threw away a whole stack of powerful web technologies (XHTML, XSLT…) whose purpose was to make the web both machine-readable and useful to humans
        • they invented Living Standards, a sort of ex-post standard: ever-evolving documents, unstable by design, written by their hands-on committee, that no one else can fully implement, to establish a dynamic oligopoly
        • in 2017, Google and Microsoft joined the WHATWG to form a Steering Group for “improving web standards”
        • meanwhile the W3C realized that its core business is not helping lobbies spread broken DRM technologies, and started working on a new version of the DOM API
        • in 2018, after months of political negotiations, they proposed to move the working draft to Recommendation
        • in 2018, Google, Microsoft, Apple and Mozilla felt offended by this lack of lip service.

        It’s worth noting that both of these groups are centered in the USA, but their decisions affect the whole world.

        So we could further summarize that we have two groups, one controlled by USA lobbies and the other controlled by the most powerful companies in the world, fighting for control of the most important infrastructure on the planet.

        Under Trump’s Presidency.

        Take this, science fiction! :-D

        1. 27

          This is somewhat disingenuous. A web browser’s HTML parser needs to be compatible with the existing web, but the W3C’s HTML4 specification couldn’t be used to build a web-compatible HTML parser, so reverse engineering was required for independent implementation. With the WHATWG’s HTML5 specification, for the first time in history, web-compatible HTML parsing was specified, adoption agency algorithm and all. This was a great achievement in standards writing.

          Servo is a beneficiary of this work. Servo’s HTML parser was written directly from the specification without any reverse engineering, and it worked! Contrary to your implication, the WHATWG lowered the barrier to entry for independent implementations of the web. Servo is struggling with CSS because CSS is still ill-specified in the manner of HTML4. For example, the only reasonable specification of table layout is an unofficial draft: https://dbaron.org/css/intrinsic/ For a laugh, count the number of times “does not specify” appears in CSS2’s table chapter.

          1. 4

            You say backwards compatibility is necessary, and yet Google managed to get all major sites to adopt AMP in a matter of months. AMP has even stricter validation rules than XHTML.

            XHTML could easily have been successful if it hadn’t been torpedoed by the WHATWG.

            1. 15

              That has nothing to do with the AMP technology, but with Google providing a CDN and preloading (i.e., IMHO, abusing their market position).

              1. -1

                abusing their market position

                Who? Google? The web AI champion?

                No… they do no evil… they just want to protect their web!

            2. 2

              Disingenuous? Me? Really? :-D

              Who was in the working group that wrote the CSS2 specification?

              I bet a coffee that each of those “does not specify” was the outcome of a political compromise.

              But again, beyond the technical stuff, don’t you see a huge geopolitical issue?

            3. 15

              This is an interesting interpretation, but I’d call it incorrect.

              • the reason to create whatwg wasn’t about control
              • XHTML had little traction, because of developers
              • html5 (a whatwg standard fwiw) was the first meaningful HTML spec because it actually, finally, explained how to parse it
              • w3c didn’t “start working on a new DOM”. They copy/backport changes from whatwg hoping to provide stable releases for living standards
              • this has nothing to do with DRM (or EME). These are completely different people!
              • this isn’t about lobby groups, nor is this about influencing politics in the US or anywhere else.

              I’m not speaking in my capacity as a member of the w3c working group I’m in, nor for Mozilla. But those positions provided me with the understanding and background information to post this comment.

              1. 8

                XHTML had little traction, because of developers

                I remember that in the early 2000s everyone started to write <br/> instead of <br>, and it was considered cool and modern. There were 80x15 badges everywhere saying the website was in XHTML. My Motorola C380 phone supported WAP and some XHTML websites, but not regular HTML, in its built-in browser. So I had the impression that XHTML was very popular.

                1. 6

                  xhtml made testing much easier. For me it changed many tests from using regexps (qr#<title>foo</title>#) to using any old XML parser and XPath.
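                  The approach can be sketched in a few lines. Here is a minimal Python illustration (the XHTML markup is invented for the example): since well-formed XHTML is just XML in a namespace, any XML parser plus an XPath-style query replaces the regexp.

```python
import xml.etree.ElementTree as ET

# A well-formed XHTML document (invented for this example).
xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>foo</title></head>
  <body><p>Hello</p></body>
</html>"""

# Any old XML parser will do, since XHTML is just namespaced XML.
root = ET.fromstring(xhtml)
ns = {"h": "http://www.w3.org/1999/xhtml"}

# Query the parsed tree with an XPath expression instead of a regexp.
title = root.find(".//h:title", ns)
assert title.text == "foo"
```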

                  1. 3

                    Agreed. Worth noting that, after the html5 parsing algorithm was fully specified and libraries like html5lib became available, it became possible to apply exactly the same approach, with html5 parsers outputting a DOM structure that could then be queried with XPath expressions.
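                    For instance, a minimal sketch in Python (assuming the third-party html5lib package is installed; the tag-soup markup is invented): html5lib parses even non-well-formed input per the HTML5 algorithm into an ElementTree, which can then be queried exactly like parsed XHTML.

```python
import html5lib  # third-party package implementing the HTML5 parsing algorithm

# Deliberately sloppy, non-XML input (invented for this example).
soup = "<title>foo</title><p>Hello"

# html5lib parses it per the HTML5 spec into an ElementTree whose
# elements live in the XHTML namespace, just like parsed XHTML.
tree = html5lib.parse(soup)
ns = {"h": "http://www.w3.org/1999/xhtml"}

# The same XPath-style queries now work on tag soup.
assert tree.find(".//h:title", ns).text == "foo"
assert tree.find(".//h:p", ns).text == "Hello"
```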

                2. -1

                  This is an interesting interpretation, but I’d call it incorrect.

                  You are welcome. But given your arguments, I still stand with my political interpretation.

                  the reason to create whatwg wasn’t about control

                  I was 24 back then, and my reaction was “What? Why?”.

                  My boss commented: “wrong question. You should ask: who?”

                  XHTML had little traction, because of developers

                  Are you sure?

                  I wrote several web sites back then using XML, XSLT and XInclude server-side to produce XHTML and CSS.

                  It was a great technological stack for distributing content over the web.

                  w3c didn’t “start working on a new DOM”. They copy/backport changes from whatwg hoping to provide stable releases for living standards

                  Well, had I written a technical document about an alternative DOM for the whole planet, without anyone asking me to, I would be glad if the W3C had taken my work into account!

                  In what other way can they NOT waste the WHATWG’s hard work?
                  Well, except by saying: “guys, from now on do whatever Google, Apple, Microsoft and a few other companies from Silicon Valley tell you to do”.

                  But I do not want to side with the W3C: to me, they lost their technical authority with EME (different group, but same organisation).

                  The technical point is that we need stable, well-thought-out standards. What you call living standards are… working drafts?

                  The political point is that no oligopoly should be in a position to dictate the architecture of the web to the world.

                  And, you know, all in a state where strong cryptography is classified as munitions and is subject to export restrictions.

                  I’m not speaking in my capacity as a member of the w3c working group I’m in, nor for Mozilla. But those positions provided me with the understanding and background information to post this comment.

                  I have no doubt about your good faith.

                  But your idealism is probably fooling you.

                  If you try to see these facts from a wider perspective, you will see the problem I describe.

                3. 4

                  XHTML was fairly clearly a mistake and unworkable in the real world, as shown by how many nominally XHTML sites weren’t, and didn’t validate as XHTML if you forced them to be treated as such. In an ideal world where everyone used tools that always created 100% correct XHTML, maybe it would have worked out, but in this one it didn’t; there are too many people generating too much content in too many sloppy ways for draconian error handling to work well. The whole situation was not helped by the content-type issue, where if you served your ‘XHTML’ as anything other than application/xhtml+xml it wasn’t interpreted as XHTML by browsers (instead it was HTML tag soup). One result was that you could have non-validating ‘XHTML’ that still displayed in browsers because they weren’t interpreting it as XHTML and thus weren’t using strict error handling.

                  (This fact is vividly illustrated through syndication feeds and syndication feed handlers. In theory all syndication feed formats are strict and one of them is strongly XML based, so all syndication feeds should validate and you should be able to consume them with a strictly validating parser. In practice plenty of syndication feeds do not validate and anyone who wants to write a widely usable syndication feed parser that people will like cannot insist on strict error handling.)

                  1. 2

                    there are too many people generating too much content in too many sloppy ways for draconian error handling to work well.

                    I do remember this argument was pretty popular back then, but I have never understood why.

                    I had no issue generating XHTML Strict pages from user content. This real-world company had a couple hundred customers with fairly varied needs (from e-commerce to online magazines and institutional web sites) and thousands of daily visitors.

                    We used XHTML and CSS to distribute highly accessible content, and we had pretty good results with a prototype based on XSL-FO.

                    To me, back then, the appeal to real-world issues seemed like a pretext. We literally had no issues. The issues I remember were all from IE.

                    You are right that a lot of mediocre software was unable to produce proper XHTML. But is that an argument?

                    Do not fix the software, let’s break the specifications!

                    It seems a little childish!

                    XHTML was not perfect, but it was the right direction.

                    Look at what we have now instead: unparsable content, hundreds of incompatible JavaScript frameworks, subtle bugs, Bootstrap everywhere (aka much less creativity) and so on.

                    Who gains the most from this unstructured complexity?

                    The same people who now propose the ultimate lock-in: WebAssembly.

                    Seeing Linux running inside the browser is not funny anymore.

                    Catering to incompetent developers was not democratization of the web; it was technological populism.

                    1. 2

                      What is possible does not matter; what matters is what actually happens in the real world. With XHTML, the answer is clear. Quite a lot of people spent years pushing XHTML as the way of the future on the web, enough people listened to them to generate a fair amount of ‘XHTML’, and almost none of it was valid and most of it was not being served as XHTML (which conveniently hid this invalidity).

                      Pragmatically, you can still write XHTML today. What you can’t do is force other people to write XHTML. The collective browser world has decided that one of the ways people can’t force XHTML is by freezing the development of all other HTML standards, so that XHTML is the only way forward and desirable new features appear only in XHTML. The philosophical reason for this decision is pretty clear; browsers ultimately serve users, and in the real world users are clearly not well served by a focus on fully valid XHTML only.

                      (Users don’t care about validation, they care about seeing web pages, because seeing web pages is their goal. Preventing them from seeing web pages is not serving them well, and draconian XHTML error handling was thus always an unstable situation.)

                      That the W3C has stopped developing XHTML and related standards is simply acknowledging this reality. There always have been and always will be a great deal of tag soup web pages and far fewer pages that validate, especially reliably (in XHTML or anything else). Handling these tag soup web pages is the reality of the web.

                      (HTML5 is a step forward for handling tag soup because for the first time it standardizes how to handle errors, so that browsers will theoretically be consistent in the face of them. XHTML could never be this step forward because its entire premise was that invalid web pages wouldn’t exist and if they did exist, browsers would refuse to show them.)

                      1. 0

                        Users don’t care about validation, they care about seeing web pages, because seeing web pages is their goal.

                        Users do not care about the quality of concrete, because having a home is their goal.
                        There will always be incompetent architects, so let them do things their way, so that people get what they want.

                        Users do not care about car safety, because what they want is to move from point A to point B.
                        There will always be incompetent manufacturers, so let them do things their way, so that people get what they want.

                        That’s not how engineering (should) work.

                        Was XHTML flawless? No.
                        Was it properly understood by the average web developer that most companies like to hire? No.

                        Was it possible to improve it? Yes. Was it better than the current JavaScript-driven mess? Yes!

                        The collective browser world has decided…

                        Collective browser world? ROTFL!

                        There’s a huge number of browser implementors that nobody consulted.

                        Among others, in 2004, Microsoft, the vendor of the most widely used browser, IE, did not join the WHATWG.

                        Why did the WHATWG not use the IE design, if the goal was to liberate developers from the burden of well-designed tools?

                        Why did we face years of incompatibilities between browsers?

                        The WHATWG was turned into one of the weapons in a commercial war for control of the web.

                        Microsoft lost that war.

                        As always, the winners write the history that everybody knows and celebrates.

                        But whoever is old enough to remember the facts can see the hypocrisy of these manoeuvres pretty well.

                        There was no technical reason to throw away XHTML. The reasons were political and economic.

                        How can you sell ads if a tool can easily remove them from the XHTML code? How can you sell API access to data if a program can easily consume the same XHTML that users consume? How can you lock in users if they can consume the web without a browser? Or with a custom one?

                        The WHATWG did not serve users’ interests, whatever Mozilla’s intentions were in 2004.

                        They served some businesses at the expense of the users and of all the high-quality web companies that didn’t have many issues with XHTML.

                        Back then it was possible to disable JavaScript without losing access to the web’s functionality.

                        Try it now.

                        Back then, people were exploring the concept of the semantic web with the passion with which people now talk about the latest JS framework.

                        I remember experiments with web readers for blind people that could never work with the modern JS-polluted web.

                        You are right, W3C abandoned its leadership in the engineering of the web back then.

                        But you can’t sell a web developer bullshit about HTML5.

                        Beyond a few new elements and a slightly more structured page (which could have been done in XHTML too), all its exciting innovations were… more JavaScript.

                        Users did not gain anything good from this, just less control over content, more ads, and a huge security hole worldwide.

                        Because, you know, when you run JavaScript in Spain that was served to you from a server in the USA, who is responsible for that JavaScript running on your computer? Under which law?

                        Do you really think that such legal issues were not taken into account by the browser vendors that fueled this involution of the web?

                        I cannot believe they were so incompetent.

                        They knew what they were doing, and did it on purpose.

                        Not to serve their users. To use those who trusted them.

                  2. 0

                    The mention of Trump is pure trolling—as you yourself point out, the dispute predates Trump.

                    1. 6

                      I think it’s more about all of this sounding like a science fiction plot than just taking a jab at the Trump presidency; just a few years ago nobody would have predicted that this would happen. So, no, not pure trolling.

                      1. 2

                        Fair enough. I’m sorry for the accusation.

                        Since the author is critical of Apple/Google/Mozilla here, I took it as a sort of guilt by association attack on them (I don’t mind jabs at Trump), but I see that it probably wasn’t that.

                        1. 2

                          No problem.

                          I didn’t see that possible interpretation, or I wouldn’t have written that line. Sorry.

                      2. 3

                        After 20 years of Berlusconi, and with our current impasse with the Government, no Italian could ever troll an American about his current President.

                        It was not my intention in any way.

                        As @olivier said, I was pointing to this surreal situation from an international perspective.

                        The USA controls most of the internet: most root DNS servers, the most powerful web companies, the standards of the web, and so on.

                        Whatever effect Cambridge Analytica had on the election of Trump, it has shown the world that the internet is a common infrastructure that we have to control and protect together. Just like we should control the production of oxygen and global warming.

                        If Cambridge Analytica was able to manipulate USA elections (by manipulating Americans), what could Facebook itself do in Italy? Or in Germany?
                        Or what could Google do in France?

                        The Internet was a DARPA project. We can see it is a military success beyond any expectation.

                        I tried to summarize the dispute between the W3C and the WHATWG with a bit of irony because, in itself, it shows a pretty scary aspect of this infrastructure.

                        The fact that a group of companies dares to challenge the W3C (which, at least in theory, is an international organisation) is evidence that they do not feel the need to pretend they are working for everybody.

                        They have too much power to care.

                        1. 4

                          The last point is the crux of the issue: are technologists willing to do the leg work of decentralizing power?

                          Because regular people won’t do this. They don’t care. Thus, they should have less say in the issue, though still some, as they are deeply affected by it too.

                          1. 0

                            No. Most won’t.

                            Technologists are a wide category that etymologically includes everyone who feels entitled to speak about how to do things.

                            So we have technologists that mislead people into investing in the “blockchain revolution”, technologists that mislead politicians into allowing barely tested AI to kill people on the roads, technologists teaching in universities that neural network computations cannot be explained and thus must be trusted as superhuman oracles… and technologists that classify any criticism of mainstream wisdom as trolling.

                            My hope is in hackers: all over the world they have a better understanding of their political role.

                          2. 2

                            If anyone wonders about Berlusconi, Cracked has a great article on him and his exploits that had me calling Trump a pale imitation of Berlusconi. Well, until Trump won the US Presidency, which is a bigger achievement than Berlusconi’s. He did that somewhat by accident, though, and he can’t last 20 years either. I still think Berlusconi has him beat as the biggest scumbag of that type.

                            1. 2

                              Yeah, the article is funny, but Berlusconi was not. Not for Italians.

                              His problems with women did not much impress us, until it became clear most of them were underage.

                              But the damage he did to our laws and (worse) to our public ethics will last for decades.
                              He did not just change the law to help himself: he destroyed most of the legal tools for fighting organized crime, bribery, and corruption.
                              Worse, he emboldened a whole generation of younger people like him to be brazen about their cleverness with legal workarounds.

                              I pray for the US and the whole world that Trump is not like him.

                      3. 3

                        Personally, I was pretty bummed when they torpedoed XHTML. We were well on the (admittedly painful) way towards documents that could be parsed and manipulated with common tooling.