Email is still mostly unmolested if you understand the security and spam context; it’s not that Google made it impossible to run your own SMTP server, but in order to do so and not get flagged as spam, there are a lot of hoops to jump through. IMHO this is a net benefit: you still have small email providers competing against Gmail, but much less spam.
Email is mostly unmolested because it’s decentralized and federated, and a huge amount of communication crosses between the major players in the space. If Google decided they wanted to take their ball and go home, they would be cutting off all of Gmail, Yahoo Mail, all corporate mail servers, and many other small domains.
If we want to make other protocols behave similarly, we need to make sure that federation isn’t just an option, but a feature that’s seamless and actively used, and we need a diverse ecosystem around the protocols.
To foster a diverse ecosystem, we need protocols that are simple and easy to implement, so that anyone can sit down in front of a computer for a week, produce a compatible implementation of the protocol from first principles, and build a cooperating tool; that is what diffuses the power of the big players.
So how do you not get flagged for spam? I want to join you. I run my own e-mail server and have documented the spam issue here:
https://penguindreams.org/blog/how-google-and-microsoft-made-email-unreliable/
The only way I’ve found to combat Google and Microsoft’s spam filters is to send my e-mail and then text my friend to say, “Hey, I sent you an e-mail. Make sure it’s not in your spam folder.” Usually if they reply, my e-mail will get through… usually. Sometimes it gets dropped again.
I have DKIM, DMARC and SPF all set up correctly. Fuck Gmail and fuck Outlook and fuck all the god damn spammers that are making it more difficult for e-mail to just fucking work.
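For anyone debugging a setup like this, here is a minimal sketch (assuming the dnspython package; the domain example.org and the DKIM selector “mail” are placeholders for your own) of checking that the SPF, DMARC, and DKIM records are actually published where receiving servers look for them. It’s a sanity check, not a deliverability guarantee.
```python
# Quick sanity check that SPF, DKIM and DMARC records are published.
# Assumes: pip install dnspython. "example.org" and the selector "mail"
# are placeholders for your own domain and DKIM selector.
import dns.resolver

DOMAIN = "example.org"      # hypothetical domain
DKIM_SELECTOR = "mail"      # hypothetical selector

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

spf   = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]
dkim  = [r for r in txt_records(f"{DKIM_SELECTOR}._domainkey.{DOMAIN}") if "v=DKIM1" in r]

print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
print("DKIM: ", dkim or "missing")
```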
Forgive the basic question: do you have an rDNS entry set for your IP address so a forward-confirmed reverse DNS test passes? I don’t see that mentioned by you in your blog post, though it is mentioned in a quote not specifically referring to your system.
It’s not clear who your hosting provider (ISP) is, though the question you asked them about subnet-level blocking is one you could answer yourself via third-party blacklist provider (SpamCop, Spamhaus, or many others of varying quality) and as a consequence work with them on demonstrable (empirical) sender reputation issues.
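For reference, the forward-confirmed reverse DNS test is easy to reproduce yourself. A rough stdlib-only sketch (192.0.2.10 is a placeholder from the documentation address range; substitute your mail server’s IPv4 address):
```python
# Forward-confirmed reverse DNS (FCrDNS) sketch: the PTR name for the IP
# must resolve forward to an address set that contains the original IP.
import socket

def fcrdns_ok(ip):
    try:
        ptr_name, _, _ = socket.gethostbyaddr(ip)               # reverse (PTR) lookup
    except socket.herror:
        return False, None
    try:
        _, _, forward_ips = socket.gethostbyname_ex(ptr_name)   # forward (A) lookup
    except socket.gaierror:
        return False, ptr_name
    return ip in forward_ips, ptr_name

ok, name = fcrdns_ok("192.0.2.10")   # placeholder IP
print(f"PTR={name!r}, forward-confirmed={ok}")
```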
Yes I’ve been asked that before and haven’t updated the blog post in a while. I do have reverse DNS records for the single IPv4 and 2 IPv6 addresses attached to the mail server. I didn’t originally, although I don’t think it’s made that big a difference.
I’ve also moved to Vultr, which blocks port 25 by default and requires customers to explicitly request that it be unblocked, so hopefully that will avoid the noisy-subnet problem so often seen at providers like my previous host, Linode.
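If you ever need to check whether a host or provider is blocking outbound port 25, a quick sketch like this will tell you; mx.example.net is a placeholder for an MX host you are actually entitled to connect to.
```python
# Rough check for outbound port 25 reachability. "mx.example.net" is a
# placeholder; substitute a real MX host you are allowed to connect to.
import socket

def port25_open(host, timeout=10):
    try:
        with socket.create_connection((host, 25), timeout=timeout) as s:
            banner = s.recv(256)            # SMTP servers greet with a 220 line
            return banner.startswith(b"220")
    except OSError:
        return False

print("outbound port 25 usable:", port25_open("mx.example.net"))
```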
I think a big factor is mail volume. Google and Microsoft seem to trust servers that produce large volumes of ham, and I know people at MailChimp who tell me how they gradually spin up newer IP blocks by slowly adding traffic to them. My volume is very small: my Mastodon instance and Confluence install occasionally send out notifications, but for the most part my output is tiny.
Email is inherently hard, especially spam filtering; Google and Microsoft just happen to be the largest email providers, so it appears to be a Google or Microsoft problem, but I don’t think it is.
E-mail was once the pillar of the Internet as a truly distributed, standards-based and non-centralized means of communicating with people across the planet.
I think you’re looking through rose-tinted glasses a bit. Back in the day email was also commonly used to send out spam from hijacked computers, which is why many ISPs now block outgoing port 25, and many email servers disallow emails from residential IPs. Clearly that was suboptimal, too.
Distributed and non-centralized systems are an exercise in trade-offs; you can’t just accept anything from anyone, because the assholes will abuse it.
It’s also not really a Google issue; many non-Google servers are similarly strict these days, for good reasons. It’s just that Google/Gmail is now the largest provider, so people blame them for not accepting their badly configured email servers and/or wildly invalid emails.
I’ve worked a lot with email in the last few years, and I genuinely and deeply believe that at least half of the people working on email software should be legally forbidden from ever programming anything related to email whatsoever.
Mastodon, Synapse, and GNU Social all implement a mixture of blacklists, CAPTCHAs, and heuristics to lock out spambots and shitposters. The more popular they get, the more complex their anti-spam measures will have to get. Even though they’re not identical to internet mail (obviously), they still have the same problem with spambots.
Those problems are at least partly self-inflicted. There’s nothing about ActivityPub which requires you to rehost all the public content that shows up. You can host your own local public content, and you can send it to other instances so that their users can see it.
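To make the “send it to other instances” idea concrete, here is a rough sketch of delivery without rehosting: the origin server keeps the canonical copy of the post and only pushes an ActivityPub Create activity to each follower’s inbox. All names and URLs here are made up, and a real implementation would also need HTTP Signatures, retries, and shared-inbox handling.
```python
# Sketch of ActivityPub-style push delivery: the canonical copy of the post
# stays on the origin instance; remote instances just receive the activity.
# Actor, inbox URLs and IDs are hypothetical; real delivery must also sign
# the request with HTTP Signatures, so most servers will reject this as-is.
import json
import urllib.error
import urllib.request

ACTOR = "https://social.example/users/alice"          # hypothetical actor
FOLLOWER_INBOXES = ["https://other.example/inbox"]    # hypothetical inboxes

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": f"{ACTOR}/activities/1",
    "type": "Create",
    "actor": ACTOR,
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "id": f"{ACTOR}/notes/1",    # canonical copy lives on the origin server
        "type": "Note",
        "attributedTo": ACTOR,
        "content": "Hello, fediverse",
    },
}

for inbox in FOLLOWER_INBOXES:
    req = urllib.request.Request(
        inbox,
        data=json.dumps(activity).encode("utf-8"),
        headers={"Content-Type": "application/activity+json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print(inbox, resp.status)
    except urllib.error.HTTPError as err:    # expected without HTTP Signatures
        print(inbox, "rejected:", err.code)
```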
Rehosting publicly gives spammers a very good way to see and measure their reach. They can tell exactly when they’ve been blocked and switch servers. Plus all the legal issues with hosting banned content, etc.
You’re acting as if that ONE problem (abusive use) is THE only problem and the rule and guide with which we should judge protocols.
While a perfectly reasonable technocratic worldview, I think things like usability are also important :)
In general, you’re right. A well-designed system needs to balance a lot of trade-offs. If we were having a different conversation, I’d be talking about usability, or performance, or having a well-chosen set of features that interact with each other well.
But this subthread is about email, and abusive use is the problem that either causes or exacerbates almost every other problem in email. The reason why deploying an email server is such a pain is anti-spam gatekeeping. The reason why email gets delayed and silently swallowed is anti-spam filtering. The reason why email systems are so complicated is that they have to be able to detect spam. Anti-backscatter measures are the reason why email servers are required to synchronously validate the existence of a mailbox for all incoming mail, and this means the sending SMTP server needs to hold open a connection to the recipient while it sifts through its database. The reason ISPs and routers block port 25 by default is an attempt to reduce spam. More than half of all SMTP traffic is spam.
If having lots of little servers is your goal, and you don’t want your new federated protocol to end up controlled by a small number of giant servers, then you do need to solve this problem. Replicate email’s federation method and you get email’s emergent federation behavior.
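As an aside, the synchronous mailbox validation described above is visible even from the standard library. A minimal sketch (host and addresses are placeholders, and many servers will refuse probes like this): the sending side has to hold the connection open while the receiver decides whether RCPT TO names a real mailbox.
```python
# Sketch of the synchronous RCPT TO check described above: the sender keeps
# the SMTP connection open while the receiver looks up the mailbox and
# answers with a 2xx (exists) or 5xx (reject) code. Placeholder host/addresses.
import smtplib

def mailbox_probably_exists(mx_host, address):
    with smtplib.SMTP(mx_host, 25, timeout=30) as smtp:
        smtp.ehlo("probe.example.net")              # identify ourselves
        smtp.mail("postmaster@probe.example.net")   # MAIL FROM
        code, reply = smtp.rcpt(address)            # receiver checks its database here
        return code, reply

print(mailbox_probably_exists("mx.example.net", "someone@example.net"))
```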
XMPP has a lot of legitimate issues. Try setting up an XMPP video chat between a Linux and a macOS client. I’d rather lose my left arm than try doing that again.
Audio, video and file transfer are still very unreliable on most IM platforms. Every time I want to make an audio or video call with someone, we have to try multiple applications/services and use the first one that works.
Microsoft Teams does this pretty well, across many platforms. Linux support is (obviously, I guess) still a bit hacky, but apparently it’s possible to get it working as well.
It pains me to see how happily people are getting herded around by GAFAM with dirty tricks like this, but I honestly don’t know what to do. I haven’t managed to convince even a single person to use “freer” software in years.
Once upon a time I would’ve tried to spread the Holy Word of open-source (or at least user-respecting) software like Firefox and Linux, but I’ve mostly given up on that. In my experience, “normal people” associate these things (yes, even Firefox) with “technical people”, so they’re too scared to try. (And actual technical people are too stubborn ;-D)
My last job was a shop of 20 or so developers. We all had the same standard-issue MacBook. I could tell mine apart because I put a Firefox sticker on it. I evangelized quite a bit with my teammates. By the end of my three-year tenure there, I had convinced nobody that at least supporting Firefox in our web applications was necessary, and nobody to use Firefox as their main browser.
I left that job a few weeks ago, and I learned from my old boss that he and two others started using Firefox because they missed me and my ridiculous evangelism.
It’s okay to give up, or take a break, from evangelizing, because it’s kind of exhausting and puts your likability with others at risk. But I think it’s hard to tell when you’re actually being effective; there’s no real feedback loop. So if you’re tired of it and don’t want to do it anymore, by all means stop evangelizing. But if you want to stop just because you don’t think you’re being effective, you may be wrong.
You can’t convince “normal” people by saying it’s “freer” and open source (unfortunately, not even by saying it’s more private) - they don’t care about this stuff.
I convinced my wife to try FF once Quantum came out by saying “it is much faster”. She got used to it.
I managed to convince a teammate to use FF because of the new grid inspector.
People only care about “what’s in it for me”.
Of course that’s what they care about, I know. IMHO not having a multi-billion-dollar corporation breathing down your neck is a pretty big plus, but I guess that’s just a bit too abstract for most people, as you seem to be implying.
It’s an interesting article, and I don’t doubt that it’s true. What I see at play here is the perennial conflict between the business side and the engineering side.
The engineers WERE on the same team and DID want the same things, and I don’t doubt there was zero malice on their part.
However, you have folks looking over your shoulder saying “HEY, did you put in $FEATURE that we need to sell $CLIENT?” and so it goes.
IMO this is precisely why Mozilla is so important, and folks who want to break away from them and encourage people to run AsceticHipsterIceWeaselFox instead are making a mistake.
well the question of whether firefox is a good tactic to resist google domination is much more complicated. it seems like you’re saying mozilla isn’t as bad as google, which makes sense, but that doesn’t mean supporting firefox is worthwhile.
That’s not what I’m saying at all. I’m saying that Firefox is 100% open source, which means we can validate and verify what it’s doing ourselves, and that’s important. It’s the only major player in the marketplace we can say that about.
In any case I think the point is that Mozilla, including its managers, has internal pressures to “do the right thing” whereas at Google that’s far, far less pronounced. What replaces “do the right thing” at Google is “make money” which then gets you all sorts of nasty results like surveillance capitalism and the kind of antitrust-smelling allegations leveled here. (Is Mozilla perfect? Of course not, like anything in this world. But I do think there is that strong “do good” ethos in the org that is lacking in many other places.)
no, google managers, because they decide what goes in chrome, which determines what goes in firefox.
i don’t have any personal experience with mozilla, but a belief in “doing the right thing” doesn’t give me much comfort. i mean that used to be google’s ethos too, and it didn’t prevent them from doing harm.
in mozilla i’m sure there is already a desire to maintain the organization and keep your job, which translates to “maintain the money flows from google.”
Googler here, opinions are my own.
“coordinated plan that involved introducing small bugs on its sites that would only manifest for Firefox users.”
as if Googlers have time for that. I would guess simple pressure to launch fast, and not spending the eng effort optimizing for FF due to low market share.
I’m all for ‘don’t attribute to malice what can be explained by incompetence’ but I don’t believe Google is that incompetent.
You’d be surprised. The right answer is often the simplest. It is easier to build something for one browser than it is for more than one. Google builds against Chrome. They develop features using what Chrome supports. Is that the fastest way to build features? Yes. Is this the right thing to do? Debatable. Is this malicious? No.
Intentions are irrelevant; impact is all that matters.
If intentions are irrelevant, then there’s no need to use that word in the first sentence of the article.
Better call their editor then. But more broadly, the problem is the aggregate effect of Google’s actions at scale, and it doesn’t matter what lies within the hearts of their engineers.
Full disclaimer: I work at Google.
I think this is a case of product managers across the company deprioritizing non-Chrome support once Chrome had a large market share. They try to launch first and iterate, so they prioritize the larger platforms first, which would include Chrome. There’s no malicious Google-wide directive to kill Firefox by not supporting it in our web products. That said, there’s no company-wide pillar to uphold the open web by supporting Firefox just as well as Chrome either. (We have similar pillars for privacy/diversity initiatives.) Note that Google’s iOS apps are well maintained. That’s because there’s a lot of money and customers in that ecosystem.
A company can both have a coherent vision and a thousand people making individual decisions.
I’m not sure what your comment is really meant to say, but if your product has a simple bug when viewed in a competing browser, a literal one-liner that could be fixed before lunch, then it must be an intentional decision to ship it as-is and let your two-week release cycle eventually deliver a fix that only takes a few man-hours.
Of course, this is simplifying, but I don’t really see how the first-iteration argument could be applied here:
How long does it take to develop a product?
How much effort is involved to ensure compatibility?
How much is gained by the competition looking bad in the first two weeks after launch?
These are the questions at stake here.
Speaking of which, as a Firefox user, it’s hardly a coincidence that all the while Google’s video-conferencing products don’t work in my browser, much of the competition, like CoderPad.io, works just fine. (Yet we still get surprised when something does work, because Google is the one setting the standard, even though video conferencing has been available in Firefox for something like five years now (since 2014?), if a quick search is to be believed.)
What I’m saying is that in the absence of institutional pressure to uphold certain standards, those standards won’t be met. Each product launch needs to meet privacy, security, i18n, and a11y standards. They’re enforced by external committees and manual testers. After launch, there are periodic reviews, and every new feature launch needs to get the same approvals. There isn’t a similar checkmark for multi-browser compatibility and performance, so devs/PMs don’t care about it as much.
Plus, it’s always possible the devs or manual testers didn’t find that small issue in Firefox. Every product launches with bugs.
I’m trying to say it’s not institutional malevolence, just that it’s an institutional non-goal. Perhaps if there’s enough pressure (say from lawmakers), great multi-browser support would be one of those launch checkmarks. As it stands, the customers and the money don’t make it a priority.
Traditionally this is precisely why antitrust laws exist. Google now has an effective monopoly on the browser market, and there are no checks and balances to prevent Google from eradicating all the competition. Whether Google does it with intentional malice or not is really beside the point.
The problem with traditional anti-trust legislation in the US in relation to this issue is that it’s not at all clear who is harmed by this monopoly.
A traditional monopoly will naturally raise prices, as there is no competition to force it not to, and this will harm consumers of the product it produces. A lawyer would argue: how is getting a high-quality browser for free harming consumers?
You might argue that Google has a monopoly on online advertisements, but that’s not the case. Facebook is also a huge advertisement broker.
When has that ever happened? And how much did the period of higher prices offset the preceding period of aggressively low prices?
I was paraphrasing from faulty memory the perceived arguments for enacting antitrust legislation in the US in the late 19th century:
https://en.wikipedia.org/wiki/United_States_antitrust_law#History
I don’t find my statement controversial though. Why would an enterprise, unhindered by competition or regulation, not raise prices to the absolute maximum the market can bear? It would be its fiduciary duty to do so, to benefit its owners.
Prescription drug costs in the US literally right now?
The context was “anti-trust legislation”. The monopoly nature of pharmaceutical companies is a direct result of “pro-trust legislation”: patents.
No it’s not. Loads of medicines like insulin are totally unencumbered, yet their price is rising dramatically.
Also what are you talking about? You asked when monopolies and trusts have ever raised prices, implying that has never happened. It has. I gave an example. And there are plenty of other examples throughout history, it just so happens that was the answer I thought of literally instantly.
Loads of medicines like insulin are totally unencumbered, yet their price is rising dramatically.
You’re saying that some company has a monopoly on a patent-free product?
You asked when monopolies and trusts have ever raised prices, implying that has never happened.
The context was “anti-trust legislation”. It makes no sense to suggest anti-trust legislation to solve a problem begat by pro-trust legislation; just eliminate the pro-trust legislation.
You’re saying that some company has a monopoly on a patent-free product?
No. A group of companies. Sometimes known as a trust, see also the term “antitrust law.” In the case of insulin, a group of three companies.
Or perhaps the cost of insulin has risen dramatically due to its production cost rising dramatically? And every other developed country in the world has developed technology to offset those costs except the US?
I’m open to other hypotheses.
Can’t those countries sell insulin to the US? If not, is it because of another law preventing them from doing so?
Great question! No, they can’t sell in the US. It’s a hot topic in the Democratic presidential primary. Also, check out this article about policy to reduce prescription drug prices from 2017, which mentions allowing imports from Canada and Mexico as possible solutions, to force US providers to set accurate prices or lose business.
Regardless of historical examples, that’s literally Uber’s planned business model. The fares they charge for rideshare simply don’t cover costs. They’ve been operating at billion-dollar losses each year to support this.
So why set fares unsustainably low? To gain marketshare. To put cab companies out of business. Once they have market dominance, they can bring their fares up to sustainability, then profitability, then gouge-level profitability.
Regardless of historical examples, that’s literally Uber’s planned business model
Rather an automatically-accepted just-so truism, much like the original one that I questioned.
This reminds me of when people thought Google had no business reason to create a web browser. Now it’s “obvious”. But since you can’t see any other business model then clearly there’s only one option.
Google wasn’t posting yearly losses in the billions and asking for additional funding. Regardless of Uber’s actual business plan, there are many rich people convinced that the plan is monopoly.
Meanwhile, consumers have had 10 years of good service and VC-subsidized prices. And any raised prices once Uber achieves “monopoly” will be under threat if they are high enough to present a profitable (competitor) opportunity.
This is basic economics. The imagined “problem” of a perpetual, abusive monopoly never seems to manifest. Except of course in cases where the monopoly is enforced by law.
I agree that policy is the only lever that can change a big player. Even the threat of regulation may be enough.
I think it does matter if one characterizes this issue as intentional or not. If it’s intentional, it’s certainly malicious and may point to the people being bad actors. Whereas, if it’s the outcome of an anarchic process, it’s bad too, but perhaps it’s just an oversight. The punishment and fixes will have to be different. Also, it’s hard to convince someone to change their views if it looks like you’re arguing in bad faith, so it’s important to characterize the problem correctly.
It’s perfectly possible for the organization as a whole to be malicious while everybody within the organization just follows the process. The problem here is with Google as a company as opposed to intents of any single individual. As others pointed out conscious decisions had to be made to stop testing with Firefox, to introduce features outside W3C spec, and so on. I think that the individuals who choose to work for a particular organization share moral responsibility for the actions of that organization.
Notably, this to me seems to at the very least confirm a discrepancy between the words (the official “We’re on the same side. We want the same things”; unspoken, which makes it unclear, but I can only assume this to mean at least “conforming to Web Standards”) and the actions (not prioritizing tests with Another Standards Conforming Browser, i.e. Firefox). In harsher words, such behavior tends to be called hypocrisy.
That’s the first thing. The second thing is, presumably, someone at some level of management had to make an explicit decision to drop Firefox testing at some point in time - assuming it was there before Chrome was released. Then, someone else at the company had to vet this decision. That makes it at least two people. As others said, it’s hard to imagine such levels of pure, blissful, unscrutinized incompetence, at Google of all places. And even assuming charitably that those were the real reasons, such incompetence would have been noticed later and corrected. But then, also from your comments, we clearly see it was not.
It’s possible for the Chrome team to be for the open web and for teams across the company to launch products with bugs in non-Chrome browsers. Both are possible at the same time. I don’t think bugs at the margins invalidate whatever the Chrome team is doing.
I’m not saying that anyone decided to drop Firefox testing; that’s not true. There is no management directive to drop Firefox support. I’m presenting a distributed, anarchic model of decision-making. I’m saying that when you have dozens of teams working on dozens of web products (ranging from small teams working on quick experiments to large teams supporting billion-user products), it’s possible for bugs to slip through that make it seem like the company doesn’t care. The fact that perfect multi-browser support is not required to ship a product doesn’t mean that devs/PMs don’t care about it. Perfection on any axis is not required to ship a product.
The recent Gmail web product launched even though it loaded slower than the previous version. That’s because users liked the new version better. The tradeoff was to launch and improve performance after.
There is no management directive to drop Firefox support.
It’s enough if there’s no directive to keep it.
If someone pumps up a balloon, and then at some point stops pumping it, that’s not an action per se, but it’s still a conscious decision of inaction. Saying “wow, I’m soo surprised the balloon deflated! I do still care about the balloons being inflated! I just now pump a different balloon!” in that situation doesn’t appear completely honest.
This reads almost like a joke. No offence, but I really cannot believe that any development teams at a company whose core competence is web applications would not test their web applications on the leading N% web browsers.
N might vary from 95% for a small shop to 99% which was the case at my former employer: a 20-person small business writing web applications. We supported all major browsers on the three major desktops, Apple and Samsung phones and tablets default browsers, and a few other mobile browsers.
How could a company the size of Google test their web applications only on a single web browser?
I really cannot believe that any development teams at a company whose core competence is web applications would not test their web applications on the leading N% web browsers
Believe it. If you’re testing a small application it’s not too hard. But when you’re testing huge applications for not only bugs, but performance issues, it’s a lot harder. And Google engineers have buff workstations and top of the line laptops where lots of performance issues simply don’t show up.
There are other more traditional business reasons this stuff happens that Google isn’t immune to. Once big projects get enough momentum they tend to roll out even if there are obvious problems—the sunk cost fallacy. I don’t know anything specific about the polymer redesign mentioned, but I wouldn’t be shocked if someone told me that’s what happened.
Well, each web-site at Google probably has on the scale of 10-50 engineers on it. The apps are complex, the tech stack is custom, and there are competing priorities, like at any company. If bugs slip through the cracks, it’s for the same reason as any other company.
The primary and secondary features of all major Google apps work fine across all current browsers. Is that not true?
The primary and secondary features of all major Google apps work fine across all current browsers. Is that not true?
I don’t know, other than Gmail and Search I do not use many Google products.
And it does not matter how many engineers (or devs) are on an app; what matters is the testing procedures in the build process. Whether they need to sit a “Firefox-testing” intern next to the “Chrome-testing” intern, or need to run test-in-firefox.sh as well as test-in-chrome.sh, is immaterial. What is material is that Firefox (and IE, and a few other browsers) exist and need to be tested for.
“My team is too large” is not an excuse, especially as these things scale better for larger teams.
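For what it’s worth, wiring a second browser into the same smoke test is not much code. A rough sketch with Selenium 4 (assuming the selenium package with chromedriver/geckodriver available; the URL and the title assertion are placeholders for a real app and a real check):
```python
# Sketch of running the same smoke test in Chrome and Firefox with Selenium.
# Assumes Selenium 4 with chromedriver/geckodriver available; the URL and the
# title check are placeholders for a real application and a real assertion.
from selenium import webdriver

APP_URL = "https://app.example.net/login"   # hypothetical app under test

def smoke_test(driver):
    try:
        driver.get(APP_URL)
        assert "Login" in driver.title, f"unexpected title: {driver.title!r}"
    finally:
        driver.quit()

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    smoke_test(make_driver())
    print(f"{make_driver.__name__}: OK")
```
Running something like this in CI against every supported browser is exactly the kind of institutional checkmark discussed above.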
I’m not an iOS expert and don’t work on iOS apps at the company, but I really don’t think it took a year for major apps to support iPhone X. It’s up to the individual PMs to prioritize features; I can’t speak to why iPad support isn’t wide-spread. You can imagine there are many competing goals for each product.
Without going into too much detail, it would have been a significant undertaking to not only develop, but continue to support since any modification to DOM logic could transitively impact shadow DOM.
A pet peeve of mine: Why does Mozilla participate in the WHATWG group? Who benefits from a Living Standard (an oxymoron if ever there was one)? Isn’t this exactly what is happening there too?
Why would they leave WHATWG? Mozilla doesn’t have more power to sway standards going alone than within WHATWG.
WHATWG mostly focuses on documenting the reality, so it merely reflects the power dynamics between browser vendors. If something exists and is necessary for real-world “web compatibility”, even if that’s Chrome’s fire-and-motion play, it still gets documented, because that’s just what has to be supported to view the web as it exists.
This is how you get to OOXML. Why should Mozilla invest resources in documenting what Chrome does? Further, why should Mozilla legitimize what Chrome does by implementing non-standard conforming behavior? What was wrong with the original URL standard from IETF?
Also, do read Google’s critique of OOXML, especially the “why multiple standards aren’t good” question, which is relevant in this case (because a living standard is no standard at all).
What was wrong with the original URL standard from IETF?
It wasn’t reflecting the reality.
The old W3C and IETF specs are nice works of fiction, but they’re not useful for building browsers, because they don’t document the actual awful garbage the web is built from.
If OOXML had been a reasonably complete standard, good enough to actually allow you to interoperate with Microsoft Office, I for one would have wholeheartedly supported it. It isn’t.
Could you elaborate on what you’d prefer they do instead of continuing to participate in WHATWG, of which they’re one of the 3 founding members?
Do you believe they should:
stop adding things to the Web platform and just focus on fixing existing bugs?
keep advancing the Web platform but do outreach primarily through their own open mailing lists and open bug tracker?
do outreach via the W3C? (Note that Mozilla founded WHATWG, along with Apple and Opera—and without Google, which wouldn’t release Chrome for another 4 years—because of their frustrations with the W3C.)
something else entirely?
Genuine question, tried to avoid making any option sound unreasonable.
Similarly, why is a Living Standard an oxymoron? I believe all stakeholders in the Web platform benefit from the Living Standard:
implementors of user agents have a centralized place to research and discuss interoperability of every API in the Web platform, with significantly more detail than MDN or caniuse.com (the next best places I know of)
web developers both benefit indirectly, from the improved interoperability, and benefit directly by being able to go to the centralized place when they need spec-level detail about the operation and interop of Web APIs
web consumers benefit from the improved compatibility between web pages/webapps and user agents
If you have a ‘living standard’, then there is a very interesting ramification. You can have a browser that correctly implements html5, and a webpage that’s written in correct html5, and yet the webpage will not render correctly. Now, ‘html5’ doesn’t mean anything anymore.
Browsers add, change, fix and break things all the time. Pages roughly follow what browsers support at the time. The living standard is just realistic about the living nature of the web.
The living standard is just realistic about the living nature of the web
But it doesn’t have to be that way. C is standardized, versioned, and while there are extensions they are also standardized. Compilers generally (looking at you, msvc) advertise support for, and do in fact support a given version. Most programming languages are like this. Python has no official standard, but it’s still very definitely versioned and other implementations like, e.g., pypy, say ‘we support python x.y’ and that means something. What does it mean to say ‘we are written in html5’ or ‘we support html5’? Nothing.
W3C tried for over 15 years the approach of telling everyone they were wrong and should be ashamed of their non-standard markup, and that only led to the W3C losing control of the HTML spec.
In practice, the spec needs to define that you have to allow exactly 511 unclosed <font> tags, no more, no less, because that’s what IE did, and there are pages that rely on exactly this. Supporting HTML5 means supporting these things, which is much, much more meaningful than what “we support HTML4” meant, where the syntax was hand-waved as “just use sort-of SGML” - so disconnected from reality that it meant nothing useful for browser vendors.
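The point about error handling being part of the spec is easy to see in code. A small sketch, assuming the html5lib package (which implements the WHATWG parsing algorithm), shows that even badly broken markup parses to one well-defined tree instead of undefined behaviour:
```python
# Sketch: the WHATWG parsing algorithm defines exactly what tree broken
# markup produces, so conforming parsers agree even on garbage input.
# Assumes: pip install html5lib
import html5lib
import xml.etree.ElementTree as ET

broken = "<p><b>unclosed tags <i>everywhere<p>second paragraph"
tree = html5lib.parse(broken, namespaceHTMLElements=False)  # returns an etree element
print(ET.tostring(tree, encoding="unicode"))
```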
That’s because HTML5 is too broad a term (and nebulous to boot since often times people mean HTML5 and CSS3 and ES6+). You can still make meaningful statements like “we support such-and-such attributes from HTML5”, or “we are compatible with Firefox 67”, because what you really care about is the set of features that are supported, not the version number.
Besides, what’s the proposed alternative? Wait a couple years for the new HTML version to come out and then everybody implements and ships that? That’s been tried before and everyone hated it. That’s the reason evergreen browsers are such a big deal. Of course you can say, well we’ll implement what’s in the working drafts and not wait for the final publication to start implementing, but then you’re back to following a living document. Better to just make it explicit, which is what WHATWG does.
You can still make meaningful statements like “we support such-and-such attributes from HTML5”
Which version of their behaviour?
“we are compatible with Firefox 67”
Congratulations, you’ve implemented versioning! Now that wasn’t hard, was it?
Wait a couple years for the new HTML version to come out and then everybody implements and ships that? That’s been tried before and everyone hated it
I wasn’t really around when html4 was going on but why, exactly, did people hate that? It works fine in other languages.
Of course you can say, well we’ll implement what’s in the working drafts and not wait for the final publication to start implementing, but then you’re back to following a living document
No you’re not! Because if you do it this way, the browsers have to support multiple versions at once, just like the C compilers - now, C never breaks backwards compat so this is hardly an issue there, but with versioning you can make drastic changes to HTML, and progress can actually be made more quickly! HTML 2017 can break compatibility with HTML 2014 and be so much better than HTML 2014. But because there’s a version specifier at the top, browsers can correctly implement HTML 2014, 2017, and 2020-draft, and websites written in all 3 will still work. Standards will advance more quickly, and websites don’t break unless they choose to use a -draft specification, in which case they’re in no more danger of breaking than they are with HTML5/WHATWG.
“xyz with support for abc advanced feature” is just another feature. If you look at any MDN page that’s how it’s treated; the browser support section will have a “basic support” column with additional columns for more advanced extensions added later.
Congratulations, you’ve implemented versioning! Now that wasn’t hard, was it?
I’m confused as to what the point is here, can you clarify? Might just be that the text is hard to read because of the lack of facial cues, body language, etc.
I wasn’t really around when html4 was going on but why, exactly, did people hate that? It works fine in other languages.
Because people weren’t willing to wait. Everybody was excited about the new shiny but no one wants to hear about a technology that will be super exciting to use and then wait a full two or three years to use it. There’s also the problem of real-world implementations. If you design the standard in an ivory tower and then everybody implements it, how do you know it won’t be garbage to use in practice? Speaking as someone who worked (a little) on ActivityPub it’s very easy to get in your head and think you have a great design that works and then someone implements it and runs into all sorts of corner cases or problems or whatever that you didn’t consider. People actually using the technologies is still the best way to flush out design issues that we as a field know of.
Now that I’m thinking about it, I think a big part of this is that in a language like C, you can already do most or all of what you want without the new version (I’m not familiar enough with C to comment in detail on how much easier or more expressive the newer extensions make it). But the web is so sandboxed that a lot of what gets added brings fundamental expressive power or otherwise fundamentally changes the platform. You can’t polyfill <audio>. You can’t polyfill <video> or <canvas> either.
Standards will advance more quickly, and websites don’t break unless they choose to use a -draft specification, in which case they’re in no more danger of breaking than they are with the html5/WHATWG.
That’s not true - they’ll be in far more danger because with the current HTML Living Standard one of the golden rules is to not break backwards compatibility. That’s why the HTML Living Standard has so many gross hacks in it (as discussed elsewhere in this thread); it’s all to preserve compatibility. Compatibility is broken very sparingly, and it requires lots of discussion, browser telemetry to see how much would break in the wild by the change, etc.
Congratulations, you’ve implemented versioning! Now that wasn’t hard, was it?
I’m confused as to what the point is here
If you say “we’re pegged to firefox 67,” then ‘firefox 67’ is as much a version as ‘html 4’.
they’ll be in far more danger because with the current HTML Living Standard one of the golden rules is to not break backwards compatibility. That’s why the HTML Living Standard has so many gross hacks in it (as discussed elsewhere in this thread); it’s all to preserve compatibility. Compatibility is broken very sparingly, and it requires lots of discussion, browser telemetry to see how much would break in the wild by the change, etc.
I think you are missing my point. If a webpage specifies which version it wants, then versions are allowed to break compatibility and don’t need gross hacks or long discussion.
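As a thought experiment only (this is not how any real browser works, and every name below is made up), the versioned model being argued for here boils down to dispatching on a version the page declares; a toy sketch:
```python
# Toy sketch of the versioned model argued for above: the page declares a
# version and the client picks the matching parser. Purely hypothetical;
# real browsers deliberately ship a single living-standard parser instead.
def parse_html_2014(source):
    return ("html-2014 tree for", source)

def parse_html_2017(source):
    return ("html-2017 tree for", source)

PARSERS = {"2014": parse_html_2014, "2017": parse_html_2017}

def parse(document):
    first_line, _, body = document.partition("\n")
    version = first_line.removeprefix("<!doctype html ").rstrip(">").strip()
    parser = PARSERS.get(version)
    if parser is None:
        raise ValueError(f"unsupported HTML version: {version!r}")
    return parser(body)

print(parse("<!doctype html 2014>\n<p>old page</p>"))
print(parse("<!doctype html 2017>\n<p>new page</p>"))
```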
I think that working within the standard is precisely the right thing here. I believe that WHATWG was the wrong approach (as seems evident from this chain of events mentioned here).
I don’t really know if google has some nefarious, hidden scheme in place to tank firefox, but I personally switched away from Firefox because it really just wasn’t that good. I was a FF user before quantum and after, and it never really made much of a difference web-wide. I’m not much of a google user in general. I have gmail, but I use it through my phone or desktop clients. I watch youtube, but only through a roku or other similar device (ie not on desktop). Firefox’s issues of slowness or plain brokenness never came from Google sites to me.
FWIW, I don’t use Chrome, but I do use a blink-powered browser now.
I am very worried about the future of internet standards with Google being the driving force behind them - even now there have already been many cases of dubious decisions by Google, whether implementing the standard incorrectly on purpose or pushing, for their own benefit, technologies that people are opposed to (for example AMP). I wonder what the once-free platform will look like in about 20 years from the technological standpoint, and I hope that it will not be governed by Google.
We need a name for this pattern around network protocols: “Embrace, Capture, Break away, Lock-in”
Google did this with Google Talk vs XMPP, email (try running your own mailserver), AMP, RSS…
https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish
Email is still mostly unmolested if you understand the security and spam context; it’s not that google made it impossible to run your own smtp server, but in order to do so and not get flagged as spam, there are a lot of hoops to jump through. IMHO this is a net benefit, you still have small email providers competing against gmail, but much less spam.
Email is mostly unmolested because it’s decentralized and federated, and a huge amount of communication crosses between the major players in the space. If Google decided they wanted to take their ball and go home, they would be cutting of all of Gmail, Yahoo mail, all corporate mail servers, and many other small domains.
If we want to make other protocols behave similarly, we need to make sure that federation isn’t just an option, but a feature that’s seamless and actively used, and we need a diverse ecosystem around the protocols.
To foster a diverse ecosystem, we need protocols that are simple and easy to implement, so that anyone can sit down for a week in front of a computer and produce a compatible version of the protocol from first-enough principles, and build a cooperating tool, to diffuse the power of big players.
So how do you not get flagged for spam? I want to join you. I run my own e-mail server and have documented the spam issue here:
https://penguindreams.org/blog/how-google-and-microsoft-made-email-unreliable/
The only way to combat Google and Microsoft’s spam filters is sending my e-mail, texting my friend say, “Hey I sent you an e-mail. Make sure it’s not in your spam folder.” Usually if they reply, my e-mail will now get through .. usually. Sometimes it gets dropped again.
I have DKIM, DMARC and SPF all set up correctly. Fuck Gmail and fuck outlook and fuck all the god damn spammers that are making it more difficult for e-mail to just fucking work.
Forgive the basic question: do you have an rDNS entry set for your IP address so a forward-confirmed reverse DNS test passes? I don’t see that mentioned by you in your blog post, though it is mentioned in a quote not specifically referring to your system.
It’s not clear who your hosting provider (ISP) is, though the question you asked them about subnet-level blocking is one you could answer yourself via third-party blacklist provider (SpamCop, Spamhaus, or many others of varying quality) and as a consequence work with them on demonstrable (empirical) sender reputation issues.
Yes I’ve been asked that before and haven’t updated the blog post in a while. I do have reverse DNS records for the single IPv4 and 2 IPv6 addresses attached to the mail server. I didn’t originally, although I don’t think it’s made that big a difference.
I’ve also moved to Vultr, which blocks port 25 by default and requires customers explicitly request to get it unblocked; so hopefully that will avoid the noisy subnet problem so often seen on places like my previous host, Linode.
I think a big factor is mail volume. Google and Microsoft seem to trust servers that produce large volumes of HAM and I know people at MailChimp that tell me how they gradually spin up newer IP blocks by slowly adding traffic to them. My volume is very small. My mastodon instance and confluence install occasionally send out notifications, but for the most part my output volume is pretty small.
Email is inherently hard, especially spam filtering; Google and Microsoft just happen to be the largest email providers, so it appears to be a Google or Microsoft problem, but I don’t think it is.
I think you’re looking through rose-tinted glasses a bit. Back in the day email was also commonly used to send out spam from hijacked computers, which is why many ISPs now block outgoing port 25, and many email servers disallow emails from residential IPs. Clearly that was suboptimal, too.
Distributed and non-centralized systems are an exercise in trade-offs; you can’t just accept anything from anyone, because the assholes will abuse it.
Cheap hosting is very hard to run a mailserver from because the IP you get is almost certainly tainted.
Having valid rDNS, SPF & DMARC records helps.
It’s also not really a Google issue; many non-Google servers are similarly strict these days, for good reasons. It’s just that Google/Gmail is now the largest provider so people blame them for not accepting their badly configured email server and/or widely invalid emails.
I’ve worked a lot with email in the last few years, and I genuinely and deeply believe that at least half of the people working on email software should be legally forbidden from ever programming anything related to email whatsoever.
In other words, Google didn’t have to break email because email has been fundamentally broken since before they launched GMail.
Worse, newer protocols like Matrix and the various relatives of ActivityPub and OStatus don’t fix this problem.
Matrix, ActivityPub and OStatus don’t fix Email? Well it’s almost as if they are trying to solve other problems than internet mail.
You completely and utterly missed the point.
Mastodon, Synapse, and GNU Social all implement a mixture of blacklists, CAPTCHAs, and heuristics to lock out spambots and shitposters. The more popular they get, the more complex their anti-spam measures will have to get. Even though they’re not identical to internet mail (obviously), they still have the same problem with spambots.
Those problems are at least partly self-inflicted. There’s nothing about ActivityPub which requires you to rehost all the public content that shows up. You can host your own local public content, and you can send it to other instances so that their users can see it.
Rehosting publicly gives spammers a very good way to see and measure their reach. They can tell exactly when they’ve been blocked and switch servers. Plus all the legal issues with hosting banned content, etc.
You’re acting as if that ONE problem (abusive use) is THE only problem and the rule and guide with which we should judge protocols.
While a perfectly reasonable technocratic worldview, I think things like usability are also important :)
In general, you’re right. A well-designed system needs to balance a lot of trade-offs. If we were having a different conversation, I’d be talking about usability, or performance, or having a well-chosen set of features that interact with each other well.
But this subthread is about email, and abusive use is the problem that either causes or exacerbates almost every other problem in email. The reason why deploying an email server is such a pain is anti-spam gatekeeping. The reason why email gets delayed and silently swallowed is anti-spam filtering. The reason why email systems are so complicated is that they have to be able to detect spam. Anti-backscatter measures are the reason why email servers are required to synchronously validate the existence of a mailbox for all incoming mail, and this means the sending SMTP server needs to hold open a connection to the recipient while it sifts through its database. The reason ISPs and routers block port 25 by default is an attempt to reduce spam. More than half of all SMTP traffic is spam.
If having lots of little servers is your goal, and you don’t want your new federated protocol to have control under a small number of giant servers, then you do need to solve this problem. Replicate email’s federation method, get emails emergent federation behavior.
There https://lobste.rs/s/lvajly/google_is_eating_our_mail
XMMP has a lot of legitimate issues. Try setting up a XMMP video chat between a Linux and macOS client. I’d rather lose my left arm than try doing that again.
Desktop Jingle clients never really matured because it wasn’t a popular enough feature to get attention.
These days I expect everyone just uses https://meet.jit.si because it works even with non-XMPP users and no client
I just got jitsi working w/ docker-compose meet.dougandkathy.com – not headache free, but no way I could build it myself
Audio, video and file transfer is still very unreliable on most IM platforms. Every time I want to make audio or video call with someone we had to try multiple applications/services and use the first one that works.
Microsoft Teams does this pretty well, across many platforms. Linux support is (obviously, I guess) still a bit hacky, but apparently is possible to get to work as well.
https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish
It pains me to see how happily people are getting herded around by GAFAM with dirty tricks like this, but I honestly don’t know what to do. I haven’t managed to convince even a single person to use “freer” software in years.
Once upon a time I would’ve tried to spread the Holy Word of open-source (or at least user-respecting) software like Firefox and Linux, but I’ve mostly given up on that. In my experience, “normal people” associate these things (yes, even Firefox) with “technical people”, so they’re too scared to try. (And actual technical people are too stubborn ;-D)
My last job was a shop of 20 or so developers. We all had the same standard-issue macbook. I could tell mine apart because I put a Firefox sticker on it. I evangelized quite a bit with my teammates. By the end of my three-year tenure there, I had convinced nobody that at least supporting Firefox in our web applications was neccesary, and nobody to use Firefox as their main browser.
I left that job a few weeks ago, and I learned from my old boss that he and two others started using Firefox because they missed me and my ridiculous evangelism.
It’s okay to give up, or take a break, from evangelizing because it’s kind of exhausting and puts your likability with others at risk. But I think it’s hard to tell when you’re actually being effective. There’s no real feedback loop. So if you’re tired of it and don’t want to do it anymore, by all means stop evangelizing. But if you want to stop just because you don’t think you’re not being effective, you may be wrong.
You can’t convince “normal” people by saying it’s “freer” and open source (unfortunately, not even more private) - they don’t care about this stuff.
I convinced my wife to try FF once quantum came out saying “it is much faster”. She got used to it.
I managed to convince a teammate to use FF because of the new grid inspector.
People only care about “what’s in it for me”.
Of course that’s what they care about, I know. IMHO not having a multi-billion-dollar corporation breathing down your neck is a pretty big plus, but I guess that’s just a bit too abstract for most people, as you seem to be implying.
It’s an interesting article, and I don’t doubt that it’s true. What I see at play here is the perennial conflict between the business side and the engineering side.
The engineers WERE on the same team and DID want the same things, and I don’t doubt there was zero malice on their part.
However, you have folks over your shoulder saying “HEY did you put $FEATURE that we need to sell $CLIENT?” and so it goes.
IMO this is precisely why Mozilla is so important, and folks who want to break away from them and encourage people to run AsceticHipsterIceWeaselFox instead are making a mistake.
you mean mozilla is important because it’s exempt from the business vs engineering dynamic? or less beholden to it?
I think Mozilla has internal pressures of its own but they’re different pressures from your typical straight up commercial environment.
i don’t think mozilla engineers really have a say over what features they implement though, they’re still beholden to google managers
I get where you’re coming from here, because Chrome drives the “standard” but I think you’re oversimplifying.
well the question of whether firefox is a good tactic to resist google domination is much more complicated. it seems like you’re saying mozilla isn’t as bad as google, which makes sense, but that doesn’t mean supporting firefox is worthwhile.
That’s not what I’m saying at all. I’m saying that Firefox is 100% open source, which means we can validate and verify what it’s doing ourselves, and that’s important. It’s the only major player in the marketplace we can say that about.
THAT is what makes Mozilla worth supporting IMO.
Did you mean Mozilla managers?
In any case I think the point is that Mozilla, including its managers, has internal pressures to “do the right thing” whereas at Google that’s far, far less pronounced. What replaces “do the right thing” at Google is “make money” which then gets you all sorts of nasty results like surveillance capitalism and the kind of antitrust-smelling allegations leveled here. (Is Mozilla perfect? Of course not, like anything in this world. But I do think there is that strong “do good” ethos in the org that is lacking in many other places.)
no, google managers, because they decide what goes in chrome, which determines what goes in firefox.
i don’t have any personal experience with mozilla, but a belief in “doing the right thing” doesn’t give me much comfort. i mean that used to be google’s ethos too, and it didn’t prevent them from doing harm.
in mozilla i’m sure there is already a desire to maintain the organization and keep your job, which translates to “maintain the money flows from google.”
Googler here, opinions are my own.
“coordinated plan that involved introducing small bugs on its sites that would only manifest for Firefox users.”
as if Googlers have time for that. I would guess simple pressure to launch fast, and not spending the eng effort optimizing for FF due to low market share.
It wasn’t low at first. It lowered over time, due to things like these, among others.
You’d be surprised. The right answer is often the simplest. It is easier to build something for one browser than it is for more than one. Google builds against Chrome. They develop features using what Chrome supports. Is that the fastest way to build features? Yes. Is this the right thing to do? Debatable. Is this malicious? No.
Intentions are irrelevant; impact is all that matters.
If intentions are irrelevant, then there’s no need to use that word in the first sentence of the article.
Better call their editor then. But more broadly, the problem is the aggregate effect of Google’s actions at scale, and it doesn’t matter what lies within the hearts of their engineers.
Full disclaimer: I work at Google.
I think this is a case of product managers across the company deprioritizing non-Chrome support once Chrome had a large market-share. They try to launch first and iterate. So they prioritize the larger platforms first, which would include Chrome. There’s no malicious Google-wide directive that we kill Firefox by not supporting it for our web products. That said, there’s no company-wide pillar to uphold the open web by supporting Firefox just as well as Chrome either. (We have similar pillars for privacy/diversity initiatives.) Note that Google’s iOS apps are well-maintained. That’s because there’s a lot of money and customers in that ecosystem.
A company can both have a coherent vision and a thousand people making individual decisions.
I’m not sure what your comment is really meant to say, but if your product has a simple bug when viewed in a competing browser, that’s literally a one-liner, and which can be fixed before lunch, it clearly must be an intentional undertaking to instead ship it as-is, and then use your two-week release cycle to fix the issue that only takes a few man-hours to fix.
Of course, this is simplifying, but I don’t really see how the first-iteration argument could be applied here:
These are the questions at stake here.
Speaking of which, as a Mozilla user, it’s hardly a coincidence that all the while that Google video-conferencing products don’t work in my browser, much of the competition, like CoderPad.io, works just fine. (Yet we still get surprised when it does work, because Google’s the one that’s setting the standard, even though video-conferencing has been available at Mozilla since like 5 years ago (2014?) if a quick search is to be believed.)
What I’m saying is that in the absence of institutional pressure to uphold certain standards, those standards wont be met. Each product launch needs to meet privacy, security, i18n, and a11y standards. They’re enforced by external committees and manual testers. After launch, there are periodic reviews and every new feature launch needs to get the same approvals. There isn’t a similar checkmark for multibrowser compatibility and performance, so devs/PMs don’t care about it as much.
Plus, it’s always possible the devs or manual testers didn’t find that small issue in Firefox. Every product launches with bugs.
I’m trying to say it’s not institutional malevolence, just that it’s an institutional non-goal. Perhaps if there’s enough pressure (say from lawmakers), great multi-browser support would be one of those launch checkmarks. As it stands, the customers and the money don’t make it a priority.
Traditionally this is precisely why antitrust laws exist. Google now has an effective monopoly on the browser market, and there are no checks and balances to prevent Google from eradicating all the competition. Whether Google does it with intentional malice or not is really beside the point.
The problem with traditional anti-trust legislation in the US in relation to this issue is that it’s not at all clear who is harmed by this monopoly.
A traditional monopoly will naturally raise prices, as there is no competition to force it not to, and this will harm consumers of the product it produces. A lawyer would argue: how is getting a high-quality browser for free harming consumers?
You might argue that Google has a monopoly on online advertisements, but that’s not the case. Facebook is also a huge advertisement broker.
When has that ever happened? And how much did the period of higher prices offset the preceding period of aggressively low prices?
I was paraphrasing from faulty memory the perceived arguments for enacting antitrust legislation in the US in the late 19th century:
https://en.wikipedia.org/wiki/United_States_antitrust_law#History
I don’t find my statement controversial though. Why would an enterprise, unhindered by competition or regulation, not raise prices to the absolute maximum the market can bear? It would be its fiduciary duty to do so, to benefit its owners.
Prescription drug costs in the US literally right now?
The context was “anti-trust legislation”. The monopoly nature of pharmaceutical companies is a direct result of “pro-trust legistation”: patents.
No it’s not. Loads of medicines like insulin are totally unencumbered, yet their price is rising dramatically.
Also what are you talking about? You asked when monopolies and trusts have ever raised prices, implying that has never happened. It has. I gave an example. And there are plenty of other examples throughout history, it just so happens that was the answer I thought of literally instantly.
You’re saying that some company has a monopoly on a patent-free product?
The context was “anti-trust legislation”. It makes no sense to suggest anti-trust legislation to solve a problem begat by pro-trust legislation; just eliminate the pro-trust legislation.
No. A group of companies. Sometimes known as a trust, see also the term “antitrust law.” In the case of insulin, a group of three companies.
Or perhaps the cost of insulin has risen dramatically due to its production cost rising dramatically? And every other developed country in the world has developed technology to offset those costs except the US?
I’m open to other hypotheses.
Can’t those countries can’t sell insulin to the US? If not, is it because of another law preventing them from doing so?
Great question! No, they can’t sell in the US. It’s a hot topic in the Democratic presidential primary. Also, check out this article about policy to reduce prescription drug prices from 2017, which mentions allowing import from Canada and Mexico as possible solutions, to force US providers to set accurate prices or lose business.
Regardless of historical examples, that’s literally Uber’s planned business model. The fares they charge for rideshare simply don’t cover costs. They’ve been operating at billion-dollar losses each year to support this.
So why set fares unsustainably low? To gain marketshare. To put cab companies out of business. Once they have market dominance, they can bring their fares up to sustainability, then profitability, then gouge-level profitability.
Rather an automatically-accepted just-so truism, much like the original one that I questioned.
This reminds me of when people thought Google had no business reason to create a web browser. Now it’s “obvious”. But since you can’t see any other business model then clearly there’s only one option.
Google wasn’t posting yearly losses in the billions and asking for additional funding. Regardless of Uber’s actual business plan, there are many rich people convinced that the plan is monopoly.
Meanwhile, consumers have had 10 years of good service and VC-subsidized prices. And any raised prices once Uber achieves “monopoly” will be under threat if they are high enough to present a profitable (competitor) opportunity.
This is basic economics. The imagined “problem” of a perpetual, abusive monopoly never seems to manifest. Except of course in cases where the monopoly is enforced by law.
I agree that policy is the only lever that can change a big player. Even the threat of regulation may be enough.
I think it does matter whether one characterizes this issue as intentional or not. If it’s intentional, it’s certainly malicious and may point to the people being bad actors. Whereas if it’s the outcome of an anarchic process, it’s bad too, but perhaps it’s just an oversight. The punishment and the fixes will have to be different. Also, it’s hard to convince someone to change their views if it looks like you’re arguing in bad faith, so it’s important to characterize the problem correctly.
It’s perfectly possible for the organization as a whole to be malicious while everybody within the organization just follows the process. The problem here is with Google as a company as opposed to intents of any single individual. As others pointed out conscious decisions had to be made to stop testing with Firefox, to introduce features outside W3C spec, and so on. I think that the individuals who choose to work for a particular organization share moral responsibility for the actions of that organization.
I’m in agreement with you here. My main point is that being precise with describing the issue is important.
Notably, this to me seems to at the very least confirm a discrepancy between the words (the official “We’re on the same side. We want the same things”; unspoken, which makes it unclear, but I can only assume this to mean at least “conforming to Web Standards”) and the actions (not prioritizing tests with Another Standards Conforming Browser, i.e. Firefox). In harsher words, such behavior tends to be called hypocrisy.
That’s the first thing. The second thing is, presumably, someone at some level of management had to make an explicit decision to drop Firefox testing at some point in time - assuming it was there before Chrome was released. Then someone else at the company had to vet this decision. That makes it at least two people. As others said, it’s hard to imagine such levels of pure, blissful, unscrutinized incompetence at Google of all places. And even charitably assuming those were the real reasons, such incompetence would have been noticed later and corrected. But then, also from your comments, we clearly see it was not.
It’s possible for the Chrome team to be for the open web and for teams across the company to launch products with bugs in non-Chrome browsers. Both are possible at the same time. I don’t think bugs at the margins invalidate whatever the Chrome team is doing.
I’m not saying that anyone decided to drop Firefox testing. That’s not true. There is no management directive to drop Firefox support. I’m presenting a distributed, anarchic model of decision-making. I’m saying that when you have dozens of teams working on dozens of web products (ranging from small teams working on quick experiments to large teams supporting billion-user products), it’s possible for bugs to slip through that make it seem like the company doesn’t care. The fact that perfect multi-browser support is not required to ship a product doesn’t mean that devs/PMs don’t care about it. Perfection on any axis is not required to ship a product.
The recent GMail web product launched even though it loaded slower than the previous version. That’s because users liked the new version better. The tradeoff was to launch and improve performance after.
It’s enough if there’s no directive to keep it.
If someone pumps a balloon, and then at some point stops pumping it, that’s not an action per se, but it is still a conscious decision of inaction. Saying “wow, I’m so surprised the balloon deflated! I do still care about balloons being inflated! I just pump a different balloon now!” in that situation doesn’t appear completely honest.
Yes, I agree. I’m mostly concerned about the framing of the issue here, and we’ve come to agreement.
[Comment removed by author]
This reads almost like a joke. No offence, but I really cannot believe that any development teams at a company whose core competence is web applications would not test their web applications on the leading N% web browsers.
N might vary from 95% for a small shop to 99%, which was the case at my former employer: a 20-person small business writing web applications. We supported all major browsers on the three major desktops, the default browsers on Apple and Samsung phones and tablets, and a few other mobile browsers.
How could a company the size of Google test their web applications only on a single web browser?
Believe it. If you’re testing a small application it’s not too hard. But when you’re testing huge applications for not only bugs, but performance issues, it’s a lot harder. And Google engineers have buff workstations and top of the line laptops where lots of performance issues simply don’t show up.
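To make that concrete: one common way to catch issues a fast workstation hides is to throttle the CPU in an automated performance check. Here’s a minimal sketch using Puppeteer and the DevTools protocol; the app URL and the 4x rate are placeholders I’ve invented for illustration, not anything from the comment above.

    // Sketch: slow the CPU down in a Chromium performance check so that
    // regressions invisible on a fast workstation show up.
    import puppeteer from 'puppeteer';

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const client = await page.target().createCDPSession();
      // Pretend the CPU is four times slower than the developer's machine.
      await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });

      const start = Date.now();
      await page.goto('https://app.example.com/', { waitUntil: 'load' });
      console.log(`load took ${Date.now() - start} ms at 4x CPU throttling`);

      await browser.close();
    })();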
There are other more traditional business reasons this stuff happens that Google isn’t immune to. Once big projects get enough momentum they tend to roll out even if there are obvious problems—the sunk cost fallacy. I don’t know anything specific about the polymer redesign mentioned, but I wouldn’t be shocked if someone told me that’s what happened.
Well, each web-site at Google probably has on the scale of 10-50 engineers on it. The apps are complex, the tech stack is custom, and there are competing priorities, like at any company. If bugs slip through the cracks, it’s for the same reason as any other company.
The primary and secondary features of all major Google apps work fine across all current browsers. Is that not true?
I don’t know, other than Gmail and Search I do not use many Google products.
And it does not matter how many engineers (or devs) are on an app; what matters is the testing procedures in the build processes. Whether they need to sit a “Firefox-testing” intern next to the “Chrome-testing” intern, or need to run test-in-firefox.sh as well as test-in-chrome.sh, is immaterial. What is material is that Firefox (and IE, and a few other browsers) exist and need to be tested for. “My team is too large” is not an excuse, especially as these things scale better for larger teams.
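For illustration only, here’s roughly what that looks like: the same smoke test parameterized over browsers with selenium-webdriver. The app URL and the title check are made up, and a real suite would be far larger; the point is just that adding Firefox is one more entry in a list.

    // Minimal cross-browser smoke test sketch (selenium-webdriver, Node).
    import { Builder, until } from 'selenium-webdriver';

    async function smokeTest(browserName: string): Promise<void> {
      const driver = await new Builder().forBrowser(browserName).build();
      try {
        await driver.get('https://app.example.com/');
        // Fail if the page never reaches a usable state in this browser.
        await driver.wait(until.titleContains('Inbox'), 10000);
        console.log(`${browserName}: OK`);
      } finally {
        await driver.quit();
      }
    }

    (async () => {
      // Same suite, every supported browser.
      for (const browser of ['chrome', 'firefox']) {
        await smokeTest(browser);
      }
    })();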
Like it took a year for apps to support iPhone X or most apps still don’t have iPad keyboard support?
I’m not an iOS expert and don’t work on iOS apps at the company, but I really don’t think it took a year for major apps to support iPhone X. It’s up to the individual PMs to prioritize features; I can’t speak to why iPad support isn’t widespread. You can imagine there are many competing goals for each product.
Does anyone know how much work would be involved in adding support for the deprecated API used by youtube?
Without going into too much detail, it would have been a significant undertaking to not only develop, but continue to support since any modification to DOM logic could transitively impact shadow DOM.
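For readers who haven’t followed this saga: the deprecated API is widely reported to be the old Chrome-only Shadow DOM v0 interface that early Polymer relied on, as opposed to the standardized v1 interface Firefox implements. A hedged sketch of the difference, assuming that is indeed the API in question:

    // Shadow DOM v1 - the standardized API, implemented by Firefox.
    const host = document.createElement('div');
    const shadowV1 = host.attachShadow({ mode: 'open' });
    shadowV1.innerHTML = '<p>rendered in a v1 shadow root</p>';

    // Shadow DOM v0 - deprecated and only ever shipped in Blink; pages built
    // against it need a (slow) JavaScript polyfill everywhere else.
    const legacyHost = document.createElement('div') as any;
    if (typeof legacyHost.createShadowRoot === 'function') {
      const shadowV0 = legacyHost.createShadowRoot();
      shadowV0.innerHTML = '<p>rendered in a v0 shadow root</p>';
    }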
A pet peeve of mine: Why does Mozilla participate in the WHATWG group? Who benefits from a Living Standard (an oxymoron if ever there was one)? Isn’t this exactly what is happening there too?
Why would they leave WHATWG? Mozilla doesn’t have more power to sway standards going alone than within WHATWG.
WHATWG mostly focuses on documenting the reality, so it merely reflects the power dynamics between browser vendors. If something exists and is necessary for real-world “web compatibility”, even if that’s Chrome’s fire-and-motion play, it still gets documented, because that’s just what has to be supported to view the web as it exists.
This is how you get to OOXML. Why should Mozilla invest resources in documenting what Chrome does? Further, why should Mozilla legitimize what Chrome does by implementing non-standard conforming behavior? What was wrong with the original URL standard from IETF?
Also, do read google’s critique of OOXML, especially the “why multiple standards aren’t good” question, which is relevant in this case (because a living standard is no standard at all).
It wasn’t reflecting the reality.
The old W3C and IETF specs are nice works of fiction, but they’re not useful for building browsers, because they don’t document the actual awful garbage the web is built from.
If OOXML had been a reasonably complete standard, good enough to actually let you interoperate with Microsoft Office, I for one would have wholeheartedly supported it. It wasn’t.
Could elaborate on what you’d prefer they do instead of continue to participate in WHATWG, of which they’re one of the 3 founding members?
Do you believe they should:
Genuine question, tried to avoid making any option sound unreasonable.
Similarly, why is a Living Standard an oxymoron? I believe all stakeholders in the Web platform benefit from the Living Standard:
If you have a ‘living standard’, then there is a very interesting ramification. You can have a browser that correctly implements html5, and a webpage that’s written in correct html5, and yet the webpage will not render correctly. Now, ‘html5’ doesn’t mean anything anymore.
Browsers add, change, fix and break things all the time. Pages roughly follow what browsers support at the time. The living standard is just realistic about the living nature of the web.
But it doesn’t have to be that way. C is standardized, versioned, and while there are extensions they are also standardized. Compilers generally (looking at you, msvc) advertise support for, and do in fact support a given version. Most programming languages are like this. Python has no official standard, but it’s still very definitely versioned and other implementations like, e.g., pypy, say ‘we support python x.y’ and that means something. What does it mean to say ‘we are written in html5’ or ‘we support html5’? Nothing.
W3C tried for over 15 years the approach of telling everyone they are wrong and should be ashamed of their non-standard markup, and that only led to W3C losing control of the HTML spec.
In practice, the spec needs to define that you have to allow exactly 511 unclosed <font> tags, no more no less, because that’s what IE did, and there are pages that rely on exactly this. And supporting HTML5 means supporting these things, which is much, much more meaningful than what “we support HTML4” meant, where the syntax was hand-waved as “just use sort-of SGML” - that was so disconnected from reality that it meant nothing useful for browser vendors.
That’s because HTML5 is too broad a term (and nebulous to boot, since oftentimes people mean HTML5 and CSS3 and ES6+). You can still make meaningful statements like “we support such-and-such attributes from HTML5”, or “we are compatible with Firefox 67”, because what you really care about is the set of features that are supported, not the version number.
Besides, what’s the proposed alternative? Wait a couple years for the new HTML version to come out and then everybody implements and ships that? That’s been tried before and everyone hated it. That’s the reason evergreen browsers are such a big deal. Of course you can say, well we’ll implement what’s in the working drafts and not wait for the final publication to start implementing, but then you’re back to following a living document. Better to just make it explicit, which is what WHATWG does.
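Coming back to the “set of features” point above: in code that usually boils down to direct capability checks rather than version checks. A tiny sketch (the particular features chosen are arbitrary):

    // Feature detection: ask for the capability, not the version number.
    const supportsCanvas =
      typeof document.createElement('canvas').getContext === 'function';
    const supportsShadowDom = 'attachShadow' in Element.prototype;
    const supportsAudio = typeof Audio !== 'undefined';
    console.log({ supportsCanvas, supportsShadowDom, supportsAudio });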
Which version of their behaviour?
Congratulations, you’ve implemented versioning! Now that wasn’t hard, was it?
I wasn’t really around when html4 was going on but why, exactly, did people hate that? It works fine in other languages.
No you’re not! Because if you do it this way, browsers have to support multiple versions at once. Just like the C compilers - now, C never breaks backwards compat so this is hardly an issue there, but with versioning you can make drastic changes to HTML, and progress can actually be made more quickly! HTML 2017 can break compatibility with HTML 2014 and be so much better than HTML 2014. But because there’s a version specifier at the top, browsers can correctly implement HTML 2014, 2017, and 2020-draft, and websites written in all three will still work. Standards will advance more quickly, and websites won’t break unless they choose to use a -draft specification, in which case they’re in no more danger of breaking than they are with the HTML5/WHATWG approach.
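To sketch the idea (purely hypothetical - no browser works this way, and the version marker syntax and all names here are invented): a page declares which spec snapshot it targets, and the browser dispatches on it.

    // Hypothetical versioned-HTML dispatch, for illustration only.
    type Parse = (source: string) => Document;

    const parsers: Record<string, Parse> = {
      // In a real versioned scheme each snapshot would have its own parser,
      // free to break with older ones; both share one here for brevity.
      'html-2014': (src) => new DOMParser().parseFromString(src, 'text/html'),
      'html-2017': (src) => new DOMParser().parseFromString(src, 'text/html'),
    };

    function parsePage(source: string): Document {
      // Imagined version marker at the top of the document.
      const m = source.match(/<!doctype html version="([^"]+)"/i);
      const version = m?.[1] ?? 'html-2014'; // unversioned pages get the oldest
      return (parsers[version] ?? parsers['html-2014'])(source);
    }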
“xyz with support for abc advanced feature” is just another feature. If you look at any MDN page that’s how it’s treated; the browser support section will have a “basic support” column with additional columns for more advanced extensions added later.
I’m confused as to what the point is here, can you clarify? Might just be that the text is hard to read because of the lack of facial cues, body language, etc.
Because people weren’t willing to wait. Everybody was excited about the new shiny, but no one wants to hear about a technology that will be super exciting to use and then wait a full two or three years to use it. There’s also the problem of real-world implementations. If you design the standard in an ivory tower and then everybody implements it, how do you know it won’t be garbage to use in practice? Speaking as someone who worked (a little) on ActivityPub, it’s very easy to get in your head and think you have a great design that works, and then someone implements it and runs into all sorts of corner cases or problems that you didn’t consider. People actually using the technologies is still the best way we as a field know of to flush out design issues.
Now that I’m thinking about it, I think a big part of this is that in a language like C you can already do most to all of what you want to do without the new version (I’m not familiar enough with C to comment in detail on how much easier or more expressive the newer extensions make it). But because the web is so sandboxed, a lot of what gets added adds fundamental expressive power or otherwise fundamentally changes the platform. You can’t polyfill <audio>. You can’t polyfill <video> or <canvas> either.
That’s not true - they’ll be in far more danger, because with the current HTML Living Standard one of the golden rules is to not break backwards compatibility. That’s why the HTML Living Standard has so many gross hacks in it (as discussed elsewhere in this thread); it’s all to preserve compatibility. Compatibility is broken very sparingly, and it requires lots of discussion, browser telemetry to see how much would break in the wild by the change, etc.
If you say “we’re pegged to firefox 67,” then ‘firefox 67’ is as much a version as ‘html 4’.
I think you are missing my point. If a webpage specifies which version it wants, then versions are allowed to break compatibility and don’t need gross hacks or long discussion.
I think that working within the standard is precisely the right thing here. I believe that WHATWG was the wrong approach (as seems evident from this chain of events mentioned here).
I don’t really know if google has some nefarious, hidden scheme in place to tank firefox, but I personally switched away from Firefox because it really just wasn’t that good. I was a FF user before quantum and after, and it never really made much of a difference web-wide. I’m not much of a google user in general. I have gmail, but I use it through my phone or desktop clients. I watch youtube, but only through a roku or other similar device (ie not on desktop). Firefox’s issues of slowness or plain brokenness never came from Google sites to me.
FWIW, I don’t use Chrome, but I do use a blink-powered browser now.
I switched from Chrome to Firefox when Quantum came out. So far the only thing that hasn’t worked well for me is Google Hangouts.
I am very worried about the future of internet standards with Google being the driving force behind them. Even now there have already been many cases of dubious decisions by Google, whether implementing the standard incorrectly on purpose or pushing technologies that people are opposed to for its own benefit (AMP, for example). I wonder what the once-free platform will look like in about 20 years from a technological standpoint, and I hope that it will not be governed by Google.
Do you think that posting alarmist tech news from sites that rely on traffic and advertising is congruent with your desired future for the internet?
Again??? How much sabotage can they do in one week?
The article is dated April 16 (two days ago). I think this is just a rehash of what you already heard.
As much as people care to repost, apparently.
Flag it and move on I guess.