I’m impressed by the lack of testing for this “feature”. It may have a huge impact on end users, but they have managed to ship it with noob errors like the following:
Why is www hidden twice if the domain is “www.www.2ld.tld”?
Who in their right mind misses that, and how on Earth wasn’t it caught at some point before it made it to the stable branch? url = url.replace(/www/g, '') - job well done!
Worse: what’s really eye-opening is that comment just below, wrapped in the pre-processor flag! Stunning.
Wow, so whoever controls www.com can disguise as any .com page ever? And, as long as it’s served with HTTPS, it’ll be “secure”? That’s amazing.
Pretty sure the code handles subdomains and leaves the domains alone: https://cs.chromium.org/chromium/src/components/url_formatter/url_formatter.cc?sq=package:chromium&g=0&l=94
Not just .com. On any TLD so you could have lobster.www.rs
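For illustration, here is a minimal JavaScript sketch of the elision behaviour commenters are describing, under the assumption that every whole “www” label is dropped from the displayed hostname; it is not the actual Chromium code, which lives in the url_formatter.cc file linked above.

// Hypothetical sketch of the elision behaviour described in this thread,
// not the real Chromium implementation.
function elideForDisplay(hostname) {
  return hostname
    .split('.')                        // tokenize the hostname into dot-separated labels
    .filter(label => label !== 'www')  // drop every whole "www" label, wherever it appears
    .join('.');
}

console.log(elideForDisplay('www.www.2ld.tld')); // "2ld.tld"    - "www" hidden twice
console.log(elideForDisplay('lobster.www.rs'));  // "lobster.rs" - a subdomain of www.rs shown as lobster.rs

Under that assumption, a subdomain created under a domain the attacker controls (e.g. lobster.www.rs) is displayed as if it were the unrelated domain lobster.rs, which is the spoofing concern raised here.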
If I may ask, how is this worse than url = url.replace(/www/g, '')? If anything, the current implementation uses a proper tokenizer to search and replace instead of a naive string replace.
That’s just my hyperbole.
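For what it’s worth, the difference is easy to demonstrate with a quick sketch (wwwidgets.com below is a made-up hostname, purely for illustration): the global regex also eats “www” when it is merely a substring of a longer label, whereas a label-based replace only drops whole “www” components.

const url = 'https://www.wwwidgets.com/path'; // made-up hostname, for illustration only

// Naive global replace: removes every occurrence of "www", mangling the domain itself.
console.log(url.replace(/www/g, ''));          // "https://.idgets.com/path"

// Label-aware approach: only whole "www" labels in the hostname are dropped.
const u = new URL(url);
u.hostname = u.hostname.split('.').filter(label => label !== 'www').join('.');
console.log(u.toString());                     // "https://wwwidgets.com/path"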
Right, the amateurishness of Google here is stunning. You’d think with their famed interview process they’d do better than this.
On a tangential rant, one astonishing phenomenon is the helplessness of tech companies with multibillion-dollar capitalizations when it comes to relatively simple things like weeding out obvious bots or fixing the ridiculousness of their recommendation engines. This suggests a major internal dysfunction.
To continue the tangent, it sounds like the classic problem with any institution once it reaches a certain size. No matter the type (public, private, government…), at some point the managerial overhead becomes too great and the product begins to suffer.
Google used to have a great search engine. It might even still be great for the casual IT user, but the signal-to-noise ratio has tanked completely within the past ~2 years. Almost all of my searches are now made on DuckDuckGo, and it’s becoming increasingly rare that I even try Google. When I do, it’s mostly an exercise in frustration, and I spend the first 3-4 searches quoting and changing words to get proper results.
Large institutions collapsing under their own managerial weight is more of a ‘feature’ in this case.
What are a few examples of queries for which DDG produces better results than Google?
I’m not able to rattle off any examples, sorry. I’ll try to keep it in mind and post an example or two, but don’t hold your breath :)
I’ve been using DDG as my primary search engine for two to four years now, and have tried to avoid Google more and more in that same time frame. This also means that all the benefits of Google having a full profile on me are missing from the equation, and I don’t doubt that explains a lot of the misery I experience in my Google searches. However, I treat DDG the same and they still manage to provide me with better search results than Google…
In general, every search that includes one or more common words tends to be worse on Google. It seems to me that Google tries to “guess” the intent of the user way too much. I don’t want a “natural language” search engine, I want a search engine that searches for the words I type into the search field, no matter how much they seem like misspellings.
Looks like a dream scenario for phishing, with the opportunity to create legit-looking domain names, plus the secure padlock right next to the address bar.
I’m curious what they were trying to optimise for when coming up with this.
Consumer lock-in is my guess. In conjunction with their other remarks about URLs, I think they want to make URLs unpredictable and hence scary, leading users to trust Google to tell them how to get to Apple’s website more than they trust ‘https://apple.com/’ to.
This gives them more power to advertise, more power to redirect, and more power to censor. From their point of view it’s pure win; from ours, not so much.
I think they want to scrap the URL bar altogether so you can only make searches and click links (which go to Google AMP pages). Google’s dream web is just one big Google.
That’s just catching up to what everyone is doing anyway. Even commercials eschew a domain name and tell the listener to search for the company. Back to ye olde “AOL Keyword” days.
Because the domain name system is broken in the first place. It was invented by network engineers, for network engineers.
This has been the case for ages in Japan now, where ads often feature a search-like bar and the thing to type into said search bar.
This. My hypothesis is that they are deliberately trying to break the URL bar with “improvements” such as these so that they can later justify removing it altogether.
As much as I’m annoyed with Firefox breaking DNS, this is arguably much worse. And what’s sad is that all of the other major browsers will probably follow suit, because imitating Chrome is just what they all do now.
I’ll be shocked if they don’t replace the address bar with a search-only box.
I fail to see how this makes phishing any easier. Given that an attacker owns a domain, he’s free to use whatever legit-looking subdomain names he wants. And even if somehow an attacker took control of the www subdomain of a target, users are so used to www being aliased to @ that I don’t see anyone thinking they might be phished because of that.
My guess is they are trying to rethink the way people navigate the web. URLs come from somewhere with quite different applications and users. Maybe we can do better for the average user (people on lobste.rs are not average users). Hopefully those small changes can be driven by user testing and UX research.
A phisher who does obtain access to a domain can now quietly point WWW where they want and just one more thing will work out for their benefit. That isn’t a large difference, maybe, but could be quite confusing.
Am I mistaken, or hasn’t Safari been doing this for a while? For my part, I enjoyed the new feature when it arrived in Safari.
I can see the issue with having www.www.site.com not showing, though.
This seems to be a trend with major browsers recently. I definitely take issue with it, but I’m guessing there’s some rationale behind the decisions.
Because URLs are hard to understand, apparently: https://www.wired.com/story/google-wants-to-kill-the-url/
“I don’t think URLs are working as a good way to convey site identity”
That’s because they are supposed to convey a location, not an identity.
“But it’s important we do something, because everyone is unsatisfied by URLs”
Who’s “everyone”? I’ve never heard anybody say they were unsatisfied with URLs. Typical Google-speak, where they claim they are working for the greater good while they are simply trying to twist the web to make it easier for their algorithms to process.
A URL is a URI, so they are definitely also identifiers.
I’m pretty sure the numbers speak loudly enough about people not understanding URLs or their shortcomings, if only because of all the successful phishing going around and all the confusion about the meaning of the padlock (it could be argued this is not a URL issue, but IMO it still relies on the user understanding what a domain is).
Domains and URLs should be abstracted away from the average user. The user wants to go to Facebook or Google, not https://facebook.com or https://google.com.
I prefer what a lot of browsers do, where they gray out most of the URL and show the domain name in full white/black.
When your target is the lowest common denominator all you will make is dumb decisions - this being one of the dumbest.
Great. One less thing that doesn’t make sense to non-tech-savvy people removed from URLs.
Unlike the utm tracker, which they are keeping. URLs make enough sense to people; so much sense, in fact, that it confuses people when a shared URL doesn’t give someone else an identical view.
It is a sign of privilege and extra knowledge to look at a URL and see what, in a perfect world, might be removed.
I disagree. If you ever show the web to a totally new user, like in emerging markets, they look at a URL and are like “WTF is all that, I just want Facebook”. People actually download browsers based on whether it has ‘facebook’ or not (although this is a different issue).
Can’t wait until people start phishing on sites that let you create your own subdomains by creating a www one or something -.-
What a useless feature