The link displayed at the bottom of the browser window when you hover over a link cannot be trusted.
Google has been doing it for ages. You can reproduce it on Firefox: just search for something on Google, then hover over a result link. The link will say it goes directly to the website. Then right-click the link and dismiss the menu. Now hover over it again. The link has changed to a Google tracker URL. It's shady and it's not okay, and in my eyes it is a UI bug / exploit that needs to be fixed in the browsers. Links shouldn't change right under your nose.
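The trick itself is only a couple of lines. A rough sketch of the general pattern (the tracking URL is a placeholder, not Google's actual code):

```html
<!-- On hover, the status bar shows the clean href.
     On mousedown (which fires before the navigation), the href is
     rewritten to a tracking redirect, so that is what actually gets followed. -->
<a href="https://example.com/article"
   onmousedown="this.href =
     'https://tracker.example/redirect?url=' +
     encodeURIComponent('https://example.com/article')">
  Example article
</a>
```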
This is a feature that all browsers have had for ages: "Disable JavaScript".
Running arbitrary code on the client side is the whole point of the feature, and it means exactly that: any arbitrary code can run on the client. The browser has no way of knowing which snippet is good and which is bad.
You can argue that this specific case is obviously malicious and should be handled explicitly, but you would run into two major issues:
You cannot block this without breaking something else (changing the href attribute is a valid use case)
You will soon have to handle billions of other "malicious cases" (changing content based on user-agent, hiding content, …)
And anyway, people will find a way around it, for example by calling an analytics site on page load, or even by replacing the current page with the target link routed through an analytics redirect.
The only safe way to counter this is to disable JavaScript, or at least to ask the user for permission to run it (though that will only get in the way…).
But the browser could kill the whole pattern by changing the order of execution. Currently the execution flow looks like this:
1. Detect the click from the operating system
2. Run the JavaScript
3. If the click is on an href, go to the link
Just swap steps 2 and 3. The browser controls the JavaScript VM; there is no reason it needs to let scripts run between detecting the click and checking for hrefs.
Legitimate patterns that need the onclick to run before going to a link can still work; they just need to follow the link in JS instead of through the href attribute, which keeps the browser from displaying the contents of a false original link at the bottom of the page.
Running the code after following the link means giving priority to the elements themselves rather than to the code.
That would indeed fix the issue for "malicious" links, but it would kill a whole lot of other (legitimate) stuff.
Take as an example someone filling out a form and submitting it from a link. The onClick event can be used to check user input before submitting anything, and to cancel the submission if, e.g., a field is empty or invalid.
By running the JS after following the link, you simply break that process. And I can hardly imagine a (valid!) onClick use case that would work as intended if the code runs after the link is followed.
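For concreteness, the pattern being defended looks roughly like this (a minimal sketch; the form, field, and function names are made up for illustration):

```html
<form id="signup" action="/signup" method="post">
  <input id="email" name="email" placeholder="email">
  <!-- Submitting from a link: onclick validates first and can cancel everything -->
  <a href="#" onclick="return submitIfValid()">Sign up</a>
</form>

<script>
  function submitIfValid() {
    var email = document.getElementById('email').value;
    if (email.indexOf('@') === -1) {
      alert('Please enter a valid email address.');
      return false;               // cancel: nothing is submitted
    }
    document.getElementById('signup').submit();
    return false;                 // already submitted; don't follow the "#" href
  }
</script>
```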
For forms, the URL isn't displayed by the browser, so this change isn't needed.
But let's pretend it was, so we can use your example. The "fix" for keeping onClick useful would be as simple as removing the URL from the HTML attribute and passing it to the JavaScript instead; the JavaScript could then load the URL (or send the POST request) after validating the input. The only difference is that the URL isn't part of the element, so the browser doesn't display it.
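A minimal sketch of that fix (the ids and destination are placeholders); the point is just that the destination lives in the script rather than in an href, so there is nothing misleading for the browser to show:

```html
<input id="email" placeholder="email">
<a href="#" id="submit-link">Sign up</a>

<script>
  // Destination kept in JS, not in an href attribute the browser would display.
  var target = '/signup';

  document.getElementById('submit-link').addEventListener('click', function (e) {
    e.preventDefault();
    var email = document.getElementById('email').value;
    if (email.indexOf('@') !== -1) {
      window.location.assign(target);   // or send the POST via fetch()
    }
  });
</script>
```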
Interestingly, there exists a ping attribute on the a element, which would facilitate click tracking without sneakily changing the link with JavaScript. Google uses it on their search results, but only if you're using Chrome. On Firefox, it appears to use <a onmousedown> instead, which would be responsible for the swap. The ping attribute is supported on Firefox, but it's not enabled by default. It is on Chrome, though. Hmm.
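For reference, such a link would look something like this (the tracking URL is a placeholder): the href stays honest, and the browser sends a small background POST to the ping URL when the link is followed.

```html
<!-- Status bar and copy/paste see the real destination;
     the click is still reported to the listed ping URL(s). -->
<a href="https://example.com/article"
   ping="https://tracker.example/click">
  Example article
</a>
```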
This is interesting, but advice to look at the hover bubble isn't (as the opening para says) "a scam."
Sketchy links often come via web-based email, in direct messages, on forums or social media, etc., where the attacker can't just drop an onclick= attribute on the link. Hovering gives the user real information there, before the click and any bad consequences that can flow from it. Note that the context of the complained-about Consumer Reports advice is a story called "How to Avoid Facebook Messenger Scams," and Messenger is of course one of those places where attackers can't just throw JavaScript around.
The onclick trick also doesn't make much difference to an attacker's ability to land you on a "bad" page (tracking, phishing, bug exploits, whatever you're worried about). If the attacker can run JavaScript, you're already on a bad page, and they can redirect you to another (at a confusable domain, say) without a click.
So the remaining problem is if you start at an untrustworthy site, click a link purporting to go to a good site, trust that the destination page is good due to the hover bubble, don't check the address bar, then do something like enter login creds.
That's fine to point out (CR could always add "and check that address bar"), but it kind of confuses things not to note the important case where hovering does give you info, and it really doesn't seem helpful to try to leverage this into associating a pretty good story from Consumer Reports(!) with "a scam."
In general, mainstream computer security advice is lacking. There's a saying that goes something like:
A man will read the news about his own industry, only to find that much of it is incorrect; then he will move on to news about other industries and never question its correctness.
This is called the Gell-Mann Amnesia Effect or Crichton's law.
From Michael Crichton's Wikipedia page:
In a speech in 2002, Crichton coined the term Gell-Mann amnesia effect. He used this term to describe the phenomenon of experts believing news articles on topics outside of their fields of expertise, even after acknowledging that articles written in the same publication that are within the experts' fields of expertise are error-ridden and full of misunderstanding. He explains the irony of the term, saying it came about "because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have".
I really don't understand why disabling JavaScript is met with such hostility by some people. It doesn't break sites that much (and one can always enable it for specific sites), and in a lot of situations it makes the web browsing experience objectively better!
I have uMatrix set up so that first-party JavaScript runs. Still, I daily encounter websites where even this permissiveness results in a completely broken site with a blank page. I don't mind wasting time tweaking settings, using multiple browsers, etc. to get to what I need with minimal leakage, but I'm also sure most people would, completely reasonably, find my behaviour unreasonable if not crazy.
Fashionable web frontend development these days takes the presence of JavaScript for granted. I talked to a bunch of candidates for a developer position recently, and not one of the self-identifying front-end developers envisioned building a service without using JavaScript as their main tool. JSX and CSS-in-JS are how you sprinkle what used to be the foundations into JS, where "the truth" lives. Using JavaScript for development of course doesn't require JavaScript to display content, but it's not exactly surprising that those who have already invested in it as their primary tool generally don't see a problem with requiring it of others.
Which is a long way of saying that the hostility comes from fewer websites working well without it AND from too many developers seeing this pushback as a capricious infliction on their work.
And if some site (that I regularly visit) does break, I usually write a small user script for it. Mostly it's just a few lines: un-lazy-loading images, removing large sticky elements, or, in the case of Big Goog', rewriting outgoing URLs so clicks aren't tracked. Though I have to admit I haven't yet found a satisfying solution for JavaScript-based onmouseover menus (on sites I don't intend to visit often/again), except futzing with the devtools element inspector.
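For the Google case, such a user script can be as simple as something along these lines, assuming the swap is done via an inline onmousedown handler as described above (the selector is a guess and may need adjusting, and result links added after load would need a MutationObserver):

```js
// ==UserScript==
// @name     un-track Google result links (sketch)
// @match    https://www.google.com/search*
// @run-at   document-end
// ==/UserScript==

// Strip the inline handler that rewrites the href on mousedown,
// so the link you hover over is the link you actually follow.
for (const a of document.querySelectorAll('a[onmousedown]')) {
  a.removeAttribute('onmousedown');
}
```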
You find this out the hard way when you want to simply copy a link from Google, only to paste a load of tracking horseshit.
First, I agree with what you said, that this is not OK.
But how would you fix this in a way that doesn't break legitimate uses of onClick?
If I middle click the link, it goes to the correct place.
Yes, opening in a new tab does not trigger the onClick event.
Doesn't work if JS is disabled.
Google has been doing this for a long time, but it's easy to mitigate with a browser plugin:
https://github.com/Rob--W/dont-track-me-google