1. 43

Neat simple attack, works in Firefox but not Brave or Safari. EDIT: looks like he updated it and now it works everywhere! 😱

  1.  

  2. 46

    The link displayed at the bottom of the browser window when you hover over a link cannot be trusted.

    Google has been doing it for ages. You can reproduce it on Firefox: Just search for something on Google, then hover over the link. The link will say it goes directly to the website. Then right click the link and dismiss the menu. Now hover over it again. The link has changed to a Google tracker URL. It’s shady and it’s not okay, and in my eyes, it is a UI bug / exploit that needs to be fixed in the browsers. Links shouldn’t change right under your nose.
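    The swap can be reproduced with a couple of lines of HTML (a minimal sketch; the tracker URL is invented for illustration, and the real Google results page uses its own redirect endpoint):

    ```html
    <!-- Hovering shows the honest-looking href in the status bar;
         pressing the mouse button rewrites it to a tracking redirect
         before the click is followed. -->
    <a href="https://example.com/"
       onmousedown="this.href='https://tracker.invalid/url?q=' +
                    encodeURIComponent('https://example.com/')">
      example.com
    </a>
    ```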

    1. 10

      This is a feature that all browsers have had for ages: “Disable JavaScript”.
      Running arbitrary code on the client side is the whole feature, and it means exactly that: any arbitrary code can run. The browser has no way of knowing which snippet is good or bad.
      You can argue that this specific case is obviously malicious and should be treated explicitly, but then you run into two major issues:

      1. You cannot treat this without breaking something else (changing the href attribute is a valid use)
      2. You will soon have to treat billions of other “malicious cases” (changing content based on user-agent, hiding content, …)

      And anyway, people will find a way to break through, for example by making calls to an analysis website on load, or even replacing the current page with the target link passed through an analysis website.

      The only safe way to counter this is to disable JavaScript, or at least ask the user for permission to run it (though that will mostly just get in the way…).

      1. 2

        But the browser could kill the whole pattern by changing the order of execution. Currently the execution flow looks like:

        1. Detect click from operating system
        2. Run javascript
        3. If click is on a href, go to link

        Just swap steps 2 and 3. The browser controls the JavaScript VM; there is no reason it needs to let it run between detecting the click and checking for hrefs.

        Legitimate patterns that need the onclick to run before going to a link can still work; they just need to follow the link in JS instead of through the href attribute, which avoids the browser displaying the contents of the false original link at the bottom of the page.
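        That legitimate pattern might look like this (a minimal sketch; the element id, the recordClick call, and the destination URL are all invented for illustration):

        ```html
        <a href="#" id="report-link">Open report</a>
        <script>
          document.getElementById('report-link').addEventListener('click', (e) => {
            e.preventDefault();   // don't follow the placeholder href
            recordClick();        // hypothetical tracking/analytics call
            // Navigate from JS, so no misleading href is ever shown.
            window.location.href = 'https://example.com/report';
          });
        </script>
        ```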

        1. 3

          Running the code after following the link means giving priority to the elements themselves rather than the code. That would indeed fix the issue for “malicious” links, but it would kill a whole lot of other (legitimate) stuff.

          Take as an example someone filling in a form and submitting it from a link. The onClick event can be used to check user input before anything is submitted, and to cancel the submission if, e.g., one field is empty or incorrect.
          By running the JS after following the link, you simply prevent that process. And I can barely imagine a (valid!) use-case for onClick that would work as intended if the code is run after the link is followed.
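          A minimal sketch of that validation pattern (field names are invented; shown here with a submit handler rather than onClick on a link, but the ordering problem is the same):

          ```html
          <form id="signup" action="/submit" method="post">
            <input name="email" type="text">
            <button type="submit">Sign up</button>
          </form>
          <script>
            // Runs before the browser follows the form's action URL;
            // preventDefault() cancels the submission entirely.
            document.getElementById('signup').addEventListener('submit', (e) => {
              if (!e.target.elements.email.value.includes('@')) {
                e.preventDefault();
                alert('Please enter a valid email address.');
              }
            });
          </script>
          ```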

          1. 1

            For forms, the URL isn’t displayed by the browser, so this change isn’t needed.

            But let’s pretend that it was, so we can use your example. The “fix” for keeping on-click useful would be as simple as removing the URL from the HTML attribute and passing it to the JavaScript instead; the JavaScript could then load the URL (or send the POST request) after validating the input. The only difference is that the URL isn’t part of the element, so the browser doesn’t display it.

      2. 4

        You find this out the hard way when you want to simply copy a link from google, only to paste a load of tracking horseshit

        1. 3

          First, I agree with what you said, that this is not OK.

          But how would you fix this in a way that doesn’t break legitimate uses of onClick?

          1. 3

            Interestingly, there exists a ping attribute on the a element, which would facilitate click tracking without sneakily changing the link with JavaScript. Google uses it on their search results, but only if you’re using Chrome. On Firefox, it appears to use <a onmousedown> instead, which would be responsible for the swap. The ping attribute is supported on Firefox, but it’s not enabled by default. It is on Chrome, though. Hmm.
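            With ping, the browser itself sends the click beacon, so the href shown in the status bar never has to change (a sketch; the tracker URL is invented):

            ```html
            <!-- On click the browser follows the href exactly as displayed,
                 and separately sends a POST request to each URL listed in
                 the ping attribute. -->
            <a href="https://example.com/" ping="https://tracker.invalid/click">
              example.com
            </a>
            ```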

        2. 12

          This is interesting, but advice to look at the hover bubble isn’t (as the opening para says) “a scam.”

          Sketchy links often come via Web-based email, in direct messages, on forums or social media, etc., where the attacker can’t just drop an onclick= attribute on the link. Hovering gives the user real information there, before the click and any bad consequences that can flow from it. Note that the context of the complained-about Consumer Reports advice is a story called “How to Avoid Facebook Messenger Scams,” and Messenger is of course one of those places attackers can’t just throw JavaScript around.

          The onclick trick also doesn’t make much difference to an attacker’s ability to land you on a “bad” page (tracking, phishing, bug exploits, whatever you’re worried about). If the attacker can run JavaScript, you’re already on a bad page, and they can redirect you to another (at a confusable domain, say) without a click.

          So the remaining problem is if you start at an untrustworthy site, click a link purporting to go to a good site, trust that the destination page is good due to the hover bubble, don’t check the address bar, then do something like enter login creds.

          That’s fine to point out (CR could always add “and check that address bar”), but it’s kind of confusing things not to note the important case where hovering does give you info, and really doesn’t seem helpful to try to leverage it into associating a pretty good story from Consumer Reports(!) with “a scam.”

          1. 10

            In general, mainstream computer security advice is lacking. There’s a saying that goes something like:

            A man will be interested to read the news for his industry, only to find that much of it is incorrect. Then, he will move on to read news for other industries and not question the correctness.

            1. 9

              This is called the Gell-Mann Amnesia Effect or Crichton’s law.

              From Michael Crichton’s wikipedia page:

              In a speech in 2002, Crichton coined the term Gell-Mann amnesia effect. He used this term to describe the phenomenon of experts believing news articles on topics outside of their fields of expertise, even after acknowledging that articles written in the same publication that are within the experts’ fields of expertise are error-ridden and full of misunderstanding. He explains the irony of the term, saying it came about “because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have”.

            2. 5

              If I middle click the link, it goes to the correct place.

              1. 3

                Yes, opening in a new tab does not trigger the onClick event.

              2. 5

                I really don’t understand why disabling JavaScript is met with such hostility by some people. It doesn’t break sites that much (and one can always enable it for specific sites) and in a lot of situations it makes the web browsing experience objectively better!

                1. 4

                  I have uMatrix set up so I run 1st party Javascript. Still, I daily encounter websites where this permissiveness results in a completely broken website with blank page. I don’t mind wasting time tweaking settings, using multiple browsers etc. to get to what I need with minimal leakage, but I’m also sure most people would find my behaviour, completely reasonably, unreasonable if not crazy.

                  Fashionable web frontend development these days takes Javascript presence for granted. I talked to a bunch of candidates for a developer position recently and not one of the self-identifying front-end developers envisioned building a service without using Javascript as their main tool. JSX and CSS-in-JS are how you sprinkle what used to be foundations into JS, where “the truth” lies. Using Javascript for development of course doesn’t require Javascript to display content, but it’s not exactly surprising that those who already invested themselves in using it as their primary tool generally don’t see a problem with requiring it of others.

                  Which is a long way of me saying that hostility is coming from fewer websites working well without it AND too many developers seeing this pushback as a capricious infliction on their work.

                  1. 2

                    and if some site (i regularly visit) does break, I’m usually writing a small user script for it. mostly, it’s just a few lines: un-lazy-loading images, removing large sticky elements, or in the case of Big Goog’, rewriting outgoing URLs to not track clicks. Though I have to admit that I haven’t yet found a satisfying solution to javascript-based onmouseover menus (on sites I don’t intend to visit often/again), except futzing with the devtools element inspector.
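                    For Google’s outgoing links, the rewriting such a user script does can boil down to one small function (a sketch; Google’s redirect format varies over time, and the `/url?q=` form is an assumption based on one common variant):

                    ```javascript
                    // Recover the real destination from a Google-style redirect
                    // link, e.g. https://www.google.com/url?q=<encoded target>&sa=...
                    // Non-redirect URLs pass through unchanged.
                    function untrack(href) {
                      const u = new URL(href);
                      if (u.hostname.endsWith('google.com') && u.pathname === '/url') {
                        return u.searchParams.get('q') ?? href;
                      }
                      return href;
                    }
                    ```

                    A user script would then loop over `document.links` and reassign each one’s href through `untrack`.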

                  2. 3

                    Doesn’t work if JS is disabled.

                    1. 1

                      Google has been doing this for a long time, but it’s easy to mitigate with a browser plugin:

                      https://github.com/Rob--W/dont-track-me-google