1. 18
  1.  

  2. 3

    It would be really interesting to learn what makes the author believe this set of relays is malicious. Though, of course, sharing this information would burn the technique…

    1. 2

      Definitely, like what does this mean:

      > and the fact that someone runs such a large network fraction of relays “doing things” that ordinary relays can not do (intentionally vague)

      What kind of things can an ordinary client “not do”? Are those things just not built into the typical implementations or maybe the nodes coordinate amongst themselves and act in a way that a single node wouldn’t?

      1. 2

        If you control every node in the onion chain, you can see the whole communication stream AND know which IP addresses are communicating with which services. You may also be able to fingerprint the browser using Tor based on the Tor config sent in (routing preferences) and the SSL negotiation traffic (it’s not a lot, but it’s not nothing). AND if there’s a self-signed cert at the service being accessed, you can also MITM the stream and access the cleartext.

        At scale, that might yield some interesting information if you wanted to identify people with double lives (undercover spies, people worth blackmailing, clues to future events via traffic analysis), especially if you combine it with other surveillance techniques like buying DNS lookup data or operating major network infrastructure.

        Edit: On reflection, this would be feasible and not especially expensive for a consortium of international law enforcement agencies looking to see who’s using illegal marketplaces and distributing CSAM, with maybe a smattering of terrorist activity caught along the way.
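
        The “at scale” intuition can be put in rough numbers. Below is a back-of-envelope sketch, assuming uniform random relay selection over a 3-hop circuit; real Tor weights choices by bandwidth and restricts guard/exit flags, so actual probabilities differ, but the scaling is the point.

        ```python
        # Toy model: adversary runs fraction f of the relay network.
        # Assumes uniform random relay selection (real Tor does not).

        def full_circuit_compromise(f: float, hops: int = 3) -> float:
            """Probability that every hop of a circuit lands on an
            adversary-controlled relay."""
            return f ** hops

        def endpoint_correlation(f: float) -> float:
            """Probability the adversary holds both guard and exit, which
            is enough for traffic correlation even without the middle hop."""
            return f * f

        for f in (0.01, 0.05, 0.10):
            print(f"{f:.0%} of relays -> full circuit: "
                  f"{full_circuit_compromise(f):.6f}, "
                  f"guard+exit: {endpoint_correlation(f):.4f}")
        ```

        Even at 10% of the network, full-circuit compromise per circuit is small, but guard+exit correlation is 1 in 100 per circuit, and users build many circuits over time.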

      2. 1

        Well, once you’ve identified a large group, if they keep coming back after being removed from the directory, it’s not someone just doing it for fun.

        The question is how they identify that the group is under common control.
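
        Published relay-group analyses typically cluster on shared metadata: identical or missing ContactInfo, relays packed into the same subnets, near-identical first-seen dates. A minimal sketch of that kind of clustering, with invented field names and sample data purely for illustration:

        ```python
        from collections import defaultdict

        # Hypothetical relay descriptors; the data below is made up.
        relays = [
            {"nick": "relayA", "ip": "203.0.113.5",  "contact": "",            "first_seen": "2021-09-01"},
            {"nick": "relayB", "ip": "203.0.113.9",  "contact": "",            "first_seen": "2021-09-01"},
            {"nick": "relayC", "ip": "198.51.100.4", "contact": "ops@example", "first_seen": "2020-01-15"},
        ]

        def cluster_key(r):
            # Same /24 + same first-seen date + same (empty) contact is weak
            # evidence alone, but combined signals across hundreds of relays
            # add up.
            subnet = ".".join(r["ip"].split(".")[:3])
            return (subnet, r["first_seen"], r["contact"])

        groups = defaultdict(list)
        for r in relays:
            groups[cluster_key(r)].append(r["nick"])

        # Any key shared by more than one relay is a candidate group.
        suspicious = {k: v for k, v in groups.items() if len(v) > 1}
        print(suspicious)
        ```

        Real analyses use many more signals (Tor versions, exit policies, bandwidth patterns), but the principle is the same: no single attribute proves common control; the correlation across attributes does.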