1. 23
  1.  

  2. 7

    I have become increasingly convinced recently that Moxie was right, or at least might have been right.

All the supposed benefits of federated systems - censorship resistance, availability, privacy - apply to the network as a whole. But if you focus on individual users, those properties get worse, not better. For example: XMPP. Unlike Signal, XMPP cannot be “shut down”, because you would have to shut down an ever-expanding list of federated servers. Same with surveilling the XMPP network as a whole. But if your goal is to compromise an individual user, that’s much easier than it is under Signal. Which security team has a better chance of defeating that attack: Signal’s security team? Or the operator of whatever random XMPP node the user is on, or possibly the targeted user themselves (if they self-host)? (Same goes for availability: if your node goes down, the rest of the network stays up, but that doesn’t matter to you, because your node is down. Is the availability of the average node really going to be better than that of a hosted service with a whole team and an on-call rotation dedicated to keeping it up?)

End-to-end encryption (like OMEMO) solves some of these problems, but it can’t make deep improvements the way Signal can. For example, under OMEMO all message routing information (i.e. the To and From fields) is still sent in the clear, because it has to be in order to interoperate with the rest of the XMPP network. Meanwhile, Signal was able to unilaterally deploy Sealed Sender, which hides the sender’s identity even from Signal’s own servers, eliminating a large class of this metadata.
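To make the shape of the problem concrete, here’s a toy sketch (Python; the XOR “cipher” and the addresses are stand-ins for illustration, not real OMEMO crypto):

    ```python
    import os

    def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
        # XOR stand-in for illustration only -- NOT real OMEMO (Double Ratchet) crypto.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

    key = os.urandom(32)
    stanza = {
        # Routing fields that every federated server on the path must be able to read:
        "to": "alice@a.example",    # hypothetical addresses
        "from": "bob@b.example",
        # Under OMEMO, only the payload is end-to-end encrypted:
        "body": toy_encrypt(key, b"meet at noon"),
    }

    assert stanza["to"] == "alice@a.example"   # metadata stays in the clear
    assert stanza["body"] != b"meet at noon"   # payload is protected
    ```

The point: no amount of payload encryption removes the To/From envelope, because intermediate servers need it to route at all.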

When we* standardized ActivityPub, we got an issue requesting end-to-end crypto to be part of the spec. I’m too lazy to find the minutes, but IIRC our answer was basically “the underlying concerns there are mostly addressed by the properties of federation, and if they aren’t, you can spec out a protocol to do crypto over the AP core protocol anyway.” It didn’t help that this request came relatively late in the game, so it didn’t make it into the core of the spec. I thought this was a good decision at the time. Certainly from a purely technical standpoint I stand by it - there was no reason E2EE couldn’t be an extension (like OMEMO). From a broader perspective, though, I wonder if we made targeting individual users that much easier.

I’ll also say that, quite frankly, writing the above was painful - I like ActivityPub a lot, I still want to believe in a lot of what it enables (from a software freedom perspective, for example, it’s an incredible improvement), and the fediverse is a really cool set of communities. I personally know a lot of good people who have worked and are still working on it, people who are friends and whose hard work I don’t want to dismiss or devalue. I still want to believe - someone please tell me I’m wrong.

    Everything is a tradeoff. But because of network effects, in some cases we have to make tradeoffs for everybody. And that means there will always be some group that the chosen tradeoffs don’t really fit. (Edited to clarify a few sentences.)

    *: I was in the Working Group that standardized ActivityPub and a number of related specifications, though I joined relatively late. That being said, I haven’t been involved in those communities in any serious way for several years and am not entirely up to date. Please don’t take this as me speaking as an actual authority.

    1. 3

      Same goes for availability: if your node goes down, the rest of the network stays up but that doesn’t matter to you because your node is down.

This is why I found a lot of the early informational material surrounding Mastodon somewhat dishonest. IIRC the docs basically said the same thing: that Mastodon can’t be shut down because it’s distributed. But the average user doesn’t know the difference between the network and their node, and I really don’t care that AP-compliant servers exist somewhere on the internet if I’ve just lost all my friends and posts.

      1. 2

Of course Google/Gmail’s security team is better than you are. Same for many other services. But the value of hacking a service is (roughly speaking) the sum of the value of all its users, which makes big services much more interesting targets. Getting into smaller ones might be less economically interesting.

People who are, or can become, special targets probably shouldn’t self-host, and one of the reasons is that they’re a very interesting target and easy enough to breach.

      2. 5

        This was a refreshing although frustrating read.

Refreshing, because it tackles the failures of federated and peer-to-peer platforms, both in terms of security and in terms of copying the harmful designs of centralised platforms (voting mechanisms, like/favorite metrics, embracing news feeds).

        Frustrating, because like many political essays, it loses focus and enters handwave territory as it reaches the ending, declaring that protocol designers must be ideologically and politically adept, and stop posturing by designing more insecure protocols, copying trite ideas and publishing manifestos (I must say it’s always amusing to read critiques of political posturing, as if those critiques were not posturing themselves).

As time passes I become more nihilistic and realize that the vast majority of people don’t care about centralization. In fact, I’d say that technical centralization is preferable: it offers a much better user experience, which, at the end of the day, is what the majority cares about. Decentralization should come at the governance level, either via user-owned or nationalised platforms.

        1. 3

He’s right. Until now, decentralization and p2p networks have mostly been treated as novelties and as tools for accessing information in ways that the law had not yet caught up with. What most people never understood, until it was in the news, was that using them shifted liability from the service providers - the owners of websites - onto the users themselves. It’s all been a game of proxies and fault-pinning, with a healthy dose of legal threats and police-state tactics mixed in. Until there is a protocol that properly takes care of all of those issues, our collective situation is going to get slowly worse as lawmakers figure out how to strangle the life out of any possible competitor to centralized, easily controllable services. It’s already happened with financial services; it will happen with information services as well.

          The only thing I’m currently thankful for is the fact that most people are thinking about streaming services and mostly leaving torrenting and other p2p things alone. We need time to build a solution for this but it’s coming.

          1. 1

            A decentralized social network should have trust-less servers. The servers should just store encrypted data and serve it up, without being exposed to the social graph and with other metadata minimised.

            Identity should also be independent of servers (and hence DNS).

            At the networking layer they will also need to use something like onion routing or a mix network to fully protect the social graph.

            I don’t think existing decentralized social networks can really achieve all this, because privacy isn’t an add-on, it needs to be a fundamental guiding principle at every layer of design.
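A minimal sketch of what that “trust-less” storage could look like (hypothetical Python; the class name and content-addressing scheme are mine, not from any existing protocol):

    ```python
    import hashlib
    import os

    class BlindStore:
        """Hypothetical 'trust-less' server: stores opaque blobs it cannot read."""

        def __init__(self):
            self._blobs = {}

        def put(self, blob: bytes) -> str:
            # Content-addressed: the server sees only ciphertext and a hash,
            # never a username, a recipient, or any slice of the social graph.
            digest = hashlib.sha256(blob).hexdigest()
            self._blobs[digest] = blob
            return digest

        def get(self, digest: str) -> bytes:
            return self._blobs[digest]

    # Identity as a key fingerprint, independent of any server or DNS name
    # (hypothetical scheme; real systems would use e.g. an Ed25519 public key):
    identity = hashlib.sha256(os.urandom(32)).hexdigest()

    store = BlindStore()
    ciphertext = os.urandom(64)          # stand-in for a client-side-encrypted post
    ref = store.put(ciphertext)
    assert store.get(ref) == ciphertext  # server round-trips bytes it can't decrypt
    ```

Note this only blinds the storage layer; without the onion-routing piece, the server still learns who fetches which blob, and when.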

            1. 1

              What can we do and who is doing it?

              1. 2

There’s obviously Tor, I2P, and Freenet. They’re about as anonymous as you can get.

The problem is that, while they can hide what you’re after, it’s harder to hide that you’re using an overlay network at all. The best you can do is keep the relay you use to join the network secret (as Tor does with bridges). In the limit, this means building a pure friend-to-friend (F2F) network, which is kind of insular.
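The layered-encryption mechanics behind these networks look roughly like this (toy Python sketch; XOR stands in for the per-hop public-key crypto that Tor actually uses):

    ```python
    import os

    def toy_seal(key: bytes, data: bytes) -> bytes:
        # Toy XOR layer for illustration only -- real onion routing negotiates
        # a separate symmetric key with each relay via public-key handshakes.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    relay_keys = [os.urandom(32) for _ in range(3)]  # entry, middle, exit

    onion = b"hello"
    for key in reversed(relay_keys):   # sender wraps the exit's layer first,
        onion = toy_seal(key, onion)   # so the entry relay's layer is outermost

    for key in relay_keys:             # each relay peels exactly one layer;
        onion = toy_seal(key, onion)   # XOR is its own inverse

    assert onion == b"hello"           # only the exit sees the plaintext
    ```

Each relay learns only its predecessor and successor, which is what protects the social graph - but, as noted above, an observer can still tell you’re *in* the overlay, just not what you’re doing there.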