1. 18
  1.  

  2. 11

    I’m doing a lot of work on decentralized blogging (I have a pre-alpha protocol and implementation), but IMO, looking back to old-school blogging is the wrong direction.

    True decentralization has to start at the architecture and design level, and blogging was just built on the normal Web 1.0 stack, so it was only decentralized in the sense that personal websites were independent of each other. But setting up and running your own website is nontrivial, so the vast majority of bloggers used hosted systems like Blogger or LiveJournal or WordPress.com. That’s no longer decentralized, IMO. The same is true of Mastodon et al. You have to give up a lot of trust and control to whoever runs your server. And the more “social” features like comments and pings were never secure, so they were very vulnerable to spam.

    Truly decentralized blogging has to build from a secure P2P architecture, even if it’s not strictly run that way. Servers can and will exist, but their role is to help with discovery, connectivity and availability; they should have nothing to do with trust or identity — that’s controlled by the peer and the user, using cryptography. Scuttlebutt is an example of a system like this.
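
    As a minimal sketch of what that looks like in practice (illustrative only, using Ed25519 via Python’s cryptography package, not any particular protocol’s wire format): the identity is just a keypair held by the user, and every post is signed locally before any server ever sees it.

    ```python
    # Identity is a keypair held by the user; servers only ever relay signed posts.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # never leaves the user's device
    identity = signing_key.public_key()          # this public key *is* the identity

    post = b"Hello from my peer, not from whoever happens to host the bytes."
    signature = signing_key.sign(post)

    # Any peer can check the post no matter which server relayed it;
    # verify() raises InvalidSignature if the content was tampered with.
    identity.verify(signature, post)
    ```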

    Your points about retro computers are interesting. The crypto might be an issue for some older CPUs. How long does it take to do a Curve25519 key exchange, or encrypt with ChaCha20, on an Apple ][ or an 8086? I can say it’s not a problem on a Raspberry Pi, though; I’m using one as a mini-server for my protocol.
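
    For anyone who wants to reproduce the Pi measurement, here is a rough benchmark sketch (using the Python cryptography package; the retro machines would of course need native implementations instead):

    ```python
    import os
    import time
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    N = 100
    peer_public = X25519PrivateKey.generate().public_key()

    start = time.perf_counter()
    for _ in range(N):
        X25519PrivateKey.generate().exchange(peer_public)   # Curve25519 key agreement
    print(f"X25519 exchange: {(time.perf_counter() - start) / N * 1000:.2f} ms each")

    aead = ChaCha20Poly1305(ChaCha20Poly1305.generate_key())
    message = os.urandom(16 * 1024)                          # a 16 KiB "blog post"
    start = time.perf_counter()
    for _ in range(N):
        aead.encrypt(os.urandom(12), message, None)          # fresh nonce each time
    print(f"ChaCha20-Poly1305, 16 KiB: {(time.perf_counter() - start) / N * 1000:.2f} ms each")
    ```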

    1. 7

      Everyone says that decentralized $X needs to build on the P2P stack, but I haven’t really found many applications like this that actually work well for end users.

      SecureScuttlebutt has some incredible ideas, but anyone who’s ever tried to actually set it up and use it can tell you that it can be a challenge.

      So, what would this look like and can you suggest any current implementations that follow this model?

      1. 4

        That’s what I believe as well. A Raspberry Pi at home is the ideal middleware to bridge retrocomputers into the blogosphere, since they can’t really do any of the crypto stuff.

        I’m quite active on Secure Scuttlebutt, my own client (Patchfox) is the only one with an RSS/Atom importer able to slurp a post from the blogosphere into SSB “blog messages” :-)
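
        For the curious, a rough sketch of the shape of that import (not Patchfox’s actual code; it assumes the usual SSB “blog” content layout, with the body stored as a blob):

        ```python
        # Pull one entry from a feed and shape it like an SSB blog message.
        import feedparser  # pip install feedparser

        feed = feedparser.parse("https://example.org/feed.xml")  # placeholder feed URL
        entry = feed.entries[0]

        blog_content = {
            "type": "blog",
            "title": entry.title,
            "summary": entry.get("summary", "")[:256],
            # A real client would add the full post body as a blob and put its
            # hash reference here; left as a placeholder in this sketch.
            "blog": "&<blob-hash-of-the-post-body>.sha256",
        }
        # An SSB client library then signs this and appends it to the user's feed.
        print(blog_content)
        ```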

        I’m quite keen to follow what you’re doing with decentralised blogging. Is there anywhere I can subscribe or keep an eye on it?

        1. 3

          I’m keeping it pretty stealthy for the moment — I haven’t even asked anyone else to try it out yet — but I really want to start opening it up soon. When I do I will definitely post about it here.

        2. 3

          The crypto might be an issue for some older CPUs. How long does it take to do a Curve25519 key exchange, or encrypt with ChaCha20, on an Apple ][ or an 8086?

          This is certainly a legitimate problem but I think the experience of the Gemini community has shown that a Gopher (or plain HTTP) gateway is good enough in 95% of the “what about the old computers?” cases. It’s not ideal but I think it’s also fair to just admit that some computers will have to be left out at some point. I think that, for a truly P2P, decentralised protocol, such a solution doesn’t even have to sacrifice decentralisation. Even in retro computing circles there are very few people who only run Apple ][s – most of them also have a computer from this century at least. Pointing your favourite old friend at your current computer isn’t that hard, and you can run the gateway on that one. There are various approaches to that being tried out even in web land (see e.g. https://github.com/ttalvitie/browservice ), where things are far less flexible than with a P2P protocol.
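
          As a sketch of how small such a gateway can be (the names and the upstream fetch are placeholders, not any existing project’s API): a modern box on your LAN speaks the fancy protocol upstream and re-serves posts as plain HTTP/1.0 HTML that an Apple ][ or 8086 browser can cope with.

          ```python
          from http.server import BaseHTTPRequestHandler, HTTPServer

          def fetch_post_from_network(path: str) -> str:
              # Placeholder: a real gateway would talk the P2P/Gemini/whatever protocol here.
              return f"<html><body><pre>Post for {path}</pre></body></html>"

          class RetroGateway(BaseHTTPRequestHandler):
              protocol_version = "HTTP/1.0"   # keep it simple for old clients

              def do_GET(self):
                  body = fetch_post_from_network(self.path).encode("ascii", "replace")
                  self.send_response(200)
                  self.send_header("Content-Type", "text/html")
                  self.send_header("Content-Length", str(len(body)))
                  self.end_headers()
                  self.wfile.write(body)

          HTTPServer(("0.0.0.0", 8080), RetroGateway).serve_forever()
          ```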

          FWIW, I, too, think you’re right with regard to decentralisation. The relevant Web 1.0-era technology to find inspiration in isn’t blogs, it’s Napster and the further decentralised models that it spawned or influenced, like Kazaa and Bittorrent.

          1. 1

            You could also take the approach of something like Fujinet, where you offload the networking bits to a cheap outboard CPU like the ESP32. Let it handle the HTTPS and then communicate with the retro-computer using a protocol/mechanism it can handle. In Fujinet’s case on the Atari, it works over the SIO bus. I believe on the Apple II it works with the SmartPort interface, and on the Atari Lynx it uses ComLynx.

          2. 2

            Decentralised doesn’t mean “no server” in the strict sense, nor in any sense that matters. It can mean simply having a choice of servers, and the option to build and/or run your own. Not only is this a decentralised system with all the benefits and liberties that entails, but it’s a helluva lot easier to build and maintain, a lot easier on the end-users, and more reliable than a global p2p mesh.

            1. 3

              Strictly speaking, you’re correct, but I’ve been observing and/or working in this area since the 1990s and I’m really dissatisfied with federated architectures like Jabber or Mastodon. They still require you to put way too much trust in a server, and the difficulty of setting up and running servers means few people will do so, resulting in bigger and bigger agglomerations. Then the big servers start to play games about who they will or won’t connect with (either for business reasons, like the IM systems of the ’00s, or political reasons, like Mastodon), which their users have to put up with because they can’t jump servers without losing their identity and reputation.

              The way forward seems to be to move all trust to the client, none to the servers. At that point servers are nothing more than way-stations or search engines for content.
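
              A sketch of what “all trust in the client” means mechanically (illustrative, Ed25519 again): the client knows the author’s public key, so it can accept a post from any relay or cache and check it locally; a dishonest way-station can withhold content but can’t forge it.

              ```python
              from cryptography.exceptions import InvalidSignature
              from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

              def accept_post(author_pubkey: bytes, post: bytes, signature: bytes) -> bytes:
                  """post and signature can come from any untrusted relay or mirror."""
                  author = Ed25519PublicKey.from_public_bytes(author_pubkey)
                  try:
                      author.verify(signature, post)   # raises if forged or corrupted
                  except InvalidSignature:
                      raise ValueError("relay returned a forged or corrupted post")
                  return post
              ```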

            2. 1

              I think it’s worth separating the different layers in the hosting stack. You get huge economies of scale from being able to share a physical host. A single RPi can probably handle an individual blog with 95% of the CPU unused, but that really means that you’re paying for 20 times as much compute as you need. If you can have one-click deployment in a cloud environment (ideally in multiple different cloud environments) then you still prevent anyone else from being able to use your blog to data-mine your readers, appropriate your content (check out the IP conditions in the Facebook T&Cs sometime), and so on. That gets me most of what I want from a decentralized platform, along with the economies of scale that centralised solutions benefit from.

              [Shameless plug] We’ve just launched a Confidential Containers service that lets you deploy a workload in Azure with strong technical guarantees that no one at Microsoft can see inside (data encrypted in memory, with a per-VM key that the hypervisor doesn’t have access to). Expect to see all cloud providers building more Confidential Computing services, including FaaS and storage solutions, over the next few years. If I wanted something decentralised yet easy to adopt, I’d look at these as the building blocks. They’ll eventually converge onto some standards (or at least have third-party abstraction layers that paper over the differences) and you’ll end up being able to deploy on any cloud provider’s infrastructure (or roll your own if you want).

              Jon Anderson’s PhD thesis looked at building a decentralised social network on top of cloud offerings about 10 years ago. His conclusion was that it would cost about $1-2/user/year. That price has probably gone down since then and will continue to do so. An RPi will cost at least 3-4x that just in electricity.
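
              As a back-of-the-envelope check on that electricity claim (the wattage and price are assumptions):

              ```python
              watts = 4                       # an idle-ish Raspberry Pi, roughly
              usd_per_kwh = 0.20              # assumed electricity price
              kwh_per_year = watts * 24 * 365 / 1000
              print(f"{kwh_per_year:.0f} kWh/year -> ${kwh_per_year * usd_per_kwh:.2f}/year")
              # ~35 kWh/year -> ~$7/year, i.e. roughly 3-5x the $1-2/user/year cloud figure.
              ```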

            3. 4

              Reading this article reminded me of a Visual Basic application I wrote in the early ’00s that acted as an eZine “reader”, in response to ASP hosting being way out of my teenage budget.

              It would consume an XML file I had hosted at one of the many free hosts that were commonplace back then, and then download the content and assets so that an issue could be read entirely offline - something that was useful in the days of 56k dialup. It provided a comment feedback system that would cache the user’s comments and transfer them to my home computer the next time the user had internet connectivity.

              Sharing the executable was done via floppy disk, which gave an air of secret society to the whole thing. Eventually I learned that PHP hosting was far cheaper than ASP, and thus began my foray and eventual career in web development.

              1. 1

                Some of us post to multiple blogs or syndicate our content into multiple silos. A blogging client allows us to use a single application to do all that. Right now, I’m composing this message using Mars Edit while offline. I can choose to post it either to my online blog or to the decentralised platform Secure Scuttlebutt, all from the same interface.

                There’s another approach I just wanted to signpost here.

                I write my posts in Markdown and publish them using Nikola, which can handle cross-posting to other venues/mediums or even blogs just fine.

                Blogging clients are great and I’ve definitely enjoyed using them (I used to use Blogsy for iOS, which no longer exists), but I personally find that writing my posts in a destination-independent format and storing them in Git gives me a lot more freedom and flexibility than even a blog client might.

                1. 2

                  That is how my current system works. I use a blog client (Mars Edit) to write, and I write markdown in it. It sends the post to my server. My blog is built with a static generator as well: my server saves the markdown file to the correct location, adds it to git, and builds the site. I have all the flexibility of a static generator and git while still being able to use a blogging client when I want to.
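
                  A sketch of what that server-side hop might look like (the paths, entry point, and build command here are placeholders, not the actual setup):

                  ```python
                  # Receive markdown from the blog client, file it, commit it, rebuild.
                  import subprocess
                  from pathlib import Path

                  def publish(slug: str, markdown: str, repo: Path = Path("/srv/blog")) -> None:
                      post = repo / "posts" / f"{slug}.md"
                      post.write_text(markdown, encoding="utf-8")   # save to the right location
                      subprocess.run(["git", "-C", str(repo), "add", str(post)], check=True)
                      subprocess.run(["git", "-C", str(repo), "commit", "-m", f"Add post: {slug}"], check=True)
                      subprocess.run(["hugo", "--source", str(repo)], check=True)  # or whichever generator
                  ```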

                  1. 1

                    Nice! I’ve been using Visual Studio Code with Markdown All in One quite successfully, but MarsEdit has been around for quite a while and has a very loyal following.

                    I’m also kind of a stickler for wanting tools that are cross-platform, because I’m just as likely to be on Linux or even the dreaded Windows (yes, I know, break out the tar and feathers) as on a Mac.

                2. 1

                  There’s a URL that yields a full-text RSS feed. That erases all the format incompatibilities.

                  Now we need an indexing/discovery service. Let’s federate that. Compatible servers can hook in as new leaves, getting a copy of all the living feed URLs known and sending their new ones back upstream. Servers can organize however they like. If you’ve got lots of fast storage you can index billions of feed URLs. Tagging might be nice. Searching would be essential.
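
                  A toy sketch of that leaf/upstream exchange (the endpoint name and JSON shape are invented for illustration):

                  ```python
                  import json
                  import urllib.request

                  def sync_with_upstream(upstream: str, known_feeds: set) -> set:
                      """Swap feed-URL sets with an upstream server; both sides keep the union."""
                      payload = json.dumps({"feeds": sorted(known_feeds)}).encode()
                      req = urllib.request.Request(
                          f"{upstream}/feeds/exchange", data=payload,
                          headers={"Content-Type": "application/json"})
                      with urllib.request.urlopen(req) as resp:
                          theirs = set(json.load(resp)["feeds"])
                      return known_feeds | theirs
                  ```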

                  Then we add a threading semantic to the RSS feed signalling that a particular entry is a reply to a particular thread. One obvious mechanic for that is for all participating blogs to guarantee that the URL for each entry on their site is unique, and use that URL in a Reference header signalling that this entry is a reply to that particular URL.

                  Finally, ask RSS readers to fire off the blog-writing client/editor to construct an entry with a particular Reference header on demand.

                  Et voila, Super-Decentralized Usenet.
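
                  As a rough sketch of what such a reply entry might look like (the <reference> element and its namespace are made up here; any agreed-upon extension would do the job):

                  ```python
                  from xml.sax.saxutils import escape

                  def reply_item(title: str, my_url: str, body: str, in_reply_to: str) -> str:
                      # my_url doubles as the entry's globally unique ID; in_reply_to is the parent entry's URL.
                      return "\n".join([
                          "<item>",
                          f"  <title>{escape(title)}</title>",
                          f"  <link>{escape(my_url)}</link>",
                          f'  <guid isPermaLink="true">{escape(my_url)}</guid>',
                          f"  <description>{escape(body)}</description>",
                          f'  <reference xmlns="urn:example:threading">{escape(in_reply_to)}</reference>',
                          "</item>",
                      ])

                  print(reply_item(
                      "Re: Super-Decentralized Usenet",
                      "https://blog.example.net/2024/re-usenet",
                      "Count me in.",
                      "https://other.example.org/2024/usenet-proposal",  # the entry being replied to
                  ))
                  ```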