1. 43

Hi friends.

Beluga is my reaction to all the Twitter craziness. I’ve been thinking about this idea for a long time and finally decided to build the app a few months back.

The idea is straightforward. Can we build a Twitter-like experience without central servers or a single point of failure? RSS was my point of reference, and that’s why Beluga is essentially a feed reader/writer.

When you publish a post, Beluga will create a beluga.json file and upload it to your S3-compatible server. Other users can follow you and get updates by fetching that file. The beluga.json file is JSON Feed compatible to maximize interoperability. The app also publishes a beluga.xml which is a standard RSS feed. Users can follow your Beluga feed from the app or from their favorite RSS reader.
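To make this concrete, here is a sketch of what such a file could look like. The top-level keys follow the JSON Feed 1.1 spec; the `_beluga` extension object is an assumption for illustration, since the actual Beluga schema hasn't been published.

```python
import json

# Hypothetical beluga.json payload, modeled on the JSON Feed 1.1 spec.
# The "_beluga" extension object is an assumption, not the published schema.
feed = {
    "version": "https://jsonfeed.org/version/1.1",
    "title": "My Beluga Feed",
    "feed_url": "https://example.com/beluga.json",
    "items": [
        {
            "id": "1",
            "content_text": "Hello from Beluga!",
            "date_published": "2023-01-01T12:00:00Z",
            "_beluga": {"client": "beluga-ios"},  # hypothetical extension key
        }
    ],
}

def serialize_feed(feed):
    # Serialize exactly as it would be uploaded to the S3-compatible bucket.
    return json.dumps(feed, indent=2)

uploaded = serialize_feed(feed)
```

Because the file is plain JSON Feed, any conforming reader can consume it even if it ignores the extension keys.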

To make the app even more accessible, a static mini-site is generated and published to your S3-compatible online storage.

Here’s the mini-site of my feed: https://beluga.gcollazo.com

Please give the app a try.

Upcoming features:

  • Easy publishing without setting up your own S3 bucket
  • Read-only mode
  • Multiple-device sync
  • Support for more S3-compatible providers
  • Support for GitHub Pages
  • Mastodon interoperability
  1. 19

    It has a strong “the Russians used a pencil” energy vs. trying to understand ActivityPub, lol.

    1. 14

      The “Russians used a pencil” meme is just a myth; the Fisher Space Pen Company was privately funded. Both NASA and the Soviets used pencils until they both purchased bulk orders of the space pen.

      Ironically NASA paid much more for the mechanical pencils they used before the space pen, over $100 per pencil.

      1. 9

        The core engineering lesson is good though. Until a solution has been found inadequate (such as graphite being a fire risk), using something stupid and easy to implement makes a lot of sense.

      2. 9

        Thanks? I do understand ActivityPub and think that requiring a web app server with multiple services including a database is too complex and hard to scale. Not saying that configuring an S3 bucket is easy but I guess you know what I mean.

        One way of describing this app is “just the outbox of ActivityPub”. There’s no server to server communication. You publish to your server and “followers” can read from your server. If you have lots of followers, you could add more caching layers and a CDN.

        1. 6

          Definitely meant as a compliment. I’ve looked into doing my own AP server and it seems like too much work for a side project. Static hosting is so much easier. It makes much more sense to me for social media to have “pull” as the default mode, with an optional “push” layer for efficiency.

          2. 5

            Might you be conflating ActivityPub and Mastodon?

            There are very lightweight Fediverse instance suites that don’t require an external DB per-se.

            Honk springs to mind.

            1. 2

              I am conflating ActivityPub and Mastodon. The fact that the most popular ActivityPub servers are Mastodon instances makes their implementation the standard.

              My issue with ActivityPub is that it uses a “push” model which is hard and expensive to scale. My self-imposed constraint of being an entirely “pull” system using dumb storage, if successful, is really easy and cheap to scale. BTW I will implement some ActivityPub compatibility to make it possible to follow Beluga feeds from ActivityPub/Mastodon servers.

              For example, a web hosting company could offer Beluga hosting which is essentially a shared web hosting like the ones we used to use in the early days of the web. You can find dirt cheap prices with unlimited storage/traffic with a free domain and zero maintenance for users.

              1. 1

                If one were to enable versioning of the S3 bucket, one could in theory allow public uploads to a file in the bucket, which will then act as the inbox.

                When the app does a sync, it’ll download all versions of the inbox since last time.

                I have not tested this and don’t know if S3’s REST API will be interoperable with other ActivityPub servers. If a simple POST is sufficient without any S3 specific headers, it might work.
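The sync step described above could be sketched as follows. The dictionary keys mirror the shape of S3's ListObjectVersions response, but the versioned-inbox semantics are this comment's hypothesis, not a tested design.

```python
from datetime import datetime, timezone

def inbox_versions_since(versions, last_sync):
    # Keep only inbox-object versions written after the previous sync,
    # oldest first, so the client can replay deliveries in order.
    # `versions` mimics entries from S3's ListObjectVersions response.
    fresh = [v for v in versions if v["LastModified"] > last_sync]
    return sorted(fresh, key=lambda v: v["LastModified"])

# Example: two deliveries before the last sync, one new one after it.
versions = [
    {"VersionId": "a", "LastModified": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"VersionId": "b", "LastModified": datetime(2023, 1, 2, tzinfo=timezone.utc)},
    {"VersionId": "c", "LastModified": datetime(2023, 1, 5, tzinfo=timezone.utc)},
]
last_sync = datetime(2023, 1, 3, tzinfo=timezone.utc)
new = inbox_versions_since(versions, last_sync)
```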

                It might also be interesting to store the headers of the request which often contains the signing data. Not sure how one would go about preserving those.

                I’m also pretty sure that opening up anonymous/public write access in such a way can lead to abuse.

                If S3 is not a hard requirement, then one could implement this with more security in mind. The main goal is to keep the server as light as possible and keep most of the business logic in the app. The server would be a little smarter than S3 in regards to ActivityPub specific tasks.

                The app could sync in new posts from the server. When the user has posted something, the app could itself send the required notifications to other ActivityPub servers. One would need to handle unresponsive servers somehow, so it might be required to use the server for retries in those cases.

                One could also combine S3 and this custom ActivityPub server. The inbox could be referenced from another location via WebFinger. That way most of the public website stays on S3, while the special handling of incoming write requests is outsourced to the custom server.

          3. 4

            Even simpler: twtxt. No S3 bucket, no JSON, only flat files.

            1. 1

              Although I believe the only iOS app for this ecosystem requires a yarnd pod to talk to, which knocks the simplicity down slightly.

            2. 3

              I think there’s probably room for both. This might not work out for someone who posts a LOT, since generating those feeds may become quite cumbersome after a few hundred thousand posts. And if you/your followers are quite popular, I guess you’d have to pay more to support your content being published. In other words, this probably isn’t a great solution for ElonJet, but for your own personal account? Sure, why not?

              ActivityPub is definitely popular for a reason, but it’s a little complicated if you just want to run some kind of personal social networking server for yourself, similar to self-hosting a blog or personal home page. It’s clearly designed for a situation where someone runs a server for multiple people, not for each individual running their own infra. And personally, I’m all about competing technologies in this space. There should be more than one game in town, because social networking needs can be different. Feeds are a good start, but we need some kind of standard format to consume so it’s easy to build stuff like this.

              1. 2

                I think this is far superior to federation, by allowing for users’ feeds to be truly decentralised.

                1. 1

                  It’s pretty cool, but I’m not sure if it can replace federation in the realm of massive scale.

                  In my case, I have over 14,000 tweets on Twitter (joined in 2007). If I wanted to migrate all of that to something like Beluga or twtxt, my feed could get extremely large and difficult to download on-demand. This would inevitably require some cross between client-side caching of larger feeds and asking the server whether there are any updates to that feed. An account like @horse_ebooks has 18,000 tweets and posted all of that in about 4 years. That account was far busier than mine, and not only that, the feed it generated would be enormous, so the client would have to go through all of those posts to regenerate the feed again. Without some clever caching and compiling techniques, that is.

                  When you get into the realm of “things might be so slow that the posts you are seeing are actually from 5-10 minutes ago”, that’s when the value of microblogging can really take a hit. Sometimes I see tweets posted by popular authors, and it says they posted that 30s ago, yet by the time I click into the tweet it already has hundreds of replies, thousands of retweets, etc. I’m thinking about what would happen if that was hosted on some external server, with that many requests hitting that server. Would it take longer for me to see my entire timeline, just because I follow a few extremely popular users?

                  That’s kinda where federated social networking comes into play. This kind of thing would happen between servers, and if those servers are optimized properly (for example, processing new activity in the background rather than doing it in the request/response cycle), you wouldn’t necessarily see performance degradation on your whole timeline, but perhaps a subset of it. Federation isn’t a perfect solution, as we’ve seen with existing Mastodon servers which were unable to handle both the content moderation difficulties as well as the sheer volume of posts from very popular users and the users that interact with them. But it at least offers the capability to make this happen without needing to bank on a single giant corporation.

                  And, of course, so does Beluga. That’s why I’m trying it out! Looks like a kind of social networking that would be a little easier to read sometimes, as most high-traffic networks are just so busy every day that it becomes difficult to see what’s going on all the time. If what you want is to get your thoughts out to as many people as possible, you might want to choose ActivityPub. For those of us who want to bullshit around with the tech and don’t really care too much about who sees our posts? Beluga seems like a good choice.

                  1. 3

                    > An account like @horse_ebooks has 18,000 tweets and posted all of that in about 4 years.

                    Let’s be generous and say each post was 1 KB… that’s still only 18 MB for the entire feed. That’s nothing on the modern web.

                    1. 1

                      I’ve been thinking a lot about the large feed problem. This is not a problem today, but it will become an issue in the future.

                      The basics

                      • Feed clients must be smart and implement If-Modified-Since and If-None-Match headers so the server doesn’t blindly respond with the full payload on each update request.
                      • The client app must cache everything locally and insert/update/delete changes after performing a comparison with the data on the server feed. Beluga currently implements this to enable offline support and local search (coming soon).
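A minimal sketch of that conditional-fetch logic (the function names here are illustrative, not Beluga's):

```python
def conditional_headers(cache_entry):
    # Build validators from the previously cached response so the server
    # can answer 304 Not Modified instead of resending the whole feed.
    headers = {}
    if cache_entry.get("etag"):
        headers["If-None-Match"] = cache_entry["etag"]
    if cache_entry.get("last_modified"):
        headers["If-Modified-Since"] = cache_entry["last_modified"]
    return headers

def needs_refresh(status_code):
    # 304 means the cached copy is still current; anything else re-syncs.
    return status_code != 304

hdrs = conditional_headers({
    "etag": '"abc123"',
    "last_modified": "Wed, 01 Mar 2023 00:00:00 GMT",
})
```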

                      My current best idea/solution for the problem

                      • Set a limit to the main feed file beluga.json of 1000 posts.
                      • beluga.json must contain the latest posts in reverse chronological order
                      • beluga.json might include a URL to an archive file for example archive-0.json
                      • The archive file must be ordered chronologically and contain up to 1000 posts
                      • The archive file might include a URL to an archive file for example archive-1.json
                      • A very large collection of posts might have several archive files. But the older the post, the less likely it is to be accessed
                      • Clients will only store the last 1000 posts locally and only access the archive when the user requests it. Different clients might choose different approaches.

                      This approach allows for smaller file sizes and the possibility of large collections of posts. The 1000-post limit is arbitrary; clients might choose to set their own limits. I believe that client apps should implement quality metrics for feeds and even alert users about bad actors. I plan to do some of that if this ever becomes a problem.

                      Ordering the posts chronologically in the archive will help optimize file generation and upload on the client side, since the files should mostly be append-only until full. That means that in most cases, even with a huge post archive, the app will only generate and upload two files: the latest beluga.json and the last archive page, archive-{N}.json.
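The paging scheme described above can be sketched like this; the 1000-post page size is the arbitrary limit from the comment, and the function name is mine:

```python
PAGE_SIZE = 1000  # arbitrary limit, as described above

def paginate(posts):
    # Split a chronological list of posts into (main_feed, archive_pages).
    # The main feed holds the newest PAGE_SIZE posts in reverse chronological
    # order; each archive page is chronological and append-only until full,
    # so publishing a new post only rewrites the main feed and the last page.
    main = list(reversed(posts[-PAGE_SIZE:]))
    older = posts[:-PAGE_SIZE]
    archives = [older[i:i + PAGE_SIZE] for i in range(0, len(older), PAGE_SIZE)]
    return main, archives

# 2500 fake post IDs, oldest (0) to newest (2499).
main, archives = paginate(list(range(2500)))
```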

                      1. 3

                        Have beluga.json include a JSON-LD style link to the “next page” itself a beluga.json. Each file points to the next. Same decoder! 👌🏾

                        (I lightly suspect this is a non-problem given how well text compresses.)

                  2. 2

                    > This might not work out for someone who posts a LOT, since generating those feeds may become quite cumbersome after a few hundred thousand posts.

                    My solution for this with my twtxt bots was to split them into two feeds - current (single entry) and archive (last 10). Could work for something like Elonjet (“last 10 flights”, “last 100 flights”).

                    1. 2

                      There are a bunch of decent options along the lines of having one file for each day or each week, or having one sequentially numbered file for every n posts. Add a “current” file as well and it should scale nicely, by never needing to touch old posts 🙂 except when scrolling through them all.
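A file-per-period layout along those lines could be as simple as this; the paths are illustrative, not any existing Beluga convention:

```python
from datetime import date

def weekly_feed_path(d):
    # One immutable file per ISO week; only the current week's file (plus a
    # fixed "current" pointer) ever needs rewriting when a new post lands.
    year, week, _ = d.isocalendar()
    return f"feed/{year}-W{week:02d}.json"

# Always rewritten; points readers at the latest page.
CURRENT_PATH = "feed/current.json"
```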

                    2. 2

                      I think that’s a very apt comparison because while this works far better under a certain set of constraints, there is a reason for ActivityPub - and for the space pen.

                    3. 5

                      I thought this sounded familiar ;) https://lobste.rs/s/xvvjza/playing_with_activitypub#c_xrfbyb

                      Great job, I downloaded the app and will give it a try on my S3!

                      1. 3

                        Neat app! Maybe a little bit too opinionated (evil tech billionaire?). That aside, it’s cool to see people coming up with creative alternatives to short-form social networks.

                        1. 9

                          Thanks, this is a one-person project (me) so I get to make that sort of thing :)

                          1. 4

                            FWIW I seriously doubt that anyone who’d actually use this app will be offended by that, or find it as anything but funny.

                            I’d go even further and expand that to the set of people whose opinions you actually care about. But then I’ll offend people :)

                        2. 2

                          Very nice! I love the visual design, the technical architecture, everything.

                          Would it be possible to make this work with M1 Macs? Not talking about a dedicated app, just enabling the flag in the Mac App Store if the app is already binary-compatible. I try to avoid my phone when I can.

                          1. 1

                            Yes, that will happen as soon as I can tweak a few UI things that look weird on the Mac.

                          2. 2

                            Is there currently a way I could browse feeds?

                            1. 3

                              Right now the app will force you to set up your own feed before using it. That’s a design decision I will change very soon. You will be able to just open the app and follow users. The app includes a directory of feeds that is opt-in: it will only list users who explicitly ask to be listed. The app does not capture any user information.

                            2. 2

                              Great stuff! This has strong micro.blog energy. If I hadn’t invested time into my micro.blog and my mastodon profile, I’d probably try to get this up and running. Keep it up!

                              1. 2

                                Thanks! I’m a huge fan of micro.blog.

                              2. 2

                                Great approach, from building the standard to the app, it is an awesome kickoff point for possibly starting a real alternative. The simplicity of implementation is awesome.

                                That said, any backend that directly conflicts with the idea of going viral will, I think, be a problem over time. Basically, if someone went really viral with a tweet, they would either get shut off by billing limits or get an unexpected bill. This puts the major goals of the system at odds with itself. Due to the way you have built it, though, these issues are easily overcome, I think.

                                • Use vendors who don’t have limits or bill on pages served (like GitHub Pages).
                                • Vendors can offer free services and insert ads easily due to the simple format and structure.

                                The latter approach gets you to a decentralized Twitter really quickly. One thing I think will be important is a plan for users to switch between vendors (due to cost, or just because a new company offers fewer ads in the stream). So two things came to mind…

                                • Can users move between hosts somehow (leave a moved.xml or something to chain users to the new location)?
                                • Can users publish a pub key so they could DM via a similar system? DMs could possibly live in a public folder like /dm/to/robertmeta/(uuid).dm, encrypted with my pub key? Obviously this would only work when we are subscribed to each other initially.
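The moved-file idea could be as small as a signed record left at the old host. Every field name below is hypothetical; nothing like this exists in Beluga yet, and real verification of the signature is omitted:

```python
import json

def make_moved_record(old_url, new_url, signature):
    # Hypothetical record published at the old host after a migration,
    # signed with the feed's previously published key so clients can
    # trust the redirect. Signature creation/verification is omitted here.
    return json.dumps({
        "type": "moved",
        "old_feed_url": old_url,
        "new_feed_url": new_url,
        "signature": signature,
    })

def follow_move(feed_text):
    # If the fetched document is a moved record, return the new location.
    doc = json.loads(feed_text)
    return doc.get("new_feed_url") if doc.get("type") == "moved" else None

record = make_moved_record(
    "https://old.example/beluga.json",
    "https://new.example/beluga.json",
    "sig-placeholder",
)
```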
                                1. 1

                                  Publishing a public key and signing the feed has been on my to-do list since day one. It’s going to happen. My priority after launch is the following:

                                  • Add more S3 services including a way to configure totally custom ones (HTTPS is and will be required)
                                  • GitHub Pages support
                                  • Multi-device sync which will also enable easy migration from one vendor to another.

                                  The last one is especially tricky because I want to create some sort of migration or “moved” file that can be published on the old host (and signed), alerting clients to the new feed information.

                                2. 2

                                  Only iOS? :(

                                  1. 4

                                    A friend is building an Android compatible version but I don’t have an ETA for that.

                                  2. 1

                                    Really cool idea. Do you have plans to support custom S3-compatible servers (like user-run instances of MinIO)?

                                    1. 3

                                      We currently support:

                                      • AWS S3
                                      • Wasabi
                                      • Digital Ocean
                                      • Linode
                                      • Backblaze
                                      • DreamHost

                                      In the next update I will include a few more and hopefully an option to enable totally custom ones including MinIO.

                                    2. 1

                                        This is pretty neat. Where can one find out about the format without having to set up an S3 bucket? :)

                                      1. 2

                                        I will publish a detailed schema soon. The format is JSON Feed with some extensions labeled _beluga.

                                        In the meantime you could take a look at my feed: https://beluga.gcollazo.com/beluga.json

                                      2. 1

                                        It would be cool to wire up WebSub. It would be easy to do on the publish side: you just need to ping the WebSub hub after updating the feeds. However, phone-only subscribing is not easy. You would want an always-online server to receive the pushes and convert them to push notifications.
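Per the W3C WebSub spec, the publish-side ping is just a form-encoded POST to the hub with `hub.mode=publish` and the topic URL. A sketch of the request body (actually sending it is left out to keep this self-contained):

```python
from urllib.parse import urlencode

def websub_ping_body(topic_url):
    # Body of the POST a publisher sends to its hub after updating a feed;
    # the hub then re-fetches the topic and pushes it to subscribers.
    return urlencode({"hub.mode": "publish", "hub.url": topic_url})

body = websub_ping_body("https://beluga.gcollazo.com/beluga.json")
```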