1. 35
    1. 15

      this initiative started years ago, and i have to say that i’m very skeptical. they are basing this on ActivityPub, which is a bit of a mess, imho. Here is an interesting post that talks about some of the issues/difficulties in using ActivityPub: https://inqlab.net/2021-11-12-openengiadina-from-activitypub-to-xmpp.html

      1. 7

        That article is presenting a pretty superficial view of ActivityPub, in my opinion.

        ActivityPub is a mostly complete specification for what it aims to be: a standardization of how to disseminate the ActivityStreams vocabulary to Actors. Yes, you need a ton of other stuff on top of it to have a fully working application, but that’s the case for 90% of specifications out there.

        Webfinger is not needed at all. Currently it’s the only way the main player in the space does user discovery, and everyone wants to be compatible with them, but a standalone ActivityPub application can use other means for that and has no need for it. I have a ticket open with Mastodon to loosen the requirements of their webfinger user discovery flow, but so far they haven’t been receptive, sadly.
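
        To make that concrete, here’s a rough Python sketch of the WebFinger lookup Mastodon performs for an @user@domain handle; a standalone ActivityPub server could skip this step entirely and hand out actor URLs directly. The handle below is just an example:

```python
# Sketch of the WebFinger query Mastodon issues for user discovery.
# A standalone ActivityPub server has no obligation to answer this endpoint.
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Build the WebFinger query URL for an @user@domain handle."""
    user, domain = handle.lstrip("@").split("@", 1)
    resource = quote(f"acct:{user}@{domain}")
    return f"https://{domain}/.well-known/webfinger?resource={resource}"

print(webfinger_url("@marius@littr.me"))
# https://littr.me/.well-known/webfinger?resource=acct%3Amarius%40littr.me
```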

        JSON-LD is barely needed in the ActivityPub world; a JSON parser/encoder should be enough for 90% of the use cases. The only place where you might want a full JSON-LD parser/encoder is if your application wants to support dynamic vocabulary that’s not in the spec and needs to be dereferenced at run-time. Any application that wants to be performant would probably have a step at build time to ensure it satisfies the schemas it targets, instead of relying on runtime logic anyway.
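
        As a sketch of what “a JSON parser is enough” looks like in practice, here’s a minimal Python example that handles an incoming ActivityStreams object with plain json, leaving the @context alone instead of processing it as JSON-LD. The payload is invented for the example:

```python
# Minimal handling of an incoming ActivityStreams object with a plain JSON
# parser -- no JSON-LD expansion. The payload below is made up.
import json

raw = """{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://example.social/users/alice",
  "object": {"type": "Note", "content": "Hello, fediverse!"}
}"""

activity = json.loads(raw)

# Dispatch on the fixed vocabulary; unknown extension terms are simply ignored.
if activity.get("type") == "Create" and activity["object"].get("type") == "Note":
    print(activity["object"]["content"])  # prints: Hello, fediverse!
```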

        There are indeed not many C2S implementations out there, but as someone who wrote one of them, I will just say that the author didn’t try very hard to find them.

        1. 4


          I cannot follow @marius@littr.me from my mastodon server.

          1. 1

            Littr.me does not federate yet. Also probably users won’t be discoverable on Mastodon while the bug I mentioned is not addressed.

            1. 2

              I just find it funny that consistently every person i see who attempts an ActivityPub implementation has trouble with federating… something which one might consider the whole purpose of the spec… sorry, but i can’t help but see that as a worrying sign.

              1. 7

                What makes you think I’m having trouble with implementing federation?

                My projects don’t federate because that would bring a finality to the development process that I’m not yet ready to bestow upon them. littr.me and its counterpart server are just example applications for a suite of libraries that I’m not ready to release for public consumption. Having federation would bring more attention and expectations from users than I’m prepared to deal with.

                The federated space is full of half baked attempts which in my opinion would have benefited from more time to develop outside of the spotlight. I don’t want to get overwhelmed and pressured into making something that I don’t fully control.

                1. 2

                  fair enough

      2. 1

        What would you recommend for event streaming? I’ve never used Kafka, but I know that if a company has lots of events internally, Kafka is a viable solution. I believe you can self-host open-source Kafka. Not sure how feasible some sort of public endpoints working from Kafka data would be, but I don’t see why fundamentally that couldn’t work. Again never used Kafka.

        1. 2

          Again never used Kafka.

          why is your comment all about Kafka then?

          1. 3

            I was interested in how decentralized git issues and the like would look before I saw this article. I happen to have read a bit of Kafka documentation about three or so days ago. ActivityPub was identified in the comments as somewhat problematic. ForgeFed seems like a great idea though. So I was wondering how else to solve this. You’re knowledgeable about ActivityPub, so I made the connection to Kafka. Sorta coincidental, I guess. Event streams and git issues have been on my mind.

    2. 7

      I’m somewhat skeptical of this, primarily because they want to base it off ActivityPub, a standard that is notoriously badly implemented (the status quo of most implementations is making it work with Mastodon and themselves). Also, I don’t see why software development needs to use a protocol meant for social media, especially when many developers complain about those github features.

      1. 3

        This is a social network though. Not in a Facebook way, but it’s effectively a network of servers sharing streams of updates about projects. Just because we don’t like the social features in GH doesn’t mean the protocol itself doesn’t represent the update/issue/PR interactions well.

        1. 4

          You can represent the updates/issues/PRs in email just fine (it has been done, multiple times even). You can represent them with git refs (also done, multiple times). You can represent these interactions in a multitude of ways that all work well. They of course have tradeoffs, but I don’t see any advantage in using ActivityPub for these interactions besides them being viewable on traditional decentralized social media services, and I don’t see any benefit for software development in that.

          1. 3

            There are two problems with that: 1. there’s no good interface for git-over-email, and 2. it brings all the issues of email deliverability. I really don’t want to chase maintainers on other networks to ask them to check the spam folder, or debug why Gmail rejects my domain today.

            1. 4

              I think that any problem you can have with email can also happen using ActivityPub, only with taking more time to fix them as the ecosystem is so much younger.

            2. 1

              1: Patchwork and sourcehut work just fine. Plenty of customizable email clients to work with them too. Zero problems with that. 2: Email deliverability is not a problem for git patches. I’ve legitimately never had problems with it. Email is probably one of the most reliable methods of communication FWIW, and you only encounter problems when you are sending large amounts of automated email to large email silos (Google, Microsoft, etc.).

              Besides, why are you focusing on email? You can still represent those interactions in a bunch of other ways (git, as I suggested for example).

              1. 4

                My iPad mail client won’t let me send an email that patchwork or sourcehut will accept. I do a significant amount of open source work on my iPad.

                1. 1

                  I do agree that the fact that sourcehut doesn’t accept mails with HTML content is a drawback. I would also say that a mail client that won’t let you send an email without HTML content isn’t great either. But email clients are relatively easy to change, and maybe you should look for a better one.

                  1. 5

                    With all due respect, I like Apple Mail and it works fine on literally every other service I use. Blocking access from the most used mail client on the most popular devices in the world is not a sustainable approach. My mail client is perfectly capable for everything I need to do. The service not accepting it is the one that is broken. Apple Mail sends a plaintext part anyways, so there’s no reason that the service can’t strip the HTML component from the email and forward it on.

                    I know how to send plain text email, but a lot of the time I don’t want to think about how my email is sent. I want to think about the email I’m sending or the contribution I’m making. The main problem with a lot of these solutions is that they don’t consider the user experience of users who aren’t computing gods. Hell, I’m considered a computing god, but I don’t want to have to be in galaxy-brain mode 9001% of the day. You really have to meet people where they are and then move forward from that. This is the dark side of mail user agent diversity: some mail user agents make decisions that you may think are dumb. Thinking the mail user agent is dumb and refusing to support it only says “we don’t want you to participate”. This prevents communities from growing.

                    Believe what you want though.

                    1. 2

                      Apple Mail sends a plaintext part anyways, so there’s no reason that the service can’t strip the HTML component from the email and forward it on.

                      It cannot. Stripping the HTML part breaks DKIM, and since usually at least one of SPF and DKIM must be valid, and SPF is invalid for mailing-list forwarded emails, the email as-is cannot be forwarded. You could resend the contents from some other address, but that is ugly and breaks patch authoring, so you’d need special casing for that. It’s honestly just easier to reject HTML emails.
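
                      A simplified illustration of why: DKIM’s signature commits to a hash of the message body (the bh= tag), so removing the HTML MIME part produces a different body and the signed hash no longer matches. This Python sketch hashes the whole serialized message rather than the DKIM-canonicalized body, so treat it as an illustration, not a verifier; addresses and content are made up:

```python
# Illustration (not a real DKIM implementation): stripping a MIME part
# changes the bytes the sender's DKIM signature committed to.
import hashlib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "dev@example.com"
msg["Subject"] = "[PATCH] fix build"
msg.set_content("plain text patch body")                  # text/plain part
msg.add_alternative("<p>patch body</p>", subtype="html")  # text/html part

def body_hash(m: EmailMessage) -> str:
    # Real DKIM hashes only the canonicalized body; hashing the whole
    # serialized message is a simplification for this sketch.
    return hashlib.sha256(m.as_bytes()).hexdigest()

signed_hash = body_hash(msg)  # what the sender's signature would cover

# "Strip" the HTML: rebuild the message with only the plaintext part.
stripped = EmailMessage()
stripped["From"] = "dev@example.com"
stripped["Subject"] = "[PATCH] fix build"
stripped.set_content("plain text patch body")

assert body_hash(stripped) != signed_hash  # the signed hash no longer matches
```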

                  2. 2

                    Sometimes it’s important to meet users where they are, not expect them to change in order to have a successful collaboration. Perhaps the workflow isn’t the best one if you have to ask users to change their email client, or even worse their email provider (as you did in another comment).

                    1. 1

                      Sometimes it’s useless to try working with people that are not willing to change. For example, the same argument could be made for making my website available over Gopher, because somebody’s ancient computer has no modern browser. But is it really useful for me to make extra effort on making my website available to them, when it would constrain me on what I could do without special casing different protocols? Sometimes, it is time for the user to change.

                      1. 1

                        Perhaps it’s those who are still trying to make the email workflow come back that should change? They seem to be unwilling to change, so I guess we shouldn’t try to convince them? Likely their niche will continue to be small without some huge improvements in the user experience.

              2. 3

                I’m regularly getting git patches in my spam. And that’s on top of general deliverability issues. Also recently I’ve witnessed a group of very senior devs spending ~2 weeks resolving various issues to get the email workflow working correctly. I’m glad it works for you, but it’s not a universal experience.

                I concentrated on email because you mentioned it as an example. Git refs themselves don’t solve the distribution issue: you need to accept the PR itself, or a notification about it, somehow. In theory you could accept unauthenticated pushes of branches, but that comes with its own issues.

                1. 2

                  What I read is that you have problems with your email provider, not with the git-over-email workflow. And while yes, a bunch of big providers are bad at it, that doesn’t mean you can’t move to a good one. On the workflow side I’m not sure where the snags are; of course there’s a need for a transition period between workflows, but I’d say it wouldn’t be much different from moving in the opposite direction (from email to web PRs).

                  As for git, it need not be (traditional) branches. Git can handle arbitrary refs, and you can design your PRs, issues, etc. to sit under separate ref namespaces, and only put restrictions on how those refs are managed (e.g. only fast-forward pushes for unauthenticated users). You can design your files to be usable with simple-to-automate union merges (for an example, see git-annex). There are ways to extend what already exists into fairly extensive systems.
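
                  A minimal sketch of the union-merge idea, assuming each issue event is one self-contained line (roughly how git-annex structures its log files); merging two diverged logs then never conflicts. The event format here is invented for illustration:

```python
# Union merge of two line-oriented issue logs: because every event is a
# self-contained line, the merge is just the union of lines -- no manual
# conflict resolution. The log format below is made up for this sketch.
def union_merge(ours: str, theirs: str) -> str:
    """Merge two line-oriented logs by taking the union of their lines."""
    lines = set(ours.splitlines()) | set(theirs.splitlines())
    # Sorting by the timestamp prefix keeps the merged log chronological.
    return "\n".join(sorted(l for l in lines if l)) + "\n"

ours = "2023-01-02 alice comment: repro attached\n2023-01-03 alice status: closed\n"
theirs = "2023-01-02 alice comment: repro attached\n2023-01-04 bob status: reopened\n"
print(union_merge(ours, theirs))
```

Git can be pointed at a driver like this via a `merge` attribute in .gitattributes, so the merge stays automatic even when both sides appended events.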

                  1. 4

                    Most people aren’t moving from email to web PRs. They’ve always used web and never learned an email flow. So to switch to email will require mass retraining.

                    1. 1

                      And in 2008 most users hadn’t heard of web PRs and only knew email. The retraining already happened once; it can happen again.

                      1. 4

                        Actually it didn’t, as most of the users who are working on web PRs started directly on the web. They weren’t collaborating on software in 2008. I think you make some compelling arguments around the reliability of email delivery versus web federation, but I disagree that from a user perspective the email workflow is the right one to use.

    3. 7

      I recently attended a talk by Eric S. Raymond at Southeast Linuxfest this year; he’s started a project that seems similar in goal to what this is trying to achieve:


      I’m all for decentralized software repos, and federation amongst them. I see Gitea is on the list for implementing it; it’s my go-to and favorite self-hosted VCS.

      1. 2

        His talk is now on YouTube. https://youtube.com/watch?v=0HMghqwa6Gs

        1. 1

          That’s awesome, and you can just barely see me; I’m to his right, near the wall, sitting in the front row. Thanks for posting this!

          1. 1

            We may have spoken at the conference!

    4. 5

      That the Vervis (reference implementation) storage appears pretty broken isn’t exactly a good argument for this!

    5. 4

      I’m really excited about this. I may switch to self-hosting once this stabilizes and is upstreamed to Gitea and/or Sourcehut.

      I love that the content is just HTML and the source could be anything. It’ll be neat when users can use AsciiDoc, reStructuredText, or Org mode in the comments too instead of being required to use Markdown with its many limitations or some platform’s specific Markdown “flavor” that demands platform lock-in.

    6. 3

      Wonder what others think here. I like the idea of the Ticket object in ForgeFed. Cross-platform issues. I prefer dspinellis/git-issue as it’s decentralized. However, GitHub issues are good because they’re easy for everyone: network effect. If one could get issue events from Codeberg, Gitea, or GitHub, and then automatically get them into dspinellis’s, and vice versa, that would be, well, the Ticket. I’m interested in making a little protocol around dspinellis’s issues that could include images and video, using actual media files or URLs that get served.
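
      As a rough sketch, a ForgeFed-style Ticket with media attachments could look something like the JSON below; the field names loosely follow the ForgeFed draft vocabulary, but treat the exact shape (and the forge.example URLs) as assumptions, not spec quotes:

```python
# Rough sketch of a ForgeFed-style Ticket object as plain JSON.
# Field names loosely follow the ForgeFed draft; the URLs are invented.
import json

ticket = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        "https://forgefed.org/ns",
    ],
    "type": "Ticket",
    "id": "https://forge.example/repos/demo/issues/42",
    "attributedTo": "https://forge.example/users/alice",
    "summary": "Crash when pushing over IPv6",
    "content": "<p>Steps to reproduce: ...</p>",
    # Media could ride along as ActivityStreams attachments (files or URLs).
    "attachment": [
        {"type": "Image", "url": "https://forge.example/media/trace.png"},
    ],
}

print(json.dumps(ticket, indent=2))
```

A bridge could translate issue events from Codeberg/Gitea/GitHub into objects like this and back, which is essentially what federating the Ticket would mean.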