Would it be reasonable to describe Usenet as “a reddit without servers”?
It’s good to see so much interest coming back into peer-to-peer systems as people become more aware of the problems of centralization.
It’s such a shame Usenet nowadays is only full of spam and pirates.
The old, non-piracy part of Usenet is a perfect example of why you need some kind of community moderation to scale past a certain size without drowning in spam and useless content. Usenet was a bit too open, and that’s what killed it.
I imagine once this reddit drama blows over, people will stop caring about alternatives yet again. Neat idea, though I can’t help but think this will go about as well as Diaspora did.
It would be nice for some sort of social peer to peer system to survive…
…however this one seems to not have been updated in a while.
Why is this always the first concern when evaluating software? People stop working on software for a multitude of reasons, and only some of the time because the design, or code base is flawed in some horrible way that it becomes unsalvageable.
One reason I stop working on software is lack of community interest, and/or the software being complete enough for my purposes. I assume other people do the same thing, too.
In part, because this software warns:
It’s also an experiment. Don’t trust it just yet. But definitely give it a shot!
We’ve built software as giant platforms on a teetering tower of unreliable abstractions. Features ship slowly, regressions are common, and that’s even before you throw in the now-standard business practice of shipping as part of figuring out what’s worth building and how to build it.
I’m talking in general, not this specific case. The pattern is the same. Software that seems to not have been touched recently is just discounted as not being worth using, which is simply silly.
In business projects I’d imagine the concern is that, if problems are discovered, there will be no core team to rely on for fixes. While the deciding team can opt to try and fix things on their own, those attempts may be more difficult than building something you understand yourself, or finding an actively maintained project.
I get nervous about software where I don’t think there’s any reasonable chance of bugs I find in it getting fixed - if it’s something that I can and will maintain myself if needed that might be OK, but that significantly increases the cost of using it.
If the software is incomplete or has stopped being useful (and therefore hasn’t been updated in a while), why even try to use it? As an end user, I like clicking buttons and having stuff happen. I don’t want to have to make stuff work. I want stuff that I know will work now, and in the future. That’s why when I see something that hasn’t been updated in a while, and is fairly experimental, I think “well, this has a good chance of not working as time goes on”. You’ve given a bunch of reasons code stops being updated. The reasons you have listed are good reasons not to evaluate software, in my opinion (when building and/or trying to make my own, however, it’s something completely different).
tl;dr - People have abandoned it, why should I try to evaluate something that others have abandoned?
People have abandoned it, why should I try to evaluate something that others have abandoned?
Because sometimes the cost of evaluating something and potentially taking ownership is much cheaper than writing it from scratch.
To quote myself:
(when building and/or trying to make my own, however, it’s something completely different)
However, I guess sometimes it does make sense to spontaneously decide to roll my own because of the evaluation. Personally though, that doesn’t seem like something that is likely to happen.
Given the concern over project abandonment, and the request for help vetting security, the one thing that would be extremely helpful is a document describing the protocol. Two python files isn’t sufficient in that regard.
The problem with Reddit et al. is obviously the centralisation, yet everyone seems to repeat the problem over and over and over. A place will start to suck, people will leave, and they move to a new centralised site. (And amazingly part of the recent beef with Reddit is that the moderators want Reddit to implement a Reddit-specific e-mail system.)
I’ve been wondering if everything could be decentralised. Store articles locally like Usenet, distributing them through a BitTorrent-like swarm. That’s all solved. But how to deal with identities and spam? Maybe each user could do his own scoring, built on a web of trust. He could manually find a few people he likes and then score based on who those people like, etc.
Thus there’s no namespace collision problem. If two rival groups both want to claim a newsgroup name such as “unicorns,” each could score the other down and would only see its own content.
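To make the idea concrete, here’s a toy sketch of that per-user web-of-trust scoring. This isn’t from any existing project; every function name and parameter (the hop decay, the vote format) is my own invention for illustration:

```python
# Toy web-of-trust scoring: trust propagates outward from the local user,
# decaying with each hop; content is scored by how trusted its voters are.

def trust_scores(trust_graph, me, decay=0.5, max_hops=3):
    """Breadth-first trust propagation from `me`.

    trust_graph: dict mapping user -> set of users they directly trust.
    Returns a dict of user -> trust weight in (0, 1].
    """
    scores = {me: 1.0}
    frontier = [me]
    for hop in range(max_hops):
        weight = decay ** (hop + 1)
        next_frontier = []
        for user in frontier:
            for peer in trust_graph.get(user, set()):
                if peer not in scores:  # first (shortest) path wins
                    scores[peer] = weight
                    next_frontier.append(peer)
        frontier = next_frontier
    return scores

def article_score(votes, scores):
    """Weight each signed vote (+1 / -1) by the local user's trust in the voter.

    Strangers (unknown voters, e.g. spammers with fresh identities)
    get weight 0, so their votes simply don't count for this user.
    """
    return sum(sign * scores.get(voter, 0.0) for voter, sign in votes.items())

graph = {"alice": {"bob"}, "bob": {"carol"}}
s = trust_scores(graph, "alice")          # bob: 0.5, carol: 0.25
print(article_score({"bob": 1, "carol": -1, "mallory": -1}, s))
```

Since every user runs this against their own trust graph, two rival “unicorns” groups each score the other’s posters down and effectively see disjoint content, which is how the namespace collision resolves itself.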
I like the idea of this, but the fact is that I almost never visit reddit on my desktop. That’s my workstation, and reddit is anything but helpful for me when I’m trying to be productive.
The 6-month fade away is going to assuredly cause a ton of rehashes of the same topics/posts, which people complain about enough on reddit (and other sites). It’ll either become super annoying or just end up being an accepted part of a temporary-content platform.