1. 58
    1. 7

      As usual with decentralized systems, the main problem I had was discovering good feeds. One could find stuff if one knew what one was looking for, but most of the time these feeds only contain the first few lines of an article. And then again, there are other feeds that just post too much, making it impossible to keep up. Not everyone comes to RSS/Atom with a precomposed list of pages and websites they read.

      These are the “social standards”, which I believe are just as important as the technical standards and should be clarified in a post like this one.

      1. 6

        I agree. Finding good feeds is difficult indeed, but I believe that good content does spread by word of mouth at some point (that word of mouth may even happen on social media, actually). Feeds that post too much are definitely a problem. Following the RSS/Atom feeds of newspapers specifically defeats the purpose: nobody can manage this hilarious amount of posts, often barely categorised. I don’t have a solution for these at hand; this article suggests that the standard should be improved on this point, and it might be a good idea to do so.

        I don’t like excerpt feeds, because they are hard to search using the feed reader’s search facilities. However, I still prefer an excerpt feed over no feed at all, which is why the article mentions this as a possible compromise. The main reason for excerpt feeds appears to be to draw people into the site owner’s Google Analytics.

        1. 3

          As far as unmanageably large and diverse sites go, I seem to recall that at The Register you can/could run a search and then get an RSS feed for current and future results of that query. Combined with ways to filter on author etc., that worked a treat.

      2. 2

        the main problem I had was discovering good feeds

        This is why my killer feature (originally of GOOG Reader and now of NewsBlur) is a friends/sharing system. The value of shared content is deeply rooted in discovery of new feeds.

        feeds only contain the first few lines of an article

        Modern feed readers generally make it easy to get full articles/stories without a context switch.

        feeds that just post too much, making it impossible to keep up

        Powerful filtering is another area where modern readers have innovated. I’d definitely check them out, because these are solved problems.

        1. 2

          Can you recommend any specific readers?

          1. 1

            NewsBlur is pretty great. It’s a hosted service, rather than a local application, but that’s kind of necessary for the whole “sharing” thing.

          2. 1

            If you’re an emacs user: elfeed. It has pretty good filtering and each website can be tagged.

            1. 1

              I tried that for a while, but eventually I just couldn’t keep up. I never really have the time to read an article when I’m in Emacs, since usually I’m working on something.

          3. 1

            I have been quite pleased with NewsBlur. It has the added benefit of being open source, so if it were to disappear (cough, cough, GOOG Reader), it could easily be resurrected.

            For the social aspect, of course, you might want to poll friends first to see what they are on.

    2. 5

      So many comments here, I’m a little overwhelmed. Thanks to everyone <3

      Something that crossed my mind: maybe it would be possible to join the RSS/Atom efforts with the efforts in the area of decentralized social networks, like Mastodon? Forgive me, I’m not a Mastodon user (yet), but maybe there is some kind of possible integration… Maybe RSS/Atom feeds could be “followed” somehow?

    3. 4

      I still use RSS and newsboat (fork of newsbeuter) to read all of my news daily. I haven’t found anything out there compelling enough to pull me away.

    4. 4

      There’s also JSON Feed (often served as feed.json), which serves the same purpose but uses JSON instead of XML:

      https://jsonfeed.org
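
      For reference, a JSON Feed document is roughly the following shape. This is a minimal sketch based on my reading of the jsonfeed.org spec; the site name, URLs, and dates are made up:

      ```ts
      // Minimal JSON Feed document, written out as a TypeScript literal.
      // Field names follow the jsonfeed.org spec; all values here are hypothetical.
      const feed = {
        version: "https://jsonfeed.org/version/1",
        title: "Example Blog",
        home_page_url: "https://example.org/",
        feed_url: "https://example.org/feed.json",
        items: [
          {
            id: "https://example.org/posts/hello", // stable identifier, like <guid> in RSS
            url: "https://example.org/posts/hello",
            title: "Hello, world",
            content_html: "<p>First post.</p>",
            date_published: "2018-06-01T12:00:00Z", // RFC 3339 timestamp
          },
        ],
      };
      ```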

      1. 20

        In my opinion, jsonfeed is doing active harm. We need standardization, not fragmentation.

        1. 2

          Well, as long as people are just adding an additional feed (XML/RSS plus JSON), that seems fine: you can have two links in your headers, as in the sketch below. Over time, most readers will probably add support, and then it shouldn’t matter which format your feed is in. That’s kinda how we got to where we are today.
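
          A rough sketch of what that could look like in a page’s head (the paths and the JSON MIME type here are my own assumptions):

          ```ts
          // Both formats advertised side by side via autodiscovery <link> tags;
          // readers that only understand one format simply ignore the other.
          // The paths and the MIME type for the JSON feed are assumptions.
          const feedLinks = `
            <link rel="alternate" type="application/atom+xml" title="Atom feed" href="/feed.atom">
            <link rel="alternate" type="application/feed+json" title="JSON feed" href="/feed.json">
          `;
          ```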

      2. 10

        How widespread is support for this in feed readers? RSS and Atom have very broad support among feed readers, so unless there’s a compelling reason, a working and widely supported standard shouldn’t be replaced just because of taste.

    5. 2

      Working on it: https://getstream.io/blog/winds-2-0-its-time-to-revive-rss/ It’s not so easy though; it’s a vicious cycle: fewer people use RSS, so fewer publishers support RSS, so RSS tools degrade in quality, and so on.

      You wouldn’t believe the number of if statements in the Winds codebase just to make RSS work (ish). The standard isn’t really much of a standard, with everyone implementing small variations. Here’s an example: not all feeds implement the guid properly, so you end up with code like this: https://github.com/GetStream/Winds/blob/master/api/src/parsers/feed.js#L82
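
      For readers who don’t want to click through, the gist is: when the guid is missing or unusable, the reader has to synthesize a stable identity for the item itself. A rough TypeScript sketch of that kind of fallback (not the actual Winds code; the field names are assumptions):

      ```ts
      import { createHash } from "crypto";

      // Hypothetical shape of a parsed feed item; real parsers expose more fields.
      interface Entry {
        guid?: string;
        link?: string;
        title?: string;
        pubDate?: string;
      }

      // Derive a stable id: prefer <guid>, then the item link, then a hash of
      // title + date, so the same article doesn't show up as "new" on every fetch.
      function stableId(entry: Entry): string {
        if (entry.guid && entry.guid.trim() !== "") {
          return entry.guid;
        }
        if (entry.link) {
          return entry.link;
        }
        return createHash("sha1")
          .update(`${entry.title ?? ""}|${entry.pubDate ?? ""}`)
          .digest("hex");
      }
      ```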

      1. 1

        Now, that looks like an interesting project. I have updated my SaveRSS page to include a link to Winds in the RSS clients section. You might also consider linking to the SaveRSS page for arguments on why to use RSS/Atom as a publisher.

        Personally, the project isn’t for me, though. I’m a happy user of elfeed, but I can absolutely see how your project can benefit the RSS/Atom community.

      2. 1

        Dang, this bloatware is pushing 6k stars on github already. Nothing like an RSS reader that combines Electron, Mongo, Algolia, Redis, machine learning (!), and Sendgrid

        1. 1

          The goal is to build an RSS-powered experience that people will actually want to use. The tech stack is based around the idea of having a large group of people being able to contribute. (We use Go & RocksDB for most of our tech, so it was a very conscious move to use Node & Mongo for Winds to foster wider community adoption.)

          1. 1

            Makes sense. Thanks for the gracious reply, I feel bad about my grumpy comment.

    6. 2

      RSS was a great concept (and appropriate for its time), but was designed by people who didn’t comprehend XML namespaces, instead forcing implementations (both generators and readers) to escape XML and/or HTML tags, which requires multiple passes for generating and parsing feeds - with an intermediate encoding/decoding step (Really Simple?). They purportedly addressed this in RSS 2.0, but if you have a look at their RSS 2.0 example, they still got it wrong, persisting a 1990’s understanding of the web. Although I still use it, I shake my head in disappointment every time I see RSS source. RSS 2.0 should really have been based on something that could be validated, such as XHTML.
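
      To make the “multiple passes” point concrete: the article body travels as escaped text inside the description element, so the generator escapes it, and the reader has to parse the XML and then parse the recovered string as HTML all over again. A toy sketch (my own illustration, not code from any feed library):

      ```ts
      // Toy illustration of RSS's escaping round-trip; not a real feed generator.
      const escapeXml = (s: string): string =>
        s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

      const body = "<p>Hello <em>world</em></p>";

      // Pass 1 (generator): escape the HTML so it can sit inside <description>.
      const item = `<item><description>${escapeXml(body)}</description></item>`;

      // Pass 2 (reader): an XML parser recovers the escaped text...
      // Pass 3 (reader): ...which must then be parsed again as HTML before rendering.
      console.log(item);
      ```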

      At this point, it is probably way too late for a comeback, as:

      1. Social media platforms like Twitter are commonly used as a substitute and have a large hegemony over content.
      2. Browsers have given up on RSS in favor of their own peculiar readers.
      3. Google, Microsoft, Yandex and whatever Yahoo is now are pushing for an entirely different system based on extracting information from HTML content via an ever-changing pseudo-ontology that lacks definitions and is inconsistently employed by every practitioner.

      You could read the above points as things that RSS should be able to overcome. If RSS were indeed to make a comeback, I would hope that in a new “RSS 3.0” incarnation it would satisfy the following criteria:

      1. Standard comes before implementation (e.g., utilize existing standards).
      2. Validatable (e.g., employ XML namespaces and utilize an XSD for document validation).
      3. Human readable (i.e., a subset of XHTML or HTML that can be consistently rendered in any modern web browser).
      4. Strict specification (use a well-defined structure with a minimal tag set that prevents multiple interpretations of the specification).

      I’ll admit, I do not like JSON one bit, because it is antithetical to several, if not all, of the above criteria. However, since a JSON alternative is desired, I would recommend that it be directly based on an XML/HTML version that does satisfy the above criteria. Then a simple XSL (read “standardized”) stylesheet could be employed to generate the equivalent JSON version, satisfying both worlds.

      1. 9

        Doesn’t Atom fulfill your RSS 3 criteria?

      2. 2

        they still got it wrong, persisting a 1990’s understanding of the web. Although I still use it, I shake my head in disappointment every time I see RSS source. RSS 2.0 should really have been based on something that could be validated, such as XHTML.

        Atom does fulfill your second list’s criteria, is often used today in place of RSS, and can even be validated. My article even says that if in doubt, use Atom.

        Social media platforms like Twitter are commonly used as a substitute and have a large hegemony over content.

        The entire point of the site is to set something against this before it is too late. Today, there are still many sites providing feeds, and I do hope that this article will sustain that. To be clear, I don’t advocate leaving social media. All I ask in that article is to provide a feed in addition to your social media presence.

        Browsers have given up on RSS in favor of their own peculiar readers.

        I’ve actually never used Firefox’s RSS/Atom support, and I don’t believe that browsers are the correct target for RSS/Atom feeds. There are feed reader programs that deal specifically with feeds, and they are still being maintained, so I don’t see browsers removing their feed support as problematic.

        Google, Microsoft, Yandex and whatever Yahoo is now are pushing for an entirely different system

        You listed yourself why it isn’t a real alternative.

    7. 1

      The big question is whether they actually need to be saved.

      A year ago I would have said no. Feed readers may not be as popular as Facebook, Twitter, etc. but sites have feeds and I can use them.

      For the most part, this is still true. Luckily, lots of sites are built on WordPress, Drupal, etc., and those come with feeds out of the box. Sometimes the author may not even know they provide a feed for me.

      However, lately I have the feeling this is in decline. It seems a wix.com blog (yet another DIY website UI) does not provide a feed by default. Some WordPress blogs lack the auto-discovery HTML header for the feed. These are signs that supporting RSS/Atom is not that important to content producers anymore.

    8. 1

      I still use RSS routinely to read academic journals. For this use case, Zotero is an excellent feed reader.

    9. 1

      I recently set up my own tt-rss instance after getting tired of the commercial nature of NewsBlur and Feedly (I even had a NewsBlur subscription at one point).

      tt-rss is pretty nice and I even get all my YouTube channel subscriptions through it.

    10. 1

      I wasn’t aware that RSS and Atom were at risk of disappearing. I use the excellent rss-bridge to get non-RSS feeds into reasonable formats (instance here if anyone wants to try it out) and rss2email for most of the stuff I pull from there.

      I also use the news reader on Nextcloud as a place to put links to investment notices, and use newsboat for tracking sites I like. I’m contemplating reinstalling selfoss for mobile RSS, but tbh I found myself using wallabag more.