I have copied the latest git revision
c67b943aa894b90103c4752ac430958886b996b2
from https://gitlab.tt-rss.org/tt-rss/tt-rss to my gitea instance, which is mirrored to https://gitlab.com/nodiscc/tt-rss and https://github.com/nodiscc/tt-rss. I don’t intend to make changes or bugfixes (it’s working fine), but I will try to keep it compatible with the PHP version in Debian stable, since I’ve been using it for years and would really like to keep doing so.
The loss of Google Reader is basically what taught me not to get too attached to services I can’t host myself. I’m hosting an older version of TT-RSS (due to migration issues with newer versions), and will continue with that until it no longer works for me, and then I will probably move on to CommaFeed. I’ve already tested all the commonly self-hosted RSS readers out there, and that’s the one that fits my needs best, other than TT-RSS.
I don’t get why people use web services for RSS, it can be done completely client-side, that’s… kind of the whole point of RSS…
You could want to have multiple clients in sync.
Also, a web service could be fetching 24/7 and running classification algorithms before serving results to a client that only connects a few times a day.
No, it isn’t the whole point. The point is to curate our own news. And a separate question is how to browse the results. If you use two devices, you might want a server side solution. Maybe. There are many reasonable setups.
To keep it synchronized between devices
In my case (not necessarily your case, of course), the biggest selling point has become that I already have a browser open for almost everything else, so it’s one less thing to install and check in on. It’s also easier to keep my reading in sync when individual computers have problems, and a web service usually has a nicer API for scripting, if you need that sort of thing.
Try a good one such as Inoreader or NewsBlur, you’ll never look back
Is freshrss the best alternative at this point?
Yeah, it’s great, fast, works with lots of local clients, and has lots of plug-ins for whatever esoteric need you might have. I can fly through the day’s articles very quickly with a handful of key presses.
I like FreshRSS - I also have some readers that connect to my instance, like FluentReader that provides a better full article view, but I mostly use FreshRSS directly these days.
Looks like it supports a wide range of readers with two different APIs.
FreshRSS supports access from mobile / native apps for Linux, Android, iOS, Windows and macOS, via two distinct APIs: Google Reader API (best), and Fever API (limited features, less efficient, less safe).
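For anyone curious what using the Fever API against a FreshRSS instance looks like in practice, here is a minimal, hedged sketch. The base URL, username, and API password below are illustrative assumptions (FreshRSS typically exposes Fever at `/api/fever.php`, but check your own instance’s settings); the part I’m confident about is the Fever auth scheme, which sends an `api_key` equal to `md5("username:api_password")`.

```python
# Minimal sketch of talking to a FreshRSS instance via its Fever-compatible API.
# BASE_URL, USERNAME, and API_PASSWORD are hypothetical placeholders.
import hashlib
import json
import urllib.parse
import urllib.request

BASE_URL = "https://freshrss.example.net/api/fever.php"  # assumed endpoint path
USERNAME = "alice"        # hypothetical user
API_PASSWORD = "secret"   # the separate "API password" configured in FreshRSS

# Fever authenticates every request with md5("username:api_password"),
# posted as the form field "api_key".
api_key = hashlib.md5(f"{USERNAME}:{API_PASSWORD}".encode()).hexdigest()

def fever_request(**params):
    """POST the api_key to the Fever endpoint and return the decoded JSON."""
    url = BASE_URL + "?api&" + urllib.parse.urlencode(params)
    data = urllib.parse.urlencode({"api_key": api_key}).encode()
    with urllib.request.urlopen(url, data=data) as resp:
        return json.load(resp)

# Example (needs a live instance, so not executed here):
#   fever_request(unread_item_ids="")  # -> JSON including unread item IDs
```

The limitations the comment above mentions follow from this design: the Fever protocol only transmits an md5 hash and has a smaller feature surface, which is why the Google Reader API is usually the better choice when a client supports it.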
If you already have a Nextcloud instance I can recommend the “App” called News. There is an official android app that works well.
I have had nextcloud in the past and may go back.
It always has been.
I use and like it.
I didn’t like it, as it didn’t have the exact full-article view mode I wanted, but lots of people like it.
deleted by creator
The gas chamber guys? Good riddance
Care to elaborate?
https://community.tt-rss.org/t/a-category-named-gas-chamber/649/7
That, and every time tt-rss is mentioned somewhere, people have stories about how the dev is an ass.
🤣
I really enjoy tt-rss. I self-host it, so I guess I’ll keep using it until I find a replacement, but this is sad.
Kinda hope someone else picks up the work cause I have been using ttrss for well over 15 years.
While not the same, I use an RSS-to-email service that hits the minimal sweet spot for me.
I no longer find it fun to maintain public-facing anything
I think the kids would say: “Mood.”
The post is a bit low on details, but I strongly suspect this is a victim of AI scraping.
It really doesn’t seem like that’s the case. It doesn’t even make much sense. What do you think was being AI-scraped? The source code?
It makes a lot of sense. Both the git repos they hosted and things like an RSS feed reader are prime targets for AI scrapers, and at the same time they’re quite database-query-heavy on the backend, so the scraping really has a big impact on the cost of running these services.
And yes, source code is among the most targeted data for AI scrapers to ingest, mainly to train coding assistants, but apparently it also helps LLMs understand logic better.
First, the source code is on GitHub.
Second, RSS aggregators are self-hostable, not a service provided by the dev. The dev would have no issues if a public instance of tt-rss hosted by someone else got scraped.
Third, RSS aggregators don’t really tend to be public-facing. Due to their personal nature they don’t tend to be open; they’re account-based.
Sorry, I really don’t see the case here.
What? They explicitly talk about shutting down their self-hosted infrastructure, which includes two git services and other targets of AI scraping. Did you even read the post?
They are closing the whole project.
Specifically, they say that they are tired of pushing fixes and that they don’t find excitement in maintaining the project, with zero mention at all of being scraped or having any kind of AI-related issue.
I don’t know if you knew the project before seeing this post. I did: I was choosing between this and FreshRSS and chose FreshRSS specifically because I knew the end of tt-rss was close (this was like 2 years ago). There were a lot of signs that development was ending and the project was on track to be abandoned.
No, they are shutting down their publicly hosted infrastructure and say that the project is “finished” anyway, so it doesn’t matter that much as a justification. But the main point of the post is the public-facing infrastructure and how they lost motivation to run it.
Well, they’ve also been maintaining the software since 2005. They said why they’re closing shop, so why not take their words at face value? They have no obvious reason to lie.
Many of us have started and maintained projects and then moved on when our lives changed. That is just normal.
Yes, and the reason they state sounds a lot like AI scraping made hosting public services such a PITA that they lost motivation to continue doing it. Lots of long-running projects that used to require very little maintenance are now DDoSed by these scrapers.