Yeah, why not post the source? Any reason why it only provides links to the posts, rather than an embedding of the post on the page?
Oh yeah, I did that initially. Two reasons
You should try to find a solution, probably cache the entire page. It doesn’t seem like it changes too often and everyone sees the same content anyway. Also, just keep the content of the post, no need for any CSS.
That would be no problem; I could also get the user icons of the people who posted. I was thinking it might upset some server/content owners. It feels like a repost to copy it directly, no?
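Keeping just the post content, as suggested above, could be as simple as stripping the markup server-side before caching it. A minimal stdlib sketch (names are illustrative, not the site's actual code):

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect just the text content of a post's HTML, dropping tags and CSS."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def post_text(post_html: str) -> str:
    """Reduce a post's HTML to plain text suitable for caching/embedding."""
    parser = TextOnly()
    parser.feed(post_html)
    return " ".join(s.strip() for s in parser.parts if s.strip())
```

The stripped text could then be cached alongside the post link, so the page never needs to hit the origin server on render.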
I think there is a DNS issue:
$ nslookup lazymastodon.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
*** Can't find lazymastodon.com: No answer
I have this issue with two different ISPs so…
Hacker News killed it, so I had to make a quick backup on S3:
http://lazymastodon.com.s3-website-eu-west-1.amazonaws.com/
If you clear your DNS cache, the domain should point there now.
Yeah, but I’ve noticed that Switter isn’t always careful enough with those. I personally blocked Switter because I would often see ads from it.
This is really nice for getting a better overview of the fediverse and finding new people to follow.
Thanks a lot!
Glad you like it. It feels to me like the good old internet days, where you'd come across quirky communities on the regular.
You know at first I thought “I wish it would show the post’s text”, but when you click through, you discover all the other content that is there. Very cool. I’m actually surprised at the amount of users all over!
Nice! Your title seems to have been truncated? I'm interested in the source - it would be cool if you adapted an open ranking algorithm like Reddit's. To my biased eye, the ranking surfaces some pretty old posts.
Yeah, old posts can get bumped when someone replies to or "boosts" them. But I actually like the result, because it's more for exploration than yet another website to get addicted to.
I love C++, but lots of STL additions annoy me. For example, std::variant and std::visit are really clunky to use, and there are plenty of other examples.
I'm using Syncthing at home, just mirroring and syncing a folder across multiple machines.
One downside I see is the lack of storage somewhere else while all laptops are at home. Geographic risk.
It also requires all machines to store the full state. ~100GB in my case.
One popular differentiation between file synchronization and backups is that backups let you travel back in time. What happens if you - or, more realistically, software you use - deletes or corrupts a file in your Syncthing folder? It would still be gone/corrupted, and the problem would automatically be synced to all your machines, right?
Personally I use borgbackup, a fork of attic, with a RAID 1 in my local NAS and an online repository. Honestly, I don't sync to the online one very often, because even deltas take ages with the very low bandwidth I have at home, so I did the initial upload by taking disks/machines to work... and I hope the online copies are recent "enough". I also can't shake the thought that in scenarios where both the disks in my NAS and the original machines are gone/broken (fire at home, burglary, etc.), I would probably lose access to my online storage too. I should test my backups more often!
I use Borg too! At home and at work. I also highly recommend rsync.net, who are not the cheapest, but have an excellent system based on firing commands over ssh. They also have a special discount for borg and attic users http://www.rsync.net/products/attic.html
Hmm - that’s really not the cheapest!
3c/GB (on the attic discount) is 30% dearer than S3 (which replicates your data to multiple DCs, vs. rsync.net, which only has RAID).
True, though S3 has a relatively high outgoing-bandwidth fee of 9c/GB (vs. free for rsync.net), so you lose about a year of the accumulated 0.7c/GB/mo savings if you ever do a restore. Possibly also some before then, depending on what kind of incremental backup setup you have (is it doing two-way traffic to the remote storage to compute the diffs?).
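For concreteness, the break-even arithmetic with the thread's quoted prices (which may well be out of date) works out like this:

```python
# All figures are the prices quoted in this thread, in cents per GB.
rsync_net_per_gb_mo = 3.0   # rsync.net storage on the attic/borg discount
s3_per_gb_mo = 2.3          # ~30% cheaper storage
s3_egress_per_gb = 9.0      # S3 outgoing bandwidth fee (free on rsync.net)

savings_per_gb_mo = rsync_net_per_gb_mo - s3_per_gb_mo   # ~0.7 c/GB/mo
months_to_recoup_restore = s3_egress_per_gb / savings_per_gb_mo
print(f"{months_to_recoup_restore:.1f} months")  # ~12.9, i.e. about a year
```

So one full restore from S3 eats roughly a year's worth of the storage-price savings.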
Ahh, I hadn’t accounted for the outgoing bandwidth.
That said, if I ever need to do a full restore, it means both my local drives have failed at once (or, more likely, my house has burned down / flooded); in any case, an expensive proposition.
AFAIK Glacier (at 13% of rsync.net's price) is the real cheap option (assuming you're OK with recovery being slow or expensive).
RE traffic for diffs: I'm using Perkeep (née Camlistore), which is content-addressable, so it can just compare the lists of filenames to figure out what to sync.
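The content-addressable idea above, sketched generically (this illustrates the concept, not Perkeep's actual protocol): each file is identified by a hash of its bytes, so deciding what to upload is a set difference, with no two-way diff traffic.

```python
import hashlib
from pathlib import Path

def blob_refs(root: Path) -> set:
    """Content-address a tree: the set of SHA-256 hashes of its files' bytes."""
    return {
        hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

# Sync = upload only the blobs the remote doesn't already have:
# to_upload = blob_refs(local_root) - remote_refs
```

A nice side effect is deduplication for free: identical files hash to the same blob, so they are stored and transferred once.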
Eh - I don’t mind paying for a service with an actual UNIX filesystem, and borg installed. Plus they don’t charge for usage so it’s not that far off. Not to shit on S3, it’s a great service, I was just posting an alternative.
Yeah that’s fair, being able to use familiar tools is easily worth the difference (assuming a reasonable dataset size).
Syncthing is awesome for slow backup stuff, but I wish I could configure it to check for file changes more often. Currently it takes about 5 minutes before a change is detected, which results in me using Dropbox for working-directory use cases.
You can configure the scan interval for Syncthing, and you can also run the syncthing-inotify helper to get real-time updates.
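For reference, the scan interval is a per-folder setting in Syncthing's config.xml (the path and values here are illustrative):

```xml
<!-- config.xml: rescanIntervalS is the per-folder scan interval, in seconds -->
<folder id="work" path="/home/alice/work" rescanIntervalS="30">
</folder>
```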
That’s one huge advantage of Resilio Sync. You don’t have to store the full state in every linked node. But until RS works on OpenBSD, it’s a no-go for me.
Does this respect #nobot? (It’s essentially the same as an ‘ambassador bot’, of which there are several, & the accepted way to inform ambassador bots that you don’t want your profile indexed is putting ‘#nobot’ in your bio. A lot of people will probably get pissed off if this policy isn’t respected.)
OK, I added this feature: I check every post for the string #nobot before including it. Thanks for the advice.
#nobot is usually a user-level constraint – if it’s in a user’s bio, then they don’t consent to have their public posts scraped by an automated process. (I’m not sure if any process that supports this also searches for it in the post body. It’s sort of an ad-hoc thing, made in part in response to my early malfunctioning followbot project, & I had stopped working on that by the time it was accepted.)
If you’re using mastodon.py, you just need to grab the user ID from the post object, drop it into a ‘get user info’ request (the same way you’d get an avatar), and search the bio/profile.
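A sketch of that check (assumed names: `api` is an authenticated mastodon.py `Mastodon` client; the bio arrives as HTML in the account's `note` field, so strip tags before searching):

```python
import html
import re

def opted_out(account: dict) -> bool:
    """True if the account's bio contains #nobot (case-insensitive)."""
    # 'note' is the bio field on a Mastodon account object; it is HTML,
    # so drop the tags and unescape entities before looking for the tag.
    bio = re.sub(r"<[^>]+>", " ", account.get("note", ""))
    return "#nobot" in html.unescape(bio).lower()

# Usage sketch with mastodon.py:
# account = api.account(status["account"]["id"])  # the 'get user info' request
# if opted_out(account):
#     ...  # skip this user's posts entirely
```

Checking the bio rather than the post body matches the convention described above: #nobot is a per-user opt-out, not a per-post one.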