This is a very thin wrapper around the Lobste.rs API. It currently uses only the read-only endpoints, but in the future I might add authentication, commenting, etc. For now, it serves my purposes.
It’s a little light on the documentation side; I’ll get to it soon and write some.
Interesting. I’m curious how this system works out. I could imagine a bit of work coming up as soon as you have multiple accounts, transfers between them, and stuff like that.
Not handling that is deliberate. This is meant for keeping track of your expenses; if you transfer money between your accounts, you’re not spending money.
If you want to keep track of things like sending money to a savings account, just add it as an expense. You’re supposed to count savings as money you can’t touch anyway, so it makes sense to record them as expenses. If you want to hide those from the stats the script shows, just make a category for them and hide it with SQL (or show it differently), or filter those rows out with sed.
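For the filtering route, here’s a sketch of what that could look like. The column layout is my assumption (`date,category,amount,description`), not necessarily the actual one:

```shell
#!/bin/sh
# Assumed CSV layout: date,category,amount,description
# Build a small sample file to demonstrate on.
cat > /tmp/expenses.csv <<'EOF'
2021-01-02,food,12.50,groceries
2021-01-03,savings,100.00,transfer to savings
2021-01-05,food,4.20,coffee
EOF

# Drop every row whose category column is "savings"
grep -v '^[^,]*,savings,' /tmp/expenses.csv

# Then sum the remaining amounts (third column) with awk
grep -v '^[^,]*,savings,' /tmp/expenses.csv \
  | awk -F, '{ sum += $3 } END { print sum }'
# prints 16.7
```

The same `grep -v` pattern works as a pre-filter in front of whatever stats pipeline you already have.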
Sure. What I mean is at some point you’ll run into other such edge cases, every one of which would require updating the SQL scripts you’ve written, to the point where your setup would start resembling an existing piece of accounting/budgeting software. :)
Interesting idea. What does data entry look like? You mentioned that you used an Android app previously; do you use an app on Android for adding entries to the CSV file(s) too?
I’m currently entering the data on the computer, but I have some ideas on how to make it work with my phone.
I can also just edit the CSV files from my phone and run the script from Termux. I keep that folder synced with Syncthing.
Adding a command to my XMPP bot
Nice, I think I may copy this idea.
Did you have any batching requirements (primarily to cut down on radio time), or does it always send updates in ~real-time?
If mobile data and WiFi are not available, the app stores the data and uploads it once it regains connectivity. That’s why the timestamp comes from a query parameter instead of the server’s clock.
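The receiving side can pull that timestamp out of the CGI query string. A minimal sketch (the parameter name `ts` and the query layout are my assumptions, not the actual implementation; in real CGI, `QUERY_STRING` comes from the web server’s environment):

```shell
#!/bin/sh
# parse_ts: extract the "ts" parameter from a CGI-style query string,
# so the entry is stamped with the client's time, not the upload time.
parse_ts() {
    printf '%s' "$1" | tr '&' '\n' | sed -n 's/^ts=//p'
}

# Example: the app uploads ?ts=1514764800&amount=12.50
QUERY_STRING='ts=1514764800&amount=12.50'
ts=$(parse_ts "$QUERY_STRING")
printf 'recorded at client time %s\n' "$ts"
# prints: recorded at client time 1514764800
```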
What front-facing proxy are you using that makes CGI your preference here?
My hosting provider uses Apache. On my VPS and sometimes locally, I use lighttpd as well.
Hey, I occasionally blog about reverse engineering, networking, and other stuff that I find interesting. I can’t find much time to blog because of my work, but I’m still trying to keep it going.
By the way, this guide, and the crate it showcases, assumes that you have an IPFS node running on your localhost. It seems that setting that up and running it is out of scope for this guide.
Yeah, I didn’t go over that part because it’s quite easy to set IPFS up. Just get the package from your repos or download the binary (it’s statically compiled Go).
After that it’s just `ipfs init` to initialize everything and `ipfs daemon` to run your local daemon.
No example pictures? :(
Hey, sorry I didn’t put any in; I did this a while ago and don’t have too many on hand. Here are a couple I found on my computer.
Can you fix the link to the place where you say you can read this article on IPFS here - where here links back to your article? I assume it should go somewhere else.
But it looks real interesting.
That link is right. It’s pointed at an ipfs gateway (ipfs.io), which is what the story link here also points to. So yes, you’re reading the article hosted on ipfs.
As duclare said, I posted the link from an IPFS gateway instead of my server, which means you are already reading the article on IPFS.
Also, if you have IPFS installed on your computer, you can read it directly from your own node.
I misread that completely! I thought it said “you can read about IPFS here”. Doh!
So if I update my blog every other day, I will have to update my DNS records to the new hash?
I use a statically generated folder structure. If I change one file within that folder, does the top-level hash change?
Can you host a DB-backed site in this way?
If you update your blog, you can run `ipfs name publish <new hash>`, so with IPNS your blog should update automatically.
If you change one file, the top level hash changes, but people won’t need to re-download the old files (I think).
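The update workflow would be something like this (a sketch using stock go-ipfs flags; the site directory path is a placeholder):

```shell
# Re-add the changed site directory; -r recurses, -Q prints only the
# new root hash. Unchanged files keep their old block hashes, so
# peers only have to fetch the blocks that actually changed.
HASH=$(ipfs add -r -Q ./public)

# Point your IPNS name at the new root hash
ipfs name publish "$HASH"

# Readers keep using the stable IPNS name, which now resolves
# to the latest root hash
ipfs name resolve
```

This needs a running local `ipfs daemon`, which is why I can’t show output here.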
There are some projects working on db-backed sites, but I’m not sure if any of them are ready yet. You can use `ipfs pubsub` to send p2p data to channels; I think you can use that for some dynamic content.
Do you still have to republish the IPNS stuff every day? When I last looked IPNS hashes would only last for one day.
Looks like it. I push my site ( http://chriswarbo.net ) to IPFS, IPNS and an EC2 server. I’ve not updated it for several days, and the IPNS name doesn’t resolve anymore ( http://ipns.io/ipns/chriswarbo.net )
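If it’s just the ~24-hour record lifetime, one workaround is to republish from cron before the record expires. A sketch (the path and schedule are placeholders):

```shell
# crontab entry: re-add the site and republish its IPNS name every
# day at 03:00, since IPNS records expire after roughly 24 hours
# with default settings
0 3 * * * ipfs name publish $(ipfs add -r -Q /home/user/site) >/dev/null 2>&1
```

This assumes the machine running cron has the IPFS daemon up and holds the key for that IPNS name.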
The post doesn’t mention any results from this approach, do you have any?
I can’t publish actual results for obvious reasons, but it does find a few servers in a short time (~15-30 minutes maybe, I wasn’t paying attention to the terminal)
Cool, that’s all I meant really. Can you say how many IPs you had to hit before finding those few? Or the average IPs per second? Thanks.
Scanning the internet randomly in that way is not gonna lead to a lot of results, at least not in any reasonable time frame.
If you instead look at sites that crawl the internet for a living, you get 17,000 results. Not all are actually Redis nodes, and not all Redis nodes are completely open.
Attack vectors that use Redis to compromise the whole system have been known for quite some time, and Redis now has better defaults and protected mode enabled by default. But people tend not to update it. We still regularly have users coming into the IRC channel asking for help with cleaned-out/exploited Redis nodes.
I keep reminding people not to open up each and every service to the whole wide internet.
Yeah indeed, that’s exactly why I asked for the results - I’m curious to see if they found a single one with this technique.