(Mods: I feel like there’s almost no conversation to be had here except on security grounds. Feel free to remove tag if considered non-applicable.)
Having not seen this on lobste.rs yet, I decided to submit it so as to head this off at the pass.
Reading the proposition on their website, it’s pretty clear it’s a bad idea:
But thanks to the power of Open Source!!!, we can look a bit further. And the results are horrifying:
Just reading the source will turn up more, and I've barely spent an hour looking at this. I'm sure the (currently closed-source) server has a lot more, but honestly it's depressing to think about.
Fie. Stay away. etc.
While I don't dispute that history on a hosted site is a bad idea by default, and doubly so when we consider all the terrible issues you listed, I wanted to reflect on one part of your comment: "Why would you want to save your shell history 'in the cloud' I mean really"
I have a shared history, and I love it. Context, host, timestamp, exit code and all that, easily searchable. I do a lot of trial & error research on throw-away hosts, and having previous history is very useful there.
I’m curious as to how you set this up!
More to the point: throw-away hosts? What kinds of things are you hosting? My question stems from working at a web agency: I work on a lot of different hosts, often from several different clients in a single day, but I can't think of an instance where I'd be doing similar things on different hosts and wanting that history available to reuse.
(Maybe one case where I’d want this: one of my clients uses EC2 scaling groups. Usually the instance I debug on yesterday doesn’t even exist today. That gets mighty annoying.)
I use throw-away hosts to reproduce issues customers are facing, and once a host is no longer needed, it gets purged from existence. I burn through plenty of hosts a week, but the knowledge I gain by doing stuff on them is something I want to keep. I work from Emacs pretty much all the time, including ssh'ing into the target systems.
I capture all commands and save them on my workstation, where Emacs runs (this is trivial with eshell and a bit of Emacs Lisp). So I have a single, unified history, with timestamps, output, exit code and whatnot. I have a small script that parses this and pushes it into ElasticSearch, and I can query that from Emacs again to have easy and convenient access to it. I also have a key combo that captures the last command and lets me tag it, or even turn it into a snippet I can paste later and fill in the blanks. This also gets indexed by ES.
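For anyone curious what the "parse and push into ElasticSearch" step could look like: here's a minimal sketch. The log format, field names, and index name are all my assumptions, not the actual setup described above; it just builds an `_bulk` payload from captured history lines.

```python
import json

# Assumed capture format: one JSON object per line, e.g.
# {"ts": "2019-03-01T10:00:00Z", "host": "repro-42", "cmd": "uname -a", "exit": 0}

def to_bulk_body(lines, index="shell-history"):
    """Turn captured history lines into an Elasticsearch _bulk request body."""
    chunks = []
    for line in lines:
        doc = json.loads(line)
        # Each document is preceded by an action line naming the target index.
        chunks.append(json.dumps({"index": {"_index": index}}))
        chunks.append(json.dumps(doc))
    # The _bulk API requires the body to end with a newline.
    return "\n".join(chunks) + "\n"

if __name__ == "__main__":
    sample = ['{"ts": "2019-03-01T10:00:00Z", "host": "repro-42", "cmd": "uname -a", "exit": 0}']
    print(to_bulk_body(sample))
```

You'd POST that body to `http://localhost:9200/_bulk` (or wherever ES lives) with `Content-Type: application/x-ndjson`.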
The global history and the capture file are in git, and I clone them whenever I need to work on a different machine: for example, if I work from home, I just git pull the stuff and run a reindex in the background to update ES on my home PC. It would be trivial to put ES behind a VPN, but it's faster if my searches are local.
There are probably better, more efficient ways to set this up, but this has worked remarkably well for me so far.
Any chance you have a link to the implementation of this? This sounds incredibly useful!
I plan to share the setup at one point, but it will take some time.
Have you done much with org-babel? I’ve considered using it for “executable” playbooks, similar in some ways to what you describe, though with your stuff, the creation of playbooks becomes almost trivial, as you can look back on and edit out the right commands…
No, haven’t yet. I am fairly new to Org, and while org-babel is on my list of things to explore, I have not been able to play enough with it yet.
I was going to say that a poor man's version could be increasing your history limit to 10,000 and setting up your .bash_profile to do something like a git commit + push of your history file on every command.
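Something along these lines, roughly (a config sketch, not tested as a daily driver; the repo path is made up, and pushing on every single command would be painfully slow, so this only commits):

```shell
# ~/.bash_profile — poor man's synced shell history.
# Assumes ~/history-repo is an existing git repo with a remote configured.
export HISTSIZE=10000
export HISTFILESIZE=10000
export HISTFILE="$HOME/history-repo/bash_history"
shopt -s histappend

# After every command: flush the in-memory history to the file and commit it.
# Push periodically (cron, logout hook) rather than per-command.
PROMPT_COMMAND='history -a; git -C "$HOME/history-repo" add bash_history && git -C "$HOME/history-repo" commit -qm "history $(date +%F-%T)" >/dev/null 2>&1'
```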
this is shockingly bad.
I was kind of hoping it was intended as a parody, TBH.
No such luck.
Was anybody else primed into initially thinking this was something about bash-github, because of the recent open letters to GitHub?