This is very clever.
However, this seems like solving the symptom rather than the cause: if the command and/or history of commands was important enough, shouldn’t a more rigorous approach towards provisioning be adopted? Or even command aliases?
Almost every time I’ve had to go looking through my (ba|z)sh history, it’s been indicative of a failure in my own processes, whether they be for remotely administering servers, or even my own personal machine.
It depends on your usage patterns. If you’re doing the same workflows over and over again more process can help. But a complex command you ran one time six months ago is best captured in command history.
Command history is the place where all automation should begin. In the spirit of YAGNI, don’t create a script until you run the commands manually three times.
Command history is actually a good source of things to automate, if you periodically try to look for patterns.
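To make that concrete, here's one quick way to mine history for automation candidates (the history file path is an assumption; adjust for your own $HISTFILE):

```shell
# Count the most frequent first words (commands) in a plain history
# file; anything near the top is a candidate for a script or alias.
awk '{print $1}' "$HOME/.bash_history" | sort | uniq -c | sort -rn | head -10
```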
Neat idea. How about making your .bash_history file a named pipe and have the script read from that, instead of using PROMPT_COMMAND?
I did this at a hackathon. I found it kept having weird sync issues with a lot of shells open, but didn’t have time to fully investigate.
Won’t this block your shell after a while if the reading process somehow dies?
Yep. Equally PROMPT_COMMAND could cause delays/hangs depending on connectivity to the remote system.
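For anyone curious, a rough sketch of the FIFO idea (paths are hypothetical, and note bash only writes $HISTFILE on exit unless you `history -a` from PROMPT_COMMAND):

```shell
# Replace the history file with a named pipe and run a collector.
mkfifo "$HOME/.bash_history.fifo"
# Collector, run in the background. If this process dies, any shell
# writing to the FIFO will block -- the failure mode noted above.
while IFS= read -r line < "$HOME/.bash_history.fifo"; do
  printf '%s\n' "$line" >> "$HOME/.history.log"
done &
```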
That’s a very creative idea that I hadn’t thought of. I did want to avoid interfering with the client system’s history mechanism though.
Ah, good point!
My passwords regularly end up in my $HISTFILE, both by accident and when connecting to certain services; it would be good not to store those in a central repository.
Not sure how you would tackle this issue…
Store the hash and blacklist content in the $HISTFILE based on the hash. If you get that one in a quadrillion false positive then you just accept that you lost some data for the sake of security.
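A minimal sketch of that hash-blacklist idea (file name is mine; assumes `sha256sum` is available):

```shell
# Filter stdin, dropping any line whose SHA-256 digest appears in a
# blacklist file -- the plaintext secret never has to be stored.
BLACKLIST="$HOME/.hist_blacklist"   # one hex digest per line (hypothetical)
while IFS= read -r line; do
  h=$(printf '%s' "$line" | sha256sum | cut -d' ' -f1)
  grep -qxF "$h" "$BLACKLIST" 2>/dev/null || printf '%s\n' "$line"
done
```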
Yeah, I like that…
Aside from the shared-secret security, it would be easy to add a blacklist file to the code (checking it's a 0400 file). I could implement this if you want.
Some encryption would be required; I'm pondering whether this should be SSL or a simpler scheme using the shared key.
If you think about it, there’s not actually a need for the central server to read the logs: it just needs to store & serve them to authorised clients.
You could have a single key shared by the clients, with SHA256(key || 'client-server key') being the client-server connection key and SHA256(key || nonce) being the line-encryption key. Then the clients have simple configuration and the server cannot read the records, but all clients can read any client's records.
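A sketch of that derivation in shell (variable names are mine; this only shows the key derivation, not the encryption itself):

```shell
KEY="the-shared-secret"                    # single key held by all clients
NONCE=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')   # per-record nonce
# SHA256(key || 'client-server key') -> client-server connection key
CS_KEY=$(printf '%s' "${KEY}client-server key" | sha256sum | cut -d' ' -f1)
# SHA256(key || nonce) -> per-line encryption key
LINE_KEY=$(printf '%s' "${KEY}${NONCE}" | sha256sum | cut -d' ' -f1)
```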
More complex schemes are possible, but this should be good enough for what I think you want to do.
Am keen to keep it as simple as possible - re-using the secret key seems like the shortest path (but I may be missing a technique).
Here’s what I’ve been doing for 15 years. In .bashrc:
unset HISTFILE HISTFILESIZE
export HIST_LOGFILE="$HOME/.log/shell/$(date +%Y)/$(date +%m-%d-%H-%M-%S)-$(hostname)-$(tty | perl -pwe 's,/,_,g')"
mkdir -p "$(dirname "$HIST_LOGFILE")"
export HISTTIMEFORMAT="%Y-%m-%d-%H-%M-%S "
trap 'history -w "$HIST_LOGFILE"' EXIT
Sometime in the past I switched to Zsh and started saving complete timestamps for each command. In .zshrc:
setopt extended_history hist_find_no_dups
export HIST_LOGFILE="$HOME/.log/shell/$(date +%Y)/$(date +%m-%d-%H-%M-%S)-$(hostname)-$(test -n "$TMUX" && echo "tmux-")$(tty | perl -pwe 's,/,_,g')"
mkdir -p "$(dirname "$HIST_LOGFILE")"
fc -l -n -t "%Y-%m-%d %H:%M:%S" -D -i 1 >! "$HIST_LOGFILE.tmp" && mv "$HIST_LOGFILE.tmp" "$HIST_LOGFILE" # -d doesn't include seconds
fc -l -n $* 1
Basically I turn off the default location for history to keep windows from clobbering each other. Each session gets its own private history file. To search for commands I use grep. To combine history from servers I periodically rsync. Everything is always private to guard against the password issue.
My priority is maintaining an audit trail of what I did in experiments. So YMMV. But if this works for you, it’s a lot fewer moving parts than OP.
I just do this with zsh; it handles merging multiple histories in parallel with a few setopt options. I've never really wanted to combine history between servers, though, just have common history amongst shells. I prefer sharing the history because I often end up in one shell wanting a command I just typed in another.
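These are roughly the .zshrc options that give shared history across parallel shells (my guess at the options meant here):

```shell
setopt SHARE_HISTORY         # share history between running sessions
setopt EXTENDED_HISTORY      # save timestamp and duration per command
setopt HIST_IGNORE_ALL_DUPS  # keep only the most recent copy of a duplicate
```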
I could probably sort out saving a history file a day but really can’t be arsed to bother, don’t see much value in that.
I expected this to be a post about BashHub, which I think we've linked before on Lobsters. I must admit I'd trust this solution a bit more than a company trying to make a profit from centralising shell history. 🙂
I just use the Python program RASH. It stores bash history in an SQLite DB.
Nice - that’s what I was originally looking for…