I use “non-standard” loosely here. I’m looking for CLI utilities that are definitely not part of the POSIX required or optional utilities and, more colloquially, aren’t considered to be standard BSD or *nix fare.
xsv! At work people often communicate data via csv/tsv. For better or for worse these can sometimes be multiple gigabytes. xsv lets me easily slice and dice things, or shove together a quick Unix pipeline to process it in well under a minute before someone else can even set up whatever unnecessary distributed big data tool they are primed to reach for. Plus, being able to easily “join” two csvs on a column without any prep is a godsend.
/u/burntsushi should set up that GitHub sponsors thing (or equivalent). Between xsv and the rust regex crate I owe him at least a couple beers.
Out of curiosity: is there a CSV tool that supports conversion between narrow and wide representations of data? By this I mean that most people prefer a presentation with many columns (and hence many values per row), but conceptually it is often better to have a narrow representation where each row represents a single observation. In R this is a common theme.
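I'm not aware of a dedicated CSV tool for this, but the wide-to-narrow direction (one row per observation, like R’s reshape or pivot_longer) can be sketched with awk; this sketch assumes column 1 is the row ID, row 1 is the header, and there are no quoted commas:

```shell
#!/bin/sh
# melt: read a wide CSV on stdin, emit narrow (id,variable,value) rows.
# Assumes column 1 is the row ID, row 1 is the header, no quoted commas.
melt() {
    awk -F, '
        NR == 1 { for (i = 2; i <= NF; i++) hdr[i] = $i
                  print "id,variable,value"; next }
                { for (i = 2; i <= NF; i++) print $1 "," hdr[i] "," $i }
    '
}
```

Used as `melt < wide.csv`. Going back (narrow to wide) is the harder direction and really wants a proper tool.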
Some more on macOS: open to replace having to use Finder, and say to make audible alerts when make is done >.<. I also have this script to make notifications easy on the command line. I call it notify and use it like notify “message” “title” (why title last? So I can just do notify message):
#!/bin/sh
# usage: notify MESSAGE [TITLE]
message=${1:-""}
[ $# -gt 0 ] && shift            # guard: plain `shift` errors when there are no args
title=${1:-""}
notification="display notification \"${message}\""
[ "${title}" != "" ] && notification="${notification} with title \"${title}\""
osascript -e "${notification}"
I also got sick of using the GUI to close macOS apps and made a “close” command too:
#!/bin/sh
if [ -z "${1}" ]; then
    printf "usage: close app_name\nno application to close provided\n" >&2
    exit 1
fi
osascript <<END
tell application "${1}"
quit
end tell
END
None of these is particularly interesting, just useful to have around to know when something finished or to close, say, Firefox from the command line. But it lets you script the GUI a bit more easily. I suppose I could create a repo with these random macOS scripts.
I also have an old af perl script named ts that simply timestamps output you pipe to it. I think something similar is in moreutils, but I had this thing for years before moreutils existed, and it’s just part of my dotfile setup, so it’s simpler to shunt around to any unix system.
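A minimal stand-in for such a ts, if you have neither the perl script nor moreutils handy, might look like this in plain sh (the timestamp format here is my assumption; moreutils’ ts does much more):

```shell
#!/bin/sh
# ts: prefix each line of stdin with a timestamp (HH:MM:SS).
ts() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%H:%M:%S')" "$line"
    done
}
```

Typical use: `make 2>&1 | ts`.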
After missing the audio bells from printf '\a' one too many times, I wrote a simple shell script that uses Pushover’s API to send these kinds of notifications.
I get a notification on my personal laptop (Linux), work laptop (macOS) and phone. I can run it locally or on remote machines, the OS doesn’t really matter, and the notifications are pretty much instantaneous.
I just want an alert on my laptop when make finishes, not alerts on my phone, heh. I just use it like make && notify “make finished”; as long as I see the notification I’m happy. No need to involve a web API in things, IMO.
I meant to reply to this, but never actually got around to it. Might as well do it now. You know, 5 months late.
The reason for resorting to a curl call is that these commands are often being run on a remote machine, e.g. manually kicking off a build, dumping/restoring a QA database, migrations, etc.
I suppose I could loop through a reverse ssh tunnel, but that just kind of seems like a pain.
I like the pbcopy default behavior well enough that I ported it for use on X11, and handle Wayland too, so I just stick with the pbcopy command; this works better for communicating with macOS-using colleagues.
Gron! It transforms JSON into greppable lines like path.to.items[3] = value, and if you like you can edit the result and use gron to transform the edited stuff back to JSON. This brings JSON structures into the line-oriented sed/grep/awk universe – I’m looking at you, Jupyter notebooks. I’ve only had gron for a few months, and it’s already my 13th-most-used command.
Example output (gron can read from stdin, files, and even URLs):
Thanks for mentioning this! I had no idea this existed, but it’s definitely in line with how my brain works vs jq. Even though I use jq almost daily, I can never remember some of the syntax.
Oh wow, I just noticed @mrcruz’s recommendation of xml2, elsewhere on this page, which does something similar for HTML and XML. Although for HTML I had hoped it would include element classes and IDs in the path — .../div/div/... is not useless, but .../div.comment#c_tez5vc/div.details/... would have provided more orientation points. Still, this is going in my toolbox; and I thought you might like it, too.
Part of the output of curl https://lobste.rs/s/eprvjp/what_are_your_favorite_non_standard_cli | html2:
/html/body/div/div/ol/li/ol/li/div/div/div=
/html/body/div/div/ol/li/ol/li/div/div/div/a/@href=/u/loc
/html/body/div/div/ol/li/ol/li/div/div/div/a/img/@srcset=/avatars/loc-16.png 1x, /avatars/loc-32.png 2x
/html/body/div/div/ol/li/ol/li/div/div/div/a/img/@class=avatar
/html/body/div/div/ol/li/ol/li/div/div/div/a/img/@alt=loc avatar
/html/body/div/div/ol/li/ol/li/div/div/div/a/img/@src=/avatars/loc-16.png
/html/body/div/div/ol/li/ol/li/div/div/div/a/img/@width=16
/html/body/div/div/ol/li/ol/li/div/div/div/a/img/@height=16
/html/body/div/div/ol/li/ol/li/div/div/div=
/html/body/div/div/ol/li/ol/li/div/div/div/a/@href=/u/loc
/html/body/div/div/ol/li/ol/li/div/div/div/a/@class
/html/body/div/div/ol/li/ol/li/div/div/div/a=loc
/html/body/div/div/ol/li/ol/li/div/div/div=
/html/body/div/div/ol/li/ol/li/div/div/div/span/@title=2020-10-22 17:16:17 -0500
/html/body/div/div/ol/li/ol/li/div/div/div/span=2 days ago
/html/body/div/div/ol/li/ol/li/div/div/div= |
/html/body/div/div/ol/li/ol/li/div/div/div/a/@href=/s/eprvjp/what_are_your_favorite_non_standard_cli#c_tez5vc
/html/body/div/div/ol/li/ol/li/div/div/div/a=link
/html/body/div/div/ol/li/ol/li/div/div/div=
/html/body/div/div/ol/li/ol/li/div/div/div/span/@class=flagger flagger_stub
/html/body/div/div/ol/li/ol/li/div/div/div=
/html/body/div/div/ol/li/ol/li/div/div/div/span/@class=reason
/html/body/div/div/ol/li/ol/li/div/div/div/span=
/html/body/div/div/ol/li/ol/li/div/div/div=
/html/body/div/div/ol/li/ol/li/div/div=
/html/body/div/div/ol/li/ol/li/div/div/div/@class=comment_text
/html/body/div/div/ol/li/ol/li/div/div/div=
/html/body/div/div/ol/li/ol/li/div/div/div/p=Thanks for mentioning this! I had no idea this existed, but is definitely in line with how my brain works vs
/html/body/div/div/ol/li/ol/li/div/div/div/p/code=jq
/html/body/div/div/ol/li/ol/li/div/div/div/p=. Even though I use
/html/body/div/div/ol/li/ol/li/div/div/div/p/code=jq
/html/body/div/div/ol/li/ol/li/div/div/div/p= almost daily, I can never remember some of the syntax.
/html/body/div/div/ol/li/ol/li/div/div/div=
Not the OP and not 2 lines, but a shell solution might look like this (with lots of UUoC):
#!/usr/bin/env bash
set -euo pipefail
temp=$(mktemp)
trap 'rm -f "$temp"' EXIT          # clean up the temp file on exit
cat >"$temp"                       # slurp stdin
"${EDITOR:-vi}" "$temp"            # let the user edit it
cat "$temp"                        # emit the (possibly edited) result
If you really wanted to squeeze it down to 2 lines, then you could do some hacky vim/bash stuff (and accept that the file will exist after the process runs, ’cause I interpret “2 lines” as “2 subprocesses”, and I’m already bending the rule with the shebang).
Worth noting: If you use set -o vi (highly recommended anyway), you can search command history with /<search term> and then n (any number of times) to flip through the matches.
Yes, it’s fancy ctrl-r, but for people like myself who find ctrl-r clunky to use and also find remembering long command strings difficult, it’s a huge help.
I see rg and pngquant as the two main commands that aren’t present by default on my Ubuntu setup. I also use my own aliases/functions. For example, I use 2lc frequently to check whether the last two commands give the same result (when I’m working on my CLI books, answering on reddit/stackoverflow, etc).
2lc ()
{
    # fetch the previous two commands from shell history
    p1=$(fc -ln -2 | head -n1);
    p2=$(fc -ln -3 | head -n1);
    # re-run both and diff their outputs side by side
    diff --suppress-common-lines -y -s <(eval command "$p1") <(eval command "$p2") | head -n5
}
$ ch rg -g
rg - recursively search current directory for lines matching a pattern
-g, --glob GLOB ...
Include or exclude files and directories for searching that match the given glob.
This always overrides any other ignore logic. Multiple glob flags may be used.
Globbing rules match .gitignore globs. Precede a glob with a ! to exclude it. If
multiple globs match a file or directory, the glob given later in the command line
takes precedence.
Going through my history of commands, I think I’m very boring in this regard (only ripgrep and custom tools in the top 100)
I use colordiff a lot, because I often have to diff files that aren’t in git, and hexdump comes up more often than I’d like.
Maybe the most unusual thing (which I stole somewhere) is a colorize function/alias so I can tail and still highlight stuff without resorting to grep and missing out on the rest: grep --color=always "$1\|^"
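One way such a colorize function might look, as a sketch (the name comes from the comment; this relies on GNU grep’s \| alternation, and the empty ^ match lets every line through while only real matches get colored):

```shell
# colorize PATTERN: like grep PATTERN, but non-matching lines pass through.
colorize() {
    grep --color=always "${1}\|^"
}
```

Typical use: `tail -f app.log | colorize ERROR`.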
Percol, jq and gron. I use these dozens of times per day. Also, fish shell; I find it superior to all the others and don’t quite understand why anyone would use bash in this day and age.
I also use httpie, ag/ripgrep/ack, fd which are great, but they provide only incremental improvements on standard tools that work well. If I am writing shell scripts I always stick to curl, grep, find, etc.
how about: because I’ve used bash for 25 years and the second a key command doesn’t work in a new shell I feel I’m wasting my time. Why learn a new way to do the exact same thing? Especially why learn it just so things are colorful?
Fair enough for production at work. But learning on the side, trying out a new thing, is how you really find out whether it is the exact same thing (it’s not … bash and zsh and fish are similar, but not the same). Of course, what you are doing may be working well enough, but pay attention to pain points. Did bash do something weird with variable escaping?
Of course no one can convince anyone, or prove software’s worth. So, here’s my camp and creds, I guess. I’m using fish, but I think you could achieve most of it with zsh plugins, and then not be frustrated that the world is still using bash. Zsh is largely bash compatible, so the world staying where it is doesn’t matter. That’s pretty nice. It’s also a nearly zero-cost switch, other than learning the tricks (the features that are supposed to be great) that you didn’t have before.
Saying it is about colors is a bit reductive. Color is multiplexing for your cerebral cortex. Colors help you scan. But it’s not just colors.
You don’t have to learn any commands at all. The invocation syntax for commands is exactly the same as bash. I still write my shell scripts in Bourne shell like I did 15 years ago when I used bash; no change there. What fish offers is a better UI for command input. Autocomplete and the history browser are superior to those of bash. You can have it without colours if you want. Colors are there to provide information. For example, as you type you will get a suggestion for autocomplete. This is shown in a different colour, the same when picking autocomplete suggestions. It is not quite accurate to say that it is just more colorful, because bash doesn’t have these features at all.
We use fish at work… for reasons… I was very quickly frustrated that commands were different and found it unusable.
You say autocomplete and history are superior, but they are also different. Sometimes the cost of change negates any superiority that the new thing would provide.
It’s not in a separate package or documented so well, but I have an example program in cligen that does all this and more. For me, the memory mapped IO mode is even ~2x faster than mawk on Linux for files in /dev/shm.
Though this is a TUI app and not a CLI app like many of the others in this thread, I have to mention Visidata. It is so handy for viewing and manipulating csv files.
One I find useful for Linux (or any system that uses X Windows) is xpaste. For example:
ls -l | xpaste
will copy the output of ls -l to the X Windows primary selection (-s for the secondary selection, and -c for the clipboard selection). I wrote the opposite, xselect, to obtain the selection from the command line. You might think “why? Isn’t that what the middle button, or ^C, is for?” Because X allows multiple selections. I can have a fragment of a web page highlighted in Firefox, and from there, I can select any of the following targets as the selection:
TIMESTAMP
TARGETS (this list)
MULTIPLE
text/html
text/_moz_htmlcontext
text/_moz_htmlinfo
UTF8_STRING
COMPOUND_TEXT
TEXT
STRING
text/x-moz-url
I don’t use them often, but when I need to, I’m glad they exist.
There are three on OPENSTEP / macOS that are great and now have X / XDG equivalents:
open (xdg-open) opens a file with the default association. Great for popping into a graphical editor. It’s also really useful to do open . to open the current directory in a graphical file browser if you need to do some tasks that are easier in the GUI than the command line.
pbcopy and pbpaste (xpaste; not sure if there’s an xcopy?), to transfer data between the standard in/out streams and the pasteboard. Somewhat depressingly for 2020, the simplest way of copying a short file between two machines is often pbcopy < {file} locally and then cat > file plus command-V on the remote one.
I don’t have rsync installed on most of the machines I access, but I could do scp. That requires another connection and copying the target path though, whereas pbcopy in one terminal and cat in another, when I have both open already, doesn’t require any new SSH handshakes and is very fast.
The problem with rsync (or scp) is if you are forced to go through a management server first. Especially if you are working from home. Right now, to get to some of the servers at work, I have to ssh from my local computer (at home) to my computer at work, from there ssh to a management server, then ssh to the server itself.
Then from your local computer, if you ssh target-server it will follow all those jumps. The same goes for rsync, as it just execs ssh. (Or you can rsync -e "ssh -J office-computer,management-server".)
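The ProxyJump setup being referred to might look like this in ~/.ssh/config (hostnames here are placeholders for the commenter’s actual machines):

```
Host target-server
    ProxyJump office-computer,management-server
```

With that in place, a plain ssh target-server, and therefore any rsync or scp to target-server, hops through both machines automatically.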
I know it by the term “management server,” but it’s really “only server within a datacenter that allows people to log in from outside said network. [1]” There’s no production service on said server, it’s just there to let us access the rest of the production servers.
That said, I’m in development, not devops, so I only go on said servers when absolutely necessary (usually to help test a new deployment). It’s rare that I have to copy a file to or from a production server, but when I do (pull down some critical data files to generate some stats from them [2]), I have to copy the files to the management server first.
[1] Said management server has a few interfaces, all on private IP addresses, and only one interface forwarded to our office network.
[2] I think I’ve done that three times in the past decade. Like I said, it’s not very often.
Trying it out now. The plugin architecture is going to be huge for me. I can work with files by creating my own plugins to do what I need. Then I can start to chain them together and start building actual software without knowing what I’m building beforehand.
Reverse Polish Notation, the style that older HP calculators use: rather than having operator precedence, it’s a stack language, so rather than 1 * (2+3) you’d say 1 2 3 + *. I like it because I had an HP calculator in high school, and I have a bit of a thing for concatenative languages. A proper modern language that works on this principle can be had at factorcode.org, but it’s not terminal based.
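For the flavor of it, here is a toy RPN evaluator in plain sh (integers only; a sketch, not a real calculator — if you want one in the terminal, dc(1) already speaks RPN):

```shell
#!/bin/sh
# Toy RPN evaluator: numbers are pushed, operators pop two and push one.
rpn() {
    stack=""
    for tok in "$@"; do
        case $tok in
            +|-|\*|/)
                b=${stack##* }; stack=${stack% *}   # pop right operand
                a=${stack##* }; stack=${stack% *}   # pop left operand
                stack="$stack $((a $tok b))"        # push the result
                ;;
            *) stack="$stack $tok" ;;               # push a number
        esac
    done
    printf '%s\n' "${stack# }"
}

rpn 1 2 3 + '*'   # prints 5, i.e. 1 * (2+3)
```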
Non-measurable preference? I have tree too. But to extol this alias: it reads my .gitignore (if there is one), and it has headers. Here is an output example (you can’t see the underlines of the column headings):
tmp/foo $ exa -l -T -L 2 --header --git-ignore -F -d -I node_modules
Permissions Size User Date Modified Name
drwxr-xr-x - you 22 Oct 12:23 ./
.rw-r--r-- 0 you 22 Oct 12:23 ├── blech.txt
.rw-r--r-- 0 you 22 Oct 12:22 ├── bleep.txt
.rw-r--r-- 0 you 22 Oct 12:22 └── bleh.txt
It understands git, and has some other nice options in the manual.
Apart from the tools I built myself, I quite like the set of commands of scm_breeze, which shortens a lot of git commands and gives aliases like $e1 for the files listed in git status.
Some interesting ones I’ve found recently:
quickstack (dump the stack for a running process), selfdock (lightweight docker alternative), dstat (aggregate performance statistics), pv (progress bar for Linux commands)
Most of the non-standard tools I use were already mentioned, but I recently discovered progress. It’s useful for the times I forget to use pv or the files are bigger than I thought.
There’s “so common that they go on the base images for production”:
jq
curl
and then there’s the stuff which doesn’t quite meet that threshold:
git
git-crypt
ag (silversearcher)
tree
direnv
pcregrep
socat
oathtool, qrencode
psql
xmlstarlet
rlwrap
The other honorable mention is for one which is login shell startup and only occasionally invoked directly by me, but when I do invoke it, it’s helping to save me from social failure:
birthday
This doesn’t count “stuff which I put in for the system, rather than for me”, such as etckeeper. Nor programming languages or tools to support them (Python, pyenv, go, cargo, etc). Even though sudo etckeeper unclean has been invoked more often than you might think.
hawk - a pretty awesome awk replacement. Also, I know this is tangential, but gotta give some credit to iTerm - a fantastic piece of software that makes the command line a very smooth experience for me. I haven’t seen many paid applications with that kind of polish.
I’ve had this in my terminal env configuration for about as long as I’ve been a programmer. I notice it’s no longer actively developed; are there better alternatives?
I use ripgrep on at least a daily basis, and not just for work. It is so much easier to use, and faster than standard grep.
Thanks so much for the kind words. Instead of sponsorship or beers, I suggest donating to your favorite charity. :-)
xsv flatten
will do that. The output is itself CSV, so you can
xsv flatten data.csv | xsv table
to get a nice aligned output display.
Oh also, Pushover can be used with lobste.rs for replies and notifications.
For Wayland users there is wl-clipboard, which provides wl-copy and wl-paste.
entr, ripgrep, fasd, fzf, bfs, plus parallel, chronic, vidir and ts from the moreutils package.
I created an alias for jq, called jqpath, that leverages fzf to give searchable output similar to gron, but that emits it in jq’s expected format: https://twitter.com/tednaleid/status/1302477635914739713
vipe is pretty useful; you can do things like manipulate intermediate results with your $EDITOR mid-pipeline. I’ve re-implemented this tool in Haskell:
https://hackage.haskell.org/package/editpipe
That’s funny, I thought it was based upon the moreutils author. I guess vipe predates Joey’s desire to write everything in Haskell?
[Comment removed by author]
You gonna tell us how? ;-)
This doesn’t open a tty, so under some conditions it won’t actually open the editor UI.
Neat! Thanks!
Left as an exercise for the reader.
What are the two lines
Have a go, it’s a bit trickier than you think. I guarantee you won’t implement it correctly first try.
One I cannot live without anymore is hstr: https://github.com/dvorka/hstr
omg, this is life changing!!
how? it is just fancy ctrl-r
With regex search though
Any reason to prefer this over fzf’s built-in fuzzy history search?
I found it faster and more intuitive, but I’ve been using it for years now and never tried fzf again, so things may have changed wrt fzf.
I find ffsend pretty useful to securely share files from the command line (shameless plug).
Isn’t Firefox Send discontinued?
Mozilla’s Firefox Send is. I’m hosting my own Send instance for this, and forked Send in an attempt to keep it alive: https://github.com/timvisee/send
Here’s another shameless plug, with netdrop you can send encrypted pipes or files inside your local network.
I use mc (Midnight Commander) a lot, alongside bmon and htop.
May I suggest vifm as a vi key binding alternative to mc.
sl
https://packages.debian.org/buster/sl
Another is ch (inspired by https://explainshell.com/) to quickly extract information from the manual for command-line options (see https://github.com/learnbyexample/command_help); the ch rg -g output shown earlier is an example.
Re the colorize alias, grep --color=always "$1\|^": the ^ or $ can be skipped, i.e. "$1\|" should give the same result. Also, ripgrep has a --passthru option for this purpose.
I’m quite fond of fuck.
Somehow I missed that. Wow, that’s great.
I had aliased fuckgit to git push --set-upstream origin master because I miss that one so often.
Yeah, that is pretty much the main use case I have for it as well =P
This has NOT been my experience with fish.
Great thread.
choose replaces the usual awk oneliner to get a column of text for me: https://github.com/theryangeary/choose
Now this one is pure awesome, thanks; as someone who has to search for awk samples every time I use it, this will help me a lot.
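For reference, the usual awk one-liner being replaced is the classic field picker; with choose the equivalent would be choose 2 (choose counts fields from zero, if I remember correctly):

```shell
# the classic awk field picker: print the third whitespace-separated column
printf 'a b c\nd e f\n' | awk '{print $3}'
# prints:
# c
# f
```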
ripgrep, jump, fzf, httpie, direnv, pgcli, fuck, jq.
if you like jq try out jiq
There’s also xclip for piping stuff into selections and out of them.
Really? You might want to check out rsync, it comes with your PC.
If you’re using rsync over SSH, this is solvable using ProxyJump in your ssh_config(5).
Is there a technical term for the type of management server you mentioned? I’d like to learn more.
I know it by the term “management server,” but it’s really “only server within a datacenter that allows people to log in from outside said network. [1]” There’s no production service on said server, it’s just there to let us access the rest of the production servers.
That said, I’m in development, not devops, so I only go on said servers when absolutely necessary (usually to help test a new deployment). It’s rare that I have to copy a file to or from a production server, but when I do (pull down some critical data files to generate some stats from them [2]), I have to copy the files to the management server first.
[1] Said management server has a few interfaces, all on private IP addresses, and only one interface forwarded to our office network.
[2] I think I’ve done that three times in the past decade. Like I said, it’s not very often.
https://en.wikipedia.org/wiki/Bastion_host
https://en.wikipedia.org/wiki/Jump_server
fzf, cht, fd, ripgrep, teip, batch, lf, kakoune in filter mode (kak -f), z.lua, lazygit
For K8s: stern is must-have.
https://github.com/ajeetdsouza/zoxide feels way easier to implement than z.lua
dunno how it’s easier than cloning the repo and adding one line of config to my shell but whatever works for you :)
See also,
https://lobste.rs/s/2mxwdm/rewritten_rust_modern_alternatives
nnn for me! It’s a great little file manager. Once you get used to using 1/2/3/4 you’ll love it. Also, batch rename is a masterpiece.
Trying it out now. The plugin architecture is going to be huge for me. I can work with files by creating my own plugins to do what I need. Then I can start to chain them together and build actual software without knowing what I’m building beforehand.
moreutils - specifically ifne shows up in my personal scripts quite a bit.
ncat - a more fully featured nc.
pv - (not sure if it has a homepage) pipe viewer, monitor the amount of data going through pipes
dc - (also unsure of homepage) RPN equivalent of bc, the terminal calculator
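To give a feel for what ifne does, here’s a rough pure-shell sketch (ifne_sketch is a made-up name; the real moreutils ifne streams its input rather than buffering it in a temp file):

```shell
# Rough sketch of moreutils' ifne: run the given command only if
# stdin is non-empty. Unlike the real ifne, this buffers all of
# stdin in a temporary file before deciding.
ifne_sketch() {
    tmp=$(mktemp) || return 1
    cat > "$tmp"
    if [ -s "$tmp" ]; then    # -s: file exists and has size > 0
        "$@" < "$tmp"
    fi
    rm -f "$tmp"
}

printf '' | ifne_sketch echo "never printed"   # empty input: command skipped
printf 'a\nb\n' | ifne_sketch wc -l            # non-empty input: wc runs
```

The classic use of the real tool is in cron jobs, e.g. `somejob | ifne mail -s "job output" you@example.com`, so mail only goes out when there is something to say.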
RPN?
Reverse Polish Notation, the style that older HP calculators use. Rather than having operator precedence it’s a stack language, so instead of 1 * (2+3) you’d say 1 2 3 + *. I like it because I had an HP calculator in high school, and I have a bit of a thing for concatenative languages. A proper modern language that works on this principle can be had at factorcode.org, but it’s not terminal based.

Aah, gotcha. It’s been a good while since I’ve heard RPN mentioned anywhere.
I’m familiar with the style, and have dabbled with some stack-based languages before.
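To make the comparison concrete, the infix and RPN forms of the same expression can be checked with the standard bc and dc calculators (assuming both are installed, which is common but not universal):

```shell
# Infix with operator precedence, via bc:
echo '1 * (2+3)' | bc        # prints 5

# The same computation in RPN, via dc: push 1, 2, 3;
# '+' pops 3 and 2, pushes 5; '*' pops 5 and 1, pushes 5;
# 'p' prints the top of the stack.
echo '1 2 3 + * p' | dc      # prints 5
```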
I believe that’d be (no ssl support):
http://ivarch.com/programs/pv.shtml
+1 for mosh. In combination with tmux it’s unbeatable.

I like exa instead of ls. And a few more tools I mention here.
I really want to like exa, but I get tripped up every single time using exa -t when what I want is ls -t. It’s such a productivity killer.

I’d make an alias. I have alias a='exa'. You can make a specific alias for the -t flag.

Yeah. My ls -ltr muscle memory needs to go to exa -lrsold and I don’t have it yet. I have an alias t for a tree-like list that is great.

I agree, my muscle memory is a super power and a prison.
This is a decent solution. I could alias it to lt.

I do lt for the same. Funny how this particular muscle-memorised incantation catches so many of us.

How is this any better than the fully POSIX-compliant tree?

Non-measurable preference? I have tree too. But to extol this alias: it reads my .gitignore (if there is one). It has headers. Here is an output example. You can’t see the underlines of the column headings. It understands git. Has some nice other options in the manual.
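One way to paper over the muscle-memory clashes discussed above is a small block of aliases in your shell rc. This is just a sketch: the alias names are arbitrary suggestions, and it falls back to plain ls when exa is absent:

```shell
# Suggested aliases bridging ls and exa muscle memory.
# lt / lso are made-up names, not any convention.
if command -v exa >/dev/null 2>&1; then
    alias lt='exa --tree --git-ignore'   # tree-like listing, skips .gitignore'd files
    alias lso='exa -lr --sort=oldest'    # the exa spelling of ls -ltr
else
    alias lt='ls -R'
    alias lso='ls -ltr'
fi
```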
Apart from the tools I built myself, I quite like the set of commands in scm_breeze, which shortens a lot of git commands and gives aliases like $e1 for the files listed in git status.

I love scm_breeze, but I have to go pretty far out of my way to get it to respect the XDG_BASE_DIR spec and to disable a bunch of its non-git wrappers. I mainly use this replacement entrypoint script: https://github.com/sethwoodworth/devenv-setup/blob/master/patches/scm_breeze.sh
As someone working a lot with logs and traces, lnav is a freaking lifesaver.
I used to work with the inventor of that and everybody at the company used it. Very smart dude.
Have you tried https://github.com/akavel/up? It might also be useful to you.
Some interesting ones I’ve found recently: quickstack (dump the stack for a running process), selfdock (lightweight docker alternative), dstat (aggregate performance statistics), pv (progress bar for Linux commands)
fish: a shell that just works
nvim: comes sanely configured out of the box! I’m not sure I even have a vim config set up yet and I barely notice.
open to take a look at a directory in Finder. Handy for certain use cases that don’t fit neatly in a CLI workflow.

Hmm, my workflow is quite boring I’m afraid. :)
The one I can’t live without is eshell’s rgrep; it shows its results in a hyperlinked buffer so you can jump right to the match. Honorable mention to entr and htop though; those are great.

xml2 and 2xml. Converts XML to a flat file-path-like structure, with the other application doing the reverse.
This is the key application that allowed me to unlock a CLI-like pipeline for working with EagleCAD’s xml markup from the shell.
Docs are incredibly sparse though. The best resources I’ve found, aside from personally playing around with it, follow:
Oh, this is wonderful. Much appreciated!
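For a feel of that flat structure, here’s a minimal xml2 invocation (this assumes the xml2 package is installed; the exact output lines are from memory, so treat them as approximate):

```shell
# xml2 reads XML on stdin and emits one /path=value line per node;
# attributes get an @ component. 2xml reverses the mapping, which is
# what makes round-trip pipelines over XML possible.
printf '<board name="demo"><wire width="0.2"/></board>' | xml2
# expected, approximately:
#   /board/@name=demo
#   /board/wire/@width=0.2
```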
Most of the non-standard tools I use were already mentioned, but I recently discovered progress. It’s useful for the times I forget to use pv or the files are bigger than I thought.
For anyone who does anything with images ever, imagemagick and ffmpeg are absolutely necessary.
This highly featureful alternative to “ls” is my favorite, but I am a little biased ;-) : https://github.com/c-blake/lc
There’s “so common that they go on the base images for Production”:
and then there’s the stuff which doesn’t quite meet that threshold:
The other honorable mention is for one which is in my login shell startup and only occasionally invoked directly by me, but when I do invoke it, it’s helping to save me from social failure:
This doesn’t count “stuff which I put in for the system, rather than for me”, such as etckeeper. Nor programming languages or tools to support them (Python, pyenv, go, cargo, etc). Even though sudo etckeeper unclean has been invoked more often than you might think.

hawk - a pretty awesome awk replacement. Also, I know this is tangential, but gotta give some credit to iTerm - a fantastic piece of software that makes the command line a very smooth experience for me. I haven’t seen many paid applications with that kind of polish.
The ones I use > 10 times daily are
I use fx on daily basis with powerful .fxrc extensions.
Also I created my own ls command ll: https://github.com/antonmedv/ll
ll looks neat, glad you linked to exa, I enjoy it a lot.

wtf
I use a lot of the same mentioned in this thread: rg, fzf, fd. But also, I use navi daily, which is great.

vcprompt https://github.com/djl/vcprompt
I’ve had this in my terminal env configuration for about as long as I’ve been a programmer. I notice it’s no longer actively developed; are there better alternatives?
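For context, the usual wiring is a prompt substitution along these lines (a sketch for bash; guarded so the prompt still works on machines where vcprompt is missing):

```shell
# Embed vcprompt's output (VCS name / branch / state) in the bash prompt.
# Single quotes matter: $(vcprompt) must be evaluated each time the
# prompt is drawn, not once at startup.
if command -v vcprompt >/dev/null 2>&1; then
    PS1='\w $(vcprompt)\$ '
else
    PS1='\w \$ '
fi
```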
Should be it.