I don’t get it. The very reason you cite for claiming that you’re no longer in Kansas (*nix) is in fact the embodiment of the UNIX philosophy.
Said philosophy dictates that, by virtue of a set of standard interfaces (namely, everything is a stream of bytes and variations on that theme), tools become pluggable, interchangeable modules that all interoperate like a dream, because they all conform to those interfaces.
fish works great as a *nix shell because it acts like… A shell :)
This is the power of the UNIX philosophy made manifest. Don’t question whether you’re conforming to some kind of hidebound idea of what IS *nix based on a default set of stock utilities; revel in the fact that you’re leveraging the great work of others to embrace and extend the utility of your environment with this awesome paradigm.
You hit the nail on the head. Unix shells hit the sweet spot of practicality, human readability, and computer readability.
If we walk back in time, the unix tools we all know didn’t suddenly spring into existence all at the same time, but rather were born throughout the years. curl is much newer than grep. And jq is much newer than grep. And all these tools, while maintaining backwards compatibility, have throughout their history incorporated new functionality that is today seen as part of their core value. For example, ‘grep -P’.
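To make that concrete (an illustrative example, not from the comment; the file name is made up), compare the POSIX-era syntax with the later PCRE addition:

# extended regex, available in any POSIX grep
grep -E '[0-9]{3}-[0-9]{4}' phones.txt

# Perl-compatible regex, a later GNU addition
grep -P '\d{3}-\d{4}' phones.txt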
ripgrep, fd et al. are great and excel in some usages. But they don’t render their ancestors obsolete in any way. I have been using ag (the_silver_searcher) for almost a decade and love it, but I can’t imagine doing without grep. Grep is still the gold standard and still does a ton of things that would only become obsolete if, say, ag or rg implemented them, at which point they’d become grep, which was never their point to begin with.
Enjoy the old reliable gems that always worked. Enjoy the new stuff that adds value with newer technologies. They are both great, it’s not a competition.
That’s not totally true. When I set out to build ripgrep, I specifically wanted it to at least be a better grep than ag is. ag is not a particularly good grep. It has a lot of flag differences, does multiline search by default (except when searching stdin) and does smart case search by default. It also has sub-optimal handling of stream searching when compared to either grep or ripgrep. ripgrep doesn’t do multiline search by default, it doesn’t use smart case by default and generally tries to use the same flags as grep when possible.
Overall, there’s a lot more in common between grep and ripgrep than there are differences, and that was most definitely intentional. For example, if you want to make ripgrep recursively search the same contents as grep, then it’s as easy as rg -uuu pattern ./.
Probably the biggest things that grep has that ripgrep doesn’t are BREs and more tailored locale support (but ripgrep is Unicode aware by default, and lacks GNU grep’s performance pitfalls in that area). Other than that, ripgrep pretty much has everything that grep has, and then some, including things that a POSIX compatible grep literally cannot have (such as UTF-16 support).
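Spelled out (an illustrative sketch; the pattern and path are placeholders), the equivalence looks like this:

# recursive, line-numbered search with grep
grep -rn 'TODO' ./

# closest ripgrep equivalent: -uuu turns off ripgrep's extra filtering,
# -n forces line numbers (they are on by default when printing to a terminal)
rg -uuu -n 'TODO' ./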
When I set out to build ripgrep, I specifically wanted it to at least be a better grep than ag is. ag is not a particularly good grep.
We are drifting very far off-topic, but just to be clear, this is a huge part of why I use ripgrep over ag or ack, and why this post even got written: ripgrep is often, but not always, a drop-in, so it’s very tempting to just swap it in for grep. And that works…provided that, at a bare minimum, the other party has rg installed.
(And you’ve done an absolutely phenomenal job with ripgrep, to be clear. I skipped past every single grep replacement until rg showed up. Thank you for putting so much time and thought into it.)
Getting a reply from the man himself. How cool is that?
I’ll take the chance to leave an honest thank you for your work. XSV is a beast and a lifesaver.
But on to the topic… perhaps ripgrep is the example that comes closest to the original. But ag, for example, is clearly opinionated; it even used to say on its webpage that it was more oriented toward searching code (or maybe it was ack that said that…). It will search the current folder by default, it defaults to fancier output with color, it will ignore the .git folder, and so on.
I believe the point I am trying to make is: a cornerstone like grep plays such an important role that the only way to replace it is to create a compatible implementation, at which point it becomes grep.
I want to be able to use basic regular expressions when I want, and extended or PCRE when I want; to search inside the .git folder without looking up new flags in the manpage. I want to pull out my old tricks with the flags everyone knows whenever I need to.
But credit where it’s due: ack, ag, and rg did pass the “why would I use this instead of grep?” test.
For example, if you want to make ripgrep recursively search the same contents as grep, then it’s as easy as rg -uuu pattern ./.
Out of curiosity, why didn’t you go for compatibility with, say: grep -rn pattern .
Yeah sure, you’re definitely right. That’s kind of what I meant by “not totally true.” A little weasely of me. ripgrep’s defaults are definitely tuned toward code searching in git repositories specifically. (By default, ripgrep respects your .gitignore, ignores hidden files and ignores binary files. That’s what -uuu does: turns off gitignore filtering, turns off hidden filtering and turns off binary filtering.) The main thing I wanted to convey in my previous comment is that when I originally designed ripgrep, I put careful thought and attention to making ripgrep a good grep itself, to the extent possible without compromising on the defaults that catapulted ack and ag to success. There are a surprising number of subtle details involved in that!
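Spelled out flag by flag (a paraphrase of the behaviour just described; worth double-checking against rg --help):

rg -u pattern     # --no-ignore: stop respecting .gitignore and friends
rg -uu pattern    # plus --hidden: also search hidden files and directories
rg -uuu pattern   # plus --binary: also search binary files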
Just to add a couple of clarifications (that you might already know):
ripgrep’s default regex flavor is pretty close to grep’s “extended” flavor. ripgrep has broader support for Unicode things while EREs have some locale specific syntax.
ripgrep also has a -P flag that, like grep, lets you use PCRE2.
With ripgrep, if you want to search in .git then doing it explicitly will work: rg foo .git. Otherwise, yeah, you’d want rg -uuu foo if you just wanted ripgrep to search the same things as grep. ag doesn’t have this convenience. There is no real way to get ag to search like grep would. (You can get close by using several flags.)
Out of curiosity, why didn’t you go for compatibility with, say: grep -rn pattern .
Because I perceived “recursive search of the current directory by default” as one of the keys to the success of ack and ag. (In addition to two other things: nicer output formatting and smart filtering.)
Basically, I tried to straddle both lines. You can tack an rg something on to the end of a shell pipeline and it should work just as well as grep does. Case in point: ag messes up even in really simple cases. Anyone who runs into that is going to be like, “okay, well, I guess I can’t use ag in shell pipelines.” ripgrep doesn’t have flag-for-flag compatibility like ag does, but it should at least get all the common stuff right.
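For example (a trivial illustration, not the specific failure case alluded to above), both of these search stdin and print matching lines the same way:

ps aux | grep ssh
ps aux | rg ssh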
I’d call this “drift”, like linguists see in language usage, but in this case due to scope and purpose changing.
It’s easier to adjust things as you need than to take the APIs and restructure them as a group all at once (inviting mass outrage as well).
We’re also not ready for the next major paradigm shift, and because we don’t know what that “is”, we live in the moment and “change as needed”. Because we know what we have, and what we think we need.
I’ve heard of 3 items on the list: fish, kakoune, and tmux – and I wouldn’t call screen part of the traditional unix tooling. I think the author is overextrapolating their config quirks as a trend.
bash isn’t traditional Unix, either: It’s the Bourne-Again Shell, made explicitly as an improved imitation of the Bourne Shell, which itself replaced the Thompson Shell back in the dim mists of time. More to the point: bash is GNU, and GNU’s Not Unix. GNU is the FSF’s attempt to make an improved clone of Unix.
This could just as easily be titled: “My customized, non-standard environment”. Yeah, the OP has installed a bunch of weird tools, but that doesn’t mean that the *nix environment is not pretty much the same as it has been for the past 20 years, it just means that the author has installed a bunch of non-standard tools and likes using them. It doesn’t seem to say much for the state of the *nix ecosystem in general, except maybe there are a lot more specialized tools you can use these days.
Hmm. I hear what you’re saying, but it’s a bit more nuanced than that. In the 25+ years I’ve been doing development, I’ve gotten used to seeing variations in e.g. awk v. gawk, or bash v. sh v. dash, or the like. I think writing all of that off as a “customized, non-standard environment” is a bit strong; the idea of shell scripts localized to Linux v. macOS v. SunOS or the like is pretty normal, and we generally have tools to deal with it, because the differences are, generally, either subtle or trivial to work around.
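A typical instance of that kind of difference (an illustration, not from the comment): gensub() is a gawk extension, and the portable workaround is a one-liner.

# gawk only: gensub() returns the substituted string
gawk '{ print gensub(/foo/, "bar", "g") }' input.txt

# portable awk: gsub() edits $0 in place, then print it
awk '{ gsub(/foo/, "bar"); print }' input.txt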
What I’m observing now, and what I’m saying I think I’m part of the “problem” with, is a general movement away from the traditional tools entirely. It’s not awk v. gawk; it’s awk v. perl, 2020 edition. And I think the thing it says is we’re looking at a general massive shift in the Unix ecosystem, and we’re likely at the leading edge, where we’re going to see a lot of churn and a lot of variation.
I’m hearing in your comment that I may not have conveyed that terribly well, so I’ll think about how to clarify it.
I came here to post the same comment as @mattrose, and then read your response, which clarified your point pretty well. I think I can summarize the point by saying “general-purpose scripting languages are winning the ad hoc ‘tools market’ on Unix, in part due to the rise of flavor-specific additions to POSIX, limiting compatibility and creating more specialized, non-portable ‘*nix’ shell scripts. This is more easily corrected by using one of the many flavors of a higher-level scripting language that augments the Unix API with its own standard APIs” – or something to that effect.
Thank you for sharing your experiences. You conveyed your point beautifully.
Observer bias is something that constantly comes to mind when I compare my experiences with others.
Our environment doesn’t do much to lessen that bias, either.
I don’t doubt what you have witnessed, just as I don’t doubt what mattrose has experienced.
For what it’s worth, from what I have seen, I cannot say that I have seen anything that validates either of y’alls experiences - but that’s because I have my head stuck in a completely different world ;)
I think this is part of the Cambrian explosion of (open source) software. Twenty years ago, for any particular library there might be one or two well-maintained alternatives. OpenSSL and cURL, for example, achieved their central positions because they were the only reasonable option at the time.
I think that even then, there was (relatively) more variety in shell tooling because these tools have a far larger influence on many people’s user experience.
Compared to twenty years ago, the number of open source developers has grown by a lot. I’ve no idea how much, but I wouldn’t be surprised if it turned out to be a hundred or a thousandfold. It’s almost unthinkable today that there would be only one implementation of anything. And the variety of command-line tools has exploded even more.
I think you’re right that there has been an explosion of tooling in the *nix world in the past 10 or so years; it seems every day there’s a new command-line tool that does a traditional thing in a new and non-traditional way, but…
I think that the people who are writing software for *nix, and even the ones who are writing the new tools that you are so fond of, realize that there is a baseline *nix… platform, for lack of a better word, and try very hard (and trust me, it’s not easy) to keep to that baseline API, and only depend on tools that they know are widely distributed, or can be bootstrapped up from those tools via package management systems like apt, macOS Homebrew, or the FreeBSD pkg tools. I would never write software trusting that the user has fish already installed on their machine, but I would trust that there is stuff like a Bourne shell (or something compatible), and grep, and even awk (it’s so small it even fits in busybox).
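That baseline-first mindset often shows up as a small guard at the top of a script, something like this sketch (the tool names are only examples):

#!/bin/sh
# prefer a fancy tool when present, fall back to the baseline one
if command -v rg >/dev/null 2>&1; then
    search() { rg -n "$1" .; }
else
    search() { grep -rn "$1" .; }
fi
search "TODO"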
Personally, I think this explosion of tools is actually a good thing. I think it has upped user productivity and happiness to a great extent, because you can create your own environment that fits the way you do things. Don’t like vi or emacs? Install vscode. Don’t like bash? Use fish, or zsh, or even MS PowerShell. I write a lot of little tools in ruby, because I like the syntax, which means I end up writing a lot more scripts than I did back in the days when I was forced into using bash or (euch) perl, and I end up having a much nicer environment to work in.
The original reason I read your post is that I am worried about a fragmentation of the *nix API, but at a more basic level. For example, for many years, the way to configure IP addresses was ifconfig. There were a few shell script wrappers around it, but the base command was always available. These days, on FreeBSD, you still use ifconfig, but on some newer Linuxes, it’s not even installed anymore. And everyone does dynamic network configuration using drastically different tools. macOS moving away from the GNU utilities more and more, even when it doesn’t make sense (I just installed Catalina and I’m still trying to get used to zsh), is another example. And let’s not even get into the whole systemd thing. (FTR, I approve, but it bugs me that it’s so Linux-specific.)
Differences like these are troubling, and remind me of the bad old days when you had BSD, and Linux, and Solaris, and IRIX, and HP-UX, and AIX, and and and and. And every one of them had a different toolkit, and utilities.
Interestingly enough, all of these other variants faded away due to being tied to proprietary hardware (except, kinda, Solaris), but there doesn’t seem to be anything stopping this from happening again, and I do see similar things happening again.
ifconfig’s disappearance has nothing to do with dynamic configuration. ifconfig disappeared because its maintainers never adapted it to support new features of the network stack, not even support for multiple addresses on the same NIC. Someone could step up and do it, but no one did. In fact, the netlink API makes it much simpler to create a lookalike of the old Linux ifconfig or FreeBSD ifconfig, if someone feels like it.
It would be no harder to create UI-compatible replacements for route, vconfig, brctl etc. There’s just hardly a reason to.
The problem with making such a tool is that there’s a lot to do if one is to make it as functional as iproute2. I have most of it compiled in a handy format in one place. I can’t see how an ifconfig lookalike could be meaningfully extended to handle VRF—you’d have to have “vrfctl” plus new route options.
The dynamic configuration tools call iproute2 in some form, usually. It has machine-readable output, even though the format could be better. Few are talking netlink directly.
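For anyone who hasn’t made the jump, the day-to-day difference is mostly cosmetic (illustrative commands with a made-up interface and address):

# net-tools style (the old Linux ifconfig)
ifconfig eth0 192.0.2.10 netmask 255.255.255.0 up

# iproute2 style, which the dynamic configuration tools usually drive
ip addr add 192.0.2.10/24 dev eth0
ip link set eth0 up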
It’s not awk v. gawk; it’s awk v. perl, 2020 edition. And I think the thing it says is we’re looking at a general massive shift in the Unix ecosystem, and we’re likely at the leading edge, where we’re going to see a lot of churn and a lot of variation.
Guess I don’t buy this premise or axiom. I write a lot of Bourne shell (note: not bash) that almost always runs on any unix. It’s not particularly hard, but a lot of Linux developers just don’t seem to care or even try. Your perl example is a good one, though, ’cause I’ve rewritten a lot of that early-2000s nonsense back into plain olde shell when I find it, and made it smaller in the process.
And you’re observing this where, exactly? Are you sure you’re not just in a self-reinforcing bubble? I’ve found that just teaching people how to write shell scripts with, say, shellcheck and the regular tools tends to get them to realize that all these fancy new tools might be great, but for stuff that should last, sticking with built-in tools isn’t that hard and means less effort overall in later maintenance.
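For instance, shellcheck catches the classic word-splitting mistakes on the spot (a contrived example, not from the comment):

#!/bin/sh
# fragile: iterates over ls output and leaves the expansion unquoted
for f in $(ls *.txt); do
    echo $f
done
# shellcheck example.sh flags both problems (SC2045, SC2086); the fix is:
#   for f in *.txt; do echo "$f"; done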
I use many of these same ‘new-school’ utilities day to day, but my approach to the problem outlined in this article is a bit different. For example, I restrict use of fd to interactive use and always use find in scripts. I do not write bash but only POSIX sh scripts, and attempt to never use the platform-specific extensions of tools (i.e., GNU-isms). This leaves me with, broadly speaking, portable shell scripts. But even these are not guaranteed to work, because I don’t have the time to test every script on every platform. Lately I have been exploring a dedicated, embeddable scripting language like Jim Tcl to handle anything more complex than a page or so of shell. Similarly, it is possible to use Vim or Neovim with a forest of plugins, but it is equally possible to stick to the core vi functionality. Git also seems to have a ‘split personality’ when it comes to porcelain and plumbing commands. Maybe instead of ‘deprecation’ this is simply naturally occurring development?
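A concrete version of that split (paths and extension are placeholders):

# interactive: fd's defaults (gitignore-aware, skips hidden files) are handy
fd -e md

# in a script: plain find, so it runs on any POSIX-ish system
find . -type f -name '*.md' ! -path './.git/*'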
I haven’t even heard of most of those (I know ripgrep, but use Silver Searcher. I know tmux, but don’t care.) My tool-ish stuff runs on perl, bash, or POSIX sh and the shell bits use POSIX and/or GNU tools. Hasn’t given me cause for regret so far.
Wasn’t there a similar situation in the 80s/90s, with all the commercial Unices back then? Irix, Tru64, HP-UX, AIX, Solaris, SunOS, SCO Unix, Unixware, etc etc.
ESR touches on this in his book The Art of Unix Programming:
“In fact, for years after divestiture the Unix community was preoccupied with the first phase of the Unix wars — an internal dispute, the rivalry between System V Unix and BSD Unix. The dispute had several levels, some technical (sockets vs. streams, BSD tty vs. System V termio) and some cultural.”
So I think it’s safe to say that the uniformity of the Unix experience has varied from time to time.
I think the core property that makes *nix great is not that you have exact replications of tools, but that you have good interfaces that turned into standards (POSIX), and that these core functionalities remain in the more advanced tools.
Take the shell. There are many implementations, but all the widely used ones have the same way of executing binaries, environment variables work pretty much the same (minus csh-style), and there’s a common theme of using pipes, ingesting standard text and spitting out transformations that can then be used again.
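Which is why the same pipeline can be pasted into sh, bash, zsh, or fish without thinking about it (a throwaway example; only the common plumbing is assumed):

# count processes per user
ps aux | awk '{ print $1 }' | sort | uniq -c | sort -rn | head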
A lot of the mentioned tools exist precisely because the authors embraced it, but wanted to do something slightly different.
I think the problem starts where standard compatibility is broken. Many tools avoid this: they try to behave the standard way when it is expected, or require that the standard mode be explicitly deactivated.
What’s more worrying, in my opinion, is the amount of new software that gets created without targeting POSIX for no good reason. Now, I am not going to judge what a good reason is, but there certainly are cases where familiarity with POSIX and a slightly different design could have prevented long conversations on bug trackers, many FAQ entries, and a lot of time wasted porting software elsewhere.
People go to great lengths to do “pure” implementations in their favorite programming languages, or to have zero dependencies, which is great. But outside of the shell-scripting world, the idea of targeting POSIX to have an easier time running things on another system seems to have been lost.
I think this goes hand in hand with Docker being used for “portability”, even when all that means in practice is the developer’s machine and the server, in other words only the developer’s container.
The software mentioned in the article is rather portable, because it largely targets POSIX. These tools mostly build upon and extend it. From what I can see, they also all provide *nix-style interfaces.
What’s more worrying, in my opinion, is the amount of new software that gets created without targeting POSIX for no good reason.
Could you clarify what you are referring to? Do you refer to software targeting the POSIX C API or command-line utilities conforming to POSIX (e.g. grep doing what POSIX specifies for grep)? Or both? ;)
Does it matter that tools are not standard or provided out of the box? They are still UNIX tools, and as tools they provide a means to an end. You might like their performance or ergonomics more than the old tools’, and that is quite ok.
Where all of this might become a problem is when you start sharing your scripts. That invariably leads to a lot of problems, as the environment of others might not match yours. They might not have the same versions, leading to frustration. So you might start coding your scripts more defensively, to ensure the right versions are there, but that might blur the original intent of your script. And honestly, who has time for all of that :) So, you might start using yet another Nix – this time, one without a star – to ensure that everyone has the right environment; they just have to have Nix installed.
I’m not that familiar with Nix. Let’s say that a software author requires, say, fish to install their software. If the software was packaged by a Linux distribution, fish would be included as a dependency. If I instead use Nix to satisfy this dependency, will it “play nice” with my distribution’s package manager?
Nix is hermetic in the sense that its “packages” do not use dependencies from the outside and do not affect the system. Furthermore, one can have multiple versions of a package in the system at the same time.
However, I was explicitly talking about scripts here, used in a way where no installation is needed.
To illustrate the point, let’s say the author has a script written in fish, which relies on ripgrep and dust. Normally, his script would begin with #!/usr/bin/env fish. What I was emphasizing is using nix-shell to capture all the dependencies. It is achieved by using the following multiline shebang:
#!/usr/bin/env nix-shell
#!nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/8a3eea054838b55aca962c3fbde9c83c102b8bf2.tar.gz
#!nix-shell -i fish
#!nix-shell -p fish -p ripgrep -p dust
The first line is the regular shebang invocation, which tells the system to use nix-shell as the interpreter. The second line is read by nix-shell when it starts to interpret the script; -I pins the dependencies to a particular “release” of nixpkgs. The third line (-i) tells nix-shell that the actual interpreter is fish. Lastly, the fourth line lists the dependencies. The particular versions will be the ones defined by the nixpkgs channel at the revision given in -I.
The beauty of this approach is that one could also do it with perl or python scripts, pulling their dependencies (as long as they are in nixpkgs).
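To round that out, a full script would just put ordinary fish after those four shebang lines and run like any other executable (the body below is invented for illustration; only the shebang lines come from the comment above):

#!/usr/bin/env nix-shell
#!nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/8a3eea054838b55aca962c3fbde9c83c102b8bf2.tar.gz
#!nix-shell -i fish
#!nix-shell -p fish -p ripgrep -p dust

# plain fish from here on; rg and dust are provided by the -p packages above
rg --count-matches TODO
dust

# make it executable and run it:
#   chmod +x report.fish; ./report.fish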
The portable unix-ish subset has always been somewhat elusive, and different from what people actually use on their machines.
Publish a shell script, and someone will tell you it doesn’t work on SunOS, and that some BSD flavor needs an -x flag somewhere, but that doesn’t work with macOS tools that haven’t been updated in 13 years, and someone will have a weird setup without /tmp and commands in /opt/lol/, and your paths won’t work on MinGW, and so on.
Then you give up and write your “shell script” in Python.
I think it’s still *nix, but with different (more “modern”?) sensibilities. Tools like ripgrep or fd or fzf are still designed to be composed on the command line and to process text and files generically.
Aside: this post made me discover angle-grinder, and then logfmt, which I think is a great idea. It’s super simple and I might just start using it for my logging stuff. Thanks for the pointer!