I used to give the same advice, but I completely changed my opinion over the past 10 years or so. I eventually put in the time and learned shell scripting. These days my recommendation is:
Learn to use the shell. It’s a capable language that can take you very far.
Use ShellCheck to automatically take care of most of the issues outlined in the article.
I really don’t want to figure out every project’s nodejs/python/ruby/make/procfile abomination of a runner script anymore. Just like wielding regular expressions, knowing shell scripting is a fundamental skill that keeps paying dividends over my entire career.
Always use #!/usr/bin/env bash at the beginning of your scripts (change if you need something else, don’t rely on a particular path to bash though).
Always use set -eou pipefail after that.
Always pay attention to what version of bash you need to support, and don’t go crazy with “new” features unless you can get teammates to upgrade (this is particularly annoying because Apple ships an older version of bash without things like associative arrays).
Always use the local storage qualifier when declaring variables in a function.
As much as possible, declare things in functions and then at the end of your script kick them all off.
Don’t use bash for heavy-duty hierarchical data munging…at that point consider switching languages.
Don’t assume that a bashism is more-broadly acceptable. If you need to support vanilla sh, then do the work.
While some people like the author will cry and piss and moan about how hard bash is to write, it’s really not that bad if you take those steps (which to be fair I wish were more common knowledge).
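Put together, a minimal sketch of what those steps produce (the function names, URL, and commands here are just illustrative):

#!/usr/bin/env bash
set -eou pipefail

# Declare things in functions; always use local for variables.
fetch() {
    local url=$1
    local dest=$2
    curl --fail --silent --show-error "$url" -o "$dest"
}

report() {
    local dest=$1
    wc -l "$dest"
}

main() {
    local dest=/tmp/example.html
    fetch "https://example.com/" "$dest"
    report "$dest"
}

# At the end of the script, kick everything off.
main "$@"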
To the point some folks here have already raised, I’d be okay giving up shell scripting. Unfortunately, in order to do so, a replacement would:
Have to have relatively reasonable syntax
Be easily available across all nix-likes
Be guaranteed to run without additional bullshit (installing deps, configuring stuff, phoning home)
Be usable with only a single file
Be optimized for the use case of bodging together other programs and system commands with conditional logic and first-class support for command-line arguments, file descriptors, signals, exit codes, and other nixisms.
Be free
Not have long compile times
There are basically no programming languages that meet those criteria other than the existing shell languages.
Shell scripting is not the best tool for any given job, but across every job it’ll let you make progress.
(Also, it’s kinda rich having a Python developer tell us to abandon usage of a tool that has been steadily providing the same, albeit imperfect, level of service for decades. The 2 to 3 switch is still a garbage fire in some places, and Python is probably the best single justification for docker that exists.)
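I really don’t want to figure out every project’s nodejs/python/ruby/make/procfile abomination of a runner script anymore.

Bingo.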
While some people like the author will cry and piss and moan about how hard bash is to write, it’s really not that bad if you take those steps (which to be fair I wish were more common knowledge).
I think “nine steps” including “always use two third-party tools” and “don’t use any QoL features like associative arrays” does, in fact, make bash hard to write. Maybe Itamar isn’t just “crying and pissing and moaning”, but actually has experience with bash and still thinks it has problems?
To use any language effectively there are some bits of tribal knowledge…babel/jest/webpack in JS, tokio or whatever in Rust, black and virtualenv in Python, credo and dialyzer in Elixir, and so on and so forth.
Bash has many well-known issues, but maybe clickbait articles by prolific self-promoters that don’t offer a path forward also have problems?
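If your problem with the article is that it’s clickbait by a self-promoter, say that in your post. Don’t use it as a “gotcha!” to me.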
I think there’s merit here in exploring the criticism, though there’s room for tone softening. Every language has some form of “required” tooling that’s communicated through community consensus. What makes Bash worse than other languages that also require lots of tools?
There are a number of factors at play here, and I can see where @friendlysock’s frustration comes from. Languages exist on a spectrum between lots of tooling and little tooling. I think something like SML is on the “little tooling” end, where compilation alone is enough to add high assurance to the codebase. Languages like C are on the low-assurance part of this spectrum, where copious use of noisy compiler warnings, analyzers, and sanitizers is needed to guide development. Most languages live somewhere in between. What makes Bash’s particular compromises deleterious or not deleterious?
Something to keep in mind is that (in my experience) the Lobsters userbase seems to strongly prefer low-tooling languages like Rust over high-tooling languages like Go, so that may be biasing the discussion and reactions thereof. I think it’s a good path to explore though because I suspect that enumerating the tradeoffs of high-tooling or low-tooling approaches can illuminate problem domains where one fits better than the other.
I felt that I sufficiently commented about the article’s thesis on its own merits, and that bringing up the author’s posting history was inside baseball not terribly relevant. When you brought up motive, it became relevant. Happy to continue in DMs if you want.
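You’re really quite hostile. This is all over scripting languages? Or are you passive-aggressively bringing up old beef?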
Integrating shellcheck and shfmt into my dev process enabled my shell programs to grow probably larger than they should be. One codebase, in particular, is nearing probably 3,000 SLOC of Bash 5, and I’m only now thinking about how v2.0 should probably be written in something more testable and reuse some existing libraries instead of reimplementing things myself (e.g., this basically has a half-complete shell+curl implementation of the Apache Knox API). The chief maintenance problem is that so few people know shell well, so when I write “good” shell like I’ve learned over the years (and shellcheck --enable=all has taught me A TON), I’m having trouble finding coworkers to help out or to take it over. The rewrite will have to happen before I leave, whenever that may be.
I’d be interested in what happens when you run your 3000 lines of Bash 5 under https://www.oilshell.org/ . Oil is the most bash compatible shell – by a mile – and has run thousands of lines of unmodified shell scripts for over 4 years now (e.g. http://www.oilshell.org/blog/2018/01/15.html)
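I’ve also made tons of changes in response to use cases just like yours, e.g. https://github.com/oilshell/oil/wiki/The-Biggest-Shell-Programs-in-the-World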
Right now your use case is the most compelling one for Oil, although there will be wider appeal in the future. The big caveat now is that it needs to be faster, so I’m actively working on the C++ translation (oil-native passed 156 new tests yesterday).
I would imagine your 3000 lines of bash would be at least 10K lines of Python, and take 6-18 months to rewrite, depending on how much fidelity you need.
(FWIW I actually wrote 10K-15K lines of shell as 30K-40K lines of Python early in my career – it took nearly 3 years LOL.)
So if you don’t have 1 year to burn on a rewrite, Oil should be a compelling option. It’s designed as a “gradual upgrade” from bash. Just running osh myscript.sh will work, or you can change the shebang line, run tests if you have them, etc.
There is an #oil-help channel on Zulip, linked from the home page.
Thanks for this nudge. I’ve been following the development of Oil for years but never really had a strong push to try it out. I’ll give it a shot. I’m happy to see that there are oil packages in Alpine testing: we’re deploying the app inside Alpine containers.
Turns out that I was very wrong about the size of the app. It’s only about 600 SLOC of shell :-/ feels a lot larger when you’re working on it!
One thing in my initial quick pass: we’re reliant on bats for testing. bats seemingly only uses bash. Have you found a way to make bats use Oil instead?
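OK great, looks like Alpine does have the latest version: https://repology.org/project/oil-shell/versions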
I wouldn’t expect this to be a pain-free experience, however I would say it should definitely be less effort than rewriting your whole program in another language!
I have known about bats for a long time, and I think I ran into an obstacle but don’t remember what it was. It’s possible that the obstacle has been removed (e.g. maybe it was extended globs, which we now support)
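https://github.com/oilshell/oil/issues/297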
In any case, if you have time, I would appreciate running your test suite with OSH and letting me know what happens (on Github or Zulip).
One tricky issue is that shebang lines are often #!/bin/bash, which you can change to #!/usr/bin/env osh. However, one shortcut I added is OSH_HIJACK_SHEBANG=osh.
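https://github.com/oilshell/oil/wiki/How-To-Test-OSH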
Moving away from Python? Now it has my interest… in the past I skipped it, knowing it’d probably take perf hits and have some complicated setup that isn’t a static binary.
Yes, that has always been the plan, mentioned in the very first post on the blog. But it took a while to figure out the best approach, and that approach still takes time.
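Some FAQs on the status here: http://www.oilshell.org/blog/2021/12/backlog-project.html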
Python is an issue for speed, but it’s not an issue for setup.
You can just run ./configure && make && make install and it will work without Python.
Oil does NOT depend on Python; it just reuses some of its code. That has been true for nearly 5 years now – actually since the very first Oil 0.0.0 release. Somehow people still have this idea it’s going to be hard to install, when that’s never been the case. It’s also available on several distros like Nix.
What is the status of Oil on Windows (apologies if it’s in the docs somewhere, couldn’t find any mentioning of this). A shell that’s written in pure C++ and has Windows as a first class citizen could be appealing (e.g. for cross-platform build recipes).
It only works on WSL at the moment … I hope it will be like bash, and somebody will contribute the native Windows port :-) The code is much more modular than bash and all the Unix syscalls are confined to a file or two.
I don’t even know how to use the Windows syscalls – they are quite different from Unix! I’m not sure how you even do fork() on Windows. (I think Cygwin has emulation, but there is no way to do it without Cygwin.)
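https://github.com/oilshell/oil/wiki/Oil-Deployments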
To the point some folks here have already raised, I’d be okay giving up shell scripting. Unfortunately, in order to do so, a replacement would: […]
There are basically no programming languages that meet those criteria other than the existing shell languages.
I believe Tcl fits those requirements. It’s what I usually use for medium-sized scripts. Being based on text, it interfaces well with system commands, but does not have most of bash’s quirks (argument expansion is a big one), and can handle structured data with ease.
Always use #!/usr/bin/env bash at the beginning of your scripts (change if you need something else, don’t rely on a particular path to bash though).
I don’t do this, because all my scripts are POSIX shell (or at least as POSIX compliant as I can make them). My shebang is always #!/bin/sh - is it reasonable to assume this path?
you will miss out on very useful things like set -o pipefail, and in general you can suffer from plenty of subtle differences between shells and shell versions. sticking to bash is also my preference for this reason.
note that the /usr/bin/env is important to run bash from wherever it is installed, e.g. the homebrew version on osx instead of the ancient one in /bin (which doesn’t support arrays iirc and acts weirdly when it comes across shell scripts using them)
My shebang is always #!/bin/sh - is it reasonable to assume this path?
Reasonable is very arbitrary at this point. That path is explicitly not mandated by POSIX, so if you want to be portable to any POSIX-compliant system you can’t just assume that it will exist. Instead POSIX says that you can’t rely on any path, and that scripts should instead be modified according to the system standard paths at installation time.
I’d argue that these days POSIX sh isn’t any more portable than bash in any statistically significant sense though.
Alpine doesn’t have Bash, just a busybox shell. The annoying thing is if the shebang line fails because there is no bash, the error message is terribly inscrutable. I wasted too much time on it.
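nixos has /bin/sh and /usr/bin/env, but not /usr/bin/bash. In fact, those are the only two files in those folders.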
https://mkws.sh/pp.html hardcodes #!/bin/sh. POSIX definitely doesn’t say anything about sh’s location, but I really doubt you won’t find an sh at /bin/sh on any UNIX system. Can anybody name one?
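I would add, prefer POSIX over bash.

I checked, and shellcheck (at least the version on my computer) only catches issue #5 of the 5 I list.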
That’s because the other ones are options and not errors. Yes, typically they are good hygiene but set -e, for example, is not an unalloyed good, and at least some experts argue against using it.
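Not for lack of trying: https://github.com/koalaman/shellcheck/search?q=set+-e&type=issues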
There are tons of pedants holding us back IMO. Yes, “set -e” and other options aren’t perfect, but if you even know what those situations are, you aren’t the target audience of the default settings.
Yup, that’s how you do it. It’s a good idea to put in the time to understand shell scripting. Most of the common misconceptions come out of misunderstanding. The shell is neither fragile (it’s been in use for decades, so it’s very stable) nor ugly (I came from JavaScript to learning shell script, and it seemed ugly indeed at first; now I find it very elegant). Keeping things small and simple is the way to do it. When things get complex, create another script; that’s the UNIX way.
It’s the best tool for automating OS tasks. That’s what it was made for.
+1 to using ShellCheck, I usually run it locally as
shellcheck -s sh
for POSIX compliance.
I even went as far as generating my static sites with it https://mkws.sh/. You’re using the shell daily for displaying data in the terminal, it’s a great tool for that, why not use the same tool for displaying data publicly.
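No, it really is ugly. But I’m not sure why that matters.

I believe arguing if beauty is subjective or not is off topic. 😛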
I went the opposite direction - I was a shell evangelist during the time that I was learning it, but once I started pushing its limits (e.g. CSV parsing), and seeing how easy it was for other members of my team to write bugs, we immediately switched to Python for writing dev tooling.
There was a small learning curve at first, in terms of teaching idiomatic Python to the rest of the team, but after that we had far fewer bugs (of the type mentioned in the article), much more informative failures, and much more confidence that the scripts were doing things correctly.
I didn’t want to have to deal with package management, so we had a policy of only using the Python stdlib. The only place that caused us minor pain was when we had to interact with AWS services, and the solution we ended up using was just to execute the aws CLI as a subprocess and ask for JSON output. Fine!
I tend to take what is, perhaps, a middle road. I write Python or Go for anything that needs to do “real” work, e.g. process data in some well-known format. But then I tie things together with shell scripts. So, for example, if I need to run a program, run another program, and then combine the outputs of the two programs somehow, there’s a Python script that does the combining, and a shell script that runs the three programs and feeds them their inputs.
I also use shell scripts to automate common dev tasks, but most of these are literally one-ish line, so I don’t think that counts.
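This makes sense to me.

FWIW when shell runs out of steam for me, I call Python scripts from shell. I would say MOST of my shell scripts call a Python script I wrote.

I don’t understand the “switching” mentality – shell is designed to be extended with other languages. “Unix philosophy” and all that.

I guess I need to do a blog post about this? (Ah, I remember I have a draft and came up with a title – The Worst Amounts of Shell Are 0% or 100% — https://oilshell.zulipchat.com/#narrow/stream/266575-blog-ideas/topic/The.20Worst.20Amount.20of.20Shell.20is.200.25.20or.20100.25, requires login.)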
(Although I will agree that it’s annoying that shell has impoverished flag parsing … So I actually write all the flag parsers in Python, and use the “task file” pattern in shell.)
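What is the “task file” pattern?

It’s basically a shell script (or set of scripts) you put in your repo to automate common things like building, testing, deployment, metrics, etc. Each shell function corresponds to a task.

I sketched it in this post, calling it “semi-automation”:

http://www.oilshell.org/blog/2020/02/good-parts-sketch.html

and just added a link to:

https://lobste.rs/s/lob0rw/replacing_make_with_shell_script_for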
(many code examples from others in that post, also almost every shell script in https://github.com/oilshell/oil is essentially that pattern)
There are a lot of names for it, but many people seem to have converged on the same idea.
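A minimal sketch of the pattern (the task names and commands are illustrative):

#!/usr/bin/env bash
set -eou pipefail

build() {
    gcc -Wall -o app main.c
}

check() {
    shellcheck ./*.sh
}

deploy() {
    rsync -a app server:/opt/app/
}

# Dispatch: ./run.sh <task> [args...], e.g. ./run.sh build
"$@"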
I don’t have a link handy now, but Github had a standard like this in the early days. All their repos would have a uniform shell interface so that you could get started hacking on them quickly.
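You should investigate just (the command runner) for task running. It’s simple like make, but with none of make’s pitfalls for task running.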
I’ll stop writing shell scripts once a suitable replacement exists.
For something to be a “suitable replacement”, it has to let me run and interact with command-line programs as easily as or easier than shell does, and it has to exist on all unix-like systems which have been in active use for the past 20 years. Ideally it’d also be specified by POSIX.
There are problems with shell, but there are very good reasons for why people write shell scripts. Don’t dismiss them.
I once took a 15 line shell script that was mostly just invoking docker in various ways and decided to rewrite it in Python. It blew up to 60 lines, and wasn’t actually any more readable.
I agree, I have recognized Perl as an interesting language for exactly these reasons myself. Even tried learning it a few times. But unfortunately, it’s really hard to learn because it’s just so different from literally everything else, and even if I did learn it myself, I’d be really careful writing anything in it which other people may have to work with. Plus, it seems like a mess caused by lots and lots of major but backwards-compatible changes over the years.
In the end, I use shell for stuff where shell is a good fit, and Python 3 for stuff which really only needs to run on beefy (server/desktop style) Linux machines. Python is almost never on embedded Linux systems though, or systems where you can’t afford to waste lots of orders of magnitude of performance, plus the 2 -> 3 transition means Python 3 isn’t actually as widespread as I would like. Perl is a better choice than Python 3 by every metric other than the syntax and semantics of the language itself.
Yeah, I get the point of the article (maybe unfairly reduced to robustness and testing) but what would a python version even look like (assuming python here because of the URL). Replace everything with API calls? What if I really want to do a dig or a ping? Would I find native libraries to do those things instead of shelling out?
Shells like fish and oil make scripting better but it’s still scripting. If “the thing” keeps baking, I’ll turn it into a binary, and I think this is where compiled languages make even more sense. Putting things in bin is great. Putting binaries into a docker image is great. Having a real CLI library is great compared to bash-style arg parsing. But all this is work compared to a duct_tape.sh when you know the unix chainsaw. I wonder if the term unix chainsaw is the entire thing I just said.
It’d be really cool to have a compiled and productive language. For me, this is crystal right now, but there are a few other options depending on your tastes.
Python can do pretty much everything shells do and pretty much everything you need in shell scripts…
…but I learned how to write shell scripts like 20+ years ago, and I still do it pretty much the same way. Whereas in about the same timeframe, the recommended way to call subprocesses in Python has changed at least two or three times, we’re now at the… third attempt at parsing CLI arguments and it’s so complex and batteries included that I have to re-read like thirty pages of docs every time I want to use it, and if anyone thought hey, I’ll do the smart thing and instead of piping uudecode/uuencode output like it’s 1981, I’m gonna use Python and its everything-but-the-kitchen-sink standard library, uu is getting deprecated.
I went through a “stop writing shell scripts” phase around 2012 or so. I stopped about five years later. I ported everything back to ksh because one-off scripts are a thing, but one-off programs have not been a thing anymore in a long time now.
I like Python and I use it every day but there’s no way I’m using it, or any programming language meant for professional software development as it’s widely understood in 2022, for the things I use shell scripts for. Life’s too short.
The API for handling processes in python is very error prone and has suffered from many bugs over the years, including regressions in later 3.x releases. It’s not better than a shell, just different.
Sorry if you know all this but yeah, that’s shelling out. It’s a neat trick to save the process as a variable but it’s still exiting python memory as a process and coming back. If you do this:
python3 -c 'import os; os.system("sleep 60")'
Run it, find the python pid, and do a pstree [pid]; you’ll see you get a new PID underneath python. It’s the same as typing bash and then bash within. pstree will show you that bash owns bash until the inner bash exits.
This is what any file handle is (in ’nix), even for a pipe. You could call “ping 8.8.8.8”, get a python ICMP library, or make a socket yourself and follow the ICMP spec. Scripts are so easy when you know the shell, but they are so incredibly frustrating when you know a language with nice data structures, algorithms/generators/comprehensions, etc.
When a script has gotten too serious, I consider rewriting it but then also think about all the shelling out I need to do. How many commands does this rely on and how deeply? If I’m relying on grep’s speed and before and after flags, ugh. I wait for the script to bake. Is it used a lot? Is it brittle? Is its purpose known? If it is glue code, the answers are usually no.
I love crystal and find it much more ergonomic/succinct and productive than go or rust. But it doesn’t fit your description so well.
Compile a binary and it will depend on a bunch of libs which have varying installation procedures.
I had a script laying around to copy all the dependencies output by ldd to a docker container, but feels messy and complicated when compared to a single binary.
You can use subprocess. It’s more verbose than bash, but not too verbose. On the other hand you get more clarity, and simpler lines wherever you’d use grep/sed/awk instead, which show up in most scripts.
I believe that’s partly the point in the post. Using set -e effectively forces return code handling, at the least explicitly swallowing a bad return code with something like || true.
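A toy sketch of that dynamic (the commands and patterns are illustrative):

#!/usr/bin/env bash
set -e

# With set -e, a non-zero exit status aborts the script here...
grep -q root /etc/passwd

# ...unless we explicitly swallow the return code for one command:
grep -q no-such-user /etc/passwd || true

echo "still running"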
As an aside, this is why I’m not a fan of Go’s error handling. It’s optional, because “err := func()” can always be written as “func()” (as far as I can tell, anyway – happy to learn if this can be forced).
It’s good to have a linter like go vet, but I agree that a compiler that defaults to throwing errors (or at least warnings) on clear anti-patterns is better. Go has an idiomatic escape already, too, in “_ = func()” to throw away the returned error, so I’m curious why Go’s compiler allows any returned value to be implicitly and silently ignored.
I’m always wary of these “always” and “never” essays.
Instead of going down that route, why not set up a code budget before you begin and stick to it? After all, it’s one thing to spend a couple of coins on some bubblegum on your way to work. It’s something else entirely to invest in condominiums. There are lots of examples in life where we have limited resources and expect a limited return. We use budgets to allocate limited resources to return limited value. Why don’t we treat code the same way?
For my personal work, I budget 50 lines or so of bash to any project. If it goes beyond that, I’m coding in a higher-level language (which I also budget, as a way of keeping track of whether I’m compartmentalizing enough). If I were coding commercially, I’d drop that budget by half or more.
You run into problems with any programming language when the amount of complexity you’ve grasped outreaches your ability to understand it. With signals, string mangling, and the rest, that happens pretty quickly in bash. That doesn’t mean it’s a bad language. These are just risks that need to be managed.
Instead of shunning or loving some language, find a way to guide you to know when enough is enough, ie, it’s time to refactor or move to a different platform. Then stick to it. It’s the people who can’t help themselves but keep taking on more and more complexity in their code over time that are the ones to worry about, no matter what tech they’re using.
I believe that this isn’t an “always” and “never” essay, but rather a list of bad and good practices. In the last section the author covers the scenario you describe (50 line bash budget).
I don’t think any reasonable author would say “never” use bash, however the article points out how it may not be the best choice to use bash in many scenarios, thus “Please stop writing shell scripts” instead of “Do not write shell scripts”. It’s a persuasive essay, not an authoritative, commanding essay.
Wow, I am seeing a lot of Stockholm syndrome in the existing comments. “Hey, bash is just misunderstood! It didn’t mean to give me a black eye, it’s actually my fault because I forgot to turn on error checking and run a bunch of lint tools first. Anyway, if I left bash, no other language would ever love me.”
sh and its descendants are deformed excuses for programming languages, the result of many decades of gluing shit together at random. They make PHP look like Haskell. I’m trying to think of an analogy with some sort of powerful but absurdly awkward and dangerous power tool, but failing, probably because hardware isn’t immune from liability laws so no company could get away with selling something so badly designed.
Oil has all of these options under one group, oil:basic, so you don’t have to remember all of them. They’re on by default in bin/oil, or you can opt in with shopt --set oil:basic in bin/osh.
Also I think this title is a troll – it should be more like “Pitfalls of Shell” or something like that; the pitfalls are pretty well known by now.
The answer is to fix shell, not write articles on the Internet telling people not to use it. They use it because it solves certain problems more effectively than other tools
Yeah this post omits the only piece of advice that would make it practical, which is pointing to another programming language that they consider better suited for the job. I’ve written code to launch and tend to processes in a lot of languages and they have all been as error prone as the shell. I don’t think people who bash on shells understand just how complex correct process handling is.
If you wouldn’t mind taking the opportunity to shill, how would you go about convincing somebody to switch to oil shell from bash, assuming they’re willing to ignore the lack of wide-spread deployment of oil? What’s your sales pitch?
That is, you have 3K lines of bash code, AND you want to switch to something else.
Well Oil is basically your only option! It’s the most bash compatible shell by a mile.
There are downsides, like Oil needing to be faster, but if you actually have that much shell, it’s worth it to start running your scripts under Oil right now.
Most of them amount to better tools and error messages. Oil is like ShellCheck, but at runtime. ShellCheck can’t catch certain things like bad set -e usage patterns, because some of them can only be detected at runtime – or you would have a false positive on every line. (I should do a blog post about this.)
I also put some notes about “the killer use case” here:
i.e. I started running my CI in containers, and I think many people do. Oil not being installed isn’t a big issue there because you have to install everything into a container :)
Although we probably need a bootstrap script, e.g. like rustup, if your distro doesn’t have it. (Many do, but not all.)
I’d guess itamarst really meant the title (subject to the caveats in the article), but also that he wasn’t talking about alternate shells like Oil, as they are really a different matter. Nobody writes “don’t use fish” articles, and Oil is in the same boat — it isn’t available by default, waiting to blow your hand off, so there’s no need to warn folks away from it.
Any language that has been designed rather than duct-taped together over decades is going to avoid shell’s (bash/dash/ash/POSIX sh’s) faults. Please continue doing this! When /bin/oil is part of a stock Debian install we can start telling people to put #!/usr/bin/env oil at the top instead, but until then I think it’s sensible to post these warnings periodically, since OSHA is unlikely to step in.
This article gets at the heart of the problem. By far the most common thing to do with shell scripts is to run a command, store the output, and use it as input to the next command, while aborting if any error occurs. The fact that this is actually pretty hard to do correctly in shell scripting is a huge problem. This is mostly because bash is trying to maintain compatibility with shells dating back 50 years at this point that were written for different purposes and with totally different requirements.
Bad reason #3: Just write correct code … In practice: You’re probably not working alone; it’s unlikely everyone on your team has the relevant expertise.
Sure it’s relevant if you do system administration on Unix-likes but even then, you can often get by with Python or Perl or similar. Past that though, it’s just often not relevant. Software development is an extremely broad field and just because someone doesn’t have the same expertise as you doesn’t mean they’re incompetent.
I would say it’s relevant if you’re doing any kind of software development, thing is I believe it’s a core skill regardless of your main tech stack. If you read the comments in the thread, you’ll see that there’s no good replacement for shell script, you could get by with Python or Perl or any other programming language, but the shell is best at automating OS tasks. There’s no better alternative for that.
I would say it’s relevant if you’re doing any kind of software development, thing is I believe it’s a core skill regardless of your main tech stack.
But it’s just not. Windows devs are more likely to need to know powershell or batch than bash or sh. Web devs practically live in node these days. Embedded software devs who work almost entirely out of proprietary IDEs probably don’t need it either. Once again, software development is an incredibly broad field; not everyone works on the same things you do in the same ways.
If you read the comments in the thread, you’ll see that there’s no good replacement for shell script, you could get by with Python or Perl or any other programming language, but the shell is best at automating OS tasks. There’s no better alternative for that.
This is once again debatable. If someone knows Python or Perl better, and those other languages are available, then for them it’s the best at automating tasks. New tools like zx are coming along, which combined with something like Deno could mean you wouldn’t even need an interpreter on the machine in question.
It’s just that it’s impossible not to stumble upon shell while working as a software developer; you either take the time to understand it or try avoiding it out of misunderstanding.
I’ve been using Unix since 1990. I do development exclusively on Unix systems (Linux, Mac OS and Solaris these days), and I have pretty much avoided writing shell scripts. If I’m doing anything more complex than running a command over a list of files, I’ll reach for something other than sh to do the programming. In fact, I think most of my issues with using sh is that I use it for what it is—a shell, not a programming language [1].
[1] My biggest hangup with sh is redirection and the utterly insane (in my opinion) semantics around it. I still can’t figure out (or remember) how to redirect stdout to a file, and stderr to a pipe. Yes, it’s been explained to me several times, and each time, it just seems bonkers to me.
If I’m doing anything more complex than running a command over a list of files, I’ll reach for something other than sh to do the programming.
If it’s complex, go for another programming language or start a new script, if it’s small and simple (few simple commands) stay in shell.
In fact, I think most of my issues with using sh is that I use it for what it is—a shell, not a programming language
It’s not an issue, it’s good usage. You write shell scripts when you would like to repeat or re-use those few commands you typed in the CLI. That’s it. It’s not a good idea to write daemons or file managers in shell script.
My biggest hangup with sh is redirection and the utterly insane (in my opinion) semantics around it. I still can’t figure out (or remember) how to redirect stdout to a file, and stderr to a pipe.
You’ll remember once you use it enough.
Shell script is just for automating small, simple tasks, that’s it!
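For the record, one way to do the split being asked about – stdout to a file, stderr down the pipe – where the ordering is exactly the bonkers part:

# 2>&1 first points stderr at whatever stdout currently is (the pipe),
# then >out.log repoints stdout at the file. Order matters.
some_command 2>&1 >out.log | grep -i error

(some_command stands in for whatever you’re actually running.)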
In rc, IIRC, you can put the two redirections in either order, and |[1] also works (or any other fd number). rc also has saner quoting/syntax in general, and its control flow constructs are closer to C style with () and {} (rather than keywords and their counterparts spelled backwards, which is very Algol 68 sensibilities). It never really caught on, even though there has been a port from Plan 9 to Unix for about 30 years.
This is once again debatable. If someone knows Python or Perl better, and those other languages are available, then for them it’s the best at automating tasks.
But the discussion is about which one is better by objective technical criteria, not about which one is better for a given person. By that logic, the best is money: you pay someone else to do it.
But to the point: shell script has syntax designed and optimized for launching applications and gluing their input and output together with minimal effort. You get other stuff for free, such as debug info (stderr) and buffer management by the operating system. You can’t do the same in so-called general purpose programming languages, because their design focus is on small bits of data: strings, variables, etc. The API to the OS is more verbose, and there is no way to fix that without making compromises on other aspects of the language. It is a design choice, in both cases.
The reason why many people dislike it is that it’s unforgiving and has tricky bits. Few people commit to really learning it properly, resulting in getting bitten by some pitfall and writing such negative posts.
I think there is a second case that is less directly about competence and more about condescension? As in, some people get annoyed when Shell bites them and decide it’s beneath them.
Most of my time is now spent using Racket in places where I could use a shell script. It’s easier to write a Racket program that invokes other programs and work with their error codes and re-direct their output to the right places. Truly a joy for me, personally, as I do like writing Lisp.
For the most part, the Racket library has features that do those types of jobs without needing sub-processes.
For grep we have regexp objects, which employ either regexp-match or regexp-match? to match across strings or filter.
seq can be mimicked by using a range function to iterate, combined with expressions like for.
sort is done by using the appropriately named Racket function sort and changing the comparison function and input list.
If you want to sub-process invoke programs, then the output of a subprocess call can only be sent to a file stream like stdout or a plain file. To invoke multiple sub-processes one after another and continuously pass their outputs to one another involves a little bit of trickery which might be a bit complex to talk about in a comment, but it is do-able. The gist is to try to write tasks using the Racket standard library, then use subprocess when you need something not covered by it.
; display all files in pwd
(for-each displayln (map path->string (directory-list)))
; display all files sorted
(for-each displayln
(sort (map path->string (directory-list)) string<?))
; regexp match over a list of sorted files
(for-each displayln
(filter (λ (fname) (regexp-match? #rx".*png" fname))
(sort (map path->string (directory-list)) string<?)))
As posted in a sibling message, it’s much easier to use built-in functions than to shell out and call another program. Personally, I find Racket more convenient for writing scripts that need to work in parallel. For example, a script gets the load average from several machines in parallel over ssh.
Best way I can quickly sum it up is clever use of the function subprocess in Racket.
(define (start-and-run bin . args)
  (define-values (s i o e)
    (apply subprocess
           `(,(current-output-port) ,(current-input-port) stdout
             ;; look up the program named by the bin argument
             ;; (this was hardcoded to "seq" before)
             ,(find-executable-path bin)
             ,@args)))
  (subprocess-wait s))
(start-and-run "seq" "1" "10")
This outputs the seq command to stdout, and allows for arbitrary commands so you can do zero-arg sub-processes or however many you need/like. The current-output-port and current-input-port calls are parameters that you can adjust by using a parameterize block to control the input/output from the exterior.
The output port must be set to a file, it cannot be set to an output string like with call-with-output-string, so output is either going to go straight to stdout, or you can use call-with-output-file to control the current-output-port parameter and store the output wherever you please.
The difficult part of advocating against writing software in shell in the abstract is that there are too many better choices to pick between for what you should recommend instead. You can’t swing a cat without hitting at least two programming languages that are vastly better. Damn near anything works. It is extremely difficult to accidentally design a programming language that is as bad as shell.
The flip side is that in any given specific situation it’s pretty easy. Just use what most people know already. If you have a collection of programmers who all know Python, just pick Python. Likewise Ruby, Lua, ECMAscript or pretty much anything.
Most programming languages that are better at enabling expression of logic are horrible at process handling. Most of the obvious contenders have horribly complex APIs that developed as they found bug after bug in their initial implementations that revealed the nuance of what a shell does. Languages like perl and ruby that carry the tradition of backticks for launching processes are vulnerable to deadlocks, for example. To get shell-like behavior from python you need to use Popen but that doesn’t give you the same behavior in all cases.
You can cut all the calls to mv, ls, and so on by making calls into libc. A bunch of this stuff gets easier because real APIs aren’t prone to fucking up strings like shell does. The majority of subprocess handling that remains is trivial, launch something and immediately wait for it. You don’t need a terse DSL for launching multiple processes in one go like “cat | cut | sed | tr | awk” if you move all the string handling into a real programming language instead where these things are easy instead of hard.
I do not commonly need to continuously pipe data into a subprocess’s stdin while continuously piping its stdout somewhere else. I have zero scripts in day to day use at work that need to do that instead of just fully buffering one or both sides of the communication. If it does come up I do know how to use select(2) or coroutines. :P
You don’t need a terse DSL for launching multiple processes in one go like “cat | cut | sed | tr | awk” if you move all the string handling into a real programming language instead where these things are easy instead of hard.
People that advocate for that end up taking 20-100 times more time and code to do it “in a real programming language”.
The example you gave is the most typical. They start with that claim, then between writing it, testing it, debugging it, etc., 20 to 100 lines of code later, anything from a morning to a week has passed.
Every friggin time.
Then a month or two later, the whole team is called in to solve some critical production issue, because the solution in a real programming language abused buffering and crashed when 10gb of data were thrown at it.
Meanwhile, the shellscript handles buffering for you gracefully, it’s tiny and idiomatic.
In the post and in the discussion, I see two slightly different aspects.
First one is “don’t add extra dependencies”. If your project is in Python, build it with Python. If it is in Java — use Java. And, well, if your project is in shell, by all means, use the shell. While it seems that, e.g., using Python to build (in a very general sense) a Java project is fewer lines of code, it is more accidental complexity: now the person building the project needs both Java and Python (or Java and bash). The hard problem to solve here is bootstrap. You really don’t want to ask a contributor to install some random “make, but Java” tool just to be able to run your scripts. Ideally, you want a #! which works with a baseline install of the language. Often that’s doable only via horrible hacks (see, e.g., xtask, or how gradlew gets cross-platform support).
Second one is “shell is the least worst language for non-project automation”. Indeed, if what you are writing lives in ~/bin and isn’t shared with others, shell does seem like a potentially least-bad tool, despite all of its flaws. If you can’t stand shell for aesthetic reasons… well, tough to be you. My personal journey here was from a home-grown Python DSL, past Ruby (backticks have the right syntax, but the wrong semantics), through Julia (it has backticks with correct semantics, but gosh is it slow to start), to “heck, I’ll just use Rust because working auto-complete compensates for any verbosity”.
If I started today, I’d probably look at deno: JS backticks have semantics which can be employed correctly, and deno seems to be a qualitative improvement in the design of scripting systems.
Of course, a third option is to make Emacs a pid1 and call it a day :-)
As a software developer, I prefer strongly+statically typed languages, but I still enjoy shell scripts in some cases. The reason is probably that in such cases, it is not software development/engineering, just gluing things together and nothing too big. Then shell like Bash is a useful tool and serves me well.
Size is the key limitation – if I can keep the whole script in my head, or if the whole script fits on a single screen, it is OK. If it is longer, other tools are more suitable (a programming language with static typing).
As soon as you find yourself doing anything beyond that, you’re much better off using a less error-prone programming language.
Theoretically. But it is usually too late. No one will learn a new language and rewrite the script in such a situation; rather, they will “add just these few lines here”. Probably the best solution is to learn another language in advance. Then you can use shell because it is the best tool for the given task, not because it is the only language you know.
It is appealing how little shellscripts change vs other languages, and it is useful for bootstrapping servers when python may not be installed yet. I don’t have update/packaging nightmares with shellscript. Will I still be heavily using python in 10 years? I’m not sure.
POSIX compliance and deviations are a nightmare, though. If it’s a complex script, another tool is more appropriate.
This is, by extension, a counterargument to the perennial weed of a recommendation to use Makefiles in projects where they are neither idiomatic nor sufficiently portable and better options (like the language the project was written in) exist.
Every few years I fall in the trap of trying to write a quick Makefile for managing some simple commands on a new project, and every single time I go through the pain of rediscovering just why I stopped doing it the last time around. .PHONY targets, weird whitespace semantics, weird variable semantics, etc. etc.
The cost-benefit of writing software in a ‘serious’ language, for me anyway, is that the investment only makes sense if a lot of others use it. And people need to earn a living for their work. If there is software I actually need and I want to get on with my life, shell script or some combo of it is amazing. It is of course preferable to distribute software mostly as a single binary, if possible. If it’s a limited audience, depending on what software it is, shell script is like a secret weapon. Gluing together systems and software in shell script is at least an order of magnitude easier than in other popular languages imo.
For example, I have a private network (can be anywhere in the world) that I can just fuzzy search for any file (video or whatever), and I can open it or stream it to any device. As long as the file is on one of the devices. I can back up all files to a single device too, with a single command. And it’s secure. Basically stopped using third-party file sync software and stuff like Dropbox as a result. The setup is a pain though, and I completely forget how to add a new device sometimes, which is why shell script is terrible to distribute the kind of software I’m interested making at the moment.
Edit: and I can’t add a duplicate file to the system unless all of it is getting backed up to one of the devices. Each file is identified by its hash. Every new file must be tagged (there are also auto-tags like file type and name). Search queries an index for the file associated with a hash of the file. I have another system that lets me update files and track hash changes, basically a next-level version control system.
I used to give the same advice, but I completely changed my opinion over the past 10 years or so. I eventually put in the time and learned shell scripting. These days my recommendation is:
I really don’t want to figure out every project’s nodejs/python/ruby/make/procfile abomination of a runner script anymore. Just like wielding regular expressions, knowing shell scripting is a fundamental skill that keeps paying dividends over my entire career.
Bingo.
My advice is:
#!/usr/bin/env bash
at the beginning of your scripts (change if you need something else, don’t rely on a particular path to bash though).set -eou pipefail
after that.local
storage qualifier when declaring variables in a function.sh
, then do the work.While some people like the author will cry and piss and moan about how hard bash is to write, it’s really not that bad if you take those steps (which to be fair I wish were more common knowledge).
To the point some folks here have already raised, I’d be okay giving up shell scripting. Unfortunately, in order to do so, a replacement would:
There are basically no programming languages that meet those criteria other than the existing shell languages.
Shell scripting is not the best tool for any given job, but across every job it’ll let you make progress.
(Also, it’s kinda rich having a Python developer tell us to abandon usage of a tool that has been steadily providing the same, albeit imperfect, level of service for decades. The 2 to 3 switch is still a garbage fire in some places, and Python is probably the best single justification for docker that exists.)
I think “nine steps” including “always use two third-party tools” and “don’t use any QoL features like associative arrays” does, in fact, make bash hard to write. Maybe Itamar isn’t just “cry and piss and moan”, but actually has experience with bash and still think it has problems?
To use any language effectively there are some bits of tribal knowledge…babel/jest/webpack in JS, tokio or whatever in Rust, black and virtualenv in Python, credo and dialyzer in Elixir, and so on and so forth.
Bash has many well-known issues, but maybe clickbait articles by prolific self-pronoters hat don’t offer a path forward also have problems?
If your problem with the article is that it’s clickbait by a self-promoter, say that in your post. Don’t use it as a “gotcha!” to me.
I think there’s merit here in exploring the criticism, though room for tone softening. Every language has some form of “required” tooling that’s communicated through community consensus. What makes Bash worse than other languages that also require lots of tools?
There’s a number of factors that are at play here and I can see where @friendlysock’s frustration comes from. Languages exist on a spectrum between lots of tooling and little tooling. I think something like SML is on the “little tooling” where just compilation is enough to add high assurance to the codebase. Languages like C are on the low assurance part of this spectrum, where copious use of noisy compiler warnings, analyzers, and sanitizers are used to guide development. Most languages live somewhere on this spectrum. What makes Bash’s particular compromises deleterious or not deleterious?
Something to keep in mind is that (in my experience) the Lobsters userbase seems to strongly prefer low-tooling languages like Rust over high-tooling languages like Go, so that may be biasing the discussion and reactions thereof. I think it’s a good path to explore though because I suspect that enumerating the tradeoffs of high-tooling or low-tooling approaches can illuminate problem domains where one fits better than the other.
I felt that I sufficiently commented about the article’s thesis on its own merits, and that bringing up the author’s posting history was inside baseball not terribly relevant. When you brought up motive, it became relevant. Happy to continue in DMs if you want.
You’re really quite hostile. This is all over scripting languages? Or are you passive aggressively bringing up old beef?
Integrating shellcheck and shfmt to my dev process enabled my shell programs to grow probably larger than they should be. One codebase, in particular, is nearing probably like 3,000 SLOC of Bash 5 and I’m only now thinking about how v2.0 should probably be written in something more testable and reuse some existing libraries instead of reimplementing things myself (e.g., this basically has a half-complete shell+curl implementation of the Apache Knox API). The chief maintenance problem is that so few people know shell well so when I write “good” shell like I’ve learned over the years (and
shellcheck --enable=all
has taught me A TON), I’m actively finding trouble finding coworkers to help out or to take it over. The rewrite will have to happen before I leave, whenever that may be.I’d be interested in what happens when you run your 3000 lines of Bash 5 under https://www.oilshell.org/ . Oil is the most bash compatible shell – by a mile – and has run thousands of lines of unmodified shell scripts for over 4 years now (e.g. http://www.oilshell.org/blog/2018/01/15.html)
I’ve also made tons of changes in response to use cases just like yours, e.g. https://github.com/oilshell/oil/wiki/The-Biggest-Shell-Programs-in-the-World
Right now your use case is the most compelling one for Oil, although there will be wider appeal in the future. The big caveat now is that it needs to be faster, so I’m actively working on the C++ translation (
oil-native
passed 156 new tests yesterday).I would imagine your 3000 lines of bash would be at least 10K lines of Python, and take 6-18 months to rewrite, depending on how much fidelity you need.
(FWIW I actually wrote 10K-15K lines of shell as 30K-40K lines of Python early in my career – it took nearly 3 years LOL.)
So if you don’t have 1 year to burn on a rewrite, Oil should be a compelling option. It’s designed as a “gradual upgrade” from bash. Just running
osh myscript.sh
will work, or you can change the shebang line, run tests if you have them, etc.There is an
#oil-help
channel on Zulip, liked from the home pageThanks for this nudge. I’ve been following the development of Oil for years but never really had a strong push to try it out. I’ll give it a shot. I’m happy to see that there are oil packages in Alpine testing: we’re deploying the app inside Alpine containers.
Turns out that I was very wrong about the size of the app. It’s only about 600 SLOC of shell :-/ feels a lot larger when you’re working on it!
One thing in my initial quick pass: we’re reliant on bats for testing. bats seemingly only uses bash. Have you found a way to make bats use Oil instead?
OK great looks like Alpine does have the latest version: https://repology.org/project/oil-shell/versions
I wouldn’t expect this to be a pain-free experience, however I would say should definitely be less effort than rewriting your whole program in another language!
I have known about bats for a long time, and I think I ran into an obstacle but don’t remember what it was. It’s possible that the obstacle has been removed (e.g. maybe it was extended globs, which we now support)
https://github.com/oilshell/oil/issues/297
In any case, if you have time, I would appreciate running your test suite with OSH and letting me know what happens (on Github or Zulip).
One tricky issue is that shebang lines are often
#!/bin/bash
, which you can change to be#!/usr/bin/env osh
. However one shortcut I added was OSH_HIJACK_SHEBANG=oshhttps://github.com/oilshell/oil/wiki/How-To-Test-OSH
Moving away from Python? Now it has my interest… in the past I skipped past know it’d probably take perf hits and have some complicaged setup that isn’t a static binary.
Yes that has always been the plan, mentioned in the very first post on the blog. But it took awhile to figure out the best approach, and that approach still takes time.
Some FAQs on the status here: http://www.oilshell.org/blog/2021/12/backlog-project.html
Python is an issue for speed, but it’s not an issue for setup.
You can just run
./configure && make && make install
and it will work without Python.Oil does NOT depend on Python; it just reuses some of its code. That has been true for nearly 5 years now – actually since the very first Oil 0.0.0. release. Somehow people still have this idea it’s going to be hard to install, when that’s never been the case. It’s also available on several distros like Nix.
What is the status of Oil on Windows (apologies if it’s in the docs somewhere, couldn’t find any mentioning of this). A shell that’s written in pure C++ and has Windows as a first class citizen could be appealing (e.g. for cross-platform build recipes).
It only works on WSL at the moment … I hope it will be like bash, and somebody will contribute the native Windows port :-) The code is much more modular than bash and all the Unix syscalls are confined to a file or two.
I don’t even know how to use the Windows sycalls – they are quite different than Unix! I’m not sure how you even do fork() on Windows. (I think Cygwin has emulation but there is way to do it without Cygwin)
https://github.com/oilshell/oil/wiki/Oil-Deployments
I believe Tcl fits those requirements. It’s what I usually use for medium-sized scripts. Being based on text, it interfaces well with system commands, but does not have most of bash quirks (argument expansion is a big one), and can handle structured data with ease.
I don’t do this. Because all my scripts are POSIX shell (or at least as POSIX complaint as I can make them). My shebang is always
#!/bin/sh
- is it reasonable to assume this path?you will miss out on very useful things like
set -o pipefail
, and in general you can suffer from plenty of subtle differences between shells and shell versions. sticking to bash is also my preference for this reason.note that the
/usr/bin/env
is important to run bash from wherever it is installed, e.g. the homebrew version on osx instead of the ancient one in/bin
(which doesn’t support arrays iirc and acts weirdly when it comes across shell scripts using them)Reasonable is very arbitrary at this point. That path is explicitly not mandated by POSIX, so if you want to be portable to any POSIX-compliant system you can’t just assume that it will exist. Instead POSIX says that you can’t rely on any path, and that scripts should instead be modified according to the system standard paths at installation time.
I’d argue that these days POSIX sh isn’t any more portable than bash in any statistically significant sense though.
Alpine doesn’t have Bash, just a busybox shell. The annoying thing is if the shebang line fails because there is no bash, the error message is terribly inscrutable. I wasted too much time on it.
nixos has /bin/sh and /usr/bin/env, but not /usr/bin/bash. In fact, those are the only two files in those folders.
https://mkws.sh/pp.html hardcodes
#!/bin/sh
. POSIX definitely doesn’t say anything aboutsh
s location but I really doubt you won’t find ash
at/bin/sh
on any UNIX system. Can anybody name one?I would add, prefer POSIX over bash.
I checked, and
shellcheck
(at least the version on my computer) only catches issue #5 of the 5 I list.That’s because the other ones are options and not errors. Yes, typically they are good hygiene but
set -e
, for example, is not an unalloyed good, and at least some experts argue against using it.Not for lack of trying: https://github.com/koalaman/shellcheck/search?q=set+-e&type=issues
There are tons of pedants holding us back IMO. Yes, “set -e” and other options aren’t perfect, but if you even know what those situations are, you aren’t the target audience of the default settings.
Yup, that’s how you do it, It’s a good idea to put in the the time to understand shell scripting. Most of the common misconceptions come out of misunderstanding. The shell is neither fragile (it’s been in use for decades, so it’s very stable) nor ugly (I came from JavaScript to learning shell script, and it seemed ugly indeed at first, now I find it very elegant). Keeping things small and simple is the way to do it. When things get complex, create another script, that’s the UNIX way.
It’s the best tool for automating OS tasks. That’s what it was made for.
+1 to using ShellCheck, I usually run it locally as
for POSIX compliance.
I even went as far as generating my static sites with it https://mkws.sh/. You’re using the shell daily for displaying data in the terminal, it’s a great tool for that, why not use the same tool for displaying data publicly.
No, it really is ugly. But I’m not sure why that matters
I believe arguing if beauty is subjective or not is off topic. 😛
I went the opposite direction - I was a shell evangelist during the time that I was learning it, but once I started pushing its limits (e.g. CSV parsing), and seeing how easy it was for other members of my team to write bugs, we immediately switched to Python for writing dev tooling.
There was a small learning curve at first, in terms of teaching idiomatic Python to the rest of the team, but after that we had far fewer bugs (of the type mentioned in the article), much more informative failures, and much more confidence that the scripts were doing things correctly.
I didn’t want to have to deal with package management, so we had a policy of only using the Python stdlib. The only place that caused us minor pain was when we had to interact with AWS services, and the solution we ended up with was just to execute the aws CLI as a subprocess and ask for JSON output. Fine!
I tend to take what is, perhaps, a middle road. I write Python or Go for anything that needs to do “real” work, e.g. process data in some well-known format, but I tie things together with shell scripts. So, for example, if I need to run two programs and then combine their outputs somehow, there’s a Python script that does the combining, and a shell script that runs all three programs and feeds them their inputs.
I also use shell scripts to automate common dev tasks, but most of these are literally one-ish line, so I don’t think that counts.
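A sketch of that division of labor (all program and file names hypothetical):

```sh
#!/bin/sh
set -eu
# Run the two "real" programs, then hand their outputs to the combiner.
producer-a --input data/ > a.out
producer-b --input data/ > b.out
python3 combine.py a.out b.out > report.txt
```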
This makes sense to me
FWIW when shell runs out of steam for me, I call Python scripts from shell. I would say MOST of my shell scripts call a Python script I wrote.
I don’t understand the “switching” mentality – Shell is designed to be extended with other languages. “Unix philosophy” and all that.
I guess I need to do a blog post about this? (Ah, I remember – I have a draft and came up with a title: “The Worst Amounts of Shell Are 0% or 100%” – https://oilshell.zulipchat.com/#narrow/stream/266575-blog-ideas/topic/The.20Worst.20Amount.20of.20Shell.20is.200.25.20or.20100.25 – requires login.)
(Although I will agree that it’s annoying that shell has impoverished flag parsing … So I actually write all the flag parsers in Python, and use the “task file” pattern in shell.)
What is the “task file” pattern?
It’s basically a shell script (or set of scripts) you put in your repo to automate common things like building, testing, deployment, metrics, etc.
Each shell function corresponds to a task.
I sketched it in this post, calling it “semi-automation”:
http://www.oilshell.org/blog/2020/02/good-parts-sketch.html
and just added a link to:
https://lobste.rs/s/lob0rw/replacing_make_with_shell_script_for
(many code examples from others in that post, also almost every shell script in https://github.com/oilshell/oil is essentially that pattern)
There are a lot of names for it, but many people seem to have converged on the same idea.
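A minimal sketch of the pattern (the task names are hypothetical):

```sh
#!/usr/bin/env bash
set -euo pipefail

build()  { echo "building..."; }
check()  { echo "running tests..."; }
deploy() { build; echo "deploying..."; }

# List the available tasks (compgen -A function enumerates defined functions).
help() { compgen -A function; }

# Dispatch: ./run.sh build, ./run.sh deploy ...; default to help.
"${@:-help}"
```

Invoking ./run.sh check runs the check function with any remaining arguments; an unknown task name fails with bash’s usual “command not found”.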
I don’t have a link handy now, but GitHub had a standard like this in the early days. All their repos would have a uniform shell interface so that you could get started hacking on them quickly.
You should investigate just (the task runner) for task running. It’s as simple as make, but with none of make’s pitfalls.
I’ll stop writing shell scripts once a suitable replacement exists.
For something to be a “suitable replacement”, it has to let me run and interact with command-line programs as easily as or easier than shell does, and it has to exist on all unix-like systems which have been in active use for the past 20 years. Ideally it’d also be specified by POSIX.
There are problems with shell, but there are very good reasons for why people write shell scripts. Don’t dismiss them.
I once took a 15-line shell script that was mostly just invoking docker in various ways and decided to rewrite it in Python. It blew up to 60 lines, and wasn’t actually any more readable.
I am absolutely not advocating for Perl (haven’t written it for 15 years and rarely use it), but Perl does meet many of your requirements.
I agree, I have recognized Perl as an interesting language for exactly these reasons myself. Even tried learning it a few times. But unfortunately, it’s really hard to learn because it’s just so different from literally everything else, and even if I did learn it myself, I’d be really careful writing anything in it which other people may have to work with. Plus, it seems like a mess caused by lots and lots of major but backwards-compatible changes over the years.
In the end, I use shell for stuff where shell is a good fit, and Python 3 for stuff which really only needs to run on beefy (server/desktop-style) Linux machines. Python is almost never available on embedded Linux systems, though, or on systems where you can’t afford to waste several orders of magnitude of performance; plus, the 2 -> 3 transition means Python 3 isn’t actually as widespread as I would like. Perl is a better choice than Python 3 by every metric other than the syntax and semantics of the language itself.
Yeah, I get the point of the article (maybe unfairly reduced to robustness and testing), but what would a Python version even look like (assuming Python here because of the URL)? Replace everything with API calls? What if I really want to do a dig or a ping? Would I find native libraries to do those things instead of shelling out?
Shells like fish and oil make scripting better, but it’s still scripting. If “the thing” keeps baking, I’ll turn it into a binary, and I think this is where compiled languages make even more sense. Putting things in bin is great. Putting binaries into a docker image is great. Having a real CLI library is great compared to parsing args like bash does. But all this is work compared to a duct_tape.sh when you know the unix chainsaw. I wonder if the term unix chainsaw is the entire thing I just said.
It’d be really cool to have a compiled and productive language. For me, this is Crystal right now, but there are a few other options depending on your tastes.
Python can call subprocesses.
Python can do pretty much everything shells do and pretty much everything you need in shell scripts…
…but I learned how to write shell scripts like 20+ years ago, and I still do it pretty much the same way. Whereas in about the same timeframe, the recommended way to call subprocesses in Python has changed at least two or three times; we’re now at the third attempt at parsing CLI arguments, and it’s so complex and batteries-included that I have to re-read like thirty pages of docs every time I want to use it. And if anyone thought, hey, I’ll do the smart thing, and instead of piping uudecode/uuencode output like it’s 1981 I’ll use Python and its everything-but-the-kitchen-sink standard library – well, the uu module is getting deprecated.
I went through a “stop writing shell scripts” phase around 2012 or so. I stopped about five years later and ported everything back to ksh, because one-off scripts are a thing, but one-off programs have not been a thing in a long time now.
I like Python and I use it every day, but there’s no way I’m using it, or any programming language meant for professional software development as it’s widely understood in 2022, for the things I use shell scripts for. Life’s too short.
The API for handling processes in python is very error prone and has suffered from many bugs over the years, including regressions in later 3.x releases. It’s not better than a shell, just different.
Sorry if you know all this, but yeah, that’s shelling out. It’s a neat trick to save the process as a variable, but it’s still exiting Python memory as a process and coming back. If you run it, find the Python pid, and do a pstree [pid], you’ll see you get a new PID underneath Python. It’s the same as typing bash and then bash again within it: pstree will show you that bash owns bash until the inner bash exits. The Python sleep will look very similar.
This is what any file handle is (in *nix), even for a pipe. You could call “ping 8.8.8.8”, get a Python ICMP library, or make a socket yourself and follow the ICMP spec. Scripts are so easy when you know the shell, but they are so incredibly frustrating when you know a language with nice data structures, algorithms/generators/comprehensions, etc.
When a script has gotten too serious, I consider rewriting it, but then I also think about all the shelling out I’d need to do. How many commands does this rely on, and how deeply? If I’m relying on grep’s speed and its before/after context flags, ugh. I wait for the script to bake. Is it used a lot? Is it brittle? Is its purpose known? If it is glue code, the answers are usually no.
I love Crystal and find it much more ergonomic/succinct and productive than Go or Rust. But it doesn’t fit your description so well: compile a binary and it will depend on a bunch of libs which have varying installation procedures.
I had a script laying around to copy all the dependencies output by ldd to a docker container, but feels messy and complicated when compared to a single binary.
Can’t you do --static and avoid dynamic libs?
It was experimental/unstable last time I checked.
You can use subprocess. It’s more verbose than bash, but not too verbose. On the other hand, you get more clarity, and simpler lines wherever you’d use grep/sed/awk – which exist in most scripts.
I mean, don’t program like this in any language: cp has a return value. Check it.
I believe that’s partly the point in the post. Using set -e effectively forces return-code handling, at the least by explicitly swallowing a bad return code with something like || true.
As an aside, this is why I’m not a fan of Go’s error handling. It’s optional, because “err := f()” can always be written as “f()” (afaik and can tell, anyway – happy to know if this can be forced).
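In shell terms, a sketch of that forced handling (file names hypothetical):

```sh
#!/bin/sh
set -e
cp config.tmpl /etc/app/config   # under set -e, a failed cp aborts the script
rm /tmp/app.lock || true         # an expected-to-maybe-fail step, swallowed explicitly
```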
IIRC, go vet will flag that particular idiom (failure to assign / check an error return).
So, no better than shell scripts then: you need a separate tool to yell at your mistakes.
It’s good to have a linter like go vet, but I agree that a compiler that defaults to throwing errors (or at least warnings) on clear anti-patterns is better. Go already has an idiomatic escape in “_ = f()” to throw away the returned error, so I’m curious why Go’s compiler allows any returned value to be implicitly and silently ignored.
I’m always wary of these “always” and “never” essays.
Instead of going down that route, why not set up a code budget before you begin and stick to it? After all, it’s one thing to spend a couple of coins on some bubblegum on your way to work. It’s something else entirely to invest in condominiums. There are lots of examples in life where we have limited resources and expect a limited return. We use budgets to allocate limited resources to return limited value. Why don’t we treat code the same way?
For my personal work, I budget 50 lines or so of bash to any project. If it goes beyond that, I’m coding in a higher-level language (which I also budget, as a way of keeping track of whether I’m compartmentalizing enough). If I were coding commercially, I’d drop that budget by half or more.
You run into problems with any programming language when the amount of complexity you’ve grasped outreaches your ability to understand it. With signals, string mangling, and the rest, that happens pretty quickly in bash. That doesn’t mean it’s a bad language. These are just risks that need to be managed.
Instead of shunning or loving some language, find a rule that tells you when enough is enough, i.e., when it’s time to refactor or move to a different platform. Then stick to it. It’s the people who can’t help themselves and keep taking on more and more complexity in their code over time that are the ones to worry about, no matter what tech they’re using.
I believe that this isn’t an “always” and “never” essay, but rather a list of bad and good practices. In the last section the author covers the scenario you describe (50 line bash budget).
I don’t think any reasonable author would say “never” use bash, however the article points out how it may not be the best choice to use bash in many scenarios, thus “Please stop writing shell scripts” instead of “Do not write shell scripts”. It’s a persuasive essay, not an authoritative, commanding essay.
Wow, I am seeing a lot of Stockholm syndrome in the existing comments. “Hey, bash is just misunderstood! It didn’t mean to give me a black eye, it’s actually my fault because I forgot to turn on error checking and run a bunch of lint tools first. Anyway, if I left bash, no other language would ever love me.”
sh and its descendants are deformed excuses for programming languages, the result of many decades of gluing shit together at random. They make PHP look like Haskell. I’m trying to think of an analogy with some sort of powerful but absurdly awkward and dangerous power tool, but failing, probably because hardware isn’t immune from liability laws so no company could get away with selling something so badly designed.
You mean it’s the C of scripting languages?
That is a grave insult to C.
Is it ? All the arguments here in favor of “you’re using it wrong, you just have to X to avoid pitfall Y” read very similar.
Might be a case of “wrong tool for the job”. Just don’t write complex programs in shell.
I would find it more persuasive if the author showed me what idiomatic code to write in some other language with error checking.
As it stands the post boils down to “poorly written Shell scripts are footguns”, to which my response is “yes”.
Oil has all of these options under one group, oil:basic, so you don’t have to remember all of them. They’re on by default in bin/oil, or you can opt in with shopt --set oil:basic in bin/osh.
Also, I think this title is a troll – it should be more like “Pitfalls of Shell” or something like that, which are pretty well known by now.
The answer is to fix shell, not to write articles on the Internet telling people not to use it. They use it because it solves certain problems more effectively than other tools.
Yeah this post omits the only piece of advice that would make it practical, which is pointing to another programming language that they consider better suited for the job. I’ve written code to launch and tend to processes in a lot of languages and they have all been as error prone as the shell. I don’t think people who bash on shells understand just how complex correct process handling is.
If you wouldn’t mind taking the opportunity to shill: how would you go about convincing somebody to switch to oil shell from bash, assuming they’re willing to ignore the lack of widespread deployment of oil? What’s your sales pitch?
Sure, the most compelling case is actually up the thread:
https://lobste.rs/s/iofste/please_stop_writing_shell_scripts#c_m4yng8
That is, you have 3K lines of bash code, AND you want to switch to something else.
Well Oil is basically your only option! It’s the most bash compatible shell by a mile.
There are downsides – Oil needs to be faster, for example – but if you actually have that much shell, it’s worth it to start running your scripts under Oil right now.
Some notes here: https://github.com/oilshell/oil/wiki/How-To-Test-OSH
Feel free to post on #oil-help on Zulip, file issues on GitHub, etc.
I have a draft of other reasons here: https://www.oilshell.org/why.html
Most of them amount to better tools and error messages. Oil is like ShellCheck, but at runtime. ShellCheck can’t catch certain things, like bad set -e usage patterns, because some of them can only be detected at runtime – otherwise you would have a false positive on every line. (I should do a blog post about this.)
I also put some notes about “the killer use case” here:
http://www.oilshell.org/blog/2021/12/backlog-assess.html#punting-the-interactive-shell
i.e. I started running my CI in containers, and I think many people do. Oil not being installed isn’t a big issue there because you have to install everything into a container :)
Although we probably need a bootstrap script, e.g. like rustup, if your distro doesn’t have it. (Many do, but not all.)
Let me know if that makes sense!
I love your shill. It’s always upbeat and on topic.
I wrote ansible roles to install oil on my Linux/FreeBSD servers at home.
I’d guess itamarst really meant the title (subject to the caveats in the article), but also that he wasn’t talking about alternate shells like Oil, as they are really a different matter. Nobody writes “don’t use fish” articles, and Oil is in the same boat — it isn’t available by default, waiting to blow your hand off, so there’s no need to warn folks away from it.
Any language that has been designed rather than duct-taped together over decades is going to avoid shell’s (bash/dash/ash/POSIX sh’s) faults. Please continue doing this! When /bin/oil is part of a stock Debian install we can start telling people to put #!/usr/bin/env oil at the top instead, but until then I think it’s sensible to post these warnings periodically, since OSHA is unlikely to step in.
This article gets at the heart of the problem. By far the most common thing to do in a shell script is to run a command, store the output, and use it as input to the next command, while aborting if any error occurs. The fact that this is actually pretty hard to do correctly in shell is a huge problem. This is mostly because bash is trying to maintain compatibility with shells dating back 50 years at this point, which were written for different purposes and with totally different requirements.
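For reference, the careful version of that most-common pattern looks something like this (command names hypothetical):

```sh
#!/bin/sh
set -eu                 # abort on any failed command or unset variable
out=$(first-command)    # a plain assignment passes the command's status through,
                        # so set -e aborts here if first-command fails
second-command "$out"
```

The subtlety is that it only works because the assignment stands alone: wrap it in local, export, or an if condition, and the failure can be silently masked.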
I.e.: it is unforgiving to incompetent engineers.
That’s one strong reason to write shellscripts.
Not knowing how to write shell scripts doesn’t make someone incompetent, nor the inverse.
Also, even if it did, forcing people to use a thing just because they’ll make mistakes with it is sociopathic.
Not sure about that, with the shell being such an integral part of the operating system.
Sure it’s relevant if you do system administration on Unix-likes but even then, you can often get by with Python or Perl or similar. Past that though, it’s just often not relevant. Software development is an extremely broad field and just because someone doesn’t have the same expertise as you doesn’t mean they’re incompetent.
I would say it’s relevant if you’re doing any kind of software development, thing is I believe it’s a core skill regardless of your main tech stack. If you read the comments in the thread, you’ll see that there’s no good replacement for shell script, you could get by with Python or Perl or any other programming language, but the shell is best at automating OS tasks. There’s no better alternative for that.
But it’s just not. Windows devs are more likely to need to know PowerShell or batch than bash or sh. Web devs practically live in node these days. Embedded software devs who work almost entirely out of proprietary IDEs probably don’t need it either. Once again, software development is an incredibly broad field; not everyone works on the same things you do, in the same ways.
This is once again debatable. If someone knows Python or Perl better, and those languages are available, then for them those are best for automating tasks. New tools like zx are coming along, which combined with something like Deno could mean you wouldn’t even need an interpreter on the machine in question.
It’s just that it’s impossible not to stumble upon shell while working as a software developer; you either take the time to understand it or try avoiding it out of misunderstanding.
I’ve been using Unix since 1990. I do development exclusively on Unix systems (Linux, macOS and Solaris these days), and I have pretty much avoided writing shell scripts. If I’m doing anything more complex than running a command over a list of files, I’ll reach for something other than sh to do the programming. In fact, I think most of my issues with sh come from using it for what it is – a shell, not a programming language [1].
[1] My biggest hangup with sh is redirection and the utterly insane (in my opinion) semantics around it. I still can’t figure out (or remember) how to redirect stdout to a file, and stderr to a pipe. Yes, it’s been explained to me several times, and each time it just seems bonkers to me.
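For the record, the incantation for that particular case is below; the counterintuitive part is that redirections are processed left to right, after the pipe is set up:

```sh
# stdout -> out.log, stderr -> the pipe (here, into grep)
some-command 2>&1 >out.log | grep -i error
# 1. The pipe is created first, so stdout initially points at the pipe.
# 2. 2>&1 duplicates stderr onto the *current* stdout, i.e. the pipe.
# 3. >out.log then re-points stdout (alone) at the file.
```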
If it’s complex, go for another programming language or start a new script; if it’s small and simple (a few simple commands), stay in shell.
It’s not an issue, it’s good usage. You write shell scripts when you would like to repeat or reuse those few commands you typed in the CLI. That’s it. It’s not a good idea to write daemons or file managers in shell script.
You’ll remember once you use it enough.
Shell script is just for automating small, simple tasks, that’s it!
The Plan 9 rc shell had/has a nicer syntax for this:
where, IIRC, you can put the two redirections in either order. |[1] also works (or any other fd number). rc also has saner quoting/syntax in general, and its control-flow constructs are closer to C style with () and {} (rather than keywords and their counterparts spelled backwards, which is very Algol 68 in sensibility). It never really caught on, even though there has been a port from Plan 9 to Unix for about 30 years.
But the discussion is about which one is better by objective technical criteria, not about which one is better for a given person. By that logic, the best is money: you pay someone else to do it.
But to the point: shell script has syntax designed and optimized to launch applications and glue their input and output with minimal effort. You get other stuff for free, such as debug info (stderr) and buffer management by the operating system. You can’t do the same in so-called general-purpose programming languages, because their design focus is on small bits of data (strings, variables, etc.). The API to the OS is more verbose, and there is no way to fix that without making compromises on other aspects of the language. It is a design choice, in both cases.
The reason why many people dislike shell is that it’s unforgiving and has tricky bits. Few people commit to really learning it properly, so they get bitten by some pitfall and write such negative posts.
I think there is a second case that is less directly about competence and more about condescension? As in, some people get annoyed when Shell bites them and decide it’s beneath them.
Shell is a bad language for many reasons, but I would not recommend Python as a replacement.
These threads are like “groundhog day”. I pointed out these similar past posts on my blog, including one from 2008:
http://www.oilshell.org/blog/2021/06/oil-language.html#real-problems-from-chris-siebenmann
https://utcc.utoronto.ca/%7Ecks/space/blog/programming/BourneShellLimitation
https://lobste.rs/s/d9xics/bourne_shell_bash_aren_t_right_languages
So basically the situation hasn’t changed since 2008, but people keep having the same conversation over and over again about shell :)
MANY more people use shell than in 2008. It’s more popular than ever
My rule of thumb is: if it cannot be done in one page of POSIX sh, rewrite it in a real programming language.
Or a couple or three of programs written in a real programming language, glued together with POSIX sh.
See also: please stop writing Dockerfiles. Because they incorporate shell in a manner only mildly less deranged than Make.
Most of my time is now spent using Racket in places where I would have used a shell script. It’s easier to write a Racket program that invokes other programs, works with their error codes, and redirects their output to the right places. Truly a joy for me, personally, as I do like writing Lisp.
Could you provide a few idiomatic examples of replacements for typical shell-script pipelines featuring grep, seq, sort, etc.?
For the most part, a lot of features in the Racket library do not need sub-processes to do those types of jobs. For grep, we have regexp objects, which employ either regexp-match or regexp-match? to match across strings or filter. seq can be mimicked by using a range function to iterate, combined with expressions like for. sort is done by using the appropriately named Racket function sort and changing the comparison function and input list.
If you want to sub-process invoke programs, then the output of a subprocess call can only be sent to a file stream like stdout or a plain file. To invoke multiple sub-processes one after another and continuously pass their outputs to one another involves a little bit of trickery, which might be a bit complex to talk about in a comment, but it is doable. The gist is to try to write tasks using the Racket standard library, then use subprocess when you need something not covered by it.
As posted in a sibling message, it’s much easier to use built-in functions than to shell out and call another program. Personally, I find Racket more convenient for writing scripts that need to work in parallel – for example, a script that gets the load average from several machines in parallel over ssh.
https://gist.github.com/6c7ab225610bc50a3bb4be35f8e46f18
Would also love to see examples.
The best way I can quickly sum it up is clever use of the function subprocess in Racket. This outputs the seq command to stdout, and allows for arbitrary commands, so you can do zero-arg sub-processes or however many you need/like. The current-output-port and current-input-port calls are parameters that you can adjust inside a parameterize block to control the input/output from the exterior.
The output port must be set to a file; it cannot be set to an output string as with call-with-output-string, so output is either going to go straight to stdout, or you can use call-with-output-file to control the current-output-port parameter and store the output wherever you please.
The difficult part of advocating against writing software in shell in the abstract is that there are too many better choices to pick between for what you should recommend instead. You can’t swing a cat without hitting at least two programming languages that are vastly better. Damn near anything works. It is extremely difficult to accidentally design a programming language that is as bad as shell.
The flip side is that in any given specific situation it’s pretty easy. Just use what most people know already. If you have a collection of programmers who all know Python, just pick Python. Likewise Ruby, Lua, ECMAscript or pretty much anything.
Most programming languages that are better at enabling expression of logic are horrible at process handling. Most of the obvious contenders have horribly complex APIs that developed as they found bug after bug in their initial implementations that revealed the nuance of what a shell does. Languages like perl and ruby that carry the tradition of backticks for launching processes are vulnerable to deadlocks, for example. To get shell-like behavior from python you need to use Popen but that doesn’t give you the same behavior in all cases.
Eh.
You can cut all the calls to mv, ls, and so on by making calls into libc. A bunch of this stuff gets easier, because real APIs aren’t prone to fucking up strings the way shell does. The majority of subprocess handling that remains is trivial: launch something and immediately wait for it. You don’t need a terse DSL for launching multiple processes in one go, like “cat | cut | sed | tr | awk”, if you move all the string handling into a real programming language, where these things are easy instead of hard.
I do not commonly need to continuously pipe data into a subprocess’s stdin while continuously piping its stdout somewhere else. I have zero scripts in day-to-day use at work that need to do that instead of just fully buffering one or both sides of the communication. If it does come up, I do know how to use select(2) or coroutines. :P
People who advocate for that end up taking 20-100 times more time and code to do it “in a real programming language”. The example you gave is the most typical: they start with that claim, and then, between writing it, testing it, debugging it, etc., 20 to 100 lines of code later, anything from a morning to a week has passed. Every friggin time.
Then a month or two later, the whole team is called in to solve some critical production issue, because the solution in a real programming language abused buffering and crashed when 10 GB of data were thrown at it.
Meanwhile, the shell script handles buffering for you gracefully, and it’s tiny and idiomatic.
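To make the buffering point concrete: in a pipeline like the sketch below, each stage reads and writes incrementally through fixed-size kernel pipe buffers (and sort spills to temporary files when needed), so the 10 GB never has to fit in memory. The file name and field position are hypothetical:

```sh
# Count the most common error codes in a huge compressed log, streaming throughout
zcat huge.log.gz | grep ERROR | awk '{print $3}' | sort | uniq -c | sort -rn | head
```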
If you’re doing a lot of subshelling and want a nicer API, look into sh (the Python subprocess-wrapper library). It’s far from perfect, but it makes a lot of things a lot simpler.
In the post and in the discussion, I see two slightly different aspects.
First one is “don’t add extra dependencies”. If your project is in Python, build it with Python. If it is in Java, use Java. And, well, if your project is in shell, by all means, use the shell. While it seems that, e.g., using Python to build (in a very general sense) a Java project is fewer lines of code, it is more accidental complexity: now the person building the project needs both Java and Python (or Java and bash). The hard problem to solve here is bootstrap. You really don’t want to ask a contributor to install some random “make, but Java” tool just to be able to run your scripts. Ideally, you want a #! line which works with a baseline install of the language. Often that’s doable only via horrible hacks (see, e.g., xtask, or how gradlew gets cross-platform support).
Second one is “shell is the least-worst language for non-project automation”. Indeed, if what you are writing lives in ~/bin and isn’t shared with others, shell does seem like a potentially least-bad tool, despite all of its flaws. If you can’t stand shell for aesthetic reasons… well, tough to be you. My personal journey here went from a home-grown Python DSL, past Ruby (backticks have the right syntax, but wrong semantics), through Julia (it has backticks with correct semantics, but gosh, it is slow to start), to “heck, I’ll just use Rust, because working auto-complete compensates for any verbosity”.
If I started today, I’d probably look at Deno: JS backticks have semantics which can be employed correctly, and Deno seems to be a qualitative improvement in the design of scripting systems.
Of course, a third option is to make Emacs a pid1 and call it a day :-)
As a software developer, I prefer strongly and statically typed languages, but I still enjoy shell scripts in some cases. The reason is probably that in such cases it is not software development/engineering, just gluing things together, and nothing too big. There, a shell like Bash is a useful tool and serves me well.
The size is the key limitation: if I can keep the whole script in my head, or if the whole script fits on a single screen, it is OK. If it is longer, other tools are more suitable (a programming language with static typing).
Theoretically. But it is usually too late: no one will learn a new language and rewrite the script in such a situation. Rather, they’ll “just add these few lines here”. Probably the best solution is to learn another language in advance. Then you can use shell because it is the best tool for the given task, not because it is the only language you know.
If only everyone would ….
This is the kind of thing a person who hasn’t studied economics says.
It is appealing how little shellscripts change vs other languages, and it is useful for bootstrapping servers when python may not be installed yet. I don’t have update/packaging nightmares with shellscript. Will I still be heavily using python in 10 years? I’m not sure.
POSIX compliance and deviations are a nightmare, though. If it’s a complex script, another tool is more appropriate.
This is, by extension, a counterargument to the perennial weed of a recommendation to use Makefiles in projects where they are neither idiomatic nor sufficiently portable and better options (like the language the project was written in) exist.
Every few years I fall in the trap of trying to write a quick Makefile for managing some simple commands on a new project, and every single time I go through the pain of rediscovering just why I stopped doing it the last time around. .PHONY targets, weird whitespace semantics, weird variable semantics, etc. etc.
Nowadays I recommend people either start with a proper task runner such as just or task, or to follow the ‘Taskfile’ standard for bash scripts: https://medium.com/@adrian_cooney/introducing-the-taskfile-5ddfe7ed83bd
No.
Please stop writing articles :)
Please stop being an asshole :)
Trying my best, but I don’t always succeed :)
The cost-benefit of writing software in a ‘serious’ language, for me anyway, is that the investment only makes sense if a lot of others use it. And people need to earn a living for their work. If there is software I actually need, and I want to get on with my life, shell script or some combo of it is amazing. It is of course preferable to distribute software mostly as a single binary, if possible. But for a limited audience, depending on what software it is, shell script is like a secret weapon. Gluing together systems and software in shell script is at least an order of magnitude easier than in other popular languages, IMO.
For example, I have a private network (can be anywhere in the world) where I can fuzzy-search for any file (video or whatever) and open it or stream it to any device, as long as the file is on one of the devices. I can back up all files to a single device too, with a single command. And it’s secure. I basically stopped using third-party file-sync software and stuff like Dropbox as a result. The setup is a pain, though, and I sometimes completely forget how to add a new device, which is why shell script is terrible for distributing the kind of software I’m interested in making at the moment.
Edit: and I can’t add a duplicate file to the system, unless all of it is getting backed up to one of the devices. Each file is identified by its hash, and a search consults an index for the file associated with that hash. Every new file must be tagged (there are also auto-tags like file type and name). I have another system that lets me update files and track hash changes – basically a next-level version control system.
This has persuaded me to think about Perl much earlier.
Yes, stop! Use https://github.com/google/zx instead.