1. 38

  2. 25

    If the goal is being minimal, then I really don’t see why you’d include features like coloring.

    1. 8

      I imagine the title meant something more like “minimally complete”; taken to an extreme, a minimal template would be empty. Nevertheless, I found this post useful for the reasons they mentioned at the start. Every time I write a bash script, I have to relearn basic bash things. I bookmarked this for future reference.

      1. 3

        That’s fair.

        I think though that there is some level of a “minimal bash template” that isn’t entirely empty. For instance, the set options at the beginning, as well as a cleanup function registered with trap, are applicable to nearly every script and would belong in a minimal template.
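
        For illustration, a sketch of that minimal core (the exact set flags and signals are a matter of taste):

            #!/usr/bin/env bash
            # -e: exit on error; -u: error on unset variables;
            # -o pipefail: a pipeline fails if any stage fails.
            set -euo pipefail

            cleanup() {
              trap - SIGINT SIGTERM ERR EXIT  # don't re-enter cleanup
              # remove temp files, kill background jobs, etc.
            }
            trap cleanup SIGINT SIGTERM ERR EXIT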

        1. 4

          Yeah, I agree. Still, it’s nice to have a template where I’d begin by paring it back rather than by scrambling to find things it’s missing. That said, this is probably missing some things too. I guess I won’t know until I need them. :P

    2. 16

      If you’re at the point where you need to parse flags, like in this example, you’re no longer writing “a simple script”: it’s now a full-fledged program. Do yourself a favor and use an actual programming language. Yes, Bash can technically do a lot, but as someone who works on a project centered around 100k+ lines of Bash, it’s going to slow you down and introduce its own terrible categories of bugs.

      1. 11

        I struggle with this a lot, because there’s definitely some truth to this. For me the test is usually “is the primary role of this script/app to just call other binaries”. If the answer is yes I lean to shell scripts, as I’m unconvinced writing e.g. Python with subprocess calls, or C# with System.Diagnostics.Process, etc … represents an improvement. It’s likely to be quite a bit longer with all the extra process management, with minimal gain if you’re writing Bash in a reasonably disciplined way (e.g. use shellcheck, consider shfmt).
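
        As a hypothetical illustration of a script that passes that test, every line just invokes other binaries and stays close to what you’d type interactively (the image name is a placeholder):

            #!/usr/bin/env bash
            set -euo pipefail

            # Pure glue: build and publish an image tagged with the current commit.
            tag="$(git describe --tags --always)"
            docker build -t "myapp:${tag}" .
            docker push "myapp:${tag}"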

        Part of that discipline for me is the exact “boilerplate” which a template like the linked article provides.

        EDIT: Obviously once we’re talking 100k+ or even 10k+ lines of Bash we’re in an entirely different realm, and OP has my deepest sympathies.

        1. 7

          Wow, really, 100K lines of bash? What does it do?

          I keep track of such large programs here:

          https://github.com/oilshell/oil/wiki/The-Biggest-Shell-Programs-in-the-World

          There are collections of scripts that are more than 100K lines for sure, but single programs seemingly top out around 20–30K… I’d be interested to learn otherwise!

        2. 15

          My experience with Bash is: “avoid it at all costs”. Unless you are writing very OS-specific stuff, you should always avoid writing bash.

          Bash efficiency is a fallacy; the hoped-for time savings never materialize. Bash is sticky: it will stay with you until it turns into a big black hole of tech debt. It should never be used in a real software project.

          After years of Bash dependency, we realized it was the biggest point of pain for old and new developers on the team. Right now Bash is not allowed, and new patches introducing lines of Bash need to delete more than they introduce.

          Never use Bash, never learn to write Bash. Keep away from it.

          1. 4

            What do you use instead?

            1. 8

              Python. Let me elaborate a little bit more.

              We are a Docker/Kubernetes shop. We started building containers with the usual docker build/tag/push, plus a test in between. We had one image; one shell script did the trick.

              We added a new image, and the previous one gained a parameter that lived in a JSON file and was extracted with jq (first dependency added). Now we had a loop with 2 images being built, tested, and pushed.

              We added 1 stage: “release”. The Docker flow was now build, tag, push, test, then tag and push again (to release). And we added another image, the previous images gained more parameters, and something was curled from the public internet with the response piped into jq. A version docker build-arg was added to all of the images; this version was some sort of git describe.

              2 years later, the image building and testing process was a disaster. Impossible to maintain, all errors caught only after the images were released, and the logic to build the ~10 different image types spread across multiple shell scripts, CI environment definitions, and docker build-args. The images required a very strict order of operations to build: first run the build script, then run script x, then tag something… etc.

              Worst of all, we had this environment almost completely replicated so images could be built both locally (on your own workstation) and remotely in the CI environment.

              Right before the collapse, I asked management for 5 weeks to fix this monstrosity.

              1. I captured all the logic required to build the images (mostly parameters needed)
              2. I built a multi-stage process that would do different kinds of tasks with images (build, tag, push)
              3. I added a Dockerfile template mechanism (based on jinja2 templates)
              4. I wrote definitions (a pipeline) of the process or lifecycle of an image. This would allow us to say, “for image x, build it, push it into this repo” or “for this image, in this repo, copy it into this other repo”
              5. I added multiple builder implementations: the base one is Docker, but you can also use Podman and I’m planning on adding Kaniko support soon.
              6. I added parallelized builds using multi-processing primitives.

              I did this in Python 3.7 in just a few weeks. The most difficult part was migrating the old, tightly coupled shell-script-based solution to the new one. Once this migration was done we had:

              1. The logic to build each image was defined in an inventory file (yaml; not great, but not awful)
              2. If anything needs to be changed, it can be changed in a “description file”, not in shell scripts
              3. The same process can be run locally and in the CI environment, everything can be tested
              4. I added plenty of unit tests to the Python codebase. Monkeypatching is crucial for testing when you have things like docker build in the middle, although this can also be handled by running tests with the no-op builder implementation.
              5. Modularized the codebase: parts of the generic image pipeline are confined to their own Python modules. Everything that’s application-dependent lives in our repo and uses the other modules we built. We expect those Python modules to be reused in future projects.
              6. It is not intimidating to make changes; people are confident about the impact of their changes, meaning they feel encouraged to make changes, improving productivity

              Anyway, none of this could have been achieved with Bash, I’m pretty sure of it.

              1. 13

                It sounds to me like your image pipeline was garbage, not the tool used to build it.

                I’ve been writing tools in bash for decades, and all of them still run just fine. Can’t say the same for all the Python code, now that version 2 is officially EOL.

                1. 3

                  bash 3 broke a load of bash 2 scripts. This was long enough ago that it’s been largely forgotten.

                  1. 1

                    I agree with you: the image pipeline was garbage, and that was our responsibility, of course. We could write the same garbage in Python, no doubt.

                    Bash, however, definitely does not encourage proper software engineering, and it makes software nearly impossible to maintain.

              2. 1

                I can confirm this. I had to replace a whole build system written in bash with CMake roughly 2 years ago, and untested bash still contaminates many places it should not be involved in.

              3. 8

                I believe >/dev/null 2>&1 could be shortened to &>/dev/null?

                Also, I wonder why un-trap in cleanup? Is the script expected to call the function somewhere other than at exit?

                Also, try using shellcheck whenever possible.

                1. 5

                  Guarding against recursion. If something goes wrong in cleanup() you’ll trap again and eventually bash itself will blow its stack.
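
                  A sketch of the guard (the rm stands in for any cleanup step that could fail):

                      set -Eeuo pipefail   # -E so the ERR trap also fires inside functions

                      cleanup() {
                        trap - ERR EXIT       # un-trap first: a failure below can no
                                              # longer re-trigger cleanup()
                        rm -f "${tmpfile:-}"
                      }
                      trap cleanup ERR EXIT

                      tmpfile="$(mktemp)"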

                  1. 4

                    Shellcheck is the golden ticket. I use it on all my scripts and don’t consider code done until it passes. It’s taught me so much!

                  2. 7

                    About a hundred lines, including argument parsing, terminal color management, and helper functions.

                    I suspect a lot of folks will click the title out of latent anxiety about their own quick and dirty scripts. There’s nothing wrong with quick and dirty scripts.

                    Perhaps a hundred lines of throat clearing is “minimal” for internal tools used by larger teams, for shell scripts shipped as primary user or installation interfaces of products, or to meet standing policies about user interfaces. The “minimum” for most scripts I’ve written or helped maintain is a valid shebang. Maybe set -e.

                    1. 7

                      Agreed. I also think that when you’re writing scripts for consumption by anyone other than yourself (like an internal tool), you should think about removing bash-isms to avoid issues with different shells.
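
                      For example (a contrived sketch), a bash-ism like [[ $answer == y* ]] has a POSIX equivalent that dash and busybox sh will also run:

                          #!/bin/sh
                          # POSIX case pattern instead of bash's [[ ... == y* ]]
                          answer="yes"
                          case "$answer" in
                            y*) echo "confirmed" ;;
                            *)  echo "aborted" ;;
                          esac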

                      1. 6

                        He does indicate it’s specifically meant to be run by bash via a shebang. Unless you really need to support multiple shells, the extra pain is just not worth it; bash is pervasive enough that it’s a fair baseline for the vast majority of typical use-cases.

                        The alternative is lowest-common-denominator POSIX, and that’s hard work. Sometimes it’s necessary, but it’s not pretty (not that bash is either, but POSIX sh is certainly going to be more verbose). The salt-bootstrap.sh script is a nice example of the approach.

                      2. 3

                        And set -u, so undefined variables don’t silently work (typos make this bad), and set -o pipefail so failed piped programs stop execution. And then set -euo pipefail in every subshell, and don’t forget to split VAR=$(dosomething) and export VAR into two statements, because export swallows failed exit codes…
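
                        A sketch of that last pitfall (dosomething stands in for any command that can fail):

                            set -euo pipefail

                            # BAD: export's own exit status (0) masks a failure
                            # inside $(...), so the script carries on with VAR empty.
                            export VAR="$(dosomething)"

                            # OK: a plain assignment keeps the substitution's exit
                            # status, so set -e aborts before the export runs.
                            VAR="$(dosomething)"
                            export VAR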

                        It’s really far too difficult to write correct bash, mostly it’s best to just avoid it.

                      3. 7

                        FWIW in Oil you can put this at the top of your script and get sane behavior:

                        shopt -s strict:all 
                        

                        Or if you want to also run under bash:

                        shopt -s strict:all 2>/dev/null || true
                        

                        Then keep on writing the way you normally do. It’s sort of like “guard rails”. You’ll get better and earlier errors, and then you can run your script with bash too.

                        http://www.oilshell.org/release/latest/doc/oil-options.html

                        Though the optparse thing is a hole in bash (and currently in Oil), I’m discussing how to plug that hole right now:

                        https://oilshell.zulipchat.com/#narrow/stream/264891-oil-help/topic/Passing.20a.20map.20to.20a.20proc.20as.20reference (requires login)

                        (bash has getopts, but it leaves a lot to be desired)
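
                        For comparison, a minimal getopts loop; among the things left to be desired: no long options and no automatic help text:

                            #!/usr/bin/env bash
                            usage() { echo "usage: ${0##*/} [-v] [-o outfile] args..." >&2; exit 2; }

                            verbose=0 outfile=""
                            while getopts "vo:" opt; do
                              case "$opt" in
                                v) verbose=1 ;;
                                o) outfile="$OPTARG" ;;
                                *) usage ;;
                              esac
                            done
                            shift $((OPTIND - 1))
                            echo "verbose=$verbose outfile=$outfile rest=$*"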

                        1. 2

                          What’s the reason that sane behaviour isn’t the default? Compatibility? Is it possible to make it the default?

                          Oil shell looks pretty interesting.

                          1. 5

                            Good question, it is the default under bin/oil, but not bin/osh!

                            • bin/oil is for when you’re writing something new and don’t care about compatibility.
                            • bin/osh runs existing shell scripts. But it also gives you an upgrade path into saner behavior. (It’s also much more stable than bin/oil at the moment, although they are technically the same binary)

                            I guess I should write this in the docs somewhere… It’s explained in the blog but with a lot of context.

                        2. 6

                          Long enough that it removes the incentive to use bash instead of a Real Programming Language. Perfect! :-P

                          1. 6

                            This is not dissimilar to my own bash-script-template project. In fact, some parts of it are … really similar?

                            EDIT: Oh, I see my repo was linked near the bottom. So I guess some inspiration was drawn!

                            1. 5

                              I was thinking “ok, yet another cliché article full of misconceptions and myths” when I was reading the introductory paragraphs. But as I progressed, I found this to be a really well put together article. Very informative and with the technical details well motivated and explained in just the right amount of detail. This is a great read for anyone with brief exposure to shell scripting wanting to up their game.

                              The idea of a boilerplate for shell scripts is a conundrum in itself, if you ask me. But as the article makes clear, these are a bunch of snippets that can be mixed and matched freely.

                              I don’t agree that bash is everywhere. Especially not in docker images and the like. IIRC, bash’s footprint is rather large, and this makes a difference when deploying a large number of containers. I prefer to stick with the Bourne shell, which is more ubiquitous and portable. Bash and zsh are pretty much compatible with it. But I’ll grant that bash does add some useful functionality on top of it; in the end it becomes a matter of compromising/balancing. The gain in extra functionality is marginal for me; for some people it’s not.

                              The one myth that doesn’t appear to die is the whole “bash vs a real programming language”. This is just nonsense. Shells are interpreters tailored towards interactive usage and scripting. General-purpose programming languages have intricate syntax for parameter passing, external process execution, and I/O handling, as well as other things such as redirection and file system navigation. These are literally the reasons why shells exist. Nothing stops you from setting, for example, Python’s REPL as your shell. You’ll notice quickly how bad of a match it is the first time you launch your terminal emulator. Bash (and other shells) are extremely pragmatic when it comes to gluing different programs together. The fact that the edge cases are frequent is by design: you couldn’t have clean raw parameter passing if you also want to pass parameters as plain strings, unquoted and separated by spaces. I also think the whole “just use a programming language” myth feeds off many people not having basic knowledge of standard streams, shell expansion, stream redirection, or even basic unix utilities such as cut, grep, wc, awk, etc.
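
                              A throwaway example of that pragmatism (hypothetical log paths, and it assumes the host is the second field); the equivalent subprocess plumbing in a general-purpose language runs several times longer:

                                  # Count ERROR lines per host across the logs, most affected first.
                                  grep -h "ERROR" /var/log/app/*.log | awk '{print $2}' | sort | uniq -c | sort -rn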

                              1. 1

                                Your last para here (“The one myth …”) is nicely formulated, and helped me put my finger on what was frustrating about that frequent “vs” argument (see eg https://twitter.com/qmacro/status/1332303180240216066). Thanks.

                              2. 9

                                If I respond to this right now, it’s going to come across as trolling or ranting.

                                So for now, I’m just going to say I completely disagree with this, and will update this comment once I’ve had time to calm down and collect my thoughts.

                                1. 5

                                  If I respond to this right now, it’s going to come across as trolling or ranting.

                                  So for now, I’m just going to say I completely disagree with this, and will update this comment once I’ve had time to calm down and collect my thoughts.

                                  This is a great comment template for internet discussions. Thanks!

                                  :-)

                                2. 3

                                  :+1 pretty solid. Two nitpicks:

                                  • I don’t know if it’s necessary to trap all these signals; EXIT is fired on ERR and SIGINT as well. But maybe not in an older version of Bash.
                                  • cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1. I don’t think it’s necessary to redirect the outputs. If that fails, I would like to see the error message.
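
                                  A sketch of both simplifications together (whether EXIT alone suffices depends on the bash version, as noted):

                                      cleanup() { :; }  # stand-in for the template's cleanup function

                                      # Nitpick 1: trapping EXIT alone may be enough on recent bash,
                                      # since EXIT also fires on ERR and SIGINT there.
                                      trap cleanup EXIT

                                      # Nitpick 2: drop the redirection so a failing cd reports its error.
                                      cd "$(dirname "${BASH_SOURCE[0]}")"
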
                                    1. 2

                                      I like some of the concepts described/explained in that blog post, but please just use POSIX /bin/sh for scripting; most of this template will also work there.

                                      Bash is not that bad as an interactive shell, but zsh and fish are a lot better.

                                      1. 2

                                        When testing high-cost steps of a script, or steps with many prerequisites, I’ve found it nice to have all functions sourceable. This works on bash and zsh, but is simplifiable for bash alone: https://github.com/ClashTheBunny/Debian64Pi/blob/master/stage1.sh#L237-L248

                                        This way, I can create, mount, run debootstrap, and then experiment with the next function of the script. Without this, everything unmounts and I need to test from the beginning again. This makes the later-stage functions easier to iterate on.

                                        This is like Python’s if __name__ == "__main__" construct.
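
                                        A compact bash-only version of that pattern (function names are placeholders for the stages in the linked script):

                                            do_partition() { echo "partitioning..."; }
                                            do_debootstrap() { echo "bootstrapping..."; }

                                            main() {
                                              do_partition
                                              do_debootstrap
                                            }

                                            # Run main only when executed directly; sourcing the file
                                            # just loads the functions so stages can be re-run by hand.
                                            if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
                                              main "$@"
                                            fi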

                                        1. 1

                                          I like simple shell scripts, which means few lines. The script should fit on the screen so you can see it as a whole at once. Then Bash (or another shell) is very efficient and serves well. You can understand the design of the script, grasp the author’s ideas, and easily find and fix prospective bugs.

                                          If you waste a hundred lines on your “minimal” template, this advantage of simple shell scripts just disappears.

                                          For more complex things that do not fit on a single screen, it is usually better to switch to some programming language (with type safety, compile-time checks, and good IDE support).

                                          1. 4

                                            simple shell scripts, which means few lines

                                            Citation needed? For me, simplicity means a lack of complexity, which this kind of “boilerplate” doesn’t add to. Indeed, a lot of it does make things simpler by ensuring cleanup, pipefail, etc.