1. 20
  1. 4

    generally good things. I’m opposed to set -e-ish advice just because I think the behavior it adds is unusual and not universal, and bash provides the appropriate tools to check for errors in pipes as well: $PIPESTATUS.
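For anyone who hasn’t used it, checking each stage of a pipeline with $PIPESTATUS looks roughly like this (the pipeline itself is just a stand-in):

```shell
# Inspect every command in a pipeline via PIPESTATUS instead of
# relying on pipefail. Copy the array immediately: the very next
# command overwrites it.
printf 'hello\n' | grep -q nope | cat > /dev/null
statuses=("${PIPESTATUS[@]}")   # here: (0 1 0), since grep found no match

for i in "${!statuses[@]}"; do
  if [ "${statuses[$i]}" -ne 0 ]; then
    echo "pipeline stage $i exited with status ${statuses[$i]}" >&2
  fi
done
```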

    Overall for the first problem, I’d probably just set REPLY to some value and return with an error code instead of using stdout/err as any means of “returning” values, but that’s just me. REPLY is used in things like read -r, so it doesn’t seem that monstrous to assume it’s safe for clobbering.

    so instead of

      local -r config=$(get_config_path)

    I’d probably do

      if get_config_path; then
        local -r config="$REPLY"
      else
        : # do something, or fail, since the author is using set -euo pipefail
      fi
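A hypothetical get_config_path written in that REPLY style (the lookup order and names below are invented for illustration; the point is that the function “returns” via REPLY plus an exit status, leaving stdout free):

```shell
# Invented example of the REPLY-returning convention.
get_config_path() {
  if [ -n "${XDG_CONFIG_HOME:-}" ]; then
    REPLY="$XDG_CONFIG_HOME/myapp/config"
  elif [ -n "${HOME:-}" ]; then
    REPLY="$HOME/.config/myapp/config"
  else
    return 1   # nothing to go on; let the caller decide what to do
  fi
}

if get_config_path; then
  config="$REPLY"
  echo "config path: $config"
else
  echo "could not determine a config path" >&2
  exit 1
fi
```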


    for the usage case, I’d probably wrap all of the invocations in braces so they can be redirected to stderr, which seems to be the norm for --help related output.

      echo "usage: blah"
      echo ""
      echo "more"
    } >&2
    1. 1

      Initially I was skeptical about this (REPLY based) approach, but the more I consider it, the more I like it.

      1. 1

        That setting REPLY though, that’s a great idea! I think I will definitely incorporate that a bit in my scripts.

        1. 1

          for the usage case, I’d probably wrap all of the invocations in braces so they can be redirected to stderr, which seems to be the norm for --help related output.

          I think the print_usage function is actually better as in the post (un-redirected): that way it’s up to the caller to decide, and you can just do print_usage >&2 as needed. Further, call me pedantic, but I’d argue that in the case of --help, sending the usage message to stderr is inappropriate: it’s not an error message, it’s the output that was requested, so it belongs on stdout. For reporting syntax errors or whatever, sure, stderr is the right place, but --help isn’t an error (assuming you recognize it as a flag). Leaving the usage-printing function un-redirected makes it easy to handle both cases appropriately.
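A sketch of that split: print_usage matches the post’s function name, but the surrounding argument handling here is invented for illustration.

```shell
# Un-redirected usage function; the call site picks the stream.
print_usage() {
  echo "usage: myscript [--help] <file>"
}

main() {
  case "${1:-}" in
    -h|--help)
      print_usage        # requested output: stdout, success
      return 0
      ;;
    "")
      print_usage >&2    # actual usage error: stderr, failure
      return 1
      ;;
  esac
  echo "processing $1"
}
```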

          1. 1

            set -e indeed has many pitfalls. As far as I know, Oil fixes every single one:


            If anyone knows any counterexamples, let me know. There should be no reason not to use it (if you’re using Oil; if you have to run under two shells, that’s a different story).

            I had planned to write up a longer doc / manual about this, but haven’t gotten around to it yet.

          2. 2

            Semi-related: Is there anything that you’d recommend for unit testing bash scripts?

            1. 5

              Noah’s Mill, but I’m also a fan of ryes.

              1. 3

                I’ve used BATS and it’s been a good experience. If you install via Homebrew, just make sure to do brew install bats-core and not bats, as the latter is an older, unmaintained release.
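For reference, a minimal BATS test file might look like this (the file name and the greet function are invented; run, $status, and $output are BATS built-ins, and the file runs via the bats command rather than bash):

```bash
# test/greet.bats -- run with: bats test/greet.bats
@test "greet says hello" {
  run bash -c 'source ./greet.sh && greet world'
  [ "$status" -eq 0 ]
  [ "$output" = "hello, world" ]
}
```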

                1. 2


                  1. 1

                    (I’m the blog post author)

                    I’ve not actually tried testing bash scripts; once they get past a fairly simple level I usually replace them with a small Go binary

                    1. 1

                      I’ve looked into BATS, but the only time I bothered testing anything in bash, I just ended up doing simple mocking, like: https://github.com/adedomin/neo8ball-irc/blob/master/test/test.sh
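The linked script’s details aside, the general shape of that mocking trick is to shadow an external command with a shell function, since bash resolves functions before doing a PATH lookup. Everything named below is invented for illustration:

```shell
# Function under test: normally shells out to curl.
fetch_title() {
  curl -s "$1" | head -n 1
}

# Test double: shadows the real curl for the rest of this shell.
curl() {
  printf 'Fake Page Title\n'
}

fetch_title 'http://example.invalid'
```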

                      of course this is testing something which would probably be considered a strange, if not mental, use of bash.

                    2. 2

                      The local and readonly tips are great advice; I should start using them more often in my (bash) scripts. I also struggle with figuring out a good log function; maybe something simpler like this is a good idea!

                      I also use a heredoc with the usage() function, but other than that, I think this is pretty good advice!!
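A minimal sketch of the heredoc variant of usage() (the text is made up; the quoted 'EOF' delimiter prevents any expansion inside it):

```shell
# usage() as a single heredoc instead of one echo per line.
usage() {
  cat <<'EOF'
usage: myscript [options] <file>

options:
  -h, --help    show this help and exit
EOF
}
```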

                      1. 2

                        …and the “always use pipefail” meme propagates further. Maybe use pipefail.
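For the record, all pipefail changes is the pipeline’s overall exit status; a minimal demonstration:

```shell
status_without=0
status_with=0

# Default behaviour: a pipeline's status is the LAST command's,
# so the failure of `false` is invisible here.
set +o pipefail
false | true || status_without=$?

# With pipefail: the status is the last non-zero status in the pipe.
set -o pipefail
false | true || status_with=$?
set +o pipefail   # restore the default

echo "without pipefail: $status_without  with pipefail: $status_with"
```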

                        1. 1

                          Oil has a way to mitigate the only exception I know of (comment on that same article):


                          I’d be interested if there’s any other reason not to use it.

                        2. 1

                          I’m intrigued by the last paragraph: “I keep a single template file which has all of this written into it, and new scripts start off with a copy-paste of the template. Could it be DRYer? Sure, but then I have to deal with dependency management, and it’s just not worth the hassle and overhead.” I’ve started using a few library files to source into my scripts*, to be more consistent, and it’s worked well for me so far. I’m curious about the tradeoffs the author (@pondidium, right?) considered for either approach. What if you want to change your template to add an improvement to a function in there, for example?

                          *I discovered the Google Shell Style Guide a couple of months ago and have been using the non-exec-bit .sh-suffixed file names for such library files.
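A minimal sketch of that sourced-library approach, with every name invented (a real script would source a stable path such as "$(dirname "${BASH_SOURCE[0]}")/logging.sh" rather than a temp dir):

```shell
# Shared helpers live in a non-exec .sh file; each script sources it.
libdir="$(mktemp -d)"
cat > "$libdir/logging.sh" <<'EOF'
log() {
  printf '%s %s\n' "$(date +%H:%M:%S)" "$*"
}
EOF

source "$libdir/logging.sh"
log "library loaded"
rm -rf "$libdir"
```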