1. 33

Your bash scripts will be more robust, reliable and maintainable if you start them like this:

set -euo pipefail
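
A minimal sketch of what two of those flags buy you (the variable names below are made up for the demo):

```shell
#!/usr/bin/env bash
set -euo pipefail  # -e: exit on error, -u: error on unset vars, -o pipefail: fail pipelines early

# pipefail: the pipeline's status reflects the failing command in the
# middle, not just the successful command at the end.
false | true && pipe_result="masked" || pipe_result="caught"

# nounset: expanding an unset variable is a hard error; probe it in a
# subshell so this demo script itself keeps running.
(echo "${not_defined_anywhere}") 2>/dev/null && unset_result="expanded" || unset_result="caught"

echo "pipefail: ${pipe_result}, nounset: ${unset_result}"
```

Without pipefail, `false | true` reports success; without -u, the unset expansion silently becomes an empty string.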

    1. 5

      I generally agree with this, but disagree with the analogy for set -e:

      In all widely used general-purpose programming languages, an unhandled runtime error - whether that’s a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.

      Segmentation faults will halt C programs, yes, but that’s a pretty unusual kind of error: an actual memory-protection violation. Most errors don’t halt C programs. Plenty of functions fail by setting errno, returning an error value, and doing nothing else. If you don’t explicitly check the return value and bail, your program continues executing, exactly as you don’t want it to do in either C or bash. set -e puts bash into a mode, which C doesn’t support, where the script errors out if any such function fails: if fopen fails, the program bails instead of continuing on to your loop, where you would now try to fread from the file that didn’t open (the fread will also fail, but again without erroring out).
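
      A bash sketch of that analogy (the file name is made up for the demo): the same script body run with and without set -e, where a failed cat plays the role of the failed fopen.

```shell
#!/usr/bin/env bash
# Run the same script body with and without set -e.
# The missing input file stands in for the fopen() that fails.
tmpdir=$(mktemp -d)
body='
cat "$1/missing-input" > "$1/copy"   # the "fopen" that fails
echo "reached the processing step"   # the "fread" on the unopened file
'

# Without -e the failure is ignored and the next line still runs.
without=$(bash -c "$body" demo "$tmpdir" 2>/dev/null)

# With -e the script aborts at the failed cat and prints nothing.
with=$(bash -ec "$body" demo "$tmpdir" 2>/dev/null || true)

echo "without -e: [$without]  with -e: [$with]"
rm -r "$tmpdir"
```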

      For quick-and-dirty programs I actually wish C (or Go) had a version of this so I could write straightline code that doesn’t check any error returns, just treating all errors as fatal errors.

      1. 3

        I personally like to use set -e to protect my critical scripts from misbehaving (eg, overwriting backups with empty tarballs). I do agree with you that set -e has its uses and shouldn’t be made automatic. Encountering an error doesn’t mean your whole script failed miserably; it can even be an expected part of the script:

        echo starting backup
        mountpoint -q /mnt/backup || { mount /mnt/backup || exit 1; }
        rsync -az /home /mnt/backup

        Setting -e in this case would prevent the script from recovering from an easy-to-solve error (the drive not being mounted). That is neither desirable nor practical for debugging.

        So really, don’t blindly use shell options: understand them, and use them wisely!
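
        A runnable sketch of that recovery pattern, with the mount commands swapped for hypothetical stand-ins (a marker file in a temp directory) so it can run anywhere:

```shell
#!/usr/bin/env bash
# Hypothetical stand-ins for the real commands in the snippet above:
# a marker file plays the role of the mounted backup drive.
backup=$(mktemp -d)
is_mounted() { [ -e "$backup/.mounted" ]; }  # stands in for: mountpoint -q /mnt/backup
do_mount()   { touch "$backup/.mounted"; }   # stands in for: mount /mnt/backup

echo "starting backup"
is_mounted || { do_mount || exit 1; }  # recover from the easy-to-solve error
is_mounted && status="ready" || status="missing"
echo "backup target: $status"
rm -r "$backup"
```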

        1. 2
          #define CHECK(f) if(!(f)) { perror(#f); printf("%s:%d\n", __FILE__, __LINE__); exit(1); }
          FILE *f = fopen("example", "r");
          CHECK(f != NULL);
          CHECK(fread(buffer, 1, bytes, f) == bytes);

          Macros for == 0, == value, not null, etc are of course easy to build.

          CHECKV(bytes, fread(buffer, 1, bytes, f));

          CHECKV could print v for extra debugging deliciousness. Would still have to remember which function returns what to indicate error though.

          1. 2

            You can do that, yeah, but I’d prefer something I can just put once at the top, like #pragma ABORT_ON_ERROR, which the compiler or libc would implement.

            1. 1

              That would be neat, but this is definitely the next best thing.

              Too many people don’t realize that just because C macros aren’t all fancy and hygienic like lisp/rust macros, doesn’t mean they aren’t still incredibly useful.

            2. 1

              I use something similar in a few places. At the quick-and-dirty stage, I first use something like this:

              if( (fd = open("foo", O_RDONLY)) < 0 || (len = read(fd, buf, sizeof(buf))) < 0 || close(fd) < 0 ) { perror(argv[0]); exit(-1); }

              It would be nice to have a C-to-C compiler that adds this kind of low-level exception handling: describe an error condition for every function, die on that error, and display a nice error message, maybe with a stack trace.

              It could give you the ability to catch those exceptions, but then they would have to be caught somewhere in the caller.

            3. 1

              It should be noted that it is not possible to check whether a command fails when -e is enabled, e.g. checking whether a binary is available with hash <name>, because the whole script will fail immediately as soon as one command returns a non-zero exit status. Besides this, I always use set -uo pipefail, but I have never had to use IFS=$'\n\t'.

              1. 1

                Using failing commands as conditions still works.

                if hash <name>; then
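
                A self-contained sketch of that (the binary name is made up and assumed not to exist on the machine):

```shell
#!/usr/bin/env bash
set -e
# A command used as an if condition does not trip errexit, so probing
# for a binary still works. The name below is made up for the demo.
if hash no-such-binary-xyz 2>/dev/null; then
  found="yes"
else
  found="no"
fi
echo "binary found: $found"
```
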
                1. 1

                  You’re right, I didn’t remember the issue correctly. But, you can’t save the return value of a failing command for later use, which is what I had tried to do in some script:

                  # ...
                  set -e
                  # ...

                  The real point I want to make is that set -e is sometimes/often not what I want.

                  1. 1
                    failing-command || x=$?

                    Or a couple other constructs.
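
                    For instance, this runs cleanly under set -e while still recording the failure status:

```shell
#!/usr/bin/env bash
set -e
# The || branch keeps errexit from firing, and $? inside it is the
# failed command's exit status.
x=0
bash -c 'exit 3' || x=$?
echo "captured status: $x"
```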

                    1. 1

                      This is what I use, calling upon the ‘unless’ perl-ism that I have come to love on my travels :)

            4. 1

              I’ve been following the conventions prescribed by cronic for a while, which are similar:

              set -o errexit -o nounset -o xtrace

              This has greatly improved my ability to figure out WTF went wrong after it goes wrong. I will definitely check out this suite as well. I’m a little anxious about changing the word delimiter this late in the game but if it helps, maybe.
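
              For what it’s worth, xtrace is what does the post-mortem heavy lifting: each command is echoed to stderr, with its expansions, before it runs. A small sketch that captures just the trace from a child shell:

```shell
#!/usr/bin/env bash
# Capture only the stderr trace produced by xtrace; stdout goes to /dev/null.
trace=$(bash -c 'set -o errexit -o nounset -o xtrace
msg="went wrong here"
echo "$msg"' 2>&1 >/dev/null)
printf '%s\n' "$trace"
```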

              1. 1

                Another way to handle capturing exit codes:

                cmd arg1 arg2 ... && code=$? || code=$?
                1. 1

                  That’s not strictly related to the article, is it?

                  Also, you can just do cmd args; code=$?

                  (or put this in two consecutive lines without the semicolon)

                  1. 1

                    The article is about modifications that apply with various strictness settings, one of which is errexit. You can see their solution for this case further down in the article.

                    Subsequent statements are skipped when a preceding statement fails and errexit is enabled, so it is not enough to have code=$? as a consecutive statement.
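
                    A sketch contrasting the two constructs under errexit (each runs in a child shell so the aborting one doesn’t take the demo down with it):

```shell
#!/usr/bin/env bash
# With "cmd; code=$?", errexit aborts before the assignment runs.
plain=$(bash -ec 'bash -c "exit 3"; code=$?; echo "$code"' || echo "aborted")

# With "cmd && code=$? || code=$?", the list suppresses errexit and the
# status is captured on both the success and the failure path.
both=$(bash -ec 'bash -c "exit 3" && code=$? || code=$?; echo "$code"')

echo "plain: $plain  and-or: $both"
```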