1. 38
  1. 11

    If it makes sense, I think --dry-run should be the default mode. This is consistent with doing the safest thing by default. As usual, the power of the default is not to be underestimated…

    For example, I made a program that rewrites shell scripts, and by default it is a suggestive syntax highlighter (showing both that it understands the syntax and what it would do, as a colored character-level diff).
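
    As a minimal sketch of that default (the --execute flag and the example directory are made up, not taken from any particular tool):

    #!/bin/sh
    # Safe by default: describe, don't act, unless explicitly asked to.
    dry_run=1
    [ "$1" = --execute ] && dry_run=0

    if [ "$dry_run" -eq 1 ]; then
      echo "would remove: old-releases/"   # report only
    else
      rm -r old-releases/                  # the real, destructive work
    fi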

    1. 4

      I’d definitely second that. For one-off migration or incident-response scripts at work, I tend to do nothing by default and require --for-real (or --yes-i-really-mean-it, or something equally asinine) to actually do the work.

      1. 4

        Do you have data that suggests that this improves things? I would suspect that it depends on how often you use the command. I automatically add -f to rm -r commands without thinking, for example, so even though the default behaviour is safe, it doesn’t make me less likely to footgun. In contrast, I almost always add -n to git push commands where I’m not 100% sure what’s going to happen (e.g. if I have multiple remotes).

      2. 5

        Agreed that dry-run mode is a very useful feature for just about any piece of software that makes changes to the state of something.

        It can be tricky to get right, or even to nail down the right desired behavior, when there are data dependencies in the sequence of operations you’re performing.

        “Try doing thing A, and if that fails, do thing B instead” kinds of sequences are pretty hard to dry-run when you can’t tell for certain whether thing A will succeed without doing it for real. For example, creating a new file with a user-supplied name: your dry run code could check whether the file currently exists, but that’s no guarantee it won’t be created by something else in between the dry run and the execution. In that case you probably won’t want to take the article’s approach of “execute the results of the dry run” because the precomputed dry run won’t contain the correct sequence of actions.
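
        A loose shell sketch of that race (the names are made up): the dry run records the branch it expects to take, but the recorded plan can be stale by the time it runs.

        # Dry run: record which branch we *think* we'll take.
        if [ -e "$target" ]; then
          plan="append:$target"
        else
          plan="create:$target"
        fi
        printf 'would: %s\n' "$plan"

        # ...time passes; another process may create or remove $target...

        # Replaying $plan now can be wrong; the condition has to be
        # re-evaluated at execution time, not taken from the dry run.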

        You also end up having to figure out how much complexity it’s worth imposing for the sake of keeping the dry runs as dry as possible. Is it better to, say, do real write operations against a real database and then roll everything back at the end, or just generate the list of write operations you would attempt to do but not actually send them to the database at all? The latter is faster and arguably safer, but if you have integrity constraints in your database, your database-interaction-free dry run won’t fully validate the input unless you duplicate all the constraints in application code.
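
        For the first option, one sketch of a “real writes, then roll back” dry run (assuming Postgres, psql, and a migrate.sql full of writes; all names hypothetical):

        # All writes run for real inside one transaction, then get discarded,
        # so the database's integrity constraints are actually exercised.
        { echo 'BEGIN;'; cat migrate.sql; echo 'ROLLBACK;'; } | psql "$DB_URL"

        This falls apart if the script manages its own transactions, and some side effects (sequence increments, for instance) survive the rollback, but the constraints do get checked against the real schema.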

        1. 4

          In various shell scripts I write, I work on the principle that --dry-run should print all the commands that would be executed without making any changes to the system (mostly). I do this by running each command in a subshell and setting a variable that is put in front of every command line; when --dry-run is in effect, it expands to printf '%s\n' or echo so that the command is printed rather than run.

          ( ${run} mkdir "${somedir}"; )

          It doesn’t work for pipes, but it does the job most of the time, and it’s reasonably simple to work with. It has the added benefit of making it easy to see in the code which commands will get run. I’ve tinkered with ways to handle pipes (eval? ugh…), but I didn’t want to introduce the complexity, so I’ve resorted to sh -c "..." to get around it. Still, it’s really nice when you can do a dry run and copy the commands. (I use subshells so I can change the environment, if necessary.)
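
          Spelled out a bit more (the variable and file names are hypothetical, but this is the shape of it):

          run=                 # empty by default: commands run for real
          for arg; do
            [ "$arg" = --dry-run ] && run=echo   # or: run='printf %s\n'
          done

          ( ${run} mkdir -p "${somedir}" )
          ( ${run} tar -C "${somedir}" -xf "${archive}" )

          (${run} is deliberately left unquoted so that it disappears entirely when it’s empty.)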

          1. 4

            i’ve written tools with a --wet-run flag :)

            (co-worker made me change it to --dry-run=false)

            i think the destructive mode should be harder to type

            1. 5

              --in-anger!

            2. 2

              I agree very strongly with the value of dry-run and share the intention of always including it in my tools that do anything nontrivial.

              It’s also often broken in the tools I use. And in my experience, the dry-run mode goes unloved pretty quickly in the tools I create unless I really build it in as a first-class part of the run.

              In my case, that’s because most of my tools grow from trivial little hacks you’d never add it to into something larger where you really want it. I don’t think I ever get the dry-run option right until after the first big refactor.

              It’s a tricky thing to get right, but often worth it when you can. I haven’t found a consistent pattern I like that helps me do that yet.

              1. 2

                The point of the article

              2. 2

                I would describe the general programming/design technique demonstrated here as continuation-passing style, which is a valuable tool to have in one’s design toolbox.
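
                Loosely sketched in the shell terms used upthread (function names made up), the idea is to pass the effectful step in as a parameter, so one driver serves both modes. Strictly this is closer to passing an action than a full continuation, but it’s the same spirit:

                make_dir() { mkdir -p "$1"; }
                show_dir() { printf 'mkdir -p %s\n' "$1"; }

                build_tree() {           # $1 names the action to apply
                  action=$1; shift
                  for d in "$@"; do
                    "$action" "$d"
                  done
                }

                build_tree show_dir /tmp/a /tmp/b   # dry run: prints the plan
                build_tree make_dir /tmp/a /tmp/b   # real run: executes it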