1. 29
  1.  

  2. 7

    Totally agree. All of my nontrivial programs now look like this:

    func main() {
        if err := exec(os.Args[1:], os.Stdin, os.Stdout, os.Stderr); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
    
    func exec(args []string, stdin io.Reader, stdout, stderr io.Writer) error {
        // ...
    }
    
    1. 1

      What do you do about flags?

      1. 1
        1. 1
          import (
              "flag"
              "fmt"

              "github.com/peterbourgon/ff/v3"
          )
          
          func exec(args []string, ...) error {
              fs := flag.NewFlagSet("myprogram", flag.ContinueOnError)
              var (
                  addr = fs.String("addr", "localhost:1234", "listen address")
                  // ....
              )
              if err := ff.Parse(fs, args); err != nil {
                  return fmt.Errorf("error parsing flags: %w", err)
              }
              
              // ...
          }
          
      2. 2

        Agree completely. I wrote something similar in my blog.

        1. 2

          I’m recently kinda on the fence between two patterns, when starting to write new Go programs:

          func main() {
              err := run()
              if err != nil {
                  fmt.Fprintln(os.Stderr, "error:", err)
                  os.Exit(1)
              }
          }
          
          func run() error {
              // ...
          }
          

          vs. some variants of:

          func main() {
              // ...
              if err != nil {
                  die("reciprocating splines: %s", err)
              }
              // ...
          }
          
          func die(format string, args ...interface{}) {
              fmt.Fprintf(os.Stderr, "error: " + format + "\n", args...)
              os.Exit(1)
          }
          

          The first makes it easier to extract fragments when refactoring, but feels more forced/artificial. The 2nd feels to me more natural/idiomatic.

          On a somewhat funny note, it’s worth realizing that you actually can perfectly well test a main func — and I did something more or less like this at least once at my workplace:

          func TestMainDoesSomethingImportant(t *testing.T) {
              oldArgs, os.Args = os.Args, ...
              oldStdin, os.Stdin = os.Stdin, ...
              defer func() {
                  os.Args, os.Stdin = oldArgs, oldStdin
              }()
              // ...
              main()
          }
          

          (Actually, IIRC I went with an even “funnier” pattern of spawning go test from inside go test: “We must go deeper!” …IIRC I learnt this trick from the stdlib!) I’m recently pushing myself to try and avoid changing the structure of core code purely for the sake of making tests easier to write, and I seem to be able to get surprisingly far on that path. That’s a recent thing for me, so I’m still not 100% sure what my conclusions from this are.

          1. 1

            The first makes it easier to extract fragments when refactoring, but feels more forced/artificial. The 2nd feels to me more natural/idiomatic.

            Except that the 2nd does not run deferred calls on error (e.g. releasing lock) because it immediately calls os.Exit.

            1. 1

              Right, that’s true… kinda in theory… but then, for releasing a (mutex) lock or even closing a file it doesn’t matter much, as the process is dead and the OS cleans up the resources it held. On one hand, some stuff like defer os.Remove(file) could still be useful; but OTOH, if one tries to write the app following a “crash-first” approach, it shouldn’t really matter much anyway. Even network peers must already be ready to cope with a lost connection. So, somewhat to my surprise, you actually made me gain even more respect for this approach - for subtly encouraging writing crash-first software… funny! :)

          2. 2

            So passing your arguments in to another function is obviously a much more elegant and testable way of doing this, and I wonder if it’s a design mistake to make these values available as globals at all.

            On the one hand, it is extremely convenient to be able to just print stuff out and not worry about passing IO contexts or continuations around, on the other, it’s the programming language encouraging you to use globals where generally we discourage that.

            It’s maybe too much of a pain for IO, but for command line args and the environment, I don’t really see why those should be made easily available except as args to main.

            1. 1

              Yeah, totally. I usually do something like this.

              func main() {
              	if err := run(); err != nil {
              		log.Fatalln(err)
              	}
              }
              
              func run() error {
              	var opt struct {
              		foo string
              		bar bool
              	}
              	flag.StringVar(&opt.foo, "foo", "", "foo")
              	flag.BoolVar(&opt.bar, "bar", false, "bar")
              	flag.Parse()
              
              	if os.Getenv("PQ_CONNINFO") == "" {
              		return fmt.Errorf("missing env PQ_CONNINFO")
              	}
              
              	db, err := sql.Open("postgres", os.Getenv("PQ_CONNINFO"))
              	if err != nil {
              		return err
              	}
              	defer db.Close()
              
              	// Keep initializing global, side-effecty, non-pure things.
              	// Pass them down to my main logic, ideally as interfaces so they can be mocked out.
              
              	return nil
              }
              

              The other benefit here is that you get to use defer naturally, instead of accidentally killing your program somewhere in the middle, or having to remember to call db.Close every time before log.Fatal.