1. 11

  2. 8

    Something related I learned about a couple of weeks ago: bash supports parallel execution via forking and wait. For example:

    # Move multiple files in parallel
    for filepath in *; do
      mv "$filepath" "$filepath.bak" &
    done
    
    # Wait for child processes to complete
    wait
    
    # More code goes here
    

    This is practical when you don’t want your coworkers to have to install an external dependency. Unfortunately, it’s not great if you want to preserve output order (e.g. in my case, I was getting image signatures). It could be hacked further with bash arrays, but in the end parallel was an easier solution.

    1. 1

      Interesting! Could this be used, then, to port Pure to bash?

      1. 2

        It already has been? https://github.com/sindresorhus/pure#ports

        Looking at how pure works really quickly, it’s using zpty and file descriptors to do async communication, not fork/wait. So the simple answer would be no. This is also really basic job control in shells; you can get much more complicated. Note there is no checking of the return codes via $!, etc.

        If you really wanted to, you could use $! to wait on specific background pids. If you wanted to preserve output order, you could keep an array of the pids you backgrounded, save their outputs to either a tempfile or a variable, and then output them at the end. At least that’s the more old-school way of doing it.
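        A minimal sketch of that old-school approach (slow_sig is a hypothetical stand-in for the real per-file work):

        ```shell
        #!/usr/bin/env bash
        # Background each job, remember its pid and a tempfile for its output,
        # then print the outputs in launch order once every pid has been waited on.

        slow_sig() {
          sleep "$1"       # simulate work that finishes out of order
          echo "sig-$2"
        }

        pids=()
        outs=()
        i=0
        for delay in 0.3 0.1 0.2; do
          out=$(mktemp)
          slow_sig "$delay" "$i" > "$out" &   # run in the background
          pids+=("$!")                        # $! = pid of the last background job
          outs+=("$out")
          i=$((i + 1))
        done

        # wait PID returns that job's exit code, so failures are visible here
        for idx in "${!pids[@]}"; do
          wait "${pids[$idx]}" || echo "job $idx failed" >&2
        done

        # Outputs come out in launch order, regardless of completion order
        for out in "${outs[@]}"; do
          cat "$out"
          rm -f "$out"
        done
        ```

        Even though the second job finishes first, its output is printed second.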

    2. 2

      Parallel is super-easy to use and super-convenient. I had one testsuite-running Makefile target that would iterate over the test directories and use recursive $(MAKE) to test them:

        @for dir in tests/$**; do \
          $(MAKE) $(NO_PRINT) exec-one DIR=$$dir; \
        done
      

      which I just turned into an enumeration of the directories, piped into a parallel call:

        @for dir in tests/$**; do echo $$dir; done \
          | parallel --gnu --no-notice --keep-order \
              "$(MAKE) $(NO_PRINT) exec-one DIR={} 2>&1"
      

      (The inner 2>&1 is there to make sure that each subtest prints its stderr and stdout together, as the non-parallel version does. Otherwise parallel will cleanly separate the stderr of all runs from the stdout of all runs, but I wanted to preserve the exact behavior.)

      On my machine, the whole run went from 2m30s to 0m57s, with no observable change in behavior.
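      Outside of Make, the same pattern is just an enumeration piped into parallel. A minimal sketch, assuming GNU parallel is installed; --keep-order is what preserves input order here:

      ```shell
      # Run one command per input line, several at a time; {} is replaced by
      # the input line. --keep-order buffers each job's output and prints the
      # results in input order, even when the jobs finish out of order.
      printf '%s\n' a b c | parallel --gnu --keep-order 'echo "processed {}"'
      ```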