
  2. 2

    It never occurred to me to desire write() calls to be atomic with respect to simultaneous read() calls. Mainly what I think of (and care about) is multiple threads or processes writing to the same file, and those are indeed atomic if it’s a regular file or if the amount written is small enough.
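
    A minimal sketch of the regular-file case (assumptions: a POSIX system; Linux-style serialization of small O_APPEND writes, which POSIX guarantees less strongly for files than for pipes): forked writers append fixed-size records to one file, and each record should land intact.

```python
# Sketch: WRITERS forked processes append short records to one regular
# file opened with O_APPEND.  Each 64-byte append is small enough that,
# on common local filesystems, it lands as one intact record.
import os
import tempfile

RECORD, WRITERS, PER_WRITER = 64, 4, 500

fd, path = tempfile.mkstemp()
os.close(fd)
for tag in b"abcd":                          # one tag byte per writer
    if os.fork() == 0:                       # child: one appender
        wfd = os.open(path, os.O_WRONLY | os.O_APPEND)
        payload = bytes([tag]) * (RECORD - 1) + b"\n"
        for _ in range(PER_WRITER):
            os.write(wfd, payload)           # one write(2) per record
        os._exit(0)
for _ in range(WRITERS):
    os.wait()

with open(path, "rb") as f:
    lines = f.read().split(b"\n")[:-1]
os.unlink(path)
# every record is one writer's tag repeated, never a mix of tags
intact = all(len(set(l)) == 1 and len(l) == RECORD - 1 for l in lines)
print(len(lines), intact)
```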

    1. 2

      Actually, even small writes to a pipe are not atomic once the pipe buffer fills up. The normal thing that happens there is that the write partially completes and then blocks, and the process is put to sleep. The latter parts of that write call only happen once a reader has read from the pipe to make room. If there is more than one writer to the pipe, they can be awoken in any order, in which case their writes are interleaved. This is avoidable with O_NONBLOCK, of course, but can be a real gotcha.

      As a concrete example, this is not reliable unless you change it to -P1 (defeating the purpose of real-time-speed up of the very slow file program):

      find . -print |
        xargs -P4 file -n -F:XxX: |
        grep ":XxX: .*script" |
        sed -e 's/:XxX: .*$//'
      

      To make it fail you want to save the output and run this in a tree with a boatload of scripts, or just change “script” to “.” to match everything. Then check that output against the names in the tree. It may take multiple trials, and your -P might need to be about as large as the number of CPU hardware threads you have, depending on how busy or idle your system is.

      Anyway, the point is that this fails even though every individual write(2) call by the file children of xargs is well below the “atomicity size limit” “guarantee”, due to the sleeping/scheduling pattern noted above the example pipeline. (At least, they are well below it in a normal file tree, where path + separator + file type is a reasonably short string.)

      1. 6

        Actually, even small writes to a pipe are not atomic once the pipe buffer fills up.

        That’s incorrect. According to POSIX: “Write requests of {PIPE_BUF} bytes or less shall not be interleaved with data from other processes doing writes on the same pipe.” The entire small write is completed in one go, blocking the process first if necessary.

        https://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html
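
        A minimal sketch of that guarantee (assumptions: a POSIX system; the 128-byte record size is an arbitrary choice, comfortably under the minimum {PIPE_BUF} of 512): several forked writers share one pipe, and every record should come out intact rather than interleaved.

```python
# Sketch: WRITERS processes each write fixed-size records to one pipe.
# Each record is 128 bytes, well under the POSIX minimum PIPE_BUF of
# 512, so per the quoted guarantee no record should be interleaved,
# even while writers block on a full pipe buffer.
import os

RECORD, WRITERS, PER_WRITER = 128, 4, 200

r, w = os.pipe()
for tag in b"abcd":                          # one tag byte per writer
    if os.fork() == 0:                       # child: one writer
        os.close(r)
        payload = bytes([tag]) * (RECORD - 1) + b"\n"
        for _ in range(PER_WRITER):
            os.write(w, payload)             # one write(2) per record
        os._exit(0)
os.close(w)

data = []
while True:                                  # parent: drain the pipe
    chunk = os.read(r, 65536)
    if not chunk:
        break
    data.append(chunk)
for _ in range(WRITERS):
    os.wait()

lines = b"".join(data).split(b"\n")[:-1]
# every record is one writer's tag repeated, never a mix of tags
intact = all(len(set(line)) == 1 and len(line) == RECORD - 1
             for line in lines)
print(len(lines), intact)
```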

        1. 3

          Oops. You are right. I stand corrected. Apologies!

          That pipeline (and a simpler one with just grep 'XxX.*XxX') does interleave, but the interleaving comes from stdio buffering. The pipeline works fine with stdbuf -oL file. I should have been more careful before concluding anything about the OS.

          Reading the source, it turns out that file -n|--no-buffer only does unbuffered output when also used with --files-from. The file man page (combined with an strace test to a tty, where output is line-buffered anyway) fooled me by saying the flag was “only useful” (not “only active”) with --files-from.
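
          A minimal sketch of that stdio effect (assumptions: a POSIX system; Python's -u flag standing in for stdbuf -oL, and the timeouts are arbitrary): a child whose stdout is a pipe holds freshly written lines in a block buffer, so nothing reaches the reader until the buffer fills or the child exits.

```python
# Sketch: when stdout is a pipe, a child block-buffers its output, so
# a freshly written line does not reach the reader right away.  The -u
# flag (unbuffered, playing the role of stdbuf -oL here) makes the
# same line visible immediately.
import select
import subprocess
import sys

CHILD = "import sys, time; sys.stdout.write('hello\\n'); time.sleep(5)"

def line_arrives_within(args, timeout):
    """True if the child's first write reaches the pipe within timeout s."""
    p = subprocess.Popen(args, stdout=subprocess.PIPE)
    ready, _, _ = select.select([p.stdout], [], [], timeout)
    p.kill()
    p.wait()
    return bool(ready)

buffered = line_arrives_within([sys.executable, "-c", CHILD], 2.0)
unbuffered = line_arrives_within([sys.executable, "-u", "-c", CHILD], 2.0)
print(buffered, unbuffered)
```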