1. 5

  2. 11

    Title is sort of the wrong message.

    The right message is the critical region should be as small as possible, and only protect against racy access to shared resources.

    Everything else that is in the critical region, but doesn’t need to be, increases contention without value.
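    For instance, a minimal Go sketch (the names are mine, not from the article) of keeping only the racy access inside the critical region:

    ```go
    package main

    import (
        "fmt"
        "strings"
        "sync"
    )

    // Counter guards only the shared map; the non-shared work
    // (normalization) happens outside the critical region.
    type Counter struct {
        mu     sync.Mutex
        counts map[string]int
    }

    func (c *Counter) Add(word string) {
        // Do the per-call work before taking the lock.
        key := strings.ToLower(strings.TrimSpace(word))

        c.mu.Lock()
        c.counts[key]++ // only the racy map update is locked
        c.mu.Unlock()
    }

    func main() {
        c := &Counter{counts: make(map[string]int)}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.Add("  Hello ")
            }()
        }
        wg.Wait()
        fmt.Println(c.counts["hello"])
    }
    ```

    Contending goroutines only ever wait on the map increment itself, not on the string processing.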

    1. 3

      I think that’s a fair summary. Having worked with Go for a while now, I think it’s interesting to try to formalize the guiding principles of writing robust thread-safe code. Keeping the critical section as small as possible is a well-known idea, and yet I still see examples of people locking around I/O, which might be an accident of the lock/defer unlock pattern.
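      A hypothetical Go sketch of that accident: a `defer mu.Unlock()` at the top of a function quietly holds the lock across any I/O lower down. (The `store` type here is my own illustration, not from the post.)

      ```go
      package main

      import (
          "fmt"
          "sync"
      )

      type store struct {
          mu   sync.Mutex
          data map[string]string
      }

      // Bad shape: defer at the top keeps the lock held
      // during the (potentially slow) print below.
      func (s *store) getAndLogBad(key string) string {
          s.mu.Lock()
          defer s.mu.Unlock()
          v := s.data[key]
          fmt.Println("lookup:", key) // I/O inside the critical region
          return v
      }

      // Better: copy what you need under the lock, then do I/O outside it.
      func (s *store) getAndLog(key string) string {
          s.mu.Lock()
          v := s.data[key]
          s.mu.Unlock()
          fmt.Println("lookup:", key) // I/O outside the critical region
          return v
      }

      func main() {
          s := &store{data: map[string]string{"a": "1"}}
          fmt.Println(s.getAndLog("a"))
      }
      ```

      The defer pattern is still fine for small functions; the trap is only when slow work shares a scope with the lock.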

      1. 3

        If the I/O is itself the contended-for shared resource, e.g. interleaving the output from multiple threads into a single stream, then yes, the I/O should also be “locked around”.

        The point isn’t I/O versus not-I/O; the point is the shared resources: any shared resource that, as /u/tscs37 said below, needs to be accessed atomically.

        It is very useful to think in terms of invariants, i.e. relationships that must always hold between data items.

        If a thread race may break the invariant (either by leaving the variables inconsistent relative to each other, or by providing a thread with an inconsistent snapshot of them) then you must lock around them to protect the invariant.
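        A hedged Go illustration (my example, not the commenter’s): a pair of balances whose sum must stay constant is an invariant, so both writes, and any snapshot read, must sit inside one critical region.

        ```go
        package main

        import (
            "fmt"
            "sync"
        )

        // Invariant: a + b is constant; transfer moves value between them.
        type pair struct {
            mu   sync.Mutex
            a, b int
        }

        func (p *pair) transfer(amount int) {
            p.mu.Lock()
            p.a -= amount
            p.b += amount // both writes under one lock, or a reader could
            p.mu.Unlock() // observe the invariant broken between them
        }

        // snapshot reads both fields atomically relative to transfer.
        func (p *pair) snapshot() (int, int) {
            p.mu.Lock()
            defer p.mu.Unlock()
            return p.a, p.b
        }

        func main() {
            p := &pair{a: 100, b: 0}
            var wg sync.WaitGroup
            for i := 0; i < 50; i++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    p.transfer(1)
                    if a, b := p.snapshot(); a+b != 100 {
                        panic("invariant broken")
                    }
                }()
            }
            wg.Wait()
            a, b := p.snapshot()
            fmt.Println(a, b, a+b)
        }
        ```

        Locking each field separately would not help; it’s the relationship between them that needs protecting.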

        However, locking a larger region than absolutely necessary doesn’t make you safer. It merely, as you rightly observed, increases (sometimes drastically) the latency of your system.

        It also, as /u/bio_end_io_t observed below, increases the opportunities for things to go very wrong, e.g. deadlocks and priority inversions.

        > which might be an accident of the lock/defer unlock pattern.

        In C, which lacks the nifty Go “defer” thing, or C++ RAII, or the Ruby “ensure”, I have taken to actually introducing an entire new scope for each lock, e.g.

        void myFunc( void )
        {
            /* ... non-critical work ... */
            {   /* lock scope */
                pthread_mutex_lock( &myLock );
                /* critical section: touch shared state only */
                pthread_mutex_unlock( &myLock );
            }
            /* ... more non-critical work ... */
        }

        And then make sure as little as possible is in that scope.

        Sort of makes it in your face obvious which is the critical section. (Which is especially visually handy if you need to take two nested locks, and allows you to rapidly check that you always take them in the same order everywhere.)

        And Obvious is a Very Good Thing when dealing with multithreaded code.

        1. 2

          This is a fantastic response. Thank you for the thoughtful comments.

      2. 3

        I agree that keeping critical regions small and not locking around I/O are good rules of thumb.

        There are some subtleties, since you need to know what exactly “as small as possible” means. For example, in general, you don’t want to call kmalloc while holding a spinlock because it may block. Of course, you can pass the GFP_ATOMIC flag to kmalloc to avoid blocking, but in most cases you can allocate outside the critical section anyway.

        So really, you want to avoid blocking inside a critical region, but use something like a mutex or semaphore that can release the CPU if there is a chance you will need to sleep inside the critical region, because quite frankly, that can happen. I/O tends to be something that can block, so if you NEED locking around I/O, avoid using spinlocks.

        Edit: fixed typo. should say “that” instead of “than”
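        The kernel-specific details aside, the general shape translates to user-space Go as well: do the allocation (or any other potentially blocking work) before taking the lock. A hedged sketch, with made-up names:

        ```go
        package main

        import (
            "fmt"
            "sync"
        )

        type registry struct {
            mu    sync.Mutex
            items map[int][]byte
        }

        // Allocate first, then lock only for the shared-map update,
        // so the critical region never includes the allocation.
        func (r *registry) addItem(id, size int) {
            buf := make([]byte, size) // allocation outside the lock

            r.mu.Lock()
            r.items[id] = buf // only the racy insert is locked
            r.mu.Unlock()
        }

        func main() {
            r := &registry{items: make(map[int][]byte)}
            r.addItem(1, 4096)
            fmt.Println(len(r.items[1]))
        }
        ```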

        1. 2

          There are two sides to it. There is of course the case where you need to keep the critical section as small as possible, and I think that covers the majority of code.

          There is also code that needs to operate atomically, in transactions. This mainly affects anything that needs to process a lot of data while holding the lock for it. In that case you want the lock to be as wide as the whole transaction, so as not to have to re-enter the locked region all the time.
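          A hedged Go sketch of that trade-off (my own example): per-item locking pays the lock cost on every element, while batching deliberately widens the critical region so the whole transaction is seen atomically and the cost is paid once.

          ```go
          package main

          import (
              "fmt"
              "sync"
          )

          type sink struct {
              mu    sync.Mutex
              total int
          }

          // Per-item locking: N lock/unlock round trips, and other
          // goroutines may observe partially applied batches.
          func (s *sink) addEach(items []int) {
              for _, v := range items {
                  s.mu.Lock()
                  s.total += v
                  s.mu.Unlock()
              }
          }

          // Batched: one wider critical region for the whole transaction.
          func (s *sink) addBatch(items []int) {
              s.mu.Lock()
              for _, v := range items {
                  s.total += v
              }
              s.mu.Unlock()
          }

          func main() {
              s := &sink{}
              s.addBatch([]int{1, 2, 3, 4})
              fmt.Println(s.total)
          }
          ```

          Which side wins depends on how long the batch work holds the lock relative to how often others need it.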

        2. 2

          Just a note, it would be helpful to preface code samples with the language that they’re written in (in this case, Go). Thanks!

          1. 1

            Good point. I’ll update the post.