1. 15

  2. 4

    Second, if there’s really no OS to speak of and you are running on bare metal (or in the kernel), things get even worse than priority inversion.

    On bare metal we generally don’t worry about thread preemption, but we do need to worry about processor interrupts. That is, while the processor is executing some code, it might receive an interrupt from a peripheral device and temporarily switch to the interrupt handler’s code.

    And here comes the disaster: if the main code is in the middle of the critical section when the interrupt arrives, and the interrupt handler tries to enter the critical section as well, we get a guaranteed deadlock! There’s no OS to switch threads after a quantum expires. Here are the Linux kernel docs discussing this issue.
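    The scenario can be sketched in user-space C (a simulation, not real bare-metal code: the "interrupt handler" is just a function called mid-critical-section, and it uses a trylock so the demo terminates instead of actually spinning forever):

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Minimal test-and-set spinlock (illustrative only; real kernels
     * use more elaborate ticket or queued locks). */
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    static bool spin_trylock(void) {
        /* true if we got the lock; a real spin_lock() would loop
         * here forever instead of giving up. */
        return !atomic_flag_test_and_set_explicit(&lock, memory_order_acquire);
    }

    static void spin_unlock(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }

    /* Stands in for an interrupt handler. On bare metal it runs on top
     * of the interrupted code, which cannot resume until it returns. */
    static void fake_interrupt_handler(void) {
        if (!spin_trylock()) {
            /* With a plain spin_lock() this would spin forever: the lock
             * holder is the very code we interrupted, and it can only
             * release the lock after we return. Guaranteed deadlock. */
            puts("handler: lock held by interrupted code -> would deadlock");
        } else {
            spin_unlock();
        }
    }

    int main(void) {
        spin_trylock();           /* main code enters the critical section */
        fake_interrupt_handler(); /* interrupt arrives mid-critical-section */
        spin_unlock();
        return 0;
    }
    ```

    The key point the simulation makes: unlike two threads contending under an OS, the interrupted code and the handler share one CPU, so spinning in the handler never lets the lock holder run.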

    The Linux kernel does indeed prefer mutexes. From a related article:

    Unless the strict semantics of mutexes are unsuitable and/or the critical region prevents the lock from being shared, always prefer them to any other locking primitive.

    However, one of those strict semantics is:

    • Mutexes may not be used in hardware or software interrupt contexts such as tasklets and timers.

    This is because interrupt handlers cannot sleep in the Linux kernel. For the reasoning behind this decision, have a look at this response on LKML. So mutexes cannot be used to protect data accessed in an interrupt context.

    So how does one prevent the scenario above, where an interrupt causes a deadlock by trying to take a lock held by the task it interrupted? Disable interrupts before taking the spinlock. For more information, check out this article on locking.
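    A minimal user-space sketch of that pattern, with names that mimic the kernel’s spin_lock_irqsave()/spin_unlock_irqrestore() API (this is a simulation with a boolean standing in for the CPU’s interrupt flag, not actual kernel code):

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <assert.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;
    static bool irqs_enabled = true;  /* stand-in for the CPU interrupt flag */

    static unsigned long local_irq_save(void) {
        unsigned long flags = irqs_enabled;  /* remember previous state */
        irqs_enabled = false;                /* mask interrupts ("cli") */
        return flags;
    }

    static void local_irq_restore(unsigned long flags) {
        irqs_enabled = flags;  /* restore saved state, don't blindly enable */
    }

    static unsigned long spin_lock_irqsave_sim(void) {
        unsigned long flags = local_irq_save();  /* mask interrupts first... */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;                                    /* ...then spin for the lock */
        return flags;
    }

    static void spin_unlock_irqrestore_sim(unsigned long flags) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
        local_irq_restore(flags);
    }

    int main(void) {
        unsigned long flags = spin_lock_irqsave_sim();
        /* Critical section: interrupts are masked on this CPU, so an
         * interrupt handler can never find us holding the lock. */
        assert(!irqs_enabled);
        spin_unlock_irqrestore_sim(flags);
        assert(irqs_enabled);
        return 0;
    }
    ```

    Note the ordering: interrupts are masked before the lock is taken and restored only after it is released, so there is no window in which an interrupt can arrive while the lock is held.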

    While spinlocks can be a poor fit for user code, they are necessary for kernel work, contrary to what the article seems to imply.

    1. 1

      From the title I expected an empty pile-on to the various other spinlock articles this week, but this is a very practical and detailed writeup.