1. 9

  2. 3

    Oh cool! You also would have fun, I bet, doing a ring buffer instead of this linked-list style queue.

    You may find better throughput and memory performance for only marginally larger implementation cost.
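    A ring buffer here is just a fixed array with wrapping read/write indices, so elements are contiguous and there's no per-node allocation. A minimal single-threaded sketch (names and capacity are made up for illustration):

    ```c
    #include <stdio.h>

    #define CAP 8 /* a power of two keeps the wrap arithmetic cheap */

    typedef struct {
        int buf[CAP];
        unsigned head, tail; /* head = next read, tail = next write */
    } ring_t;

    int ring_push(ring_t *r, int v) {
        if (r->tail - r->head == CAP) return 0; /* full (unsigned diff is the count) */
        r->buf[r->tail++ % CAP] = v;
        return 1;
    }

    int ring_pop(ring_t *r, int *out) {
        if (r->tail == r->head) return 0; /* empty */
        *out = r->buf[r->head++ % CAP];
        return 1;
    }

    int main(void) {
        ring_t r = {0};
        for (int i = 0; i < 3; i++) ring_push(&r, i);
        int v;
        while (ring_pop(&r, &v)) printf("%d ", v); /* prints 0 1 2 */
        printf("\n");
        return 0;
    }
    ```

    Letting the unsigned indices grow monotonically and taking `% CAP` only on access means full vs. empty is just `tail - head`, with no wasted slot.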

    1. 2

      Ring buffers are cool. A non-locking threadsafe implementation is also pretty cool. I found one over here: https://github.com/darkautism/lfqueue.

    2. 2

      Condition variables are not quite ideal here.

        pthread_mutex_lock(&queue->mutex);
        // (1)
        while (queue->head == NULL) { // block if buffer is empty
            pthread_cond_wait(&queue->wakeup, &queue->mutex);
        }
      

      If the pop thread gets descheduled at (1), it still holds the mutex, so everyone else gets blocked until the scheduler runs the pop thread again — which immediately goes back to sleep if the queue is empty. With semaphores, the “go to sleep if the queue is empty” check is atomic and doesn’t lock everyone out of the queue.
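      One way to get that semantics is a counting semaphore per resource: blocking happens in `sem_wait` *outside* the mutex, and the mutex only guards the index updates. A minimal sketch (a fixed-capacity array queue rather than the linked list above; names are made up):

      ```c
      #include <pthread.h>
      #include <semaphore.h>
      #include <stdio.h>

      #define CAP 16

      typedef struct {
          int buf[CAP];
          int head, tail;
          sem_t items;           /* filled slots: pop sleeps here, outside the mutex */
          sem_t slots;           /* free slots: push sleeps here, outside the mutex */
          pthread_mutex_t mutex; /* guards head/tail only, held briefly */
      } queue_t;

      void queue_init(queue_t *q) {
          q->head = q->tail = 0;
          sem_init(&q->items, 0, 0);
          sem_init(&q->slots, 0, CAP);
          pthread_mutex_init(&q->mutex, NULL);
      }

      void queue_push(queue_t *q, int v) {
          sem_wait(&q->slots);   /* block if full, without holding the mutex */
          pthread_mutex_lock(&q->mutex);
          q->buf[q->tail] = v;
          q->tail = (q->tail + 1) % CAP;
          pthread_mutex_unlock(&q->mutex);
          sem_post(&q->items);   /* wake one popper */
      }

      int queue_pop(queue_t *q) {
          sem_wait(&q->items);   /* block if empty, without holding the mutex */
          pthread_mutex_lock(&q->mutex);
          int v = q->buf[q->head];
          q->head = (q->head + 1) % CAP;
          pthread_mutex_unlock(&q->mutex);
          sem_post(&q->slots);   /* free a slot for pushers */
          return v;
      }

      int main(void) {
          queue_t q;
          queue_init(&q);
          queue_push(&q, 1);
          queue_push(&q, 2);
          printf("%d %d\n", queue_pop(&q), queue_pop(&q)); /* prints 1 2 */
          return 0;
      }
      ```

      If a popper is descheduled between `sem_wait` and `pthread_mutex_lock`, it isn't holding the mutex, so pushers and other poppers keep making progress.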

      https://github.com/darkautism/lfqueue

      This is extremely bad. The scheduler is stupid (and your user might want to do other things with their computer) - it’s your job to make it easy for the scheduler and put your threads to sleep when they have nothing to do. If you have heavy contention, replacing all of your mutexes with spinlocks means all of your idle threads will steal scheduler time from the thread that can actually make progress and unblock the others. The point of lock-free algorithms is to make the worst case less bad, but this code does the exact opposite.

      Whoops that’s all wrong - the spinning threads can’t block other threads from making progress because an operation only fails when another thread succeeds. It’s still not good though. If your MPMC queue has high contention, each CAS operation basically becomes a cache miss and most of your CPU time goes to waste. If you’re bottlenecked on a contended MPMC queue, redesigning your program so that’s no longer the case is going to be a bigger win than using a faster queue.
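      The “an operation only fails when another thread succeeds” property is easiest to see in a CAS retry loop. A hedged sketch using C11 atomics — this is a Treiber-stack push/pop, not lfqueue’s actual algorithm, and the naive `free` in `pop` is unsafe under real concurrency (ABA / use-after-free) without hazard pointers or similar:

      ```c
      #include <stdatomic.h>
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct node { int v; struct node *next; } node_t;

      static _Atomic(node_t *) head = NULL;

      void push(int v) {
          node_t *n = malloc(sizeof *n);
          n->v = v;
          node_t *old = atomic_load(&head);
          do {
              n->next = old; /* a failed CAS refreshes old with the new head */
          } while (!atomic_compare_exchange_weak(&head, &old, n));
          /* the CAS only fails because some other thread's CAS succeeded,
             so the system as a whole always makes progress — but each retry
             re-fetches a contended cache line */
      }

      int pop(void) {
          node_t *old = atomic_load(&head);
          do {
              if (old == NULL) return -1; /* empty */
          } while (!atomic_compare_exchange_weak(&head, &old, old->next));
          int v = old->v;
          free(old); /* UNSAFE with concurrent poppers: ABA and use-after-free */
          return v;
      }

      int main(void) {
          push(1); push(2); push(3);
          printf("%d %d %d\n", pop(), pop(), pop()); /* prints 3 2 1 */
          return 0;
      }
      ```

      Under heavy contention every thread spins on that same cache line, which is exactly the “most of your CPU time goes to waste” failure mode above.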