
  2. 3

    Interesting to see a more modern Java concurrency approach compared to what I remember from The Art of Multiprocessor Programming.

    What wasn’t clear to me on reading is how the example Atom satisfies the property stated by the author:

    Every operation will succeed after the finite number of tries

    For example, two threads could execute the swap function.

    Thread 1            Thread 2
    =======================================
    oldv = get()
                        oldv = get()
                        newv = swapOp(oldv)
                        cas(oldv, newv)      // succeeds
    newv = swapOp(oldv)
    cas(oldv, newv)      // fails: value changed in between, retry
    oldv = get()
                        oldv = get()
                        newv = swapOp(oldv)
                        cas(oldv, newv)      // succeeds again
                  ...

    If this repeats forever then thread 1’s swap never succeeds. Did I miss something? I’m never too confident of pen-and-paper proofs of concurrent programs.
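
    For concreteness, this is roughly the shape of swap I have in mind (a sketch with my own names, not necessarily the article’s exact code); the get/swapOp/cas steps above map onto one iteration of its retry loop:

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.UnaryOperator;

    // Sketch of a CAS-based Atom; names are mine, not the article's.
    final class Atom<T> {
        private final AtomicReference<T> ref;

        Atom(T initial) {
            this.ref = new AtomicReference<>(initial);
        }

        T get() {
            return ref.get();
        }

        T swap(UnaryOperator<T> swapOp) {
            for (;;) {
                T oldv = ref.get();              // oldv = get()
                T newv = swapOp.apply(oldv);     // newv = swapOp(oldv)
                if (ref.compareAndSet(oldv, newv)) {
                    return newv;                 // cas succeeded
                }
                // cas failed: another thread changed the value in between, retry
            }
        }
    }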

    What I always thought was a subtle design challenge for concurrent data structures is that you can have “bugs” in your API. For example, if you expose a read() operation and a write() operation on your lock-free data structure, then you leave the user open to writing something like:

    if (read()) {
       write();
    }

    allowing an atomicity violation to creep in.
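
    To make that concrete, here’s a hypothetical example (names are mine): read() and write() are each atomic on their own, but the caller’s check-then-act is not, so the composed operation only stays atomic if the structure offers it as a single step:

    import java.util.concurrent.atomic.AtomicReference;

    // Hypothetical "write once" holder that exposes read() and write() separately.
    final class OnceCell {
        private final AtomicReference<String> value = new AtomicReference<>(null);

        String read() { return value.get(); }

        void write(String v) { value.set(v); }

        // The composed check-then-act has to live inside the structure as one CAS:
        boolean writeIfAbsent(String v) { return value.compareAndSet(null, v); }
    }

    // Racy caller: two threads can both see read() == null and both call write(),
    // which is exactly the atomicity violation described above.
    // if (cell.read() == null) {
    //     cell.write("mine");
    // }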

    1. 1

      The only exit condition from swap is a successful swap, so it has to succeed: there’s a for loop in there. This is pretty much the whole point of Atoms: to allow async swaps using a unary function.

      If you want more guarantees, you can add a counter and make sure you exit after a certain number of retries.
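
      Something along these lines, for instance (a sketch with illustrative names): the same retry loop, but it gives up after maxRetries failed CAS attempts instead of spinning forever:

      import java.util.concurrent.atomic.AtomicReference;
      import java.util.function.UnaryOperator;

      // Bounded-retry variant of swap: fail loudly instead of retrying forever.
      final class BoundedAtom<T> {
          private final AtomicReference<T> ref;
          private final int maxRetries;

          BoundedAtom(T initial, int maxRetries) {
              this.ref = new AtomicReference<>(initial);
              this.maxRetries = maxRetries;
          }

          T swap(UnaryOperator<T> swapOp) {
              for (int i = 0; i < maxRetries; i++) {
                  T oldv = ref.get();
                  T newv = swapOp.apply(oldv);
                  if (ref.compareAndSet(oldv, newv)) {
                      return newv;
                  }
              }
              throw new IllegalStateException("swap gave up after " + maxRetries + " failed CAS attempts");
          }
      }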

      As regards API design: you’re absolutely right. It’s always hard. Luckily, Atom doesn’t have this problem, since the old value is always explicitly passed, and it’s the user’s responsibility to make sure they don’t leak the mutable state.

      1. 1

        Yeah, the author is not terribly precise here. My impression is that lock-free algorithms guarantee overall system progress, but won’t prevent per-thread starvation. As you’ve noticed, an Atom can always starve; in particular, optimistic concurrency often does poorly under extremely high concurrency, as many threads may do work simultaneously, but only one will have its work actually be used.

        I think one possible next step from using AtomicRefs together is something like STM, where you can combine several separate atomic operations into a single larger atomic operation. I’ve heard mixed things about how well it performs in general though, so I’m not sure that it’s the next step.
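
        The appeal is composition. A toy example (purely illustrative, not any particular STM API): each update below is atomic on its own, but the transfer as a whole is not, which is exactly the gap an STM transaction would close:

        import java.util.concurrent.atomic.AtomicLong;

        // Two independently atomic references that logically belong together.
        final class Accounts {
            final AtomicLong a = new AtomicLong(100);
            final AtomicLong b = new AtomicLong(0);

            void transfer(long amount) {
                a.addAndGet(-amount);  // atomic step 1
                // a concurrent reader here sees a + b short by `amount`
                b.addAndGet(amount);   // atomic step 2
            }
        }
        // An STM would let both updates commit (or retry) as one transaction.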

        1. 1

          All the JVM-based implementations of STM that I have seen so far use lots of locks internally, so I wouldn’t say it’s a step forward against contention.

          Although what could help is to design the system so that all writes go through a single writer.
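
          Roughly this shape, for example (a sketch with made-up names): all mutations are queued and applied by one dedicated thread, so writes never contend with each other, and readers just read the latest published state:

          import java.util.concurrent.BlockingQueue;
          import java.util.concurrent.LinkedBlockingQueue;
          import java.util.function.UnaryOperator;

          // Single-writer sketch: one thread owns all writes to `state`.
          final class SingleWriter<T> {
              private volatile T state;
              private final BlockingQueue<UnaryOperator<T>> writes = new LinkedBlockingQueue<>();

              SingleWriter(T initial) {
                  this.state = initial;
                  Thread writer = new Thread(() -> {
                      try {
                          while (true) {
                              state = writes.take().apply(state);  // only this thread writes state
                          }
                      } catch (InterruptedException e) {
                          Thread.currentThread().interrupt();
                      }
                  });
                  writer.setDaemon(true);
                  writer.start();
              }

              void submit(UnaryOperator<T> write) { writes.add(write); }

              T read() { return state; }  // no locks on the read path
          }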

          1. 2

            Locks don’t necessarily mean you suffer more from contention. If you are contended the vast majority of the time, programming with locks can be much faster than optimistic locking, and can give you much higher throughput.
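
            A toy illustration of the trade-off (not a benchmark): under heavy contention the optimistic version keeps recomputing and retrying failed CASes, while the lock version makes each thread do its update exactly once, at the cost of queueing on the lock:

            import java.util.concurrent.atomic.AtomicLong;
            import java.util.concurrent.locks.ReentrantLock;

            final class Counters {
                // Optimistic: read, compute, CAS, retry on failure (wasted work when contended).
                private final AtomicLong optimistic = new AtomicLong();
                long incrementOptimistic() {
                    for (;;) {
                        long current = optimistic.get();
                        if (optimistic.compareAndSet(current, current + 1)) {
                            return current + 1;
                        }
                    }
                }

                // Pessimistic: take the lock, update, release (threads queue, no retries).
                private final ReentrantLock lock = new ReentrantLock();
                private long locked;
                long incrementLocked() {
                    lock.lock();
                    try {
                        return ++locked;
                    } finally {
                        lock.unlock();
                    }
                }
            }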

            Well, if we’re really looking at implementations, Java’s compare-and-swap uses a LOCK prefix before CMPXCHG on x86, so we are all sinners. I think correct implementations of STM, even if they eventually rely on locks, can’t run into deadlocks and are guaranteed to always make system-wide progress.

            Note that lock-freedom doesn’t mean you can’t use locks; it just means that at least one thread must always be able to make progress, even if individual threads starve.

            Single-threading your writes is OK if you don’t have to do a lot of writes, but it’s in no way a silver bullet, and can add latency unnecessarily, especially since you need to do locked bus traffic anyway if you want to read safely.