1. 10
  1.  

  2. 2

    The original implementation, and I assume this one too, doesn’t store absolute timestamps in the lock, so as to avoid issues with clock skew across different machines.

    There are some downsides to this approach: if you have a stale, unreleased lock, the client needs to hang around and wait for the full lease duration before it can expire the lock.

    If you instead want to fail fast when the lock can’t be acquired, you will need some kind of daemon to ensure that stuck locks are cleaned up; otherwise that lock key will be blocked forever.
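
    For illustration, roughly what that approach looks like (a minimal sketch only; the table layout, attribute names, and lease length are made up, not the library’s actual schema or API):

    ```python
    import time
    import uuid

    import boto3
    from boto3.dynamodb.conditions import Attr
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("locks")  # hypothetical table name
    LEASE_SECONDS = 30  # illustrative lease length


    def try_acquire(lock_key: str) -> str | None:
        """Try to take the lock, waiting out a full lease if it looks stale."""
        new_rvn = str(uuid.uuid4())
        existing = table.get_item(Key={"key": lock_key}, ConsistentRead=True).get("Item")

        if existing is None:
            condition = Attr("key").not_exists()
        else:
            # No wall-clock timestamp is trusted: remember the record version
            # number (RVN) we saw and wait a full lease on *our own* clock. A
            # live holder will have heartbeated (rewritten the RVN) by then.
            observed_rvn = existing["record_version_number"]
            time.sleep(LEASE_SECONDS)
            condition = Attr("record_version_number").eq(observed_rvn)

        try:
            table.put_item(
                Item={"key": lock_key, "record_version_number": new_rvn},
                ConditionExpression=condition,
            )
            return new_rvn  # we hold the lock; keep heartbeating with new RVNs
        except ClientError as e:
            if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return None  # someone else heartbeated or took it first
            raise
    ```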

    Anecdotally, when I’ve used versioned objects and conditional writes with DynamoDB, I’ve had a number of issues with contention over particular keys. I haven’t measured the latency, but it seems like it can take quite a long time for a write to become visible to all readers; when retrying operations, I needed quite large backoff durations with some randomness. I’m not really sure how well suited DynamoDB is as a lock database.
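
    What I mean by large backoff with randomness is roughly this kind of full-jitter retry wrapper (the attempt count, base delay, and cap are illustrative, not a recommendation):

    ```python
    import random
    import time

    from botocore.exceptions import ClientError


    def retry_conditional_write(write_once, attempts=8, base=0.2, cap=10.0):
        """Retry a conditional write with exponential backoff plus full jitter."""
        for attempt in range(attempts):
            try:
                return write_once()
            except ClientError as e:
                if e.response["Error"]["Code"] != "ConditionalCheckFailedException":
                    raise
                # Sleep a random amount up to an exponentially growing cap so
                # contending writers spread out instead of colliding again.
                time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        raise TimeoutError("could not win the conditional write")
    ```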

    1. 3

      The original library was used internally at AWS; my team specifically used it for leader election on application startup. I don’t remember having any issues with it; however, that wasn’t a particularly latency-sensitive use case.

        1. 1

          Ah, good point. Looking back, I think this might have been the main issue: trying to do conditional writes based on data obtained from an eventually consistent read.

          DynamoDB really seems to have a lot of gotchas.
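
          For anyone hitting the same thing, a rough boto3 sketch of the difference (table and attribute names are made up): GetItem is eventually consistent by default, so the version you condition on can already be stale unless you ask for a strongly consistent read.

          ```python
          import boto3
          from boto3.dynamodb.conditions import Attr

          table = boto3.resource("dynamodb").Table("items")  # hypothetical table


          def bump_counter(pk: str):
              # Without ConsistentRead=True this read may return an old version,
              # and the conditional write below fails (or races) far more often.
              item = table.get_item(Key={"pk": pk}, ConsistentRead=True)["Item"]
              table.put_item(
                  Item={**item, "counter": item["counter"] + 1, "version": item["version"] + 1},
                  ConditionExpression=Attr("version").eq(item["version"]),
              )
          ```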

          1. 1

            I don’t think it’s fair to call this a “gotcha”; this is a common feature of distributed data stores: fast and possibly stale, or slow and correct.

            1. 2

              Well it got me, so maybe a “gotme”.

      1. 1

        Distributed locks can’t work, even in theory.

        There are two ways to handle dead machines. Either the lock can be held indefinitely, or it can expire.

        If it’s held indefinitely, every hardware failure requires manual intervention and can cause deadlock. This is obviously inoperable at scale, and painful even in the small.

        If a timeout is added, then a stalled machine or a split network can cause the lock to expire while the holder still believes it has it, which means your code can run outside the critical section. Your code needs to handle this anyway, so the lock becomes unnecessary for correct code.

        In a distributed system, the only option is wait-free code (https://en.m.wikipedia.org/wiki/Non-blocking_algorithm).

        1. 1

          Aye, if folks expect them to behave the same way as an in-process mutex, they’ll be disappointed. For reference, though, Martin Kleppmann’s article “How to do distributed locking” seems to be a good explainer here.

          1. 2

            That entire article is about how redlock is broken, and how you can’t do a safe lock with it.

            You can use it as a hint for efficiency, but the code still needs to be wait-free in order to be correct. Distributed locks don’t work for locking.

            The example usage in the package doc is the exact pattern that is mentioned in Kleppmann’s article. However, the usage example is missing the comment Kleppmann’s article contains:

            // THIS CODE IS BROKEN
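
            For reference, a toy sketch of the fencing-token fix that article proposes (names are illustrative, not this package’s API): the storage side rejects writes carrying a token older than one it has already seen, so a holder whose lease quietly expired can’t clobber newer writes.

            ```python
            class FencedStorage:
                """Toy store that enforces fencing tokens on writes."""

                def __init__(self):
                    self.highest_token = -1
                    self.data = {}

                def write(self, key, value, token: int):
                    # Tokens increase every time the lock is granted, so an old
                    # token means the writer lost the lock without noticing.
                    if token < self.highest_token:
                        raise RuntimeError("stale fencing token: lock was lost")
                    self.highest_token = token
                    self.data[key] = value
            ```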
            
          2. 1

            I think aphyr explained this as, roughly, “distributed locks aren’t possible, the best you can do is distributed leases”.

            “In a distributed system, the only option is wait-free code”

            Wait freedom is a strong property. I think many systems get by with just non-blocking semantics and an acceptably low probability of livelock?

            1. 2

              Yeah, you might be right. The essential property is that all other processes make progress if one of the participants in a protocol stalls out for an arbitrary amount of time (or permanently). Locks prevent this, and timing out locks means that in pessimal cases, things act as though you never got the lock.

              1. 1

                Well, I’m asking because it’s plausible to me that I might be wrong. Maybe at some very large scale you end up running into some effect along the lines of “with enough machines, there’s almost always at least one group of them that have gotten wedged into a livelock at any given time, so your overall availability is low without at least lock freedom”. I don’t know either way?

            2. 1

              Yes, we are well aware of the latter issue; this is often taken care of through either a fencing token or an atomic lock-data update, depending on the constraints your application runs under. In delta-rs, we leverage an atomic lock-data update to implement PUT-if-absent logic on top of S3. Our algorithm is formally verified at https://github.com/delta-io/delta-rs/tree/main/proofs/src.
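
              Very roughly, the shape of that idea (a simplified sketch, not delta-rs’s actual code or schema; see the linked proofs for the real algorithm): a conditional write against DynamoDB decides which writer gets to create a given S3 key, since plain S3 PUTs historically offered no compare-and-set.

              ```python
              import boto3
              from boto3.dynamodb.conditions import Attr
              from botocore.exceptions import ClientError

              table = boto3.resource("dynamodb").Table("commit_log")  # hypothetical table
              s3 = boto3.client("s3")


              def put_if_absent(bucket: str, key: str, body: bytes) -> bool:
                  try:
                      # Only one writer can insert the row for this key; the loser
                      # sees ConditionalCheckFailedException and backs off.
                      table.put_item(
                          Item={"key": key},
                          ConditionExpression=Attr("key").not_exists(),
                      )
                  except ClientError as e:
                      if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
                          return False
                      raise
                  s3.put_object(Bucket=bucket, Key=key, Body=body)
                  return True
              ```

              A real implementation also needs a repair path for a writer that crashes between claiming the key and writing the object; the sketch above glosses over that.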