1. 36
    1. 48

      This API is in a new class of products I like to call “Deadlock as a Service”, or DaaS for short.

      1. 18

        And DaaS is not good

    2. 16

      I thought this was a joke until I got to the example section, which has a semi-serious example. I went, “really!? Someone actually thinks this is a good idea?” and then halfway through the section realized that actually, yes, this could possibly help solve a problem we’re dealing with right now in $dayjob.

      I don’t really know how to feel about this.

      1. 9

        I love how it’s both a joke and something that’s actually useful. Distributed sync is hard, and this is an easy-to-use solution to the problem of atomic access.

      2. 3

        If there were a timeout, or if grab required continually pinging to say “I’m still using it”, then on the surface it’s no worse than using Redis. Of course, we have no idea about reliability, internal failover, SLAs, etc, etc, etc.
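
A minimal in-process sketch of that heartbeat scheme (all names here are hypothetical, since the actual API surface isn’t shown): a holder must keep renewing the lease or the lock expires and someone else can grab it.

```python
import threading
import time

class LeaseLock:
    """Toy in-process lock with a TTL: grab() acquires, ping() is the
    "I'm still using it" heartbeat, and a lapsed lease can be re-grabbed.
    Hypothetical names; only illustrates the scheme described above."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._guard = threading.Lock()
        self._holder = None
        self._expires_at = 0.0

    def grab(self, who: str) -> bool:
        # Acquire if free, or if the previous holder's lease has lapsed.
        with self._guard:
            now = time.monotonic()
            if self._holder is None or now >= self._expires_at:
                self._holder = who
                self._expires_at = now + self.ttl
                return True
            return False

    def ping(self, who: str) -> bool:
        # Heartbeat: renew the lease, but only for the current holder.
        with self._guard:
            if self._holder == who and time.monotonic() < self._expires_at:
                self._expires_at = time.monotonic() + self.ttl
                return True
            return False

    def release(self, who: str) -> None:
        with self._guard:
            if self._holder == who:
                self._holder = None

lock = LeaseLock(ttl=0.05)
assert lock.grab("a")       # a holds the lease
assert not lock.grab("b")   # b is rejected while the lease is live
assert lock.ping("a")       # a's heartbeat renews the lease
time.sleep(0.06)            # a stops pinging; the lease lapses
assert lock.grab("b")       # b can now take over
```

The point of the heartbeat is exactly the trade-off raised downthread: it turns “owner died, lock leaked forever” into “owner died, lock frees itself after one TTL”.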

        1. 10

          Of course, we have no idea about reliability, internal failover, SLAs, etc, etc, etc.

          This isn’t quite true. We know that it uses, and I’m quoting, “the cheapest Linode instance”.

          1. 1

            Fair! :)

            I was trying to be a bit more charitable.

    3. 13

      It’s worth noting that distributed locking doesn’t work.

      If an owner of a mutex dies, then the mutex gets leaked, and your system deadlocks.

      If you try to solve this with a timeout, then you have to deal with network partitions and pauses where one program thinks it has the mutex, but in fact the mutex has timed out. To make your system resilient to this, you need to deal with mutexes getting unlocked as you run. That means you need your algorithms to work in the absence of locking.

      Distributed mutexes are, at best, an optimization.
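
The standard mitigation for the timeout problem described above is a fencing token: each grant carries a monotonically increasing number, and the protected resource rejects writes bearing a stale token. A toy single-process sketch (names are illustrative, not any particular product’s API):

```python
import threading
import time

class FencedLock:
    """Lease-based lock whose acquire() returns a monotonically
    increasing fencing token instead of a bare yes/no."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._guard = threading.Lock()
        self._token = 0
        self._expires_at = 0.0

    def acquire(self):
        # Returns a fencing token, or None while the lock is held.
        with self._guard:
            now = time.monotonic()
            if now >= self._expires_at:
                self._token += 1
                self._expires_at = now + self.ttl
                return self._token
            return None

class Resource:
    """The storage side: accepts a write only if its token is
    at least the highest token seen so far."""
    def __init__(self):
        self.highest = 0
        self.value = None

    def write(self, token: int, value) -> bool:
        if token < self.highest:
            return False  # stale holder whose lease already timed out
        self.highest = token
        self.value = value
        return True

lock = FencedLock(ttl=0.05)
res = Resource()

t1 = lock.acquire()              # first client gets token 1
time.sleep(0.06)                 # ...then pauses past its lease (GC pause, partition)
t2 = lock.acquire()              # second client gets token 2
assert res.write(t2, "new")      # current holder's write lands
assert not res.write(t1, "old")  # the stale holder is fenced off
```

Note this only shifts the burden in the direction the comment describes: the resource itself has to cooperate, which is another way of saying your algorithm must already tolerate the lock evaporating under it.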

      1. 10

        It’s ok, I’ve got an AI startup in the works that can resolve distributed deadlocks and network partitions automatically. Makes Erlang OTP look like a kid’s toy.

      2. 3

        Distributed leader election is equivalent to distributed mutual exclusion, for example.
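
One direction of that equivalence is easy to see in miniature: given a mutex, leader election falls out for free, since whoever wins a non-blocking acquire is the leader. A toy thread-based sketch:

```python
import threading

# Given a mutex, elect a leader: every candidate races on a
# non-blocking acquire, and exactly one can win.
mutex = threading.Lock()
leaders = []
leaders_guard = threading.Lock()

def candidate(name):
    if mutex.acquire(blocking=False):
        with leaders_guard:
            leaders.append(name)

threads = [threading.Thread(target=candidate, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(leaders) == 1  # exactly one candidate became leader
```

The other direction holds too: with an elected leader, only the leader enters the critical section, which is mutual exclusion. So a distributed mutex service inherits all the classic impossibility and failure-handling baggage of distributed leader election.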

    4. 7

      I built something similar to this years ago: https://github.com/kevsmith/crest

    5. 5

      A lot of people seem to think this is a really serious project and not a fun little silly thing.

    6. 4

      My (extremely limited) experimentation with Parallel/curl leads me to believe that you managed to not build this on an eventually consistent datastore, which is more than I can say about a previous coworker’s attempt to build a distributed mutex system :-/


      1. 8

        It’s implemented on top of SQLite.

        Also. What. An eventually consistent mutex? That’s a new one.

        1. 4

          It wasn’t supposed to be eventually consistent – but when you choose DynamoDB as your backing store and don’t tell it you need strongly consistent reads, funny things happen when you’re locking and unlocking under load.

          Thankfully it was caught before it got anywhere near a production system. By which I mean “we rewrote what he was doing to run on a single system so it didn’t need distributed anything.”

    7. 3

      This is so fun! Thank you for sharing!

    8. 2

      Those who do not understand Unix are condemned to reinvent it, poorly.