  2.

    This looks like a cool piece of work, though I wonder how it handles consistency, especially with the Redis driver. For example, losing an INCR on only one node, or getting operations applied out of order, like two LPUSHes.
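
    A toy sketch (my own, not anything from Dynomite's codebase) of why those two anomalies are nasty for asynchronously replicated Redis: INCR is not idempotent, so a dropped replica write diverges the counter, and LPUSH is not commutative, so reordered writes diverge the list.

    ```python
    # Toy model of two Redis replicas; all names here are illustrative only.
    class Replica:
        def __init__(self):
            self.counters = {}
            self.lists = {}

        def incr(self, key):
            self.counters[key] = self.counters.get(key, 0) + 1

        def lpush(self, key, value):
            self.lists.setdefault(key, []).insert(0, value)

    a, b = Replica(), Replica()

    # Lost write: the INCR lands on replica a, but the copy to b is dropped.
    a.incr("hits")
    print(a.counters["hits"], b.counters.get("hits", 0))  # 1 0 -- divergence

    # Reordering: the same two LPUSHes arrive in opposite order on each replica.
    a.lpush("log", "x"); a.lpush("log", "y")
    b.lpush("log", "y"); b.lpush("log", "x")
    print(a.lists["log"], b.lists["log"])  # ['y', 'x'] ['x', 'y'] -- divergence
    ```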

    1.

      Good question.

      I also wonder what the benefit of something like Dynomite is relative to implementing a Memcache API on top of Riak with a memory backend.

      1.

        It does have different ideas about data locality: the whole “racks are a full replica” thing. Riak doesn’t care at all as long as the data is replicated N times, though you could achieve a similar effect with Riak Enterprise multi-datacenter replication.

        I’d be interested to see a performance comparison between the two.
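
        As a rough sketch of that locality difference (the topology and names below are my assumptions, not taken from either codebase): in the Dynomite model every rack holds a full copy of the key space, one shard per node, while Riak just places a key’s N replicas on consecutive vnodes of a single ring with no rack awareness.

        ```python
        # Dynomite-style: each rack is a full replica; a key maps to one node per rack.
        dynomite_racks = {
            "rack-a": ["a1", "a2", "a3"],  # together these three hold every key
            "rack-b": ["b1", "b2", "b3"],
        }

        def dynomite_replicas(shard):
            """One owning node per rack for a given shard of the key space."""
            return [nodes[shard] for nodes in dynomite_racks.values()]

        # Riak-style: one ring of vnodes, N consecutive owners, rack-agnostic.
        ring = ["a1", "b1", "a2", "b2", "a3", "b3"]

        def riak_replicas(start, n=3):
            return [ring[(start + i) % len(ring)] for i in range(n)]

        print(dynomite_replicas(1))  # ['a2', 'b2']: exactly one replica per rack
        print(riak_replicas(0))      # ['a1', 'b1', 'a2']: wherever the ring says
        ```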

        1.

          I’d also argue that it is significantly easier to build a replication layer on top of Riak than build a replication layer + Riak.

          What I find confusing about Dynomite is that I’m not sure what parts of Dynamo they actually copied.

          The post states:

          > Dynomite is the Dynamo layer with additional support for pluggable datastore proxy, with an effort to preserve the native datastore protocol as much as possible.

          However:

          > Each node in a rack has a unique token, which helps to identify the dataset it owns.

          But it doesn’t actually talk about how that unique token is calculated. Is it a consistent hash? And what is this ‘ownership’ concept?
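
          For what it’s worth, the usual Dynamo reading of “token” (an assumption on my part; the post doesn’t say) is a point on a consistent-hash ring: a node owns every key whose hash falls between its token and the next one.

          ```python
          import bisect
          import hashlib

          # Hypothetical token assignment; real tokens would come from configuration.
          tokens = {0: "node1", 2**126: "node2", 2**127: "node3"}  # token -> node
          ring = sorted(tokens)

          def owner(key):
              khash = int(hashlib.md5(key.encode()).hexdigest(), 16)
              # The owner is the node with the largest token <= the key's hash;
              # token 0 guarantees such a node always exists, so no wrap-around here.
              i = bisect.bisect_right(ring, khash) - 1
              return tokens[ring[i]]

          print(owner("foo"))  # deterministic: every node routes "foo" the same way
          ```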

          I find the lack of discussion of consistency and how it tracks causality also concerning. If you’re building a distributed system, these should be the first things you tell the world, IMO.
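
          For causality, the classic Dynamo answer is vector clocks; whether Dynomite does anything like this I can’t tell from the post, but a minimal sketch of the idea:

          ```python
          # Minimal vector clocks: one counter per node, bumped on each local write.
          def bump(clock, node):
              c = dict(clock)
              c[node] = c.get(node, 0) + 1
              return c

          def descends(a, b):
              """True if version a has seen everything in version b."""
              return all(a.get(n, 0) >= v for n, v in b.items())

          v1 = bump({}, "n1")    # first write, on n1
          v2 = bump(v1, "n2")    # later write on n2 that saw v1
          v3 = bump(v1, "n3")    # concurrent write on n3 that also saw only v1

          print(descends(v2, v1))                    # True: v2 supersedes v1
          print(descends(v2, v3), descends(v3, v2))  # False False: siblings to reconcile
          ```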

          1.

            > I find the lack of discussion of consistency and how it tracks causality also concerning.

            Precisely the point of my original comment. The early adopters will no doubt tell us eventually. ;)