It’s interesting that figure 3 shows a single Kafka consumer writing to the cache. This is indeed a good way of ensuring sequential consistency, because there will never be two concurrent writes to the cache. But if you can write to your database faster than you can consume messages and fan them out to caches, you either need to drop messages on the floor or throttle writes to the database.
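As a rough sketch of what that single-writer loop might look like (the topic name `db-changes` and the in-memory map standing in for the cache are my assumptions, not from the article):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class CacheUpdater {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "cache-updater");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Stand-in for the real cache (Redis, Memcached, ...) in this sketch.
        Map<String, String> cache = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("db-changes"));
            while (true) {
                // A single consumer applies updates strictly in order, so cache
                // writes never race each other -- but throughput is capped by this loop.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    cache.put(record.key(), record.value());
                }
            }
        }
    }
}
```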

You could solve this by partitioning the messages on some application-level key, which is a common idiom with Kafka. I’m not so familiar with how replication works in MySQL/Postgres. Could you achieve something similar there?
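To illustrate what I mean, here is a minimal sketch of keyed partitioning on the producer side (the topic name and the `user:<id>` key scheme are placeholders I made up). Keying every update for a row by its id routes all of them to the same partition, so per-key ordering survives even if you run one consumer, and one cache writer, per partition:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ChangePublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Both updates share the key "user:42", so they land on the same
            // partition and are consumed in the order they were produced.
            producer.send(new ProducerRecord<>("db-changes", "user:42", "{\"name\":\"Alice\"}"));
            producer.send(new ProducerRecord<>("db-changes", "user:42", "{\"name\":\"Alice B.\"}"));
        }
    }
}
```

With that in place you can scale the consuming side to one instance per partition (all in the same consumer group) and still never have two concurrent writes for the same key.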