Just to share a little backstory on how this came about:
Back in 2019, I was doing some security work and had a lot of fun learning and earning bounties for exploits in, for example, OpenSSL, Fastmail, and Google Chrome. This adversarial approach stayed with me, so when it came time to implement TigerBeetle’s consensus protocol, we thought it would be really cool if people could learn consensus by breaking a real implementation, because it’s a different way in to the subject matter. We’re usually taught the defensive “blue team” way, but this would be learning consensus the adversarial “red team” way.
So this is like the distributed systems equivalent of a security bug bounty program. If you can find a correctness bug that violates strict serializability or leads to data loss, you can earn a bounty of up to $3,000.
Consensus protocols are notoriously difficult to get right, which already makes this an exciting target for a bug bounty program. But TigerBeetle also has a challenging storage fault model: the consensus protocol needs to survive scenarios where even a write to disk might silently be a no-op, or where writes (or reads) might be misdirected and written to (or read from) the wrong disk sector.
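To make that fault model concrete, here’s a minimal sketch of a sector-addressed disk that can silently drop or misdirect I/O. Everything here (names, fault rates, the `FaultyDisk` class itself) is made up for illustration; it is not TigerBeetle’s actual implementation:

```python
import random

SECTOR_SIZE = 512

class FaultyDisk:
    """Toy disk that occasionally no-ops or misdirects reads and writes.

    Illustrative sketch only -- not TigerBeetle's storage fault model.
    """

    def __init__(self, sector_count: int, seed: int, fault_rate: float = 0.01):
        self.sectors = [bytes(SECTOR_SIZE) for _ in range(sector_count)]
        self.rng = random.Random(seed)  # seeded, so faults are reproducible
        self.fault_rate = fault_rate

    def write(self, sector: int, data: bytes) -> None:
        assert len(data) == SECTOR_SIZE
        roll = self.rng.random()
        if roll < self.fault_rate:
            return  # fault: the write is silently a no-op
        if roll < 2 * self.fault_rate:
            # fault: misdirected write lands on the wrong sector
            sector = self.rng.randrange(len(self.sectors))
        self.sectors[sector] = data

    def read(self, sector: int) -> bytes:
        if self.rng.random() < self.fault_rate:
            # fault: misdirected read comes from the wrong sector
            sector = self.rng.randrange(len(self.sectors))
        return self.sectors[sector]
```

The key point is that none of these faults raise an error: the caller gets no signal that anything went wrong, which is exactly why the consensus protocol above it has to detect and recover from them.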
But to make it really realistic, our team is also providing some cool “red team” tools: for example, a deterministic fault-injection fuzzing simulator that checks every state transition against the protocol’s invariants, and can then replay anything interesting verbatim, over and over, so you can apply your skill to figure it out.
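The property that makes verbatim replay possible is worth spelling out: if every nondeterministic choice in the simulation flows through a single seeded PRNG, then re-running with the same seed reproduces the exact same event trace. Here’s a toy sketch of that idea (the event names and the trivial “state check” are invented for illustration, not taken from TigerBeetle’s simulator):

```python
import random

def run_simulation(seed: int, steps: int = 1000) -> list[str]:
    """Drive a toy system where all randomness comes from one seeded PRNG.

    Because the seed fully determines every choice, a failing run can be
    replayed exactly by re-running with the same seed.
    """
    rng = random.Random(seed)
    trace = []
    delivered = 0  # toy "state"; a real checker verifies protocol invariants
    for _ in range(steps):
        event = rng.choice(["deliver", "drop", "crash", "tick"])
        trace.append(event)
        if event == "deliver":
            delivered += 1
        # state check after every transition (trivially true here)
        assert delivered >= 0
    return trace
```

In a real deterministic simulator, a failing seed is the whole bug report: anyone can re-run it and watch the same cluster fail in the same way, step for step.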
No pressure! ;)