It’s an interesting idea, but in my opinion it reaches a little in service of bolstering the AP story. The observable local real world isn’t eventually consistent, and that’s where we spend all of our time and form all of our intuitions. Even when we watch a far-off event on TV, we perceive it as perfectly contemporaneous and admit zero possibility that the experience will be retroactively altered to include updated distant information. So the argument strikes me as unconvincing, and I still feel like AP is a great idea waiting for an actual use case.
Personal anecdotal example of correspondingly low value: having dealt with global leaderboards myself for a certain (in)famous, well-trafficked multiplayer shooter, I was very interested to read an article on using AP and CRDTs with leaderboards (https://christophermeiklejohn.com/lasp/erlang/2015/10/17/leaderboard.html). But the benefit of AP was still a mystery to me after reading it: we were able to keep the leaderboards for even this game, which sees more traffic than all but possibly two mobile games (and maybe all of them), in a single centralized SQL instance with a few read replicas. A handful of very high-update-rate stats went into a NoSQL store. The INSERT, SELECT, and UPDATE statements for both stores were the predictable one-liners, and the data didn’t have to be specially structured. None of our availability problems over the course of several years had anything to do with network partitioning between members of the cluster. The architecture, and the problem-solving around it, were so standard that I could get junior developers up to speed in hours. Comparing and contrasting that with Lasp, even as an Erlang fan, …
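For concreteness, the “predictable one-liners” might look something like the sketch below. The table and column names are hypothetical, and SQLite stands in here for the centralized SQL instance; the point is only that a conventional leaderboard needs nothing more exotic than a keyed upsert and an ordered read.

```python
import sqlite3

# In-memory SQLite as a stand-in for the centralized SQL instance.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE leaderboard (player TEXT PRIMARY KEY, score INTEGER)")

# Insert/update: one upsert per score report, keeping the best score seen.
upsert = (
    "INSERT INTO leaderboard (player, score) VALUES (?, ?) "
    "ON CONFLICT(player) DO UPDATE SET score = MAX(score, excluded.score)"
)
for row in [("alice", 100), ("bob", 120), ("alice", 90)]:
    db.execute(upsert, row)

# Select: the top-N page is a single ordered read.
top = db.execute(
    "SELECT player, score FROM leaderboard ORDER BY score DESC LIMIT 10"
).fetchall()
```

The same two statements, pointed at the primary for writes and the read replicas for the top-N query, cover the whole access pattern.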
The observable local real world isn’t eventually consistent, and that’s where we spend all of our time and form all of our intuitions.
I don’t think this is a true statement. Knowledge is a function of perception, and is not immediate unless we’re the agent. Reasoning about truth external to us is bound by the laws of math and physics. For example, if a star goes supernova, I will eventually receive that update, but someone closer to the event will undoubtedly see it first. We can compute, based on relative distance, when the event occurred, but it might take some time to form consensus.
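A toy calculation of that point (the distances are made up): two observers at different distances receive the “update” at different times, yet each can recover the same event time once their distance is known.

```python
C = 299_792_458  # speed of light, m/s

def arrival_time(event_time, distance_m):
    # When an observer at distance_m first learns of the event.
    return event_time + distance_m / C

event = 0.0                           # the supernova happens at t = 0
near = arrival_time(event, 1e12)      # sees it roughly 3,336 s later
far = arrival_time(event, 1e13)       # sees it roughly 33,356 s later

# Each observer subtracts their own light delay and recovers the
# same event time, even though they observed it at different times.
near_estimate = near - 1e12 / C
far_estimate = far - 1e13 / C
```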
I agree that it’s only a true statement insofar as I included the word “local”. I drop a hammer; the hammer is on the ground simultaneously for everyone in the room. There’s no possibility that a larger quorum of people are going to barge in and declare that the hammer did not in fact land on the floor, because it was replaced by a screwdriver before it could, and so now we should all amend our experiential beliefs to account for this new factual evidence. I’ll agree with you as far as cosmological events over interplanetary distances, but the market for those databases may not be very big at the moment.
But the benefit of AP was still a mystery to me after reading it: we were able to keep the leaderboards for even this game, which sees more traffic than all but possibly two mobile games (and maybe all of them), in a single centralized SQL instance with a few read replicas.
The benefit of the CRDT-based leaderboard, using Lasp, is that it allows peer-to-peer synchronization without coordinating with a central MySQL instance: the guarantee is that every member of the system converges to the same state, without the risk of message ordering introducing nondeterminism.
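To make the convergence claim concrete, here is a minimal state-based CRDT leaderboard sketch (an illustration of the general technique, not the actual Lasp API): each player’s score is a max-register, and replicas merge entry-wise with max. Because merge is commutative, associative, and idempotent, every replica converges to the same state no matter what order the updates arrive in.

```python
class LeaderboardCRDT:
    """A grow-only map of player -> max score; a state-based CRDT."""

    def __init__(self):
        self.scores = {}

    def record(self, player, score):
        # Local update: keep the maximum score observed for the player.
        self.scores[player] = max(self.scores.get(player, 0), score)

    def merge(self, other):
        # State-based merge: entry-wise max over both replicas.
        # Commutative, associative, and idempotent, so delivery
        # order and duplicate deliveries cannot cause divergence.
        for player, score in other.scores.items():
            self.record(player, score)

    def top(self, n):
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]
```

Two replicas can accept writes independently while partitioned and exchange full states whenever connectivity allows; after a merge in either direction, both hold identical boards.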
Doh, I wasn’t clear, my apologies. I understood the paper, and Lasp looks pretty nifty. What I didn’t understand was the motivation or the payoff, given the triviality of the problem space, the low cost-to-implement and cost-to-operate of the “traditional” solution, and the added complexity of the Lasp-based solution. What factors, if any, got a lot better and made the cost-benefit analysis worth it to do it this way? Or was this pure thought experiment?