Hi Lobste.rs,
I submitted the homepage for Replicache earlier this year, when it was just a landing page:
https://lobste.rs/s/bnifoa/easily_add_offline_first_any_application
The commentary I received at the time was basically “interesting claims, but not much to see yet”.
I’m posting again to share that Replicache is now available in beta for web applications.
We are licensing it under the Business Source License (à la CockroachDB), and you can peruse the source here: https://github.com/rocicorp/repc.
The detailed design doc is here: https://github.com/rocicorp/replicache/blob/master/design.md.
For more information, see https://replicache.dev.
Or to just try it out right now, see https://js.replicache.dev.
Thanks!
Hi, I work at Notion, a collaborative note-taking and project management company. We use many similar ideas.
It’s a very interesting space to look at because offline-first and collaboration are now critical table-stakes features, and many of the vendors out there right now are quite rudimentary. For example, Firebase seems like a joke due to scale limits, and Firestore doubly so because of the lack of serious tooling. It’s heartening to see something with a similar overall model to our system; hopefully it means we’re both on the right track.
Thank you for the substantive reply, Jake.
I spoke with Chet Corcos and a few others (maybe you? if so, apologies for forgetting) at Notion early on in the development of Replicache about the system used there. I am not sure if it has changed since then. It was encouraging then as now to find that Replicache is basically a generalization of what you are doing.
A few thoughts:
It is possible to have the client view return a “coarse diff” to the diff server, rather than a full snapshot. The application can progressively increase the granularity of diff it returns to the diff server as it wants to trade complexity for performance. In an application like Notion, a nice place to draw boundaries would be the document level: when a document is updated, return a diff that contains the entire state of that document, but no other documents.
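To make that concrete, here is a minimal sketch of what a document-granularity coarse diff could look like, assuming the customer server keeps a monotonically increasing version per document and tombstones for deletions. The names and shapes are illustrative, not part of Replicache’s actual protocol.

```typescript
// Illustrative only: a document-level coarse diff, assuming each document
// carries a monotonically increasing version and deletions leave tombstones.

type Document = { id: string; version: number; content: unknown };

type CoarseDiff = {
  changed: Document[];   // full state of every document touched since `since`
  deletedIDs: string[];  // documents removed since `since`
};

function coarseDiff(
  docs: Map<string, Document>,
  tombstones: Map<string, number>, // doc ID -> version at which it was deleted
  since: number,
): CoarseDiff {
  const changed = [...docs.values()].filter((d) => d.version > since);
  const deletedIDs = [...tombstones]
    .filter(([, deletedAt]) => deletedAt > since)
    .map(([id]) => id);
  return { changed, deletedIDs };
}
```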
I agree that the subscribe-to-query at the backend is the dream that really makes a design like this complete. We think of the diff server as a sort of sledgehammer that makes the overall design of Replicache possible for many customers today, without dramatic server-side rearchitecture. However, if you have a more principled way to get those diffs, that is better. I’m hopeful that databases like FaunaDB (or maybe Materialize) will enable this functionality over time, and Replicache can become a purely client-side technology.
You’re totally right about coarse-grained sync; we’re discussing it as our next step.
I think Cloudflare Durable Objects are also very interesting here, although it remains to be seen what the practical limits are.
Hi! In what has apparently become a Notion party, I work with Jake :)
I come from the mobile side (Android) but have spent my career being sync-adjacent. It’s wonderful to see someone treating the mobile <-> server relationship as Just Another Distributed System. Replicache incorporates all of the best things I’ve seen from a half dozen attempts at sync: git-style branched state, pending transactions cleared as part of sync, optimistic updates. “transactions are guaranteed to be applied atomically, in the same order, across all clients” is new to me for client sync and very cool!
I’m actually in the middle of doing the per-document coarse diffing you were talking about! Some super rough thoughts:
…sync calls until nothing is left, which feels inelegant. Two questions:
Thanks. I was involved in or nearby many attempts at this at Google over the years, so have been thinking about it on and off for a long time.
I don’t quite follow the spectrum you are trying to set up, but it seems interesting. Can you expand? Note that Replicache does not actually send all data in every sync to clients. Only deltas are sent to the clients. The “diff server” does fetch a full snapshot from the customer server, but (a) that is server-to-server communication, and (b) we hope to expand this in the future to allow the customer server to return coarse-grained diffs. At the limit, the server can just return diffs itself and you don’t need the diff server.
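For illustration, the delta the diff server computes is conceptually just a comparison of the previous snapshot it saw for a client against the new one. The sketch below is a simplification of that idea, not repc’s actual implementation.

```typescript
// Conceptual sketch of the diff-server step: keep only the keys that
// changed between two full snapshots. Not repc's actual implementation.

type Snapshot = Map<string, string>; // key -> JSON-encoded value

type Delta = {
  put: Record<string, string>; // keys added or changed, with new values
  del: string[];               // keys removed
};

function computeDelta(prev: Snapshot, next: Snapshot): Delta {
  const put: Record<string, string> = {};
  const del: string[] = [];
  for (const [key, value] of next) {
    if (prev.get(key) !== value) put[key] = value;
  }
  for (const key of prev.keys()) {
    if (!next.has(key)) del.push(key);
  }
  return { put, del };
}
```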
Here are some of the invariants that Replicache maintains; perhaps they will be useful for you:
I actually think in some ways it is easier to make the more general thing that allows transactions to be arbitrary. I’m not sure about the details of Notion, but in many attempts at this, people try to restrict the data model to either things that always merge, or operations that know how to undo themselves. In both of these approaches there’s an ongoing tax to working with the system – you have to learn how to use these specialized data structures, and/or ensure that you keep the undo operation working properly.
Replicache instead takes on a big up-front cost (a versioned transactional cache that can be cleanly unwound) in exchange for less ongoing complexity: any mutation works and can be guaranteed to revert perfectly. You don’t have to think so much about merge and sync on a day-to-day basis.
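As a rough sketch of what that looks like in practice: a mutation is just an ordinary function written against a key/value transaction, and the same logic runs on the client and, in whatever form your backend prefers, on the server. The interface below is illustrative, not Replicache’s exact API.

```typescript
// Illustrative interface, not Replicache's exact API: a mutation is an
// ordinary transaction against a key/value store, with no CRDTs and no
// hand-written undo logic.

interface WriteTransaction {
  get(key: string): Promise<unknown | undefined>;
  put(key: string, value: unknown): Promise<void>;
}

// An arbitrary mutation, written defensively because it will also be
// replayed against newer canonical state during sync.
async function markTodoDone(tx: WriteTransaction, args: { id: string }): Promise<void> {
  const todo = (await tx.get(`todo/${args.id}`)) as { done?: boolean } | undefined;
  if (!todo) return; // the todo may have been deleted by another client
  await tx.put(`todo/${args.id}`, { ...todo, done: true });
}
```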
I don’t follow this. Are you talking about in your system or in Replicache? I think in any model there is always the case that the user adds more data while sync is happening so you’ll have to go again. It sounds like you’re talking about something else though.
I find the dag-y / git-like model of time very easy to reason about, and the performance of massive distributed databases is not needed in this application. So no, I can’t think of any constraint that would be useful to relax, on the client side.
This would greatly help, but it’s more difficult than it seems. Versioned databases already exist, but a sync system doesn’t need the database to be versioned; it needs queries to be versioned. Clients don’t have a copy of the full db, they have a projection of it. Typically an extremely reduced projection. (This is part of what hampered Couchbase adoption.)
What we need to send to clients is a diff over that projection. To make matters worse, most applications do work at the application layer “on top of” queries. The thing that needs to be versioned is not the db, or even a query result; it’s the data that gets sent to the client.
I think the work that materialize.io is doing is very relevant here, but they are targeting huge backend datastores.
This is why I like the diff server approach. Yeah, it’s inefficient, but it’s simple and guaranteed to work. If you do it on a document-by-document level for, e.g., Notion, you’re basically sending the complete content (modulo blobs) of a single document between servers on the same network every time you sync that document, which doesn’t seem terrible to me.
But yes, for absolutely optimal performance, you’d want to be able to subscribe to a query on the server, and get deltas directly from it, without computing a diff.
TL;DR: The conflict resolution algorithm seems to be app-specific code that runs in the database. I didn’t get more details; the documentation is unclear.
Pricing is ridiculous.
What would you price this at? It looks high for my company’s current scale [and at this point we want to own the whole stack anyways], but an earlier Notion might have found this offering attractive.
I realize you’re being derisive, but in a sense, yeah:
The fact that conflict resolution is handled by running normal, arbitrary functions serially against the database on client and server is the point. Other systems either restrict you to specialized data structures that can always merge (e.g., Realm), or force you to write out-of-band conflict resolution code that is difficult to reason about (e.g., Couchbase). In Replicache you use a normal transactional database on the server, and a plain old key/value store on the client. You modify these stores by writing code that feels basically the same as what you’d write if you weren’t in an offline-first system. It’s a feature.
===
TL;DR: Replicache is a versioned cache you embed on the client side. Conflict resolution happens by forking the cache and replaying transactions against newer versions.
When a transaction commits on the client, Replicache adds a new entry to the history, and the entry is annotated with the name of the mutation and its arguments (as JSON).
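Conceptually, each pending entry in that history carries something like the shape below; the shape is illustrative, and the design doc has the real structure.

```typescript
// Illustrative shape of a pending mutation recorded in the client-side
// history; see the design doc for the real structure.
type PendingMutation = {
  id: number;    // per-client, monotonically increasing mutation ID
  name: string;  // e.g. "markTodoDone"
  args: unknown; // the mutation's arguments, serialized as JSON
};
```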
During sync, Replicache forks the cache and sends pending requests to your server, where they are handled basically like normal REST requests by your backend. You have to defensively handle mutations server-side (but you were probably already doing that!). Replicache then fetches the latest canonical state of the data from your server, computes a delta from the fork point, applies the delta, and replays any still-pending mutations atop the new canonical state. Then the fork is atomically revealed, the UI re-renders, and the no-longer-needed data is collected.
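Squashed into pseudocode, the flow looks roughly like this. Every interface here is an assumption made for illustration; the real protocol lives in repc and the design doc.

```typescript
// Illustrative sketch of the sync flow; the interfaces are assumptions,
// not Replicache's actual API.

type Mutation = { id: number; name: string; args: unknown };
type Delta = { put: Record<string, unknown>; del: string[] };

interface Fork {
  baseVersion: number;       // version of the canonical state this fork started from
  apply(delta: Delta): void;
  replay(m: Mutation): void;
  reveal(): void;            // atomically swap the fork in; the UI re-renders from here
}

interface Cache {
  fork(): Fork;
  pendingMutations(): Mutation[];
}

interface Server {
  push(mutations: Mutation[]): Promise<void>;
  pull(since: number): Promise<{ delta: Delta; lastMutationID: number }>;
}

async function sync(cache: Cache, server: Server): Promise<void> {
  // 1. Send pending mutations upstream; the backend handles them like
  //    ordinary, defensively written REST requests.
  await server.push(cache.pendingMutations());

  // 2. Fork from the last canonical state and pull the delta since then.
  const fork = cache.fork();
  const { delta, lastMutationID } = await server.pull(fork.baseVersion);
  fork.apply(delta);

  // 3. Replay any mutations the server has not yet confirmed on top of
  //    the new canonical state.
  for (const m of cache.pendingMutations()) {
    if (m.id > lastMutationID) fork.replay(m);
  }

  // 4. Atomically reveal the fork; unreferenced history is collected later.
  fork.reveal();
}
```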
It is not rocket science, but it is a very practical approach to this problem informed by years of experience.
As for the price, it’s weird. Teams that have struggled with this have basically no problem at all with the price, if anything they seem to think it’s too low.