1. So what’s the backend database? Is it a “real” graph database (like Neo4J)?

   1. We actually tried Neo4J in an early version of Notion (before my time), but it was very slow for the kinds of access patterns Notion uses.

      We try to use very boring technology. Our source-of-truth data store is Postgres, with a Memcached cache on top. Most of our queries are “pointer chasing”: we follow a reference from one record in memory to fetch another record from the data store. To optimize recursive pointer-chasing queries, we cache the set of visited pointers in Memcached, as in the sketch below.
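
      A minimal sketch of that cache-aside, pointer-chasing pattern, assuming the node-postgres and memcached packages and a hypothetical `blocks` table where each row lists its children by id (none of these names are Notion’s actual schema):

      ```typescript
      import { Client } from "pg";        // node-postgres
      import Memcached from "memcached";  // memcached client

      const db = new Client();            // Postgres: the source of truth
      const cache = new Memcached("localhost:11211");
      // (assume db.connect() has been awaited at startup)

      // Hypothetical row shape: a block points at its children by id.
      interface Block { id: string; type: string; content: string[] }

      const cacheGet = (key: string) =>
        new Promise<Block | undefined>((res, rej) =>
          cache.get(key, (err: any, data: any) => (err ? rej(err) : res(data))));

      const cacheSet = (key: string, value: Block) =>
        new Promise<void>((res, rej) =>
          cache.set(key, value, 60, (err: any) => (err ? rej(err) : res())));

      // One pointer chase: check Memcached first, fall back to Postgres.
      async function getBlock(id: string): Promise<Block> {
        const cached = await cacheGet(`block:${id}`);
        if (cached) return cached;
        const { rows } = await db.query("SELECT * FROM blocks WHERE id = $1", [id]);
        await cacheSet(`block:${id}`, rows[0]);
        return rows[0];
      }

      // Recursive descent: visit a block, then every block it points to.
      async function getSubtree(id: string, out: Block[] = []): Promise<Block[]> {
        const block = await getBlock(id);
        out.push(block);
        for (const childId of block.content) await getSubtree(childId, out);
        return out;
      }
      ```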

      1. If you can view your data structures as trees rather than graphs, Postgres has the ltree module. I use it for taxonomies and it works very well (our performance loads are not very big, so I can’t comment on very large deployments). Some time ago I traced inner joins between regular tables with JSONB fields and our taxonomy trees, and those work well too; Postgres uses the indexes as appropriate.
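
         For reference, a small ltree sketch along those lines (a hypothetical `taxonomy` table, reusing the `db` client from the sketch above):

         ```typescript
         async function setupTaxonomy() {
           await db.query(`CREATE EXTENSION IF NOT EXISTS ltree`);
           // Each row stores its position in the tree as a materialized path.
           await db.query(`CREATE TABLE IF NOT EXISTS taxonomy (
             id serial PRIMARY KEY,
             path ltree NOT NULL
           )`);
           // A GiST index lets ancestor/descendant queries use the index.
           await db.query(`CREATE INDEX IF NOT EXISTS taxonomy_path_gist
             ON taxonomy USING GIST (path)`);
         }

         // "<@" is ltree’s "is a descendant of": one indexed query per subtree.
         const subtree = (root: string) =>
           db.query(`SELECT id, path FROM taxonomy WHERE path <@ $1::ltree`, [root]);
         ```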

         1. I looked at ltree a few years ago when I joined Notion. When I first saw the DB access pattern I asked, “O(n) queries for a page?! What! This should be O(1)!” But with ltree, to move a parent node we’d need to rewrite the path on all of its descendants, so we’d get O(1) queries for page reads, but moving a block becomes O(n) rows updated, as the sketch below shows. That kind of write-amplification fanout isn’t worth the more efficient read path.
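
            Concretely, the move that causes the fanout looks something like this (same hypothetical `taxonomy` table as above):

            ```typescript
            // Move the subtree rooted at 'a.b' to live under 'x'. Every
            // descendant row gets its path prefix rewritten: O(n) updates.
            async function moveSubtree() {
              await db.query(`
                UPDATE taxonomy
                SET path = 'x.b' || subpath(path, nlevel('a.b'))
                WHERE path <@ 'a.b' AND path <> 'a.b'
              `);
              // Then move the subtree root itself.
              await db.query(`UPDATE taxonomy SET path = 'x.b' WHERE path = 'a.b'`);
            }
            ```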

            I think this kind of shock is common for relational database users coming from small-data apps to much bigger ones: when your data is big enough, you also have to give up JOINs and FK constraints. They’re just not worth the database resources at a certain scale.

            1. Yes, that makes sense. In my case the taxonomy trees are static data (changed rarely), so there are no writes into them, and they are negligible in size compared to the operational data that uses them.

               I can see that your system uses trees or graphs for operational data, so write performance is a key criterion.

         2. Heh, can’t fault you for choosing Postgres! I assume you have many graphs that aren’t particularly deep? From my understanding of what blocks are used for, it seems like you’d have graphs that are wider than they are deep, and … hundreds of nodes in total? A few thousand? Any idea what the average and largest number of nodes in a graph is?

            (I am assuming there’s one graph per document?)

            1. It’s more like a tree descending from the root of a workspace, and a workspace can have 1000+ users in it collaborating on shared pages or working in private. But pages are just a type of block, and like other blocks they can be nested infinitely. To render a page, we crawl recursively down from the “page” block, and we stop at page boundaries, which are just other page blocks lower in the tree. (There is a cursor system for doing this incrementally and prioritizing the blocks at the “beginning” of a page that the user needs to see first.) The sketch below shows the boundary-stopping crawl.
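
               A sketch of that crawl, reusing the hypothetical `getBlock` and `Block` from the earlier pointer-chasing example, and assuming `type === "page"` marks a page block:

               ```typescript
               // Crawl down from a page’s root block, but don’t descend into
               // child pages: those are fetched when the user navigates to them.
               async function renderPage(pageId: string): Promise<Block[]> {
                 const root = await getBlock(pageId);
                 const out: Block[] = [root];
                 const walk = async (ids: string[]): Promise<void> => {
                   for (const id of ids) {
                     const block = await getBlock(id);
                     out.push(block);
                     // Stop at page boundaries; recurse into everything else.
                     if (block.type !== "page") await walk(block.content);
                   }
                 };
                 await walk(root.content);
                 return out;
               }
               ```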

               So spaces are quite a broad and deep graph, but the scope is about what you estimate for an individual page. I don’t have numbers on hand for depth or block count within a space.

               1. Thanks, it’s been really interesting to learn about this :)

            2. Did you experiment with key-value stores like LMDB, LevelDB, or RocksDB (or higher-level systems such as Riak or MyRocks)? I was actually looking to experiment with graphs in the database and was wondering how these more specialized engines would perform. I suppose that from an operational point of view, PostgreSQL is very nice when you already use it.
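
               For what it’s worth, the pointer-chasing shape above maps directly onto a KV engine; here’s a sketch against a generic get/put interface (an in-memory stand-in rather than a real LMDB/RocksDB binding, reusing the hypothetical `Block` type):

               ```typescript
               // The minimal interface LMDB/LevelDB/RocksDB bindings all offer.
               interface KV {
                 get(key: string): Promise<string | undefined>;
                 put(key: string, value: string): Promise<void>;
               }

               // In-memory stand-in so the sketch runs without a native binding.
               const store = new Map<string, string>();
               const kv: KV = {
                 get: async (k) => store.get(k),
                 put: async (k, v) => void store.set(k, v),
               };

               // Same recursive descent as the Postgres version: one key per
               // block, children referenced by id.
               async function kvGetSubtree(id: string, out: Block[] = []): Promise<Block[]> {
                 const raw = await kv.get(`block:${id}`);
                 if (raw === undefined) return out;
                 const block: Block = JSON.parse(raw);
                 out.push(block);
                 for (const child of block.content) await kvGetSubtree(child, out);
                 return out;
               }
               ```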

         3. How are concurrent edits to the same record handled? For example, two people editing the same item’s text, or two people adding an item to a block?