1. I wish they’d take migrations more seriously than they currently do.

    The only way to perform an arbitrary migration of data right now is to use my library Brambling.

    That’s a bit ridiculous, since it took me only a day or two to write, and they could surely do a better job: they have access to the transactor and peer internals, whereas I do not.

    Annoyingly, the only way to get a zero-downtime migration is to put a middleware queue in front of your transactions that can pause and redirect tx flow, as sketched below.
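
    A minimal sketch of what such a middleware queue could look like, assuming core.async; the queue, the `paused?` flag, the db URIs, and `switch-over!` are all hypothetical names for illustration, not part of any real library:

    ```clojure
    (require '[clojure.core.async :as a]
             '[datomic.api :as d])

    ;; All application writes go through this queue instead of calling
    ;; d/transact directly, so tx flow can be paused and redirected.
    (def tx-queue (a/chan 1024))

    ;; The live connection sits in an atom so a migration can swap it out.
    (def current-conn (atom (d/connect "datomic:dev://localhost:4334/app")))

    (def paused? (atom false))

    (defn start-tx-consumer!
      "Forward queued transactions to whatever connection is current,
      holding them while a migration has the queue paused."
      []
      (a/thread
        (loop []
          (when-let [tx-data (a/<!! tx-queue)]
            (while @paused?
              (Thread/sleep 100))          ; wait out the migration
            @(d/transact @current-conn tx-data)
            (recur)))))

    (defn switch-over!
      "Pause tx flow, run the data migration, then point writes at the
      migrated database and resume."
      [migrate-fn new-uri]
      (reset! paused? true)
      (migrate-fn)                          ; e.g. a Brambling migration
      (reset! current-conn (d/connect new-uri))
      (reset! paused? false))
    ```

    The application then writes with `(a/>!! tx-queue tx-data)` instead of transacting directly, which is what makes the pause-and-redirect possible.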

    1. I think that migrations are philosophically opposed to Datomic’s notion of immutable history. When we’ve done migrations, we’re usually doing one of a couple of things:

      1. Changing the index strategy. Here, you can just add or remove indices as needed (see the sketch after this list).
      2. Changing the data layout. If you are not changing your application’s API but you are using a different schema, then both schemas can coexist, and you can query old and new data using either DB rules or custom logic in the transactor. Alternatively, you can rebuild the old entities into new ones in the same db; however, given that you always need a dual-read layer in any case, this buys you only a little over not rebuilding. Luckily, you can use database filters and reified transactions (sketched below) to ensure that, if you are rebuilding, data doesn’t show up in both the old and new places.
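
      For the first case, a sketch of altering an attribute’s indexing via the peer API; `conn` and the `:user/email` attribute are hypothetical, and the explicit `:db.alter/_attribute` assertion shown is what older peer versions required for schema alterations:

      ```clojure
      (require '[datomic.api :as d])

      ;; Add an AVET index to an existing attribute.
      @(d/transact conn
         [{:db/id               :user/email
           :db/index            true
           :db.alter/_attribute :db.part/db}])

      ;; Removing the index later is the mirror image.
      @(d/transact conn
         [{:db/id               :user/email
           :db/index            false
           :db.alter/_attribute :db.part/db}])
      ```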

      I can see that Brambling lets you make a new db by applying certain transformations to an old db, preserving the transaction structure, but I think that the other approaches (a dual-read layer or coexisting old/new schemas) are more in line with the philosophy of Datomic.
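
      A sketch of the filter-plus-reified-transactions idea from point 2; the `:migration/rebuilt` attribute and `conn` are hypothetical, while `d/filter` and annotating the transaction entity via a `:db.part/tx` tempid are standard peer API:

      ```clojure
      (require '[datomic.api :as d])

      ;; One-time schema for the transaction annotation.
      @(d/transact conn
         [{:db/id                 (d/tempid :db.part/db)
           :db/ident              :migration/rebuilt
           :db/valueType          :db.type/boolean
           :db/cardinality        :db.cardinality/one
           :db.install/_attribute :db.part/db}])

      ;; When rebuilding an entity under the new schema, reify the
      ;; transaction: assert the marker on the tx entity itself.
      (defn rebuild-tx [new-entity-data]
        (conj new-entity-data
              {:db/id (d/tempid :db.part/tx) :migration/rebuilt true}))

      ;; A filtered view that hides datoms asserted by rebuild transactions,
      ;; so old-schema readers never see the same data in two places.
      (defn without-rebuilt [db]
        (d/filter db
                  (fn [db datom]
                    (not (:migration/rebuilt (d/entity db (:tx datom)))))))
      ```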

      1. > I think that migrations are philosophically opposed to Datomic’s notion of immutable history.

        Having talked to them, I’d say that’s an excuse, not a reason. They used to say the same thing about shifting an attribute’s cardinality from one to many; now they support that.
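
        For reference, that cardinality change is now an ordinary schema alteration, using the same mechanism as the index change sketched above (`conn` and `:user/phone` are hypothetical):

        ```clojure
        (require '[datomic.api :as d])

        ;; Alter an existing attribute from cardinality-one to
        ;; cardinality-many; the reverse direction is also supported,
        ;; provided no entity already has multiple values.
        @(d/transact conn
           [{:db/id               :user/phone
             :db/cardinality      :db.cardinality/many
             :db.alter/_attribute :db.part/db}])
        ```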