1. 4

  2. 6

    From a pristine db:

    > db[1] = 1; db[2] = 2; db
    {u'1': 1, u'2': 2}
    > db.undo(); db
    {u'1': 1}
    > db[2] = 3; db; db.redo(); db
    {u'1': 1, u'2': 3}
    > db.undo(); db
    {u'2': 3} #???
    > db.undo(); db.redo(); db.redo(); db
    {u'1': 1, u'2': 3}
    > db.redo(); db
    {u'1': 1, u'2': 2} #?!
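
    A minimal sketch (not the project's code; names are illustrative) of how an undoable dict can keep undo/redo consistent: each write snapshots the old value, and a fresh write clears the redo stack so a later redo can't resurrect a superseded state like the `{u'1': 1, u'2': 2}` above.

    ```python
    class UndoableDict:
        """Toy dict with undo/redo; a new write invalidates redo history."""
        _MISSING = object()  # sentinel: key did not exist before the write

        def __init__(self):
            self._data = {}
            self._undo = []  # stack of (key, previous value or _MISSING)
            self._redo = []

        def __setitem__(self, key, value):
            self._undo.append((key, self._data.get(key, self._MISSING)))
            self._redo.clear()  # fresh writes discard any redoable states
            self._data[key] = value

        def _restore(self, stack, other):
            key, old = stack.pop()
            other.append((key, self._data.get(key, self._MISSING)))
            if old is self._MISSING:
                del self._data[key]
            else:
                self._data[key] = old

        def undo(self):
            if self._undo:
                self._restore(self._undo, self._redo)

        def redo(self):
            if self._redo:
                self._restore(self._redo, self._undo)
    ```

    With this scheme, `db[1] = 1; db[2] = 2; db.undo(); db[2] = 3; db.undo()` leaves `{1: 1}`, and a subsequent `redo()` restores `{1: 1, 2: 3}` rather than replaying a stale state.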


    • Client undo/redos will interfere with each other (one client undoes, another redoes, and so on).
    • Since the server switches to listening on a different port when you create a ‘lock’, a single client can request a lock and then do nothing, which prevents anyone else from querying.

    I think this could be really interesting as a teaching tool: show how even a db that ‘looks’ okay can have subtle errors, and that dealing with concurrency is a really hard problem.

    1. 1

      Thanks for noticing this! It’s a bug in undoable that is now fixed.

      Your other two points are also noted in the README [1][2], but more importantly I’m interested in discussing solutions.

      For example, I’ve avoided using any form of timing in this project, preferring to leave it to libraries (which can independently update their heuristic definition of a lost connection). But a simple way to avoid locking up on a dead client is to set a timeout when waiting for requests from it.
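
      An illustrative sketch (not the project's code; `serve_one` and the echo handling are assumptions) of that timeout idea, using Python's per-socket timeout so a client that goes silent after acquiring a lock cannot block the server forever:

      ```python
      import socket

      def serve_one(conn, timeout_seconds=5.0):
          """Handle one client; drop it if it goes silent too long."""
          conn.settimeout(timeout_seconds)  # applies to recv/send on this socket
          try:
              while True:
                  request = conn.recv(4096)
                  if not request:
                      break  # client closed the connection cleanly
                  conn.sendall(request)  # placeholder: process the request here
          except socket.timeout:
              pass  # client went silent: drop it and release any lock it held
          finally:
              conn.close()
      ```

      The heuristic (how long is "too long") still lives in one tunable number, but the server regains control instead of blocking indefinitely on a dead peer.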

      More generally, I don’t think handling ill-behaved clients is such a clear-cut problem. What if the client has a bug that keeps requesting the same data in a loop?

      [1] “The client program has to be written so the order in which other accesses are treated are unimportant.” But I can see it may be ambiguous whether undo/redo count as accesses.

      [2] “No deadlock (dead client) checks are implemented.”