How do programs avoid consuming unbounded memory if you allow rewinds to arbitrary points in time? (I believe in FRP systems this is called a “time leak”, but don’t quote me on that.)
Looks like Terminal Phase takes periodic snapshots and puts them in a ring buffer:
https://gitlab.com/dustyweb/terminal-phase/-/blob/master/terminal-phase.rkt#L114
Yup! You got it :)
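For intuition, the pattern fits in a few lines of Racket. This is my own sketch, not the actual Terminal Phase code, and all the names here are made up:

```racket
#lang racket/base

;; A fixed-size vector of snapshots bounds memory: once the buffer is
;; full, each new snapshot overwrites the oldest one.
(define buffer-size 16)
(define snapshots (make-vector buffer-size #f))
(define next-slot 0)

(define (save-snapshot! snap)
  (vector-set! snapshots next-slot snap)
  (set! next-slot (modulo (add1 next-slot) buffer-size)))

;; Rewind n steps back: 0 is the most recent snapshot, and anything
;; older than buffer-size - 1 steps is simply gone.
(define (rewind n)
  (vector-ref snapshots (modulo (- next-slot 1 n) buffer-size)))
```

Memory stays constant at buffer-size snapshots, which is exactly why rewinds don’t cost unbounded memory: you can only rewind as far back as the buffer reaches.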
A colleague suggested that an even better approach would be “unlimited undo” spaced out over a curve of time: keep many snapshots close to the present, and thin out the older ones at progressively larger increments.
In general, the transactionality machinery does not yet store prior history unless you do something special to instrument it, as Terminal Phase does. Instead, transactionality is used for handling a single toplevel message to an event loop: an exception causes the interaction to roll back safely, and any listeners on a promise for that message are notified of the failure.
Nothing is stored in that case currently, but one of the things we are working on is our distributed debugging features. What we do want to keep in that case is the message that caused the exception and the state of the event loop’s transactional heap at that time, so the failure can be debugged against in isolation. But deciding how that’s enabled, how much history to store, and what the good defaults are… those are things we will need to figure out and explore through actual use.
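To make the rollback part concrete, here’s a rough sketch of per-message transactionality. This is not Goblins’ actual API; the boxed-heap representation and every name here are assumptions for illustration:

```racket
#lang racket/base

;; The event loop's heap lives in a box holding an immutable value; a
;; handler takes the current heap plus a message and returns a new heap.
;; On success we commit the new heap; on an exception nothing is
;; committed, and we keep the failing message plus the pre-state so it
;; could later be debugged in isolation.
(define (handle-message! heap-box msg handler)
  (define before (unbox heap-box)) ; state the transaction started from
  (with-handlers ([exn:fail?
                   (lambda (e)
                     ;; rollback is free: we never committed anything
                     (list 'failed msg before (exn-message e)))])
    (set-box! heap-box (handler before msg))
    'committed))
```

The nice property is that rolling back costs nothing: since the heap is only replaced on success, a failed transaction just drops its tentative state.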
@washort gave a name for this sort of exponentially-decaying data structure; it’s called an inigo. The underlying pattern that you’re describing has also been called exponential decay.
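A quick sketch of that retention policy, purely illustrative: keep every recent snapshot, then thin older ones so the spacing between kept snapshots doubles as their age doubles.

```racket
#lang racket/base

;; Keep all of the 8 most recent snapshots; for ages 8..15 keep every
;; 2nd, for 16..31 every 4th, and so on, doubling the gap per octave.
(define (keep? age)
  (or (< age 8)
      (zero? (modulo age (expt 2 (- (integer-length age) 3))))))

;; history is a list of (age . snapshot) pairs
(define (thin-history history)
  (filter (lambda (entry) (keep? (car entry))) history))
```

Storage then grows logarithmically rather than linearly: a history of N steps retains only about 8 + 4·log2(N/8) snapshots, so even a million steps back costs under a hundred retained snapshots.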
Generally you don’t need that much memory if you use de-duplication well, and you can save old transactions to disk to buy yourself a couple of years of headroom. Some interesting conclusions have come out of eidetic systems research, for example that you can store several years of desktop usage history on a fairly cheap hard drive with low overhead. I’m not sure how well that would scale to distributed systems, but there is some precedent suggesting it is at least somewhat possible.
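As a toy illustration of the de-duplication point (not from any real system): store each snapshot under a hash of its serialized form, so any number of transactions that share a state cost one stored copy.

```racket
#lang racket/base

(require file/sha1)

(define store (make-hash)) ; content hash -> serialized snapshot

;; Serialize the snapshot, key it by its SHA-1, and store it only if
;; that exact state has never been seen before.
(define (put-snapshot! snap)
  (define payload (string->bytes/utf-8 (format "~s" snap)))
  (define key (sha1 (open-input-bytes payload)))
  (hash-ref! store key payload)
  key)

(define (get-snapshot key)
  (read (open-input-bytes (hash-ref store key))))
```

Since many snapshots of a mostly-idle program are identical, the marginal cost of keeping them around can be tiny, and spilling the store to disk is straightforward.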