Can anyone explain the benefits of highly accurate timekeeping in a datacenter since the article mostly glossed over it? I’m interested in finding out why Facebook wanted to make this.
Highly accurate time makes distributed systems easier to build. You can now (better) rely on time as a “happens before” relation instead of inventing a scheme like vector clocks. See the Spanner paper from Google, which describes the TrueTime API. Of course, even with TrueTime you end up with a window of time in which the event happened, because clocks can’t be synchronized precisely enough to rely on a single timestamp alone. I am sure others could add a ton more to this…
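To make the “window of time” point concrete, here is a minimal Python sketch of TrueTime-style interval reasoning. The names (TTInterval, definitely_before) are illustrative, not Spanner’s actual API:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TTInterval:
        # A timestamp with bounded uncertainty: the true time lies
        # somewhere in [earliest, latest] (seconds since the epoch).
        earliest: float
        latest: float

    def definitely_before(a: TTInterval, b: TTInterval) -> bool:
        # True only when a's entire uncertainty window precedes b's,
        # i.e. a happened first no matter what the actual clock error was.
        return a.latest < b.earliest

If the two windows overlap, no amount of cleverness recovers the order from timestamps alone; you either need a causal scheme like vector clocks or tighter clocks.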
Two events are recorded in two datacenters one millisecond apart. How do you tell which one happened first? What if the datacenter clocks have a five millisecond error bar?
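Plugging those numbers into the sketch above: with a ±5 ms error bar the two uncertainty windows overlap, so the order is simply unknowable, while tighter clocks make the same 1 ms gap decisive.

    # Clocks with a ±5 ms error bar; event B is recorded 1 ms after event A.
    a = TTInterval(earliest=0.000 - 0.005, latest=0.000 + 0.005)
    b = TTInterval(earliest=0.001 - 0.005, latest=0.001 + 0.005)
    print(definitely_before(a, b))   # False: the windows overlap

    # Shrink the error bar to ±0.1 ms and the 1 ms gap becomes decisive.
    a = TTInterval(earliest=0.000 - 0.0001, latest=0.000 + 0.0001)
    b = TTInterval(earliest=0.001 - 0.0001, latest=0.001 + 0.0001)
    print(definitely_before(a, b))   # True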
More accurate timekeeping enables more advanced infrastructure management across our data centers, as well as faster performance of distributed databases.
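One concrete mechanism behind the “faster distributed databases” claim: in a Spanner-style design, a write must wait out the clock uncertainty before acknowledging (“commit wait”), so the added latency is roughly twice the error bar and shrinks directly with better sync. A hedged sketch reusing TTInterval from above; now_interval and commit are made-up names for illustration:

    import time

    def now_interval(uncertainty: float) -> TTInterval:
        # Hypothetical clock read: the true time is within
        # +/- uncertainty seconds of time.time().
        t = time.time()
        return TTInterval(earliest=t - uncertainty, latest=t + uncertainty)

    def commit(uncertainty: float) -> TTInterval:
        # Assign a commit timestamp, then wait until that timestamp is
        # guaranteed to be in the past before acknowledging ("commit wait").
        # The added latency is roughly 2x the clock uncertainty, which is
        # why tighter sync translates directly into faster commits.
        ts = now_interval(uncertainty)
        while now_interval(uncertainty).earliest <= ts.latest:
            time.sleep(0.0001)
        return ts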
I know this is wrong, because I don’t need it in any way at all, but… I want one.