Just yesterday I was reading The byte order fallacy, which seems like it could apply to dates and times as well. It shouldn’t matter whether the server is running UTC or not. Instead, any software running on it that communicates with other processes either needs to include the timezone in its messages, or convert to/from UTC. Assuming UTC is like assuming everything is little-endian and saying “don’t plug any big-endian machines into this network”.
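To make that concrete, here’s a minimal sketch (the names and the UTC+8 example are my own, not from the comment) of the two options: carry the offset in the message, or normalize to UTC before sending:

```python
from datetime import datetime, timezone, timedelta

# A hypothetical event produced on a machine whose local clock is UTC+8.
local_tz = timezone(timedelta(hours=8))
event_time = datetime(2024, 3, 10, 9, 30, tzinfo=local_tz)

# Option 1: include the offset in the message (ISO 8601 keeps it).
msg_with_offset = event_time.isoformat()

# Option 2: convert to UTC before sending.
msg_utc = event_time.astimezone(timezone.utc).isoformat()

print(msg_with_offset)  # 2024-03-10T09:30:00+08:00
print(msg_utc)          # 2024-03-10T01:30:00+00:00
```

Either way, the receiver never has to guess what timezone the sender’s host was configured with.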
I don’t know what version of cron the author is using. The man page for “vixie” cron (the one true cron) says this in a section about DST:
If time has moved forward, those jobs that would have run in the interval
that has been skipped will be run immediately. Conversely, if time has
moved backward, care is taken to avoid running jobs twice.
Yeah, I wasn’t too explicit there, but there are a bunch of programs that emulate cron-like functionality inside your VM (e.g. resque-scheduler, Quartz, etc.). I think it’s better to just avoid the whole headache of DST entirely.
Actually you should use TAI, not UTC. On past performance, Linux systems break when there’s a leap second; supposedly they’ve fixed the bugs this time, but they said that last time too.
For seriously engineered programs the system timezone shouldn’t matter (at my previous job we used checkstyle rules to ensure that we never called a method that used the system default timezone). But yeah, if you’re using the likes of cron or random shell scripts for mission-critical tasks, as many systems do, then setting the server timezone to something absolute is the pragmatic option.
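The comment’s checkstyle rules were for Java, but the same discipline translates to any language: ban the calls that silently pick up the host’s default zone. An analogous sketch in Python:

```python
from datetime import datetime, timezone

# Bad: datetime.now() with no argument depends on whatever timezone
# the host happens to be configured with, and returns a naive value.
# now = datetime.now()

# Good: the zone is explicit, so the server's setting is irrelevant.
now = datetime.now(timezone.utc)

# An "aware" timestamp can be safely compared and serialized.
assert now.tzinfo is not None
```

A lint rule (or code review convention) that rejects the naive form gives you the same guarantee the checkstyle rules did.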
Would that really be better? It seems like the perpetual offset from UTC could be a bigger problem than leap seconds.
Yeah, TAI spreads the problem all over. My pragmatic solution to leap seconds has been to turn off all computers over that moment and then power them back on.
If you’re big enough to run your own NTP servers you can use the google approach of skewing the second length over a couple of days.
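The skewing trick (Google’s “leap smear”) is essentially linear interpolation: each NTP second is stretched slightly so the extra second is absorbed gradually. A sketch, where the 24-hour window is my assumption rather than Google’s exact parameters:

```python
def smeared_offset(elapsed_s: float, window_s: float = 86400.0,
                   leap_s: float = 1.0) -> float:
    """How much of the leap second has been absorbed after
    `elapsed_s` seconds of the smear window."""
    if elapsed_s <= 0:
        return 0.0
    if elapsed_s >= window_s:
        return leap_s
    return leap_s * elapsed_s / window_s

# Halfway through the window, clocks have absorbed half the leap second.
print(smeared_offset(43200.0))  # 0.5
```

No client ever sees a 61-second minute; they just run imperceptibly slow for a day.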
Now you have three clocks… I’m working on a system that collects data from remote sensors and figuring out how to handle time has been educational. Fun fact: Unix time doesn’t have unique representations for all UTC dates.
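That fun fact falls straight out of the POSIX formula: because Unix time pretends every day has exactly 86400 seconds, a leap second and the midnight that follows it collapse to the same number. `calendar.timegm` is pure arithmetic, so it shows the collision directly:

```python
import calendar

# 2016-12-31T23:59:60Z was a real (positive) leap second...
leap = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))

# ...but Unix time assigns it the same value as 2017-01-01T00:00:00Z.
midnight = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))

print(leap == midnight)  # True: two distinct UTC instants, one timestamp
```

In practice the kernel steps or smears the clock over that second, which is exactly where the historical breakage came from.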
Yup, which is a good reason to prefer TAI.
Yes, it’s much more testable. You need to handle the difference, but once your system is handling it, it will keep handling it.
What is TA1? A quick search did not return helpful results…
TAI (Temps Atomique International) misspelled.
Probably TAI would be a better idea, as @lmm says; not only does it not have DST, it also doesn’t have leap seconds.
If putting your servers in UTC matters, your time handling is broken. Timezones should be a presentation layer around TAI timestamps.
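A sketch of that separation, using UTC as the internal scale for simplicity (Python’s stdlib has no TAI clock, so this is an approximation of the idea): store and compute on one unambiguous instant, and convert only at the presentation edge.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Internal representation: a single unambiguous instant.
# (Deliberately during the US DST fall-back of 2024-11-03.)
stored = datetime(2024, 11, 3, 6, 30, tzinfo=timezone.utc)

# Presentation layer: convert only when rendering for a user.
for_new_york = stored.astimezone(ZoneInfo("America/New_York"))
print(for_new_york.isoformat())  # 2024-11-03T01:30:00-05:00
```

All the DST ambiguity lives in that last conversion; nothing stored or compared internally ever depends on it.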
Did I just click on a link to an article on lobste.rs that… cited a comment to that posted link on lobste.rs?
The article is very western-centric. Most of the world doesn’t do Daylight Saving Time anymore. Japan doesn’t, China doesn’t, Kazakhstan doesn’t, Russia doesn’t, even the US state of Arizona doesn’t.
Gosh, just look at Wikipedia; they have a nice visual map up there. Pretty much the whole world doesn’t do DST anymore! I’d say you might as well keep your clock at Beijing Time, no need to bother with UTC.
The article does not recommend that one use DST; it simply advises that if one finds oneself operating servers in those locales (and I assure you there are a nontrivial number of servers in the US), it’s a good idea to avoid the hassle of DST.
Picking an arbitrary offset (e.g. Beijing time) works, but it does tend to complicate your time arithmetic somewhat. You must:
I’m sure I speak for everyone here when I assure you that I have never made any of these mistakes personally and they have never cost me weeks of my life. ;-)