Just yesterday I was reading The byte order fallacy, which seems like it could apply to dates and times as well. It shouldn’t matter whether the server is running UTC or not. Instead, any software running on it and communicating with other processes either needs to include the timezone in its messages, or convert to/from UTC. Assuming UTC is like assuming everything is little-endian and saying “don’t plug any big-endian machines into this network”.
I don’t know what version of cron the author is using. The man page for “vixie” cron (the one true cron) says this in a section about DST:
If time has moved forward, those jobs that would have run in the interval
that has been skipped will be run immediately. Conversely, if time has
moved backward, care is taken to avoid running jobs twice.
Yeah, I wasn’t too explicit there, but there are a bunch of programs that emulate cron-like functionality inside your VM (e.g. resque-scheduler, Quartz, etc.). I think it’s better to just avoid the whole headache of DST entirely.
Actually you should use TAI, not UTC. Going by past performance, Linux systems break when there’s a leap second; supposedly they’ve fixed the bugs this time, but they said that last time too.
For seriously engineered programs the system timezone shouldn’t matter (at my previous job we used Checkstyle rules to ensure that we never called a method that used the system default timezone). But yeah, if you’re using the likes of cron or random shell scripts for mission-critical tasks, as many systems do, then setting the server timezone to something absolute is the pragmatic option.
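To illustrate the hazard such a rule guards against, here is a minimal Python sketch of the analogous pitfall (not the Checkstyle rule itself): calls that silently use the system default timezone produce “naive” values whose meaning changes with the server’s TZ setting.

```python
from datetime import datetime, timezone

# datetime.now() with no argument uses the system default timezone and
# returns a "naive" object; its meaning changes if the server's TZ does.
naive = datetime.now()
aware = datetime.now(timezone.utc)  # explicit timezone, portable

print(naive.tzinfo)  # None: nothing records which zone this was in
print(aware.tzinfo)  # UTC

# Mixing the two is an error, which at least fails loudly:
try:
    _ = naive < aware
except TypeError as exc:
    print("cannot compare naive and aware datetimes:", exc)
```

Banning the no-argument forms (`now()`, `today()`, and friends) forces every timestamp to carry an explicit zone, which is the property the Checkstyle rule enforced mechanically.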
Yeah, TAI spreads the problem all over. My pragmatic solution to leap seconds has been to turn off all computers over that moment and then power them back on.
Now you have three clocks… I’m working on a system that collects data from remote sensors and figuring out how to handle time has been educational. Fun fact: Unix time doesn’t have unique representations for all UTC dates.
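That fun fact can be demonstrated with plain POSIX time arithmetic: because every day is defined as exactly 86400 seconds, a leap second such as 2016-12-31T23:59:60Z gets the same timestamp as the following midnight. A sketch using Python’s `calendar.timegm`, which does pure arithmetic with no leap-second table:

```python
import calendar

# POSIX time pretends every day has exactly 86400 seconds, so the
# leap second 2016-12-31T23:59:60Z has no timestamp of its own:
leap = calendar.timegm((2016, 12, 31, 23, 59, 60))
midnight = calendar.timegm((2017, 1, 1, 0, 0, 0))
print(leap == midnight)  # True: two distinct UTC instants, one Unix time
```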
The article is very Western-centric. Most of the world doesn’t do Daylight Saving Time anymore. Japan doesn’t, China doesn’t, Kazakhstan doesn’t, Russia doesn’t, even the US state of Arizona doesn’t.
http://en.wikipedia.org/wiki/Daylight_saving_time
Gosh, just look at Wikipedia, they do have a nice visual map up there – pretty much the whole world doesn’t do DST anymore! I’d say you might as well keep your clock at Beijing Time, no need to bother with UTC.
The article does not recommend that one use DST; it simply advises that if one finds oneself operating servers in those locales (and I assure you there are a nontrivial number of servers in the US), it’s a good idea to avoid the hassle of DST.
Picking an arbitrary offset (e.g. Beijing time) works, but it does tend to complicate your time arithmetic somewhat. You must:
- Ensure every server, regardless of its location, uses Beijing time.
- Ensure that new sysadmins are made aware of this choice and don’t assume it’s a bug, and that they don’t subsequently “fix” the problem by changing the server’s timezone to UTC without consulting the Powers That Be.
- Ensure that every piece of software which assumes UTC as its default regardless of system timezone (databases, external APIs, etc.) is configured to use Beijing time or is wrapped by an appropriate translation layer.
- Write tests to verify that times round-trip correctly. This is a great idea regardless of what TZ you choose. :)
- Localize any use of a date library to Beijing time instead of accepting its default UTC behavior.
- If you’re exchanging time information in, say, milliseconds since the Unix epoch, and not a format that makes it clear what TZ you’re in, prepare for a fun time tracking down silent data corruption. Customers may or may not read your documentation; customers may or may not implement it correctly. If you do all internal time work in Beijing time but expose UTC to customers, remember to always add the correct offsets exactly once and in the right direction.
- If you exchange time information in a date format that nominally includes TZ information (e.g. ISO 8601 or the W3C datetime format) and accidentally omit the TZ because someone decided a quick little strftime was sufficient for a one-off script, and subsequently interpret Beijing times as UTC times, prepare for all kinds of fun.
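The round-trip test suggested in that list can be sketched like this, assuming Python with the standard-library `zoneinfo`; the function names are illustrative:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

BEIJING = ZoneInfo("Asia/Shanghai")

def to_wire(dt: datetime) -> str:
    """Serialize as UTC with an explicit offset, so the TZ survives."""
    return dt.astimezone(timezone.utc).isoformat()

def from_wire(s: str) -> datetime:
    """Parse the offset-bearing string and convert back to Beijing time."""
    return datetime.fromisoformat(s).astimezone(BEIJING)

# The instant must survive serialization unchanged.
original = datetime(2014, 6, 1, 12, 30, tzinfo=BEIJING)
assert from_wire(to_wire(original)) == original
```

Because the serialized form always carries an explicit offset, the strftime-without-TZ failure mode described above can’t slip through a test like this unnoticed.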
I’m sure I speak for everyone here when I assure you that I have never made any of these mistakes personally and they have never cost me weeks of my life. ;-)
What is TA1? A quick search did not return helpful results…
It’s TAI (Temps Atomique International), misspelled.
Would that really be better? It seems like the perpetual offset from UTC could be a bigger problem than leap seconds.
If you’re big enough to run your own NTP servers, you can use the Google approach of skewing the second length over a couple of days.
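A back-of-the-envelope sketch of that “leap smear”: rather than inserting a 61st second all at once, the extra second is spread linearly over a window leading up to the leap. The window length here is an assumption for illustration, not Google’s exact parameters:

```python
LEAP_EPOCH = 1483228800     # 2017-01-01T00:00:00Z, just after a leap second
SMEAR_WINDOW = 24 * 3600    # smear the extra second over 24 hours (assumed)

def smeared_correction(unix_time: float) -> float:
    """Fraction of the leap second applied by `unix_time` (0.0 to 1.0)."""
    start = LEAP_EPOCH - SMEAR_WINDOW
    if unix_time <= start:
        return 0.0
    if unix_time >= LEAP_EPOCH:
        return 1.0
    # Every real second is stretched slightly, so the correction grows linearly.
    return (unix_time - start) / SMEAR_WINDOW

print(smeared_correction(LEAP_EPOCH - 12 * 3600))  # 0.5, halfway through
```

The payoff is that no client ever sees a repeated or skipped second; the clocks are simply a hair slow for the duration of the window.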
Yup, and the fact that Unix time can’t uniquely represent every UTC date is a good reason to prefer TAI.
Yes, it’s much more testable. You need to handle the difference, but once your system handles it, it will keep handling it.
Probably TAI would be a better idea, as @lmm says; not only does it not have DST, it also doesn’t have leap seconds.
Did I just click on a link to an article on lobste.rs that… cited a comment to that posted link on lobste.rs?
If putting your servers in UTC matters, your time handling is broken. Timezones should be a presentation layer around TAI timestamps.
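A sketch of that separation, using UTC as a stand-in for TAI since the Python standard library has no leap-second-aware clock: one absolute instant is stored internally, and a timezone is applied only when rendering for a user.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Internally, keep one absolute instant; never store a local time.
stored = datetime(2014, 3, 9, 7, 30, tzinfo=timezone.utc)

def render(instant: datetime, tz_name: str) -> str:
    """Apply a timezone only at the presentation layer."""
    return instant.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M %Z")

# 2014-03-09 is the US spring-forward date; the conversion handles DST.
print(render(stored, "America/New_York"))  # 2014-03-09 03:30 EDT
print(render(stored, "Asia/Tokyo"))        # 2014-03-09 16:30 JST
```

With this shape, changing the server’s timezone setting changes nothing: every stored value is absolute, and the zone is a per-reader rendering choice.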