I think «ignoring leap seconds» is ambiguous in the text. It could mean two things.
Pretending that leap second correction does not exist and counting the absolute seconds.
Pretending that leap second correction makes leap seconds non-existent.
The system functions want to be the former, but because people want days to start at the UTC-correct moment, time synchronisation code lies to the hardware clock to achieve the latter (e.g. making the neighbouring «seconds» longer so that the leap second is completely covered).
It is natural, then, that libtai, taking the documentation at face value as describing the former, ends up in a weird place when the reality is the latter.
POSIX doesn’t say it “ignores leap seconds”; in fact its definition of “seconds since the epoch” says nothing about leap seconds. However “seconds since the epoch (ignoring leap seconds)” is the usual way that unix time is described in other places (eg, the IETF). There, “ignoring” means “not counting”, but I agree it isn’t clear and I don’t like the phrase. Either definition means every day is counted as 86400 seconds.
This article is deeply confused. It says,
calculate how many seconds have elapsed since the Epoch timestamp assuming that all days are 86400 seconds long. In other words, Unix time is supposed to not include any leap seconds in its accounting.
OK, but in the next sentence:
You could think of it as returning the number of TAI seconds that have elapsed since 1970-01-01 00:00:10 TAI.
What? no!
I don’t understand how libtai is supposed to work, but it seems to have an undocumented requirement that your computer’s clock is set to the wrong time, tho djb does write that he thinks POSIX should be ignored.
The current POSIX definition explicitly says that Unix time is based on UTC and only approximates the number of seconds since the Epoch. The formula given requires repeating the same clock values during and after the leap second. Thus it has a defined relation to leap seconds.
It counts UTC days as 86400 seconds (despite some of them being 86401 seconds). Of course, TAI days are always 86400 seconds (but have an offset from UTC).
The time system call on Linux is defined as «the number of seconds since the Epoch». As TAI and UTC agree on the seconds but disagree about which minutes those seconds belong to, a literal reading would indeed be TAI seconds elapsed. Of course, the full manual page also promises POSIX.1-2008 compatibility, incorporating the UTC-based interpretation by reference. Meanwhile, DJB believes in keeping TAI in the system clock and fixing localtime. And libtai’s current-time function only makes sense on such systems.
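As a rough sketch of why (paraphrased, not the verbatim libtai source), the current-time function boils down to adding a fixed constant to whatever time() returns, which only yields a correct TAI64 label if the clock really does count TAI seconds:

```c
#include <stdint.h>
#include <time.h>

/* Rough paraphrase of libtai's current-time logic: take time(), assume it
 * already counts real (TAI) seconds since the epoch instant, and add the
 * TAI64 base label 2^62 plus the 10-second TAI-UTC offset of 1970.
 * No leap-second table is consulted, so the result is only right on a
 * system whose clock genuinely counts TAI seconds. */
uint64_t tai64_now_sketch(void)
{
    return UINT64_C(4611686018427387904)  /* 2^62: TAI64 label of 1970-01-01 00:00:00 TAI */
         + 10                             /* TAI - UTC at the epoch */
         + (uint64_t)time(NULL);          /* assumed to be a true TAI second count */
}
```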
POSIX messing up leap years is true, but that was also fixed a few years later.
I guess the most «correct» solution would be TAI in the system clock and leap seconds in timezone data, but instead the idea of UTC tracking the Earth’s rotation speed seems to have been abandoned. Maybe an astronomically coordinated timezone for the old meaning could be created… No bets on whether libtai will switch to the now-constant UTC-TAI offset.
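For what switching might look like: a minimal sketch, assuming the offset stays frozen at its current value of TAI - UTC = 37 s (the function name here is mine, not libtai’s):

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical fixed-offset conversion.  TAI - UTC has been 37 s since
 * 2017-01-01 (the 10 s of 1970 plus 27 leap seconds), so for timestamps
 * after 2017, and only while no new leap second is scheduled, a Unix
 * timestamp maps to a TAI64 label by a constant shift. */
uint64_t tai64_from_unix_fixed_offset(time_t unix_seconds)
{
    return UINT64_C(4611686018427387904)  /* 2^62 */
         + 37                             /* frozen TAI - UTC offset */
         + (uint64_t)unix_seconds;
}
```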
Leap seconds aren’t dead quite yet. The last BIPM conference passed a resolution that’s a couple of steps removed from abolishing them: they have resolved that, by 2035, there will be a vote on another resolution to increase the maximum allowed delta between UT1 and UTC.
Currently the maximum allowed delta is plus/minus 0.9s, and UTC is kept within that delta through leap seconds.
The resolution that was passed in the 27th conference empowers CIPM to go and figure out what new maximum delta to propose, and put that up for vote. The resolution also expresses the constraint that the proposed new maximum should make it unnecessary to adjust UTC again for at least one century. The earliest that vote could happen is the next conference in 2026.
Coincidentally, the earth’s rate of rotation has changed since 2016 such that UT1-UTC is currently staying within its 0.9s delta, so fingers crossed there will be no more leap seconds before the constraints are relaxed (and, god forbid, no negative leap seconds…).
This is very nitpicky, I accept :) But the conference is not abolishing discontinuous adjustments to UTC, it is merely changing the rules for when those adjustments must happen, in such a way that no adjustments should happen until all the current committee members are dead. Presumably this was more palatable to the convention voters than abolishing all connection between UT1 and UTC, either for philosophical reasons (to sell this resolution as an “experiment” that’s easy to roll back, for the UT1 fans) or for pragmatic reasons (if in 20 years it turns out this change sucks and needs to be reverted, the convention doesn’t need to have another argument about how to control the UT1-UTC offset, since the previously ratified mechanism for that is still in place and the only change is tweaking a tuning parameter again).
There are two treaty organizations involved in leap seconds: the BIPM and the ITU-R. Their governing articles are revised at triennial-ish treaty conferences, the CGPM and the WRC respectively.
The WRC towards the end of 2023 approved the plan to abolish leap seconds, so it seems likely to go ahead. Russia are likely to continue to kick up a fuss because GLONASS needs re-engineering; dunno how likely that is to cause significant last-minute [sic] problems.
The change in the rate of rotation happened towards the end of 2019. You can see it fairly clearly in the IERS Bulletin A plot of UT1-UTC; there’s an even more obvious change in behaviour in the charts of the dX and dY earth orientation parameters. It’s weird.
Wow, I’d never looked at the UT1-UTC plot. The change is remarkably sudden, I expected it to be more leisurely.
Dunno why you emphasise “current” like that: unix time has had the same definition since the mid-1970s, the POSIX wording has been the same for at least 20 years (2024, 2004) and prior wording treated leap seconds the same (1997). It’s a mistake to treat the phrase as something meaningful in its own right: it isn’t an explanation of the formula, it’s a name for the value of the formula.
The Linux time(2) man page does not define its result as just “the number of seconds since the epoch”, it has whole paragraphs that try to explain POSIX time (but do so badly).
I guess the most «correct» solution would be TAI in system clock and leap seconds in timezone data
That has been tried but it doesn’t work in practice: the kernel needs to know UTC for compatibility with lots of APIs, network protocols, data formats. File systems are a major one.
There is lots of code dating back decades that depends on the standard definition of unix time. Programs that don’t like that definition have to use different APIs (ie, CLOCK_TAI) to get a uniform count of seconds.
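As a concrete, Linux-specific sketch: CLOCK_TAI only diverges from CLOCK_REALTIME if something like chrony or ntpd has told the kernel the current TAI offset; otherwise the difference printed below is simply 0.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec utc, tai;

    /* POSIX time: UTC-based, every day counted as 86400 seconds */
    clock_gettime(CLOCK_REALTIME, &utc);

    /* Linux-specific uniform count; meaningful only if the kernel's
     * TAI offset has been set by a time-synchronisation daemon */
    clock_gettime(CLOCK_TAI, &tai);

    printf("TAI - UTC according to the kernel: %ld s\n",
           (long)(tai.tv_sec - utc.tv_sec));
    return 0;
}
```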
POSIX 4.19 (linked from the post) specifies the math to use for converting a year/month/day/hour/minute/seconds timestamp to seconds since epoch, which accounts for leap years but not leap seconds. It then doubles down a few paragraphs later:
How any changes to the value of seconds since the Epoch are made to align to a desired relationship with the current actual time is implementation-defined.
As represented in seconds since the Epoch, each and every day shall be accounted for by exactly 86400 seconds.
It explicitly says that days in the seconds-since-epoch reckoning shall be 86400 seconds long, disregarding the 27 days since epoch that were 86401 seconds according to UTC. This has been the case since the 2001 edition, the oldest I can find without paying IEEE for the 1993 version.
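For reference, the arithmetic in the quoted section transcribes almost verbatim into C; note that nothing in it can ever contribute a leap second, because it only looks at the broken-down calendar fields:

```c
#include <time.h>

/* Transcription of the POSIX "Seconds Since the Epoch" formula applied to
 * a broken-down UTC time.  Every day contributes exactly 86400 seconds;
 * there is no term that could account for a leap second. */
long long posix_seconds_since_epoch(const struct tm *t)
{
    return t->tm_sec
         + t->tm_min  * 60
         + t->tm_hour * 3600
         + t->tm_yday * 86400LL
         + (t->tm_year - 70) * 31536000LL
         + ((t->tm_year - 69) / 4) * 86400LL
         - ((t->tm_year - 1) / 100) * 86400LL
         + ((t->tm_year + 299) / 400) * 86400LL;
}
```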
Now, there are enough “approximation” and “implementation-defined” holes in this definition that you can justify an interpretation that it’s okay for the OS to always happen to accidentally carry an error of 27 seconds from the correct value, because doing so in practice makes the clock behave the way the programmer expects it to. And it seems that’s what implementations do. But an initial reading of that spec section seems very clear, to me, that leap seconds that have occurred in UTC since the Epoch should not be included in the tally of seconds since the Epoch, wacky as that may be.
This interpretation, that the seconds since epoch value returned by time functions must not include the 27 leap seconds since Epoch, is also what linux’s time(2) manpage documents:
This value is not the same as the actual number of seconds between the time and the Epoch, because of leap seconds and because system clocks are not required to be synchronized to a standard reference. Linux systems normally follow the POSIX requirement that this value ignore leap seconds.
And Linux’s clock_gettime:
This clock normally counts the number of seconds since 1970-01-01 00:00:00 Coordinated Universal Time (UTC) except that it ignores leap seconds; near a leap second it is typically adjusted by NTP to stay roughly in sync with UTC.
(though note clock_gettime hints at the truth, which is that the kernel doesn’t have intrinsic knowledge of new leap seconds and relies on external synchronization, but barring new leap seconds is expected to return a value that includes past leap seconds in order to align with UTC wall clock time)
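A small Linux-specific sketch of that point: the kernel only knows what a synchronisation daemon has told it, and you can query that with adjtimex() (a TAI offset of 0 and a TIME_OK state just mean nobody has told it anything):

```c
#include <stdio.h>
#include <sys/timex.h>

/* Query the kernel's leap-second knowledge; it has no built-in table,
 * so this only reflects whatever ntpd/chrony last set. */
int main(void)
{
    struct timex tx = { .modes = 0 };   /* modes = 0: read only, no adjustment */
    int state = adjtimex(&tx);

    printf("kernel TAI offset: %d s\n", tx.tai);
    printf("clock state: %s\n",
           state == TIME_INS  ? "leap second insertion scheduled" :
           state == TIME_DEL  ? "leap second deletion scheduled"  :
           state == TIME_OOP  ? "leap second in progress"         :
           state == TIME_WAIT ? "leap second just occurred"       :
                                "no leap second pending");
    return 0;
}
```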
FreeBSD’s man pages don’t specify either way, just say that it returns elapsed seconds since the Epoch timestamp, and state compliance with the relevant POSIX standard.
Scientists use TAI when they need to strike precise timestamps that aren’t going to grow mysterious extra seconds by political fiat (like UTC does).
The IERS is probably as far as you can get from a political organization. I believe the author is confusing leap seconds with DST changes, but they’re quite different.
I think the author is making a mistake too when mixing in POSIX time in the discussion. libtai can convert between TAI and UTC, that’s it. Expecting it to take the underspecified POSIX spec into account is too much.
Politics happen beyond elected officials. UTC was defined as a compromise between TAI and UT1, and IMO is firmly a political compromise that the BIPM conference made to get everyone to stick to a single time standard, rather than ending up with some civil time on UT1 and some on TAI.
libtai can convert between TAI and UTC, but the point is that the TAI timestamps it computes in tai_now() are incorrect, and seem to be so due to POSIX’s misleading definition of what seconds since Epoch represents, which seems to suggest that you need only adjust by 10s to construct a TAI64 timestamp from the output of time().
They’re only incorrect if your computer is wrong :D
From https://cr.yp.to/libtai/tai.html:
This implementation of tai_now assumes that the time_t returned from the time function represents the number of TAI seconds since 1970-01-01 00:00:10 TAI. This matches the convention used by the Olson tz library in ``right’’ mode.
… wow, amazing. I’d missed that! Thanks for the pointer!
That makes… I guess sense isn’t quite the word I’m looking for, but it’s consistent at least :) I think empirical evidence of what this did to other TAI64 implementations suggests that “it only works on my system” is perhaps not the best choice for a reference implementation!
(edited into the post with credit, thank you again)
I suppose you could say CLOCK_REALTIME includes past leap seconds but will never return a value indicating a current leap second?
It’s impossible to represent a leap second with a linear count of seconds.
There’s a fun wrinkle, though. The definition of seconds since the Epoch says that during a leap second you need to add 3600*23 + 60*59 + 60 to the start of the day, which is the same as 86400 seconds, which makes it equal to the start of the next day. It’s problematic for the date to change a second early, so in practice, systems that are aware of leap seconds step CLOCK_REALTIME back one second early, so they repeat the last second of the last day of the month instead of the first second of the first day of the next month.
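Spelling that arithmetic out (just the numbers from the paragraph above):

```c
#include <stdio.h>

/* During the leap second 23:59:60, the seconds-since-the-Epoch formula adds
 * 23*3600 + 59*60 + 60 for the time of day, which is exactly 86400, i.e. the
 * same value as 00:00:00 of the following day. */
int main(void)
{
    long leap_second = 23 * 3600 + 59 * 60 + 60;   /* 23:59:60 */
    long next_day    = 86400;                      /* 00:00:00 of the next day */
    printf("%ld == %ld\n", leap_second, next_day); /* prints 86400 == 86400 */
    return 0;
}
```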