guy at alum.mit.edu
Thu Jun 30 02:46:09 UTC 2011
On Jun 29, 2011, at 6:53 PM, Robert Elz wrote:
> is/was supposed to just mean that you have a hardware clock that's
> ticking TAI (and so doesn't get adjusted for leap seconds), and then
> the "right" data makes the leap second adjustment so when the data
> is converted to human form we get the expected (UTC) result that matches
> the clock on the wall,
Most clocks on the wall probably don't give results that match UTC: at the end of the day in Greenwich they presumably always go from 58 seconds to 59 seconds to 00 seconds, leap seconds notwithstanding. My watch also knows nothing of leap seconds; I suspect almost nobody's watch does.
(My work computer, and my home computer, and my mobile phone are another matter, given that they're all running BSD-based UN*X systems. At least on one of them, the Olson database is built without leap seconds:
$ file /usr/share/zoneinfo/America/Los_Angeles
/usr/share/zoneinfo/America/Los_Angeles: timezone data, old version, 4 gmt time flags, 4 std time flags, no leap seconds, 185 transition times, 4 abbreviation chars
and I think that's the case with all of them. My work and home computers sync up against NTP servers, and NTP actually cares about leap seconds. So, while the clock on the menu bar won't go from XX:59:59 to XX:59:60 to YY:00:00 over a positive leap second (at the end of the year, that'd probably be 15:59:59 to 15:59:60 to 16:00:00 out here in California), it'll probably end up, in effect, getting adjusted for leap seconds.)
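To illustrate the point concretely (using the most recent leap second before this thread, 2008-12-31 23:59:60 UTC), POSIX seconds-since-the-Epoch simply has no label for a :60 second; this is a sketch in Python, whose time.gmtime() does the leap-second-free POSIX interpretation:

```python
import time

# POSIX "seconds since the Epoch" ignores leap seconds, so the
# positive leap second at the end of 2008 (23:59:60 UTC) has no
# time_t value of its own: 1230767999 is labeled 23:59:59, and
# 1230768000 is already 00:00:00 on 2009-01-01.
before = time.gmtime(1230767999)
after = time.gmtime(1230768000)

print(time.strftime("%Y-%m-%d %H:%M:%S", before))  # 2008-12-31 23:59:59
print(time.strftime("%Y-%m-%d %H:%M:%S", after))   # 2009-01-01 00:00:00
```

So the extra real-world second has to be absorbed some other way, e.g. by NTP slewing or stepping the clock.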
So who was the person who wrote, in the current POSIX rationale:
Most systems' notion of "time" is that of a continuously increasing value, so this value should increase even during leap seconds. However, not only do most systems not keep track of leap seconds, but most systems are probably not synchronized to any standard time reference. Therefore, it is inappropriate to require that a time represented as seconds since the Epoch precisely represent the number of seconds between the referenced time and the Epoch.
A clock that's implemented as a hardware counter and a periodic interrupt will, *BY DEFAULT*, "increase even during leap seconds", without having to "keep track of leap seconds". Was the person who wrote that unaware of that, or were they concerned about, say, machines with hardware clocks that have year/month/day/hour/minute/second or something "helpful" such as that?
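A minimal model of that kind of clock (the TickClock class here is hypothetical, not any real kernel interface): the interrupt handler just bumps a counter, so the counter advances through a leap second exactly as it advances through any other second, without knowing leap seconds exist:

```python
class TickClock:
    """Hypothetical model of a counter-plus-periodic-interrupt clock.

    The hardware delivers one interrupt per second; the handler just
    increments a counter.  Nothing here knows about leap seconds, so
    the count "increases even during leap seconds" by default.
    """

    def __init__(self, seconds_since_epoch=0):
        self.ticks = seconds_since_epoch

    def interrupt(self):
        # All the periodic-interrupt handler does: bump the counter.
        self.ticks += 1


clock = TickClock(seconds_since_epoch=1230767999)  # POSIX 2008-12-31 23:59:59
clock.interrupt()  # this real second happens to be the leap second 23:59:60 UTC
clock.interrupt()  # ...and this one is 2009-01-01 00:00:00 UTC
print(clock.ticks)  # 1230768001 -- the counter never paused
```

After the leap second the counter reads one ahead of the POSIX labeling of UTC wall time, which is exactly the discrepancy something like NTP then has to correct.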
"Most systems are probably not synchronized to any standard time reference" would be a more serious reason than "most systems [do] not keep track of leap seconds" not to "require that a time represented as seconds since the Epoch precisely represent the number of seconds between the referenced time and the Epoch", as per your note that "The occasional leap second correction is typically more minor than the correction needed due to hardware imperfections."
In practice, I suspect that time_t doesn't explicitly stop across positive leap seconds or jump ahead across negative leap seconds, but, on most systems, it might be adjusted by NTP etc. so that, in the sufficiently long run (probably at most a few minutes), the adjustment has the same effect. A system really driven by an atomic clock might well truly tick time_t once per second without any other changes, so that, if the Olson database lacks leap second information, the "date -u" command won't print the correct UTC label for the time, given that a POSIX-conformant system must "interpret "536457599 seconds since the Epoch" as 59 seconds, 59 minutes, 23 hours 31 December 1986".
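That POSIX requirement can be checked directly; again using Python's leap-second-free time.gmtime() and its inverse calendar.timegm() as stand-ins for the POSIX arithmetic:

```python
import calendar
import time

# POSIX requires that 536457599 seconds since the Epoch come out as
# 1986-12-31 23:59:59 UTC -- i.e. leap seconds are NOT counted.
tm = time.gmtime(536457599)
print(time.strftime("%Y-%m-%d %H:%M:%S", tm))  # 1986-12-31 23:59:59

# And the round trip: the POSIX formula maps that broken-down UTC
# time back to the same count.
print(calendar.timegm((1986, 12, 31, 23, 59, 59, 0, 0, 0)))  # 536457599
```

On a system whose time_t really counted every SI second since the Epoch, that same value would label an earlier instant, which is why such a system needs the leap-second-aware ("right") zoneinfo data to print the expected UTC.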