[tz] Leap seconds puzzle

Brian Inglis Brian.Inglis at systematicsw.ab.ca
Thu Apr 9 16:26:16 UTC 2015


On 2015-04-09 02:24, Lester Caine wrote:
> On 09/04/15 01:18, Ted Cabeen wrote:
>> Because leap seconds are intercalary, time math using time_t makes sense
>> to humans because we don't notice the difference.  We generally do time
>> math like "one year ago", or "days since September 11th, 2001",
>> which all work well with the time_t construction.  The only time
>> things get out of whack is when you are looking at time intervals that
>> cross a leap second where second-level accuracy matters.
>
> The 'puzzle' is perhaps why the base IS seconds? ;)
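
For illustration, a minimal C sketch of the quoted point about intervals
that cross a leap second, assuming a POSIX time_t and the non-standard but
widely available timegm() extension:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct tm before = {0}, after = {0};

        /* 2016-12-31 23:00:00 UTC, one hour before the leap second */
        before.tm_year = 2016 - 1900; before.tm_mon = 11; before.tm_mday = 31;
        before.tm_hour = 23;

        /* 2017-01-01 01:00:00 UTC, one hour after it */
        after.tm_year = 2017 - 1900; after.tm_mon = 0; after.tm_mday = 1;
        after.tm_hour = 1;

        /* Prints 7200 even though 7201 SI seconds actually elapsed,
         * because POSIX time_t does not count the 23:59:60 leap second. */
        printf("%.0f\n", difftime(timegm(&after), timegm(&before)));
        return 0;
    }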

Systems then used power-line clocks with 50/60 Hz interrupts, so the common
base was the second, which was good enough for file times; leap seconds did
not start until 1972, and OSes and apps used interrupt jiffies for interval
timing. Even in 1980, PC DOS floppy file times were considered good enough
with two-second resolution, and jiffies were standardized at 60 Hz. The first
databases only provided times to the second; these were later extended to
milliseconds, then microseconds, now nanoseconds, soon picoseconds.
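
The two-second FAT resolution mentioned above falls out of the on-disk
format: the 16-bit time field in a FAT directory entry allots 5 bits to
hours, 6 to minutes, and only 5 to seconds, so seconds are stored halved.
A minimal sketch:

    #include <stdio.h>

    /* Pack hour/minute/second into the 16-bit FAT time field:
     * 5 bits hours, 6 bits minutes, 5 bits seconds divided by two. */
    static unsigned fat_time(unsigned hour, unsigned min, unsigned sec)
    {
        return (hour << 11) | (min << 5) | (sec / 2);
    }

    int main(void)
    {
        /* 16:26:16 and 16:26:17 pack to the same value (0x8348) */
        printf("%04x %04x\n", fat_time(16, 26, 16), fat_time(16, 26, 17));
        return 0;
    }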

> But as others have pointed out it is only important for some
> calculations. Timestamp data for all of the databases I use work with a
> day base and return time as a fraction of a day. This is a much more
> practical base for genealogical data than 'seconds' for many reasons and
> I still feel that any overhaul of the time_t libraries would be better
> based on this, if only for its much cleaner handling of 32/64bit device
> interworking problems.
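
For illustration, a minimal sketch of the day-plus-fraction form described
above, using a POSIX time_t and the Unix epoch as the day origin (a database
would more likely use a Julian-day style origin):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = 1428596776;              /* 2015-04-09 16:26:16 UTC */
        long   day  = (long)(t / 86400);    /* whole days since the epoch */
        double frac = (double)(t % 86400) / 86400.0;  /* fraction of a day */

        printf("day %ld + %.6f of a day\n", day, frac);

        /* Round-trip back to seconds */
        time_t back = (time_t)(day * 86400L + (long)(frac * 86400.0 + 0.5));
        printf("round trip ok: %d\n", back == t);
        return 0;
    }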

Oracle server has always used date-times stored as a byte each for century,
year, month, day, hour, minute, and second, each with its own offset to
avoid problems on networks that did not support eight-bit transparency.
It has always supported a minimum value of JD 0 (1 Jan 4713 BC), with
upper limits varying across versions.
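
For illustration, a sketch decoding that seven-byte layout, assuming the
excess-100 century/year and excess-1 hour/minute/second offsets commonly
shown by Oracle's DUMP() output:

    #include <stdio.h>

    int main(void)
    {
        /* Sample DUMP()-style bytes for 2015-04-09 16:26:16 */
        unsigned char d[7] = { 120, 115, 4, 9, 17, 27, 17 };

        int year  = (d[0] - 100) * 100 + (d[1] - 100);
        int month = d[2], day = d[3];
        int hour  = d[4] - 1, min = d[5] - 1, sec = d[6] - 1;

        printf("%04d-%02d-%02d %02d:%02d:%02d\n",
               year, month, day, hour, min, sec);
        return 0;
    }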

-- 
Take care. Thanks, Brian Inglis

