Markus Kuhn Markus.Kuhn at cl.cam.ac.uk
Fri May 29 22:41:00 UTC 1998

"D. J. Bernstein" wrote on 1998-05-29 01:48 UTC:
> http://pobox.com/~djb/proto/tai64.txt

On most systems, using TAI is quite problematic, because most time
services publish exclusively UTC and not TAI (GPS being the notable
exception). This means a TAI clock is doomed to go wrong without
periodic manual intervention (leap seconds can be missed during downtime).

Therefore, I favour UTC as a timescale in computer applications. There
are two possible ways of representing leap seconds in a UTC second
counter:

(a) if you have a POSIX.1b-style 32-bit nanosecond register that
indicates the nanoseconds that have passed since the start of the last
second, then just keep the second counter at 23:59:59 during the
leap second and run the nanosecond counter from 1_000_000_000 up
to 1_999_999_999.
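A minimal sketch of approach (a) in C (the struct and function names here are my own illustration, not from any standard):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical timestamp: a second counter plus a POSIX.1b-style
 * nanosecond register, except that nsec may run up to 1_999_999_999
 * during an inserted leap second while sec holds still at ...:59. */
struct utc_stamp {
    int64_t  sec;   /* whole seconds since some epoch */
    uint32_t nsec;  /* 0..1_999_999_999 */
};

/* Render the seconds-of-minute field; shows :60 inside a leap second. */
void format_sec_of_min(const struct utc_stamp *t, char *buf, size_t n)
{
    int som = (int)(t->sec % 60);
    uint32_t ns = t->nsec;
    if (ns >= 1000000000u) {  /* leap second in progress */
        som += 1;             /* 59 becomes 60 */
        ns -= 1000000000u;
    }
    snprintf(buf, n, ":%02d.%09u", som, ns);
}
```

Any code that ignores the doubled range simply sees a long final second of the day, which is exactly the graceful degradation intended.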


(b) define your integer time scale such that every day has 24*3600+1
seconds, i.e. you reserve a code for a potential leap second at the
end of every day (or month).

I prefer (a). It keeps us within the normally used UTC timebase
and doesn't make things unnecessarily complicated on systems that
are not leap-second aware. If you check out comp.std.unix, you'll see
my posting about introducing in POSIX a CLOCK_UTC clock that is present
only if the kernel has had a recent time-service update, and that
doubles the nanosecond range during leap seconds.

TAI and a correct difftime are practically never needed in normal
computer applications. A correct difftime implementation on POSIX
systems, where the time_t scale explicitly does not provide codes
for leap seconds, is a joke of inconsistency. Leap seconds are a concern
in distributed systems where strictly monotonic precision timestamps
are necessary (e.g., banking databases), but here the two-gigananosecond
(2 Gns) approach (a) works nicely. TAI is only of concern in very special-
purpose systems such as navigation and astronomical/geological observations.
It's nice to have TAI available, but it should not be used as the
primary timescale. It is just an auxiliary timescale that happens to
be accessible on systems with a built-in GPS receiver or with some future
TAI-enhanced NTP version.
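To make the monotonicity point concrete, here is a small hypothetical comparator: under approach (a), ordering (sec, nsec) pairs lexicographically remains strictly monotonic across an inserted leap second, because the second counter holds still while the nanosecond counter climbs past 10^9.

```c
#include <assert.h>
#include <stdint.h>

/* Lexicographic comparison of (sec, nsec) timestamps; returns -1, 0, 1.
 * During a leap second, sec stays at ...:59 while nsec runs on towards
 * 1_999_999_999, so later events still compare as strictly greater. */
int utc_stamp_cmp(int64_t sec_a, uint32_t nsec_a,
                  int64_t sec_b, uint32_t nsec_b)
{
    if (sec_a != sec_b)   return sec_a < sec_b ? -1 : 1;
    if (nsec_a != nsec_b) return nsec_a < nsec_b ? -1 : 1;
    return 0;
}
```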

My preferred timestamp format would be the UTC96/2000 format, i.e.
a signed big-endian 64-bit second counter starting at 0 at
2000-01-01 00:00:00Z, followed by a big-endian 32-bit nanosecond
counter. Alternative nice epoch start dates could be the year 0
(also known as 1 B.C.) of the Gregorian (!) calendar, or the year
1875 (when the Metre Convention was signed in Paris and the
Gregorian calendar was already widely adopted). An epoch of year 0
simplifies the implementation of conversion routines slightly but
might cause historic confusion.
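A sketch of serializing this proposed UTC96/2000 layout (12 bytes on the wire; the function name is mine, assuming the big-endian format described above):

```c
#include <assert.h>
#include <stdint.h>

/* Encode a timestamp as 8 big-endian bytes of signed seconds since
 * 2000-01-01 00:00:00Z, followed by 4 big-endian bytes of nanoseconds. */
void utc96_encode(int64_t sec, uint32_t nsec, unsigned char out[12])
{
    uint64_t s = (uint64_t)sec;  /* two's-complement wire image */
    for (int i = 0; i < 8; i++)
        out[i] = (unsigned char)(s >> (8 * (7 - i)));
    for (int i = 0; i < 4; i++)
        out[8 + i] = (unsigned char)(nsec >> (8 * (3 - i)));
}
```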

Attosecond timescales are practically useless for the foreseeable
future. The best cesium clocks in the world (currently CS1 and CS2,
operated by PTB in Braunschweig) do not reach 1 ns precision, nor does
USNO's clock-array-regulated hydrogen maser in Washington, DC. Electric
impulses in wires travel only around 20 cm per nanosecond, and
relativistic effects really start to make things confusing
if you want to build clocks with better than 1 ns precision.
So your TAI64NA sounds to me very much like an overkill specification.


Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org,  home page: <http://www.cl.cam.ac.uk/~mgk25/>
