Markus Kuhn Markus.Kuhn at cl.cam.ac.uk
Sun May 31 01:18:04 UTC 1998

"D. J. Bernstein" wrote on 1998-05-30 19:44 UTC:
> Markus Kuhn writes:
> > This means, a TAI clock is doomed to go wrong without
> > periodic manual intervention
> You have the situation precisely backwards.
> For computers without any outside input, ticking UTC is impossible by
> definition.

Yes, but your next sentence shows that you misunderstood the reason why:

> Ticking TAI requires nothing more than an internal clock.

No. The rate at which UTC and TAI drift apart is two orders
of magnitude smaller than the frequency error of the majority of
computer clocks out there. Your "computers without any outside input"
are, after a couple of months, *far* away from both TAI and UTC. These
computers are therefore of no concern here, because their
operators obviously do not care about the accuracy of their
time. OK, so now we are only talking about computers *with* automatic
outside time input. These usually receive UTC today, because
UTC is, as specified in the various ITU-R TF.* recommendations, the
time scale used for international time and frequency broadcast
signals. UTC (but not TAI) is broadcast by WWV, DCF77, DVB-SI,
DAB, various teletext carriers, NTP, and many more. Only navigation
systems such as Omega and GPS provide you with TAI; in the case
of GPS, both scales are provided.

> Leap seconds are announced several months in advance.

Yes. In a circular letter and on web pages of the IERS and USNO.
Feeding this information into computers then requires manual
intervention, unless we establish some leap-second history update
protocol for all computers on this planet. I can't believe that
in the foreseeable future more than a small minority of installed
systems will get this information in time. Therefore, any practically
usable timescale must today be derived from UTC and not from TAI.
This is what NTP does, and this is what POSIX/Unix does, for very
good reasons.
> > Therefore, I favour UTC as a timescale in computer applications.
> You have been outvoted by thousands of programmers who subtract UNIX
> times to compute real-time differences.

You are either mixing up concepts here completely, or you are
quite inexperienced in timing issues. In order to understand what
"thousands of Unix programmers" are doing, you should have a look at
the relevant section of POSIX.1 (ISO/IEC 9945-1:1996, also known as
ANSI/IEEE Std 1003.1-1996).

The POSIX time_t scale is a count of seconds since the Epoch, where
the term "seconds since the Epoch" is strictly defined in a way that
confuses people who haven't read the spec carefully: it does not mean
the real number of elapsed seconds, but the number of seconds
excluding inserted leap seconds. time_t is an encoding of UTC
according to the following algorithm: let a struct tm contain what a
UTC clock displays. Then the corresponding time_t value is determined
by

  tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
  (tm_year-70)*31_536_000 + ((tm_year - 69)/4)*86400

This algorithm allows one to convert a time_t into UTC
YYYY-MM-DD hh:mm:ss without any additional information (such as a
leap-second table); therefore a time_t is equivalent to a UTC time
and certainly not to a TAI time. To convert a time_t into TAI, you
need a leap-second table, which practically no system on this planet
has (systems operated by members of the tz mailing list excluded, of
course ;-).

> > TAI is only of concern in very special
> > purpose systems such as navigation and astronomical/geological observations.
> Nonsense. One of the most basic code optimization techniques is to try
> several code alternatives on an unloaded system and time each one to see
> what's fastest. Many packages do this automatically during installation.
> What happens if someone installs such a package during a leap second?

OK, now I fully understand your confusion. You should differentiate
more carefully between TAI and "some monotonic leap-second-free
second count". These two are by no means the same. A monotonic second
count is trivial to implement. TAI is difficult to implement, because
to get TAI from the easily available UTC time, you need an up-to-date
leap-second table.

I guess, what you really want is something like CLOCK_MONOTONIC as specified
in one of the more recent POSIX drafts (see current discussion in
comp.std.unix) and *not* a TAI clock. These are two very different
functions: CLOCK_MONOTONIC is guaranteed to have no jumps and to be
available right after system startup, whereas a CLOCK_TAI would be
guaranteed to represent TAI within some reasonable absolute accuracy
(say, 100 ms).

> Saying ``well, they should use RDTSC or gethrtime() or CLOCK_RIGHT'' is
> missing the point. They _don't_.

Bad training of software engineers leads to bad products. So what?

> Telling all of them to change, for the
> sake of a minor simplification in xntpd, is poor engineering.

Who says that the gettimeofday() or CLOCK_REALTIME clock should
precisely follow UTC?

I am fully in favour of adding to POSIX a CLOCK_UTC that represents
leap seconds by counting in tv_nsec from 1_000_000_000 to
1_999_999_999 while keeping the code for 23:59:59Z in tv_sec.
CLOCK_MONOTONIC will just have a value that is one higher than the
value it had a second ago, and nobody guarantees *anything* else
about CLOCK_MONOTONIC. It can typically be a seconds counter since
the last boot. It has nothing to do with TAI. For CLOCK_REALTIME (and
the equivalent gettimeofday()) we unfortunately have no definition of
what the value near a leap second should be. A reasonable hack is,
for instance, what electricity companies do after leap seconds: they
reduce the frequency from (say) 60 Hz to 59 Hz for one minute, and
this way all UTC clocks that derive their reference frequency from
the power network smoothly follow the phase of the UTC timescale.
Many Unix systems have an adjtime() call that performs smooth phase
adjustments to the kernel clock by reducing or increasing the clock's
reference frequency by 1% until the phases match again. If you reduce
the allowed skew to 500 ppm (a reasonable upper limit for the worst
crystal you will find in not completely broken computer clock
circuits), then you will need 1 s / 500e-6 = 2000 seconds to get back
into synchronization with UTC.

> > Attosecond timescales are practically useless for the forseeable future.
> Nanosecond timescales are woefully inadequate for certain applications.

They are a nice lower limit for a range of useful resolutions and they
are convenient to implement on 32-bit machines.

> Anyway, you should learn to read more carefully; /etc/leapsecs.dat uses
> TAI64, which is an 8-byte scale providing 1-second precision.

You should learn to read more carefully: my posting was carefully
written not to contain any references to /etc/leapsecs.dat. I was
discussing in general your Web page, which as I understood it
advocates using TAI as a generally preferable integer timestamping
scale (such as time_t), which I am convinced is a fatally bad
engineering decision (except in atomic-clock-driven navigation
systems).

Maybe we are in violent agreement and are actually arguing in the
same direction, and you just used TAI as a poor term for
"rate-monotonic clock"; but if your vision really is to implement a
kernel clock in a way that allows conversion to/from TAI (or GPS or
ET) without a leap-second table, then I feel that this is not a good
design decision, for the reasons pointed out above.


Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org,  home page: <http://www.cl.cam.ac.uk/~mgk25/>
