Markus Kuhn Markus.Kuhn at cl.cam.ac.uk
Sun May 31 11:09:38 UTC 1998

"D. J. Bernstein" wrote on 1998-05-31 03:44 UTC:
> I'm interested in what works, not in religious arguments.

Same here! 8)

> There's a huge amount of code that subtracts UNIX times to compute
> real-time differences. There's a much, much, much smaller amount of code
> that converts UNIX times to local times.

I am not sure this is really true: the code that subtracts time
usually, semantically, does things like t = t + "one day". If one day is
represented by (time_t)86400, then with the POSIX definition of time_t,
what you get is the same time on the next day, which is what users
usually expect, no matter whether there was a leap second in between or not.

> > Your "computers without any outside input"
> > are after a couple of months *far* away from both TAI and UTC.
> Actually, with most clocks, it's easy to keep the error below 1 second
> for the entire lifetime of the computer.

In my experience, this works only for computers in well air-conditioned
rooms that are rarely switched off. Opening a window in winter near my
PC changes the clock frequency by 20 ppm due to the temperature drop,
i.e. a crystal calibrated on a hot summer day accumulates your 1 s error
in less than a day. I could even see the server room's air-conditioning
duty cycle in the NTP log files of the stratum-1 server at the
University of Erlangen. For 1 s of error over a computer's lifetime
(say 5 years), you had better invest in a small oven that keeps the
crystal temperature controlled at one of the extrema of the crystal's
frequency-versus-temperature curve.

> False. Accurate time differences are often crucial whether or not the
> local-time display is accurate.

Accuracy of local time display is probably a cultural thing. When I was
in the U.S., I rarely saw a clock anywhere with an accurate time
display; most were at least 3-8 minutes wrong. The media do not
broadcast any time signals, and when they do, these are often over a
minute wrong (as seen several times on CNN HN). The only sources of
accurate time I have seen in the U.S. were NTP computers, GPS receivers,
and my shortwave radio when tuned to WWV. In Europe, it is customary for
most radio and TV stations to send beeps as a precise hour marker at the
beginning of the news, and people set their watches accordingly. In
Central Europe, modern low-cost radio alarm clocks now typically contain
a DCF77 receiver, since the hardware cost of such a time receiver is
only around 10 USD. Railway stations also have radio clocks with the
precise time, as do many church bell towers. Since local time with
subsecond accuracy is so widely available, users also expect their
computers to be as accurate as the BBC and their radio clocks with
regard to time display. In the U.S., on the other hand, GPS servers that
displayed local time with the GPS-UTC offset added were sold for quite
some time without anybody even noticing that the displayed time was off
by several seconds. Please be aware that your personal opinion about the
desirability of accurate local time display might be seriously
regionally biased. In Europe, many people will immediately call support
if their supposedly synchronized clocks are five seconds off compared to
the radio news.

> > Feeding this information into computers then requires manual
> > intervention unless we establish some leap second history update
> > protocol for all computers on this planet.
> False. A new protocol is not necessary, since NTP is able to transmit
> leap-second warnings. (However, a new protocol would be a good idea for
> several obvious reasons.)

Last time I looked at NTP, it only announced the next leap second and
did not say what the current TAI-UTC difference is, so NTP does not
provide the information needed to convert reliably between UTC and TAI.
I guess this can and should be changed in the next NTP revision. Once
this is done, you will be able to select between at least four clock
formats on an NTP host:

CLOCK_UTC shows UTC with leap seconds, counting tv_nsec from 1e9 to
2e9-1 during an inserted leap second, and is unavailable if the
synchronization source has been interrupted for some time. This is for
highly reliable timestamps (e.g., financial transaction systems,
timestamping services, etc.).

CLOCK_TAI shows TAI without any leap seconds and is unavailable if
the synchronization source has been interrupted for some time. I don't
expect many systems to have CLOCK_TAI available. This is for
navigation systems, astronomers, geologists, etc.

CLOCK_MONOTONIC shows an always available second counter that never jumps and
that is not guaranteed to be related to any absolute time scale. This is what
t1-t2 programmers should use. It can be identical to CLOCK_TAI if CLOCK_TAI was
available at boot time, but it does not need to be. A typical PC implementation
will probably read the CMOS clock at boot, interpret the time in there as UTC
and then set CLOCK_MONOTONIC accordingly once and never correct its phase
later. The system is allowed to adjust the frequency of CLOCK_MONOTONIC by up
to 200 ppm once the frequency error of the clock has been determined when
external synchronization becomes available.

CLOCK_REALTIME shows a best-effort estimate of UTC that also takes t1-t2
usage in existing systems into account. This value is also returned by
gettimeofday(). It is low-pass filtered to smooth out leap-second phase
jumps over a couple of minutes, and it continues to run freely even when
CLOCK_UTC is unavailable. It is adjusted to CLOCK_UTC, once that is
available, by changing its frequency by up to 1 %. If the system
discovers that CLOCK_REALTIME - CLOCK_UTC is more than 1000 seconds,
then a syslog warning is issued and CLOCK_REALTIME jumps brutally to
CLOCK_UTC without smoothing. This brutal jump should happen only once,
when the system is first installed. CLOCK_REALTIME is a compromise for
backwards compatibility with the existing practice of t1-t2
gettimeofday() usage. New applications should use CLOCK_MONOTONIC
instead where available, because the rate of CLOCK_REALTIME can be up to
10000 ppm off, while the rate of CLOCK_MONOTONIC is typically much
better than 200 ppm.

What is wrong with this clock API? POSIX.1b already provides the
interface for accessing several flavours of clocks, so why not just
use it?

> >   tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86_400 +
> >   (tm_year-70)*31_536_000 + ((tm_year - 69)/4)*86_400
> Don't be an idiot. 2100 is not going to be a leap year.

Only an idiot would feel better after implementing the correct leap year
formula for a 32-bit time_t. The formula quoted above from POSIX.1 was
only designed to work in the tm_year range 1970..2038, since time_t is
on most systems a signed 32-bit integer. Implementing the correct leap
year formula with a 32-bit time_t would just demonstrate the
programmer's ignorance of the int overflow. But don't worry, a whole
army of S2G consultants is already waiting to set up the next generation
of panic web pages once we have survived Y2K. :)


Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org,  home page: <http://www.cl.cam.ac.uk/~mgk25/>
