the ``need'' for POSIX times

Markus Kuhn Markus.Kuhn at cl.cam.ac.uk
Fri Oct 9 22:13:35 UTC 1998


Paul Eggert wrote on 1998-10-09 19:16 UTC:
> These models are entertaining, and it's fun to play with the algebra,
> but I fear that you've been involved with the problem a bit too much,
> and you need to step back and take a deep breath.  We need a model
> that's simple and easy to explain; the explanations above are neither.

I don't think that was a fair comment, because I am equally convinced
that my model is simple, straightforward, easy to understand and robust
in practice. I have stepped back numerous times and seriously
considered your alternative model, and I have concluded again and again
that it is problematic. Below you will find another simple application
scenario, which I hope you will study seriously, and which I hope will
help you understand and acknowledge the problems that I see in your
concept.

> So.... why not use the official model instead?  Officially, since
> 1972, UTC-TAI has been a (negative) integer number of seconds, and
> when a leap second is inserted, UTC-TAI decreases by 1.  What this
> means is pretty simple: on an implementation whose internal clock
> ticks TAI, the UTC clock ticks right along with the internal clock --
> except during an inserted leap second, where the UTC clock is adjusted
> back by one second.
> 
> When converting a UTC clock to a printed representation, it's
> conventional to use :60 for the inserted leap second, but this is
> merely a notation to indicate that the UTC clock is repeating, much as
> the German-standard 'A' and 'B' suffixes are notations for repeated
> local time when the UTC offset decreases.

Ok. So far, there is nothing wrong in your argument. The big
intellectual accident happens in the next sentence, and from then on
the conclusions become dubious:

> Viewed in this light, struct xtime's TIME_UTC is not really UTC, as
> TIME_UTC clocks have special values during an inserted leap second,
> whereas UTC clocks simply go back 1 second.

Sorry, but this is just obviously wrong:

UTC clocks display a special overflow value (seconds 60 to 60.999...)
during the leap second, *exactly* as struct xtime does (nanosecond
values 1e9 to 2e9-1). There is absolutely no conceptual difference
between "real UTC" and TIME_UTC, since there exists an obvious
bijective, deterministic mapping between the two.

Struct xtime is just a simple, fully static encoding of the full
YYYY-MM-DD HH:MM:SS display as found on any official UTC clock.
"Static" here means "independent of dynamically changing leap second
tables".
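
To make this concrete, here is a minimal sketch of such an encoding.
The field names sec/nsec and the 1970-01-01 00:00:00 UTC epoch are my
illustrative assumptions for this posting, not a quote from the draft:

    /* Illustrative sketch only; field names and epoch are assumed. */
    struct xtime {
        long sec;   /* seconds of the displayed UTC label since the
                       epoch, counted as if every minute had exactly
                       60 seconds */
        long nsec;  /* 0 ... 999999999 normally;
                       1000000000 ... 1999999999 during an inserted
                       leap second, i.e. while the UTC clock displays
                       23:59:60.000 ... 23:59:60.999... */
    };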

> TIME_UTC is therefore a
> compromise between UTC clocks (which are not monotonic) and POSIX
> clocks (which have no leap seconds).

No, TIME_UTC is by all means a fully correct UTC clock.

> TIME_UTC therefore suffers the
> complexity of a solution that is neither fish nor fowl.

Again, I do not feel that "neither fish nor fowl" is a fair comment.
There exists a simple, obvious and eternally valid algorithm that
converts bijectively between the YYYY-MM-DD HH:MM:SS display of an
official UTC clock and a struct xtime value.
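
To illustrate one direction of such an algorithm (from an xtime value
to the displayed UTC label), here is a sketch that uses nothing but
fixed Gregorian calendar arithmetic plus the nsec overflow convention
sketched above, and no leap second table at all. Epoch and field names
are again only my assumptions; the reverse direction is just as
static:

    #include <stdio.h>

    /* Sketch: print the UTC display value of an xtime timestamp,
     * assuming the struct sketched earlier and an epoch of
     * 1970-01-01 00:00:00 UTC.  No leap second table is consulted.
     */
    static void xtime_print_utc(long sec, long nsec)
    {
        long days, rem, z, era, doe, yoe, y, doy, mp, d, m;
        int hh, mm, ss, leap = 0;

        if (nsec >= 1000000000L) {   /* inserted leap second */
            nsec -= 1000000000L;
            leap = 1;                /* display :60 instead of :59 */
        }
        days = sec / 86400; rem = sec % 86400;
        if (rem < 0) { rem += 86400; days--; }
        hh = (int)(rem / 3600);
        mm = (int)(rem % 3600 / 60);
        ss = (int)(rem % 60) + leap;

        /* proleptic Gregorian civil date from the day count */
        z   = days + 719468;
        era = (z >= 0 ? z : z - 146096) / 146097;
        doe = z - era * 146097;
        yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365;
        y   = yoe + era * 400;
        doy = doe - (365*yoe + yoe/4 - yoe/100);
        mp  = (5*doy + 2) / 153;
        d   = doy - (153*mp + 2)/5 + 1;
        m   = mp < 10 ? mp + 3 : mp - 9;
        if (m <= 2) y++;

        printf("%04ld-%02ld-%02ld %02d:%02d:%02d.%03ld\n",
               y, m, d, hh, mm, ss, nsec / 1000000L);
    }

Fed the xtime value of the 1997-06-30 leap second (nsec between 1e9
and 2e9-1), this prints the label 1997-06-30 23:59:60.xxx without ever
consulting a table.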

On the other hand, there exists no eternally valid static algorithm
for this conversion in your modified TAI, because the relationship
between a timestamp and the displayed time changes each time you
update the leap second table. This can be very problematic and
confusing for the non-expert user (and even for the expert), as I hope
the following illustrates:

Practical example:

Say you have an electronic commerce system, which knows only the leap
seconds up to the end of 1998. You enter into this electronic commerce
system a command to reject all contracts that expire after 2000-01-01
00:00:00, because, for instance, at this time a new law comes into
effect that is unacceptable for your business. This law is so
unacceptable that even accepting a contract that expires at 2000-01-01
00:00:01 is out of the question, and your legal department would send
you to prison if your system did that. Now the following happens:

Your implementation uses the current leap second table, which does not
yet contain the mid-1999 leap second. It converts the 2000-01-01
00:00:00 cut-off date into an integer timestamp T, based on the
assumption, suggested by the current table, that there will be no
further leap second until then. Months later, you receive an updated
leap second table and install it on your system, not knowing what
fatal side effects this will have for you. The new leap second table
contains an additional leap second in mid-1999. This causes the
timestamp T suddenly to be interpreted as 2000-01-01 00:00:01 by your
system, because, as your system has just learned from the update, UTC
clocks "repeat" one second between now and then. Your application
software, however, naturally contains no code to update these integer
timestamps. Your integer timestamps simply change their real-life
meaning as leap second tables get updated, and nobody expected that,
because they didn't read the fine print in the libtai manual that came
in volume 13 of the system reference documentation.

Now someone sends you a contract (e.g., a multi-million-dollar stock
market option) that expires on 2000-01-01 00:00:01, trying to take
advantage of the new law that will be in force by then. Fatally, your
system accepts this contract, contrary to what the specification said,
because after converting 2000-01-01 00:00:01 to an integer and doing
an integer comparison, the expiry date of this contract is no longer
after the cut-off date (which was originally entered as 2000-01-01
00:00:00). Result: you lose millions of dollars and go to prison. Or
your lawyers kill you right away.

End of example (and I could come up with numerous more).
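
The underlying hazard can be sketched in a few lines of C. The toy
conversion below (an internal count equal to the naive UTC second
count plus the leap seconds listed in a table) is only a hypothetical
stand-in for any table-dependent timestamp scheme, and which way the
one-second shift goes depends on the chosen sign conventions; the
point is merely that a stored integer silently changes its UTC meaning
when the table is updated:

    #include <stdio.h>

    /* naive (leap-free) seconds from 1999-01-01 00:00:00 UTC
     * to the cut-off label 2000-01-01 00:00:00 UTC */
    #define CUTOFF_LABEL (365L * 86400)

    /* hypothetical table-dependent conversions */
    static long label_to_stamp(long naive, int leaps_before)
    {
        return naive + leaps_before;
    }

    static long stamp_to_label(long stamp, int leaps_before)
    {
        return stamp - leaps_before;
    }

    int main(void)
    {
        /* cut-off converted and stored while the table lists
         * no 1999 leap second */
        long stored = label_to_stamp(CUTOFF_LABEL, 0);

        /* after the table gains the mid-1999 leap second, the same
         * stored integer denotes a label one second away from what
         * was originally entered */
        long now_means = stamp_to_label(stored, 1);

        printf("stored cut-off shifted by %ld second(s)\n",
               CUTOFF_LABEL - now_means);
        return 0;
    }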

Doesn't this convince you at least a bit that there are numerous
applications where we care much less about the real number of seconds
until some date than about what exactly the official UTC notation of
that date is in YYYY-MM-DD HH:MM:SS form?

Your lawyers are not at all interested in the fact that this contract
expires 41865474 seconds from now. However, they are definitely
interested in whether it expires at 2000-01-01 00:00:01 or 1999-12-31
23:59:59, because this can make a difference of a few million dollars
on the options market if, say, the deal is in the former case subject
to a new tax.

My TIME_UTC (and also POSIX) provides this reliable relationship
between the easy-to-process integer struct value and the full
broken-down time. Your modified TAI does not provide this reliable
relationship! (Unless, of course, you put your leap second table under
a revision control system and attach to every timestamp a version code
that identifies the revision of the leap second table under which this
timestamp should be interpreted when converting back to an external
UTC representation. But that would of course be a ridiculous fix.)

I can think of numerous examples where the faithful encoding of UTC is
what really matters. Just think of a legal investigation into some
fraud, where the timestamps in digitally signed documents were encoded
using your modified TAI encoding. The timestamps would be meaningless
unless you also signed with each document the entire leap second table
that is necessary to interpret them, right?

What is relevant in the real world is UTC, not TAI. What usually
matters are the values displayed on UTC clocks, not the number of
seconds between two events. Therefore, it is absolutely paramount that
there be a secure way of converting, without any ambiguity, between an
xtime value and what a UTC clock displays. You open a whole bag of
risks by making the meaning of every timestamp on the UTC scale
dependent on the interpretation of a leap second table.

> The struct xtime proposal would be simplified if it didn't use this
> complicated interface, and instead used either true UTC, or true
> POSIX.  (Of course, both true UTC and true POSIX could be supported,
> by having two different clock types.)

I certainly do provide true UTC. There is no such thing as true POSIX,
because POSIX does not allow an accurate representation of time. What
some POSIX implementations therefore do at the moment, as an ugly
hack, is to use a 1000 ppm frequency-shift ramp to compensate for this
inconsistency in the specification, while others (probably the
majority) simply repeat 23:59:59. I hope you do not consider either of
these an acceptable long-term solution.
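
To illustrate the latter point: the POSIX "seconds since the Epoch"
arithmetic simply has no slot for an inserted leap second, so two
distinct UTC labels collapse onto the same integer. A minimal sketch
(the helper is mine, spelling out the leap-free POSIX-style count; it
is not a library call):

    #include <stdio.h>

    /* POSIX-style count: leap seconds are simply not counted */
    static long posix_seconds(long days_since_1970, int h, int m, int s)
    {
        return ((days_since_1970 * 24 + h) * 60 + m) * 60 + s;
    }

    int main(void)
    {
        long d = 10591;   /* 1998-12-31 as days since 1970-01-01 */
        long leap = posix_seconds(d, 23, 59, 60);   /* 23:59:60 UTC */
        long next = posix_seconds(d + 1, 0, 0, 0);  /* next midnight */

        printf("%ld %ld -> %s\n", leap, next,
               leap == next ? "indistinguishable" : "distinct");
        return 0;
    }

Both calls yield 915148800, so a POSIX timestamp cannot say whether it
refers to the inserted leap second 1998-12-31 23:59:60 or to
1999-01-01 00:00:00.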

> I am dubious about
> standardizing on a new halfway-between-UTC-and-POSIX clock type that
> has never been used in practice and which has some nontrivial
> conceptual problems.

I am sorry, but I could not disagree more. My proposal is a clear and
faithful one-to-one implementation of real UTC, and I consider it free
of fundamental conceptual problems, as I hope to have pointed out in
great detail in my past postings. It is exactly as complex as needed,
not one bit too simple and not one bit too complicated. I consider it
ready for standardization. Only the details of the text need some
polishing, not the basic design principles.

Markus

-- 
Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org,  home page: <http://www.cl.cam.ac.uk/~mgk25/>



