double rounding in xtime_diff()?

Paul Eggert eggert at twinsun.com
Mon Oct 12 13:01:13 UTC 1998


   Date: Mon, 12 Oct 1998 13:04:06 +0100
   From: Markus Kuhn <Markus.Kuhn at cl.cam.ac.uk>

   > The exact answer is 9007199254740994 (i.e. 2**53 + 2), a number that
   > is exactly representable as an IEEE double.  But the expression above
   > yields 9007199254740992 (i.e. 2**53) -- it is off by 2.

   Come on, that is a value 0.28 billion years from now.

But a couple of messages ago you were confident of being able to prove
that the expression works for all timestamps.
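
For concreteness, here is one way to see the effect in C.  The
particular differences below are chosen only to illustrate the
phenomenon; they are not necessarily the values behind the figures
quoted above, and the expression is only a guess at the naive
formulation.

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      /* A seconds difference of 2**53 + 1 (not exactly representable
         as an IEEE double) plus a fraction just under one second.  */
      int64_t dsec  = 9007199254740993;   /* 2**53 + 1 */
      int32_t dnsec = 999999999;

      /* First rounding: dsec converts to 2**53 (round-to-even).
         Second rounding: adding 0.999999999 rounds back down to
         2**53, since that fraction is below half an ulp (= 1).  */
      double naive = (double) dsec + dnsec / 1e9;

      /* The exact value is 2**53 + 1.999999999, so a single correct
         rounding would give 2**53 + 2 = 9007199254740994.  */
      printf("%.0f\n", naive);            /* prints 9007199254740992 */
      return 0;
  }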

   Do you have a non-pathological case where you do not fill up the
   double mantissa completely? We define non-pathological as follows:

Can you prove that there isn't one?  Or will your definition of
``non-pathological'' (which was missing from the copy of the message
that I received) define away the problem?

The implementation must work correctly for _all_ timestamps, because
such timestamps will occur in practice if they are representable.
(Among other things, the maximum timestamp will occur often in real code.)

In practice, simple rounding error (which is large and cannot be
avoided) will be more important than this more esoteric
multiple-rounding error (which is smaller and can be avoided, e.g. if
the library internally uses infinite precision arithmetic).  It is the
simple rounding error that I am mainly objecting to; the
multiple-rounding error is icing on the cake.
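
To make that distinction concrete, here is a small sketch (values
again chosen only for illustration): once the difference is around
1e9 seconds, the spacing between adjacent doubles is roughly 1.2e-7
seconds, so whole nanoseconds vanish no matter how carefully the
expression is evaluated.

  #include <stdio.h>

  int main(void)
  {
      /* Simple, unavoidable rounding error of a double-valued result:
         near 1e9 seconds, one nanosecond is far below half an ulp.  */
      double a = 1.0e9;              /* a difference of 1e9 seconds   */
      double b = 1.0e9 + 1.0e-9;     /* the same, plus one nanosecond */
      printf("%d\n", a == b);        /* prints 1: nanosecond is lost  */
      return 0;
  }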


     int_fast64_t sec;
     int_fast32_t nsec;

   has to be relaxed to require sec only to represent roughly all values
   from the start of the year -9999 to the end of the year +9999

My draft spec won't place any restriction on the representable years,
as that is a quality-of-implementation issue.  Even these relaxed
requirements are overkill for the vast majority of applications.  A
C-based CPU running in an automobile engine shouldn't be required to
handle timestamps all the way back to the Clovis people.

   This range requires a bit less than 40
   bits for sec, and then we do not care about rounding errors in IEEE
   double if sec is not a 40-bit representable integer.

This statement confuses the minimum requirement with the actual
implementation.  If the actual implementation supports a 64-bit sec
(which is likely under the struct xtime proposal), then larger
timestamps are representable, and any credible implementation must
handle them correctly.
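
Some rough figures behind this (back-of-the-envelope only, not taken
from any spec):

  #include <stdio.h>

  int main(void)
  {
      double year_s  = 365.2425 * 86400.0;  /* mean Gregorian year, s */
      double range_s = 20000.0 * year_s;    /* years -9999 .. +9999   */

      printf("%.3g s in -9999..+9999\n", range_s);   /* ~6.3e11 s  */
      printf("%.3g s in 2**40\n", 1099511627776.0);  /* ~1.1e12 s  */
      printf("%.3g yr in 2**63 s\n",
             9223372036854775808.0 / year_s);        /* ~2.9e11 yr */
      return 0;
  }

So the quoted ``bit less than 40 bits'' is about right for the
relaxed range, but a 64-bit sec reaches on the order of 3e11 years
on either side of the epoch, and values in that far larger range are
then representable.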

   The *real* problems are more related to how we present the new proposal
   to the committee and actually getting it through,

Yes, the politics must be handled carefully.  But the technical side
must also be done carefully; otherwise, what's the point of doing
anything?


