double rounding in xtime_diff()?
Markus.Kuhn at cl.cam.ac.uk
Sun Oct 11 11:22:03 UTC 1998
Paul Eggert wrote on 1998-10-11 00:25 UTC:
> my equivalent of difftime is only three lines long, as I posted before.
> But (as I mentioned earlier) that implementation has a double-rounding
I don't think so.
The code in question was:
(double) ((t1.sec - t2.sec) + (t1.nsec - t2.nsec) / 1.0e9)
where t?.sec is at least 64-bit int and t?.nsec is at least 32-bit int.
Can you really construct input values that will lead to your claimed
double rounding error on say a Pentium under gcc/Linux (standard IEEE
double arithmetic), or is this "bug" just a suspicion based on the
common (but inappropriate) belief that floating point arithmetic is
incomprehensible magic stuff that always adds unpredictable noise in the
last significant bits of the mantissa?
Note that int -> double conversion rounds only if the integer value needs
more bits than the mantissa provides. Otherwise the conversion is just a
lossless and fully reversible reformatting of the number.
I assume that what you are talking about is that (t1.nsec - t2.nsec) is
first converted to double, and that the result of the double division is
then rounded. However, have you considered that the 32-bit (t1.nsec -
t2.nsec) result fits completely into the > 32-bit double mantissa, so NO
rounding can take place there? The division is guaranteed by IEEE to
yield the closest representable value. The (t1.sec - t2.sec) term could
be larger than the mantissa, but adding it will just move insignificant
bits of the division result out of the mantissa; it does not add rounding
uncertainty that could move us away from the closest possible value.
I have not yet found the time to prove it formally, but it looks very
much to me as if there is no double rounding going on here and the
presented C code is the best an implementation can do. I would not know
how to do better in assembler.
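For concreteness, the expression under discussion can be wrapped as a
difftime-style function. The struct layout and field names below are my
assumptions based on the fragment quoted above, not the actual API:

```c
#include <stdint.h>

/* Assumed layout: a 64-bit seconds field and a 32-bit nanoseconds
 * field, as in the fragment quoted above. */
struct xtime {
    int64_t sec;    /* seconds since some epoch */
    int32_t nsec;   /* nanoseconds, 0..999999999 */
};

/* Difference t1 - t2 in seconds, as a double.  The nsec difference
 * fits the 53-bit mantissa exactly; only the division and the final
 * addition can round. */
double xtime_diff(struct xtime t1, struct xtime t2)
{
    return (double) ((t1.sec - t2.sec) + (t1.nsec - t2.nsec) / 1.0e9);
}
```

For example, xtime_diff({5, 500000000}, {3, 0}) yields exactly 2.5,
since both 2 and 0.5 are exactly representable in binary.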
> Also, the interface requires information loss if the times are
> sufficiently far apart, at least on the vast majority of hosts where
> double can't represent 96-bit integers exactly. There's no easy,
> portable fix for either problem.
There is a straightforward way to represent the difference as a 96-bit
struct xtime value. The code should be completely obvious, so I didn't
want to waste time posting it as well, but I have mentioned several
times that, especially in languages with operator overloading and strong
typing, I would of course also expect xtime-only versions of the
arithmetic functions to be present in an API.
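The "obvious" exact difference might be sketched as follows; the
function name xtime_sub and the field layout are my own placeholders,
not the proposal's actual interface:

```c
#include <stdint.h>

struct xtime {
    int64_t sec;    /* seconds since some epoch */
    int32_t nsec;   /* nanoseconds, 0..999999999 */
};

/* Exact difference t1 - t2 as a struct xtime, borrowing one second
 * when the nanosecond subtraction underflows.  No precision is lost,
 * unlike the double-valued version. */
struct xtime xtime_sub(struct xtime t1, struct xtime t2)
{
    struct xtime d;
    d.sec  = t1.sec  - t2.sec;
    d.nsec = t1.nsec - t2.nsec;
    if (d.nsec < 0) {           /* borrow one second */
        d.nsec += 1000000000;
        d.sec  -= 1;
    }
    return d;
}
```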
I consider double arithmetic useful here because, although I consider it
unacceptable that timestamps become less precise the farther we get from
the epoch, I assume that most applications are perfectly happy with
floating point values in their own calculations, where the precision
decreases logarithmically with the size of the difference but is
guaranteed to be independent of the age of the epoch.
Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org, home page: <http://www.cl.cam.ac.uk/~mgk25/>