FW: time zone library
Robert Elz
kre at munnari.OZ.AU
Mon Jan 12 21:57:12 UTC 2009
Date: Mon, 12 Jan 2009 13:09:02 -0500
From: "Olson, Arthur David (NIH/NCI) [E]" <olsona at dc37a.nci.nih.gov>
Message-ID: <B410D30A78C6404C9DABEA31B54A2813029A0407 at nihcesmlbx10.nih.gov>
I'm going to leave your questions for someone else, but ...
| I'm using 64-bit time_t values, and find that it usually takes 62 to 64
| iterations. Basically, it is always worst case.
[...]
| when the resulting time_t would be positive, and it completes in 20 to
| 23 iterations.
First, I am not surprised at the "always worst case"; that's what I would
expect. The search can only finish early by pure fluke, which should save
n iterations only once in every 2^n cases (ie: with a 64-bit time_t you
might expect a run of 54 iterations about one time in a thousand).
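(For anyone wondering where the ~64 comes from: an mktime() of this
style is essentially a bisection over the entire time_t range, converting
each guess back with localtime() and comparing the broken-down fields.
A minimal sketch, assuming a 64-bit signed time_t -- this is not the
actual tz code, which also has to cope with DST gaps and repeated local
times:

#include <stdint.h>
#include <time.h>

/* Compare two broken-down times, most significant field first. */
static int tmcomp(const struct tm *a, const struct tm *b)
{
    int d;
    if ((d = a->tm_year - b->tm_year) == 0 &&
        (d = a->tm_mon  - b->tm_mon)  == 0 &&
        (d = a->tm_mday - b->tm_mday) == 0 &&
        (d = a->tm_hour - b->tm_hour) == 0 &&
        (d = a->tm_min  - b->tm_min)  == 0)
        d = a->tm_sec - b->tm_sec;
    return d;
}

/* Bisect until localtime() of the guess matches the target fields.
 * Each probe halves the range, so ~2^64 seconds take ~64 probes --
 * hence "always worst case". */
time_t sketch_mktime(const struct tm *target)
{
    /* Restrict to half the range so hi - lo below cannot overflow. */
    time_t lo = (time_t)(INT64_MIN / 2);
    time_t hi = (time_t)(INT64_MAX / 2);

    while (lo <= hi) {
        time_t t = lo + (hi - lo) / 2;
        struct tm guess;
        int c;

        if (localtime_r(&t, &guess) == NULL)
            c = (t < 0) ? -1 : 1;   /* unrepresentable: pick a side */
        else
            c = tmcomp(&guess, target);
        if (c == 0)
            return t;               /* matching second found */
        if (c < 0)
            lo = t + 1;
        else
            hi = t - 1;
    }
    return (time_t)-1;              /* no representable match */
}

A tighter initial bracket shrinks the count correspondingly, which is
presumably where the 20 to 23 iteration figure quoted above comes from.)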
What really matters is that the "worst case" here is still cheap enough
that it hardly means anything. What kind of application do you have for
which the performance of mktime() matters enough that it is worth
complicating the algorithm?
The "even at 64 bits it would still be very reasonable" comment was
written in the time when the computations were being done on Vax 11/780
(and perhaps even more commonly) 750 systems.
These days, when CPUs are a thousand times faster (or more), and even
baby embedded systems-on-a-chip are likely 10-100 times quicker than
the machines of that era, it is really hard to imagine an application
that needs to call mktime() often enough for its CPU cost to matter in
the slightest. If such an application did exist, it would probably be
better served by a tailored algorithm that reuses the results of the
thousands of conversions it must be doing to really optimise the
calculations, rather than by building a constant heuristic into the
library.
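For instance (a purely hypothetical sketch, nothing of the sort exists
in tz, and it leans on timegm(), which is nonstandard but common to
glibc and the BSDs), an application doing thousands of conversions
could remember the UTC offset from one full mktime() search and merely
verify it on later calls:

#include <time.h>

time_t cached_mktime(struct tm *tmp)
{
    static long off;            /* cached offset: local = UTC + off */
    static int have_off;
    struct tm copy, check;
    time_t t;

    if (have_off) {
        copy = *tmp;
        t = timegm(&copy) - off;    /* one-step candidate */
        /* Cheap check that the cached offset still applies; a real
         * version would compare every field. */
        if (localtime_r(&t, &check) != NULL &&
            check.tm_min  == tmp->tm_min &&
            check.tm_hour == tmp->tm_hour &&
            check.tm_mday == tmp->tm_mday)
            return t;
    }
    t = mktime(tmp);            /* slow path: the full search */
    if (t != (time_t)-1) {
        copy = *tmp;            /* tmp was normalised by mktime() */
        off = (long)(timegm(&copy) - t);
        have_off = 1;
    }
    return t;
}

Most calls then cost one timegm() and one localtime() rather than ~64
conversions, with the full search run only when a zone or DST change
invalidates the cache -- the sort of gain that only the application,
knowing its own access pattern, can arrange.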
kre