FW: time zone library

John Dlugosz JDlugosz at TradeStation.com
Mon Jan 12 22:38:48 UTC 2009

When you have a server farm of over 200 high-end multicore machines,
eking a little more performance out is worth some programmer time, as
compared with the price of buying more servers.  Even 1% is basically 2
more machines on the rack, with their inherent cost of ownership.

Basically, I found me a niche where performance still matters <grin>.

Actually, I think that the more complex line of code isn't _that_
complex, and saves a lot.  The real issue is testing.  "Check
everything" is less likely to have a mistake.  I'm actually making a
comprehensive test which checks times at 15-30 minute intervals from
1930 through 2010, for every TZ file.  That will let me do a full
regression test of my code against your original.

Since I'm dealing with local times of various places in the application,
a major difference is to make timezone objects that can be instantiated,
rather than a single global setting.

BTW, what is the official way to refer to or cite this code and the
associated database?  I see several names and abbreviations in use.


-----Original Message-----
From: kre at munnari.OZ.AU [mailto:kre at munnari.OZ.AU] 
Sent: Monday, January 12, 2009 3:57 PM
To: John Dlugosz
Cc: tz at elsie.nci.nih.gov
Subject: Re: FW: time zone library 

    Date:        Mon, 12 Jan 2009 13:09:02 -0500
    From:        "Olson, Arthur David (NIH/NCI) [E]"
<olsona at dc37a.nci.nih.gov>
<B410D30A78C6404C9DABEA31B54A2813029A0407 at nihcesmlbx10.nih.gov>

I'm going to leave your questions for someone else, but ...

  | I'm using 64-bit time_t values, and find that it usually takes 62 to
  | 64 iterations.  Basically, it is always worst case.


  | when the resulting time_t would be positive, and it completes in 20
  | to 23 iterations.

First, I am not surprised at the "always worst case"; that's what I
expect.  It can only be quicker by pure fluke, which should save n
iterations in 1/2^n of cases (ie: you might expect to have it
take 54 iterations (64 bit time_t) one time in a thousand or so).
What really matters is that "worst" here doesn't really mean very much.

What kind of application do you have for which the performance of
mktime() matters enough that it is worth complicating the algorithm?

The "even at 64 bits it would still be very reasonable" comment was
written at a time when the computations were being done on VAX 11/780
(and, perhaps even more commonly, 11/750) systems.

These days, when CPUs are a thousand times faster (or more), and even
baby embedded system-on-a-chip devices are likely 10-100 times quicker
than the systems of the time, it is really hard to imagine an
application that could really need to call mktime() often enough for
its CPU cost to matter in the slightest.  If such an application did
exist, it would probably be better to tailor an algorithm that could
make use of the results of all the thousands of conversions it must be
doing to really optimise the calculations, rather than just building in
a constant-factor improvement.

