# [tz] Fractional seconds in zic input

Howard Hinnant howard.hinnant at gmail.com
Tue Feb 6 01:30:08 UTC 2018

```
On Feb 5, 2018, at 7:31 PM, Paul G <paul at ganssle.io> wrote:
>
>> I want no truncation whatsoever.  I want to do exact time arithmetic.
>
> Then why are you advocating for a 1ms precision? If you don't want any truncation, then you should be arguing for unlimited precision representations. Anything else will necessarily be a truncation.

We’re having a philosophical argument.  We both want the “truth”, but the “truth” is also elusive.  For example, if the two of us agreed that nanosecond precision of an offset is what was agreed upon in 1937 for some time zone, what is to prevent someone from later coming along and saying, no, actually we need picosecond resolution?  Or femtosecond resolution?!  Ultimately we could argue ourselves down to Planck-time resolution.  This would obviously be ridiculous.  And if we accept that observation as ridiculous, then somewhere between Planck-time resolution and gigasecond resolution lies the optimum answer.  Finer is not always better, and coarser is not always better.  There exists an optimum between these two ridiculous extremes.  If you’re going to argue for a specific resolution (e.g. nanosecond), I would like to base that on something better than “finer is better”, because I can go finer than nanoseconds, no problem.  And modern CPUs have clocks that tick at sub-nanosecond intervals, so there’s a reasonable argument to go there.

Couple that with:  Finer precision implies shorter range for a given number of bits.

And we have an engineering tradeoff for precision vs range.  We can have the ultimate precision or the ultimate range, but not both.  We need to factor in engineering judgement on the best tradeoff of precision vs range for a given sizeof(representation).

>
>> If I have an  offset of 1ns, and I add that to a time point of 1us UTC, the result is 1001ns in time zone X.  To be able to accurately represent the time point in Zone X I have to be able to exactly represent 1001ns.
>
> True. This project does not decide what the time zones will be, though. You will have this problem if and only if some zone decides on an offset with nanosecond precision, and if that happens, tzdb will either have to truncate the real data to fit this arbitrary cutoff, or a second change to the precision supported will need to happen.
>
> Of course it's unlikely that any zone will actually implement an offset with sub-millisecond precision, but I'm not buying arbitrarily limiting it to milliseconds on the *input* to the compiler on that basis.

I have an engineering background and cannot help but view things through a benefit/cost analysis.  I am 100% against prioritizing one dimension (e.g. precision) while ignoring other dimensions (e.g. sizeof, range, real-world application, backwards compatibility, etc.).  To prioritize precision above all else means that we represent the offsets, time points, and time durations with a “BigNum type” that allocates memory on the heap to represent arbitrary precision and range.  That (imho) is not on the table.

Howard

```