[tz] Fractional seconds in zic input
Paul G
paul at ganssle.io
Tue Feb 6 02:02:58 UTC 2018
My suggestion was that the input to the compiler (which is not strongly typed, so has no `sizeof`) should either have infinite precision or a clear upgrade path (e.g. a precision specification). So far Paul's patch has no effect on the *output* of the zic compiler, and I think it's at least reasonable for the tzdb (the *input* to zic) to be able to support arbitrarily fine inputs, considering that it is the ultimate "source of truth", even before any of the other engineering concerns come into play.
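To make that "upgrade path" concrete, here is a minimal sketch in C of how a precision specification could work for an offset field. This is illustrative only, not zic's actual code: parse_offset_ticks and TICKS_PER_SEC are hypothetical names, sign handling is omitted, and only the hh:mm:ss form is accepted for brevity. The point is that input finer than the declared tick size is rejected rather than silently truncated:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TICKS_PER_SEC 1000000000LL	/* declared resolution: nanoseconds */

/* Parse "hh:mm:ss[.fraction]" into an integer tick count.
   Returns 0 on success, -1 on malformed or too-fine input. */
static int
parse_offset_ticks(const char *field, int64_t *ticks)
{
	long long hh, mm, ss;
	if (sscanf(field, "%lld:%lld:%lld", &hh, &mm, &ss) != 3)
		return -1;
	int64_t t = ((hh * 60 + mm) * 60 + ss) * TICKS_PER_SEC;
	const char *dot = strchr(field, '.');
	if (dot) {
		int64_t scale = TICKS_PER_SEC;
		for (const char *p = dot + 1; *p >= '0' && *p <= '9'; p++) {
			scale /= 10;
			if (scale == 0)
				return -1;	/* finer than our ticks: reject, don't truncate */
			t += (*p - '0') * scale;
		}
	}
	*ticks = t;
	return 0;
}

Raising TICKS_PER_SEC later is then a pure widening change for the parser, which is all I mean by an upgrade path.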
With regards to the other engineering concerns, that was what I was trying to appeal to when I said that nanoseconds are a more reasonable choice of precision *if we're arbitrarily limiting this anyway*. By selecting milliseconds or even microseconds, you're sacrificing precision in exchange for completely unnecessary range. Time zone offsets outside of +/- 1 day are dubious (and unsupported in many environments), offsets outside of +/- 1 week are very unlikely, and offsets outside of +/- 1 year are absurd. While both are pretty unlikely, I think nanosecond-precision offsets are much more likely than time zone offsets beyond the +/- 292 years that a 64-bit nanosecond count can represent, so assuming you want to truncate the inputs *at all*, it would be preferable to use nanosecond precision rather than millisecond. Honestly, assuming infinite precision isn't supported, I'm fine with any resolution whose range is at least ~1 week.
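For concreteness, here is the arithmetic behind that claim, assuming offsets are stored as a signed 64-bit tick count (a small illustrative program, not anything in the tz distribution):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Range of a signed 64-bit tick count at various resolutions. */
	const double secs_per_year = 31556952.0;	/* average Gregorian year */
	const struct { const char *name; double ticks_per_sec; } res[] = {
		{ "milliseconds", 1e3 },
		{ "microseconds", 1e6 },
		{ "nanoseconds",  1e9 },
	};
	for (size_t i = 0; i < sizeof res / sizeof res[0]; i++)
		printf("%s: +/- %.0f years\n", res[i].name,
		       (double)INT64_MAX / res[i].ticks_per_sec / secs_per_year);
	return 0;
}

That prints roughly +/- 292 million years for milliseconds, +/- 292 thousand years for microseconds, and +/- 292 years for nanoseconds; even the last is about 15,000 times the ~1 week of range anyone plausibly needs.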
On 02/05/2018 08:30 PM, Howard Hinnant wrote:
> On Feb 5, 2018, at 7:31 PM, Paul G <paul at ganssle.io> wrote:
>>
>>> I want no truncation whatsoever. I want to do exact time arithmetic.
>>
>> Then why are you advocating for a 1ms precision? If you don't want any truncation, then you should be arguing for unlimited precision representations. Anything else will necessarily be a truncation.
>
> We’re having a philosophical argument. We both want the “truth”, but the “truth” is also elusive. For example, if the two of us agreed that nanosecond precision of an offset is what was agreed upon in 1937 for some time zone, what is to prevent someone from later coming along and saying, no, actually we need picosecond resolution? Or femtosecond resolution?! Ultimately we could argue ourselves down to Planck-time resolution. This would obviously be ridiculous. And if we accept that observation, then somewhere between Planck-time resolution and gigasecond resolution lies the optimum answer. Finer is not always better and coarser is not always better. There exists an optimum between these two ridiculous extremes. If you’re going to argue for a specific resolution (e.g. nanosecond), I would like to base that on something better than “finer is better”, because I can go finer than nanoseconds, no problem. And modern CPUs have a clock tick at sub-nanosecond levels, so there’s a reasonable argument to go there.
>
> Couple that with: Finer precision implies shorter range for a given number of bits.
>
> And we have an engineering tradeoff for precision vs range. We can have the ultimate precision or the ultimate range, but not both. We need to factor in engineering judgement on the best tradeoff of precision vs range for a given sizeof(representation).
>
>>
>>> If I have an offset of 1ns, and I add that to a time point of 1us UTC, the result is 1001ns in time zone X. To be able to accurately represent the time point in Zone X I have to be able to exactly represent 1001ns.
>>
>> True. This project does not decide what the time zones will be, though. You will have this problem if and only if some zone decides on an offset with nanosecond precision, and if that happens, tzdb will either have to truncate the real data to fit this arbitrary cutoff, or a second change to the precision supported will need to happen.
>>
>> Of course it's unlikely that any zone will actually implement an offset with sub-millisecond precision, but I'm not buying arbitrarily limiting it to milliseconds on the *input* to the compiler on that basis.
>
> I have an engineering background and cannot help but view things through a benefit/cost-ratio analysis. I am 100% against prioritizing one dimension (e.g. precision) while ignoring other dimensions (e.g. sizeof, range, real-world application, backwards compatibility, etc.). To prioritize precision above all else means that we represent the offsets, time points, and time durations with a “BigNum type” that allocates memory on the heap to represent arbitrary precision and range. That (imho) is not on the table.
>
> Howard
>
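For what it's worth, the 1 ns + 1 us example above is trivial to make exact once the offset and the time point share a common integer tick. A minimal sketch, assuming nanosecond ticks:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	int64_t utc_ns    = 1000;	/* time point: 1 us past some epoch */
	int64_t offset_ns = 1;		/* zone offset: 1 ns */
	printf("%lld ns\n", (long long)(utc_ns + offset_ns));	/* prints "1001 ns", exactly */
	return 0;
}

Store either quantity at millisecond resolution instead and both truncate to zero, which is exactly the truncation at issue.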