[tz] Fractional seconds in zic input
Howard Hinnant
howard.hinnant at gmail.com
Tue Feb 6 03:31:09 UTC 2018
On Feb 5, 2018, at 9:02 PM, Paul G <paul at ganssle.io> wrote:
>
> My suggestion was that the input to the compiler (which is not strongly typed, so has no `sizeof`) should either have infinite precision or a clear upgrade path (e.g. a precision specification). So far Paul's patch has no effect on the *output* of the zic compiler, and I think it's at least reasonable for the tzdb (the *input* to zic) to be able to support arbitrarily fine inputs, considering that it is the ultimate "source of truth", even without any of the other engineering concerns.
Ah, I see our disconnect here. I have no concern about the zic compiler. The zic compiler is not in my workflow. The *input* to the zic compiler is the *only* thing that concerns me. I (and several others) have our own “zic compilers” which take this input, process it, and deliver it to our customers. For us, the product of the IANA repository is only the tz database, and not the tz code. Furthermore, this “product”, the tz database, will be consumed by not just one alternative “zic compiler”, but many, and in many different languages on many different platforms.
>
> With regards to the other engineering concerns, that was what I was trying to appeal to when I said that nanoseconds are a more reasonable choice for precision *if we're arbitrarily limiting this anyway*. By selecting milliseconds or even microseconds, you're sacrificing precision in exchange for completely unnecessary range. Time zone offsets outside of +/- 1 day are dubious (and unsupported in many environments), offsets outside +/- 1 week are very unlikely, and offsets outside +/- 1 year are absurd. While both are pretty unlikely, I think nanosecond-precision offsets are much more likely than >292-year time zone offsets, so assuming that you want to truncate the inputs *at all*, it would be preferable to use nanosecond precision rather than millisecond. Honestly, I'm fine with (assuming infinite precision isn't supported) any resolution such that the range is at least ~1 week.
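For concreteness, the trade-off being weighed here: a 64-bit signed tick count spans roughly +/- 292 years at nanosecond precision, but roughly +/- 292 million years at millisecond precision. A minimal sketch of that arithmetic, assuming std::chrono's 64-bit duration types (not code from this thread):

    #include <chrono>
    #include <cstdio>

    int main()
    {
        using namespace std::chrono;
        // Seconds per mean Gregorian year (365.2425 days).
        constexpr double year = 365.2425 * 86400;
        // Half-range of a 64-bit signed tick count at each precision.
        std::printf("ns precision: +/- %.0f years\n",
                    duration<double>(nanoseconds::max()).count() / year);
        std::printf("ms precision: +/- %.0f years\n",
                    duration<double>(milliseconds::max()).count() / year);
        // Prints roughly +/- 292 years and +/- 292 million years
        // respectively.
    }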
As I’ve repeatedly expressed, the precision of the offset demands a matching precision in the time point. And it is the range of the time point, not of the offset, that concerns me.
utc_offset == local_time - utc_time
This is an algebraic equation that _must_ be true.
If utc_offset has nanosecond precision, then either local_time or utc_time must have a precision of nanoseconds or finer for this equation to hold. That is simply mathematical reality. If utc_offset == 1ns, and neither local_time nor utc_time can represent nanosecond precision, how can the above equation possibly work, aside from coincidental cases where the utc_offset happens to be an integral multiple of the precision of local_time or utc_time (e.g. a billion nanoseconds when both local_time and utc_time have seconds precision)?
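To see the same constraint at the type level, here is a minimal C++20 <chrono> sketch (the time-point values and variable names are illustrative only):

    #include <chrono>
    #include <type_traits>

    int main()
    {
        using namespace std::chrono;
        // Seconds-precision UTC and local time points (values arbitrary).
        constexpr sys_seconds utc_tp{1000000s};
        constexpr sys_seconds local_tp = utc_tp + 3600s;  // whole-second offset: representable

        // utc_offset == local_time - utc_time at seconds precision:
        // the difference is a seconds-precision duration, so it can
        // never represent a utc_offset of 1ns.
        static_assert(std::is_same_v<decltype(local_tp - utc_tp), seconds>);

        // To balance the equation for a 1ns offset, <chrono> must
        // promote the time point itself to nanosecond precision (both
        // reps are 64-bit on mainstream implementations):
        static_assert(std::is_same_v<decltype(utc_tp + 1ns),
                                     time_point<system_clock, nanoseconds>>);
    }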
Howard