ballot 2 - result
Sat Apr 4 15:30:52 UTC 1987
Here are the opinions on how the time_t type should be defined.
It may not have been clear what I was getting at with the questions about
"how does your favorite machine architecture allow for char addressing" or
"how do you measure distances". Those questions were meant to show how
time measurement is analogous to distance measurement and to addressing,
and how one selects a particular representation.
Time_t has been a single unit measure -- seconds. Compare that with
how addresses for Intel chips are formed -- segment and offset. In that scheme,
there can be multiple "names" for the same location that can be resolved
only by a normalization process. Two of the choices (e, f, and maybe b)
were also composed of two parts that, while they do extend the range, may
have drawbacks based on their form. Distances can be given in a form that
has several units (miles, feet, inches).
Calculations are easier if there is only one unit; no conversions are needed.
2. The type time_t should be defined as:
a. a signed long integer whose value is the number of seconds
since 1/1/70 0:0:0 UT (the SVID definition)
b. an implementation defined type (a long, or struct, or float or ?)
c. a signed long integer whose value is the number of seconds
since 1/1/70 0:0:0 UT but whose bits may have a special meaning
d. an unsigned long whose value is the number of seconds
since 1/1/70 0:0:0 UT and extends to the year 2100
e. a signed long integer whose value is the number of seconds
from a system- or implementation-selected date and time
f. a companion value to a date_t where time_t is number of seconds
since the start of the day represented in date_t
Choose 'a'. It is too late for a change to happen. Using an "overloaded
meaning" time_t will bite those who are not aware of the circumstances
where a particular bit pattern is not a simple count-of-seconds. The
other choices could be supported with a library that parallels the functions
provided by the time() and ctime(3) functions.
I vote for c', namely:
an unsigned long integer whose value is the number of seconds
since 1/1/70 0:0:0 UT but whose bits may have a special meaning:
a value of all 1's means "error";
anything whose 25th through 32nd least significant bits (e.g.
the 8 high bits in a 32-bit long) are all 1 is interpreted
instead as follows: take the low order 24 bits as a signed
24-bit integer; the time then represents 0:0:0 (or an
unspecified time) on the date that many days since 0/0/0.
Alternatively, "error" might be taken as 9 high order 1's, a day 2^23
days before the origin.
--Arthur David Olson--
A. If any other definition is decided on, the type name must be something
other than time_t. Any other approach will break existing code.
I vote for 'f' --
with 'f' we become more independent and are on the "safe side".
For me, (a) is the clear choice. Only (a) and (d) provide effective
compatibility for most programs. (c) is just a way to add complexity
to a programmer's life without adding real power; everybody will have
to deal with the special bits, but the people who actually care about
them will not be saved any code over using a more general time type.
This is contrary to the "spirit of UNIX". More practically, it is
contrary to the principle of simplicity that has made UNIX so wonderful
for programmers. (b) and (e) cost too much in portability of existing
programs (although (e) is not as bad as first appears, since most time
calculations are in effect relative). (f) provides too many ways to
represent the same time, which means that many programs would have to
have code to normalize time representations to the same date_t before
they could manipulate data, again violating the principle of simplicity.
It seems to me that there are really two problems here: time stamping
and arithmetic on stamps, and general operations on points in time.
The existing time_t is adequate for the first. Making it signed doesn't
really help with the general problem, but does make it reasonable to
represent time differences using a time_t. Of the other options, only
(b) and (f) address the general problem, and they do so poorly and at
the expense of simplicity and portability. I'd rather either design
a really easy-to-use general mechanism, or just punt the problem and
say "if you want more, use a private data structure". Note that a
truly general solution is pretty difficult, since it is easy
to argue for both picosecond resolution and megayear range. Since
a year is about 10**7 seconds, this requires a dynamic range of
10**25, or about 2**84. Floating-point is a pretty good answer, though
of course it introduces all of the myriad problems of floating arithmetic.
> A. Is the range of 1970 +/- 67 years sufficient?
Yes and no. One can infer from the lifetimes of OS/8 and IBM's DOS that there
will be UNIX machines in operation well past the year 2038. However,
it is quite likely that none of the machines will be 32-bitters by then,
so the question may become moot.
The negative range, of course, is pretty useless for serious date work,
but it seems pretty adequate for time differences. However, since there
will be files like /bin/true that will still have a 1980's modification
date in the 21st century, the negative range must always be able to handle
the difference between the current time and the epoch.
> B. What is a 'long' for machines with different architectures?
K&R guarantee that a long has at least 32 bits. Time_t should not be
required to be a C "long" type; it should simply be required to be at
least 32 bits. The wide-word types of the future can use their 32-bit
shorts if they want, or they can use a 128-bit long to increase accuracy.
> C. How is an invalid time represented?
Why is it necessary to represent one? If you decide on a representation,
then everybody has to check for it. It is one thing with IEEE floating
point, where the check can be done in parallel by hardware, and quite
another with a type checked in software. In any case, what is the meaning
of the term
"invalid time"? All times are valid; they just might not be accurate.
More information about the tz