Default timezone

Guy Harris seismo!sun!guy
Sat Dec 20 23:11:42 UTC 1986


	I mean local wallclock time (conversion rules out of an instruction
	would be a little weird...  but this processor has a "load.time"
	instruction, or something like that, which returns local wallclock
	time, 64 bits, high precision, and lasts forever...)

By "local wallclock time", do you mean local wallclock time (DST and all),
or just the number that it was originally set to at time T plus the number
of clock ticks since time T?  If it's the latter, it could just as easily
keep "UNIX time".

	The time system call (I can't remember now if it's in the kernel or
	not or if time(2) just does this in each user's code) gets that value
	and then applies basically the reverse transformation of localtime()
	to get GMT to return to the user...

	My point was that the method of discovering what local wall clock time
	is shouldn't be constrained at all in the standard.

But in most cases you get wall clock time by doing "time" or "gettimeofday"
and feeding that to "localtime"; if you do that on this machine, and its OS
is really UNIX-compatible, the fact that the hardware keeps some flavor of
local time is irrelevant.  The application would get "UNIX time", and then
call "localtime" to convert that back to local time in some time zone.

The standard *already* constrains the way you discover what local wall clock
time is; you get "UNIX time" from "time" and feed it to "localtime".
Hardware that keeps "wall clock time" is, in a POSIX system, an obstacle to
overcome (you have to convert "wall clock time" to "UNIX time") rather than
something that makes certain operations easier.
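
That constrained path, sketched minimally (this uses only the existing
"time" and "localtime" interfaces; the output format is arbitrary):

	#include <stdio.h>
	#include <time.h>

	int
	main(void)
	{
		time_t now;
		struct tm *local;

		now = time((time_t *)NULL);	/* "UNIX time": seconds since the Epoch, GMT */
		local = localtime(&now);	/* broken down per the local (TZ) rules */
		printf("%04d-%02d-%02d %02d:%02d:%02d local time\n",
		    local->tm_year + 1900, local->tm_mon + 1, local->tm_mday,
		    local->tm_hour, local->tm_min, local->tm_sec);
		return 0;
	}

Whatever flavor of time the hardware keeps, the program only ever sees the
"time_t" and the broken-down result.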

If you really don't want the standard to constrain how you get local wall
clock time, you'd have to add a function like "localtime" that takes *no*
arguments, and returns the broken-down value of the wall clock time, or add
a function like "time" that returns wall clock time as a "time_t" and use
"gmttime" or something if you want to break that down.

	    The S5R3.1 scheme does handle this, by having "init" put the
	    time zone in its environment.

	No, that only gets it to the children of init, and their children,
	until someone types "env - sh" to their shell...

How is typing

	env - sh

to your shell different from typing

	TZ=Europe/Albania; export TZ

to your shell?  In either case, the user is explicitly reaming out their
environment.  Are you saying that even if a user reams out their
environment, programs they run should get local time by default?  I neither
agree nor disagree with this; I haven't seen what I consider to be a good
argument either way.

	    Actually, "settz()" could be implemented in the S5R3.1 scheme.

	Yes, the problem is whether AT&T will do it or not.

Since some way of permitting a program to get all time conversions to be
done with the "official" local rules is required, the standard must include
something like that.  The S5R3.1 scheme already has a way of doing it; you
open "/etc/TIMEZONE", read it, and "putenv" the value in there.  However, 1)
you don't want to have to write the same idiom 500 times for 500 programs,
and 2) you want the standard to have a way of requesting "official" local
time without having to know so many details of how the rules are specified,
so that the standard doesn't have to require you to use TZ and all the
baggage (i.e., the ability for the user to choose which rules they want)
that this drags in.
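
The idiom itself is short; a rough sketch (this assumes /etc/TIMEZONE
contains a line of the form "TZ=<value>", and does no error handling beyond
the obvious):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <time.h>

	/* Sketch: pick up the "official" TZ setting from /etc/TIMEZONE. */
	void
	use_official_tz(void)
	{
		static char tzval[256];	/* static: putenv() keeps the pointer */
		char line[256];
		FILE *fp;

		if ((fp = fopen("/etc/TIMEZONE", "r")) == NULL)
			return;		/* no file; leave TZ alone */
		while (fgets(line, sizeof line, fp) != NULL) {
			if (strncmp(line, "TZ=", 3) == 0) {
				line[strcspn(line, " \t;\n")] = '\0';
				(void) strcpy(tzval, line);
				putenv(tzval);	/* override whatever TZ the user had */
				break;
			}
		}
		(void) fclose(fp);
		tzset();	/* make localtime() pick up the new rules */
	}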

So you want some way of saying "please give me local rules", rather than
some way of getting the "name" of the local rules and some way of saying
"please give me this set of rules".  "settz((char *)NULL)" seems fine to me.
(The behavior when "settz" is passed a string rather than a null pointer
could be left up to the implementation, in case any implementor might object
to a fully general scheme.)  If the standard does include that, AT&T could
implement it, and would have to if they wanted to sell to somebody who
required POSIX compliance.
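
One way "settz" could sit on top of that scheme (a sketch; it reuses the
hypothetical use_official_tz() above, and treating a non-null string as a TZ
value is only one choice, since that case could be left up to the
implementation):

	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	extern void use_official_tz(void);	/* the /etc/TIMEZONE sketch above */

	/* Hypothetical settz(): a null pointer means "give me the official
	   local rules"; anything else is taken here as a TZ value. */
	void
	settz(const char *value)
	{
		static char buf[256 + 3];	/* static: putenv() keeps the pointer */

		if (value == NULL) {
			use_official_tz();
		} else {
			(void) snprintf(buf, sizeof buf, "TZ=%s", value);
			putenv(buf);
			tzset();
		}
	}

A program that wants the official rules would then call "settz((char *)NULL)"
before its first call to "localtime".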


