Paul Eggert eggert at CS.UCLA.EDU
Tue Jul 27 18:20:48 UTC 2004

Robert Elz <kre at> writes:

> That buffer is big enough for that string, but are we sure that string
> is the longest that can ever happen?

Yes.  It's an information-theoretic argument.  At compile-time we know
the number of bits in an int, so we can compute at compile-time an
upper bound on the number of digits that can be printed.  This is true
even for weird architectures that have "holes" in their int
representation, since the holes can't increase the number of digits.

> The year is after all printed using %ld - so there must at least be
> the potential for that one to be a long, which might be
> -9223372036854775808.

But that 'long' value is derived by adding 1900 to an 'int'.  It can't
possibly take more digits than the number of digits printed in an int.
Even if adding 1900 overflows INT_MAX, it will add at most one digit
to the print width, and that extra byte is already accounted for by
the width of INT_MIN (which has a leading minus sign).

Hmm, perhaps this fairly-subtle point should be commented.  I'll add a
comment in my next proposed draft.

> Beyond that, C doesn't actually promise that "int" is limited to 32
> bits does it?   Given that, the 2147483648 numbers are just speculation.

That part of the comment is just an example: the code works even with
wider ints.  I'll propose a reworded comment to make this clearer.

> Other than validating the input, before conversion to strings, to see
> that it will be printable in a reasonable number of characters, snprintf
> is really the only good solution.   Where it doesn't exist, just
> use a HUGE buffer - it is just stack space after all,

That approach runs into a different problem.  On most modern
architectures, stack-overflow checking is fairly brain-damaged.
Sometimes there's no checking whatsoever (ouch!), but more typically
the assumption is that one does not put HUGE buffers on the stack; if
you violate this assumption, the behavior is undefined.  A 1K local
buffer is safe on all the platforms I know of (i.e., stack overflow
will be detected if you use the buffer right away), but an 8K buffer
isn't.

Anyway, it's always better not to allocate space you don't need, as
this avoids stack overflow in some cases; this holds even on
architectures with perfect stack-overflow checking.

I agree that in general snprintf is the way to go: it leads to a
higher-performance solution, since it avoids an extra buffer copy.
However, the tradeoff is that it makes the code either less portable
(older hosts lack snprintf, or have a buggy one) or more complicated
(if you add ifdefs).  So I can understand Arthur's preference to
stick with tried-and-true sprintf here.
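For hosts that do have a working C99 snprintf, the shape of the safer
version is straightforward.  A minimal sketch (the function name and
error convention are my own, not from the tz code):

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical helper: format tm_year + 1900 into a caller-supplied
   buffer.  Unlike sprintf, snprintf never writes past outsize, and
   its return value reveals whether the result was truncated. */
int format_year(char *out, size_t outsize, const struct tm *tm)
{
  int n = snprintf(out, outsize, "%ld", tm->tm_year + 1900L);
  return (n >= 0 && (size_t) n < outsize) ? 0 : -1;  /* -1: truncated */
}
```

With a correctly sized buffer the truncation branch is dead code, which
is exactly the fixed-bound analysis above; snprintf just turns a
miscount into a detectable error instead of undefined behavior.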

More information about the tz mailing list