Date: 19 Feb 1986 21:36-PST
From: David Bein <pyramid!bein@sri-unix>
What do others think the reader for floating-point numbers
should do when it computes a zero fraction (for float-digits
equal to 53 this is the significand 4503599627370496, i.e. a
53-bit 1.F value with F = 0) together with an exponent so small
that the pair would yield 0.0 if given to scale-float?
(Assume the input digits are non-zero.)
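To make the scenario concrete, here is a sketch in Python for IEEE
doubles (float-digits = 53), using math.ldexp as the analogue of
scale-float:

```python
import math

# A 53-bit significand whose fraction bits are all zero (the 1.F
# value with F = 0) is 2^52 = 4503599627370496.
significand = 4503599627370496
assert significand == 2 ** 52

# Scaling it by a small enough exponent underflows to 0.0, even
# though the input digits were non-zero.
print(math.ldexp(significand, -1126))  # 2^-1074: smallest denormal, non-zero
print(math.ldexp(significand, -1128))  # 2^-1076: underflows to 0.0
```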
I can see one of the following possibilities:
(1) Could be quiet and produce the smallest possible non-zero
float of the desired type.
(2) Could be quiet and return 0.0.
Either (1) or (2) could be construed as producing "the closest
representable value to the input", but they both seem wrong. Mostly
they are both misleading in that the floating-point result bears little
resemblance to the input.
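For concreteness, this is what options (1) and (2) would return for an
input such as 1e-400 read into an IEEE double (a Python sketch;
CPython's own reader happens to behave as in (2)):

```python
import math

# (1) quietly produce the smallest possible non-zero float:
smallest = math.ldexp(1.0, -1074)  # smallest positive denormal
assert smallest > 0.0

# (2) quietly return 0.0 -- this is what CPython's reader does:
assert float("1e-400") == 0.0

# Either way, the result bears little resemblance to 1e-400.
```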
(3) Could complain about non-zero digits turning into 0.0.
(read-from-string "1e-50") on the Symbolics system currently blows out
with the following error message:
"The number 1E-50 is too small in magnitude for single-precision floating-point format."
Underflow which leaves a denormalized nonzero result is accepted
quietly, since the value is as accurate as possible given the format.
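The same distinction can be observed in Python: a literal that lands
in the denormalized range is read quietly, exactly as described above,
while one below the denormal range silently becomes zero:

```python
import sys

d = float("1e-320")                  # below the smallest normal double
assert 0.0 < d < sys.float_info.min  # a denormal, accepted quietly
assert float("1e-400") == 0.0        # this one underflows all the way to zero
```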
(4) Compute it in a higher format if possible and let it die
when converting that to the desired format.
I don't see how this differs from (3) except that the message the user
gets is more obscure.
Note that this question also covers what would be described
as exponent underflow, since the mantissa in that case would be
too small once the exponent is shifted back into range.
I vote for #3. Note that a proper printer based upon Guy
Steele's "How To Print Floating Point Numbers..." will not
produce digits which fall into this class, so this is only
a question for "virgin" input.
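A round-trip check of the sort such a printer guarantees can be seen
in Python, whose repr uses a shortest-round-trip printing algorithm in
the same spirit:

```python
# repr prints the shortest digit string that reads back to the same
# float, so printed output never falls into the underflow-to-zero
# class discussed above -- even at the very bottom of the range.
x = 5e-324          # smallest positive denormal double
s = repr(x)
assert float(s) == x  # round-trips exactly
assert float(s) != 0.0
```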