Re: IEEE float co-processors
> My only concern is that the interpreter would probably be stuck with
> doing the coercions at each step, and therefore might get a slightly
> different (less precise) answer than the compiler would come up with.
> Would that bother anyone? (Not me, but I only look at the first two
> digits anyway...)
I don't do numerical work, but working down the hall from a
particularly famous and outspoken numerical analyst as I do, I don't
dare be completely uninformed on the issue.
I think that one clearly wants the compiler and interpreter to give
identical results, not only for obvious reasons having to do with
debugging, but also for consistency with the CL campaign to bring
compiled and interpreted semantics closer together. Fateman's
proposal doesn't seem to make it hard for the interpreter to be
consistent with the compiler. For example, an implementation might
choose a scheme in which
1. All expressions of the form (arith-op a b c ...) are grouped
left-associatively (at least if one operand is floating point,
and possibly always).
2. All arithmetic involving floating point is done in extended
precision, with the result being rounded to the appropriate
precision at the end.
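Rule 1, for instance, amounts to nothing more than a left fold over
the argument list. A minimal sketch (the name LEFT-ASSOCIATE is
purely illustrative, not part of any proposal):

    (defun left-associate (op args)
      ;; Turn (OP a b c d) into (OP (OP (OP a b) c) d).  REDUCE folds
      ;; from the left by default, which is exactly the grouping
      ;; wanted.  (Assumes at least two arguments.)
      (reduce (lambda (acc x) (list op acc x)) args))

    ;; (left-associate '+ '(a b c d))  =>  (+ (+ (+ A B) C) D)

Compiler and interpreter could both apply such a transformation before
doing anything else, which is what makes agreement between them cheap.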
Is there any difficulty in having either interpreter or compiler
follow this semantics?
Frankly, I think the freedom to rearrange expressions allowed on page
194 is a bad idea for any expression that might involve floating point
operands (maybe for others, too, but my opinion isn't strong on that.) In
CL, of course, you don't have the freedom to say ``well, we'll compile
it this way for integers and ratios and that way for floating point''
because CL is not statically typed. I therefore anticipate the
objection, ``well we don't want to lower the quality of code for the
usual case (some sort of integers, I presume).'' However, in the
absence of very clear, solid, empirical evidence suggesting
appreciable degradation of performance resulting from the compiler's
inability to reassociate expressions, I'd just as soon chuck the
flexibility.
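To see that regrouping changes answers in entirely ordinary cases, one
need not look far (the example is illustrative; any two magnitudes
this far apart behave the same way):

    (+ (+ 1d20 -1d20) 1d0)    ; => 1.0d0
    (+ 1d20 (+ -1d20 1d0))    ; => 0.0d0

No overflow, no underflow, nothing exceptional; just two different
groupings of the same three operands.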
There is a prevailing opinion that ``well, floating point is
approximate, so who cares? Why not rearrange as convenient?'' The
fact of the matter is that while floating point arithmetic is
approximate, it is ``approximate in a precise way'' (at least on most
architectures, and certainly on anything conforming to the IEEE
Standard 754.) For example, on most machines x+y, x*y, x-y, and x/y
all have the property that, in the absence of over/underflow, the
result has a very small RELATIVE error (on the order of 2**(-p),
where p is the number of binary digits of precision). Indeed, for
addition (subtraction),
if the magnitudes of the operands are within a factor of 2 of each
other and the operands have opposite (the same) signs, so that
extensive cancellation occurs, the result is mathematically exact.
The common intuition that cancellation introduces error is completely
wrong. This property turns out to be very useful (for example, it
makes it possible to compute the area of a triangle with a simple
formula that works well even for very flat triangles.) It also
explains why some of us would like to avoid cavalier rounding of small
rationals to 0: doing so suddenly introduces a 100% relative error
into the computation. Better to get an error indication or the right
answer.
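Both properties are easy to check at a Lisp listener. The sketches
below are only illustrative (the second is Kahan's area formula, which
I believe is the one alluded to above); the exactness check uses CL's
RATIONAL, which converts a float to the rational number it represents:

    ;; Exact cancellation: x and y are within a factor of 2 of each
    ;; other, so x - y is computed with no rounding at all.
    (let ((x 1.8837d0) (y 1.0039d0))
      (= (rational (- x y))
         (- (rational x) (rational y))))
    ;; => T

    ;; Kahan's formula for the area of a triangle with sides
    ;; a >= b >= c.  Every pair of parentheses matters; a compiler
    ;; free to regroup could ruin it for very flat triangles.
    (defun triangle-area (a b c)
      (/ (sqrt (* (+ a (+ b c))
                  (- c (- a b))
                  (+ c (- a b))
                  (+ a (- b c))))
         4))

    ;; (triangle-area 5d0 4d0 3d0)  =>  6.0d0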
I suppose one could argue that usually one doesn't care. For
integers and ratios this is certainly true, since the answer will be
identical in any case, unless something exceptional (and in particular
rare) happens. But when regrouping changes the answer in
unexceptional cases, and when important properties of that answer by
rights could be predicted by simple and well-understood rules if only
the grouping were predictable, one has to wonder.
Paul Hilfinger