
more boring-to-most-people floating point stuff



    Date: Sun, 26 Jan 86 14:13:29 PST
    From: fateman@dali.berkeley.edu (Richard Fateman)

    The example (/ (expt 10 68) 1.0e38) is illustrating
    (<operator> <number-type-1> <number-type-2>) whereas
    (/ (* 1.0e34 1.0e34) 1.0e38) is illustrating (recursively)
    (<operator> <number-type-1> <number-type-1>).

    Only when you combine two number types, one majorizing the other,
    do you face this issue. The intent when combining different precisions
    (and ranges) within floats is clear.  You coerce to the longer precision
    because it is likely to give you the right answer. Not the fastest.
    A reasonable extension of this is that if you mix floats with even
    longer-precision numbers (i.e., rationals), you convert to rational,
    because that gives you the right answer, not the fastest.

    You can, of course, promote the notion that "anything goes" as soon as exact
    and approximate numeric types are mixed, or claim that it would be too
    costly to do the right thing. 

I agree it's a question of what's right, not what's costly.  [Now that
we have the EGC (ephemeral garbage collector), we don't worry so much
about consing.]  I think my
earlier statements about efficiency were really trying to get at my last
point below about generic numeric functions.  (Although I would like my
generic functions to be as efficient as possible.)
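
To make your first example concrete, here is roughly how the two
coercion directions play out (just a sketch, reading 1.0e38 as a
single-float; exactly what happens when the bignum won't fit in the
target float format is implementation-dependent):

    ;; Float contagion: coerce the exact integer into the float's format
    ;; first.  10^68 is outside single-float range (about 3.4e38), so
    ;; this typically overflows or yields an infinity.
    (/ (float (expt 10 68) 1.0e38) 1.0e38)

    ;; Rational contagion: coerce the float to its exact rational value
    ;; instead, and the division is done exactly, giving a rational very
    ;; close to 10^30 (not exactly 10^30, since 1.0e38 is not exactly 10^38).
    (/ (expt 10 68) (rational 1.0e38))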

    To say this is the wrong forum for this is to
    appear at odds with the space devoted to this
    and other numeric data types in the language specification.

Okay, I think I see what you're driving at.  You are saying that
rational arithmetic is "right" (mathematically correct) and so
computations should be encouraged to happen rationally.  A user who
*really* wants floating-point should have to be careful to say so.
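
Presumably "saying so" would mean an explicit coercion at the end, along
these lines (a sketch; the particular float formats are arbitrary):

    ;; Compute exactly, then explicitly ask for an approximation.
    (coerce (* 2 (rational pi)) 'double-float)   ; => 6.283185307179586 if PI is a double-float
    (coerce 1/3 'single-float)                   ; => 0.33333334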

I agree that we should be pushing rational arithmetic.  But I think that
we should point out that use of a floating-point number anywhere in a
computation must be assumed to have "tainted" the result.  I think it
would be misleading and less useful for (* pi 2) to return
884279719003555/140737488355328.  It would certainly be confusing for
(* .001 10) to return 42949675/4294967296.  It's hard enough explaining
that (* .001 10) => 0.010000001.
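
For the record, those rationals are just the exact values of the floats
involved; a sketch of where they come from, assuming PI is a double-float
and .001 reads as a single-float:

    ;; RATIONAL exposes the exact value a float denotes.
    (* (rational pi) 2)       ; => 884279719003555/140737488355328
    (* (rational 0.001) 10)   ; => 42949675/4294967296
    ;; Ordinary float contagion keeps the result approximate instead:
    (* 0.001 10)              ; => 0.010000001 (printed value may vary slightly)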

I guess I prefer that any fuzziness in the computation be reflected as
fuzziness in the result.  You could object that coercion should then
prefer the smaller precision rather than the larger, and I'd probably
agree.
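
If we did go that way, a toy version of "prefer the smaller precision"
might look like this (purely hypothetical -- FUZZY-* is a name I just
made up, and it only handles two float arguments):

    ;; Hypothetical sketch: combine two floats in the *shorter* format, so
    ;; the fuzziness of the least precise operand shows up in the result.
    (defun fuzzy-* (x y)
      (let ((proto (if (<= (float-digits x) (float-digits y)) x y)))
        (* (float x proto) (float y proto))))

    (fuzzy-* 1.5f0 (/ pi 2))   ; both arguments are combined as single-floats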