
comparisons between floats and ratios



    Date: Fri, 24 Jan 86 12:25:47 PST
    From: fateman@dali.berkeley.edu (Richard Fateman)

    There is a rule (top of p 194, CLtL) indicating that CL preserves
    "as much accuracy as possible" (although not guaranteeing it).
    Thus when two numeric values are operated upon, the coercion
    is to the larger precision, which tends to preserve more accuracy.

    To be consistent with this rule, operations which combine floats of
    any precision with ratios should convert floats to ratios, not the
    reverse.

By introducing a floating-point number into a computation, you're
introducing a [hopefully] controlled inaccuracy.  CLtL is just trying to
preserve that amount of inaccuracy.
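
(For reference, the contagion rule in question looks like this -- assuming
an implementation where single- and double-floats are distinct formats:)

    (+ 1.5f0 1.0d0)   ; => 2.5d0 -- the single-float is widened to a double
    (* 2 3.0)         ; => 6.0 -- the integer is floated; the float format "wins"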

    Any float (except possibly IEEE NaN and infinity) can be converted to a
    ratio without loss of precision (and NaN = 0/0, inf = 1/0 might even be
    used).  Of course the rational answer could be re-floated if required.
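
(For instance -- assuming IEEE single-floats -- the conversion is exact and
reversible:)

    (rational 0.5)              ; => 1/2 -- exactly the float's value
    (rational 0.1)              ; => 13421773/134217728 -- the float's exact
                                ;    value on such an implementation, not 1/10
    (float 13421773/134217728)  ; => 0.1 when re-floated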

    This would solve the problem of incorrect equality of 0.0 with a
    non-zero but small ratio which "underflows".
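
(The sort of case meant here, I take it, is something like the following;
the exact behavior depends on how the implementation handles underflow when
it coerces the ratio to a single-float:)

    (= (expt 10 -100) 0.0)   ; the tiny ratio underflows to 0.0 when floated,
                             ; so this can return T even though 1/10^100 /= 0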

    Now consider comparing the floating point number 0.5 with an integer
    sufficiently large as to overflow the exponent range of floats.  The
    comparison is "an error", according to CLtL (p 194).  If one is to
    ensure correct handling of the large number of numeric data types
    in the language specification, I think the comparison should return
    the mathematically correct answer.
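
(Concretely -- assuming single-floats:)

    (< 0.5 (expt 10 1000))   ; mathematically true, but coercing the bignum
                             ; to a float overflows, which CLtL calls "an error"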

    (Alternatively, one could consider re-building all built-in functions 
    for applications like Macsyma.) 

You bring up three issues:
 - the coercion rules for mixed-type operations (+, -, *, etc.)
 - the coercion rules for mixed-type comparisons (=, <, etc.)
 - "correct answers" for mixed-type comparisons vs. blowing out 

I believe that although they are related, they are orthogonal.  So I
will discuss them separately.

I don't think that you really want the coercion rules for operations to
prefer rational arithmetic to floating-point.  Consider that (+ x 1)
will leave you with a rational result.  Remember that rational
canonicalization means you can't distinguish a "ratio" 1/1 from an
integer 1.  You'd have to be incredibly compulsive to keep your
computations from consing ratios and taking forever.
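
To make that concrete (the exact ratio below assumes IEEE single-floats):

    (+ 0.5 1)   ; => 1.5 under CL's float contagion
    ;; under a rational-preferring rule, (+ 0.5 1) would instead be 3/2, and
    ;; (+ 0.1 1) would cons up something like 147639501/134217728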

The reason we keep floating-point around is to do computations
efficiently.  By introducing a float, you're saying, "I know the answer
might not be exactly correct, but I know what I'm doing -- just give me
an answer, and give it to me fast."  [-: I do know that a lot of people
who use floating-point don't know what they're doing.  But we should
pretend they do. :-]

You might want comparisons to be done rationally, although pretending
that 0.0 is the same as 0 seems foolish to me.  And the efficiency issue
remains, since (< x 1) would most likely cons you a ratio of bignums.
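
For instance, with a double-float (the exact digits assume IEEE doubles;
whether the two halves are bignums depends on the implementation's fixnum
size, but with 1986-era fixnums they are):

    (rational 0.1d0)   ; => 3602879701896397/36028797018963968
    ;; a rational (< 0.1d0 1) would have to cons up that ratio first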

I do agree that you'd like to get answers to any comparison that doesn't
involve NaNs.  But that's possible even with the current CLtL coercion
rules.  In the next release of the Symbolics system, mixed-type
comparisons are done in floating-point a la CLtL, but over- and underflow
are accounted for so that "correct" answers are always given.
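
One way to arrange that -- a sketch only, not the actual Symbolics code, and
assuming single-floats -- is to screen out the out-of-range cases before
floating the rational, and to fall back on an exact comparison only when
rounding could have hidden the difference:

    (defun float-rational-< (f r)
      ;; F is a single-float, R a rational.  Sketch only.
      (cond ((> r (rational most-positive-single-float)) t)    ; R is beyond the top
            ((< r (rational most-negative-single-float)) nil)  ; R is below the bottom
            (t (let ((fr (float r 1.0f0)))                     ; safe to float R now
                 (if (= f fr)
                     (< (rational f) r)   ; rounding may have hidden the difference
                     (< f fr))))))        ; otherwise the float comparison is decisive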