> The concept that 64 bits could be construed as a reasonable integer
> limit is silly since there are many hardware architectures where a basic
> arithmetic operation can be done on 64 bits in both integer and floating
> point. Also 64 bits is only ~19 decimal digits.
What has the size of the basic hardware operation got to do with
anything? Frankly, as long as the standard doesn't specify a minimum
size for bignum coverage, I would say 64 bits would be conforming.
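For what it's worth, bignum arithmetic past the 64-bit line takes no
extensions at all in CL; a minimal REPL sketch (expt and integer-length
are standard functions):

```lisp
;; 2^64 is the first integer that won't fit in 64 unsigned bits,
;; yet it is ordinary integer arithmetic in any CL with bignums:
(expt 2 64)                    ; => 18446744073709551616
;; integer-length reports how many bits a value actually needs:
(integer-length (expt 2 64))   ; => 65
```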
I have never done any Lisp coding myself (outside of tests of our
implementation) which required numbers bigger than 64 bits -- if I were
going out shopping for a CL for my own use, that restriction wouldn't
deter me.
The most important thing is that each implementation make its limits
clear, so that the porter knows if she has a chance of making the port
in less than geological time and so that the shopper knows whether
the implementation is suitable for the proposed use. I don't really
care whether the limits are encoded as constants or printed on paper,
but I would like the standard to require that they be available and
that they be separated into representational limits and space limits.
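CL does already pin down a handful of such limits as named constants
(these are real standard constant names; their values are
implementation-dependent, which is exactly the point):

```lisp
;; Representational limits -- what the number representations can hold:
most-positive-fixnum         ; largest immediate (non-bignum) integer
most-negative-fixnum
most-positive-double-float
;; Space limits -- how big objects and calls are allowed to get:
array-total-size-limit       ; upper bound on total array size
array-dimension-limit        ; upper bound on any single array dimension
call-arguments-limit         ; max number of arguments in a call
```

A porter can probe these directly instead of hunting through paper
documentation, which is one argument for the constants-over-paper side.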
gould/csd - urbana