In article <8704191810.AA14369@primerd.prime.com>,
> The concept that 64 bits could be construed as a reasonable integer
> limit is silly since there are many hardware architectures where a
> basic arithmetic operation can be done on 64 bits in both integer
> and floating point. Also 64 bits is only ~19 decimal digits.
Maybe I should have put a few :-)'s in my original posting: I certainly
wasn't intending 64-bit bignums as a serious proposal. My reason for
throwing out the idea in the first place was to point out that integer
overflow is bad news whether the limit on integer size is large or small,
and also to point out that the manual doesn't *say* it has to be huge.
If the limit is huge, we can maybe get away with some handwaving and say
that any program that runs up against the limit is wedged anyway. But
that does not change the fact that there is a problem with overflow.