
*To*: common-lisp@sail.stanford.edu
*Subject*: Re: Bigosities
*From*: sandra%orion@cs.utah.edu (Sandra J Loosemore)
*Date*: Sun, 19 Apr 87 18:29:31 MDT
*Newsgroups*: fa.common-lisp
*References*: <8704191810.AA14369@primerd.prime.com>

In article <8704191810.AA14369@primerd.prime.com>, primerd!DOUG@ENX.Prime.PDN writes:

> The concept that 64 bits could be construed as a reasonable integer
> limit is silly since there are many hardware architectures where a
> basic arithmetic operation can be done on 64 bits in both integer
> and floating point.  Also 64 bits is only ~21 decimal digits.
>
> Doug

Maybe I should have put a few :-)'s in my original posting: I certainly
wasn't intending 64-bit bignums as a serious proposal.  My reason for
throwing out the idea in the first place was to point out that integer
overflow is bad news whether the limit on integer size is large or
small, and also to point out that the manual doesn't *say* it has to be
huge.  If the limit is huge, we can maybe get away with some handwaving
and saying that any program that runs up against the limit is wedged
anyway.  But that does not change the fact that there is a problem with
overflow.

-Sandra
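[Editorial illustration, not part of the original message.] The point about any fixed limit being reachable can be made concrete. The sketch below uses Python, whose integers are arbitrary-precision much like Common Lisp bignums; the 64-bit mask is a hypothetical model of the "64-bit limit" under discussion, not anything either poster proposed:

```python
# Hypothetical model of a 64-bit integer cap, for illustration only.
MASK64 = (1 << 64) - 1  # 18446744073709551615; about 20 decimal digits

def fact(n):
    """Factorial via arbitrary-precision integers (analogous to CL bignums)."""
    return 1 if n == 0 else n * fact(n - 1)

exact = fact(21)          # 51090942171709440000 -- already exceeds 2^64
wrapped = exact & MASK64  # what a wrapping 64-bit machine would retain
print(exact, wrapped, exact != wrapped)
```

Even 21! silently wraps to a wrong value under the 64-bit cap, while the arbitrary-precision result is exact; a limit that small is hit by entirely ordinary programs, which is Sandra's point that overflow is a problem wherever the limit sits.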

**References**:
**Bigosities**
*From:* primerd!DOUG@ENX.Prime.PDN
