Re: (declare (type fixnum ---)) considered etc.
Date: Thu, 24 Jul 86 09:21:27 PDT
From: Alan Snyder <snyder%hplsny@hplabs.HP.COM>
From: Rob MacLachlan <RAM@C.CS.CMU.EDU>
Subject: (declare (type fixnum ---)) considered etc.
Almost every other language in the entire world has an "INTEGER"
type which has an ill-defined, fixed precision. If Common Lisp is
going to have comparable performance to these languages when running
on the same hardware, then it is going to have to be comparably
sloppy.
(signed-byte 69). What's inefficient about that kind of construct? The
problem with those other languages is that you CAN'T write reliable,
portable code that depends on those "integers". In younger days, I
would have wanted PDP-11 FORTRAN to overflow "integers" into "bignums".
No such luck. CL is considerably more powerful than those other
languages, and still provides the necessary constructs to give hints to
the compiler. (signed-byte 12) is a perfectly valid thing. It
CORRECTLY reflects the intent of the programmer, AND is portable.
FIXNUM does neither of these.
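The contrast might be sketched as follows (the function names and the
12-bit bound are illustrative, not anything proposed in this thread):

```lisp
;; Portable: states the programmer's actual intent.  The bound is the
;; range the algorithm needs, not a property of some machine.
(defun sum-small-ints (a b)
  (declare (type (signed-byte 12) a b))
  ;; Two 12-bit values sum to at most 13 bits.
  (the (signed-byte 13) (+ a b)))

;; Non-portable intent: FIXNUM's width varies across implementations,
;; so this declaration says nothing definite about the values the code
;; can actually handle.
(defun sum-fixnums (a b)
  (declare (type fixnum a b))
  (+ a b))
```

A compiler on any architecture can use the (signed-byte 12)
declaration to pick an efficient representation, while the FIXNUM
version only happens to be fast where fixnums happen to be wide enough.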
Unfortunately, I must agree. I think the PL/I experience shows
that if you force people to declare the number of bits of
precision they want, in most cases they will find out what number
produces the best results on their current machine and use that,
thus making their programs non-portable in terms of efficiency.
I pity the minds of those programmers. If they are writing
system-dependent, architecture-dependent code, fine. If they are writing
portable code, they MUST declare their intents to get reasonable
efficiency from a variety of systems.
There is no guarantee either that the maximum-array-bound
corresponds to what we think of as FIXNUMs; why shouldn't a
generous implementation allow BIGNUMs as array indexes? There
are, I admit, cases when the programmer knows how big integers
will need to be, mostly when dealing with fixed-size arrays; in
those cases, people who are concerned about efficiency should be
encouraged to declare the exact range. But I don't think
fixed-size arrays are a particularly good idea, either. The
conventional solution is to have one or more standard integer
types with strongly suggested (required?) minimum precisions. I
think that is the right pragmatic solution to this problem, given
the desire to produce efficient code on stock architectures.
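That conventional solution could be expressed today with DEFTYPE; the
type names and minimum widths below are hypothetical, not proposed
standard names:

```lisp
;; Hypothetical portable integer types with suggested minimum
;; precisions, in the spirit of the "conventional solution" above.
;; An implementation could map them to wider machine types.
(deftype short-int () '(signed-byte 16))   ; at least 16 bits everywhere
(deftype long-int  () '(signed-byte 32))   ; at least 32 bits everywhere

(defun scale (x)
  (declare (type short-int x))
  ;; A 16-bit input times 1000 always fits in 32 bits.
  (the long-int (* x 1000)))
```

Programs written against such names would get predictable minimum
range portably, while each implementation remains free to choose the
best machine representation.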
If you really want to go least-common-denominator, you will probably
have to settle for 7 guaranteed bits, or maybe 8 or 9; I can't remember
how big MacLisp fixnums are.