Re: (declare (type fixnum ---)) considered etc.
    Date: Thu, 24 Jul 86 09:21:27 PDT
    From: Alan Snyder <snyder%hplsny@hplabs.HP.COM>

    Unfortunately, I must agree. I think the PL/I experience shows
    that if you force people to declare the number of bits of
    precision they want, in most cases they will find out what number
    produces the best results on their current machine and use that,
    thus making their programs non-portable in terms of efficiency.
I'm not sure what universal "the PL/I experience" you mean. Apparently
you have observed PL/I programmers who did not use PL/I's declarations
in order to achieve portability. That's hardly surprising. Most PL/I
programmers are not even aware that the world contains computers that
are not IBM System 370's. No matter how many PL/I programmers out there
don't care about portability, Common Lisp is a different story, and we
should provide a language definition aimed at allowing programs to be
portable.
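To make the contrast concrete, here is a sketch of my own (the function
names and ranges are invented for illustration). The first declaration
bakes in a word size that happens to be convenient on somebody's current
machine; the second states the range the algorithm actually needs and
lets each implementation pick a representation for it:

  ;; Machine-tuned: "32 bits is what my machine handles best."
  ;; This is the PL/I-style habit described above.
  (defun scale-1 (x)
    (declare (type (signed-byte 32) x))
    (* 3 x))

  ;; Portable: declare the range the algorithm actually requires,
  ;; and let each implementation choose how to represent it.
  (defun scale-2 (x)
    (declare (type (integer -100000 100000) x))
    (* 3 x))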
    There is no guarantee either that the maximum-array-bound
    corresponds to what we think of as FIXNUMs; why shouldn't a
    generous implementation allow BIGNUMs as array indexes?
(Actually, I never understood why array indexes were in any way
specifically tied to FIXNUMs. This seems arbitrary and pragmatic but
not in the spirit of portability.)
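As an aside, here is a sketch of mine (not part of the original
exchange) of how an index type can be written portably without assuming
indexes are fixnums; every legal index is below the standard constant
ARRAY-DIMENSION-LIMIT, whatever that happens to be in a given
implementation:

  ;; A portable "array index" type: a non-negative integer below
  ;; ARRAY-DIMENSION-LIMIT. Nothing here assumes such integers are
  ;; fixnums; a generous implementation could make them bignums.
  (deftype array-index ()
    `(integer 0 (,array-dimension-limit)))

  (defun safe-aref (vector i)
    (declare (type simple-vector vector)
             (type array-index i))
    (aref vector i))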
    There are, I admit, cases when the programmer knows how big
    integers will need to be, mostly when dealing with fixed-size
    arrays; in those cases, people who are concerned about efficiency
    should be encouraged to declare the exact range.
Is that the only occasion you can imagine in which one might know an
upper bound on the value of an integer-valued variable? If so, then in
all other cases, the FIXNUM declaration is useless because the number
might be a bignum. But I don't think that's really so.
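To give one example of my own (not from Alan's message): bounds often
come from the problem domain rather than from any array, and the
parameterized INTEGER type expresses them directly:

  ;; The range comes from the application, not from an array bound.
  (defun leap-year-p (year)
    (declare (type (integer 1800 2200) year))
    (and (zerop (mod year 4))
         (or (not (zerop (mod year 100)))
             (zerop (mod year 400)))))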
    But, I don't think fixed-size arrays are a particularly good idea,
    either. The conventional solution is to have one or more standard
    integer types with strongly suggested (required?) minimum
    precisions. I think that is the right pragmatic solution to this
    problem, given the desire to produce efficient code on stock
    architectures.
I don't see how "strongly suggested" precisions do any good at all.
They let us say "I guarantee that this program will probably run"? What
does that mean? What's the difference between a "strong" and "weak"
suggestion? Does "strongly suggested" mean "it's not required, but if
you don't implement it, programs won't necessarily work properly", and if
so, how is that different from a requirement? "I strongly recommend
that you implement APPEND; it's not required, but some programs will not
port to your implementation...".
Required minimum precisions could be done, but I see no advantage over
just using the parameterized INTEGER declaration. Having a few standard
types with required minimum precisions is just like the parameterized
INTEGER declaration except that you only allow a few specific values for
the parameter. I don't see how that buys anything, and in some cases it
will result in overkill since you have to ask for slightly more than you
need. As I said earlier, it's trivial for a compiler to take a request
for "at least N bits", and round up to the next available size provided
by the hardware. (When I speak of "bits" I really mean "ranges", since
not all computers are binary etc., but you know what I mean.)
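Here is a sketch of what I mean, with invented names and ranges; the
point is only that the parameterized declaration says exactly what the
program needs, and the compiler is free to round up to whatever the
hardware provides:

  ;; The program states its exact requirements...
  (defun add-pixels (a b)
    (declare (type (integer 0 255) a b))     ; inputs need 8 bits
    (the (integer 0 510) (+ a b)))           ; result needs 9 bits

  ;; ...and the compiler rounds up: an implementation might use 8-bit
  ;; bytes for A and B and a 16- or 32-bit word for the sum. A fixed
  ;; menu of "standard" integer types would force the programmer to
  ;; write something like (SIGNED-BYTE 16) here, asking for more than
  ;; the algorithm actually needs.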