
&rest [discussion] replacement/addition

re: (defun fu (a &more-args)
       (do-args (arg)
         (moby-calculation a arg)
         ...more code...))
VAX/NIL had an adequate solution that didn't involve the kind of hidden
state implicit in ideas similar to MacLisp's LEXPR's.  There were two
indicators for functions with "more" arguments -- &RESTV and &RESTL --
the former produced a simple (general) vector of the remaining arguments
and the latter produced a list.  The reason for this was that constructing
a stack-allocated vector out of the "remaining" arguments is something
that a stock-hardware implementation can do in essentially no time; but
re-structuring the "remaining" arguments into a stack-allocated list
takes time equivalent to ordinary listification of that many items.
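For concreteness, here is roughly what the two indicators looked like in
use (VAX/NIL syntax, from memory; SUM-V and SUM-L are invented names for
illustration):

    (defun sum-v (&restv args)          ; ARGS arrives as a simple vector
      (let ((result 0))
        (dotimes (i (length args) result)
          (incf result (elt args i)))))

    (defun sum-l (&restl args)          ; ARGS arrives as a list
      (let ((result 0))
        (dolist (x args result)
          (incf result x))))

The vector case costs essentially nothing to set up on stock hardware;
the list case pays full price for consing up the remaining arguments.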

So in VAX/NIL, &REST was *ambiguous* as to whether you got a vector or
list.  The safe and portable style was to use &REST and access its
components only with sequence-like functions that accept either a vector
or a list (such as ELT, LENGTH, etc).  Needless to say, all the appropriate 
functions were generalized to take vectors as well as lists -- APPLY, MAP
and so on.  In some cases, we found it prudent to duplicate code --
once for the case when the &rest arg was a vector and once for when it was 
a list.  Here is a trivial example:
    (defun + (&rest args &aux (result 0))
      (if (listp args)
          (dolist (x args) (incf result x))
          (dovector (x args) (incf result x)))
      result)
["trivial" since Common-Lisp MAP would be adequate here anyway].

We found it genuinely hard to justify the assertion that micro-differences
in timings (due to this design) could amount to anything important in system
performance.  Hence it was very disappointing to see the Common Lisp votes
during '82 and '83 opt only for the version inspired by the special-purpose
hardware of the MIT LispMachine and its descendants.

It is also disappointing to see the amount of time (and mailer/diskspace)
wasted in defending a very quirky semantics for &rest lists -- that they
may be "shared" due to a potential optimization in APPLY.  (Note that this
couldn't happen with &rest vectors).   Despite what Will Clinger has argued, 
I think Chris Eliot's reasoning is the norm accepted in the user community
-- namely that from the viewpoint of the callee, (foo 1 2 3) and 
(apply #'foo '(1 2 3))  should be received identically.  It is a gross 
burden on the user to have to worry about hidden state in the implementation;
a burden which I should hope would be imposed only when there is *great* gain
to be had from doing so.
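To make the quirk concrete (a sketch only; FOO and *SOME-LIST* are
invented names):

    (defvar *some-list* (list 1 2 3))

    (defun foo (&rest args)
      (eq args *some-list*))

    (foo 1 2 3)                   ; NIL in any implementation
    (apply #'foo *some-list*)     ; may be T in an implementation whose
                                  ; APPLY "shares" its last argument

Under the semantics Eliot argues for, the two calls would be received
identically and the second would also be NIL.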

Gail Zacharias talked about the common idiom of simply doing a COPY-LIST on 
every &rest argument, to ensure some normalcy.  Her reasoning seems, to me, 
to bolster the case for those who claim that the CL semantics are deficient:
    Subject: &REST Lists
    Date: 24 Mar 88 12:23:15 EST (Thu)
    From: gz@spt.entity.com (Gail Zacharias)
    . . . 
    If Common Lisp doesn't require unshared &rest lists, then I think
    it must provide a declarative version of this idiom so the COPY-LIST can
    be portably avoided when it's redundant.  Seems to me that the fact that 
    this is a common case where users find a need for conditionalization 
    indicates a real deficiency in Common Lisp's specification.
    . . . 
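The idiom Gail is referring to looks like this (a sketch; *ARG-LOG* and
LOG-ARGS are invented names):

    (defvar *arg-log* '())

    (defun log-args (&rest args)
      ;; COPY-LIST defends against both a shared list (via APPLY) and a
      ;; stack-allocated one; but it is pure waste in an implementation
      ;; that already conses a fresh heap list for every &rest argument.
      (push (copy-list args) *arg-log*))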
Of course, the problem isn't only the sharing of &rest lists, but the more 
common flaw that they may, without warning, have dynamic extent.  By this, I 
mean the bug where a stack-allocated &rest list can creep out into global 
data structures, even though it will surely disappear when the frame that 
created it returns.  Allegedly, Symbolics is going to fix this bug in their 
next release (and TI may have already fixed it?); but we are now five years 
beyond the first CL specification!
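The bug, in miniature (SAVE-ARGS and *SAVED-ARGS* are invented names):

    (defvar *saved-args* nil)

    (defun save-args (&rest args)
      (setq *saved-args* args))   ; if ARGS is stack-allocated, *SAVED-ARGS*
                                  ; now points into a frame about to die

    (save-args 1 2 3)
    ;; *SAVED-ARGS* may now reference reclaimed stack, with no warning
    ;; whatsoever from the system.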

Perhaps just one more word on what I think "*great* gain" would mean (in
the second paragraph back).  For almost two months we have heard numerous
"experts" claim that this ability to share &rest lists is an important,
nay *critical*, component of optimization in certain implementations.
I just don't believe these conjectures.  Eliot has done a "bang up" job
of presenting reasonable evidence that they can't be all that important;
so NOW, it is up to the advocates of this "optimization" to present some
hard evidence to the contrary.  And PLEASE, NO MORE hypothetical cases of 
what MIGHT occur, or what COULD CONCEIVABLY be;  either "put up" some 
benchmark numbers from real applications (i.e., not just some fragments of 
a few functions or a couple lines of microcode), or "shut up".

-- JonL --