On Thu, Jul 27, 2006 at 12:23:23AM +0900, Charles O Nutter wrote:
You’re mixing language semantics and implementation details here. The
mechanics of method lookup are not a language feature; they’re an
implementation detail. On the other hand, the logic of which method gets
invoked in a hierarchy of types is a language detail. Scoping is a language
feature, but the means by which scope is maintained is an implementation detail.
In some ways, you’re right: implementation details are being mixed up
with language definition in the preceding list of features. In the case
of scoping, however, you’re not entirely right about “the means by which
scope is maintained”. Dynamic scoping, by definition, requires scope to
be resolved at runtime. Static scoping, by definition, does not. This
means that (to use Perl as an example, since I know it better than Ruby)
my(), which declares variables in lexical scope, can be managed at
compile time, while local(), which provides dynamic scope, can only be
managed at runtime, or it will not work as advertised. That’s more than
a simple implementation detail: the implementation is, in this case,
dictated by the language feature.
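To make the distinction concrete, here is a small sketch in Python (not Perl; the function and variable names are mine, invented for illustration). Lexical bindings can be resolved from the program text alone, while a dynamic binding, like Perl's local(), can only be emulated with a runtime structure such as a stack of binding frames:

```python
# Lexical scope: which "x" a name refers to is fixed by the program text,
# so a compiler can resolve the binding before the program runs.
def lexical_example():
    x = "lexical"            # always this x, decidable at compile time
    def inner():
        return x
    return inner()

# Dynamic scope: the binding depends on the call stack at the moment of
# lookup, so it must be maintained at runtime. This stack of binding
# frames is a hypothetical stand-in for what local() does in Perl.
_dynamic_stack = [{"x": "global"}]

def dynamic_lookup(name):
    for frame in reversed(_dynamic_stack):   # walk the callers at runtime
        if name in frame:
            return frame[name]
    raise NameError(name)

def callee():
    return dynamic_lookup("x")

def caller():
    _dynamic_stack.append({"x": "local to caller"})  # like: local $x = ...
    try:
        return callee()      # sees the caller's binding, not the global one
    finally:
        _dynamic_stack.pop() # binding restored when the caller exits

print(lexical_example())     # -> lexical
print(callee())              # -> global
print(caller())              # -> local to caller
```

The point of the sketch is the asymmetry: nothing in dynamic_lookup() can be decided before the program runs, because the answer depends on who is on the call stack.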
good aspects and five bad, and you only make use of five good aspects, then
your code is sub-optimal. If you use fewer than five, you’re even worse off
and perhaps should consider doing things differently. Nothing about the
feature itself implies that performance must degrade by using it; it’s a
matter of using those features wisely and making optimal use of their good
aspects, balanced against their bad aspects.
I think closures are actually a bad example for this. There’s nothing
about closures that necessarily harms the performance of an
implementation. Closures are, in some respects, a happy accident: they
fall out of other, positive characteristics of a language that tend to
contribute to better implementation performance (such as lexical scope,
which performs better than dynamic scope). In fact, one of the reasons
Python doesn’t have proper closures (its lack of strict lexical scoping)
is also likely one of the reasons Python still tends to lag behind Perl
in performance.
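As an illustration of the “happy accident” point, here is a minimal closure in Python (names are mine; the list-as-cell trick reflects that, at the time of this discussion, Python closures could read but not rebind outer variables). Nothing extra has to be added to the language: once inner functions see their enclosing lexical scope, closures simply exist.

```python
def make_counter():
    count = [0]              # a mutable cell captured from the lexical scope
    def increment():
        count[0] += 1        # the inner function closes over "count"
        return count[0]
    return increment         # the environment survives with the function

counter_a = make_counter()
counter_b = make_counter()   # each call gets its own captured environment
counter_a()
counter_a()
print(counter_a())           # -> 3
print(counter_b())           # -> 1
```

Each returned function carries its own captured environment, which is exactly the per-closure allocation discussed below.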
The only real overhead involved in closures, as far as I can see, is the
memory allocated to a closure, which doesn’t go away until the program
exits or, in some implementations, until the program reaches a point
where it will absolutely, positively never need that closure again
(which is roughly the same thing for most uses of closures). A little
extra memory usage does not translate directly to performance loss. In
any halfway-decent system implementation, it really shouldn’t reduce
performance unless you start having to swap because you’ve overrun
physical RAM, I think.
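That retention behavior can be observed directly. The sketch below (my own names; the timing of reclamation is specific to CPython’s reference counting, and other implementations may collect later) shows that the captured data lives exactly as long as the closure is reachable:

```python
import weakref

class Payload:
    """Stand-in for data captured by a closure."""

def make_closure():
    data = Payload()            # allocated here, captured below
    ref = weakref.ref(data)     # lets us observe when it is reclaimed
    def use():
        return data             # closes over "data", keeping it alive
    return use, ref

closure, ref = make_closure()
print(ref() is not None)        # -> True: the closure pins the payload
del closure                     # drop the last reference to the closure...
print(ref() is None)            # -> True: the captured memory is reclaimable
```

In other words, the cost is occupancy, not speed: the payload sits in memory while the closure exists, and is freed once nothing can ever call the closure again.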
The day may come when RAM is managed well enough that using less memory
yields real performance gains, though, so I doubt this will always be
true.