Macros considered harmful
Posted: Thu Nov 05, 2009 8:01 pm
What are your thoughts on this paper, which argues that macros and Lisp compilers are undesirable?
2.1 Myth 1: Lisp Needs a Compiler
This is in fact the most significant myth. If you listen to Lisp discussion groups, the
compiler plays a central role. You might get the impression that it is almost a synonym
for the execution environment. People worry about what the compiler does to their code,
and how effective it is. If your Lisp program appears to be slow, you are supposed to get
a better compiler.
The idea of an interpreted Lisp is regarded as an old misconception. A modern Lisp needs
a compiler; the interpreter is just a useful add-on, and mainly an interactive debugging
aid. It is too slow and bloated for executing production level programs.
We believe that the opposite is true. For one thing (and not just from a philosophical
point of view) a compiled Lisp program is no longer Lisp at all. It breaks the fundamental
rule of "formal equivalence of code and data". The resulting code does not consist of
S-Expressions, and cannot be handled by Lisp. The source language (Lisp) was transformed
to another language (machine code), with inevitable incompatibilities between different
virtual machines.
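The "code is data" point is easy to see in miniature. Here is a rough sketch in Python (nested lists standing in for S-Expressions; the tiny evaluator and its operator set are my own construction, not anything from the paper):

```python
# Sketch only: an S-expression modeled as a nested Python list,
# illustrating "formal equivalence of code and data".
expr = ["*", ["+", 1, 2], 4]   # the program (* (+ 1 2) 4), held as plain data

def evaluate(e):
    """Tiny evaluator: numbers are themselves, lists are applications."""
    if isinstance(e, (int, float)):
        return e
    op, *args = e
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        result = 1
        for v in vals:
            result *= v
        return result
    raise ValueError(f"unknown operator {op!r}")

# Because the program is ordinary data, other code can inspect and
# rewrite it before running it:
expr[0] = "+"                  # turn (* ...) into (+ ...)
print(evaluate(expr))          # the rewritten program evaluates to 7
```

Once that structure is compiled to machine code, the rewrite step above has nothing to operate on, which is the incompatibility the author is pointing at.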
Practically, a compiler complicates the whole system. Features like multiple binding
strategies, typed variables and macros were introduced to satisfy the needs of compilers.
The system gets bloated, because it also has to support the interpreter and thus two
rather different architectures.
But is it worth the effort? Sure, there is some gain in raw execution speed, and compiler
construction is interesting for academic work. But we claim that in daily life a
well-designed "interpreter" can often outperform a compiled system.
You understand that we are not really talking about "interpretation". A Lisp system
immediately converts all input to internal pointer structures called "S-Expressions". True
"interpretation" would deal with one-dimensional character codes, considerably slowing
down the execution process. Lisp, however, "evaluates" the S-Expressions by quickly
following these pointer structures. There are no searches or lookups involved, so nothing
is really "interpreted". But out of habit we'll stick to that term.
A Lisp program as an S-Expression forms a tree of executable nodes. The code in these
nodes is typically written in optimized C or assembly, so the task of the interpreter is
simply to pass control from one node to the other. Because many of those built-in Lisp
functions are very powerful and do a lot of processing, most of the time is spent in the
nodes. The tree itself functions as a kind of glue.
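The "tree of executable nodes" picture can be sketched like this (Python standing in for the optimized C internals; the Node layout is my own guess at the idea, not PicoLisp's actual representation):

```python
# Sketch: each node pairs a built-in operation (imagined as optimized C)
# with its child nodes; "interpretation" is just walking the pointers.
class Node:
    def __init__(self, fn, children=()):
        self.fn = fn              # pointer to the built-in's code
        self.children = children  # pointers to argument subtrees

def run(node):
    # The interpreter's only job: pass control from node to node.
    if not isinstance(node, Node):
        return node               # a leaf: constant data
    return node.fn(*(run(c) for c in node.children))

# Roughly (sum (map double '(1 2 3))) as a node tree; the heavy
# lifting happens inside the built-ins, the tree is only glue.
prog = Node(sum, [Node(lambda xs: [2 * x for x in xs], [[1, 2, 3]])])
print(run(prog))  # 12
```

Nearly all the time here goes into the bodies of `sum` and the mapping function; the traversal itself is a few calls.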
A Lisp compiler will remove some of that glue, and replace some nodes with primitive
or flow functionality directly with machine code. But because most of the time is spent
in built-in functions anyway, the improvements will not be as dramatic as for example
in a Java byte code compiler, where each node (a byte code) has just a comparatively
primitive functionality.
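One way to see the byte-code comparison is to count dispatches: a fine-grained (byte-code-style) loop dispatches once per element, while a powerful built-in dispatches once for the whole job, so a compiler that mainly removes dispatch glue has far less to win in the second case. A toy illustration (the dispatch counter is my own construction, not a real interpreter):

```python
# Count how often "interpretation glue" (dispatch) runs for the same work.
dispatches = 0

def dispatch(fn, *args):
    """Stand-in for the interpreter handing control to one node."""
    global dispatches
    dispatches += 1
    return fn(*args)

data = list(range(1000))

# Byte-code style: one tiny op per element -> one dispatch per element.
dispatches = 0
total = 0
for x in data:
    total = dispatch(int.__add__, total, x)
fine_grained = dispatches

# Built-in style: one powerful node -> a single dispatch for everything.
dispatches = 0
total2 = dispatch(sum, data)
coarse = dispatches

print(fine_grained, coarse)  # 1000 vs. 1, for identical results
```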
from:
http://software-lab.de/radical.pdf