My favorite example of your point, from when I first discovered LISP, was CLOS. The typical way to get C programmers to do OOP was to tell them to use horribly complex C++ or to switch to Java/C#. The LISP people just added a library of macros. If you don’t like OOP, don’t use the library. Done. People thought Aspect-Oriented Programming would be cool and started writing pre-compilers for Java, etc. The LISP folks did a library. I like your emphasis on how easy it is to undo such things when it’s just a library versus a language feature. A lot of folks never even knew Aspect LISP happened, because their language isn’t stuck with it. ;)
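For readers who haven’t seen it, a minimal sketch of what “OOP as a library” looks like in ANSI Common Lisp with CLOS (my own illustration, not code from the thread):

    ;; Classes, generic functions and methods are just ordinary forms you
    ;; load like any other library code; nothing in the core language changes.
    (defclass circle ()
      ((radius :initarg :radius :reader radius)))

    (defgeneric area (shape)
      (:documentation "Area of a shape."))

    (defmethod area ((c circle))
      (* pi (radius c) (radius c)))

    ;; (area (make-instance 'circle :radius 2.0))  => ~12.566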
Now that’s what I call a selling point.
I sometimes get the feeling that some programming communities are strongly against extending the language in userspace, for reasons of consistency or of giving users too much power. I find such sentiments vaguely authoritarian and off-putting, and I don’t buy them at all. Users almost always end up extending the language somehow, whether through giant frameworks, metaprogramming, or code generation.
A distinction should be made between extending the vocabulary and extending the grammar:
All programmers are okay with extending the vocabulary.
But some programmers are reluctant to extend the grammar, not for authoritarian reasons, but because it hinders readability and maintainability.
Using a framework is not extending the language; it’s extending the vocabulary. Using code generation is not extending the language; it’s translating some source language to some target language.
In a natural language like English, you sometimes extend the vocabulary, but you rarely extend the grammar, and this is how we can understand each other. When you read an English sentence that contains an unknown word, you can still parse the sentence because you know the grammar. But if you read an English sentence that uses some kind of “syntactic macro for English”, it will be very hard to understand what’s going on without learning the macro first.
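To make the vocabulary/grammar distinction concrete, here is a small sketch (my own, with hypothetical names): a new function adds a word but keeps the usual call shape, while a macro changes how its sub-forms are read, i.e. the grammar:

    ;; Vocabulary: AVERAGE is a new word, but the sentence shape is the
    ;; ordinary call form; both arguments are evaluated as usual.
    (defun average (a b)
      (/ (+ a b) 2))

    ;; Grammar: REPEAT-UNTIL introduces a new sentence shape; the body is
    ;; rewritten before evaluation, which a reader cannot tell from the
    ;; call site without knowing the macro.
    (defmacro repeat-until (test &body body)
      `(loop ,@body (when ,test (return))))

    ;; (repeat-until (> (random 10) 8)
    ;;   (print "rolling..."))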
Users almost always end up extending the language somehow, whether through giant frameworks, metaprogramming, or code generation.
They might, but that isn’t necessarily a good thing. When I am implementing algorithms (admittedly, “algorithms” aren’t the same thing as “programs”), I find both large and extensible languages to be a distraction: the essence of most algorithms can be expressed using basic data types (integers, sums, products, rarely first-class functions) and control flow constructs (selection, repetition, and procedure calls; where by “selection” I mean “pattern matching”, of course). Perhaps the feeling that you need fancy language features (whether built into the language or implemented using metaprogramming) is just a symptom of accidental complexity in either the problem you are solving, or the language you are using, or both.
Completely agree; you just end up with really baroque methods of metaprogramming if you try to prevent it.
I think code generators work fine. I don’t really get why people think Lisp macros are better than just writing a tool like lex/yacc; they seem strictly less flexible to me.
Tools like yacc are “applied once”. In Lisp I can write a function that symbolically differentiates another function and produces another function. The resulting function can then be differentiated again, yielding yet another function, and so on. You can’t do this with yacc.
In fact, symbolic differentiation and other computer algebra tasks are precisely the reason Lisp was invented; it’s in the original Lisp paper.
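To make the “applied more than once” point concrete, here is a minimal sketch (my own, assuming expressions are plain s-expressions built from numbers, symbols, + and *):

    ;; DERIV maps an expression to its derivative, which is again an
    ;; expression of the same kind, so the result can be fed back in.
    (defun deriv (expr var)
      (cond ((numberp expr) 0)
            ((symbolp expr) (if (eq expr var) 1 0))
            ((eq (first expr) '+)
             (list '+ (deriv (second expr) var) (deriv (third expr) var)))
            ((eq (first expr) '*)
             ;; Product rule: (u*v)' = u'*v + u*v'
             (list '+
                   (list '* (deriv (second expr) var) (third expr))
                   (list '* (second expr) (deriv (third expr) var))))
            (t (error "Unknown expression: ~a" expr))))

    ;; (deriv '(* x x) 'x)             => (+ (* 1 X) (* X 1))
    ;; (deriv (deriv '(* x x) 'x) 'x)  => the derivative of the derivative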
In fact homoiconicity is the only good reason in favor of dynamic typing that I ever found. In most dynamically-typed languages I feel like the author simply didn’t know better. In those languages dynamic typing is only a gun to shoot yourself with. Lisp is the only dynamically-typed language that I found where dynamic typing is truly fundamental and seems to be put to good effect.
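A small illustration of that link (my own, not from the thread): a form is just a list whose elements have mixed types, and the same value can be treated as data one moment and as code the next:

    ;; The list mixes symbols, numbers and a sublist; no static type marks
    ;; "code" specially, which is where dynamic typing does the work.
    (let ((form (list '+ 1 2 (list '* 3 4))))
      (list (type-of (first form))    ; a symbol
            (type-of (fourth form))   ; a cons (the sub-expression)
            (eval form)))             ; => 15, the list run as code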
Great point about the link between homoiconicity and dynamic typing. Answered a question I was asking myself for a few years.
Interesting point
Lex/Yacc are mostly awful to work with. They complicate the build (even with native support in Make), suck to maintain, and are painful to debug. LLVM/Clang don’t bother using them, and the code is better for it. (Having debugged that stuff, I’m thankful they didn’t use them.) Maybe you can use lex to generate a state machine for you, or you can just do it manually. It’s a one-time cost, without the unholy mess of getting the proper includes.
If your language is small, then lex/yacc is likely more of a burden than a boon. Just write a recursive descent parser and be done with it. It will likely be fast enough, and you can still deal with the oddities, probably with fewer contortions.
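For a sense of scale, a minimal recursive descent sketch (my own; it assumes the input is already a list of tokens such as (3 + 4 * 2) and only handles + and * with the usual precedence):

    ;; Each rule takes a token list and returns the parse tree plus the
    ;; remaining tokens, using multiple values.
    (defun parse-factor (tokens)          ; factor ::= number
      (values (first tokens) (cdr tokens)))

    (defun parse-term (tokens)            ; term ::= factor { '*' factor }
      (multiple-value-bind (left rest) (parse-factor tokens)
        (loop while (eql (first rest) '*)
              do (multiple-value-bind (right more) (parse-factor (cdr rest))
                   (setf left (list '* left right)
                         rest more)))
        (values left rest)))

    (defun parse-expr (tokens)            ; expr ::= term { '+' term }
      (multiple-value-bind (left rest) (parse-term tokens)
        (loop while (eql (first rest) '+)
              do (multiple-value-bind (right more) (parse-term (cdr rest))
                   (setf left (list '+ left right)
                         rest more)))
        (values left rest)))

    ;; (parse-expr '(3 + 4 * 2))  => (+ 3 (* 4 2)), NIL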
Using macros lets me do straightforward stuff that cleanly integrates into the language and its tooling. I can also use code generators if they’re better suited for the job. And I can build code generators much more easily with macros in a language that’s already an AST. ;)
There are also highly optimized implementations, formally verified subsets, existing libraries in a sane language, and IDEs. The overall deal is much better than yacc, etc. Such benefits are how the Julia folks built a compiler quickly for a powerful, complex language: it was sugar coating over a Lisp (femtolisp). An industrial-strength one might have worked even better. sklogic’s toolkit with DSLs was pretty interesting, too.
They most likely are less flexible, just as programming languages and functions are strictly less flexible than assembly.
What always made me smile a little was the fact that Gregor Kiczales, one of the authors of The Art of the Metaobject Protocol (published in 1991, and the best book on OO I’ve ever read), is one of the main contributors to AspectJ.
Oh damn. Didn’t know that. I might need a new example out of respect for his MOP work. Or just keep the irony coming. :)