Great post.
There’s a siren song that this time we’ll do it right, and loper-os contains some great criticisms of today’s technology. Bret Victor and Tunes (now resurrected as Houyhnhnm Computing) also offer valuable glimpses of what could be.
That said, having actual projects matters. It’s much easier to discuss the limitations of today’s tool du jour than to build something better. I’m guilty of this too.
I don’t share your view that today’s designs will not be supplanted though:
I do think the computing mainstream benefits from fringe projects too. V8 and the JVM are both mature projects that have lispers contributing: there’s a lot to be learnt from rebuilding or relearning from older and less popular tech.
When you have a single entity that blows the scale of the graph out that much, it would be nice to offer a second graph with everyone else. Or just exclude PHP from the graph entirely and put in a note. But I’m not a visual/designer type.
Hello and interesting post!
I noticed that you didn’t include Dylan, which was one of the first languages to tackle a more complex macro system in infix-syntax code.
Perhaps that was due to reasons that you indicated in your post (but you didn’t name languages, so it is difficult to tell):
I had to cut a number of languages from early drafts of this post because their docs were so poor or I needed extensive knowledge of the grammar or even compiler itself!
Anyway, our macro system is documented in a few places independently:
swap in that chapter. As for your examples …
In Dylan, a swap macro is pretty easy:
define macro swap!
  { swap! (?place1:expression, ?place2:expression) }
    =>
  { let value = ?place1;
    ?place1 := ?place2;
    ?place2 := value; }
end;
Dylan macros are hygienic, so there are no problems with that. This is also pretty similar to syntax-rules from Scheme.
As for each-it, that isn’t hard either, as unlike syntax-rules, Dylan makes it easy to violate hygiene when needed without a lot of ceremony:
define macro each-it
  { each-it (?collection:expression)
      ?:body
    end }
    =>
  { for (?=it in ?collection)
      ?body
    end };
end;
This wasn’t hard either … the ?=it just means that instead of being a substitution from the pattern like ?collection and ?body, it is a hygiene violation.
Using it is simple:
each-it (arguments)
  format-out("%=\n", it);
end;
That approaches the simplicity of the Common Lisp version of this definition and is (to me) much nicer than the syntax-case definition from Scheme.
Thanks for your kind words!
I’m reluctant to name the languages I struggled with, as it’s hard to draw a line between their complexity and my inexperience. However, I completely overlooked Dylan (to my shame!), but it would have definitely been an interesting addition.
Your macro implementations are pretty readable. How does Dylan fare when you need to write code that manually builds a parse tree?
Well, our normal macro system isn’t like Common Lisp style macros. It is strictly like syntax-rules from Scheme. That said, some pretty complicated things have been built with the standard Dylan macro system, like the binary-data library.
Inside the compiler however, we have an extended version of this macro system that does allow for quotation and such. I like it a lot and it is really powerful. It is how large parts of the compiler are actually written, including our C-FFI interface. If you check out the D-Expressions paper that I reference above, it deals with this pretty extensively, although some of what is discussed in the paper is not implemented (the parts about the module system).
It is an open project for someone to make it so that a project can have a compiler plugin that uses this macro system extension. I don’t know that it would be incredibly difficult. It would probably be challenging, and would almost certainly be enjoyable.
Inside the compiler, these special compiler macros are defined by define &macro rather than define macro. They even start out looking like a regular macro; however, instead of the pattern match being used for template substitution (effectively), it executes code which returns AST fragments. The compiler builds upon this and has &converter and other forms which just expand to &macro definitions and are key to how things go from the parsed tokens to the actual compiler IR, eventually.
That would all be quite tedious if one still had to construct AST stuff by hand, but thankfully, it works with quoting as well, using #{ ... } and allowing for substitution. Inside the compiler, one can use that to construct AST fragments. Unfortunately, these tend to be used in fairly complex areas, so I don’t have a really nice simple example.
define &macro c-address-definer
  { define c-address ?var-name:name :: ?pointer-designator:expression
      ?options:*;
    end }
    =>
  begin
    let initargs = parse-options($c-address-options, options, var-name);
    let options = apply(make, <c-address-options-descriptor>, initargs);
    let c-name = c-address-c-name(options);
    if (c-name)
      let import = c-address-import(options) | #{ #f };
      #{ define constant ?var-name
           = make(check-c-address-designator(?var-name, ?pointer-designator),
                  address: primitive-wrap-machine-word
                             (primitive-cast-pointer-as-raw
                                (%c-variable-pointer(?c-name, ?import)))) };
    else
      note(<missing-c-name>,
           source-location: fragment-source-location(form),
           definition-name: var-name);
      #{ };
    end if;
  end;
end &macro;
That one isn’t too difficult though and comes from our C-FFI. You can see that it defines a pattern to be matched, define c-address ...., and then it executes some code to parse the options, does some basic validation, and checks to be sure that a c-name has been provided in the options. If it has, it returns the quoted AST (define constant ?var-name ....). If it hasn’t gotten a c-name option, then it issues a compiler warning (note(<missing-c-name>, ...)) and returns an empty AST fragment (#{ }). That wasn’t too terrible! :)
Lisps are the only languages that I know of which can really make heavy use of a REPL
Haskell and Ocaml are also pretty decent at this.
The difference is that there is tight integration between the editor and the REPL in Lisps. When I work with Clojure, my IDE is connected to the running application. I can inspect and reload any code in the application straight from the editor.
From what I understand, static typing slows you down too much
ipython is excellent, but it doesn’t quite support incrementally writing a module in the same way as a lisp. E.g. if foo.py contains
from bar import func, then redefining func in bar is difficult.

Julia has a lovely REPL experience.
Smalltalks are arguably REPL environments: you can evaluate code anywhere, and the debug/edit/continue experience is amazing.
With Python you can get a little bit closer using Jupyter, since it blends an editing environment with a REPL more than using IPython directly. I try to remember to use it instead of IPython when I need a REPL. It’s especially nice if you’re doing exploratory data work, because you can draw graphs and charts.
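To make the earlier point about incremental module editing concrete, here is a minimal Python sketch of the staleness problem. The foo.py / bar names come from the comment above; the temp-directory setup is just scaffolding so the snippet is self-contained.

```python
# Sketch of the staleness problem: foo.py does "from bar import func",
# bar.py is then edited and reloaded, but the new definition never
# reaches foo's binding. The temp dir just fakes bar.py on disk.
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # always re-read source on reload

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "bar.py"), "w") as f:
    f.write("def func():\n    return 'old'\n")
sys.path.insert(0, tmp)

from bar import func   # what foo.py does: a local binding to the function
import bar

# "Edit" bar.py, then reload the module object.
with open(os.path.join(tmp, "bar.py"), "w") as f:
    f.write("def func():\n    return 'new'\n")
importlib.reload(bar)

print(bar.func())  # 'new' -- the module attribute was replaced
print(func())      # 'old' -- the from-import still holds the old function
```

In a Lisp image (or a Clojure nREPL session), redefining the function updates the one shared binding that callers look up, which is why the editor-connected workflow described above feels so different.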
Smalltalk is strange: you don’t have source code files. I wonder about Julia.
I should have been more precise: I meant for building a program.