1. 15

  2. 6

    I think there’s another reason Clojurists have macro allergies that we’re less inclined to talk about:

    We don’t have very many tools for understanding our code while it’s running. Good debuggers are relatively recent, so many of us rely primarily on two things: tests and stack traces.

    Stack traces are ugly, but we’ve learned to make the most of them.

    Code gen, of any kind, runs the risk of making stack traces harder to read, so it’s worth evaluating a DSL based on how much it muddies the waters of our program.

    So yeah, they add complexity, but most of us have tried to make use of macros at some point, and I don’t avoid them because of a design principle so much as because they sometimes hurt.

    I haven’t written much CL or Scheme so I can’t say whether this applies to those languages, but I think the hosted nature of Clojure probably makes it hurt a little more? Hard to say.

    1. 2

      I wish this were true, but in practice I don’t see “will this make stack traces worse” factoring into people’s decisions much. Real discussion about improving stack traces tends to get drowned out in the noise between “this is just unbearably bad” and “no everything is fine you just need to suck it up”.

      1. 1

        Yeah, I think I’ve seen this in a lot of languages I work in, too. I’ve seen similar issues with Ruby metaprogramming, where an entire file is generated by macros and there’s no obvious way to ‘read the code’, mostly just to avoid typing.

        I think obscuring our reasoning with a more general statement that it ‘adds complexity’ doesn’t help convey why macros bug us.

        And you’re right about the results. I think this happens a lot with ‘developer discipline’ issues where the two extremes are to argue for better tooling or to tell newer people to ‘just learn it’.

    2. 5

      The source code of the Viaweb editor was probably about 20-25% macros.

      As a Clojure programmer, this makes me want to run away screaming.

      On the difference between Lisp and Clojure with regard to macros, a lot of it has to do with community but also how performance is (and historically was) handled.

      Many Lisp macros were written for performance, in the way that definline is used today. Also, Lisp was historically interpreted (although that has changed), while Clojure is compiled and JITted. So the performance benefit of using macros goes away, and they become useful largely as a user-facing construct. They don’t compose well (in any language), so macros on top of macros, at any level of complexity, tend to become unruly, which is one of the main reasons why “never write a macro when a function will do” wins out.
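
      A minimal Common Lisp sketch of the composability point (names here are hypothetical, not from any real codebase): a macro that could have been a function can’t be handed to higher-order code, which is one concrete reason “never write a macro when a function will do” tends to win.

      ```lisp
      ;; SQUARE-M expands inline (roughly the effect Clojure's definline
      ;; aims at); SQUARE-F is an ordinary function.
      (defmacro square-m (x) `(* ,x ,x))   ; note: also evaluates X twice
      (defun square-f (x) (* x x))

      ;; The function composes with higher-order code:
      (mapcar #'square-f '(1 2 3))   ; => (1 4 9)

      ;; The macro does not: (function square-m) is an error, because
      ;; SQUARE-M exists only at macroexpansion time.
      ;; (mapcar #'square-m '(1 2 3)) ; => error
      ```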

      As for 20-25% of a codebase being macros, it’s not as shocking in the 1990s Common Lisp context as it would be in a 2017 Clojure context. That was how you killed the boilerplate that, today, you’d use the 20% of OOP that actually works for. Do I suspect that Paul Graham was a great programmer? No. Above average, but the whole codebase got thrown out. Do I think that he was terrible? No, and 20-25% doesn’t seem that unreasonable.

      1. 7

        today, you’d use the 20% of OOP that actually works for

        Even at the time that was one of the main approaches. Graham, especially through his book On Lisp (I’m not really familiar with his code), represents one pole of 1990s Common Lisp, which rejected CLOS and used a mixture of functions and macros instead. But there was a significant faction that used CLOS pervasively, and it was even one of the main attractions of Common Lisp to some people (CLOS is very different from a C++ or Java-style object system). You can probably divide that into a few other poles, e.g. among the people who weren’t really into CLOS, some wrote in a more functional style (seen as “Scheme-like” within the CL community), and others wrote in a more imperative style, making heavier use of mutability and somewhat unusual Lisp features like dynamically-scoped variables. Graham is really towards the Scheme-like pole: no CLOS, and as little weird imperative CL stuff as possible (except, and admittedly this is a big caveat, for CL macros).

        I personally am writing Common Lisp with a lot of macros lately, but for a somewhat different reason: they are useful for templating, if you don’t want to use external whisker-style templates and instead want to use a Lisp-internal site-generating system like spinneret. Functions in this context have an order-of-evaluation problem: (foo (bar "baz")) should generate <foo><bar>baz</bar></foo>, not the “inside-out” <bar>baz</bar><foo></foo> that eager evaluation would give you (of course foo and bar here may do more than literally output their names as opening/closing tag pairs, that’s just to illustrate the inside-out order problem).

        If you don’t have arbitrary extensibility you can finesse this with some kind of parser, but half the benefit of using Lisp as your templating language is that you want it to be arbitrarily extensible through code (quasiquotation makes this especially nice), and the most straightforward way to make that work is to have foo and bar be macros that substitute the appropriate thing to evaluate, rather than eagerly evaluating it in the wrong order. I should note that spinneret helpfully provides some macro-generating macros to simplify things. ;-)
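
        To make the order-of-evaluation point concrete, here’s a toy sketch (not spinneret’s actual API; foo and bar are the hypothetical tags from above) where the tags are macros that wrap their body’s output rather than eagerly evaluating their arguments:

        ```lisp
        ;; Each tag macro emits its opening tag, runs its body, then emits
        ;; its closing tag -- so nesting produces outside-in output.
        (defmacro foo (&body body)
          `(progn (princ "<foo>") ,@body (princ "</foo>")))

        (defmacro bar (&body body)
          `(progn (princ "<bar>") ,@body (princ "</bar>")))

        (foo (bar (princ "baz")))
        ;; prints <foo><bar>baz</bar></foo>
        ;; If FOO and BAR were eager functions, (bar (princ "baz")) would
        ;; have run -- and printed -- before FOO could open its tag.
        ```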

      2. 3

        Hmm. It is interesting to contrast both the brute ye olde macro expansion approach and this with Andrei Alexandrescu’s Design by Introspection. Slides

        The ye olde view of macros was just as a labour-saving device: doing away with the tedious typing of boilerplate.

        In Alexandrescu’s view, everything a statically typed compiler knows about every symbol is available to be used in an expression at compile time.

        I think he has the tail of something extraordinarily powerful there.

        Part of the power arises from the static typing. The compiler knows a huge amount about each symbol. This information is not present in Lisp, and can only be resolved at run time.

        1. 2

          I think that power is likely to be heavily realized in Jai, once that hits open source.

          1. 2


            Hmm. Just took a look at Jai.

            In some ways it’s heading in the same direction as D, but I think D is way, way ahead of it.

            In what way is Jai ahead of D? As far as I can see it has the SOA and AOS mechanism…. hmm. Can’t say I have ever wished for such a mechanism, and in the rare cases where I might have, the problem was moot since I had encapsulated the data structure anyway.

            Integrated Build Process maybe…. Can’t say I have missed it. D has rdmd which allows you to treat D code as a script.

            A compelling argument for not writing an entirely new language for games is that the momentum and volume of C and C++ code in current game engines is too high, and switching to a new language is too much work for the amount of benefit. Blow argues that engines periodically rewrite their codebase anyway, and since Jai and C are so closely related, C code and Jai code can live side by side while the rewrites that would normally happen anyway take place. Since C and Jai interoperate seamlessly, Jai code can be built on top of existing C libraries. In fact, Blow uses the C interfaces to the OpenGL and stb_image libraries for his Jai test code. So, replacing C and C++ can be done with no added cost to development. Meanwhile, the benefits of replacing C with a language that has all of C’s benefits but fewer drawbacks means that programmers will be happier, and thus more productive.

            Holds for D as well.


            Those are strong languages, but none of them contain the right combination of features (or lack of features) that game programmers need. Automatic memory management is a non-starter for game programmers who need direct control over their memory layouts. Any interpreted language will be too slow. Functional-only languages are pointlessly restricting. Object-oriented-only languages are overly complex. Blow preferred to develop a new language with the qualities that game programmers need, and without the qualities they don’t.

            GC is not a requirement for D programs, but an opt in.


            Jai seems like a major step forward from C…. but a major step backward from D.

            1. 2

              Jai has a couple of advantages over D (currently perceived; it remains to be seen how they materialize):

              1. A fresh PR face.
              2. A radically different perspective from the main dev.

              Does D compile at 100Kloc/s? Can it run a video game during compile time? Do these ideas even matter for you?

              I’m not certain that these are things that every language needs, and Jai is more of the Go bent (80% solutions rather than 100% solutions) than the Rust bent, as it were, but Jai as a language is something I’m going to be taking a very hard look at when it comes out, and as far as I can see, it runs with the idea of compile-time computation further than anything else I’ve seen. That could turn out to be a footgun, but if it ends up working out well, it’ll be a neat language to follow and use.

              By all means, if D is a better fit, use it. But don’t dismiss Jai just yet. Dig a little.

              1. 1

                Does D compile at 100Kloc/s?

                It’s hard to find benchmarks, and very hard to find side-by-side benchmarks.

                But certainly D is designed to be fast to compile (no macro preprocessor, to start with) and is getting faster (a lot of work is going into redesigning the CTFE engine).

                In fact the rdmd utility allows you to treat D code as a #! script.

                Again, D code tends to be at once more compact (you can code your solution in far fewer lines than in almost anything else) and less compact (library code is highly templated and special-cased according to the possible attributes of template parameters).

                Conversely, the dmd and ldc compilers are known to take different lengths of time to compile…. I believe most of the extra time in the LLVM case is spent on optimization.

                i.e. you can always trade off compilation speed vs. run-time speed.

                Can it run a video game during compile time?

                Sort of an odd question…. not entirely sure what you mean or why you’d do that…..

                Yes, you can have arbitrarily complex functions evaluated at compile time, including expressions that use the compiler’s knowledge of all the symbols it has compiled up to that point, and use the final result in the rest of the compilation. This is the CTFE (Compile Time Function Evaluation) feature of D.

                No, I wouldn’t use it to run a video game at compile time, since the point is to trade compile speed, and the compiler’s exact knowledge of what it has compiled so far, for less boilerplate, lower run times, and increased run-time safety.

                I will keep an eye on Jai… but D is looking like a far more serious contender at this stage.

          2. 2

            [meta] Thanks for linking to both the video and the slides of the talk. Much appreciated.