1. 29
    1. 30

      My primary reason for not reaching for metaprogramming early or often is that it’s difficult to grep for methods generated in this way, especially when they’re constructed via substrings, like "some_#{specific}_method". I work in a fairly large Rails monolith, and the number of gems, modules, and just raw code I would have to dig through to find where something like this is being created means it’s a giant pain in the ass to debug. At a professional level, the seams where metaprogramming is both appropriate and warranted ought to be carefully stitched, lest you waste more of your fellow developers’ time than you save.
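
      A minimal sketch of that grep problem (all names here are invented): the method name refresh_admin_cache never appears literally anywhere in the source, so grepping for it turns up nothing.

      ```ruby
      # The generated names are assembled from substrings, so a grep for
      # "refresh_admin_cache" finds no definition anywhere in the codebase.
      class Cache
        %w[admin guest].each do |role|
          define_method("refresh_#{role}_cache") do
            "refreshed #{role}"
          end
        end
      end

      Cache.new.refresh_admin_cache       # => "refreshed admin"
      Cache.instance_methods(false).sort  # => [:refresh_admin_cache, :refresh_guest_cache]
      ```

      Asking the runtime (instance_methods, Method#source_location) is often the only reliable way back to the definition.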

      Because the “m-word” is harmful for the community. It becomes a bad label that is used too freely, both from outside and inside the Ruby world, being attached randomly to practices and features one just doesn’t like or doesn’t understand. This situation leads to a decay of ideas, where perfectly well-designed and natural libraries and language features become almost “prohibited”.

      Citation needed. I would’ve preferred to see this article lay out principles for when metaprogramming is valuable, what the standard community patterns for using it look like, and why we should use it over other language features. This would have made a far stronger case about why it shouldn’t be called magic. I’m not in any way opposed to it in principle, but like any tool in the Rubyist’s toolkit it needs to be applied judiciously otherwise I feel warranted in treating it like something arcane and dangerous.

      As an addendum, the snark does a poor job at being persuasive. The author would do better to vent their frustration without pretending to be pedagogical.

      1. 6

        I agree 100% with this. Some of the examples do make sense, and that’s fair, but some of them do not, and are magic, and I did not read a good explanation of why. The author probably had a different intention, but to me, the article reads at least partially as “it works, so it’s not magic”. Yes, many things do some job, but that doesn’t mean they’re doing it well.

    2. 27

      I’ve used ruby for 10+ years, understand meta-programming very well, and think it is almost always a mistake. Indeed, every ruby feature that breaks locality of behavior is best avoided entirely. This includes mixins and inheritance.

      “magic” as a dismissive pejorative represents deep cultural wisdom. Please, keep using it.

      Could not disagree more with the take presented in the article.

      1. 4

        Honest question: if your preferred programming style avoids any of the kinds of code you can only write using a dynamically typed language… why not use a statically typed language? It seems like you’re already writing code that would be amenable to static types, and doing that gives you all of the nice stuff provided by static analysis: refactoring, code navigation, better error checking, etc.

        1. 2

          the kinds of code you can only write using a dynamically typed language

          I’m not aware of any such code.

        2. 2

          To be clear, I didn’t say ruby was my favorite language (though I like it fine)… just that I’ve used it a long time professionally. I think starting on a new project, all else being equal, I would usually choose a static language for the reasons you mentioned.

          With that said, the equation is not exactly that simple in my opinion. This is something I’d need more time to articulate fully, but subjectively, I do think there is a different experience to coding in ruby/js/etc vs Go, or even something concise but statically typed like Haskell. I’m not sure what the value of that thing is, and it probably doesn’t make up for what you lose, but there is something there.

          In ruby specifically, the built in Enumerable methods are nice, and the syntax is pretty. I don’t consider these huge factors, but those would be the answers to your question.

          EDIT: Maybe one example of what that “thing” is comes from J/APL, where the integers 0 and 1 are your boolean types, and this allows you to write stuff like +/ 1&p: to count primes. So there can be advantages to “loose types” as well as footguns, with none of this mitigating my complaints about features that break locality of behavior.

          1. 2

            Rich Hickey and the Clojure community have come up with clojure.spec as a halfway house….

            You can have “plain old data” and a mechanism to specify the shape of it and validate and generate plain old data.

            I’ll admit to being conflicted.

            I remember building the most amazing arrays of hashes of hashes of arrays… in perl…. and never being able to maintain what I wrote.

            I think he has an interesting idea, but I’m not convinced.

        3. 2

          This is a good question, but I think the question of “can you reliably determine which function is being called here without running the code” is more or less orthogonal to being able to statically determine the type of a given value.

          Most mainstream dynamic languages are bad at both, but languages like Erlang, Racket, Clojure, and Lua handle the first one fairly reliably (the main exception being methods, which are often not used very widely) without making any attempt to solve the latter.

      2. 1

        You sound like a Go programmer who is programming Go in Ruby. ;-)

        I’d rephrase your…

        Locality of Behavior is the principle that: The behaviour of a unit of code should be as obvious as possible by looking only at that unit of code


        Locality of Behavior is the principle that:

        The behaviour of a unit of code should be as obvious as possible by looking only at that unit of code, assuming the dependencies that this unit of code invokes are well named, and their effect on this code is well understood.

        ie. You don’t have to understand how the dependencies do it, just what they do, and in particular, what the effect on this portion of the code is.

        1. 1

          You sound like a Go programmer who is programming Go in Ruby. ;-)

          I learned Go years after learning ruby, and hated it at first but learned to appreciate it.

          The behaviour of a unit of code should be as obvious as possible by looking only at that unit of code, assuming the dependencies that this unit of code invokes are well named, and their effect on this code is well understood. ie. You don’t have to understand how the dependencies do it, just what they do, and in particular, what the effect on this portion of the code is.

          I used to think this way too. No more. The dependencies have to all be explicit as well. That is, myDep.doSomething where myDep is explicitly passed to your code is fine. But doSomething where doSomething exists because of a mixin or from your parent class or because of method_missing or some other magic is no good.
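
          A small sketch of that distinction, with invented names: the explicit version tells the reader exactly where do_something lives; the mixin version does not.

          ```ruby
          # Explicit: the dependency is passed in, so the call site shows
          # exactly where the behavior comes from.
          class Report
            def initialize(formatter)
              @formatter = formatter
            end

            def render(text)
              @formatter.do_something(text)  # behavior clearly lives on @formatter
            end
          end

          # Implicit: the same call arrives via a mixin, and nothing at the
          # call site says where do_something is defined.
          module Formatting
            def do_something(text)
              text.upcase
            end
          end

          class MagicReport
            include Formatting

            def render(text)
              do_something(text)  # defined... somewhere up the ancestor chain
            end
          end
          ```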

          1. 0

            The point about “functionality inherited from parent” isn’t to provide magic to the child.

            The whole point is Liskov’s Substitution Principle. The child IS an instance of the parent class and can be used wherever the parent can be used.

            People keep stepping away from the L in SOLID and then wondering why it all hurts.

            LSP isn’t a guideline. It’s a mathematical rule. Violate LSP you have a bug, it’s that simple.

            If you think in terms of Design by Contract all the time, LSP (and complying with it) becomes intuitive.
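
            For what it’s worth, the textbook LSP violation (the classic Rectangle/Square case, not an example from this thread) is easy to sketch in Ruby: Square “is a” Rectangle taxonomically, but it silently breaks the parent’s contract that width and height vary independently.

            ```ruby
            class Rectangle
              attr_accessor :width, :height

              def area
                width * height
              end
            end

            class Square < Rectangle
              # Keeping the sides equal breaks Rectangle's implicit contract.
              def width=(w)
                @width = @height = w
              end

              def height=(h)
                @width = @height = h
              end
            end

            # Client code written against Rectangle's contract:
            def stretch(shape)
              shape.width = 4
              shape.height = 3
              shape.area
            end

            stretch(Rectangle.new)  # => 12
            stretch(Square.new)     # => 9 -- substituting the child changed the result
            ```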

            1. 1

              Yes, yes, I know all the arguments, I’ve read all the books. I know all about SOLID and can write that kind of code just fine. I assure you this is not a problem with me being uninformed, or having too little experience for the ideas to become “intuitive.”

              It is quite the reverse, and my recommendation is never to use inheritance (EDIT: to clarify, I am talking about implementation inheritance. Programming to an interface is just fine). If this strikes you as a novel stance, just google “inheritance is evil”. If you want specific recommendations for decent articles, let me know and I can dig through my bookmarks.

              1. 0

                Eh, seen a bunch of that sort of articles go by.

                I look at the examples they give and start screaming at the screen… yup, if you do that, of course it will hurt.

                So Don’t Do That.

                Not convinced.

                I guess we’ll have agree to disagree… unless I have to maintain your code! :-)

    3. 20

      The umbrella of “magic” (in Ruby and other languages) is wide and useful.

      To me, something is magic when one or more of the following is true:

      • It is not obvious (using basic constructs) why it works.
      • It abuses or bends the conventions of the language. Rubyist DSLs are the common example of this, using an #include statement to load a file at compile time in C is too.
      • It has genuinely unexpected side-effects. Why is allocating this object making a web request? Why do all of my arrays now have a new method available after this code runs?
      • It won’t work reliably for every developer. “I don’t care that the FooBar monadic endofucktor always works for @friendlysock; when others invoke it they usually forget some obscure environment flag and it breaks”.

      Some observed things that make me suspicious about magic:

      • It usually doesn’t scale with developer quality. I can’t guarantee that a tired, inexperienced, malicious, or stupid developer won’t reliably cause trouble with it.
      • It spreads. Once you get the taint of magic in your codebase, it tends to spread as it touches other things.
      • It corrupts. Similarly, once a developer uses just a little magic, they tend to start using it to get around every minor inconvenience or aesthetic displeasure. Before too long they’re defining single-character macros to do database access (a baby example).
      • It usually doesn’t compose. Unless care is taken, most magic in programming does not play well with other magic. It may not, in the case of monkeypatching core libraries, even play well with non-magic.
      • It gaslights. ORMs are a great example of gaslighting magic…many act like N+1 queries aren’t even a thing, and then some poor sod goes and brings down prod because they didn’t realize what would happen.

      I don’t think we should ban magic; sometimes it really is the only way to get something done. But we also should treat it for what it is, and respect the technical and human cost associated with its usage.
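
      The N+1 point can be sketched without any real ORM (this toy “database” just counts queries; nothing here is ActiveRecord): the per-record interface reads pleasantly but issues one query per user, while the batched call does the same work in one.

      ```ruby
      class FakeDB
        attr_reader :query_count

        def initialize
          @query_count = 0
          @post_counts = { 1 => 2, 2 => 0, 3 => 5 }  # user_id => number of posts
        end

        # One query per call -- the N+1 trap an ORM's lazy loading can hide.
        def posts_for(user_id)
          @query_count += 1
          @post_counts[user_id]
        end

        # One batched query, in the spirit of ActiveRecord's .includes(:posts).
        def posts_for_all(user_ids)
          @query_count += 1
          @post_counts.slice(*user_ids)
        end
      end

      lazy = FakeDB.new
      [1, 2, 3].each { |id| lazy.posts_for(id) }
      lazy.query_count  # => 3 (one query per user)

      batched = FakeDB.new
      batched.posts_for_all([1, 2, 3])
      batched.query_count  # => 1
      ```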

      1. 3

        This perfectly sums up my feelings. I was typing up something similar but discarded it because this is much more thoughtful and covered a bit more ground. I particularly like the point around corruption. There’s a lot of devs out there that lean on abstraction without understanding it. All of us have been there at one point or another. Writing code is largely a social activity and there needs to be clarity around abstractions and a proper process in place for guiding devs to produce code in line with a project’s objectives.

      2. 2

        using an #include statement to load a file at compile time in C is too.

        This is part of the standard library, and certainly not “abus[ing] or bend[ing] the conventions of the language.”

        1. 2

          Well, #include <stdio.h> is “using an #include statement to load a file at compile time in C,” which is very standard.

          And IMO that’s OK. The thing about the C preprocessor is that you can’t use macros in includes, so the paths are fundamentally fixed. If you see an include, you can always determine what file was included (as long as you know the include path, which is generally pretty sane). Something like what OP wanted will always require either generated C source or generated object code. If it’s one line… go nuts.

    4. 21

      Dear Rubyists! Please stop using “magic” as a synonym for “metaprogramming”!

      I am not a Rubyist, so I’ll absolutely continue calling it magic. If I can’t see it in the code and I can’t grep for it, it’s not explainable. I am happy to read the docs, if I get /any/ pointer on what to search for. If it’s not discoverable because it arrives via some hand-wave somewhere, there’s no point in not calling it magic.

      1. 8

        Absolutely agreed. From the article:

        “Is it magic? I don’t see any #drive! method here, but it magically works???”

        Any sane developer would laugh. No, silly, it is just inheritance. We can guess that #drive! method defined in parent Car class, and can easily find this class’ and method’s code

        If you think reliably finding the definition of the method is easy, you probably don’t know much about how Ruby works, pal!
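
        In fairness, Ruby’s runtime can answer the question even when grep can’t: Method#owner names the defining module, and #source_location points at the file for methods written in Ruby. (Class names below just echo the article’s Car example.)

        ```ruby
        class Car
          def drive!
            "vroom"
          end
        end

        class SportsCar < Car; end

        m = SportsCar.new.method(:drive!)
        m.owner  # => Car -- the class that actually defines the method
        # m.source_location returns the file and line for methods defined
        # in Ruby source, which is greppable even when the name isn't.
        ```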

        He’s using “junior developers” as a scapegoat when saying it’s fine to make things more complex than they need to be, but like … what have you gained here, really? If you have two alternatives and one of them is understandable by a junior developer and one isn’t, the second one better be a goddamn miracle in terms of value to justify its inclusion.

        His rubyist background also leads him to conflate “metaprogramming” with “hard to tell what’s really going on” but that link is a correlation, not a law. It’s perfectly possible to metaprogram in ways that are transparent and clear. (though maybe not in Ruby)

        Although in the end, I do prefer “spooky action at a distance” to “magic” when describing the problems of unclear code like the examples in the article.

    5. 9

      Agreed. I have slowly grown to hate the term “magic” because it implies the unexplainable. Half the time when I encounter it, I end up needing to dig into what’s actually going on anyway sooner or later, and once I do it’s usually not that complicated. It’s not magic, it’s just something that doesn’t need to be explained right now.

    6. 7

      It’s just the canonical way to refer to this style of non–locally-obvious programming. A slang term created by its detractors (maybe they would like Zig?) but reclaimed by fans, like me. My macrolicious style is magic AF.

    7. 4

      As Andrew Smith said:

      People fear what they don’t understand and hate what they can’t conquer.

      I’m not immune from this just because I insert a quote. :P

      It’s magic until you understand it and put the time in. Did you inherit something? Did you watch it break and change in your hands and deeply understand it in many ways or did you not have time? Is it frustrating because you never had success? Or did you have lots of success but still don’t like it?

      Lots of good tips in the original article. I like grep, fzf, ag/the_silver_searcher, and text search tooling. But Solargraph is at a higher level than text. It’s not without tradeoffs. GitHub’s repo is so large they can’t use Solargraph. It’s not fast. It’s trying to parse the tippy-top of abstraction and expression.

      In the best times, Ruby DSL and expression is glorious and wonderful. I can express user roles in near-english to the point when a Product Manager asks “are admins supposed to be able to publish blog posts?”, I can just read the rule verbatim. In the worst of times, a typo, a conflict, the indirection, the abstraction causes many hours of debugging and hair pulling when the root cause is finally found.
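
      A hypothetical sketch of the kind of near-English rule described above — this is not a real library, and every name in it is invented:

      ```ruby
      class Permissions
        def initialize
          @rules = []
        end

        def allow(role, to:, on:)
          @rules << [role, to, on]
        end

        def allowed?(role, action, resource)
          @rules.include?([role, action, resource])
        end
      end

      rules = Permissions.new
      rules.allow :admin, to: :publish, on: :blog_posts

      # The rule above can be read back to the PM almost verbatim:
      rules.allowed?(:admin, :publish, :blog_posts)  # => true
      rules.allowed?(:guest, :publish, :blog_posts)  # => false
      ```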

      However, these two best and worst of times have happened to me in other languages too. Django has expressive helpers and indirection conventions for productivity reasons. I’ve had Go burn me from sitting on top of something else (the OS, what can you do?). These are similar patterns at different degrees.

      Ruby is super expressive, with the trade-off of towering abstractions. It’s optimized for happiness, always has been. Some communities might not understand this initial premise. But then there’s such a large Rails influence that the topics get conflated, which might be why things like Hanami have started. But maybe it’s too late. There are projects like Blitz, Remix, Next, Redwood, Sveltekit that are bringing some of these values over. Are they at parity? Will they be? I have no idea.

      But Ruby will come and go. I just wonder if you could bin languages/frameworks/tools into “good for machines” or “good for humans” and make a predictive model? If someone invents PerfectLang™ tomorrow and it is good for machines, can you predict where it will be used and how?

      1. 3

        It’s optimized for happiness, always has been. Some communities might not understand this initial premise.

        Or maybe they just need something else to become happy. Ruby is in no way special or good when it comes to “optimizing for happiness”. It does that for a certain subset of developers. Which is cool, of course, but it is just that. Some people will be happy using it. Others won’t. I personally think the second group is larger, but I have no sources to back that up.

    8. 4

      In the context of Ruby especially, I see meta-programming being touted as a way to get code to read with a fluency that approaches natural language, but I don’t think that’s a very useful definition of “readability” in the technical sense. I’m much more concerned about code being readable in terms of the programming language it’s written in rather than some natural language, because that’s what has the most bearing on my ability to work with it. It has been my enduring experience, however, that meta-programming used towards the end of reading naturally harms technical readability because of all of its non-locality, and to little benefit. If I already know a programming language, then thoughtful organization of code units is going to be just as meaningful to me in piecing together the high-level semantics of the program as having a chain of method calls that reads like an English sentence is, and usually at a significant complexity savings! It seems like the tradeoffs are rarely worthwhile.

    9. 2

      I have come to love metaprogramming. It is the correct solution to reams of buggy boilerplate code.

      But I insist on several things to remove the “m” word.

      A. The result should be inspectable / readable.

      For example, if a C preprocessor metaprogram of mine fails to compile, I automatically run it through the preprocessor, preserving the output. The expanded macros are all on one horrendously long, unreadable line.

      So I automatically reformat that with clang-format.

      Then I recompile that.

      Now you can see what the metaprogram expanded to, and see what the compiler is whinging about.

      B. Your meta program should have unit tests, same as your code.

      I have unit tests that exercise the metaprogram, verify it does the right thing at run time AND can switch it to a mode where I can verify that the metaprogram produces the output I expect.

      C. It’s debuggable.

      My metaprograms have documentation explaining how to debug them. It’s as easy as debugging ordinary code.

      D. Your metaprograms should be well structured and even better documented, good API and strong internal checks to verify correct usage.

      No matter whether it’s Ruby or D metaprogramming, these should be the criteria by which we judge a metaprogram.
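
      A minimal Ruby sketch (invented names, not from the comment above) of what criteria A and B can look like: the metaprogram’s output is inspectable through reflection, and plain assertions exercise the generated methods like any other code.

      ```ruby
      class Settings
        FLAGS = %i[verbose dry_run].freeze

        FLAGS.each do |flag|
          define_method("#{flag}?") { !!instance_variable_get("@#{flag}") }
          define_method("enable_#{flag}") { instance_variable_set("@#{flag}", true) }
        end
      end

      # A: the result is inspectable -- the generated names are listed, not hidden.
      Settings.instance_methods(false).sort
      # => [:dry_run?, :enable_dry_run, :enable_verbose, :verbose?]

      # B: the metaprogram is unit-tested, same as ordinary code.
      s = Settings.new
      raise unless s.verbose? == false
      s.enable_verbose
      raise unless s.verbose?
      ```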

    10. 2

      OK, whether or not it’s “magic”, is it hard for the target audience to understand?