Informally, I feel like the definition of ‘magic’ is closely related to debuggability. Magic features make common operations succinct. They become too much magic when they work well for the common case (with correct code) but are hard to debug in other cases.
Apple’s Cocoa Bindings is my go-to example of too much magic. They used key-value coding and key-value observing to eliminate controllers in most cases. One vendor wrote a blog post about using them to delete tens of thousands of lines of code from their flagship app. Rather than implementing a data source / delegate for each view, you just told generic controllers which properties should be exposed in the view. When it worked, it was great. You could build applications with absolutely no glue code. When it didn’t work, it was almost impossible to debug. If you wrote a delegate, you could stick breakpoints on the methods it implemented and walk through them to see what arguments they were called with, what they returned, and what you’d done wrong. With Bindings, you just saw empty views. You’d probably got some key wrong, but figuring out which one took more time than just writing a simple controller.
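To make the contrast concrete, here’s a rough sketch in Python rather than Objective-C (all names invented, nothing here is actual Cocoa API): the explicit delegate gives you methods to put breakpoints on, while the key-string version fails silently when a key is misspelled.

```python
# Hypothetical sketch of the two styles, not real Cocoa API.

class PersonTableDelegate:
    """Explicit glue code: every value the view shows passes through
    row_value(), so a breakpoint there shows exactly what was asked for
    and what came back."""
    def __init__(self, people):
        self.people = people

    def number_of_rows(self):
        return len(self.people)

    def row_value(self, row, column):
        person = self.people[row]
        if column == "name":
            return person.name
        if column == "email":
            return person.email
        raise ValueError(f"unknown column {column!r}")


class BoundTable:
    """Bindings-style: the view is configured with key strings and pulls
    values reflectively. A mistyped key doesn't raise anywhere useful --
    the column just comes out empty."""
    def __init__(self, objects, column_keys):
        self.objects = objects
        self.column_keys = column_keys

    def render(self):
        for obj in self.objects:
            print([getattr(obj, key, "") for key in self.column_keys])


# BoundTable(people, ["name", "emial"]).render()  # typo: column silently renders blank
```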
I think a lot of magic is, strictly speaking, reducing connascence. There is some concern that is handled in many places in the code without ever being explicitly mentioned. Changing the details of that concern does not require touching the code that implicitly relies on it, and changing the details of the user code does not require changing the core. More magic, same or slightly less connascence.
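A rough Python sketch of that shape (names invented): the core dispatch code never names any handler, and adding a handler never touches the core.

```python
# Invented example: handlers wired up by naming convention ("magic"),
# so neither side explicitly mentions the other.

class Handlers:
    def on_user_created(self, payload):
        print("send welcome email to", payload["email"])

    def on_user_deleted(self, payload):
        print("clean up data for user", payload["id"])


def dispatch(handlers, event_name, payload):
    # Core code: finds "on_<event>" reflectively instead of listing every
    # handler at every call site. New handlers require no change here.
    handler = getattr(handlers, f"on_{event_name}", None)
    if handler is not None:
        handler(payload)


dispatch(Handlers(), "user_created", {"email": "a@example.com"})
```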
I usually try to follow the “principle of least astonishment”. If something works as the user expects, then they don’t consider it magical. If it works in a surprising way, it’s magical. It becomes more murky if something works as expected for 99% of use cases, but 1% of the time it does something unexpected.
In JavaScript, that’d be + working as expected when both operands are numbers, or when both are strings. It becomes surprising when dealing with mixed types.
Python and Ruby allowing operator behaviour to be overridden is a great example of a language letting the user introduce “magic”. Python devs who mostly work with Django will be surprised by NumPy codebases. But at the same time, NumPy’s overloads allow very succinct code.
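For instance, the same operators mean quite different things on plain Python lists and on NumPy arrays:

```python
import numpy as np

# Plain Python: + on lists concatenates, == returns a single bool.
[1, 2, 3] + [10, 20, 30]      # [1, 2, 3, 10, 20, 30]
[1, 2, 3] == [1, 2, 3]        # True

# NumPy overrides the same operators: + is element-wise and == returns
# an array of bools -- succinct, but surprising if you expect list semantics.
np.array([1, 2, 3]) + np.array([10, 20, 30])    # array([11, 22, 33])
np.array([1, 2, 3]) == np.array([1, 2, 3])      # array([ True,  True,  True])
```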
When designing a new language, I often think about this. If a language’s syntax is C-like, then people will come into it with certain expectations of how it works. If a language’s syntax is something totally different, many of those expectations won’t exist. For example, if someone’s learning an array language, the notions of the expected behaviour of common operators are totally different, so it’s less surprising when they work differently.
The magic comes in when something looks like the programmer should already know how it works, but it doesn’t actually work that way.
It’s less broad, but I’ve always liked Conal Elliott’s definition: magic is having an implementation but no denotation.
It looks like the definition of magic here is the same as that of the term connascence.
I agree it’s very useful to have these metrics when assessing the quality of code.
I was expecting “magic is everything that does not have a definition”, sigh.