There is no trick or technique; it’s all about diligence and simplicity.
While I agree that there is no trick to development, there are useful techniques. And it’s funny that “there is no trick” is immediately followed by a phrase (“it’s all about…”) that is basically the kind of aphorism the author is railing against.
I think the most disappointing part about the “X-driven development is the answer” ideas is that people not only continue to preach and give serious talks about them, but development communities continue to fall for them. Or worse, someone gives a talk about “X-driven development” as a useful technique, and some communities or luminaries take that and run with it, saying it’s the “one true way”.
Programming is a pop culture, indeed.
Trust yourself only.
Terrible advice. Please don’t follow this. Your own experience is very valuable, but so is the experience of your coworkers, especially those who have worked on the codebase for a while. “Trust yourself only” is selfish, hubristic, and ahistorical. It’s how we get into these belief loops in the first place.
Exactly! I was discussing this here just yesterday in the context of TDD. I’ve been thinking about it more, and I think part of the reason for this is that the people who tout a specific practice as “the way” are acting in earnest. The technique helped them so much that they can’t help but think it will also help everyone else, but they lose the nuance as to why it helped them.
There could be many reasons that a specific technique resonates with an individual (or, say, a type of work). It’s not as simple as “this is the best way to write code”. If it were, everyone would be doing that.
Of course, this is difficult to explain in general and it gets tiresome. But I share your disappointment.
I see we’ve progressed beyond poorly-contrived anti-OOP arguments on to FP. Progress?
I can’t help but note that the horrible counter-example of data-driven design is… an AST. You built an AST of S-Expressions.
You realize that literally every programming language you use is built on these… right?
Writing a compiler function that converts ASTs to other data might seem esoteric at first, but I use this pattern all the time and I don’t even do language design. It’s really freaking useful! I’ll take an AST over a nested-if of doom any time.
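To make that concrete, here’s the kind of thing I mean; a toy sketch (not any particular library), where the “AST” is just nested tuples and the whole “interpreter” is one recursive function:

    # Minimal sketch: the "AST" is nested tuples/lists, and
    # "compiling" it to a value is one recursive function.
    def evaluate(node, env):
        if isinstance(node, (int, float)):
            return node
        if isinstance(node, str):          # variable reference
            return env[node]
        op, *args = node                   # an S-expression: (op, arg1, arg2, ...)
        vals = [evaluate(a, env) for a in args]
        if op == "+":
            return sum(vals)
        if op == "*":
            result = 1
            for v in vals:
                result *= v
            return result
        if op == "max":
            return max(vals)
        raise ValueError(f"unknown op: {op}")

    # ("+", 1, ("*", "x", 3))  means  1 + x*3
    print(evaluate(("+", 1, ("*", "x", 3)), {"x": 2}))  # 7

Add a new operation and you touch one place; the nested-if version grows a branch for every combination.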
I don’t want to fully agree with the OP, but I also disagree with you.
Languages have an AST because their requirements are “accept arbitrary input code from users, and evaluate it.”
If that’s your goal, any solution is going to have the complexity of parsing some form of code and then evaluating it.
But if you’re working with requirements you get from some group, that’s not precisely your task. Yes, you get requirements; yes, there are new ones. But you still have a choice whether to implement those requirements directly in code, or build an interpreter to process some data-driven representation of them.
Which one is right? It depends. But it would be silly to deny that the interpreter approach has costs. If there are enough rules, and they change often enough, that pushes you towards paying the cost of an interpreter. But the interpreter also adds another layer to debug, and it complicates observability. If the rules don’t live inside the code, it may let your end users write infinite loops, and you’ll need to answer “how do I test/maintain these rules?” Code coverage can’t see whether you’re exercising your rules.
All this can be worth it, but I think there is a middle place where it sucks, and you’d be better off just writing out the business logic yourself. Judging where that middle place is can actually be hard, though.
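For concreteness, here’s a toy version of that choice; the rule shape is invented purely for illustration:

    # Option A: the requirement implemented directly in code.
    def discount_direct(order):
        if order["total"] > 100 and order["customer"] == "vip":
            return 0.2
        return 0.0

    # Option B: the same requirement as data, plus an interpreter.
    RULES = [
        {"if": [("total", ">", 100), ("customer", "==", "vip")], "then": 0.2},
    ]

    OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

    def discount_interpreted(order):
        for rule in RULES:
            if all(OPS[op](order[field], value) for field, op, value in rule["if"]):
                return rule["then"]
        return 0.0

    # Both return 0.2 for {"total": 150, "customer": "vip"}.

Option B only starts to pay off once the rule list is large or changes out-of-band; until then you’ve mostly added a second language to debug.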
I also want to partially agree and disagree.
Abstract syntax is inevitable, as you say. But does abstract syntax need to be a tree? Not necessarily. There are languages whose abstract syntax is a matrix, a list, or a graph. Many compilers convert trees to DAGs or graphs in order to share intermediate work more efficiently, moving from tree homomorphisms to graph homomorphisms.
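A minimal sketch of that tree-to-DAG sharing, assuming plain tuples as nodes (hash-consing, roughly):

    # Identical subtrees become one shared node, turning the tree into a DAG.
    def intern(node, pool):
        if not isinstance(node, tuple):
            return node
        key = tuple(intern(child, pool) for child in node)
        return pool.setdefault(key, key)

    pool = {}
    a = intern(("+", ("*", "x", 2), ("*", "x", 2)), pool)
    print(a[1] is a[2])  # True: the two ("*", "x", 2) subtrees are shared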
Every web application has this requirement as well; it’s called a web form. Parse, don’t validate.
I don’t know if you somehow reacted to the pre-edit version of my post (where I wrote “input”), but I edited it to “code” a long time ago. And most web apps do not take in source code, parse it into an AST, and then execute it.
You’re thinking of parsing/compiling in far too narrow terms. This technique is useful for lots of things, not just code execution.
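In the web-form case, “parse, don’t validate” looks something like this sketch (field names invented):

    # Instead of checking strings and passing the raw dict along,
    # parse the form into a typed value up front; downstream code
    # never has to re-check.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Signup:
        email: str
        age: int

    def parse_signup(form: dict) -> Signup:
        email = form.get("email", "")
        if "@" not in email:
            raise ValueError("invalid email")
        age = int(form.get("age", ""))   # raises if not a number
        if age < 0:
            raise ValueError("age must be non-negative")
        return Signup(email=email, age=age)

    # parse_signup({"email": "a@b.com", "age": "30"}) -> Signup(email='a@b.com', age=30)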
I’ve thought this too; the way I think of it is:
If you write a DSL, a basic interpreter is not enough: you need good error messages and debug tools, ideally even some static analysis, and of course documentation.
…that’s a lot, and it seems like you don’t need it, until you actually build a DSL and it just hurts without it.
So unless you are prepared to do all that, strongly consider reverting your data-driven approach to plain code when it gets complex.
It sounds like there is a business feature that requires user-defined (maybe) logic-based rules, and the person is trying to develop a mini DSL/language. I don’t see how that wouldn’t totally fall on its face in any paradigm.
I didn’t totally get the context, but my suggestions would be:
If you (the developer) maintain the rules, use the host language instead of designing a mini language.
If any untrusted person can modify the rules, let them write rules in SQL (or JS/Clojure, and sandbox the code).
If the business truly wants some point-and-click visual rule builder, it sounds like you’ll have to go down this path (but again, I would probably compile it down to SQL, or Clojure, depending; see the sketch below).
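Something like this, say; the rule shape here is made up to stand in for whatever the builder emits, and compiling to a parameterized WHERE clause keeps untrusted values out of the SQL text:

    # Hypothetical builder output -> parameterized SQL WHERE clause.
    # Column names are whitelisted; values go in as parameters.
    ALLOWED_COLUMNS = {"status", "total", "country"}
    ALLOWED_OPS = {"eq": "=", "gt": ">", "lt": "<"}

    def compile_rule(rule):
        """rule: {"all": [{"field": ..., "op": ..., "value": ...}, ...]}"""
        clauses, params = [], []
        for cond in rule["all"]:
            if cond["field"] not in ALLOWED_COLUMNS:
                raise ValueError(f"unknown field: {cond['field']}")
            clauses.append(f"{cond['field']} {ALLOWED_OPS[cond['op']]} ?")
            params.append(cond["value"])
        return " AND ".join(clauses), params

    sql, params = compile_rule(
        {"all": [{"field": "status", "op": "eq", "value": "open"},
                 {"field": "total", "op": "gt", "value": 100}]})
    # sql == "status = ? AND total > ?", params == ["open", 100]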
I don’t see how that wouldn’t totally fall on its face in any paradigm.
Why would this “fall on its face”? If the DSL is constrained enough, a basic logical syntax is not difficult with the right approach. Consider the RFC definition for SCIM filter syntax (RFC 7644). I built a more robust implementation of this than Okta’s in a matter of three days.
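To give a feel for why: here’s a recursive-descent parser for a SCIM-ish subset in miniature. To be clear, this is a simplified grammar I made up, not the real RFC one, which also has grouping, “not”, presence tests, and precedence:

    import re

    # Grammar of the subset:
    #   filter = cond { ("and" | "or") cond }
    #   cond   = attr ("eq" | "gt" | "lt") value
    TOKEN = re.compile(r'\s*([A-Za-z]\w*|"[^"]*"|\d+)')

    def tokenize(s):
        tokens, pos = [], 0
        while pos < len(s):
            m = TOKEN.match(s, pos)
            if not m:
                raise ValueError(f"bad input at {pos}")
            tokens.append(m.group(1))
            pos = m.end()
        return tokens

    def parse(tokens):
        def cond(i):
            attr, op, value = tokens[i], tokens[i + 1], tokens[i + 2]
            if op not in ("eq", "gt", "lt"):
                raise ValueError(f"bad operator: {op}")
            return ("cond", attr, op, value), i + 3
        node, i = cond(0)
        while i < len(tokens):
            if tokens[i] not in ("and", "or"):
                raise ValueError(f"expected and/or, got {tokens[i]}")
            rhs, i2 = cond(i + 1)
            node = (tokens[i], node, rhs)
            i = i2
        return node

    print(parse(tokenize('userName eq "bob" and age gt 21')))
    # ('and', ('cond', 'userName', 'eq', '"bob"'), ('cond', 'age', 'gt', '21'))

The hard part of a production DSL isn’t this; it’s the error messages, docs, and tooling the sibling comment mentions.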