When we duplicate code instead of abstracting it, we’re predicting that the code we could have abstracted will somehow be inadequate for the future.
This axiom is wrong. Whether something is adequately abstracted changes with every commit, so the future should not be taken into account when making this decision.
For example:
I’m adding a new feature that replaces an old feature. Before my change, a function with just two lines of code got called four times from different parts of the code.
In my change, I remove three of those calls, so after my change, this function is used only once.
Now the abstraction should be dissolved and the code moved inline, ideally in the same commit, or at least in the same PR, as the rest of my change.
Before the change, not abstracting the code would clearly be wrong. After the change, abstracting the code would clearly be wrong.
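The mechanics of that example can be sketched as follows (a toy illustration with hypothetical names, not code from the original post): a two-line helper shared by four call sites, and what its last surviving call site looks like once the helper's body has been moved inline.

```python
# Before: a two-line helper shared by four call sites (only one shown).
def format_label(name, count):
    suffix = "s" if count != 1 else ""
    return f"{count} {name}{suffix}"

def render_summary(items):
    return format_label("item", len(items))

# After: the other three call sites are gone, so the helper is
# dissolved and its body moved inline at the last remaining use.
def render_summary_inlined(items):
    count = len(items)
    suffix = "s" if count != 1 else ""
    return f"{count} item{suffix}"

assert render_summary(["a", "b"]) == render_summary_inlined(["a", "b"]) == "2 items"
```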
Because abstractions aren’t free, sometimes we’re better off duplicating code instead of creating them.
This opening statement relies on the idea that all abstractions are macros; that is, that every abstraction corresponds to some polynomially-sized local (hygienic) expansion around the abstraction's call/use site. However, as John Shutt argues in his post on Abstractive Power, some abstractions are primitive or built-in, in the sense that they cannot be expressed as macros.
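A toy sketch of the distinction (my own illustration, with hypothetical names): a small helper is macro-like because each call site can be replaced by the helper's body, while something like a generator suspends and resumes control flow, so no local textual expansion at the call site erases the abstraction.

```python
# Macro-like abstraction: the helper can be erased by inlining its body.
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

y = clamp(5, 0, 3)
# The call site above is equivalent to its local, hygienic expansion:
y_inlined = max(0, min(5, 3))
assert y == y_inlined == 3

# Contrast: a generator suspends and resumes between calls to next(),
# so its use sites cannot simply be replaced by the function body.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

assert list(countdown(2)) == [2, 1]
```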
If spending 5 minutes a day gets me a context-specific, data-driven model about where to spend my refactoring time, that’s totally worth the trouble.
This is an interesting point of view. I don't think of refactoring as something that gets its own time slot, but as a mandatory part of standard programming practice. I dislike the idea of dedicated refactoring time because it implies that the remaining time is earmarked exclusively for feature development or managerial whims. Maintainable code must be refactored as needed, not as time permits.
A missing element from the author's analysis is the fact that mathematics provides us with a catalogue of abstractions. Some constructions in this catalogue, like monads or semirings, didn't have obvious applications to computer science at first, and it has been a long journey to learn how they can be applied. Even today, we are often unable to recognize when a sheaf or comonad would be appropriate.
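To make the catalogue idea concrete, here is one well-known entry, sketched in Python with hypothetical names (my illustration, not the author's): a matrix "product" written generically over a semiring. With ordinary (+, ×) it is arithmetic; with the tropical semiring (min, +) the same routine computes shortest paths.

```python
INF = float("inf")

def mat_mul(a, b, add, mul, zero):
    """Generic matrix product over a semiring (add, mul, zero)."""
    n, m, p = len(a), len(b), len(b[0])
    out = [[zero] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                out[i][j] = add(out[i][j], mul(a[i][k], b[k][j]))
    return out

# Adjacency matrix of a 3-node graph; INF means "no edge".
d = [[0, 5, INF],
     [INF, 0, 2],
     [1, INF, 0]]

# Over the tropical semiring (min as "add", + as "mul", INF as "zero"),
# squaring the matrix gives shortest paths using at most two edges.
d2 = mat_mul(d, d, add=min, mul=lambda x, y: x + y, zero=INF)
assert d2[0][2] == 7  # 0 -> 1 -> 2 costs 5 + 2
```

The design point is that the algebraic structure, not the routine, is the abstraction: one function, many instantiations.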