In addition to understanding how computers work, I would add understanding how compilers work. I see two mistakes in the wild. One, people assume the compiler will optimize something it clearly cannot. Two, people make all sorts of inconsequential premature optimizations that the compiler would have done anyway.
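As one concrete illustration of the second mistake (a minimal sketch in Python, where the bytecode compiler plays the compiler's role): hand-computing a constant expression buys you nothing, because CPython constant-folds it at compile time.

```python
# CPython's bytecode compiler folds literal arithmetic at compile time,
# so pre-computing 60 * 60 * 24 by hand is an inconsequential "optimization".
def seconds_per_day():
    return 60 * 60 * 24

# The folded result is stored directly in the compiled code object:
print(86400 in seconds_per_day.__code__.co_consts)  # True
```

Optimizing compilers for C, C++, Rust, etc. do the same (and far more, such as strength reduction and dead-code elimination), so rewriting `x * 2` as `x << 1` by hand is typically wasted effort.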
I don’t know about the first one but I’m sure I’m guilty of the second.
Premature application of premature optimization is evil, is evil… ?
It’s almost as if being a skilled developer isn’t about following other people’s rules but rather about understanding when and where to apply them.
It shouldn’t be surprising that a complicated practice like software development, or law, or medicine, can’t be learned through simple rules alone, without a thorough, empirically grounded understanding of when and how to apply them.
The actual fallacy is treating any rule as applicable in every situation; no one ever presented these rules that way, and no one should practice them that way.
What I get so tired of is how this is always misquoted.
“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” -Donald Knuth
Knuth is not saying that optimization is wholesale bad, but rather we shouldn’t waste time on optimizing the pieces that don’t matter and instead focus on the big wins where things really count.
And use a profiler. There is no point spending three days optimizing a function only to find it made no difference because it is hardly ever called. And add unit tests before refactoring anything.
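A minimal sketch of the profiler point, using Python's standard-library `cProfile` (the function names and the toy workload here are hypothetical, just for illustration): the function that *looks* expensive runs once, while the cheap-looking one dominates the call counts.

```python
import cProfile
import io
import pstats

def rarely_called():
    # Looks expensive, but runs exactly once; optimizing it would be
    # three days of wasted effort.
    return sum(i * i for i in range(10_000))

def hot_path():
    # Looks trivial, but is called constantly.
    return sum(range(100))

def workload():
    total = rarely_called()
    for _ in range(1_000):
        total += hot_path()
    return total

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sorting by cumulative time shows where the program actually spends
# its time -- the profiler, not intuition, tells you where to optimize.
stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("cumulative").print_stats(5)
```

Running this shows `hot_path` with 1000 calls against a single call to `rarely_called`: exactly the kind of measurement you want before touching anything.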
This article is right on the money.