The password discussion part bothered me a little bit, as he didn’t talk about password storage (hash type) impacting crackability, yet /did/ say “takes 3 days to break”. Then provided a passphrase that he said would take about “550 years to break”, but no mention that dictionary attack resistance is important. I assume he kept it at a high level, just to move the talk along though, and not get fixated on it.
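To make the point concrete: crack time depends on both the secret's entropy and the hash's guess rate, which is why "3 days" or "550 years" is meaningless without naming the storage hash. A rough sketch, with purely illustrative guess rates (the function names and rate constants here are my own, not from the talk):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random secret:
    `length` symbols drawn from a pool of `pool_size`."""
    return length * math.log2(pool_size)

def crack_time_seconds(bits: float, guesses_per_second: float) -> float:
    """Expected time to search half the keyspace at a given guess rate."""
    return (2 ** bits) / 2 / guesses_per_second

# Illustrative guess rates (assumptions, not measurements):
# a fast unsalted hash on GPUs vs. a deliberately slow password hash
# such as bcrypt at a high cost factor.
FAST_HASH_RATE = 1e11   # guesses/sec
SLOW_HASH_RATE = 1e4    # guesses/sec

# 8 random characters from ~95 printable ASCII symbols
pw_bits = entropy_bits(95, 8)        # ~52.6 bits
# 4 words chosen uniformly from a 7776-word Diceware list
phrase_bits = entropy_bits(7776, 4)  # ~51.7 bits
```

The two secrets carry almost identical entropy, yet the same password takes ten million times longer to crack under the slow hash, and the passphrase only keeps its ~51.7 bits if the words are chosen uniformly at random; a dictionary attack on a hand-picked phrase searches a far smaller space.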
The other part that jumped out at me was his recommendation to “avoid YAGNI”. I found that element a bit odd, but perhaps he was distinguishing /perceived/ YAGNI (complexity that someone lacking domain knowledge might reject as unnecessary) from /real/ avoidable complexity caused by adding unnecessary features or flexibility.
Otherwise a pretty good talk.
The recommendation about rejecting YAGNI is that people with domain knowledge know what will be needed and therefore design their systems with those things in mind. The idea behind YAGNI is precisely the rejection of the idea that you can know ahead of time what you will need. I think of YAGNI as a sort of anti-intellectualism that rejects domain knowledge and its value in building systems.
Hmm. Then perhaps I have been lucky enough to predominantly encounter a simplified/distilled/corrupted version of YAGNI, and not its “higher form”. I have generally seen it invoked when a developer is adding what appears to be unnecessary flexibility or modularity to a component long before such a thing would be needed (if ever). More of a reminder to “start small”.
If a domain expert thought a feature was necessary, or could foresee that some complexity would indeed be relevant in the future, then of course you would want to bake that in as soon as possible, so you can have a more cohesive design. Given your description, it does now seem like this is closer to what the presenter was aiming for.
The interesting thing I took from his talk was the comparison between “testing efficiency” and the test pyramid, which I intend to follow up, although I do not understand how he linked the pyramid picture on the left of his slide with the numbers on the right (other than to say “here is a thing I believe is wrong”).
The next section of the talk was about how various “facts” in software (and in team organisation more generally) are wrong: uninformed or misrepresented opinion. Having read The Leprechauns of Software Engineering, I will accept that uninformed and misrepresented opinions have indeed become the basis for belief in software engineering. Coplien then claimed this was all an example of the Dunning-Kruger effect, and presented a version of it that is distinct from what Dunning and Kruger describe in their paper, using a graph of a completely different shape.
I don’t want to give away the ending but it’s Coplien shouting about how much the agile people got it wrong until his slot finishes.
You missed the most important point in the talk, really what the talk was about: that the IT industry focuses too much on technology and not enough on building deep domain knowledge, and that the way to improve the industry is to support structures that develop and capture that knowledge. The mechanics of programming can be taught to 14-year-olds; the hard part is building an understanding of the product, the users, and the business so you can make great software. The actual practices follow from that.
Yes, I did, as the presentation and inconsistency got in the way.
That’s fair. It seemed pretty consistent to me, but I understand it isn’t everyone’s style. I understood the flow to be: he focuses on practices that programmers think work and gives a reason why each doesn’t, then goes on to talk about what does seem to work, namely deep domain knowledge, and argues that the industry can cultivate systems which capture and develop it. That would be my TL;DR.
It’s not the style (though I found that difficult), it’s literally the consistency: he says we’re all fools with our folk understanding of psychology (Maslow’s hierarchy) and software engineering (Cohn’s test pyramid), then produces a graph describing what he calls the Dunning-Kruger effect that looks nothing like any diagram in a paper by Dunning and Kruger. He then uses that graph (his, not theirs) to draw conclusions about why we misunderstand the other things.