Or maybe the metrics are bad? Without further information this is just a bunch of graphs whose axes are labeled with numbers of unknown meaning. How does a tool decide what is maintainable and what isn’t?
The tool is not “deciding” what is maintainable and what is not.
Maintainability index, cyclomatic complexity, (a|e)fferent coupling, … are all industry standard software metrics. For more details on how they’re calculated and what they mean: https://www.cauditor.org/help/metrics/maintainability_index etc.
Or just Google for them :)
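For a rough sense of what the maintainability index measures, here’s the classic Oman/Hagemeister formula (normalized to 0–100 the way Visual Studio reports it) sketched in Python. This is the standard textbook version; the exact variant cauditor uses may differ, so treat it as illustrative:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic maintainability index, rescaled to 0..100.

    Inputs are the usual per-module averages: Halstead volume,
    cyclomatic complexity, and lines of code. Higher = more maintainable.
    """
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(loc))
    # Clamp negative raw scores to 0 and rescale 0..171 to 0..100.
    return max(0.0, mi * 100 / 171)
```

A small, simple module scores high; a huge, branchy one bottoms out at 0. The point stands either way: the number alone doesn’t tell you *why* code is hard to maintain, only that the inputs to this formula are large.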
A high score on a certain metric doesn’t necessarily mean something is “bad”. A controller will usually be more complex than a model, for example. It’s still up to humans to interpret the data, in its context.
As far as the metrics being bad: no, they’re fairly simple, standard calculations (though you can argue about how useful the formulas are). But there definitely is not enough data to state beyond doubt that programmers don’t evolve. So far, it’s just an observation based on a very small sample of data.
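To show how simple these calculations are: cyclomatic complexity is essentially 1 plus the number of decision points in the code. A toy version over a Python AST (just a sketch; cauditor itself analyzes PHP, and real implementations count a few more node types, like ternaries and switch cases):

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: 1 + number of branch points.

    Counts ifs, loops, boolean operators, and except handlers,
    which is the core of what production tools compute.
    """
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.And, ast.Or, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))
```

A straight-line function scores 1; each extra branch adds 1. The formula is trivial; the argument, as noted above, is about how much that number tells you.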
Obviously, one can question the metrics. We know that good programmers don’t necessarily write more code than bad ones, and defining “maintainability” is going to be difficult because it’s both subjective and nuanced.
Nonetheless, I’m not surprised by this finding. My experience is that most programmers don’t improve significantly over time. It takes a long time to get good at it, but spending a long time doesn’t guarantee improvement. The typical software work environment emphasizes hasty delivery and personal availability over technical excellence, so it’s a place where people are bound to pick up and reinforce bad habits.
Being a good programmer isn’t rewarded in most software workplaces, and it’s often punished. Moreover, the Agile/Scrum culture makes no distinction between senior engineers and junior engineers: everyone has to justify work, often down to humiliating 4- or 8-hour increments (“story points”), in terms of “user stories”. Since the senior engineers have to spend time fighting political dragons and justifying their existence as well, they have no time to mentor the juniors. No one gets better, and the senior engineers might actually decline (since they’re spending so much time on politics).
Software is a deep field in which it’s possible to continue evolving for 40 years and still have a lot to learn. That doesn’t mean that most people do. The vast majority of software work environments actually make a person stupider the longer one spends in them. This is especially true in “unicorn” startups (which are managed by marketing people, not technologists, these days) and even in the marquee tech companies, excluding their R&D divisions.
Harvests private email addresses and wants to create and edit web hooks? No thanks.
Your email address is needed to extract your commits & put them on a trendline.
Hooks are needed to be able to analyze new commits.
But you don’t even have to login, analyzer can also be run manually from your machine (in which case only the code metrics & project data are submitted): https://www.cauditor.org/help/import/manual_submission
What is the basis for the hypothesis that complexity declines over time? Is software done when the complexity hits zero?
I have a theory. The nature of the problems the engineers worked on didn’t change over time. They didn’t end up working on harder problems with more complexity…
As far as I can tell, this tool only analyzes PHP code.
Commits without impact on the code (e.g. documentation, typo fixes) are ignored.
That’s bloody hilarious: writing docs doesn’t matter for the maintainability index!