First impression: achieving a log(n) rating in the software engineering section seems laughably trivial compared to the computer science section. Is that just my own bias, or do you agree?
Agreed; the skew looks random across the page.
How can the same column contain both «good knowledge of Fibonacci heaps» and «has tried out a DVCS»?
A few areas rank knowledge by an assumed linear ordering of approaches (with some key approaches missing; sometimes it is an assumed ordering of programming languages themselves).
Requirements work: «can suggest alternative requirements» is level 3? It would seem to me that the advanced level should be about the trade-offs between the choice of requirements and project costs/risks.
Yeah, the software engineering section would be better named “Use of Tools”.
The two biggest things that make engineering engineering are:
* Dealing with other people (customers or teammates)
* Bounding risk
In the SE section, there is no mention of estimating projects, setting up or managing teams, managing feature requests or clients, budgeting, resolving conflict, or developing specs. There's similarly no mention of things like cyclomatic complexity, mutation or property-based testing, static analysis, or MISRA or any other system for reducing code defects through best practices.
There's not even a mention of bug tracking or quality assurance. The closest they get is "assuming the tester does their job", without mentioning being able to write a good test script or bug repro script.
That whole section misses the mark.
It misses the mark even as use of tools. Advanced knowledge of DVCSes should at least include understanding why rebasing (replaying commits one by one) can give a different result from a simple merge. Understanding the differences between merge strategies is a bonus.
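To make that point concrete, here is a minimal sketch (throwaway repo, illustrative file contents, not from the matrix itself) of a branch containing a change and its revert: a plain merge keeps the mainline's change, while a rebase skips the duplicate commit and then replays the revert, ending with a different tree.

```shell
#!/bin/sh
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main .
git config user.email demo@example.com
git config user.name demo

echo two > f; git add f; git commit -qm base
git checkout -qb feature
echo TWO > f; git commit -qam 'upper-case'
echo two > f; git commit -qam 'revert upper-case'
git checkout -q main
echo TWO > f; git commit -qam 'upper-case on main'

# Plain merge: feature's tip is identical to the merge base,
# so main's change survives and f contains "TWO".
git merge -q --no-edit feature
merged=$(cat f)

# Rebase instead: the 'upper-case' commit duplicates main's change
# (same patch-id) and is skipped, but the revert replays on top,
# so f ends up containing "two".
git reset -q --hard HEAD~1      # undo the merge
git checkout -q feature
git rebase -q main
rebased=$(cat f)

echo "merge result: $merged, rebase result: $rebased"
```

The same asymmetry is why a commit-by-commit rebase can hit conflicts a single three-way merge never sees.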
Even within the sections there's a lot of skew: log(n) for algorithms is "knows when a problem is NP-complete", whereas for systems programming it's "total understanding of the hardware stack".
Well, to be fair, «algorithms» includes «good knowledge of graph algorithms», and the same for numerical algorithms, which can cover an arbitrarily large amount of material.
I think «can suggest alternatives to requirements» vs. «understands how DB indexes are stored internally» is a larger contrast.
That’s because software engineering is a lot simpler than computer science.
Not that I don't have some peeves with the matrix. E.g. stating that logic programming is a step above functional programming suggests little experience with either. But overall the table is a good starting point.
UI/UX level 0: page does not work on mobile. EDIT: never mind, it was made in 2008.
Thanks to John Haugeland for a reformatting of it that works much more nicely on the web.
At least now we know who not to hire for our own sites.
O(1) - has authored own maturity model.
Looks like this was 2011?
2008, in fact: https://weblogs.asp.net/sjoseph/programmer-competency-matrix
FWIW, this was posted before by the same person and generated pretty much the same response. I think we can all agree this article is mediocre at best; can it please stop being reposted?
I feel dumb asking this, but what is the relevance of the levels being represented as 2^n, n^2, etc.? Is 'n' supposed to have a meaning (which could be confusing, since different values of n would put the results in different orders), or is it just a representation that the concepts 2^n, n^2, n, and log(n) are increasingly hard to understand?
They are standard notation in computer science for describing how the cost of executing an algorithm scales with the size of its input (typically expressed as 'time complexity' for the number of instructions or 'space complexity' for the RAM required).
Ah, it's a reference to understanding big-O notation?
Yes - they are, in order, the most commonly encountered growth curves.
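A quick sketch of those curves, evaluated at a sample input size (n = 1000 is my own choice here, not from the matrix): each level's curve grows strictly faster than the previous one, which is presumably why the matrix uses log(n) for the highest level and 2^n for the lowest.

```python
import math

# Evaluate each growth curve named by the matrix's levels at one sample n.
n = 1000
curves = {
    "log(n)": math.log2(n),  # ~10
    "n":      n,             # 1000
    "n^2":    n ** 2,        # 1,000,000
    "2^n":    2 ** n,        # astronomically large
}

# The values come out in strictly increasing order, i.e. the levels
# really are sorted by asymptotic growth rate.
vals = list(curves.values())
assert vals == sorted(vals)
```

For large enough n the ordering is fixed; only for very small n (e.g. n = 2, where 2^n = n^2) can the curves cross, which is the usual caveat with asymptotic notation.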