While I think a lot of this advice is fine as far as it goes, it still hews to the model of candidates running a gauntlet of interviews. These have very little in common with real work, no matter how hard you try to make the questions “work-like”.
If you’re a very large company - a google or amazon - it makes sense to use a series of exercises where top performance in the exercises correlates with good job performance, rather than try to interview every engineer in the world. Those companies have the scale to spend a lot of effort on interviewing, and to cream off people who are not likely to be turkeys. Indeed, at their scale this is probably the most reliable way to do things.
For the rest of us, the problem is different: how do we find and attract good people without trying to encounter every engineer, how do we fill the positions we need to fill in a tractable amount of time, and how do we avoid bad people without filtering out good ones (if you’re small, you can’t really afford many type I or type II errors). So the problem to solve is: how do you figure out what it would be like to work with each candidate, having first filtered them in as possibly having the attributes you need, and then, based on what it would be like to work with them, decide whether they still look like the person you need.
These are some exercises I’ve seen that address this problem relatively directly:
Take-home exercise: some groups of engineers (e.g. web developers) mostly won’t do them, while others, like data engineers and devops engineers, seem to prefer them. These can run unnecessarily long unless the interviewers take the time to time-trial the exercise repeatedly internally. This can be done very well, if the interviewers actually take testing and scoping seriously.
Ask the candidate how they should be assessed. This has the advantage that the candidate should be at their best, and barring bad administration of the assessment, it should be very fair to them. You get information about how they think both from the nature of the exercise they propose and from how they carry it out.
Ask the candidate to explain some code of their own choosing. In my experience, this is the single best way to figure out whether a candidate actually understands what they do, or just sort of muddles through things. I’ve seen plenty of candidates who aced all the other rounds reveal a fundamental lack of understanding here. This also allows the whole team to join the session, and to nudge or help out the candidate - which controls for the biases of one or two interviewers.
The one thing I will say is that a single live coding session should occur at some point in the process: a very small number of candidates will cheat on a take-home, and in theory they could prep extensively with a coach for the explanation session.
Your 3 suggestions are very good. In fact, I think they’d improve things at big tech companies too. There’s no such thing as “a series of exercises where top performance in the exercises correlates with good job performance”, evidently :(