This blog post highlights how scripted recruiting tests can be counterproductive.
I had a similar experience with Google. It’s been several years, but I remember that I rated myself as competent at Linux and one of the phone screen questions they asked was “What is the order of fields in the shadow password file?” Even worse, I had a recruiter a few years ago who wanted to give me a “programming test.” Okay, sure. They asked me to choose languages, and one of the ones I listed was PHP (I was young). The test expected me to know how to write text to an image without searching the web.
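For the curious, the answer to that trivia question is documented in shadow(5): nine colon-separated fields. A minimal sketch of parsing one entry (the field names here are my own illustrative labels, not anything standardized):

```python
# The nine colon-separated fields of an /etc/shadow entry, in order
# (per shadow(5)). Field names below are illustrative, not official.
SHADOW_FIELDS = [
    "login_name",         # user login name
    "encrypted_password",
    "last_change",        # days since the epoch of the last password change
    "min_age",            # minimum days between password changes
    "max_age",            # maximum days before a change is required
    "warn_period",        # days of warning before the password expires
    "inactivity_period",  # days of grace after expiry
    "expiration_date",    # days since the epoch when the account expires
    "reserved",
]

def parse_shadow_line(line):
    """Split one shadow entry into a dict keyed by field name."""
    return dict(zip(SHADOW_FIELDS, line.rstrip("\n").split(":")))

entry = parse_shadow_line("alice:$6$salt$hash:19000:0:99999:7:::")
print(entry["max_age"])  # prints: 99999
```

Whether being able to recite that order from memory predicts anything useful is, of course, the whole point of contention.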
Ahh, yes, imagettftext, the most frequently-used of all standard library functions.
I love how they will prevent you from using the stdlib and make you write the function in a Google Doc to prove you know what you’re talking about. Write tests? LOL. Use a REPL? LOL. Write all your code in this Google Doc; you know, it’s just like your daily coding environment. And don’t make any mistakes, and never assume your program works unless you know it works. Good luck with that.
My problem with this method is that it tests whether you are a very specific type of person. Good luck with that failing diversity initiative, Google.
And enjoy the feedback you get on your interview. Why weren’t you a fit? Oh, the interviewer didn’t leave any feedback.
Yeah, I went through a similar first interview there a couple years ago. I was invited back for more interviews but was so put off by their process that I declined and went with a different job offer instead. Having seen a few friends burn out there I don’t regret that decision.
I too had a similar experience to the OP’s, with many of the same questions, although in my case the interviewer was thoughtful and kind and had a good ‘close enough’ filter. I passed and they continued to show interest, but I found a better fit/opportunity before they called back.
That said, in my opinion there are probably better questions to ask in a tech filter than which number corresponds to a particular signal enum name, or which calls you need to make in C to establish a full TCP connection. Those are handy to know off the top of your head, but they’re second- or third-order indicators of what I believe Google really wants, and they may tilt towards people with very recent CS degrees (since those are the people who have recently implemented languages or bare-metal network servers in classes).
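Both of those facts are also trivially discoverable rather than worth memorizing. A quick sketch in Python, whose `socket` module mirrors the classic C call sequence (socket → bind → listen on the server side, socket → connect on the client, then accept) one-to-one:

```python
import signal
import socket

# Signal names map to numbers, but some mappings are platform-dependent,
# which is one reason memorizing them is a dubious hiring filter.
print(int(signal.SIGKILL))  # 9 on Linux
print(int(signal.SIGTERM))  # 15 on Linux

# The classic C call sequence for a full TCP connection, via Python's
# thin wrappers over the same syscalls:
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # socket()
srv.bind(("127.0.0.1", 0))                               # bind(), port 0 = pick any
srv.listen(1)                                            # listen()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # socket()
cli.connect(srv.getsockname())                           # connect()
conn, addr = srv.accept()                                # accept()

# At this point a full TCP connection exists between cli and conn.
conn.close(); cli.close(); srv.close()
```

Knowing this sequence exists is useful; reciting it under interrogation, without a man page, is a different skill.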
On the third hand, Google appears to be doing fine and going from strength to strength. So it’s possible that the process, as convoluted and error-prone as it may superficially appear to us on the outside, is just what they need, even if it gets there in surprising ways.
Dan Luu has tweeted about this. And some more tweets.
There is, of course, a thread on HN (posting it here against my better judgement). According to one Googler:
I managed to find [the questions] and I don’t work in recruiting, they are for SRE pre-screens. The guy misunderstood most of the questions which is why he failed and then worded them incorrectly on his blog, it wasn’t the fault of the questions or the interviewer.
(edits are mine)
Make of that what you will, but the whole practice of this type of test (whatever the questions) is rather off-putting to me.
I can understand completely why Google interviews take the form they do - with the volume of applications they get, they need a system that filters the wheat from the chaff quite quickly. The problem I have is that the rigid Q&A with no room for discussion strikes me as far too inflexible.
That’s because the whole premise is predicated on the power imbalance of “we’re Google, so jump through these hoops” rather than a conversation that paves the way to a deeply technical discussion. I’m not saying they’re being nefarious here; it’s more this weird institutional behavior that results from achieving any sort of notoriety, where the bar gets raised ridiculously high for potential hires because “omg one bad hire could ruin us.”
There are definitely interviewers that delight in this sort of thing, but I really believe this is a breakdown in a system where every candidate, even if they come in for an interview, is automatically ‘not-fit,’ and must perform near-perfectly in order to become ‘fit.’
it’s more this weird institutional behavior that results from achieving any sort of notoriety, where the bar gets raised ridiculously high for potential hires because “omg one bad hire could ruin us.”
“One bad hire could ruin us” is an admission of managerial incompetence. If a company is so fragile against bad hires that an incompetent junior programmer can take the whole thing down, then maybe the VPs and the C-words earning $250,000 per year aren’t doing their jobs.
Also, “false negatives are better than false positives” is not always true. False negatives lead to false positives, because you still have to fill the role, and if you shut out too many good people, you end up deeper in the barrel. Besides, people can’t be linearly ranked. The person who’s too picky to date people with (or lacking) Superficial Feature X at age 25 ends up dating a larger proportion of those with Serious Deficit Y at 30, because that person rejected too many people for bad reasons.
the power imbalance of “we’re Google, so jump through these hoops”
Exactly, and much like the “Techtopus” wage fixing scandal, the hoop-jumping just spreads from firm to firm. Some time back I read of someone interviewing with Amazon and (IIRC) he had seven interviews before being made an offer. Sheesh, I’m pretty certain medical doctors don’t have it so hard!
I interviewed for a job at Mozilla and went through six interviews before being rejected. I did poorly on the sixth interview, and I understand why they passed after that, but the fact that there were that many interviews was a bit ridiculous.
I think they didn’t want to do a panel interview, so each member of the team I’d have potentially joined did their own interview. I would have preferred a panel, if only because it wouldn’t have used so much time or required so much reorganization of my schedule to accommodate.
What exactly does “six interviews” entail?
1 phone screen. 5 technical interviews over Skype, each on a separate day, each requiring me to rearrange my work schedule to be at home in the early afternoon during the work week.
where the bar gets raised ridiculously high for potential hires because “omg one bad hire could ruin us.”
I don’t think this frames it the right way. I have been heavily involved with the interview process at two unicorns in heavy growth phases. I helped develop the interview process and performed the most interviews in my department last year, so I’ve done a lot of interviews. From that I can say it’s not the idea that “one bad hire could ruin us”; it’s that when you’re trying to hire hundreds of people in a short period of time, you can let tens of bad hires in during a single round, and that has the potential to be pretty bad. Striking that balance is really challenging, and it’s simply much easier to be conservative about it if you can afford it.
filters the wheat from the chaff quite quickly
Or like…tosses a coin or whatever. ;P
I do wonder how long it will be until they start to see a large increase in candidates declining to take part in their interview process.
I obviously expect they will still get a steady stream of CVs from fresh grads, but I do think they will meet more ‘not interested’ replies from the people they spearfish themselves. I know of at least a few people who don’t even want to bother with them now, but who would gladly have gone through the hoops a couple of years ago.
They contact me every year or so, and I always say: “Would you still expect me to relocate to the bay area?” and the answer, so far, has always been “yes,” so…
I’m done with them. Never again. But I’m not who they’re looking for anyway.
I think the bigger risk is that by using this approach they will narrow their potential field of possible employees and thus end up with a lack of diversity.
If you all think the same way, how can you solve the problems that require a different approach?
This is exactly the problem. If you talk to their recruiters they’ll tell you they are having a really hard time with diversity. When you go through their process, you’ll see why.
Wow. Interview records should not be accessible to the whole company. And sharing a summary of them is a second privacy breach. Both would be illegal in my country.
How can sharing a list of questions that get asked during interviews be a privacy breach?
I don’t know if we are on the same page. I was complaining about a Google employee writing a comment on HN based on internal information that should be kept confidential (to protect the individual). So I am not against sharing the list of questions; I am against a Google employee sharing an assessment of the applicant’s performance.
I don’t have much against that guy sharing the questions he was asked in the interview.
I interpreted the comment to say “I looked at the actual questions we ask, and the ones in this post are similar but not the same.” If someone accuses a company of asking shitty questions, I think it’s fair game for the company to respond and say that the allegedly shitty questions have been misrepresented. The googler didn’t just show up out of the blue and announce “this guy sucked”. If you don’t want people discussing your interview performance, don’t write a blog post about your interview performance.
Hmm. For what it’s worth I’ve seen this guy, Pierre, author of G-WAN, argue a lot online with anyone who says anything negative about G-WAN. I got the impression he was a colossal douche, an infantile egomaniac.
I think this is him, but I didn’t see an author tag. If it is, I expect this blog post is a blatant lie, and he probably failed because he’s socially intolerable. A director of engineering has to work with people, not just be a good coder.
That being said, Google does have a pretty jacked-up interview process and a lot of other quirks besides, but that doesn’t mean I trust this post at all.
I posted about this on twitter and got a response from someone I trust saying they got a nearly identical pre-screen in 2011, and their experience almost exactly matches the post. (https://twitter.com/lafp/status/786971586792259584). Also https://twitter.com/littlesteve/status/787072870673293312
Remember, even a stopped clock can be right sometimes.
Yes, it is him (Pierre Gauthier). As someone else commented, the story has links to his LinkedIn profile, etc., so perhaps he had an ulterior motive…
Folks, much as I like to see rage about interviewing processes, this submission is basically just hearsay. Watching the HN thread devolve into ever crazier finger pointing and shilling and counter shilling makes me wonder if there is really anything to be gained from it.
Next time, let us try to stick to more reputable or at least better documented sources?
EDIT: let us avoid becoming a clearinghouse for clickbait rage fests.
But this is part of the issue with interview processes: they are rarely backed up with any evidence as to whether the process used is more reliable than throwing darts at a list of candidates on a wall…
Although this research found that
Results reveal that what you see in the interview may not be what you get on the job and that the unstructured interview is particularly impacted by these self-presentation tactics. Additionally and surprisingly, moderator analyses of these relationships found that the type of research design (experimental vs. field) does not moderate these findings.
I semi-agree. The reason being that the empirical method means forming our hypotheses about what’s going on from as much objective data as possible. People at Google, prior employees, and people claiming recruiting contacted them all say similar things. Even the supporters on HN implied it by telling us it was a pre-screen with a specific name. One director said something like he got the wrong process, or that they were supposed to stop doing it this way. Many said the actual interviews after pre-screening were much better. Just as I write this, Dan Luu chimes in saying the same, with examples.
So it’s entirely justified to believe, based on diverse data, that something similar to his claims happened. Their screening process is apparently broken if someone with his projects and work experience has to answer arbitrary, algorithmic questions for a non-technical screener just to get to talk to engineers or management.