It takes bravery to admit that you’re not very good at these sorts of interviews. I’m not very good at them either. I have a friend who passed a FAANG interview by studying from Cracking the Coding Interview by Gayle Laakmann McDowell. It’s like SAT prep. It has its own industrial complex. For folks like us, the choice is to either swallow our pride and cram useless info or just say, “Not for me, thanks.”
At best, absurd challenges are just elaborate secret handshakes. At worst, they’re like regurgitating awkward, incomplete ports of Donald Knuth in a vacuum and passing them off as our own work while the interviewer justifies the exercise with, “It shows me how you think.” How is it a good idea to start a relationship at the Bridge of Death scene? The candidate either answers correctly or is cast into the void.
I’ve mentioned it elsewhere here, but I’ve found a very effective way to interview. I have the candidate walk us through some code they’d written prior to the interview. If they don’t have anything they can share either because the entirety of their experience is closed source or because it’s a very junior position, I give them a take-home exercise. Open-ended questions like, “How is this structured?”, “What did you learn?”, and, “What would you change?” give a great sense of the applicant’s level of skill, maturity, style, versatility, and values. The only time this doesn’t work is when candidates assume that I am looking for The One Right Answer to my open-ended question and freeze like deer in the headlights while they try to decipher what it could possibly be. If I’ve started the interview well, candidates are relaxed and understand that we’re just people trying to get a sense of what it’d be like to work together.
At best, absurd challenges are just elaborate secret handshakes.
I hear this talking point, but imo it misses their true function.
(I should caveat the following by saying I think these kinds of problems are far from perfect, will definitely lead to false negatives, and do not represent day-to-day programming work.)
But what they do accomplish is proof that:
The person has basic coding skills.
The person has a mind capable of learning, remembering, and applying non-trivial algorithms. Not everyone can do this, even with practice. And, other things being equal (ofc they’re not), you’d rather hire someone with this capability than not for programming work.
If you are more worried about false positive hires than false negative ones (a nice luxury if your company is popular among developers), this makes a lot of sense. It is not a sufficient predictor, but it will certainly filter out a lot of bad candidates along with some good ones. Of course, it will not tell you if the person works well on a team, if they are hardworking, or if they have other qualities you might want to hire for. But that’s not why they’re being used.
There are a million ways to do this which are more realistic than typical tech interviews. And the problems chosen for typical tech interviews are almost always wildly inappropriate for the actual job – many revolve around problems that took some of the most brilliant people in theoretical CS years to solve, and expect that the baseline for “coding skills” now is to be able to derive the same thing, on command, in 20 minutes. Which means it’s really a test of “did you memorize the answer in advance”, not a test of “basic coding skills”.
And, really, that’s a large part of the point: they’re openly and unashamedly tests of “have you recently graduated from a university which taught coding interviews as a final-year course” or “have you recently read Cracking the Coding Interview or similar”. Neither of which are particularly useful indicators of “coding skills”.
The person has a mind capable of learning, remembering, and applying non-trivial algorithms.
Well, no. It just proves the person can “study the test” and give you a convincing patter as they pretend to “solve” the problem you posed. In effect you are testing their ability to lie to you about their capabilities. If you think that’s a good thing, or that this is an important job skill in need of routine exercising, I’d hate to work with you or in the kinds of places where you feel comfortable.
Meanwhile, actual real-world programming basically never involves deriving the correct one-off algorithm from first principles, and a co-worker who insisted on trying to do so would, I hope, either manage to change quickly, or else be fired quickly as unsuitable and unproductive.
Well, no. It just proves the person can “study the test” and give you a convincing patter as they pretend to “solve” the problem you posed. In effect you are testing their ability to lie to you about their capabilities.
This is simply not true, unless you are passing people that fail to solve the problem but have “convincing patter”. The test is: you solve the problems or fail.
So “studying for the test” successfully establishes a minimum bar for the candidate’s programming intelligence. As I said, this is not trivial. You need to learn graph algorithms, dynamic programming, sorting, queues, etc, etc, and then have internalized everything well enough that you can solve 1 or 2 random problems from any category quickly and correctly.
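To make that bar concrete: here is the sort of problem the “graph algorithms” bucket implies, a minimal BFS sketch (my illustration, not the commenter’s) for shortest path through a grid with walls:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Typical 'graphs' category warm-up: fewest steps through a grid,
    where '#' cells are walls. Standard breadth-first search."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1  # goal unreachable

assert shortest_path(["..#", "...", "#.."], (0, 0), (2, 2)) == 4
```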
I’m not defending this as the ideal interview style, but claims like “it’s meaningless,” “it’s merely a secret handshake,” and so on, are clearly not true. It provides a strong filter with real signal, so long as you don’t mind rejecting a decent number of good candidates. That may feel unfair when you are a good candidate being rejected, but it’s a perfectly logical business strategy for in-demand companies, and, if administered rigorously, is actually harder to game than the alternatives most people propose, which strongly depend on subjective judgments of the interviewers. Note: The existence of coding interview prep courses does nothing to mitigate this point, because you need to have raw ability and practice a lot to “game” the system. Not everyone who tries can do it.
Meanwhile, actual real-world programming basically never involves deriving the correct one-off algorithm from first principles, and a co-worker who insisted on trying to do so would, I hope, either manage to change quickly, or else be fired quickly as unsuitable and unproductive.
I never claimed this (in fact I went out of my way to say I don’t believe this), and I don’t think the companies doing these interviews think this. It’s a straw man argument. The actual claim is: A person capable of learning and fluently applying these algorithms will probably be able to do the kind of work we actually do day-to-day, which is generally less demanding. The critics say: “Well why don’t you just test ‘the day to day’ directly?” And indeed that would be best, but this is harder and more time-consuming to do. The real test would be: Just work with them for a month. But this is rarely feasible.
Which means it’s really a test of “did you memorize the answer in advance”, not a test of “basic coding skills”.
It is much more, unless you get the exact same problem you studied, which you won’t. Because “memorization” here means understanding a basic kind of solution, recognizing when it applies to a problem, and then correctly applying it. To solve a randomly chosen dynamic programming problem that you’ve never seen before, even if it’s a variation on one you’ve solved, means you understand dynamic programming. This is not comparable to “memorizing” a bunch of dates for a history exam, for example.
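For concreteness, here is a sketch of the kind of problem being described (the classic “maximum sum of non-adjacent elements”, chosen by me as an illustration, not taken from the comment). Solving an unseen variation of it means knowing what state to track, not reciting a memorized answer:

```python
def max_nonadjacent_sum(values):
    """Classic interview DP: max sum of elements with no two adjacent.

    State: best total so far, split by whether the previous element
    was taken (take) or skipped (skip)."""
    take, skip = 0, 0
    for v in values:
        # Either take v (previous element must have been skipped) or skip v.
        take, skip = skip + v, max(take, skip)
    return max(take, skip)

assert max_nonadjacent_sum([3, 2, 7, 10]) == 13  # picks 3 and 10
```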
The “secret handshake” stuff is like… for example, you ask someone “how do you figure out if there’s a loop in your linked list in constant memory”. Most of my coworkers and myself know this, but we know it because we were told it at one point in the past.
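For reference, the baked answer in question is Floyd’s tortoise-and-hare; a minimal Python sketch:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle(head):
    """Floyd's tortoise-and-hare: O(n) time, O(1) memory.

    Advance one pointer by 1 and another by 2; if the list loops,
    the fast pointer eventually laps the slow one and they meet."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```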
Many of the tiny self contained things have these baked answers and interviewers expect it. It’s not that you can’t come up with it from first principles but hell it took 10 years for researchers to land on an actual bug-free implementation of quicksort…
As to gaming fears…. spending 45 minutes on a “real looking” problem in your codebase sounds way more valuable than doing the binary tree balancing act. Hard to game the thing when the task is “do work like what your real work is”.
(full disclosure: I do think that most people should still know how to balance a binary tree, just feels like a bit of a silly thing to ask during an interview).
To be clear, I (personally) would never ask how to balance a binary tree in an interview. It’s too specific and even if you’re good you likely won’t remember unless you’ve looked at it recently. Nevertheless, correctly doing it quickly is a positive signal, especially if, say, the same person can also solve 1 or 2 more problems from different categories.
For screening basic coding skills, I prefer problems that are basically “a harder FizzBuzz” – not literally, but in the sense that they require no specific algorithm knowledge, yet still verify your ability to put together a solution to a not-totally-trivial problem in a reasonable amount of time.
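As an illustration of that “harder FizzBuzz” flavor (my example, not the commenter’s), something like run-length encoding fits: no named algorithm, but enough edge cases to separate people who can program from people who can’t:

```python
def run_length_encode(s):
    """Compress 'aaabccc' -> 'a3b1c3'. Needs no textbook algorithm,
    but the edge cases (empty input, flushing the final run) are
    where weak candidates stumble."""
    if not s:
        return ""
    out = []
    current, count = s[0], 1
    for ch in s[1:]:
        if ch == current:
            count += 1
        else:
            out.append(f"{current}{count}")
            current, count = ch, 1
    out.append(f"{current}{count}")  # flush the final run
    return "".join(out)

assert run_length_encode("aaabccc") == "a3b1c3"
```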
spending 45 minutes on a “real looking” problem in your codebase
We’ve done this kind of thing a lot, as well as the “walk us through a large-ish project you’ve built,” in a group interview setting. I do think these tell you something, but I also know there is a lot of subjectivity involved in the interpretation of results, and a lot of variation in what conclusions the interviewers draw. Also, people will ask “leading” questions, and it’s easy to consciously or unconsciously skew the results. Having a pre-defined rubric of questions or points you expect the candidate to comment on can help.
So “studying for the test” successfully establishes a minimum bar for the candidate’s programming intelligence.
No, it successfully establishes a minimum bar for studying the test. It tells you nothing useful whatsoever about the candidate’s “coding skills”, because the “coding problems” being posed so often are ones that flat-out cannot be “solved” from scratch in the allotted time unless the candidate already memorized a complete answer in advance.
And that’s without getting into problems with the interviewer. One of the smarter people I know once interviewed at a household-name tech company and was given a problem to solve that involved working with graphs. It was a problem that turned out to have two standard solutions (depending on what tradeoffs you want to make). The candidate produced one of them. The interviewer only knew the other, and auto-flunked the candidate for not producing it.
That is not the only story of that type I could tell. It is not the only story of that type out there. Google even somewhat-infamously at one point had non-technical people doing some of the early phone screens, where pass/fail depended on coming up with the answer on the interviewer’s answer key. If you deviated too much in your phrasing or explanation you could fail because the interviewer literally did not have the ability to understand what you were saying.
So, again, these processes offer no useful signal.
The critics say: “Well why don’t you just test ‘the day to day’ directly?” And indeed that would be best, but this is harder and more time-consuming to do.
I have designed and run such processes. I know for a fact that at least one giant tech company (Netflix) has used such a process. The fact that other places don’t do it has nothing to do with it being too much effort (if anything, most tech companies spend much more effort running their badly-broken processes than they would running good ones).
So, again, these processes offer no useful signal.
Let’s do a thought experiment. You are forced to hire one of two candidates, and you won’t get to interview them or meet them. You are told one has passed the google algorithmic interview process, and one has failed. You know nothing else about them. If you believe your claim, then you would just flip a coin to decide, because the information is meaningless. Is that honestly what you would do?
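Having been through the Google algorithmic interview process, I absolutely and completely honestly would flip a coin in that situation.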
Well, there’s a correlation between being good at “competitive programming” and doing well on Google interviews. And according to Google there’s also a negative correlation between competitive programming and doing well on the job. So the best option, if forced to pick one, might actually be to hire the one who failed Google’s interview.
(in other words, yes really, I really am really saying for real, really, that I really do not think the Google style of interview provides useful positive signal, really)
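But what do you honestly think? (jk)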
Anyway, it’s a bold claim. If you’re right, it means that one of the world’s largest employers of software engineers – who’s been doing this for 20+ years and tracking and analyzing results – is not only using a suboptimal interview method, but is using one that is effectively picking engineers at random. Or worse – I’m curious where you got the competitive programming stats?
So, do you believe google engineers are basically a random sample of the market, in terms of skill and talent? It seems your position commits you to that, unless you think the non-algo parts of their interview process are somehow making up the difference and still selecting better than average engineers? The conclusion seems highly implausible to me, even if I ignore my own instincts about it and just take an econ/incentives perspective on the situation.
I’ve worked with plenty of people who’ve been at Google. My impression is that… there’s nothing super-special about them, they’re the same as everyone else. Some are quite good, some are not, some are average… there’s really nothing that stands out.
And the stories they tell about working there are all pretty similar, and are similar to the stories you hear from just about anyone who works or worked there: that all the algorithm-challenge stuff is useless on the job, because the average Google programmer’s work looks like the average non-Google programmer’s work: taking specs, turning them into software using standardized tools and libraries applied in standardized patterns.
If this seems “implausible” to you, I don’t know what to tell you. My own large-tech-org experience (Mozilla) was the same way. People I know who’ve worked at other big tech companies all have similar stories. There’s just nothing magic about the big places.
In Google’s case, they don’t have some sort of secret massive rockstar guru ninja wizard 10000x advantage in programmers or in building new software – in fact, arguably the opposite, since for a long time now it’s been rare that anything impressive and novel came from within Google (rather than things developed externally and acquired by Google later on). Their success as a company is entirely due to being the biggest player in online ads, which seems to have more to do with… let’s call it “willingness to make use of their pre-existing position in the market”… than with any outright technical superiority. I don’t doubt that their marketing and business folks are some of the most ruthless and fierce in the whole industry, but we’re not talking about the interviews for those positions.
You misunderstood what I said. I don’t think google devs are magical. Please feel free to read back over what I’ve written and see if you can find any of my actual words that imply that. It’s not implausible at all to me that they’d be similar to Mozilla devs, or the devs at my company, etc. What is very implausible is that they are a random sample of the applicants, which is your claim.
And your informal description “some are quite good, some are not, some are average” does not sound like a random sample of applicants. From my experience interviewing, even with HR phone screens, I’d say that between a third and a half of applicants are very underqualified…. they are clear, unambiguous “no”s. If our hiring process were random, the competence of our team would be very different from what it is now, and I would not describe our hiring bar as extremely high.
You seem to have a number of axes to grind about google that I have no opinion about, or don’t disagree with, and none of those axes have anything to do with the claim I am debating, and everything you say only makes me more confident of the claim: which is that those algo interviews most certainly do provide a real signal, even if it is imperfect and no better than other methods of interviewing.
TLDR: It sounds to me like your actual claim is something like “the google algo interview process is no better, and maybe even worse, than other types of interview processes that are way less silly and annoying.” That is a claim I’d be open to believing is true, but it is a much weaker claim than that Google has been randomly hiring engineers for 20 years.
If you’re designing any kind of admissions test, it generally fits one of two categories: selection or deselection. A deselection test is intended to filter people out. If you’re aiming to get people in the top 10% of applicants then you want a test with high discrimination around 50% so that you can be confident that there is a high probability that anyone failing is in the bottom 50% and anyone passing is in the top 50%, but any signal beyond that can be very weak. For a selection test, you want very high discrimination around your cut-off boundary. If it’s scoring 0-10 then you want people who are in the bottom 80% to get around 3-4, people at the top to get close to 10, and any score from 4-10 to correlate strongly with their position on the list.
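A toy simulation of that deselection shape, under made-up assumptions (skill and test noise both standard normal), shows the effect: a pass/fail cut at the median score removes most of the bottom half while saying little about ordering above it:

```python
import random

random.seed(0)
N = 100_000
# Made-up model: true skill ~ N(0, 1), test score = skill + N(0, 1) noise.
skill = [random.gauss(0, 1) for _ in range(N)]
score = [s + random.gauss(0, 1) for s in skill]

cutoff = sorted(score)[N // 2]  # pass/fail at the median score
passers = [s for s, sc in zip(skill, score) if sc >= cutoff]

below_median_skill = sum(1 for s in passers if s < 0) / len(passers)
print(f"passers drawn from the bottom half of skill: {below_median_skill:.0%}")
# Prints roughly 25%: the cut removes most, but not all, of the bottom half,
# and score rank above the cut is a weak guide to skill rank.
```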
Phone screens are deselection tests, often actually aiming to get rid of the bottom 70% or so (this varies with the job - for entry-level positions, it may be far lower). The problem is often construct validity: Is the test actually measuring something that correlates with the desired outcome? If you can’t define what it is that you actually want from a person in a particular role, then your chance of being able to design a test that accurately filters at any level is very low. I think this is what the article is trying to say: the phone screens that he’s encountered may have high discrimination at the desired interval for a particular set of skills but those are not the ones that the company actually wants from a developer.
Unfortunately, the same problem applies once you’re past the phone screen. Here, people often give up on trying to design a proper assessment and instead use the ‘I know what a good developer looks like’ metric. This is a really great way of maximising implicit bias in the process.
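I’ve used that technique and I really think it’s the best.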
Others have mentioned this, but I want to underscore that the kinds of companies that have these interview processes would rather not hire someone than hire them. People always say “your process is bad because it will miss competent people” but they would rather miss some competent people than get some people they don’t want. Especially in big tech, testing for “are you compliant, will you jump through our hoops” is almost as important as any test for technical skills.
they would rather miss some competent people than get some people they don’t want
Two problems with this:
The processes get copied by smaller companies which cannot afford to just toss qualified people in the trash on a regular basis (Google can afford it because Google will get another hundred or more applying the next day; your company does not have that luxury).
This infamous talk covers a lot of ground, but one story that sticks out: the presenter was on a hiring committee at Google, and a recruiter once decided to gently point out some problems with their standards by lightly anonymizing the committee members’ own original interview packets and resubmitting them. The committee then voted “no hire”… on themselves. This does not indicate a well-calibrated and standardized process. Yet it’s the reality not just at Google but across a huge swathe of our industry. We might as well go back to the old joke about randomly throwing some applications in the trash can so as not to hire any unlucky people.
Especially in big tech, testing for “are you compliant, will you jump through our hoops” is almost as important as any test for technical skills.
Curiously, this is not universal. Google and FB, yes. But a while back I did Netflix’s interview (and got invited back after the technical portion) and it was remarkably straightforward and centered around things that would actually be relevant on the job and in the specific team I was talking to. None of the “dance for me, monkey!” stuff you hear about from the other places. Now if only people would rush to copy that process instead…
The processes get copied by smaller companies which cannot afford to just toss qualified people in the trash on a regular basis (Google can afford it because Google will get another hundred or more applying the next day; your company does not have that luxury).
Google can afford to hire a few unsuitable people; in a small company this is much more damaging, especially when hiring for a senior role; assuming you find out about it quickly, it’ll take a month to restart the recruitment process, and major projects will get delayed in the meantime.
Google makes it extremely hard to fire people because they believe that job security is a key factor in job performance. The only reason they can do that is because the hiring and promotion process is so focused on avoiding false positives. There is a coherent philosophy behind the strategy.
The committee then voted “no hire”… on themselves. This does not indicate a well-calibrated and standardized process
Doesn’t it? I’d posit that it might not be the right process for some companies per point one, and one might object to it on other grounds, but I think for some situations this might be exactly what is intended.
If you assume there are some elements of individual bias and randomness in the process of selecting hires, then you should accept that some portion of your current employees might have been “wrong” decisions according to your metrics, but eked by because of those confounders. (note I’m not judging their actual performance in the job, simply that they may have gotten through your process despite not really clearly meeting the bar you intend to set.) Given that, you should expect that reevaluation by those same metrics might make a different decision for some current employees. If that were not the case, you’d actually be iteratively lowering your standards over generations of this… and in a company with aggressive/significant hiring this can quickly compound. This is part of the thought behind the aphorism of “A’s hire other A’s while B’s hire C’s”
If you assume there are some elements of individual bias and randomness in the process of selecting hires, then you should accept that some portion of your current employees might have been “wrong” decisions according to your metrics, but eked by because of those confounders.
If it were perhaps one or two, maybe. But rejecting all of their own packets for not being up to par? That is, again, not a sign of a well-calibrated and standardized process.
And if it is correct in retrospect to say all of them should have been no-hires, then what you’re effectively saying is that Google’s much-vaunted and much-copied hiring process is effectively worthless crap. There’s no interpretation in which the process comes out looking good here.
This is part of the thought behind the aphorism of “A’s hire other A’s while B’s hire C’s”
People who think like this are, to put it bluntly, a gigantic part of the problem.
Were they hiring for the same level as themselves or more junior? When you’re hiring at your own level, the recommendation is to try to hire people who are better than you (on the basis that there’s some margin for error and if each successive round lowers the bar then pretty soon you get useless people). If so, I’d expect the folks to all reject them because their interview packets show a less experienced version of themselves.
then what you’re effectively saying is that Google’s much-vaunted and much-copied hiring process is effectively worthless crap
Yes? I mean no one really seems to be claiming to have figured out hiring. At best you hope that you have a system that gets you a sufficient number of successful new hires without falling over completely, consuming all staff time in interviewing, letting in too many bozos, or getting yourself sued.
People who think like this are, to put it bluntly, a gigantic part of the problem.
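Of which problem?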
Especially in big tech, testing for “are you compliant, will you jump through our hoops” is almost as important as any test for technical skills.
Agreed. I have a little over ten years of industry experience now too, and have become increasingly convinced that this is a major factor in the design of these kinds of interviews, whether the interviewers know it or not.
… the kinds of companies that have these interview processes would rather not hire someone than hire them.
If that’s true, then they are going about this all wrong. Most of the comments here seem to be focussing on this same idea: that the purpose of these tests is to weed out the bad candidates, even if they also exclude some good candidates as a kind of collateral damage. But in my experience these types of interview do not weed out bad candidates at all. I have worked with people who aced this kind of interview but were terrible software engineers.
I have not seen any evidence that these kinds of interviews either select for good candidates or deselect bad candidates.
But in my experience these types of interview do not weed out bad candidates at all. I have worked with people who aced this kind of interview but were terrible software engineers.
This sounds like a biased sample set. Did you work with any folks who failed the tests? I’ve seen some spectacularly bad professional programmers who couldn’t implement the kind of thing that I used to set as exercises for 11-year-olds learning to program for the first time and definitely couldn’t manage anything from an undergraduate-level data structures course. This is what you’re trying to weed out with these kinds of test. The folks I’ve seen who have passed them tend to have the kinds of problems that are simple inexperience: they may never have shipped something with high security or availability requirements or with a very large installed base and long-term support requirements.
That said, I think the right way of designing this kind of thing is to first identify the set of skills that you think your team is competent to teach. Nothing on this list goes in the interview. Next, identify the skills that your team definitely isn’t competent to teach (or doesn’t have the time to teach) and which you need to assume everyone can do. Start designing a test to measure these things. A lot of the tests of this kind that I’ve seen have been measuring things that someone on the team is an expert in and can easily teach to a new hire, rather than things that a new hire would need to know.
Well, yes. It’s my personal experience so obviously it’s a biased sample!
However, I still think it serves to show that these kinds of interview tests are ineffective at filtering out poor software engineers. Perhaps they do filter out the completely incompetent, i.e. people who really can’t program at all, but so does every other form of programming test.
I’m convinced that this kind of phone screen is used as a way for FAANG companies to weed out people who don’t want to spend tens of hours studying Cracking the Coding Interview. Maybe they’ve found some sort of correlation between how dedicated people are to getting into a “prestige” company and how long they will stay at a job, which helps reduce costs associated with hiring and retraining or morale hits from people leaving more often. And because big companies do it, other companies cargo-cult the idea even though the original rationale doesn’t apply to them.
That’s just speculation, of course, but I’m convinced it’s something like that. There is a time and a place for whiteboard interviews, but trying to move that step to a phone screen makes it worse than worthless.
I’ve recently taken over most of designing the hiring process at my current job, and my top priorities are:
Don’t do anything that might drive away the best candidates
Ensure the candidate can do or quickly learn what we’re hiring them to do
Figure out if we can provide a work environment that aligns with the candidate’s needs and competencies in a way that they will be interested and fulfilled in their work
Not waste anyone’s time
Using Kevin as an example, if he applied for a position it would be clear from his Github and blog that he was competent and motivated. There would be no reason to ask algorithm or whiteboard questions, and an interview would be about seeing if we have aligned goals, and if we do, trying to convince him that he should choose us over any other offers he has.
Maybe some high salary or prestige companies can afford a high rate of false negatives, but algorithm puzzle phone screens and 5-hour take-home exercises for every candidate no matter what are a good way to fall behind on hiring goals and let great candidates slip through your fingers.
It’s only an anecdote, but I did not study for my Google interviews and got offered a job anyway. The technical questions which involved writing code were given in front of a whiteboard; I did not even have the luxury of Coderpad. And I had not taken the opportunity seriously, other than participating; I did not prepare much because I did not think I would get an offer.
I mostly recall your comments on lobste.rs revolving around topics of higher mathematics and FP which, in my opinion, go well beyond the knowledge of an average “coder”. Even though you were not prepared for the interview itself, your above-average cleverness might have been picked up by an interviewer? For people closer to the central part of the Gaussian curve it might require some additional effort, I don’t know :/
Perhaps. But I think that this is hero worship, or at least belief in meritocracy. Also, it implies that Google’s interviews are singularly difficult, which is definitely not the case. In terms of FP, they offered to interview me in Haskell but I chose Python instead since I felt stronger with it.
It is more likely that, due to an adverse childhood, I have internalized certain kinds of critical/lateral/upside-down thinking as normal and typical ways to examine a problem. I was hired as an SRE at Google; I was expected to be merely decent at writing code, but even better at debugging strange problems and patiently fixing services. There were several interview segments where I discussed high-level design problems and didn’t write any code.
As another anecdote, I once failed an interview at Valve so horribly that they stopped the interview after half a day and sent me home.
Maybe some high salary or prestige companies can afford a high rate of false negatives
The cost of a false positive outweighs the cost of ten false negatives, arguably more so for the smaller place. Not filling a slot is… not filling a slot. It sucks, you keep trying until you find the right match. Filling a slot with the wrong person is a mess. You lose money, you throw the team off its stride, you disrupt the hiring pipeline… really, everyone suffers.
We don’t do algorithm puzzles, but we do give out a take-home coding task as an early screening stage. The tasks are fairly easy (but not ones you’ll find on the internet), well-defined, grounded in reality (usually they’re a toy version of something the team in question actually works with), there’s no time limit, and candidates are allowed to use whatever tools and languages they like, although we do state our preferences. The criteria we judge on are basically:
Did they submit something that builds and runs at all?
Does it do something close to what the instructions asked for?
Does the code have any kind of structure or organization?
Did they document any known bugs, shortcomings, edge cases, or places where they found the requirements unclear — or are they claiming it’s a perfect flawless gem?
Actual bugs aren’t held against the candidate (unless they’re many and egregious), they just form the basis for a discussion in the interview.
It’s rare that we even have to make much of a judgment call, because most people fail one of the first two points.
I think the algorithm white board tests exist mainly because the Supreme Court outlawed IQ testing for jobs and this is a proxy to get around it. If they could just test for IQ and do a pair programming session they would prefer that.
I was worried this comes off as harsh - this isn’t a knock against OP, I am not as good at these tests as I would like either and know people who are a lot better at them than I am. If you are a company trying to minimize false positives IQ is one of the axes that you care about (along with not being a jerk, persistence etc) and that you would like to test if you could.
…if this were true, interviewers would not fail people for correctly coding the brute force solution, or they’d more often provide the instructions for implementing an algorithm, which they never do.
Interesting, I wouldn’t have thought of that possible pitfall. I pretty much always try the brute-force solution first, and if anything consider it wrong to not do so! I first found the effectiveness of this in the old ACM International Collegiate Programming Contest. The questions were usually written with a clever algorithm as the “intended” solution, but I found you could often hit the stated time limits using a brute-force algorithm and a standard bag of optimization tricks (memoize, use a suitable data structure, maybe convert from memoization to dynamic programming if you need to squeeze out a little extra performance, etc.). And doing so was more likely to get a correct solution in fewer programmer-hours. Nowadays my heuristic is to try the simple brute-force solution first, the optimized brute-force solution second, and a clever algorithm only if this clearly doesn’t hit performance targets. Due to constant factors mattering, #3 isn’t always reached.
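Roughly, the progression looks like this; a sketch on an invented contest-style problem (counting ways to make change), where step two is literally one decorator away from step one:

```python
from functools import lru_cache

COINS = (1, 5, 10, 25)  # hypothetical problem input

def ways_brute(amount, i=0):
    """Step 1: plain brute-force recursion. Correct, but exponential."""
    if amount == 0:
        return 1
    if amount < 0 or i == len(COINS):
        return 0
    # Either use COINS[i] again, or move on to the next coin.
    return ways_brute(amount - COINS[i], i) + ways_brute(amount, i + 1)

@lru_cache(maxsize=None)
def ways_memo(amount, i=0):
    """Step 2: the same recursion, memoized. Now O(amount * len(COINS))."""
    if amount == 0:
        return 1
    if amount < 0 or i == len(COINS):
        return 0
    return ways_memo(amount - COINS[i], i) + ways_memo(amount, i + 1)

assert ways_brute(12) == ways_memo(12) == 4  # 12x1, 7x1+5, 2x1+2x5, 2x1+10
```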
This is pretty interesting to me for a number of reasons.
I do well in phone screens, and I don’t use a very customized development setup in real life. I do phone screen type tasks in Python, which is ideally suited to the types of problems posed in these interviews. There is no evidence that my programming in real life is especially fast - one part of this that’s interesting to me is the suggestion one can become faster.
The other part of this is what is a phone screen for, and for whom? If you’re designing hiring at FAANG you expect at a first approximation that your recruiters are going to try to approach every programmer in the world to see if they want to interview at your megacorp. These corps definitely don’t want to hire anyone who sucks, so for them a low pass filter that false-rejects a small proportion of very good engineers is probably a good idea.
For a much smaller company, the capacity to source potential candidates and interview them is a limiting factor. Nevertheless one reason many candidates prefer phone screens to any of the alternatives is that the screen itself is a minimal time commitment, and it’s a chance to actually meet an engineer at the company.
I think the question for @kb and others in his boat is: to what extent does it behove you to figure out how to be good at phone screens? Would that be a better use of your time than failing phone screens and blogging about how phone screens suck? I don’t have an answer and I think it’s going to vary from person to person. Certainly doing the average thing is likely to produce average results, which is certainly not a goal I would adopt.
“…
Objectively the screeners are making a good decision to fail me; I probably look like I have no idea what I’m doing and I’m only sometimes able to get a good result by the end of the interview. But this is a bit like declaring someone a bad sous chef after asking them to bake a rare pastry without a recipe. I am more likely to get an offer from an onsite interview than I am to pass an initial phone screen, which seems backwards.
…”
They are failing you because they have no idea how to hire talent efficiently.
Here is an analogy: a company claims that they are looking for writers who can write small books, maybe even multi-volume books, meant to capture the imagination of consumers and offer something thoughtful, yet original.
But during an interview they present applicants with a ‘Jeopardy-like’ test, and filter out applicants who do not pass it within the allotted time frame.
Writing books, and doing Jeopardy – both use words, right?
Another analogy that comes to mind: there is a difference between a multi-instrument symphony composer and a rap artist.
Both are music, both are art, right?
But different skills, experiences and cognitive gifts are needed there….
I hope the above analogies demonstrate why the interview processes you are referring to are simply incompetent.
They are not making a good decision when they are filtering you out.
And certainly (in my humble opinion) – you should not be developing tools that would make it easier for these companies to continue this travesty.. (maybe I will come up with a better name for this in the future :-) ).
Instead, maybe, you could think up a system that helps discover talent with different levels of experience in building things. People who can ‘map out a journey’, ‘anticipate obstacles’, ‘pick up good practices’ and think up ‘differentiating features’ of that journey – so that at the end, the industry is producing competent alternative products and solutions.
You will have a lot of customers for that SaaS :-)
I think your interest in consulting is also a right interest, if you want to take that path.
You do not need to write out an ‘array merge’ or ‘find closest neighbor’ algorithm in 10 minutes on a phone call to earn your pay. You are more talented, more competent and more experienced than being insulted by those ‘interview’ filters.
And certainly (in my humble opinion) – you should not be developing tools that would make it easier for these companies to continue this travesty.. (maybe I will come up with a better name for this in the future :-) ).
AGPL?
Your comment reminds me of the Homebrew guy’s misadventure at Google. He sure was smart enough to design a tool they use, yet “not smart enough” for Google to invest in him. In my humble opinion he deserved a small fraction of the time they saved by using his tool.
It takes bravery to admit that you’re not very good at these sorts of interviews. I’m not very good at them either. I have a friend who passed a FAANG interview by studying from Cracking the Coding Interview by Gayle Laakmann McDowell. It’s like SAT prep. It has its own industrial complex. For folks like us, the choice is to either swallow our pride and cram useless info or just say, “Not for me, thanks.”
At best, absurd challenges are just elaborate secret handshakes. At worst, they’re like regurgitating awkward, incomplete ports of Donald Knuth in a vacuum and passing them off as our own work while the interviewer justifies the exercise with, “It shows me how you think.” How is it a good idea to start a relationship at the Bridge of Death scene? The candidate either answers correctly or is cast into the void.
I’ve mentioned it elsewhere here, but I’ve found a very effective way to interview. I have the candidate walk us through some code they’d written prior to the interview. If they don’t have anything they can share either because the entirety of their experience is closed source or because it’s a very junior position, I give them a take-home exercise. Open ended questions like, “How is this structured?”, “What did you learn?”, and, “What would you change?” give a great sense of the applicants level of skill, maturity, style, versatility, and values. The only time this doesn’t work is when candidates assume that I am looking for The One Right Answer to my open ended question and freeze like deer in the headlights while they try to decipher what it could possibly be. If I’ve started the interview well, candidates are relaxed and understand that we’re just people trying to get a sense of what it’d be like to work together.
I hear this talking point, but imo it misses their true function.
(I should caveat the following by saying I think these kind of problems are far from perfect, will definitely lead to false negatives, and do not represent day-to-day programming work.)
But what they do accomplish is a proof that:
If you are more worried about false positive hires than false negative ones (a nice luxury if your company is popular among developers), this makes a lot of sense. It is not a sufficient predictor, but it will certainly filter out a lot of bad candidates along with some good ones. Of course, it will not tell you if the person works well on a team, if they are hardworking, or if they have other qualities you might want to hire for. But that’s not why they’re being used.
There are a million ways to do this which are more realistic than typical tech interviews. And the problems chosen for typical tech interviews are almost always wildly inappropriate for the actual job – many revolve around problems that took some of the most brilliant people in theoretical CS years to solve, and expect that the baseline for “coding skills” now is to be able to derive the same thing, on command, in 20 minutes. Which means it’s really a test of “did you memorize the answer in advance”, not a test of “basic coding skills”.
And, really, that’s a large part of the point: they’re openly and unashamedly tests of “have you recently graduated from a university which taught coding interviews as a final-year course” or “have you recently read Cracking the Coding Interview or similar”. Neither of which are particularly useful indicators of “coding skills”.
Well, no. It just proves the person can “study the test” and give you a convincing patter as they pretend to “solve” the problem you posed. In effect you are testing their ability to lie to you about their capabilities. If you think that’s a good thing, or that this is an important job skill in need of routine exercising, I’d hate to work with you or in the kinds of places where you feel comfortable.
Meanwhile, actual real-world programming basically never involves deriving the correct one-off algorithm from first principles, and a co-worker who insisted on trying to do so would, I hope, either manage to change quickly, or else be fired quickly as unsuitable and unproductive.
This is simply not true, unless you are passing people that fail to solve the problem but have “convincing patter”. The test is: you solve the problems or fail.
So “studying for the test” successfully establishes a minimum bar for the candidate’s programming intelligence. As I said, this is not trivial. You need to learn graph algorithms, dynamic programming, sorting, queues, etc, etc, and then have internalized everything well enough that you can solve 1 or 2 random problems from any category quickly and correctly.
I’m not defending this as the ideal interview style, but claims like “it’s meaningless,” “it’s merely a secret handshake,” and so on, are clearly not true. It provides a strong filter with real signal, so long as you don’t mind rejecting a decent number of good candidates. That may feel unfair when you are good candidate being rejected, but it’s a perfectly logical business strategy for in-demand companies, and, if administered rigorously, is actually harder to game than the alternatives most people propose, which strongly depend on subjective judgments of the interviewers. Note: The existence of coding interview prep courses do nothing to mitigate this point, because you need to have raw ability and practice a lot to “game” the system. Not everyone who tries can do it.
I never claimed this (in fact I went out of my way to say I don’t believe this), and I don’t think the companies doing these interview think this. It’s a straw man argument. The actual claim is: A person capable of learning and fluently applying these algorithms will probably be able to do the kind of work we actually do day-to-day, which is generally less demanding. The critics say: “Well why don’t you just test ‘the day to day’ directly?” And indeed that would be best, but this is harder and more time-consuming to do. The real test would be: Just work with them for a month. But this is rarely feasible.
It is much more, unless you get the exact same problem you studied, which you won’t. Because “memorization” here means understanding a basic kind of solution, recognizing when it applies to a problem, and then correctly applying it. To solve a randomly chosen dynamic programming problem that you’ve never seen before, even if it’s a variation on one you’ve solved, means you understand dynamic programming. This is not comparable to “memorizing” a bunch of dates for a history exam, for example.
The “secret handshake” stuff is like… for example, you ask someone “how do you figure out if there’s a loop in your linked list in constant memory”. Most of my coworkers and myself know this, but we know it because we were told it at one point in the past.
Many of the tiny self contained things have these baked answers and interviewers expect it. It’s not that you can’t come up with it from first principles but hell it took 10 years for researchers to land on an actual bug-free implementation of quicksort…
As to gaming fears…. spending 45 minutes on a “real looking” problem in your codebase sounds way more valuable than doing the binary tree balancing act. Hard to game the thing when the task is “do work like what your real work is”.
(full disclosure: I do think that most people should still know how to balance a binary tree, just feels like a bit of a silly thing to ask during an interview).
To be clear, I (personally) would never ask how to balance a binary tree in an interview. It’s too specific and even if you’re good you likely won’t remember unless you’ve looked at it recently. Nevertheless, correctly doing it quickly is a positive signal, especially if, say, the same person can also solve 1 or 2 more problems from different categories.
For screening basic coding skills, I prefer problems that are basically “a harder FizzBuzz” – not literally, but in the sense that they require no specific algorithm knowledge, yet still verify your ability to put together a solution to a not-totally-trivial problem in a reasonable amount of time.
We’ve done this kind of thing a lot, as well as the “walk us through a large-ish project you’ve built,” in a group interview setting. I do think these tell you something, but I also know there is a lot of subjectivity involved in the interpretation of results, and a lot of variation in what conclusions the interviewers draw. Also, people will ask “leading” questions, and it’s easy to consciously or unconsciously skew the results. Having a pre-defined rubric of questions or points you expect the candidate to comment on can help.
No, it successfully establishes a minimum bar for studying the test. It tells you nothing useful whatsoever about the candidate’s “coding skills”, because the “coding problems” being posed so often are ones that flat-out cannot be “solved” from scratch in the allotted time unless the candidate already memorized a complete answer in advance.
And that’s without getting into problems with the interviewer. One of the smarter people I know once interviewed at a household-name tech company and was given a problem to solve that involved working with graphs. It was a problem that turned out to have two standard solutions (depending on what tradeoffs you want to make). The candidate produced one of them. The interviewer only knew the other, and auto-flunked the candidate for not producing it.
That is not the only story of that type I could tell. It is not the only story of that type out there. Google even somewhat-infamously at one point had non-technical people doing some of the early phone screens, where pass/fail depended on coming up with the answer on the interviewer’s answer key. If you deviated too much in your phrasing or explanation you could fail because the interviewer literally did not have the ability to understand what you were saying.
So, again, these processes offer no useful signal.
I have designed and run such processes. I know for a fact that at least one giant tech company (Netflix) has used such a process. The fact that other places don’t do it has nothing to do with it being too much effort (if anything, most tech companies spend much more effort running their badly-broken processes than they would running good ones).
Let’s do a thought experiment. You are forced to hire one of two candidates, and you won’t get to interview them or meet them. You are told one has passed the google algorithmic interview process, and one has failed. You know nothing else about them. If you believe your claim, then you would just flip a coin to decide, because the information is meaningless. Is that honestly what you would do?
Having been through the Google algorithmic interview process, I absolutely and completely honestly would flip a coin in that situation.
Well, there’s a correlation between being good at “competitive programming” and doing well on Google interviews. And according to Google there’s also a negative correlation between competitive programming and doing well on the job. So the best option, if forced to pick one, might actually be to hire the one who failed Google’s interview.
(in other words, yes really, I really am really saying for real, really, that I really do not think the Google style of interview provides useful positive signal, really)
But what do you honestly think? (jk)
Anyway, it’s a bold claim. If you’re right, it means that one of the world’s largest employers of software engineers – who’s been doing this for 20+ years and tracking and analyzing results – is not only using a suboptimal interview method, but is using one that is effectively picking engineers at random. Or worse – I’m curious where you got the competitive programming stats?
So, do you believe google engineers are basically a random sample of the market, in terms of skill and talent? It seems your position commits you to that, unless you think the non-algo parts of their interview process are somehow making up the difference and still selecting better than average engineers? The conclusion seems highly implausible to me, even if I ignore my own instincts about it and just take a econ/incentives perspective on the situation.
I’ve worked with plenty of people who’ve been at Google. My impression is that… there’s nothing super-special about them, they’re the same as everyone else. Some are quite good, some are not, some are average… there’s really nothing that stands out.
And the stories they tell about working there are all pretty similar, and are similar to the stories you hear from just about anyone who works or worked there: that all the algorithm-challenge stuff is useless on the job, because the average Google programmer’s work looks like the average non-Google programmer’s work: taking specs, turning them into software using standardized tools and libraries applied in standardized patterns.
If this seems “implausible” to you, I don’t know what to tell you. My own large-tech-org experience (Mozilla) was the same way. People I know who’ve worked at other big tech companies all have similar stories. There’s just nothing magic about the big places.
In Google’s case, they don’t have some sort of secret massive rockstar guru ninja wizard 10000x advantage in programmers or in building new software – in fact, arguably the opposite, since for a long time now it’s been rare that anything impressive and novel came from within Google (rather than things developed externally and acquired by Google later on). Their success as a company is entirely due to being the biggest player in online ads, which seems to have more to do with… let’s call it “willingness to make use of their pre-existing position in the market”… as with any outright technical superiority. I don’t doubt that their marketing and business folks are some of the most ruthless and fierce in the whole industry, but we’re not talking about the interviews for those positions.
You misunderstood what I said. I don’t think google devs are magical. Please feel free to read back over what I’ve written and see if you can find any of my actual words that imply that. It’s not implausible at all to me that they’d be similar to Mozilla devs, or the devs at my company, etc. What is very implausible is that they are random sample of the applicants, which is your claim.
And your informal description “some are quite good, some are not, some are average” does not sound like a random sample of applicants. From my experience interviewing, even with HR phone screens, I’d say that between a third to a half of applicants are very underqualified…. they are clear, unambiguous “no”s. If our hiring process were random, the competence of our team would be very different from what it is now, and I would not describe our hiring bar is extremely high.
You seem to have a number of axes to grind about google that I have no opinion about, or don’t disagree with, and none of those axes have anything to do with the claim I am debating, and everything you say only makes me more confident of the claim: which is that those algo interviews most certainly do provide a real signal, even if it is imperfect and no better than other methods of interviewing.
TLDR: It sounds to me like your actual claim is something like “the google algo interview process is no better, and maybe even worse, than other types of interview processes that are way less silly and annoying.” That is a claim I’m be open to believing is true, but it is a much weaker claim than that Google has been randomly hiring engineers for 20 years.
If you’re designing any kind of admissions test then you generally fit one of two categories: selection or deselection. A deselection test is intended to filter people out. If you’re aiming to get people in the top 10% of applicants then you want a test with high discrimination around 50% so that you can be confident that there is a high probability that anyone failing is in the bottom 50% and anyone passing is in the top 50%, but any signal beyond that can be very weak. For a selection test, you want very high discrimination around your cut-off boundary. If it’s scoring 0-10 then you want people who are in the bottom 80% to get around 3-4, people at the top to get close to 10, and any score from 4-10 to correlate strongly with their position on the list.
Phone screens are deselection tests, often actually aiming to get rid of the bottom 70% or so (this varies with the job - for entry-level positions, it may be far lower). The problem is often construct validity: Is the test actually measuring something that correlates with the desired outcome? If you can’t define what it is that you actually want from a person in a particular role, then your chance of being able to design a test that accurately filters at any level is very low. I think this is what the article is trying to say: the phone screens that he’s encountered may have high discrimination at the desired interval for a particular set of skills but those are not the ones that the company actually wants from a developer.
Unfortunately, the same problem applies once you’re past the phone screen. Here, people often give up on trying to design a proper assessment and instead use the ‘I know what a good developer looks like’ metric. This is a really great way of maximising implicit bias in the process.
I’ve used that technique and I really think it’s the best.
Others have mentioned this, but I want to underscore that the kinds of companies that have these interview processes would rather not hire someone than hire them. People always say “your process is bad because it will miss competent people” but they would rather miss some competent people than get some people they don’t want. Especially in big tech testing for “are you compliant, will you jump through our hoops” is almost as important as any test for technical skills.
Two problems with this:
Curiously, this is not the universal. Google and FB, yes. But a while back I did Netflix’s interview (and got invited back after the technical portion) and it was remarkably straightforward and centered around things that would actually be relevant on the job and in the specific team I was talking to. None of the “dance for me, monkey!” stuff you hear about from the other places. Now if only people would rush to copy that process instead…
Google can afford to hire a few unsuitable people; in a small company this is much more damaging, especially when hiring for a senior role. Assuming you find out about it quickly, it’ll take a month to restart the recruitment process, and major projects will get delayed in the meantime.
Google makes it extremely hard to fire people because they believe that security is a key factor in job performance. The only reason they can do that is because the hiring and promotion process is so focused on avoiding false positives. There is a coherent philosophy behind the strategy.
Doesn’t it? I’d posit that it might not be the right process for some companies per point one, and one might object to it on other grounds, but I think for some situations this might be exactly what is intended.
If you assume there are some elements of individual bias and randomness in the process of selecting hires, then you should accept that some portion of your current employees may have been “wrong” decisions according to your metrics but squeaked by because of those confounders. (Note I’m not judging their actual performance on the job, simply that they may have gotten through your process despite not clearly meeting the bar you intend to set.) Given that, you should expect that re-evaluation by those same metrics might make a different decision for some current employees. If that were not the case, you’d actually be iteratively lowering your standards over generations of this, and in a company with aggressive hiring that can compound quickly. This is part of the thought behind the aphorism “A’s hire other A’s, while B’s hire C’s.”
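As a toy illustration of that compounding, here is a small simulation sketch. Every parameter (the candidate pool, the measurement noise, the “about as good as me” slack) is invented, so treat it as a cartoon of the dynamic rather than a model of any real pipeline: noisy interviews let some below-bar candidates through, and if each generation of interviewers then calibrates the bar against themselves, the effective standard ratchets downward.

```python
import random

# Cartoon model; all numbers are made up for illustration.
POOL_MEAN, POOL_SD = 0.5, 0.15  # candidate "true skill" distribution
NOISE = 0.15                    # interview measurement error
SLACK = 0.05                    # interviewers accept "about as good as me"

def hire_generation(bar, n=5000):
    """Hire every candidate whose *measured* skill clears the bar."""
    hires = []
    for _ in range(n):
        true_skill = random.gauss(POOL_MEAN, POOL_SD)
        measured = true_skill + random.gauss(0, NOISE)
        if measured >= bar:
            hires.append(true_skill)
    return hires

bar = 0.80  # the founders' intended bar
for gen in range(1, 7):
    hires = hire_generation(bar)
    avg = sum(hires) / len(hires)
    # The hires become the next interviewers, and hold candidates to
    # roughly their own level (which noise has dragged below the bar).
    bar = avg - SLACK
    print(f"generation {gen}: avg hire skill {avg:.2f}, next bar {bar:.2f}")
```

The exact numbers are meaningless; the point is only that the printed bar falls generation after generation and settles well below the intended 0.80 instead of holding there.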
If it were perhaps one or two, maybe. But rejecting all of their own packets for not being up to par? That is, again, not a sign of a well-calibrated and standardized process.
And if it is correct in retrospect to say all of them should have been no-hires, then what you’re effectively saying is that Google’s much-vaunted and much-copied hiring process is effectively worthless crap. There’s no interpretation in which the process comes out looking good here.
People who think like this are, to put it bluntly, a gigantic part of the problem.
Were they hiring for the same level as themselves or more junior? When you’re hiring at your own level, the recommendation is to try to hire people who are better than you (on the basis that there’s some margin for error, and if each successive round lowers the bar then pretty soon you get useless people). If that’s the case here, I’d expect the folks to all reject their own packets, because those packets show a less experienced version of themselves.
Yes? I mean, no one really seems to be claiming to have figured out hiring. At best you hope that you have a system that gets you a sufficient number of successful new hires without falling over completely, consuming all staff time in interviewing, letting in too many bozos, or getting yourself sued.
Of which problem?
Agreed. I have a little over ten years of industry experience now too, and have become increasingly convinced that this is a major factor in the design of these kinds of interviews, whether the interviewers know it or not.
If that’s true, then they are going about this all wrong. Most of the comments here seem to be focussing on this same idea: that the purpose of these tests is to weed out the bad candidates, even if they also exclude some good candidates as a kind of collateral damage. But in my experience these types of interview do not weed out bad candidates at all. I have worked with people who aced this kind of interview but were terrible software engineers.
I have not seen any evidence that these kinds of interviews either select for good candidates or deselect bad candidates.
Yes, I explicitly did not say they were meant to test ability to get shit done. Based on my time at Google, I’m not sure anyone there gets shit done 😂
What I said was this:
Fair enough! You are probably right: big companies selecting for compliance is depressingly plausible.
This sounds like a biased sample set. Did you work with any folks who failed the tests? I’ve seen some spectacularly bad professional programmers who couldn’t implement the kind of thing that I used to set as exercises for 11-year-olds learning to program for the first time, and definitely couldn’t manage anything from an undergraduate-level data structures course. This is what you’re trying to weed out with these kinds of test. The folks I’ve seen who have passed them tend to have the kinds of problems that come from simple inexperience: they may never have shipped something with high security or availability requirements, or with a very large installed base and long-term support requirements.
That said, I think the right way of designing this kind of thing is to first identify the set of skills that you think your team is competent to teach. Nothing on this list goes in the interview. Next, identify the skills that your team definitely isn’t competent to teach (or doesn’t have the time to teach) and which you need to assume everyone can do. Start designing a test to measure these things. A lot of the tests of this kind that I’ve seen have been measuring things that someone on the team is an expert in and can easily teach to a new hire, rather than things that a new hire would need to know.
Well, yes. It’s my personal experience so obviously it’s a biased sample!
However, I still think it serves to show that these kinds of interview tests are ineffective at filtering out poor software engineers. Perhaps they do filter out the completely incompetent, i.e. people who really can’t program at all, but so does every other form of programming test.
I’m convinced that this kind of phone screen is used as a way for FAANG companies to weed out people who aren’t willing to spend tens of hours studying Cracking the Coding Interview. Maybe they’ve found some sort of correlation between people dedicated to getting into a “prestige” company and how long they stay at a job, which helps reduce the costs of hiring and retraining, or the morale hits from people leaving more often. And because big companies do it, other companies cargo-cult the idea even though the original rationale doesn’t apply to them.
That’s just speculation, of course, but I’m convinced it’s something like that. There is a time and a place for whiteboard interviews, but trying to move that step to a phone screen makes it worse than worthless.
I’ve recently taken over most of designing the hiring process at my current job, and my top priorities are:
Using Kevin as an example: if he applied for a position, it would be clear from his GitHub and blog that he was competent and motivated. There would be no reason to ask algorithm or whiteboard questions; an interview would be about seeing whether we have aligned goals and, if we do, trying to convince him that he should choose us over any other offers he has.
Maybe some high-salary or prestige companies can afford a high rate of false negatives, but algorithm-puzzle phone screens and 5-hour take-home exercises for every candidate, no matter what, are a good way to fall behind on hiring goals and let great candidates slip through your fingers.
It’s only an anecdote, but I did not study for my Google interviews and got offered a job anyway. The technical questions which involved writing code were given in front of a whiteboard; I did not even have the luxury of Coderpad. And I had not taken the opportunity seriously, other than participating; I did not prepare much because I did not think I would get an offer.
I mostly recall your comments on lobste.rs revolving around topics in higher mathematics and FP which, in my opinion, go well beyond the knowledge of an average “coder.” Even though you didn’t prepare for the interview itself, your above-average cleverness might have been picked up by an interviewer? For people closer to the middle of the Gaussian curve, it might require some additional effort. I don’t know :/
Perhaps. But I think that this is hero worship, or at least belief in meritocracy. Also, it implies that Google’s interviews are singularly difficult, which is definitely not the case. In terms of FP, they offered to interview me in Haskell but I chose Python instead since I felt stronger with it.
It is more likely that, due to an adverse childhood, I have internalized certain kinds of critical/lateral/upside-down thinking as normal and typical ways to examine a problem. I was hired as an SRE at Google; I was expected to be merely decent at writing code, but even better at debugging strange problems and patiently fixing services. There were several interview segments where I discussed high-level design problems and didn’t write any code.
As another anecdote, I once failed an interview at Valve so horribly that they stopped the interview after half a day and sent me home.
The cost of a false positive outweighs the cost of ten false negatives, arguably more so for the smaller place. Not filling a slot is… not filling a slot. It sucks; you keep trying until you find the right match. Filling a slot with the wrong person is a mess. You lose money, you throw the team off its stride, you disrupt the hiring pipeline… really, everyone suffers.
We don’t do algorithm puzzles, but we do give out a take-home coding task as an early screening stage. The tasks are fairly easy (but not ones you’ll find on the internet), well-defined, grounded in reality (usually they’re a toy version of something the team in question actually works with), there’s no time limit, and candidates are allowed to use whatever tools and languages they like, although we do state our preferences. The criteria we judge on are basically:
It’s rare that we even have to make much of a judgment call, because most people fail one of the first two points.
I think the algorithm whiteboard tests exist mainly because the Supreme Court outlawed IQ testing for jobs, and this is a proxy to get around that. If they could just test for IQ and do a pair-programming session, they would prefer that.
I was worried this would come off as harsh. It isn’t a knock against OP; I’m not as good at these tests as I’d like either, and I know people who are a lot better at them than I am. If you are a company trying to minimize false positives, IQ is one of the axes you care about (along with not being a jerk, persistence, etc.) and one you would like to test for if you could.
Interesting, I wouldn’t have thought of that possible pitfall. I pretty much always try the brute-force solution first, and if anything consider it wrong to not do so! I first found the effectiveness of this in the old ACM International Collegiate Programming Contest. The questions were usually written with a clever algorithm as the “intended” solution, but I found you could often hit the stated time limits using a brute-force algorithm and a standard bag of optimization tricks (memoize, use a suitable data structure, maybe convert from memoization to dynamic programming if you need to squeeze out a little extra performance, etc.). And doing so was more likely to get a correct solution in fewer programmer-hours. Nowadays my heuristic is to try the simple brute-force solution first, the optimized brute-force solution second, and a clever algorithm only if this clearly doesn’t hit performance targets. Due to constant factors mattering, #3 isn’t always reached.
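As a concrete sketch of that ladder, here is the same recurrence written three ways, on a toy problem of my own choosing (fewest coins to make an amount; the coin set and the numbers are invented, not from the comment above):

```python
from functools import lru_cache

COINS = (1, 4, 5)

# 1. Plain brute force: exponential, but fastest to write and easy to
#    sanity-check on small inputs.
def min_coins_brute(amount):
    if amount == 0:
        return 0
    return 1 + min(min_coins_brute(amount - c) for c in COINS if c <= amount)

# 2. The same function, memoized: one decorator turns it into
#    O(amount * len(COINS)).
@lru_cache(maxsize=None)
def min_coins_memo(amount):
    if amount == 0:
        return 0
    return 1 + min(min_coins_memo(amount - c) for c in COINS if c <= amount)

# 3. Bottom-up dynamic programming: same recurrence, no recursion
#    overhead; worth writing only if the memoized version is too slow.
def min_coins_dp(amount):
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in COINS if c <= a)
    return best[amount]

# 13 = 4 + 4 + 5, so all three should agree on 3 coins.
assert min_coins_brute(13) == min_coins_memo(13) == min_coins_dp(13) == 3
```

The greedy answer here (5 + 5 + 1 + 1 + 1, five coins) is wrong, which is why the recurrence earns its keep; in a contest setting, step 2 is usually where you can stop.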
This is pretty interesting to me for a number of reasons.
I do well in phone screens, and I don’t use a very customized development setup in real life. I do phone-screen-type tasks in Python, which is ideally suited to the kinds of problems posed in these interviews. There is no evidence that my programming in real life is especially fast; one part of this that’s interesting to me is the suggestion that one can become faster.
The other part of this is: what is a phone screen for, and for whom? If you’re designing hiring at a FAANG, you expect, to a first approximation, that your recruiters will approach every programmer in the world to see if they want to interview at your megacorp. These corps definitely don’t want to hire anyone who sucks, so for them a low-pass filter that false-rejects a small proportion of very good engineers is probably a good idea.
For a much smaller company, the capacity to source potential candidates and interview them is a limiting factor. Nevertheless one reason many candidates prefer phone screens to any of the alternatives is that the screen itself is a minimal time commitment, and it’s a chance to actually meet an engineer at the company.
I think the question for @kb and others in his boat is: to what extent does it behove you to figure out how to be good at phone screens? Would that be a better use of your time than failing phone screens and blogging about how phone screens suck? I don’t have an answer and I think it’s going to vary from person to person. Certainly doing the average thing is likely to produce average results, which is certainly not a goal I would adopt.
They are failing you because they have no idea how to hire talent efficiently.
Here is an analogy: a company claims they are looking for writers who can write short books, maybe even multi-volume works, meant to capture readers’ imaginations and offer something thoughtful yet original. But during the interview they give applicants a Jeopardy-style quiz, and filter out anyone who can’t pass it within the allotted time. Writing books and playing Jeopardy both use words, right?
Another analogy that comes to mind: there is a difference between a multi-instrument symphony composer and a rap artist. Both are music, both are art, right? But different skills, experiences, and cognitive gifts are needed for each…
I hope the above analogies demonstrate why the interview processes you are referring to are simply incompetent.
They are not making a good decision when they are filtering you out.
And certainly (in my humble opinion) you should not be developing tools that would make it easier for these companies to continue this travesty… (maybe I will come up with a better name for it in the future :-) ).
Instead, maybe you could think up a system that helps discover talent with different levels of experience in building things: people who can ‘map out a journey’, ‘anticipate obstacles’, ‘pick up good practices’, and think up ‘differentiating features’ along the way, so that in the end the industry produces competent alternatives to existing products and solutions. You would have a lot of customers for that SaaS :-)
I think your interest in consulting is also the right instinct, if you want to take that path.
You do not need to write out an ‘array merge’ or ‘find closest neighbor’ algorithm in 10 minutes on a phone call to earn your pay. You are too talented, too competent, and too experienced to be insulted by those ‘interview’ filters.
AGPL? Your comment reminds me of the Homebrew guy’s misadventure at Google. He was evidently smart enough to design a tool they use, yet “not smart enough” for Google to invest in him. In my humble opinion, he deserved a small fraction of the time they saved by using his tool.
A very truncated version of my comment:
Phone screens are only superficially about code, which is where you’re focusing. The human factor always gets in the way.
You’re trying to be a good coder. Try to be a charming coder.
Yes, the system is entirely bonkers.