1. 8
  2. 17

    This is everything that’s wrong with programmer hiring practices.

    1. 10

      “For entry-level roles I give bonus points if there’s some sort of testing, but more experienced roles I penalize candidates who don’t at least list relevant test cases.”

      No test cases for your whiteboard code? SURPRISE, GOTCHA! What’s next? “I docked points for interviewees who did not also provide an autotools configure.in to build their whiteboard code.”

      1. 1

        This is an unfair comparison; knowing how to write good tests is nowhere near the same in importance as reciting build rules. Ideally you should be submitting tests alongside code in every commit. It’s a critical piece of SWE knowledge.

        1. 7

          Ideally you should be submitting tests alongside code in every commit. It’s a critical piece of SWE knowledge.

          This right here is religion.

          And again, that someone doesn’t write a test for their whiteboard doodle doesn’t mean they don’t know how to write good tests. That’s the SURPRISE, GOTCHA! The rules of the game are quite arbitrary.

      2. 1

        I couldn’t agree more. Thanks for sharing your thoughts.

        1. 1

          But I LIKE these puzzles. I got fixated on the fact that I can’t dial 5 this way.

          1. 5

            Oh, me too. But I’m wildly fed up with clever-clever programmers who think that their cute way to encode a dynamic programming problem counts as a valid hiring filter.

            Extra points are awarded for problems which turn out to have solutions that vastly outperform the dynamic programming one, especially if the problem only happens to be amenable to dynamic programming due to some special features that you’d never see in a real world example.

            (The other favourite appears to be ‘let’s see if the candidate can spot the graph problem I’ve just described’.)
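
            [Ed.: for readers unfamiliar with the puzzle alluded to above, here is a sketch of one common form of it, assuming it is the count-the-numbers-a-chess-knight-can-dial problem — the digit 5 has no incoming knight moves, which is why you “can’t dial 5” past the first digit.]

            ```python
            # Knight moves on a standard phone keypad: from each digit, the
            # digits a chess knight could jump to. 5 has no moves, so it can
            # start a sequence but never appear later in one.
            NEIGHBORS = {
                0: (4, 6), 1: (6, 8), 2: (7, 9), 3: (4, 8), 4: (0, 3, 9),
                5: (), 6: (0, 1, 7), 7: (2, 6), 8: (1, 3), 9: (2, 4),
            }

            def count_sequences(length):
                # ways[d] = number of sequences of the current length ending on d
                ways = [1] * 10
                for _ in range(length - 1):
                    ways = [sum(ways[src] for src in NEIGHBORS[d]) for d in range(10)]
                return sum(ways)
            ```

            (Knight moves are symmetric, so NEIGHBORS doubles as both the successor and predecessor table.)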

            1. 1

              Oh, totally :)

              In the past, we’ve used actual problems from our research as a joint whiteboard brainstorming session and used that as an excuse to figure out how the candidate works. It’s possible it unfairly filters out candidates who are more comfortable taking a few days to think about a problem and aren’t so quick verbally.

              We had a different strategy where we would send out a do-at-your-leisure coding test, which worked out better for such types.

              I don’t think we ever synthesized the two tests meaningfully.

        2. 6

          Google has multiple algo/data-structure interviews over one day because

          1. they test knowledge of basic computer science vs trivia or current hotness (since employees are expected to learn new tech and switch teams occasionally, knowing the basics is what counts)
          2. they test problem-solving ability (interviewers can still pass you if you solve a problem systematically but don’t get the best answer)
          3. such interviews can be performed by thousands of interviewers a year (the process scales)
          4. once created, the questions can be reused (no need to find a small project in your team for every candidate, etc.)
          5. they allow some signal to be generated in 45 minutes vs a longer time / multiple days
          6. they allow many interviews to be conducted in one day (more interviews reduce variance based on a particular problem or interviewer)
          7. they create a standardized system that can be replicated across offices, teams, locations, etc.
          8. the candidate just needs to take one day off to go through the process (vs a multi-day test project, etc.)

          Each interviewer writes up what happened during the interview. The packet is analyzed by a committee. This reduces bias and leaves the decision up to an experienced group of interviewers/employees. This process also means that the hiring bar for the whole company is relatively even. (Versus a manager hiring for their team.)

          There is a strong element of standardization and indirection to help reduce bias and produce more consistent results. (Debiasing is so important that many interviewers, including me, will use “TC” or “the candidate” or “they” to refer to the candidate instead of their gender. Every little bit helps.)

          There are many problems with this system, but it does have its advantages. The system probably won’t make sense for other companies in vastly different circumstances. In fact, because these companies do it this way, your company has the opportunity to pick up good people who don’t pass this gauntlet. (Why compete with Google at their own game?)

          1. 3

            It turns out this problem has one more solution… I didn’t even know it existed until one of my colleagues came back to his desk with a shocked look on his face and announced he had just interviewed the best candidate he’d ever seen.

            It doesn’t reflect well on Google for their interviewers to be ignorant of the optimal solution to their own problem.

            1. 2

              I completely disagree. I think that’s absolutely fine. Nobody in the world knows the optimal solution to any of the real world problems Google faces. Working in software is not about finding the optimal solution to problems, and it’s certainly not about suggesting solutions until someone goes ‘yes that’s optimal’ then stopping. What’s wrong with the interviewer not knowing the optimal solution? Does that take away from the process in any way?

              1. 1

                Effective software development interviews use toy problems with known answers rather than research problems with unknown answers. This is because a candidate’s answers to the latter do not provide a reliable indicator of the candidate’s ability to develop software.

                1. 1

                  I completely and utterly disagree. I think the ability to solve toy problems with known answers has nothing at all to do with someone’s competency as a developer.

                  1. 1

                    Say a candidate is unable to solve FizzBuzz. Does that give you any indication of whether or not they can program?

                    1. 1

                      If someone can’t solve FizzBuzz they can’t solve real problems either.

                      1. 1

                        How does that not contradict what you wrote earlier (about the ability to solve toy problems having nothing to do with competence)?

                        1. 1

                          Because that was obviously intended to be interpreted as ‘the ability to solve toy problems does not demonstrate competence’

                    2. 1

                      Replying here, because below you said you meant to say this:

                      the ability to solve toy problems does not demonstrate competence

                      Filtering out incompetent applicants is valuable.

                      1. 1

                        I couldn’t give two shits about whether people use toy problems.

                        My response was to:

                        Effective software development interviews use toy problems with known answers rather than research problems with unknown answers. This is because a candidate’s answers to the latter do not provide a reliable indicator of the candidate’s ability to develop software.

                        That is what I disagree with.

                        1. 1

                          You have not explained how a candidate’s answer to a research problem provides a reliable indicator of their ability to develop software.

                          1. 1

                            Yes I have.

              2. 3

                My maths is getting rusty, but I feel this is a combinatorics problem that does not require execution on a computer. The whole premise is wrong then.

                1. 2

                  You can reduce it to a matrix exponentiation problem, which is O(log N) in N, the length of the generated number. I don’t believe there’s a direct algorithm you could carry out in reasonable time with pen & paper.
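
                  [Ed.: a sketch of that reduction, assuming the puzzle is counting length-N knight-move dial sequences on a keypad: raise the keypad’s knight-move adjacency matrix to the (N−1)th power by repeated squaring, which takes O(log N) matrix multiplications rather than N−1 DP steps.]

                  ```python
                  # Knight moves on a phone keypad (assumed form of the problem).
                  NEIGHBORS = {
                      0: (4, 6), 1: (6, 8), 2: (7, 9), 3: (4, 8), 4: (0, 3, 9),
                      5: (), 6: (0, 1, 7), 7: (2, 6), 8: (1, 3), 9: (2, 4),
                  }
                  # Adjacency matrix: A[i][j] = 1 if a knight can jump from i to j.
                  A = [[1 if j in NEIGHBORS[i] else 0 for j in range(10)] for i in range(10)]

                  def mat_mul(x, y):
                      return [[sum(x[i][k] * y[k][j] for k in range(10)) for j in range(10)]
                              for i in range(10)]

                  def mat_pow(m, p):
                      # Exponentiation by squaring: O(log p) multiplications.
                      r = [[int(i == j) for j in range(10)] for i in range(10)]  # identity
                      while p:
                          if p & 1:
                              r = mat_mul(r, m)
                          m = mat_mul(m, m)
                          p >>= 1
                      return r

                  def count_sequences(length):
                      # Entry (i, j) of A**(length-1) counts sequences of that length
                      # starting at i and ending at j; sum all entries for the total.
                      return sum(sum(row) for row in mat_pow(A, length - 1))
                  ```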

                  1. 1

                    I’ll take your word for it, as I’m really on thin ice here. It did look similar to the knight’s problem though, which IIRC does have a formula.

                2. 2

                  now implement it on a rotary phone