1. 11

  2. 6

    Triplebyte confuses me.

    I’ve taken several of these whiteboarding-as-a-service hiring tests (Hacker Rank, Triplebyte, Hired, etc.), and Triplebyte’s is the only one I failed. From their writing I would have assumed they’d be the one most compatible with how I think hiring should go (I found Hacker Rank to be especially CS/algorithms-heavy), but their test was rife with confusing questions that went as follows:

    You need to deliver <some_product>. How do you go about implementing it?

    A) Solve the problem in an obviously incorrect way. The flaw may not be apparent if you haven’t solved a similar problem.

    B) Solve the problem correctly, but in a somewhat crude way that can serve as an MVP.

    C) Solve the problem correctly, in a way that requires a prolonged engineering effort but is the most robust solution.

    I can’t help but think that I chose B-style solutions where they were looking for C-style ones. There’s a lot of business context that can sway whether you’d want a B- or C-style solution, and it is ignored in this multiple-choice format.
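
    To make the B-versus-C distinction concrete, here’s a made-up task of my own (not one of their actual questions): count unique visitors per day from a CSV access log. A B-style answer might look like the sketch below; a C-style answer would add streaming, malformed-line handling, and persistent storage, which only pays off in some business contexts.

    ```python
    # Hypothetical illustration, not an actual Triplebyte question:
    # a crude but correct "B-style" MVP for counting unique visitors
    # per day from a "date,visitor_id" CSV log.
    from collections import defaultdict

    def unique_visitors_per_day(log_lines):
        """B-style: parse naively and keep everything in memory."""
        seen = defaultdict(set)  # date -> set of visitor IDs
        for line in log_lines:
            date, visitor = line.strip().split(",")[:2]
            seen[date].add(visitor)
        return {date: len(ids) for date, ids in seen.items()}

    print(unique_visitors_per_day(
        ["2020-01-01,alice", "2020-01-01,bob", "2020-01-02,alice"]))
    # -> {'2020-01-01': 2, '2020-01-02': 1}
    ```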

    1. 1

      There was some good stuff in here, but this paragraph stuck out to me because it touches on a debate I run into often:

      This is achieved by making interview questions as similar as possible to the job you want the candidate to do (or to the skill you’re trying to measure).

      I am against this method of interviewing for the following reasons:

      1. A candidate may be good but not have the skills, at the time of the interview, to solve the particular problems they will face at work. That’s fine if you determine they can learn the new skills, but giving them a work task they can’t solve because they lack that specific knowledge will, I believe, result in more false negatives.
      2. A candidate may be really good at problems like the ones you’re solving right now at work but incapable of expanding their expertise. What happens if your team changes focus? Or the company pivots? Or the team is simply no longer needed and its members have to join other teams doing different work?
      3. In the particular example given (and there are many like it), I think it is far too easy to fall into very uninteresting debates about the candidate that have nothing to do with whether they are qualified. What if they don’t build a proper RESTful API (whatever that means)? What if they use a framework you don’t like? These “real world” questions are often very hard to evaluate without debating how the candidate should have solved the problem if they were on your team. I’m sure some people are good at evaluating these, but I have been in many evaluation meetings that fell apart over nonsense around these questions.

      The way I am trying to get interviews done is more like a test in school. We give the candidate study material before the interview (in the general sense that we tell them which aspects of computer engineering they will be quizzed on), then we give the quiz and evaluate it. It’s more complicated than that, but the point is that we give candidates a way to prepare and we test how well they prepare for a task. This does have a bias: a candidate who happens to already know the material we interview on is more likely to pass. But at least every candidate is given the information they need to pass.

      1. -1

        Did the author even consider running his post through a spelling and grammar checker?