1. 63

  1. 21

    100%. I tell all my directs not to worry about being the best programmer. Worry about code readability. Can you tell me what function x does? Can you tell me why L65 is a bug? Writing pretty code, or having mastered some abstract algorithm, is useless on its own. Communication, planning, and code readability are paramount. Your linter handles and fixes the rest.
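
    To make that concrete, here's the kind of snippet I have in mind (invented for illustration; the marked line plays the part of the “L65” bug):

    def moving_average(values, window):
        # Mean of each window-sized slice of values.
        averages = []
        for i in range(len(values) - window):  # bug: off by one, silently drops the final window
            averages.append(sum(values[i:i + window]) / window)
        return averages

    print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5] -- the last window (3, 4) is missing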

    1. 8

      I agree that the currently commonplace practice of “whiteboard coding” in interviews is unhelpful, but I wonder whether just replacing it with predicting program output at execution is the right direction to go. I think asking higher level questions after reading code would be more useful. For example, “here’s a function X that does Y. If you want to add functionality to do Z as well, which part of the code would you modify? What new functions would you need to make?” This kind of problem mirrors what developers actually see every day (being tasked to work on an existing code base that other people have written), whereas the reading exercises the author suggests are more reminiscent of coding exams in school.
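
      As a concrete illustration of that format (an invented example, not one from the article):

      # “Here's a function that totals orders per customer (Y). If we also wanted
      # a per-customer order count (Z), what would you change, and what new
      # functions would you need?”
      def totals_by_customer(orders):
          totals = {}
          for customer, amount in orders:
              totals[customer] = totals.get(customer, 0) + amount
          return totals

      print(totals_by_customer([("ada", 5.0), ("bob", 3.0), ("ada", 2.5)]))  # {'ada': 7.5, 'bob': 3.0}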

      1. 8

        I assumed that predicting output would be a jumping-off point for different design questions. You can get into a lot of interesting questions: How would you abstract this? How would you test this? What bugs can you see? How would you refactor this code?
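
        For instance, the “how would you test this?” follow-up could look something like this (clamp is just an invented stand-in for whatever code the candidate has been reading):

        import unittest

        def clamp(value, low, high):
            # Keep value within the closed interval [low, high].
            return max(low, min(value, high))

        class TestClamp(unittest.TestCase):
            def test_within_range(self):
                self.assertEqual(clamp(5, 0, 10), 5)

            def test_clamps_to_bounds(self):
                self.assertEqual(clamp(-3, 0, 10), 0)
                self.assertEqual(clamp(99, 0, 10), 10)

        if __name__ == "__main__":
            unittest.main()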

        1. 4

          And each of those questions would be excellent for testing new developers (in my opinion). I was just disappointed that I couldn’t find any of those design questions mentioned in the original blog post. There’s just one offhand reference at the end (“This gives me extra time to ask follow-on questions”) without any discussion of what those would be.

          1. 1

            What questions would you ask?

      2. 5

        This is the first time in a while I’ve seen a compelling new alternative to rewriting Knuth on command. I would argue that it’s still a bit Bridge of Death, though. Being an effective reader of code usually involves some combination of stepping through it, logging, and experimenting, which is difficult to do naturally in an interview setting. Evaluating candidates on their ability to read obscure code also rewards the clever candidate, who has no qualms writing prematurely optimized, unnecessarily obfuscated code, over the good-teammate candidate, for whom such code is anathema. Yes, some code is difficult to read by necessity, but a team composed of people who make a cult of it eventually becomes deadlocked by its proliferation.

        1. 6

          These types of code-reading exercises have been around for years and people have been recommending them for at least as long (I remember doing one of these when interviewing somewhere 7-ish years ago). But most companies stick to algorithm challenges because they think all the big and successful companies do it, too.

          (some of them still do, others don’t)

          Personally I’ve always preferred to do this in the format of a code review, because there are all sorts of multi-axis signals that come out of running the exercise that way.

          1. 4

            > ability to read obscure code also rewards the clever candidate, who has no qualms writing prematurely optimized

            People who can read obscure code also prematurely optimize it? How does that follow?

            > some code is difficult to read by necessity

            How is the lack of readability a necessity for the code itself?

            1. 1

              Perhaps I should amend that to say, “Evaluating candidates solely or even primarily on their ability to read obscure code…” Although reading code is a prerequisite to writing code well, it’s the writing that is by far the more relevant skill.

          2. 4

            I think the author is on to something here. Most of what we do is reading and understanding code, whether it’s an existing code base or an existing library’s API (its documentation and examples). And a candidate can’t really practice for that: there is an infinite amount of code they could be presented with. I’m not sure what percent of an interview should be reading comprehension vs. writing code, though.

            1. 3

              This one took me aback. It’s a great idea. I would love to see it exercised and reflected on after a period of time. I wonder if it ends up optimizing against false positives like live coding seems to. The comments on the article have some good insight too.

              1. 2

                I like this a lot, I’m going to start incorporating this more into my interviews.

                One small warning: I’ve actually been “on the receiving end” of this style of question before, and I think it’s very easy to start asking questions that rely on essentially resolving ambiguities within the language in question. Stuff like assignment order within a for-loop preamble, or, for example, this one I’ve seen in Python:

                def func(element, to=[]):  # the default list is created once, at definition time
                    to.append(element)
                    return to

                print(func(14))  # [14]
                print(func(42))  # [14, 42] -- the same default list is reused across calls
                

                In my mind these are kind of like those boring math “brainteasers” you see online that are really just arguments about order of operations. On the other hand, it might be acceptable to ask why [] === [] in JS is false, so it’s not necessarily the case that all questions should be completely independent of language decisions.
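
                (The rough Python analogue of that JS question, as a sketch of the more acceptable kind:)

                print([] == [])  # True  -- value equality compares contents
                print([] is [])  # False -- identity check; two freshly created list objects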