1. 12

This is really, really good. I aim for the top 10-20% of reviewers in review turnaround time wherever I work. Unfortunately you can’t really measure it, at least when you’re hiring or looking for a new position.

(although maybe… you could set up webhooks w/ github to publish data on turnaround time per person…)
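
That idea could start very small. A minimal sketch in Python, assuming you record the delivery time of a `review_requested` webhook and pair it with the `submitted_at` field of the later `pull_request_review` event (the event and field names follow GitHub’s documented payloads, but treat the wiring as an assumption, not a finished tool):

```python
from datetime import datetime


def parse_gh_timestamp(ts: str) -> datetime:
    # GitHub webhook payloads use ISO 8601 timestamps with a trailing "Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def review_turnaround_hours(requested_at: str, submitted_at: str) -> float:
    """Hours from a review being requested to the review being submitted."""
    delta = parse_gh_timestamp(submitted_at) - parse_gh_timestamp(requested_at)
    return delta.total_seconds() / 3600.0


# e.g. pairing a review_requested delivery with the review's submitted_at:
print(review_turnaround_hours("2024-01-01T09:00:00Z", "2024-01-01T15:30:00Z"))  # 6.5
```

From there you could aggregate per reviewer and publish the distribution, though you’d want to exclude weekends and off-hours before comparing people.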


  2. 4

    My comments on the matter (originally emailed to the author):

    I read & liked your article on code reviews, with one exception - I think you’re significantly underestimating the cost of context switching.

    You write:

    “If you have time to get up to eat or go to the bathroom, you probably have time to run through your code reviews before paging your code back in.”

    In my experience, if you’re doing effective code review you need to get your head into the problem space (sometimes, language, OS and platform differs).

    This WSJ article claims 23 minutes per context switch, which I find believable based on my own experience:


    So that’s roughly three quarters of an hour per review lost to context switching: 23 minutes to load the review’s context in, and another 23 to page your own work back in afterwards. Either that, or pay the price of poor-quality, distracted work on either the reviews or your original code.

    The approach I’ve seen that works well is to treat code reviews as cards on the wall like any other, to be moved, prioritised, assigned, and completed. This is much more productive and humane than attempting to be interrupt-driven.

    1. 3

      In my experience, if you’re doing effective code review you need to get your head into the problem space (sometimes, language, OS and platform differs).

      IME, if it takes a long time to get into a code review, the reviews are too large. I keep my code reviews very succinct explicitly so that my reviewers can hit really good SLAs on responding. Big code-dump changes are sometimes necessary but often avoidable, and they cause a lot of bugs and pain.

      1. 1

        To be fair, having a bunch of branches out for review means the author needs to context switch as well and reload the code back into memory.

      2. 2

        I would have liked more detail on the code review practice itself - how to do it, what to look for, what tools could be used and so on.

        1. 4

          This is an area where I notice a lot of difference between reviewers. I tend to look for stuff like (no particular order; numbered only for reference):

          1. Are there tests? Are the tests appropriate for the documented change?
          2. Are there any fishy algorithms introduced? (The classic is a nested loop which works fine with the test data but explodes on larger inputs. See also point 1!)
          3. Are functions & variables named sensibly?
          4. No double negatives?
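
          A toy illustration of point 2 (hypothetical functions, not from the article): the nested-loop version sails through review with a ten-row test fixture, then explodes on production-sized input, while the linear version is the same one-line check.

          ```python
          def has_duplicates_slow(items):
              # O(n^2): fine on the test fixture, quadratic blowup on real data
              for i, a in enumerate(items):
                  for b in items[i + 1:]:
                      if a == b:
                          return True
              return False


          def has_duplicates(items):
              # O(n): a set makes the same check linear (for hashable items)
              return len(set(items)) != len(items)
          ```

          Per point 1, a test over a large generated input is what actually catches this class of problem.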

          I felt that I did fairly casual reviews, looking at the changed code in isolation. I assumed that the automated build would do its thing and the code would actually run. I have had colleagues that would run the code, and even step through it in a debugger before giving it the green light. Those colleagues didn’t manage to get through many reviews, and often required a lot of chasing…

          1. 4

            Names are huge for me. In my experience, algorithms, data structures, even component boundaries and (some) platform choices are all easier to change than a bad name that has had some time to settle in. So I am very aggressive when reviewing code that I think introduces such a name. (Of course I’m referring here to names that will be used widely within a project; very local names are not as big a deal.)

            1. 2

              Where I work we port to 4 different platforms, and I’m one of the people who has to do the porting, so these are additional points for me (we use C/C++, and every application uses the same cross-platform library):

              1. Did you use platform specific (It’s always Win32) code in a cross platform file? If you edited a cross platform file, did you think about how it will compile/work on other platforms?
              2. Did you use ANSI/UTF-16 and assume that all characters can be treated independently of what comes next? Did you assume that wchar_t is 16 bits? Did you assume that char will only hold English characters?
              3. Did you assume that Ok/Cancel/Help buttons are the same size and are laid out in a particular order?
              4. Did you assume the user has a mouse/keyboard?
              5. Did you assume the user has a 1080p or larger display and that the widest side of the display is horizontal?
              6. Did you assume the menu is connected to the window?
              7. Did you assume the application should close when the user hits the x/cross on the main window?
              8. Did you assume the user is an administrator or has certain privileges such as being able to install applications/drivers? Did you assume the application could view/manipulate/control other applications?
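
              Point 2 can be demonstrated without C; a sketch in Python (chosen here only for brevity) shows why UTF-16 code units can’t be treated independently of what comes next:

              ```python
              # U+1D11E (musical G clef) lies outside the Basic Multilingual
              # Plane, so UTF-16 encodes it as a surrogate pair: two 16-bit
              # code units that are only meaningful together.
              clef = "\U0001D11E"
              utf16 = clef.encode("utf-16-le")

              print(len(clef))   # 1 character
              print(len(utf16))  # 4 bytes = two 16-bit code units

              # Taking the first code unit on its own leaves a lone high
              # surrogate, which is not decodable by itself.
              try:
                  utf16[:2].decode("utf-16-le")
                  lone_surrogate_fails = False
              except UnicodeDecodeError:
                  lone_surrogate_fails = True
              print(lone_surrogate_fails)  # True
              ```

              The same trap exists in C/C++ whenever code indexes a UTF-16 (or 16-bit wchar_t) buffer one unit at a time.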