
    The first couple seem closely related (to me).

    Rushing the review, or putting it off, is typically the result when the author is thinking about the code but not the people (particularly common examples are large changesets and using tools/features not found elsewhere in the codebase).

    I think point four is great, but it could go further; there's vast scope for automation to support a reviewer. Some examples of things a review-support bot could do:

    • For CSS changes, automatically attach before/after screenshots in common browsers.
    • Provide a link to view all calls to a modified function.
    • Identify new code paths that aren’t covered by tests (they are worth knowing about even if you leave them as is).
    • Inform the reviewer how frequently this code is modified (ideally, frequency of bugfix vs feature changes).
    • Identify which new tests would fail on the old code and which would pass.
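
    The third bullet can be approximated by intersecting the lines a pull request adds with the lines the test suite actually executed. Here's a minimal, hypothetical Python sketch of that idea; the diff parsing is deliberately simplified (it assumes a unified diff with `a/`/`b/` prefixes), and the coverage data is passed in as a plain dict rather than read from a real coverage report:

    ```python
    import re

    def added_lines(diff_text):
        """Map each file in a unified diff to the set of line numbers it adds."""
        added = {}
        current = None
        new_line = 0
        for line in diff_text.splitlines():
            if line.startswith('+++ b/'):
                current = line[6:]
                added[current] = set()
            elif line.startswith('@@'):
                # Hunk header like "@@ -1,3 +1,4 @@": grab the new-file start line.
                new_line = int(re.search(r'\+(\d+)', line).group(1))
            elif line.startswith('+') and not line.startswith('+++'):
                added[current].add(new_line)
                new_line += 1
            elif not line.startswith('-'):
                # Context line: advances the new-file line counter.
                new_line += 1
        return added

    def uncovered_new_lines(diff_text, coverage):
        """coverage: {filename: set of executed line numbers} from a test run."""
        return {f: sorted(lines - coverage.get(f, set()))
                for f, lines in added_lines(diff_text).items()
                if lines - coverage.get(f, set())}
    ```

    A review bot would then comment on the PR with each file's uncovered new lines; an empty result means every added line was exercised by the tests.
    
    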

      I would really like some of those tools myself. Today I generate the screenshots manually for visual changes. :0)

      Codecov.io is a tool I’ve used which provides your third bullet. It will analyze the pull request diff and the code coverage report for that branch, and generate a focused pull request-specific coverage report. I like it a lot. Here’s a very simple example: https://codecov.io/gh/scottnonnenberg/eslint-compare-config/commit/1c8115c3870d7306140e2c928b1964b1ea494c79