All but one of these function to make reviews stricter - i.e. reject more stuff - but none of them are really ways to make reviews more effective. Effective does not always mean “stricter”.
My personal experience is that there is a strong bias among programmers toward feeling good that they rejected stuff and “were a tough reviewer!”, and never considering the false-positive side of that, where you end up asking people to make a bunch of time-consuming changes that signal strictness but are not really worthwhile. Talking about preventing “sub-optimally factored code” sounds like micromanagement at the review stage and does start to ring alarm bells for me!
This one in particular really rubs me the wrong way:
To review a change, it is important to agree on the problem to solve. Ask the author to supply a problem statement if it is not presented in the change. Only once you understand the problem statement can you evaluate the quality of a solution. The solution could be over- or under-engineered, implemented in the wrong place, or you could even disagree with the problem statement altogether.
If you are ever in the situation where someone has spent time implementing something and their entire approach gets rejected at the final review stage you should absolutely not be patting yourself on the back for that. You need to immediately introspect about why your team process has spent time producing code that was fundamentally wrong from the beginning.
If you are ever in the situation where someone has spent time implementing something and their entire approach gets rejected at the final review stage you should absolutely not be patting yourself on the back for that.
I think there’s some nuance here in terms of where the change came from. In open source, it’s fairly common for someone to show up and drop a PR on your project with little or no explanation. That doesn’t make it bad, and it doesn’t make the person who submitted it bad. However, understanding the problem they are trying to solve is still important and may just need to happen in the code review. On the other hand, if we’re talking about a corporate setting, or a PR submitted by a regular contributor to a particular open source project, then it should already have been discussed (and if it wasn’t, there are probably larger communication issues).
From personal experience, I’ve helped someone maintain a friendly fork of an open source project because they wanted a change that I definitely did not want upstream, but which solved a real problem for them (working around a bug in something else that was specific to their deployment). I think ‘this is a great implementation of what it does, but we don’t want it’ is a fine outcome for open source code review.
I’ve definitely met some people who went too far (in my opinion) when it comes to requesting changes in code review. I prefer to conduct code review in person or in a freeform chat, rather than Gerrit / PR conversations / etc., because I think there’s a bit of nuance to it. In my opinion things should go something like the following:
Major bugs / functional issues: no real question here, send it back to be fixed.
Unhandled edge case: fix if possible, but an assert guarding against the unhandled condition, plus a TODO comment explaining why you might actually need to handle it at some point, are an acceptable alternative if there’s more important work to be done (see the sketch after this list).
Violation of something explicitly set out in the company style guide: fix unless emergency.
Refactoring opportunity / optimization opportunity / “I wouldn’t do it that way”: discuss, don’t insist. Sometimes the reviewee will agree that you have the better idea and they’ll go and make the change. Sometimes they’ll prove you wrong because they learned something during implementation that you overlooked from your lofty position. Sometimes they’ll agree that you made a good suggestion, but you’ll agree together that now isn’t the time to squeeze in the change (but the reviewee learned something, and you can come back to it later). And sometimes you’ll just agree to disagree on a matter of style. I think a lot of senior devs push too hard for having things done exactly their way, and end up wasting time and goodwill on things that are ultimately immaterial.
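For what it’s worth, here’s roughly what I mean by the assert-plus-TODO option, as a minimal Python sketch (the function and its parameters are made up for illustration, not from any particular review):

```python
def schedule_jobs(jobs, max_workers):
    """Assign jobs to workers round-robin (hypothetical example)."""
    # Unhandled edge case: this has only been exercised with at least one
    # worker. Rather than hold up the change, guard the assumption and
    # leave a trail for whoever hits it later.
    assert max_workers > 0, "scheduling with zero workers is not supported yet"
    # TODO: support max_workers == 0 by queueing jobs instead of assigning
    # them; we'd need this if we ever allow pausing the whole worker pool.
    assignments = {i: [] for i in range(max_workers)}
    for n, job in enumerate(jobs):
        assignments[n % max_workers].append(job)
    return assignments
```

So schedule_jobs(["a", "b", "c"], 2) gives {0: ['a', 'c'], 1: ['b']}, and the zero-worker case fails loudly instead of hitting a confusing ZeroDivisionError later.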
I have almost word-for-word the exact same outlook. One trick I use (which it sounds like you already know) is to just phrase pretty much every piece of feedback as a question rather than a demand. Most people are good-natured and will either make the change you’re implying or will explain why not (often convincingly). One of the very worst things you can do is get up on your high horse about something in the code before hearing about it from the author.
Yeah, I tend towards that as well. I don’t hold fast to it, but I agree. Even in the most straightforward cases, “hey, won’t the code blow up here if this variable is less than zero?” sets a much better tone than “this is broken.” Not to mention, it makes me look better in case I made a mistake and there isn’t actually a bug.
On the contrary, I have a really hard time rejecting stuff. I suspect a lot of people in the industry are probably similar to me (shy, imposter syndrome, etc). I’m sure there are just as many on the other side of the spectrum (socially tone-deaf, cocksure, etc).
It has to be pretty “bad” for me to reject it or ask for a large refactoring of their approach, for better or worse. Most of the time, if I don’t see actual errors/bugs with the approach, I try my best not to shit on the whole approach just because it was not done the way I like.
On the contrary, I have a really hard time rejecting stuff. I suspect a lot of people in the industry are probably similar to me (shy, imposter syndrome, etc).
I have reviewed code from, and submitted my code to, hundreds of other developers, and what I observed is that your suspicion is off. By and large the majority, I’d say 90%+, get high on power and are eager to reject patches for everything and nothing. White space, bracket style, indentation, style, naming conventions, linting and other secondary details always end up taking up like 75% of the work involved in code reviews. Grandparent is spot on.
The assertion that everyone is in a position to review everyone’s code is a very silly premise, and it can only result in nonsense like this. We then get sprinkled with posts like this one on “how to do it right”, as if the authors have the magical formula and everyone else is doing it wrong.
On the occasions when I had ownership of the code, I ended up asking fellow developers not to submit code for review unless they felt uncertain about it or needed additional input for some specific reason.
“At least two pairs of eyes”, “a last line of safety before the main trunk” and other silly mantras are nonsense meant to compensate for a lack of competence or code quality. You can throw 20 pairs of eyes at it; if those 20 are bad developers, that is worse than just one pair of eyes. There is no shortcut to competence.
Furthermore, the idea that the whole codebase can be reviewed bit by bit doesn’t make sense. You can have 20 patches that all look perfectly fine by themselves, but when you put them together the result is a disaster.
An experiment for anyone to try: gather the rejection ratio of your coworkers and compare it with how you would rate them in terms of skill/competence. The two are usually inversely proportional.
White space, bracket style, indentation, style, naming conventions, linting and other secondary details always end up taking up like 75% of the work involved in code reviews.
Other than naming conventions, humans shouldn’t be wasting their time reviewing any of those things anyway; automated tools should be rejecting those things before a code review request even goes out.
On teams I’ve been on where we’ve had automated checks for machine-detectable violations of the project’s style conventions, code reviews have tended to stay much more focused on the stuff that matters.
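As a concrete (and assumed) example of what that automation can look like - here for a Python project that happens to use black and flake8; substitute whatever formatter and linter your stack uses - a small gate script run from a pre-commit hook or a CI job before review is requested:

```python
#!/usr/bin/env python3
"""Pre-review gate: fail fast on machine-detectable style issues.

A minimal sketch of the idea above, not a prescription: it assumes the
project uses black for formatting and flake8 for linting.
"""
import subprocess
import sys

CHECKS = [
    ["black", "--check", "."],   # formatting: whitespace, indentation, line breaks
    ["flake8", "."],             # lint: unused imports, undefined names, etc.
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        # Run each check and remember whether any of them failed.
        if subprocess.run(cmd).returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

The point is just that by the time a human opens the diff, that whole class of comments has already been dealt with.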
If you are ever in the situation where someone has spent time implementing something and their entire approach gets rejected at the final review stage you should absolutely not be patting yourself on the back for that. You need to immediately introspect about why your team process has spent time producing code that was fundamentally wrong from the beginning.
I’d agree strongly with this. Having the implementation approach be part of the PR feedback cycle makes development inordinately time-consuming and adversarial; the technical approach needs to be either (ideally) ingrained into the day-to-day communication on your team through some mechanism like Slack, daily sync meetings, etc., or else formally deliberated on before the time is spent to build a complete solution. It’s better to take the time to have a half-hour meeting than to let a two-day change be implemented in an unacceptable fashion.
It’s better to take the time to have a half-hour meeting than to let a two-day change be implemented in an unacceptable fashion.
Doesn’t that depend on how often the problem happens? It is, I think, not better to have a few dozen half-hour meetings over the course of a year when you would only have prevented one or two rejected PRs. Those meetings don’t come at zero cost to the team.
It does depend on how often the problem happens, but I think the probabilities you’re implicitly assuming aren’t quite right (IME, of course). Disagreements over approach at review stage are quite common if the person was not involved at the design stage, largely because there are just a lot of ways to do something. In the worst cases you will find a senior or lead developer who is too busy to attend design sessions but sits as gatekeeper on PRs. That tends to lead to lots of rework and a very slow effective pace of improvement.
To make it explicit with some numbers: getting a design agreed usually only takes a 10-15 minute discussion between 2-3 people, one of whom can do the work and one of whom can review it. If you skip that, I would say you have a ~15% chance - and that’s conservative - of drama at review stage and rework. Solving for x: if your piece of work takes more than ~5 hours, you should always have the design session. Obviously this is approximate maths; I just hope to give a sense of when the conversation is worthwhile and when it’s not.
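Spelling the arithmetic out (same rough numbers as above - they are assumptions, not measurements):

```python
# Back-of-the-envelope break-even for "should we have the design discussion?"
people = 3                    # 2-3 people in the design discussion
discussion_hours = 0.25       # 10-15 minutes
meeting_cost = people * discussion_hours     # ~0.75 person-hours
rework_probability = 0.15     # chance the approach gets rejected at review

# Skipping the discussion only pays off while the expected rework
# (probability x hours of work redone) stays below the meeting cost:
#     rework_probability * work_hours < meeting_cost
break_even_hours = meeting_cost / rework_probability
print(break_even_hours)       # 5.0 -> above ~5 hours of work, have the session
```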
Huh? Isn’t it more like 9/10?
Other than that, I mostly agree with what the author is saying, although I’m missing some more actionable items in the post.
When people here talk about how “code review” should go, could we say what kind of code we’re reviewing?
The code review for my pacemaker had better look pretty different from the code review for my todo list.
The fact that I agree with you makes me a bit sad. Why are we willing to accept engineering practices for constructing things that we use every day that we wouldn’t trust for safety-critical systems?
I’m also sad because, from what I’ve learned about pacemaker software, the to-do list app probably has higher code quality with fewer security vulnerabilities.