Reviewing a pull request can take anywhere from a few minutes to a very long time, depending on its size, its complexity, and how diligently you go about it.
I don’t think all pull requests demand the same level of rigour. If you have a skilled team and a high level of trust, you can leave it to the author to judge how thorough a review needs to be, and to communicate that.
Most of us have at some point sent off a pull request while uncertain whether our understanding of the problem or the solution was right. We’ve probably also experienced the opposite: being sure that what we’re doing is right, having spent significant time thinking about edge cases. The amount of time a reviewer ought to spend on the pull request in the first case is probably significantly higher than in the second.
With that in mind, I propose a taxonomy of review thoroughness. When submitting a pull request, the author can ask for a specific level of thoroughness. The requested level should be considered the minimum, and the reviewer should be free to be more thorough if they have time or decide it’s warranted.
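One lightweight way to communicate the requested level is in the pull request description itself. As a sketch (the summary, level names, and wording here are illustrative, not a prescribed format):

```markdown
## Summary
Add retry logic to the webhook handler.

## Requested review level: regular
Please pull the branch and try the failure path locally.
I'm fairly confident in the happy path, so a skim is fine there.
```

A team could bake something like this into a pull request template so the question gets asked by default.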
Let’s go through the different levels, from least time-consuming to most time-consuming.
An author might not be ready with their entire implementation, but partway through they might already have some things they know they’re unsure about.
If so, they might ask for a “comments-only review”, where the reviewer doesn’t look at the entirety of the code, but only answers specific comments and questions the author has left. This type of review is a bit of an outlier, as it generally doesn’t end in an approved pull request. However, it’s a great tool in your toolkit: asking for a comments-only review while working on something else is generally a good strategy.
By the time you’re done with whatever else you were working on, you will have gotten outside input allowing you to continue your work.
Even for the other types of reviews, it’s also a good habit to add comments to your pull requests to guide the reviewer and highlight things that you’d like them to pay extra attention to.
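As a sketch of what such a guiding comment might look like (the file name and the concern are made up for illustration):

```markdown
> **Author comment on `invoice_builder.py`:**
> I wasn't sure whether rounding should happen before or after
> applying the discount. This is the main thing I'd like a
> second opinion on; the rest of the file is mechanical.
```

A handful of comments like this lets the reviewer spend their attention where it actually matters.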
The first and quickest level of review that can actually end in an approval is “skimming”. It consists of reading through the code in GitHub or your diff UI of choice and seeing if anything catches your eye.
The sort of feedback skimming normally results in:
Skimming explicitly doesn’t look at anything but the diff, so it generally won’t be able to give feedback like:
Skimming in general is useful for
This is the level of thoroughness where you not only look at the changed code, but also check that everything fits well into the big picture. Generally this includes checking out the code locally so you can jump between methods and look at more code than just the diff.
For code with a UI, it will generally also include running the code and trying out the new interface, perhaps spending a few minutes trying to break it.
In a regular review you would consider these things (in addition to what you considered in skimming):
Regular reviews take longer than skims. Pulling down the code and trying it out takes time. Ensuring things fit into the big picture also takes more thought and time, especially if the reviewer isn’t as familiar with the given codebase.
Thorough reviews are for critical or security-related code: code that’s extremely important to get right for one reason or another. They should probably be used sparingly.
A thorough review essentially does the same thing as a regular review, but spends longer on it. It might consist of:
As you can see, this isn’t a hard taxonomy but rather a sliding scale that depends on both the reviewer and the author of the pull request. A few examples where it slides:
Even if this is a rather fluid scale, I still think it might serve as a sensible starting point to adjust your expectations and communicate what sort of review you’re expecting.
I’m not sure what the standard level of thoroughness should be. We do have some scientific studies suggesting you need to be rather thorough for code reviews to catch defects, but the cost-benefit analysis probably varies depending on the tech you use, the area of the codebase, your test coverage, and the criticality of your systems.
Anecdotally, most reviews I perform are skims, diving into a “regular” level of thoroughness only for a few files here and there where skimming doesn’t give me enough context. Personally, I rarely do a thorough review unless it’s very critical or security-related code.