Part of being a great software engineer and team player is doing proper code reviews. Some hate it because it feels like a chore, but it’s an extremely important practice that every team member should do well to keep the codebase healthy and developer productivity high.
Understanding the “Why”
We do code reviews mainly for two reasons: the correctness and the understandability of the reviewed code.
Correctness: We want to catch bugs before we ship them
Does the code implement the feature(s) as agreed upon in the functional specification? Continuous integration (CI) is great for weeding out troublemaking pull requests and code changes, and this can be baked into code reviews as well. More on this below.
Unfortunately, there are still cases where bugs can sneak through due to insufficient testing, whether in unit tests, functional tests, or code that isn’t robust under stress tests at scale. This happens from time to time, and the author might not even notice it. This asymmetry is common in larger teams, where individual contributors are more abstracted away from the complex systems that run their code. Hence, having the “right” eyes (reviewers) on code before it’s checked in reduces the risk of a system failure early in the pipeline.
Understandability: We want code to be clean and easy to maintain
Is the code understandable and maintainable? Is it well documented? Putting correctness aside, the ground truth behind the understandability of code is whether an engineer unfamiliar with it is able to work with it without much pain down the road.
There are also specific technical tools, and best practices for using them, that a more senior engineer might have better expertise on. Should you be using library x when there’s a known security vulnerability? Should you be calling a GET on all devices when there’s already a cached response? Someone new to or unfamiliar with the codebase is able to write correct code, but not necessarily performant code, thus increasing technical debt in the overall system. This is also a great knowledge transfer opportunity for junior engineers being mentored by their more senior coworkers.
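As a toy illustration of the cached-response point, here’s a minimal Python sketch using `functools.lru_cache`. The function name and the in-memory “network” call are hypothetical stand-ins, not from any real codebase:

```python
import functools

CALL_COUNT = 0  # tracks how many "network" calls actually happen

@functools.lru_cache(maxsize=None)
def get_device_status(device_id):
    """Hypothetical lookup; a stand-in for a real GET to a device."""
    global CALL_COUNT
    CALL_COUNT += 1
    return {"id": device_id, "status": "online"}

# Repeated lookups for the same device hit the cache, not the "network".
for _ in range(3):
    get_device_status("router-1")

print(CALL_COUNT)  # prints 1: only one real call was made
```

The same idea applies at any layer: a reviewer who knows a cached response already exists can save the author from issuing redundant requests.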
Continuous Integration and Diffs / Pull Requests
I briefly mentioned continuous integration, and I think it’s worth pointing out that automated systems and testing should be done prior to a code review.
We as an industry have made significant advancements in certain areas, especially DevOps. The code review process has benefited from this, since a lot of code vetting steps can be done by proper CI. Throw in an automatic linter too while you’re at it.
Any component of the pull request that doesn’t pass these checks should be fixed before requesting human reviewers. This is done for two good reasons:
- Code is incorrect. Any regression flagged by the automated build/test system should be resolved as part of the pull request.
- Reduce distraction and unnecessary context switching for reviewers. There’s no need to pull a human into the review if the code doesn’t even pass automated checks.
Reviewers play an important role in code gating and should be the last layer of defense.
Please keep your code reviews / pull requests small. It reduces the risk of slipping bugs into production.
Try your best to split up huge feature pull requests if they’re going to require many more lines of code. Large pull requests with lots of code changes tend to carry a bigger risk of introducing bugs, and those bugs are harder for reviewers to catch since there’s a lot more going on. Reviewers will then shy away from reviewing them.
Most code review tools, like Arcanist, include a template that allows the author to describe how the code was tested. It’s recommended to include all the steps taken to test the code, as well as steps to reproduce the desired output.
Over the years, I’ve devised a simple checklist of things that I should be doing when I conduct a code review. This may or may not work for you and your team, and you’ll probably need to fork it and come up with some variation of it.
Review Them ASAP While They’re Hot
You want to do this while the high-level features/requirements are still fresh in both your mind and the author’s. Leaving a review/branch stale for too long can also introduce side effects such as a diverged branch (which happens more regularly in teams that adopt monorepos with tons of checkins), causing painful rebases down the road.
Check for Testing Procedures
- Were there added unit tests for the features in the pull request? Do they exercise important code paths and have reasonable code coverage?
- Did the necessary tests/builds pass? This can be a combination of unit/functional/smoke tests, etc. If there are manual steps involved, is the author’s approach valid?
- Are there warning flags or weird edge cases that need to be addressed at the time of checkin?
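As a sketch of what “exercising important code paths” can look like, here’s a hypothetical helper with tests covering the happy path plus the edge cases a reviewer should expect to see covered (the function and test names are made up for illustration):

```python
def chunk(items, size):
    """Hypothetical helper under review: split a list into chunks
    of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# Happy path, uneven split, empty input, and invalid input all get a test.
def test_even_split():
    assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

def test_uneven_split():
    assert chunk([1, 2, 3], 2) == [[1, 2], [3]]

def test_empty_input():
    assert chunk([], 3) == []

def test_invalid_size():
    try:
        chunk([1], 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

for test in (test_even_split, test_uneven_split, test_empty_input, test_invalid_size):
    test()
```

If a pull request only ships the first test, that’s a reasonable thing to flag in review.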
Code Sanity Checks
- Is the code implemented correctly as specified in a design doc?
- Are there language specific gotchas that the author missed?
- If configuration files are updated, are the right files correctly modified in the right places? Some have schemas that need to be backwards compatible and, unfortunately, might not be caught by automated tests. Do they look right to you?
- Did the author follow the agreed upon programming language style guidelines?
- Are there sufficient comments in “hard to parse” code segments? Are they clear enough to understand without tracing through the implementation?
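On the “language-specific gotchas” point, a classic example in Python is the mutable default argument, which is easy for an author to miss and for a reviewer to catch. A minimal sketch (hypothetical function names):

```python
# Gotcha: a mutable default argument is created once, at function
# definition time, and shared across all calls.
def append_buggy(item, items=[]):
    items.append(item)
    return items

append_buggy(1)         # returns [1]
print(append_buggy(2))  # prints [1, 2] -- the default list kept state!

# The fix a reviewer should suggest: default to None, allocate inside.
def append_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

append_fixed(1)
print(append_fixed(2))  # prints [2] -- each call gets a fresh list
```

Every language has its own version of this; part of a reviewer’s value is knowing which ones to look for.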
That’s it for now. I’ll leave you with this: