Effective code review through principled pragmatism

Mike McGahan
Published in Making Meetup · Oct 22, 2018

‘Modern’ code review has become mainstream in the software industry, with generally favorable results. However, the practice itself is frequently unstructured, with very little shared understanding of best practices or of how to use review as a tool not just for catching errors, but for aligning software engineering across an organization. Part of the challenge is that code review affects such a broad set of behaviors that any ‘happy middle ground’ must balance a number of tradeoffs.

Although other tools exist to improve/ensure certain aspects of code quality in a more automated way (e.g. unit and integration testing, linting), code review by peers provides unique benefits primarily driven by its focus on collaboration and knowledge sharing — leveraging the power of community to improve outcomes. However, unlike automated approaches to improving code quality, code review ostensibly works against raw engineering throughput. A second set of eyes injects experience, domain knowledge, and error checking through an intentional deployment bottleneck. In theory, this short-term loss of throughput eventually reduces long-term losses associated with fixing uncaught bugs, specialized training on idiosyncratic architecture and implementation patterns, and the eventual introduction of new errors based on faulty understanding of poorly-implemented code.

The open question, then, is how to review code in a way that ensures quality gains are worth the (short term) throughput loss.

Strategies

A minimal code review boils down to two things: being able to see the weaknesses in a code block, and being able to communicate in a way that results in an improvement. Great technical skills and experience serve the former; great people skills serve the latter, but those are often very independent skillsets for reviewers. I’d like to discuss both aspects, but before detailing what to look for in a code review, it is important to establish the how — the right mindset and communication patterns make a huge difference in a review’s effectiveness at balancing the needs of individual developers with those of the organization as a whole.

Empathy

First and foremost, code review is primarily a social process, not a strictly technical one. ‘Good’ reviews can’t be scripted, and many experienced developers have effectively argued for the value of empathic, inclusive approaches to review, particularly through the language of collaboration (see Read more links below). This means inviting rather than ordering, suggesting rather than declaring, and starting conversations about code rather than issuing critiques.

Empathic code reviews have a number of benefits — they avoid morale pitfalls, increase team communication and bonding, and provide a high-bandwidth channel for sharing relevant information, skills, and development patterns. They promote a virtuous circle — positively-framed, empathic reviews build confidence while increasing quality, and quality code prompts positive, engaged reviews. They also ensure that reviews are mutually beneficial — the reviewer can (and ideally should) learn as much as the author, resulting in a shared sense of responsibility for the code that is ultimately shipped.

The risk of empathic code reviews is getting mired in indirect critiques that start with “I would like to hear about how you approached this problem”, continue with a discussion that is only partially relevant to the problematic code block, and end with a particular implementation recommendation that might have been better stated simply and directly up front. In a highly async (e.g. remote-friendly) organization, empathic code reviews framed as oblique concerns can introduce long-running intermediary steps between the start of review and final approval, and can therefore greatly reduce code throughput even as they build team cohesion and high-quality code.

However, given the fundamental value of building trust and alignment within a team (or ‘community’ for OSS projects), empathy must be the baseline approach for all code review. Beyond this core value, teams must also define a pragmatic approach to addressing code quality through review.

Principled pragmatism

If the world stopped when code went into review and resumed when it was approved, most code reviews could be ‘perfect’. With enough time and a static software environment, every questionable logic choice, ambiguous variable assignment, and unintended side effect could be identified, addressed, and fortified against future degradation. In the real world, code review is always a compromise. To address this challenge while maintaining consistency, one of the best things your organization can do is establish a set of orienting principles that set a shared language for evaluating implementation decisions. At Meetup, we use three Architecture Principles that provide a consistent reference for justifying/questioning the particularities of an implementation, and help frame ongoing discussion about the tradeoffs involved.

Build for Change • Build for Understanding • Build Meetup

These principles are intentionally open-ended and context-dependent, but establish a resource that has improved alignment of our engineering values when making decisions about our code. If your organization has matured enough to use code review, it should also make an effort to define its engineering or architecture principles.

What to look for

Practically, code review orients toward identifying ways in which ‘this code’ deviates from some concept of ‘good code’. To a large extent, then, a quality code review is dependent on the reviewer’s idea of ‘good code’ and their ability to effectively communicate that idea to the author. Bar none, the best way to develop a sense of ‘good code’ is to write and experience the consequences of ‘bad code’, and then be intentional about identifying weaknesses and applying the lessons learned. However, for anyone looking for a TL;DR in place of actual experience, my personal favorite is The Zen of Python (PEP 20), which, like Meetup’s architecture principles, is still highly ambiguous and open to interpretation based on context:

>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

More practically, here are the questions I ask when reviewing code, starting with the most well-defined:

1. Is this code necessary?

Most developers have had the experience of being asked to implement a feature or fix a defect and then immediately opening their editor to write code. It’s natural, but it is a lost opportunity to write an integrated solution. Fully leveraging existing tools takes time and planning, and in large codebases, useful tools or relevant contextual information about the data may be too hard to find to make the effort worthwhile. The habit of ‘coding our way out of a problem’ can blind us to valuable context, and code review is an opportunity to reset with a fresh perspective that can take in the bigger picture. However, many reviewers repeat the mistake by focusing exclusively within the boundaries of the diff, never stepping back to see how the new code relates to the existing code around it.

So, as a reviewer, how can I ensure that I bring this value to a review? At the macro level, it’s important for me to stay up to date with in-house code libraries (e.g. standard React components and date utilities) so that I can point code authors toward them in reviews, and at the micro level, it’s useful for me to expand the code around the diff to see what it interacts with — there are frequently details that prompt questions or recommendations on how to simplify the code to work more efficiently. I just have to be aware of them in order to know when to ask: ‘can we use the existing code more effectively instead of writing something completely new?’.
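As a concrete sketch of what that question can look like in JavaScript (formatDate and the src/util/dates module are hypothetical stand-ins for whatever your codebase actually provides):

import { formatDate } from 'src/util/dates'; // hypothetical in-house utility

const date = new Date();

// Hand-rolled: works, but re-implements logic the codebase already has
const handRolled = `${date.getMonth() + 1}/${date.getDate()}/${date.getFullYear()}`;

// What a reviewer might suggest instead: reuse the shared utility
const viaUtility = formatDate(date, 'M/D/YYYY');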

2. Are side effects justified?

The complement of considering how code context affects the new code is considering how the new code affects its context — its ‘side effects’. Watch for formal side effects like re-assigning a variable in parent scope, or more diffuse side effects like adding a new third-party dependency. These effects should be evaluated similarly to ‘is this code necessary’, but typically have more concrete solutions — better encapsulation, ‘pure’ implementations, or simply explicit documentation to highlight the expected impact on other parts of the application. Side effects can be difficult to see and evaluate, particularly for less experienced developers, so it is important for a reviewer to offer specific solution ideas that clarify the problem and provide a path forward.
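A minimal JavaScript sketch of that contrast, with a hypothetical cart item shape:

let cartTotal = 0;

// Formal side effect: the function silently mutates state it doesn't own
function addToCart(item) {
  cartTotal += item.price; // hidden write to parent scope
  return cartTotal;
}

// Pure alternative: inputs and outputs are explicit, nothing else changes
function addedTotal(total, item) {
  return total + item.price;
}

console.log(addedTotal(0, { price: 5 })); // 5, with no surprises elsewhere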

3. Has performance been balanced with clarity?

In the push to meet launch deadlines, technical performance sometimes gets left behind, but code review is still an opportunity to consider whether a particular implementation is efficient (or efficient enough) for the application. Even when performance has been considered by the author, a reviewer can bring contextual knowledge and ensure that the author is accounting for different kinds of performance, particularly in UI-driven applications in which raw execution speed may matter less than perceived interface responsiveness. There is always a balance to be considered when discussing performance — sometimes the optimal implementation from a performance standpoint also requires using exotic patterns that may ultimately be more bug-prone or otherwise difficult to maintain. Other times, however, the most performant implementation is also the most succinct, and reviewers should consider what alternative approaches with better performance characteristics might look like.

The only reason to choose a less-performant solution is superior code clarity.
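To make the tradeoff concrete, here is a small hypothetical example (the users and activeIds data are invented for illustration) where the faster approach costs nothing in readability:

const users = [
  { id: 1, name: 'Ana' },
  { id: 2, name: 'Ben' },
  { id: 3, name: 'Cal' },
];
const activeIds = [2, 3];

// Clear but O(n * m): fine for small inputs
const active = users.filter(u => activeIds.includes(u.id));

// Just as readable, and O(n + m): Set membership checks are constant-time
const activeIdSet = new Set(activeIds);
const activeFast = users.filter(u => activeIdSet.has(u.id));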

4. Does this code leverage established code conventions?

The ideal shared codebase looks like it was written by one person — the more consistent the implementation patterns, the less cognitive burden there is for future maintainers because there are fewer mechanical implementation decisions to be made. Linters and code formatters like Prettier or gofmt help with particular syntax idiosyncrasies, but in most languages (and particularly JavaScript) there are a handful of ways to solve a particular problem. Good code re-uses similar approaches to similar problems, with consistent levels of abstraction, congruent data structures, and predictable flows of information through the application. The code ‘rhymes’. It is also arranged in a way that groups related behavior.

Bad code tries something new at every opportunity. Good code leverages what your team already knows.

In order to keep the code rhyming, however, it’s important for reviewers to listen closely — if a particular syntax choice feels wrong, riff on the idea to see if you can find something that settles into its context better. If the code introduces a new library, experiment with how the same behavior could be accomplished using existing tools or by writing a more targeted utility. An effective review starts with an awareness of the conventions, but also communicates justified understanding of the value they bring, even if it’s consistency for consistency’s sake.
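A hypothetical JavaScript illustration of code that rhymes (the api client is stubbed purely for the sketch):

// `api` stands in for an in-house HTTP client, stubbed for illustration
const api = { get: async path => ({ name: path }) };

// Established convention in the codebase: async/await with template strings
async function loadGroup(id) {
  const group = await api.get(`/groups/${id}`);
  return group.name;
}

// New code that rhymes with it
async function loadEvent(id) {
  const event = await api.get(`/events/${id}`);
  return event.name;
}

// Same behavior, different dialect: breaks the rhyme for no benefit
function loadEventOffKey(id) {
  return api.get('/events/' + id).then(function (event) {
    return event.name;
  });
}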

5. Is the intent clear?

This is one of the most common critiques I offer and receive, and is a great channel for developing a shared understanding of the problem space around the code. Comments usually start with something like “I’m not sure about…” or “This is confusing to me…” and integrate the perspective of future maintainers who may have even less contextual information about the code than the reviewer, much less the author. Implementation clarity relates to everything from variable names and syntax choices to data structures and branching logic, and good reviewers are able to surface the needs of hypothetical future maintainers before the code even ships.

When I’m reviewing code, I am constantly attempting to mentally model what the code is doing, and if I can’t, that usually means that the code should, at a minimum, get some extra comments to prevent the next maintainer from hitting the same stumbling block. If I can almost model it in my head but feel like there’s something that could be expressed more clearly, I usually take a step back and attempt to write my own MVP code of the behavior I think is desired, and then compare it with what was written by the author. Discrepancies are opportunities to improve clarity, so I frequently include these ‘free hand’, untested code blocks of what I think the code is trying to do directly in my review comments in order to ground subsequent discussion with a tangible reference.

This kind of review comment — offering near-complete alternative implementations that serve as a basis for discussion or refactoring — avoids the frustration of ambiguous prose and forces me as the reviewer to think deeply about the code in order to meet the author where they are. A significant percentage of my code comments end up being deleted before the review is submitted because, through writing them, I am able to identify the flaws in my mental model and realize that the author is accounting for edge cases I have not considered. This is collaborative learning at its best, so ‘be generous with code examples’.
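For instance, a comment on some event-listing logic might read, “Would something like this cover the same behavior? Untested, and I may be missing an edge case around empty lists,” followed by a sketch like this (the events data here is hypothetical):

const events = [
  { name: 'JS study group', startTime: Date.now() + 86400000 },
  { name: 'Last month retro', startTime: Date.now() - 86400000 },
];

// Untested review-comment sketch: the next three upcoming events
const now = Date.now();
const upcoming = events
  .filter(e => e.startTime > now)
  .sort((a, b) => a.startTime - b.startTime)
  .slice(0, 3);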

The key to making such detailed comments work in practice is ensuring they are offered with a question mark rather than an exclamation point — code reviews should not be fundamentally framed as demands for change except in very particular circumstances (e.g. a security bug introduced by the code that can be avoided with an alternative approach). Even something as simple as “this implementation feels awkward, would something like this work better?” can often be enough of a sanity check for both the author and the reviewer to reveal weaknesses in the implementation or their shared understanding. As an engineer, such comments are gold, and my favorite reviewers are the ones who I trust to find the parts of the code I was uncertain about writing, and pull them out for reconsideration.

As your engineering experience grows, you will also develop a stronger sense of how application architecture choices support implementation clarity, and addressing these choices can be one of the most difficult aspects of code review. When a code change is fundamental enough to the behavior of the application to be treated as ‘structural’, it is critical that it be reviewed not just at the implementation level, but the conceptual level, pushing the review into much more subjective territory. In many cases, such reviews turn into discussions on the nature of application design and programming paradigms, where there is not necessarily a “right answer”, but the author and reviewer can help drive the solution toward one that accounts for their combined experience and informed expectations.

Wrapping up a review

Even when you have a clear set of goals for a review, the most insightful response can be undermined by poor packaging, i.e. how the review is delivered in terms of language and peripheral content. Even if a code review starts with checking the WTFs per minute, it doesn’t have to be presented that way. This partially means treating the author with inclusive respect — the empathic approach described earlier — but it is more than just treating others as competent developers. It requires treating the code review as a collaboration in which each party is responsible for making some attempt to answer their own questions before expecting an answer from the recipient. As a reviewer, I check Stack Overflow, reference relevant blog posts I’ve come across, run the code locally to test hypotheses, and make an effort to understand the original issue that prompted the code change. That’s on me, unless the original issue is undocumented or the code change is not clearly justified — then it’s on them.

Good code review is thoughtful. Perfunctory, rubber-stamp reviews are worse than useless — they introduce risk to your application. So take the time to mull implications, identify implementation details you want to learn about, and ask questions. Encourage code authors to share the things they learned with the wider team. Have high expectations of others, and encourage them to have high expectations of you when they’re reviewing your code. If an implementation has not met a high standard, ideally one grounded in shared principles, use that as an opportunity to communicate honestly in a way that builds trust. The engineers I enjoy working with most are the ones who push me to be better — I want to be called out when I’m cutting corners I shouldn’t, but I want to build from that position, not be torn down. Make this the norm in your organization. Don’t just ask a passive-aggressive question. Don’t just say it’s wrong. The author has thought about every line they’ve written, but they may not have had (or applied) all the relevant considerations, so it’s your job to help.

As with any other activity, intentional practice with code review is the best way to get better, and good reviews have a strong multiplying effect on the value of every developer in your organization. If you haven’t started reviewing, start. If you’ve started, do more. Make an effort to be a conduit for contextual information that can help authors improve. Build a culture of review based on agreed-upon principles, and you’ll build a culture of high throughput, high impact software engineering.

Read more

There are a lot of resources about code review out there, but it can be hard to find actionable, experience-driven advice that can be applied to a variety of organizations. Here are some articles that I’ve found useful.
