Humans Reviewing Code

Heuristics for Better Code Reviews

Adam Thody
4 min read · Nov 26, 2013


Show me ten teams and I’ll show you ten different processes or conventions for reviewing code. I am fascinated by conditions such as these; they are opportunities.

There are two easily accessible explanations for why a multitude of different approaches have been adopted to accomplish the same common task: either communication is poor, or there is no clear, single winner. With respect to the latter, a pessimist’s view might be “They all suck.”

First, let me state that I will not be talking about tools. Tools can be leveraged to better enable a process, but they should not dictate its parameters.

Because a great deal of attention is paid to finding better ways for teams to communicate, and because the nature of development leads to frequent cross-pollination between teams, I chose to focus my efforts on understanding why the most widely adopted practices are still considered ineffective.

In this context, what does “suck” actually mean?

Before we can delve into the efficacy of any particular approach, we must first agree upon a set of objectives. While there are many less tangible benefits to code reviews, here I will propose that the single most important objective of code reviews is to catch obvious errors in logic as close to their point of origin as possible.

Show a developer ten lines of code and they will find ten errors. Show them one hundred lines and they will find nothing wrong.

You may be familiar with this adage, or some variation of it. When I first heard it, my reaction was to chuckle, quietly reflect, then acknowledge this to be generally true. It did not occur to me at the time that I could leverage this information effectively.

I attribute my casual discarding of this valuable nugget of behavioural insight to my inability to quickly rationalize it. My intuition was that it was true, but why on Earth was this the case? “Laziness”, as obvious an answer as it might have been, would have been a lazy conclusion.

It wasn’t until I recently read Daniel Kahneman’s wonderful Thinking, Fast and Slow that I felt I could articulate a reasonable explanation of this phenomenon.

I now attribute it to three key factors.

Firstly, a reviewer’s default response is approval. In other words, as the time allocated to a review approaches zero, we become far more inclined to accept a set of changes than to reject them. This may be because we have faith in our teammates, or because we trust our automated tests to catch most regressions, but a more likely explanation is that we do not want to have to explain why we have unjustifiably stood in the way of “progress” (progress being hypothetical, since without proper review we don’t truly know in which direction we are headed).

Secondly, we are prone to cognitive bias. In code reviews, this means that recognition plays a significant role. Reviewers ask themselves, “Does this look like good code?”

We are easily fooled by source formatting, proper method/class length and other visual indicators that code is “good”, when it may in fact be horribly flawed in logic.
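As a purely hypothetical illustration (the snippet below is mine, not from any real codebase), here is Python code that a recognition-driven glance would wave through: it is short, consistently formatted, type-annotated and documented, so it pattern-matches as “good”. It is also wrong.

```python
def apply_discount(price: float, is_member: bool) -> float:
    """Members receive a 10% discount; non-members pay full price."""
    if is_member:
        # Plausible at a glance, but this returns the discount itself
        # rather than the discounted price; it should be price * 0.9.
        return price * 0.1
    return price
```

Everything that visual recognition rewards is present; only a reviewer who actually executes the logic in their head will catch the error.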

Lastly, we naturally resist a state of prolonged cognitive expenditure. Code reviews are physically and psychologically draining. They require intense analysis, often for an extended period of time. No matter how many you do, your brain cannot adapt by delegating them to reflex or instinct, as it eventually does with many other repetitive cognitive tasks.

As a result, the longer we are subjected to code reviews, the more likely we are to fall back on our defaults and our biases to ease the cognitive load.

Armed with this greater understanding, I believe in the following heuristics for code reviews, which take our own human tendencies and constraints into consideration.

Review very little code at a time.

We have a limited supply of mental energy. We can expend it evenly over a large amount of code, or focus it on a small amount. In my experience, focusing it on a small amount (a single class, or even a single method) yields greater returns, especially when the code is critical or unfamiliar.

Review it with as little context as is reasonable.

Too much context can distract us, and we fall into the trap of evaluating form as opposed to function. Let automated tools evaluate form, and focus your human brain on determining whether or not a small piece of code is logically correct according to the specification. Tests can only do this for you if the tests themselves are correct.
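To make that last caveat concrete, here is a minimal, hypothetical sketch (all names and numbers invented) of a test that passes and looks like coverage, yet merely restates the implementation’s bug, because its expected value was copied from the code’s output rather than derived from the specification:

```python
import unittest

def shipping_cost(weight_kg: float) -> float:
    """Spec: a flat 5.00 covers the first 2 kg, plus 2.50 per additional kg."""
    # Bug: charges the per-kg rate on every kilogram, including the first two.
    return 5.00 + 2.50 * weight_kg

class TestShippingCost(unittest.TestCase):
    def test_three_kg_parcel(self):
        # Passes, but encodes the bug: per the spec, 3 kg should cost
        # 5.00 + 2.50 * 1 = 7.50, not 12.50.
        self.assertEqual(shipping_cost(3), 12.50)

if __name__ == "__main__":
    unittest.main()
```

A green suite only tells you that the code and the tests agree with each other, not that either agrees with the specification; that final check is the human reviewer’s job.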

Budget a small amount of time for the review. Spend no more and no less.

I find five minutes to be approximately the maximum amount of time that I can be effectively critical of someone else’s code. Beyond five minutes, my subconscious begins to take shortcuts and the quality of my review nosedives. The act of time-boxing also allows you to free your mind of external commitments and to reallocate the part of your brain that would otherwise be looking for excuses.

For five minutes, you are reviewing code.

Astute readers will recognize that these are not a process; they are merely guiding principles. I believe how you actually review the code is less important. I personally favour reviewing pull requests independently; you may prefer sitting with the author at their desk. The same principles apply.

At the end of the day, you will need to find out what works for you and your team. Agreeing upon any heuristics will make it easier to have consistent and routine reviews — I recommend starting with the above.

As usual, less is more.


Adam Thody

I help teams find sane approaches to building valuable products and services for a networked world. Director of Engineering at Heroku/Salesforce.