Technical peer reviews are a powerful software quality practice. In a review, a group of people examine a work product for defects and improvement opportunities. I’ve been a big fan of peer reviews since about 1988 and would never work in an organization that didn’t practice them routinely.
Any type of software work product can benefit from review: requirements, designs, code, tests, plans, documentation, process descriptions. Yet teams often have difficulty getting a review process going. In this article I describe the symptoms of seven common review problems and some solutions.
Sin #1: Participants Don’t Understand the Review Process
Symptoms: Software developers don’t instinctively know how to conduct and contribute to software reviews. Review participants may have different understandings of their roles and responsibilities, and of the review steps. Team members may not know which items should be reviewed, when to review them, or what review approach to use.
Team members may not understand the various types of reviews. They may use the terms review, inspection, and walkthrough interchangeably, although they are different methods. This lack of common understanding can lead to inconsistent review practices. Too much material may be scheduled for a single review, because participants aren’t aware of realistic review rates.
People might not be able to judge when a review meeting is warranted and when it’s unnecessary. Discussions can lose focus, drifting from finding defects to solving problems or debating coding style. The results are overlooked defects, frustration, and an unwillingness to participate in future reviews.
Solutions: Team members should be trained to gain a common understanding of effective review methods. Written procedures can help review participants understand their roles and activities, so they can consistently practice effective and efficient reviews.
The formal review technique called inspection is the most effective review method, but having a palette of options will let team members choose the most appropriate review method for each situation. Adopt standard forms for recording issues found during review meetings, and for recording review data. My book Peer Reviews in Software has examples of such forms.
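One practical way to avoid overloading a single review is a bit of arithmetic against a realistic review rate. The rate and meeting length below are illustrative assumptions, not standards; calibrate both from your own review data.

```python
import math

# Rough check of how many meetings a work product needs.
# Both constants are assumptions for illustration only.
REVIEW_RATE_LOC_PER_HOUR = 200   # assumed inspection rate
MEETING_LENGTH_HOURS = 2         # assumed maximum meeting length

def meetings_needed(lines_of_code):
    """Estimate how many review meetings a code work product requires."""
    hours = lines_of_code / REVIEW_RATE_LOC_PER_HOUR
    return math.ceil(hours / MEETING_LENGTH_HOURS)

# 1500 lines at 200 lines/hour is 7.5 hours of meeting time,
# which won't fit in one sitting.
print(meetings_needed(1500))  # -> 4
```

If the answer surprises people, that alone is a sign the team hasn’t internalized realistic review rates.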
Sin #2: Reviewers Critique the Producer, Not the Product
Symptoms: Reviews sometimes lead to personal assaults on the author’s skills and style. A confrontational style of raising issues makes the author feel beaten down, defensive, and resistant to legitimate issues the reviewers find. Authors who feel personally attacked won’t want to submit their future work for review. They might even look forward to reviewing their antagonists’ work for revenge.
Solutions: The defects are the bad guys here, not the work product’s author. A review isn’t a place for a reviewer to show how much smarter he is than the author. To avoid sounding accusatory, phrase issues as your own observations rather than as directives: “I don’t see where these variables were initialized,” not “You forgot to initialize these variables.”
In an inspection, someone other than the author serves as the moderator and leads the inspection meeting. This minimizes the confrontational nature of describing a defect directly to the person who is responsible for the work product. The best results come when both reviewers and authors check their egos — and their weapons — at the door and focus on the work product itself.
Sin #3: Reviews Are Not Planned
Symptoms: Reviews often do not appear in the project plan or schedule. If they are there, they might be shown as milestones, rather than as tasks. Because milestones take zero time by definition, the nonzero time reviews do consume could make the project appear to slip its schedule. Another consequence of failing to plan reviews is that potential participants haven’t allocated time to participate when a peer asks them to.
Solutions: Not including tasks in your plan doesn’t always mean that you won’t perform them. But when you do perform them, the project will take longer than expected. When planning a review, estimate the time needed for individual preparation, the review meeting (if one is held), and likely rework. The optimism of software developers often leads us to forget about the rework that follows most quality activities, but some rework probably will be needed.
The only way to create realistic estimates of the time needed is to keep records from previous experience. Suppose your last 20 code reviews required an average of 6 labor-hours of individual preparation time, 8 labor-hours of meeting time, and 3 labor-hours of rework. Such data helps you plan the effort the project will need for future reviews. Without collecting such data, your estimates will forever remain guesses.
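That record-keeping can be as simple as averaging a few numbers per past review. The sketch below and its sample data are illustrative (the figures echo the 6/8/3 labor-hour example above), not a prescribed tool.

```python
# Sketch: estimate future review effort from historical records.
# Each record is a (preparation, meeting, rework) tuple of labor-hours.

def average_effort(reviews):
    """Return average per-review labor-hours for prep, meeting, rework."""
    n = len(reviews)
    prep = sum(r[0] for r in reviews) / n
    meeting = sum(r[1] for r in reviews) / n
    rework = sum(r[2] for r in reviews) / n
    return prep, meeting, rework

# Two hypothetical past code reviews:
history = [(5.0, 7.0, 2.0), (7.0, 9.0, 4.0)]
prep, meeting, rework = average_effort(history)
print(f"Plan about {prep + meeting + rework:.1f} labor-hours per review")
```

Even a spreadsheet doing the same averaging turns guesses into estimates.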
Sin #4: Review Meetings Drift into Problem-Solving
Symptoms: Software developers love sticky technical challenges, exploring elegant solutions to thorny problems. Unfortunately, this isn’t the behavior we want during a technical review. Reviews should focus on finding defects, but too often an interesting defect triggers a spirited discussion about how to fix it.
When a review meeting segues into problem-solving, the progress of examining the product slows to a halt. Some participants might become bored and tune out. Debates ensue as to whether a proposed bug is real, or whether an objection to the author’s coding style indicates brain damage on the reviewer’s part. When the reviewers realize the meeting time is almost up, they hastily regroup, flip through the remaining pages quickly, and declare success. But the material they skipped could contain some major problems that will come back to haunt the team later.
Solutions: Software peer reviews have one primary purpose: to discover defects. Solving problems is a distraction that siphons valuable time away from the focus on error detection. One reason inspections are more effective than less formal reviews is that they have a moderator who controls the meeting and keeps the discussion focused. It’s fine to have meetings for brainstorming, exploring design alternatives, and solving problems. Just don’t confuse that with a defect-focused review meeting.
My feeling is that if a problem can be solved with no more than one minute of discussion, go for it. You have the right people in the room and they’re focused on the issue. But if it looks like the discussion will take longer, note the item and ask the author to pursue solutions later.
Sin #5: Reviewers Are Not Prepared
Symptoms: You come into work at 8:45 a.m. and find a stack of paper on your chair with a note: “We’re reviewing this code at 9:00 in conference room B.” There’s no way you can properly examine the work product in 15 minutes. Why even go to the meeting? Another symptom of inadequate preparation is that the work product copies that reviewers bring to the meeting aren’t marked up with questions and comments. Unprepared reviewers might not actively contribute to the discussion.
Solutions: Since most of the defects found during inspections are discovered during individual preparation, skipping that step badly hampers a review’s effectiveness. An inspection moderator begins the meeting by collecting everyone’s preparation time. If the moderator judges the preparation time to be inadequate, she should reschedule the meeting. Try to provide the materials to reviewers two or three days prior to the scheduled meeting.
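The moderator’s judgment call can be made explicit with a simple rule of thumb. The threshold below (every reviewer prepared at least half the expected time) is an assumption for this sketch, not a published rule; adjust it to your team’s norms.

```python
# Illustrative moderator's check: was individual preparation adequate?
# The 0.5 threshold is an assumption, not a standard.

def preparation_adequate(prep_hours, expected_hours, threshold=0.5):
    """Return True if every reviewer's prep time meets the threshold."""
    return all(h >= threshold * expected_hours for h in prep_hours)

# Expected ~2 hours of preparation; one reviewer managed only 30 minutes.
if not preparation_adequate([2.0, 1.5, 0.5], expected_hours=2.0):
    print("Reschedule the inspection meeting")
```

Collecting and recording the preparation times also feeds the historical data you need for planning.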
Most people hate to interrupt their own pressing work to review someone else’s product. Recognize that the time you spend reviewing a colleague’s work will be repaid by the help you’ll get from your friends to improve your own work.
Sin #6: The Wrong People Participate
Symptoms: If review participants lack appropriate knowledge to find defects, their review contributions are minimal. Participants who are there only to learn may benefit, but they aren’t likely to improve the product’s quality. Management participation in reviews can also lead to problems. If reviewers fear the manager is counting the bugs found to hold against the author, they might hesitate to raise issues that could make their colleague look bad.
Large review teams also can be counterproductive. I once was in a review with 14 participants. Fourteen people can’t agree to leave a burning room, let alone agree on just how a requirement should be phrased. Large teams can generate unconstructive side conversations that slow progress.
Solutions: Review teams of three to seven people are most effective. Reviewers should include the work product’s author, the author of any predecessor or specification document, and anyone who will be a downstream victim of the product. For example, a design review could include the designer, whoever wrote the requirements, the developer, and a tester.
Team size is less of a problem if you aren’t holding a review meeting. Such asynchronous reviews can collect input from many participants, but someone still must reconcile conflicting comments.
Management must be committed to peer reviews, but whether managers participate depends on the mutual respect and attitudes of the team members. A technically strong manager could contribute usefully to reviews. My policy is that only a first-level manager is permitted in a review, and only if the author agrees.
Sin #7: Reviewers Focus on Style, Not Substance
Symptoms: A defect list with mostly style issues makes me nervous that reviewers missed substantive errors. When code review meetings turn into debates on style and layout, they aren’t focusing on finding logic errors and missing functionality.
Solutions: Style can be a defect. Excessive complexity, obscure variable names, and coding tricks can make code hard to understand and maintain. Control the style distraction by adopting standard templates for project documents and coding standards. These help make the evaluation of style conformance more objective. It’s fine to share your ideas and experiences on programming style, but don’t get hung up on minor issues and respect individual differences when it’s a matter of preference rather than clarity.
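As an invented illustration of style rising to the level of a defect, compare these two versions of the same function; a reviewer flagging the first is raising a clarity problem, not a preference.

```python
# Invented example: terse names and a magic number hide the intent.
def f(x):
    return x * 86400

# Same behavior, but the intent is self-evident and maintainable.
SECONDS_PER_DAY = 24 * 60 * 60

def days_to_seconds(days):
    """Convert a count of days to seconds."""
    return days * SECONDS_PER_DAY

print(days_to_seconds(2))  # -> 172800
```

Whether to use `days` or `num_days` is a preference; whether a reader can tell what the function does is not.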
Many organizations have enjoyed excellent results from their review programs, achieving returns on investment of up to 10 to 1 from well-executed inspections. Others, though, perceive reviews as frustrating time wasters, usually because of these seven deadly sins. Stay alert to these common problems to make your technical reviews a powerful contributor to your software quality objectives.
This article is adapted from Peer Reviews in Software: A Practical Guide by Karl Wiegers. If you’re interested in software requirements, business analysis, project management, software quality, or consulting, Process Impact provides numerous useful publications, downloads, and other resources. Karl’s latest book is The Thoughtless Design of Everyday Things.