I’ve been a fan of software peer reviews and inspections for more than 30 years. I’ve seen the benefits, and I’ve learned something from every review I’ve been in. Peer reviews are a vital component of a software development culture that is focused on quality.
Simply asking a colleague to look over something you’ve created is a great start. Establishing a peer review program and weaving reviews into the cultural fabric of an organization takes time, though. A new review process is fragile, being easily disrupted by unpleasant experiences (“my reviewers treated me like an idiot”) or ineffective results (“we wasted all that time and didn’t find a single major bug”).
Peer reviews are tricky, as they involve technical, social, and cultural dimensions. This article describes eight factors that can make a review program work and points out several traps to avoid.
Critical Success Factors
The people involved and their attitude toward quality are the greatest contributors to a review program’s success. The first critical factor is for your team members to prefer to have peers, rather than customers, find defects. Your “customers” include anyone whose work is based on your deliverable, such as a tester who will develop system tests from a requirements specification.
Practitioners must appreciate the many benefits that peer reviews can provide, including early defect removal, reduced late-stage rework, document quality assessment, cross-training, and process improvements for defect prevention. Once your team members understand cost-of-quality and return-on-investment concepts, they can overcome barriers such as the perception that adding reviews to the project schedule delays delivery.
Even motivated team members will struggle to perform reviews if you don’t obtain management commitment. Commitment isn’t simply a matter of giving permission or saying “Everybody do reviews.” You don’t need permission! Management commitment includes establishing policies and goals; providing resources, time, training, and recognition; and abiding by the review team’s decisions. You absolutely do need such commitment if you want to establish — and sustain — an effective review program.
A third critical element is to train reviewers and review leaders, as well as the managers of projects that are conducting reviews. Nearly half of the respondents to one survey indicated that untrained practitioners impeded their initial use of inspections. Training can teach people why and how to perform inspections, but only experience enables them to provide and receive insightful comments.
Be sure to allocate time in the project plan for reviews and rework. Despite their good intentions, swamped practitioners will skimp on reviews when time pressures mount. This leads to even greater time pressure in upcoming months as the latent defects become evident. Devoting several percent of a project’s effort to peer reviews is a powerful sign of management’s commitment to quality.
Groups in which I have worked found it valuable to set goals for the review program. One team committed to reviewing 100 percent of its requirements specifications, 60 percent of design documents, 75 percent of the code, and so on. Setting numerical goals forced us to track the quantity of each kind of artifact we created so we could measure progress toward our goals. We achieved our goals, and in the process ingrained peer review as a routine practice. Another goal might be to reduce your rework levels from a known starting point to a specified lower percentage of your total development effort. Make sure your review goals are attainable, measurable, and aligned with your organization’s objectives.
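Tracking progress toward numerical review goals is simple bookkeeping. As a minimal sketch (the artifact types, counts, and goal percentages here are illustrative, not from any particular project), a few lines of code can compare the fraction of artifacts reviewed against the team's targets:

```python
# Hypothetical review-coverage tracker: compares the fraction of
# artifacts reviewed against the team's numerical goals.
# The artifact types and percentages are illustrative examples.

goals = {"requirements": 100, "design": 60, "code": 75}  # percent to review

created = {"requirements": 4, "design": 10, "code": 120}   # artifacts produced
reviewed = {"requirements": 4, "design": 5, "code": 80}    # artifacts reviewed

def coverage_report(goals, created, reviewed):
    """Return {artifact: (actual_pct, goal_pct, goal_met)} per artifact type."""
    report = {}
    for kind, goal_pct in goals.items():
        actual_pct = 100.0 * reviewed[kind] / created[kind]
        report[kind] = (round(actual_pct, 1), goal_pct, actual_pct >= goal_pct)
    return report

for kind, (actual, goal, met) in coverage_report(goals, created, reviewed).items():
    status = "met" if met else "SHORT"
    print(f"{kind}: {actual}% reviewed (goal {goal}%) -> {status}")
```

Even a crude report like this makes shortfalls visible early, when a team can still schedule the missing reviews rather than discover the gap at release time.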
Next, identify a review champion, perhaps yourself, who has personally experienced the benefits. The champion serves as a local advocate, speaking from experience rather than from theoretical knowledge. A review champion who is respected by other team members — whether a technical peer, a manager, or a quality engineer — can break down the initial resistance to peer reviews. A highly capable developer who invites reviews sends the signal that everyone can benefit from input from their colleagues.

Plan to review early and often, formally and informally. Use the cheapest review method that will satisfy your objectives for each situation and each work product. A quick ad hoc review sometimes suffices. At other times, you will need the brainstorming of a walkthrough, the structure of a team review, or the rigor of an inspection. Informal, incremental reviews of a work product under development can filter out many defects quickly and cheaply.
One risk is that the reviewers might tire of examining the same document repeatedly, perhaps starting to feel that they’re doing the author’s work. Another risk of reviewing early is that you’ll have to repeat the review after the work product is revised to reflect changes in related documents. Nonetheless, using multiple types of reviews in sequence provides a powerful way to improve document quality.
Your first reviews won’t go as well as you’d like, thanks to the unavoidable learning curve. To improve their effectiveness, analyze your early reviews. Set aside a few minutes at the end of each peer review to collect some lessons learned, to help make all future reviews better. Continuously improve your peer review procedures, checklists, and forms based on experience.
Review Traps to Avoid
Several traps can undermine the success of a peer review program. These problems occur most commonly with inspections; informal reviews, which lack a defined process, aren’t susceptible to a process being misunderstood or ignored. Watch out for the following pitfalls.
Trap #1: Participants don’t understand the review process. One symptom of this trap is that team members do not use an accurate, consistent vocabulary to describe peer reviews of various types. Another clue is that review teams do not follow a consistent process. Inappropriate behavior, such as criticizing the author instead of pointing out issues with the item being reviewed, is a clear sign of misunderstanding. Training and a practical, documented peer review process are essential. All potential reviewers must understand the what, why, when, how, and who of reviews.
Trap #2: The review process isn’t followed. Before taking corrective action, learn why the process isn’t being followed. If the process is too complex, practitioners might abandon it or perform reviews in some other way instead. If your managers have not conveyed their expectations through a policy, practitioners will perform reviews only when it’s convenient or personally important to them. After you have diagnosed the underlying causes, select appropriate actions to get the review program into gear.
If quality is not a success driver for a project, the quality benefits of peer reviews won’t provide a compelling argument for performing them. However, the productivity enhancements that reviews can provide might support a project goal of meeting an aggressive delivery schedule. Introducing reviews on a project that is already in deep trouble with schedule overruns, churning requirements, and tired developers will be hard, but it will be worth the effort if the reviews help get the project back on track.
Trap #3: The right people do not participate. Inappropriate participants include managers who attend without the author’s invitation and observers who come without a clear objective. While you can include a few participants who are there primarily to learn, focus on inviting reviewers who will find problems.
Some reviews will be crippled if key perspectives are not represented. As an example, a requirements specification review needs the customer’s viewpoint to judge correctness and completeness and to quickly resolve ambiguities and conflicts. The customer could be represented by actual end users or by surrogates such as marketing staff. To underscore the need for the right participants, below is an e-mail I received from one of my consulting clients, describing her experiences with requirements specification reviews:
The reviews were extremely helpful, especially given that the users were in-house and were very motivated to influence project decisions. User contributions to the requirements reviews were highly valued by all participants. We canceled more than one review for lack of user participation, and I remember one review where we called in a user unexpectedly because the others had failed to show up. The one who came had no preparation time, and we delayed the start of the meeting waiting for her to show up, but she provided very valuable insights and suggestions nevertheless. User participation in the reviews was an unqualified success and led to software that was more valued by its users and to a better working relationship between the project and the users.
Trap #4: Review meetings drift into problem-solving. Unless a review has been specifically invoked as a brainstorming session, the group should focus on finding — not fixing — errors. When a review meeting switches to finding solutions, the process of examining the product comes to a halt. Participants tune out if they aren’t fascinated by the problem being discussed. When the reviewers realize the meeting time is almost up, they hastily flip through the remaining pages and declare the review a success. In reality, the material they glossed over likely contains major problems that will haunt the development team in the future. Moderator failure is the prime contributor to this problem.
Trap #5: Reviewers focus on style, not substance. An issue log that lists only style problems suggests that the reviewers were distracted by style, were not adequately prepared, or did only a superficial examination. To avoid this trap, define coding standards and adopt standard templates for other project documents. Coding standards address layout, naming conventions, commenting, language constructs to avoid, and other factors that enhance readability and maintainability. As part of judging whether a work product satisfies its inspection entry criteria, have a standards checker see if it conforms to the pertinent standard. This helps reviewers focus on the important logical, functional, and semantic issues.
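To illustrate the kind of automated check the paragraph above describes, here is a hypothetical sketch of a pre-review standards checker (the specific rules — line length, snake_case function names, unresolved TODO markers — are illustrative examples, not a real coding standard):

```python
import re

# Hypothetical pre-review standards checker: flags a few mechanical
# style violations before human reviewers see the file, so the review
# meeting can concentrate on logic, function, and semantics.
# The rules below are illustrative, not a prescribed standard.

MAX_LINE = 79
SNAKE_CASE_DEF = re.compile(r"^def [a-z_][a-z0-9_]*\(")

def check_source(lines):
    """Return a list of (line_number, message) style findings."""
    findings = []
    for num, line in enumerate(lines, start=1):
        if len(line.rstrip("\n")) > MAX_LINE:
            findings.append((num, "line exceeds %d characters" % MAX_LINE))
        m = re.match(r"^def (\w+)\(", line)
        if m and not SNAKE_CASE_DEF.match(line):
            findings.append((num, "function '%s' is not snake_case" % m.group(1)))
        if "TODO" in line:
            findings.append((num, "unresolved TODO marker"))
    return findings

sample = [
    "def ComputeTotal(items):\n",
    "    return sum(items)  # TODO: handle empty list\n",
]
for num, msg in check_source(sample):
    print("line %d: %s" % (num, msg))
```

Running a screen like this as part of the inspection entry criteria keeps style nits out of the issue log, leaving reviewers free to hunt for the defects that matter.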
What Slows You Down?
I’ve heard people protest that they can’t perform peer reviews because reviews slow the project down. Reviews don’t slow you down: defects do! Peer reviews are wasted effort only if they don’t find any bugs, there aren’t any bugs to find, or people don’t fix the bugs the reviewers discovered.
If you keep these success factors for peer reviews in mind and avoid stepping into the traps I’ve warned you about here, you’ll have a good chance of implementing a successful review program in your organization. I’m confident you’ll be glad you did.
This article is adapted from Peer Reviews in Software: A Practical Guide by Karl Wiegers. If you’re interested in software requirements, business analysis, project management, software quality, or consulting, Process Impact provides numerous useful publications, downloads, and other resources.