An earlier article summarized an interview that Bas Peters (Solutions Engineer at GitHub) conducted with Karl Wiegers (Principal Consultant at Process Impact) on the importance of building a healthy software engineering culture. Our discussion also covered the importance of growing a culture that is focused on quality, and in particular the valuable contribution that technical peer reviews can make to such a culture. This article summarizes the portion of that interview that focused on software quality.
Karl has been interested in software quality and peer reviews for more than 30 years. His book Peer Reviews in Software is a concise yet thorough presentation on multiple ways to conduct both software inspections and other types of peer reviews.
In software development people often talk about quality. How would you define quality?
I’ve heard many definitions, such as fitness for use, providing value to a customer, conformance to specifications, and the absence of defects. Those are all aspects of quality, but you can’t choose any one of them as a complete definition.
Quality has multiple dimensions. We’ve all used products that did what they were supposed to do functionally but had problems in some nonfunctional areas, such as performance, reliability, or usability. These factors are often called quality attributes.
Often you must make trade-off decisions among various quality attributes, such as security versus performance or usability. You can’t optimize all of these attributes simultaneously. So each project team must consider what “quality” means for their product so they can work toward common objectives.
How does a lack of quality practices reveal itself in an organization?
One obvious indicator is a lack of customer satisfaction. But you don’t want to wait until after delivery to discover quality problems. That’s one advantage of agile approaches. Some working software is delivered periodically so you can begin collecting that feedback and make appropriate course corrections.
The amount of rework the team must do can also indicate a lack of quality practices. Few organizations measure how much of their total effort is spent on rework, both during development and after delivery. If you do measure it, you may get a pretty scary number: perhaps 30 to 50 percent of your total effort is devoted to rework. This cuts right into your productivity and your delivery schedules. People rarely plan for rework, but it always happens.
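As a rough illustration of the measurement Karl describes, rework share can be computed from a simple effort log. The log structure, field names, and numbers below are hypothetical, just to show the arithmetic:

```python
# Hypothetical effort log: hours spent per task, flagged as rework or not.
# Field names and data are illustrative, not taken from any specific tool.
effort_log = [
    {"task": "implement login", "hours": 40, "rework": False},
    {"task": "fix login defects", "hours": 18, "rework": True},
    {"task": "write report module", "hours": 60, "rework": False},
    {"task": "redo report after requirement change", "hours": 25, "rework": True},
]

total_hours = sum(entry["hours"] for entry in effort_log)
rework_hours = sum(entry["hours"] for entry in effort_log if entry["rework"])
rework_pct = 100 * rework_hours / total_hours

print(f"Rework: {rework_hours} of {total_hours} hours ({rework_pct:.0f}%)")
# → Rework: 43 of 143 hours (30%)
```

Even a coarse log like this, kept for a few iterations, is enough to reveal whether a team sits in the 30 to 50 percent range Karl mentions.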
Excessive rework generally results because of problems introduced earlier in the development process that weren’t caught. If you find yourself spending a lot of time fixing problems, do some root cause analysis. Learn why those errors are being introduced and how you could either prevent them in the first place or detect them earlier. Otherwise, people complain about fixing bugs but they don’t change their practices to try to reduce them.
In our previous discussion you mentioned culture when you were talking about good quality practices. Are culture and quality related?
Definitely! In a healthy software engineering culture, quality is a priority for all team members and managers. One cultural principle of a group I led was that we prefer to have a peer, rather than a customer, find a defect. If you believe this, then you adopt processes that lead to high quality products — like technical peer reviews — as standard practice. An organization that doesn’t share this value is more likely to ship products that have defects and just wait for customers to complain.
When it comes to quality, it’s common for people to say, “You can pay me now or you can pay me later.” But I say, “You can pay me now, or you can pay me a lot more later.” The later a defect is discovered, the more it will cost to fix. If it gets into the customer’s hands, you must deal with product updates, bad reviews, possibly recalls or lawsuits, and the potential loss of goodwill from unhappy customers.
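The “pay a lot more later” effect can be sketched with phase-based cost multipliers. The multipliers and base cost below are entirely hypothetical, chosen only to show how the same defect gets more expensive the longer it survives:

```python
# Illustrative only: relative cost to fix the same defect depending on the
# phase in which it is found. These multipliers are hypothetical examples,
# not measurements from any particular study or project.
relative_cost = {
    "requirements review": 1,
    "design review": 3,
    "code review": 10,
    "system test": 40,
    "after release": 100,
}

base_cost_hours = 0.5  # assumed cost to fix at requirements time
for phase, multiplier in relative_cost.items():
    print(f"{phase:22s}: ~{base_cost_hours * multiplier:5.1f} hours")
```

Whatever the exact multipliers are in your organization, the shape of the curve is the argument for reviewing early.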
Where do defects generally originate?
Any development activity can introduce errors: requirements, design, or coding. Some studies indicate that up to 50 percent of software defects originate in the requirements. The longer a defect goes undiscovered, the more work people do based on that error, all of which must be redone when the error is finally found. Missing requirements, for example, are a common source of defects. So improving requirements is a high-leverage quality practice.
It’s best if each organization tracks the defects it discovers so it knows where they come from. That way it can focus its improvement efforts where they offer the greatest leverage.
How can we avoid making similar errors in the future?
Through learning from the past. Some defects are just one-off errors, simple human mistakes. But sometimes you’ll see recurring patterns of defects, less-than-optimal coding techniques, or problems with how requirements are written. In such cases, think about the root cause and whether there are ways to create fewer of those errors in the first place. That can pay benefits on all the future work the team members do for the rest of their lives.
The best software engineer I ever knew got nervous if he couldn’t find people to review his code. He knew how valuable that quality step was. One reason he was such a great engineer was because of all he had learned from reviewer input over his career. I’ve learned something from every review I’ve ever participated in, whether I was the author of the work being reviewed or a reviewer.
Should teams always do peer reviews?
Yes! I would never want to work in an organization in which peer reviews were not a standard part of the culture. They are not free, but they more than pay for themselves once the team members become skilled at doing them.
As with all quality practices, though, you need to balance the cost of performing the practice against the risk. If the probability of having a defect in a certain product is low, or if the impact of an undiscovered defect is minimal, maybe performing a peer review is not cost-effective. That’s a thought process you should always go through when deciding how best to spend your time on a project.
What should be reviewed? Just code?
Consider reviewing any work product that has the potential to contain an error that could harm the project or the customer. The cost-benefit leverage of removing defects is greatest when you find them early in the development lifecycle.
For example, if you find a defect during testing that you can trace back to an ambiguous, missing, or unnecessary requirement, all of the design, coding, and testing work that was spent on that requirement might have been wasted. But if you could find that defect much earlier, not only is it probably cheaper to find, but you don’t have to redo so much work.
What is a good time to review?
Invite people to review your work early and often, formally and informally. You can start collecting feedback from peers when you have little or no code but want to share some screenshots or general ideas, when you’re stuck and need help or advice, or when you’re ready for someone to carefully examine your work.
It’s a good idea to get at least an informal review before you think you’ve completed a body of work, for two reasons. First, early reviews can detect systematic errors that you might be making or suggest ways to improve the rest of the work that you do on that deliverable. If someone reviews 1000 lines of your code and suggests some better approaches, you’re probably not going to go back and incorporate all those changes. However, if you got those same suggestions on just the first 200 lines of code, it’s a lot less effort to make those changes and then write the rest of the program better.
The other reason for reviewing before you think you’re done is psychological. When you think something is finished, you really don’t want someone to tell you that it’s not. You can have a lot of psychological resistance to review input at that point, because you’re ready to move on to the next task. It’s easy to push back against any suggestions for changes. This is not a constructive attitude toward peer reviews or a good use of a reviewer’s time.
I’m never thrilled when someone finds an error in my work, because it means I made a mistake. However, I’m much happier when one of my colleagues finds the error instead of having a tester or customer find it later on. The phrase that pops to my mind whenever a reviewer points out a mistake is “Good catch.” I fix the error, learn from it, and try not to make the same error again.
What describes a good review process?
One metric of a good review process is effectiveness. What percentage of the actual defects in the work product do the reviewers discover? The higher, the better. Another metric is efficiency. On average, how much does it cost the team to discover and correct a defect by peer review? I have a chapter on review metrics in my Peer Reviews in Software book. It’s not as hard or time-consuming as you might fear.
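The two metrics Karl names can be computed directly once a team tracks where defects are found. The numbers below are made up for illustration; the formulas are the simple ratios described above:

```python
# Illustrative review metrics (all numbers are hypothetical):
#   effectiveness = fraction of total known defects found by the review
#   efficiency    = review effort per defect found
defects_found_in_review = 18
defects_found_later = 6    # e.g., in testing or by customers
review_hours = 12          # total effort: preparation + meeting + rework

total_defects = defects_found_in_review + defects_found_later
effectiveness = defects_found_in_review / total_defects
efficiency = review_hours / defects_found_in_review

print(f"Effectiveness: {effectiveness:.0%}")                     # → 75%
print(f"Efficiency: {efficiency:.2f} hours per defect found")    # → 0.67
```

Note that effectiveness can only be computed retrospectively, since “total defects” grows as later testing and field use reveal what the review missed.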
Other signs of a good review process are softer. If people in an organization value the process, they will willingly submit their work for review and they’ll be willing to participate in reviews of other people’s work. Even after an enthusiastic discussion, participants can walk out of the review meeting and still be friends and respected colleagues. If someone walks out of a review feeling beat up and swears that they’re never going to go through that again, that’s definitely not a sign of a good review process in a healthy culture.
It’s essential to be respectful of the work product’s author. Be thoughtful about how you present comments. I try to avoid the word “you” during a review, as it can sound accusatory or insulting. I prefer to ask questions or to state what I observe.
If peer reviews are new to a team, what important issues should they be aware of?
Remember that peer reviews are both a technical practice and a social interaction. Asking someone to review your work and tell you what you might have done wrong is not an instinctive behavior: it’s a learned behavior. Each of us must reach a point where not only are we comfortable soliciting input on our work, but we actually become uncomfortable if we haven’t had others examine what we’ve created before we inflict it on an unsuspecting world.
Instilling a peer review program into an organization is a bit tricky. I’ve written an article called “Making Peer Reviews Work for You” that describes several critical success factors and warns of several review traps to avoid.
If team members are reluctant to submit their work for review, learn why. Some people are afraid that defects found in their work will be held against them by management. In some cultures, that does happen. I can tell you horror stories that people have shared with me. Like software metrics, the results of reviews should be used to learn and improve, not to punish or to reward.
People should also understand that there are numerous ways to perform peer reviews. You can hold asynchronous reviews, in which people provide feedback at their convenience, often with the aid of a collaboration tool like GitHub. Or, you can hold synchronous reviews in which people share their input in a meeting. Or both, at the right times with the right people. My book Peer Reviews in Software discusses the pluses and minuses of different review approaches.
National and organizational cultures also have an impact on whether reviews can be effective. In some environments, people just aren’t comfortable providing negative feedback on someone else’s work. One woman who tried to get a peer review program started in her company became frustrated because everybody wanted to be so nice that they would never point out an error they found. That protects the author’s feelings, but it doesn’t make for a successful review.
Can reviews also help to improve the software engineering culture?
They can, but depending on how they’re performed, they can also be harmful. Reviews can enhance a quality-oriented culture through the exchange of knowledge. You can learn better practices by looking over someone else’s shoulder in a review. I remember walking out of reviews of other people’s code with ideas for improving my own programming.
Effective reviews build trust among team members, an important component of a healthy software engineering culture. People know they can rely on their colleagues to help them do a better job by finding problems they didn’t see themselves. In this way reviews help to build a culture with a shared commitment to quality, whatever you decide “quality” means to you.
On the other hand, if people feel the review is a torture session and are anxious about going into them, then that’s a negative contributor to the culture. Someone once told me that in their organization, they referred to heading to a review meeting as “going into the shark tank.” Who wants to do that?
People will find ways to protect themselves from being hurt. They might, for instance, put a lot of extra effort into perfecting their code before they let anybody else look at it, to reduce the chance that someone finds anything wrong. That extra time cancels some of the benefit of the review. If you get a little help from your friends through reviews, that can save you time and provide a net benefit to the whole organization.
People sometimes don’t want to review someone else’s work. They ask, “Aren’t we all responsible for doing our own work correctly? Why should I spend my time finding your bugs? What’s in it for me?”
That question of “What’s in it for me?” comes to mind anytime someone is asked to do something new or extra on a project. It’s an understandable reaction, but it’s the wrong question. The correct question is, “What’s in it for us?”
If you spend an hour performing a certain activity, maybe you don’t personally get an hour’s worth of benefit from it. But perhaps the project, the customer, or the organization as a whole gets more than an hour of benefit from your effort, for example by preventing downstream rework. People need to think beyond their own self-interest and personal benefit.
If we each spend some of our time helping our other team members to improve their work, that means others are going to spend some time helping us, and we all come out ahead. At least that’s my experience.
If you’re interested in software requirements, business analysis, project management, software quality, or consulting, Process Impact provides numerous useful publications, downloads, and other resources.