Development Team Report Card, Part 2: Technical Practice

This is the second of a three-part series aimed at managers and directors, meant to help with assessing the effectiveness of your software development teams. In Part 1, we talked about how collective ownership and collaboration reduce risk and make your team more consistently productive. Now, let’s talk about the quality of the work they’re performing: are your programmers crafting their code well? Are they producing a high-quality product?

The bitterness of poor quality remains long after the sweetness of low price is forgotten.
— Benjamin Franklin

The quality of a software product is directly proportional to the attention that the development team (and the organization around it) gives to technical practice. Poor technical practice yields buggy software. Good technical practice, of course, breeds higher quality. Here’s how to judge how well your development team’s technical practice is operating:

  • Grade: F
    You have programmers, and you have testers. Your programmers fix the bugs that the testers find, and the testers find lots of bugs. Your programmers spend a significant portion of their time (more than 25%) fixing bugs. Often, fixing one bug creates several more. Your organization lives and dies by the bug tracking system and the efforts of the testers. If your testers all disappeared tomorrow and your developers were responsible for all the testing, you could not ship the next release of your software. Your developers don’t attend technology user groups or conferences, and few (if any) of them can easily and quickly recommend at least four favorite technical books or authors. You may have to implement policies like code freezes to prevent critical bugs from sneaking out the door. The developers don’t appear to be in control of the codebase and seem somewhat afraid of making changes to it. There’s a general sense that the code itself has a mind of its own, like Skynet, and everyone knows that Skynet doesn’t like humans.
  • Grade: D
    You have a general sense of where the riskiest parts of the application are, and which parts will accept changes more easily. Naturally, when developers make changes to the former, the testers groan, because that part of the software is also the hardest to test. You have to carefully balance “low-hanging fruit” feature delivery with long-term efforts to make the code “better” so that you can (hopefully, eventually) deliver bigger, higher-value features without going bankrupt. However, you don’t have any quantitative measures of code quality, and you are therefore somewhat blind to whether the code is getting better or rotting as developers make changes to it. Developers still spend at least 10% of their time fixing bugs. You may see some technical books lying on some developers’ desks, but they’re usually closed.
  • Grade: C 
    You have made some effort to categorize bugs and track their frequency, and you use that as a general guideline for understanding timeframe and risk when your team develops new features. Your developers and/or testers may have written some automated tests; however, a full test pass through the entire product is still very much manual. If you continue down this path, you may be able to automate a full regression pass in a decade or so, which gives your testers a warm, fuzzy, my-job-is-safe sort of feeling. You have some “hero” developers who handle the hardest bugs in the scariest parts of the application, but the “heroes” often aren’t the ones writing the automated tests. Your developers may be attending some technology user groups, and some of the automation and technical practice is inspired by what they’re learning there.
  • Grade: B
    Much of your application has some level of automated testing, and those tests are executed automatically whenever developers check in their code. Your testers (if you have any) get an automated release of the software frequently as developers make those check-ins, potentially multiple times a day. If you have significant areas of the software that are buggy and in need of repair, your developers have quarantined those areas and mitigated much of the risk. Most (if not all) of your team regularly attends at least one user group a month. Your developers often spend time together over lunch learning new technologies, even if they’re not applicable to their jobs. Even if it’s not written down, there is a well-understood standard among your team about what is acceptably good code and what is not. Your developers review each other’s code, and learn from one another.
  • Grade: A
    The sole deciding factor in whether the software is release-quality is whether its automated test suite passes (a sketch of such a release gate follows this list). In fact, the necessity of a bug tracking database for your organization is questionable, because bugs are so infrequent and short-lived. You are frequently surprised by how little development effort is required to add a new feature. You have developers who regularly teach at local user groups, and you have uttered the phrase “What bugs?” with zero irony.
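To make that grade-A criterion concrete, here is a minimal sketch of a test-suite-as-release-gate (the script, and the choice of pytest as the test runner, are illustrative assumptions rather than a prescription):

    # Hypothetical release gate: the automated test suite is the sole go/no-go signal.
    # pytest exits with a non-zero code if any test fails.
    import subprocess
    import sys

    def release_gate() -> int:
        result = subprocess.run(["pytest", "--quiet"])
        if result.returncode != 0:
            print("Test suite failed: this build is not release-quality.")
            return 1
        print("Test suite passed: release candidate approved.")
        return 0

    if __name__ == "__main__":
        sys.exit(release_gate())

Whether the gate is a small script like this or a step in a build pipeline matters far less than the principle: the computer, not a person, decides whether the release is ready.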

Make the Computer Do It

You may be wondering why, in an article discussing technical practice, I’m focusing so much on test automation, and using that as the primary criterion for judging a team’s technical prowess. Here’s why:
technical practice begins with test automation.

Every time a programmer adds a feature to some software, they add new paths through the software that require testing. For each bit of conditional logic (that is, any code that does one thing or another based on evaluating some true/false condition), the number of paths through that area of the code doubles. To put it another way, the testing burden has an exponential relationship to conditional logic: n independent conditions mean up to 2^n paths to exercise.

Each new piece of conditional logic doubles the testing burden.
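To make the arithmetic concrete, here is a minimal sketch in Python (the shipping_cost function and its three flags are hypothetical, invented purely for illustration). One condition means two cases to check, two conditions mean four, three mean eight, and so on.

    # Hypothetical example: each independent true/false condition doubles the
    # number of distinct paths a tester has to exercise.
    from itertools import product

    def shipping_cost(is_member, is_expedited, is_international):
        cost = 5.00
        if is_member:
            cost -= 2.00       # member discount
        if is_expedited:
            cost += 10.00      # expedited surcharge
        if is_international:
            cost *= 3          # international multiplier
        return cost

    # Three conditions -> 2 ** 3 = 8 combinations; a fourth would make it 16.
    combinations = list(product([True, False], repeat=3))
    print(len(combinations))  # prints 8

A feature that introduces even a handful of new conditions multiplies, rather than adds to, the number of cases a manual tester has to walk through.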

You can tackle this problem in a few different ways. The most common one is to have the developers sling code, have the testers test it later (usually in a “testing phase”), and hope nothing breaks. That approach earns a grade of ‘F’ because there is no way to keep up with a productive team of developers adding features and increasing the testing burden at an exponential rate. It doesn’t matter how many testers you have or how hard they work; you can’t keep up with an exponential curve. In a battle between people and math, the math wins every time. The only solution to the ever-increasing testing burden that actually works is to automate the testing.
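To sketch what that automation can look like, here is one possibility (it assumes the hypothetical shipping_cost example above is saved as shipping.py, and that the team uses a test runner such as pytest; neither detail is a prescription):

    # A minimal automated-test sketch using pytest, one common Python test runner.
    # Assumes the hypothetical shipping_cost sketch above lives in shipping.py.
    from itertools import product

    import pytest

    from shipping import shipping_cost

    @pytest.mark.parametrize(
        "is_member, is_expedited, is_international",
        list(product([True, False], repeat=3)),
    )
    def test_shipping_cost_is_never_negative(is_member, is_expedited, is_international):
        # The computer walks all eight paths on every check-in; no human has to.
        assert shipping_cost(is_member, is_expedited, is_international) >= 0

The specific assertion matters less than the shape of the thing: when a new condition appears, covering it means adjusting one parameter list, not scheduling another round of manual regression testing.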

Beyond the mathematics of the testing burden, the presence of automated tests is what facilitates technical design that can move and flex as business needs shift. The “soft” in “software” is supposed to imply malleability.

“The differences between code bases that have tests and those that don’t are so significant in most cases that they swamp most other criteria for good design.”
— Michael Feathers, Working Effectively with Legacy Code

To put it another way, the absence of tests makes a codebase a liability rather than an asset: it’s a money pit. Changing it is dangerous, and replacing it would take too long and cost too much. Therefore, untested codebases yield environments where risk aversion rules all, leaving dismal ROI for software development costs.

Don’t Trust Your Heroes

Often, poor technical practice is perpetuated by an unhealthy relationship between the business and what I call “hero developers.” These are the people who deliver the critical feature after back-to-back eighty-hour weeks or fix the show-stopping bug at 2am the night before a big demo. They’re the ones who have exclusive rights to certain areas of the software: they’re the only ones who can make changes to “that one critical piece.” It’s easy to see the work ethic of such people and feel as though they deserve more clout. However, many “heroes” do lasting damage to the technical culture around them.

“Heroes” don’t mitigate their bus factor: if they were hit by a bus tomorrow, it would be difficult (or impossible) for the team to carry on without them. Many hero developers maintain a possessive relationship with code they feel ownership of, and work hard to belittle or undermine other programmers’ contributions in “their” code. They often disguise this aggression with a display of superiority, citing long lists of esoteric details that must be accounted for when modifying such important code.

They wear their overtime as a badge of pride. “Heroes” often use overtime as a stand-in for quality work and act dismissive of those who maintain healthy work-life balance. Overtime in limited amounts may help a project skid across the finish line, but its necessity is often a trailing indicator of some upstream problem in process, practice, or culture. Overtime also diminishes the mental capacity of a developer: hours of overtime usually turn into days (or weeks) of bugfixes. Burnout also increases the incidence of absenteeism due to illness, as well as the risk of attrition as developers seek other employment with healthier work-life balance expectations.

Ultimately, heroes damage your culture of craftsmanship, because they emphasize human effort over harnessing the computer to do the tedious work. Heroes are often intelligent: dizzyingly, intimidatingly, how-could-anyone-fit-so-much-in-their-head intelligent. This intelligence is the Achilles heel of a hero. Wise, productive developers are humble enough to write software in a way that doesn’t require herculean mental effort. The best developers make the computer do the hard work so that they can keep their minds clear, focused, and ready to tackle tomorrow’s problems.

Cross-pollination

The improvement of technical practice will inevitably reach a plateau if your developers aren’t engaged in a frequent exchange of ideas. As I mentioned in Part 1, collaboration is essential to the health of a development organization. Healthy collaboration is the breeding ground for improving practice.

Beyond that, your developers should also participate in a technical community broader than your own company. Most cities have technology-specific user groups where your developers can meet other developers and exchange experience, and most cities are a day trip away from at least a dozen technology conferences every year. Encouraging your developers to participate in these events helps safeguard your organization against technical practice dysfunction.

Align for Quality

Quality and technical practice go hand in hand. If you want the highest quality from your development team, encourage, empower, and incentivize them to develop their technical practice. Quality problems (bugs, downtime, etc.) are a trailing indicator of an upstream technical practice problem. To consistently deliver quality, make sure that the structural choices you make as a manager don’t hinder your team’s success: many managers inadvertently structure their organizations in a way that actually incentivizes poor practice.

Clearly communicate to your team that you are trusting them to improve their practice because you expect improved quality and agility. Keep your finger on the pulse of what they’re doing and why, and be ready to just smile and nod if they wander a bit too deep into the technical details. Programmers get excited when they can learn and grow.

Don’t be too quick to abandon a change that appears to be harming quality or productivity. It’s common to see an uptick in quality problems as your developers begin automating tests and improving the code’s design. Often this isn’t an indicator of new problems; it’s the surfacing of problems that were already there. As the saying goes, “you have to spend money to make money.” Be ready to support your team with things like build servers, analysis tools, and paid training. Communicate about the quality metrics that you’re watching; your team may help you identify better metrics that reveal the progress they’re making.

And finally, remember that improving technical practice isn’t free: it’s an investment, and early investment pays off later. You may see a temporary dip in productivity if this kind of improvement is new to your organization. If you want to see bigger returns on development effort, make sure that you’re empowering your team to craft their code well.


Next, the final article in this series, Product Design.