

What Software Development Leaders Should Know When Setting Unit Test Goals

Written by: Alvin Fong, Senior Engineering Manager, TribalScale


As a CTO, Director, or Engineering Manager, you have just joined a new team and discovered that not a single unit test has been written since the project started. Horror! So you declare to your team: every line of code must be unit tested! Our goal is 100% coverage of our software!

Your CTO smiles with pride, the programmers seem enthusiastic to learn new skills, and the product manager assures you the team’s velocity will drop no more than 10%.

…Problem Solved?

Over my last decade as a software developer, I have joined many teams where unit tests had not been written since project inception. The scenario above has played out many times, and the initiative is invariably dropped when one of the following happens:

  • A frantic release schedule leads to many hotfixes and last-minute features. Unit tests break, potentially taking hours to fix. In the rush to unblock the release process, a developer may comment out the failing test, often with a line like:
//TODO: fix this test after release X.X.X
  • Each person on the team has a different idea of what a unit test should look like. When one developer leaves the team, nobody knows how to maintain their tests, so the team decides to delete them.
  • Code reviews and releases are processed manually, with no dependency on the unit tests passing. Since the tests don’t contribute to release success, they are neither written nor updated.

What caused this initiative to fail? Let’s look at four factors. First, what are the leadership goals when setting a code coverage metric, and how does it benefit the product and the team? Second, and most importantly, we’ll consider what prerequisites the development team needs before they can effectively produce both features and unit tests. Third, we’ll look into the project management side of things — and how to set realistic expectations regarding feature and release timelines. Finally, we’ll cover caveats to keep in mind when creating your unit test plan.

Leadership Goals

Photo by Jason Goodman on Unsplash

Code coverage is a tempting target for engineering leaders tasked with improving code quality. Because code quality is by nature qualitative, coverage goals are attractive: they appear both Specific and Measurable. Given your team’s and team lead’s experience level, a target ought to be Achievable; it is Relevant (a failing test definitely indicates that either the code or the test is incorrect); and Time-bound (just in time for next quarter’s OKRs).


Your application is likely structured in several layers, perhaps one controlling how data is retrieved, one presenting the data, and one manipulating this data:

  1. For the majority of companies, data retrieval should be done with industry-standard libraries (e.g. Square’s OkHttp) that other organizations test and guarantee. Instead of duplicating those organizations’ test cases, cloud-based site-availability monitoring is a much better way to guarantee your data is available to your application. Unit testing your application’s data retrieval layer usually means mocking the data source, and such tests only establish that, assuming the data source works, the app will work. This is a far weaker guarantee than one might expect.
  2. Data presentation testing can be a difficult balancing act. A test can be written to ensure elements are placed pixel-perfectly on screen, but this leads to brittle tests that fail easily and require rewriting, especially if your product is rapidly evolving. Unit tests are not meant to replace manual/exploratory testing; ensure that at some point in your development cycle your QA team can catch visual errors in your application. The QA team will need to work with product owners and UX designers.
  3. Data manipulation (also known as business logic) is the layer that benefits most from quick, isolated unit testing. If your application relies on some complex algorithm to change your data, your unit tests might define cases in terms of inputs and expected outputs, then run a minimal amount of code to ensure the transformation is correct. Later, if your team comes up with a more efficient way to perform the same manipulation, your unit tests serve to guarantee that the same cases are correctly transformed by the new algorithm.
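As an illustration, here is a minimal sketch of such an input/expected-output test. The `DiscountCalculator` class and its 10%-off-over-100 rule are invented for the sketch, and a real project would express the cases as JUnit tests rather than a `main` method:

```java
// Sketch (names and rule invented): business logic pinned down by
// input -> expected-output cases, independent of the implementation.
class DiscountCalculator {
    // Orders of 100.0 or more get 10% off; smaller orders pay full price.
    static double finalPrice(double subtotal) {
        return subtotal >= 100.0 ? subtotal * 0.9 : subtotal;
    }
}

class DiscountCalculatorTest {
    public static void main(String[] args) {
        // Each case states only input and expected output, so a faster
        // reimplementation of finalPrice must still satisfy them.
        check(DiscountCalculator.finalPrice(50.0), 50.0);
        check(DiscountCalculator.finalPrice(100.0), 90.0);
        check(DiscountCalculator.finalPrice(200.0), 180.0);
        System.out.println("all cases passed");
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

If the calculation is later rewritten, these cases do not change; only a genuine behavioural regression makes them fail.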

At this point you should see that not all test coverage is created equal. An across-the-board 100% coverage goal no longer makes sense. Your team can reap more benefits from testing certain code areas over others. Work with your development team to define what code modules and layers will benefit the most from unit testing.


A unit test only provides benefits if it is executed at the correct times. The majority of CI/CD software and code repository platforms support webhooks that let your team set up the process once and reap the benefits over the project’s lifetime. It is critical that your release process include these automatic failsafes:

  • When code is committed or pushed to a repository (notify the developer if tests fail)
  • When a pull request is created (and disallow merging if any tests fail)
  • During the software release process (and stop the release if any tests fail)
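As a sketch, a minimal GitHub Actions workflow covering the first two failsafes might look like the following (the workflow name and the Gradle command are assumptions — substitute your own build tool’s test task):

```yaml
# Sketch: run unit tests on every push and pull request.
# The Gradle command is an assumption; adapt it to your build system.
name: unit-tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test
```

Pair this with a branch-protection rule marking the `test` job as a required status check, so a failing run blocks merging; a release pipeline can reuse the same job as its gate.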

Many software teams have policies that ‘require’ developers to manually run their tests before pushing code — inevitably, someone will forget to do so and introduce errors into the codebase. Developers are only incentivized to maintain unit tests if passing them is critical to their code being merged. Finally, create a process that ensures new unit tests are written; guide the development team to an understanding that unit tests are required for approving code written in the tested application layers.


How achievable is the goal of adding unit test coverage? Writing features and writing tests are very different skills. Does your team know the relevant testing frameworks, and can they structure software in ways that enable easy testing? Can your project absorb the additional time and complexity needed for (1) skill training and (2) adding tests? We will explore these considerations in depth in the next two sections.

The Coding Team


At the core of your project is your development team. As an empathetic leader, your first step is to understand why unit tests were not a priority. Here are several questions to work through with your team:

Q: What value do they gain from unit testing?

Ask your developers and team lead what value they see in adding unit tests. A natural answer is “so we know our code is correct” — however, since one cannot prove a negative, we cannot prove that there are no bugs in the code. Why write a unit test if it cannot guarantee correctness? The honest answer is that unit tests don’t guarantee correctness. All a unit test says is: given a set of preconditions, and a triggering event, we expect some set of postconditions.

The power of a good unit test is that it does not depend on the behind-the-scenes implementation. For example, say we are building a maps application and want to list a series of locations by distance from our location. There are many possible ways to sort the locations — but there is one correct output. Once your unit test is in place and running regularly, your team can change the sorting procedure or fix bugs without worrying about introducing errors into the sorting output.
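To make the sorting example concrete, here is a minimal sketch; the `LocationSorter` names are invented for illustration, and a real project would use JUnit rather than a `main` method:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch (names invented): the test pins down the one correct output
// without caring how sortByDistance orders the list internally.
class LocationSorter {
    record Location(String name, double distanceKm) {}

    // Implementation detail: this could be swapped for any other
    // sorting strategy without breaking the test below.
    static List<Location> sortByDistance(List<Location> input) {
        List<Location> copy = new ArrayList<>(input);
        copy.sort(Comparator.comparingDouble(Location::distanceKm));
        return copy;
    }
}

class LocationSorterTest {
    public static void main(String[] args) {
        List<LocationSorter.Location> sorted = LocationSorter.sortByDistance(List.of(
                new LocationSorter.Location("cafe", 2.5),
                new LocationSorter.Location("park", 0.4),
                new LocationSorter.Location("museum", 1.1)));
        // One correct order, whatever algorithm produced it.
        if (!sorted.get(0).name().equals("park") || !sorted.get(2).name().equals("cafe")) {
            throw new AssertionError(sorted.toString());
        }
        System.out.println("ok");
    }
}
```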

Q: Does the team have a common understanding of testing?

As with most things technical, there is a large amount of jargon surrounding unit testing: mocks, stubs, fakes, spies, dependency, assert, verify, etc. Have a plan to ensure your team members have adequate knowledge of these terms and when to use them.

For maintainability, it is crucial to have a common test structure. The Given-When-Then structure that grew out of Behaviour Driven Development is a great tool, and there are other, similar ways to organize unit tests. As a minor point, consider what naming conventions the team would prefer — some developers may have formed habits on previous projects. The specific conventions are not terribly important, but it benefits everyone to have a consistent style: the team will have an easier time reading, reviewing, and understanding the tests.
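For example, one common convention (a sketch only — your team should agree on its own) names each test after its scenario and lays the body out in Given-When-Then sections; the `ShoppingCart` class here is invented for illustration, and a real project would use JUnit:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (names invented): Given-When-Then layout with a
// "unitOfWork_condition_expectedResult" naming convention.
class ShoppingCart {
    private final List<Double> prices = new ArrayList<>();
    void add(double price) { prices.add(price); }
    double total() { return prices.stream().mapToDouble(Double::doubleValue).sum(); }
}

class ShoppingCartTest {
    static void total_afterAddingTwoItems_returnsTheirSum() {
        // Given: a cart holding two items
        ShoppingCart cart = new ShoppingCart();
        cart.add(10.0);
        cart.add(5.5);
        // When: the total is computed
        double total = cart.total();
        // Then: it equals the sum of the item prices
        if (Math.abs(total - 15.5) > 1e-9) throw new AssertionError("total=" + total);
    }

    public static void main(String[] args) {
        total_afterAddingTwoItems_returnsTheirSum();
        System.out.println("ok");
    }
}
```

The test name reads as a sentence, so a failure report tells reviewers what behaviour broke before they open the file.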

Q: Does the team have the correct skills to create useful test cases?

This should not be an assumption. Create a safe space for your development team to self-assess their understanding of testing. It may be useful to keep managerial staff out of these meetings, so the team does not feel pressured to claim more expertise than they truly have. After that, evaluate where the team stands and set up a plan for improving testing skills: seminars, lunch & learns, or working sessions. Identify the team members who can share their testing expertise, and encourage them to implement examples in your code base.

Q: Is the codebase written in a way to facilitate testing?

If there are no unit tests in your codebase, the answer is almost certainly no. While there are definitely other advantages to committing to best practices in coding and architecture styles, they truly shine when they facilitate testing. A developer can misuse the MVVM pattern and a feature can still pass QA and satisfy users, but a misused ViewModel will be difficult to test if its dependencies are not properly injected.

Does your app maintain large amounts of global state? Unit tests evaluate code in isolation, and global state prevents such isolation. That is to say, a unit test can show the validity of some code when the global state has one particular value, but any other code unit could change the global state without your knowledge and invalidate your test’s preconditions.
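Here is a sketch of the problem and one common fix (all class names invented): a unit that reads a mutable global cannot be isolated, while the same logic with its dependency passed in lets each test fully control its own precondition:

```java
// Sketch (names invented): global state vs. an injected dependency.

// Hard to test: reads mutable global state that any code can change
// behind the test's back.
class GlobalSession {
    static String loggedInUser = null; // global, mutable
}

class GreeterUsingGlobal {
    String greeting() {
        return GlobalSession.loggedInUser == null
                ? "Hello, guest"
                : "Hello, " + GlobalSession.loggedInUser;
    }
}

// Easy to test: the precondition is an explicit constructor argument,
// so every test sets up exactly the state it needs, in isolation.
class Greeter {
    private final String user; // null means "not logged in"
    Greeter(String user) { this.user = user; }
    String greeting() {
        return user == null ? "Hello, guest" : "Hello, " + user;
    }
}

class GreeterTest {
    public static void main(String[] args) {
        if (!new Greeter(null).greeting().equals("Hello, guest")) throw new AssertionError();
        if (!new Greeter("Ada").greeting().equals("Hello, Ada")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

In a real codebase the injected value is more often an interface (a repository, a clock, a session provider) than a string, but the isolation principle is the same.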

Evaluate your codebase on its adherence to separation of concerns. Within each module, are there specific files to handle display logic, business logic, networking, and storage? Ensure your code modules keep OS and framework dependencies behind abstractions that tests can replace.

If any of the above factors are problematic, unit tests cannot be simply added without modification to existing code. Make sure your developers have the bandwidth to modify the code to allow testing, and your QA team has the resources to test for regressions.

Q: Does the dev team see unit testing as a chore/slowdown to their delivery velocity?

It’s a common complaint that writing tests takes time. When the team is rushing to meet a deadline, test rigor is often the first thing dropped. After your team has a couple of weeks’ experience adding unit tests, they will be ready to quantify how much effort the tests actually take.

Q: What is the plan to enforce and measure code testing?

The first step towards testing success is having a plan for enforcement of unit testing habits. Success is more likely when multiple steps are automated:

  • Integration with the code review process — set a GitHub webhook on your repositories to execute all unit tests whenever anyone opens a Pull Request. This relieves your developers of the responsibility of running all tests on their own machines (which they should do anyway, but often forget). Ensure that if unit tests fail, the PR cannot be merged into your main/shared branches.
  • Early on in the project, set up a coverage tool such as Jacoco or SonarQube to measure and report coverage. This can help you identify modules or classes that are inadequately tested. The team can then explore pathways to add test coverage to these modules.

Next, discuss with the team what expectations are achievable. Will every code review be expected to include test updates? Consider that some UI fixes may not require changes to business logic. Which files and code are expected to be unit tested? A good starting point is to verify every publicly-visible function and data output field in ViewModel (MVVM) and Presenter (MVP) classes. Utility functions (such as those converting data strings to objects) should also be tested.
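For instance, a small utility that parses a "lat,lng" string into an object (the `CoordinateParser` names are invented for this sketch) is exactly the kind of publicly-visible function worth covering, including its malformed-input cases:

```java
// Sketch (names invented): a string-to-object utility and its tests.
class CoordinateParser {
    record Coordinate(double lat, double lng) {}

    // Parses "12.5,-7.25" into a Coordinate; returns null on malformed input.
    static Coordinate parse(String raw) {
        if (raw == null) return null;
        String[] parts = raw.split(",");
        if (parts.length != 2) return null;
        try {
            return new Coordinate(Double.parseDouble(parts[0].trim()),
                                  Double.parseDouble(parts[1].trim()));
        } catch (NumberFormatException e) {
            return null;
        }
    }
}

class CoordinateParserTest {
    public static void main(String[] args) {
        // Happy path: whitespace is tolerated around the numbers.
        CoordinateParser.Coordinate c = CoordinateParser.parse("12.5, -7.25");
        if (c == null || c.lat() != 12.5 || c.lng() != -7.25) throw new AssertionError();
        // Malformed inputs come back null rather than throwing.
        if (CoordinateParser.parse("not a coordinate") != null) throw new AssertionError();
        if (CoordinateParser.parse(null) != null) throw new AssertionError();
        System.out.println("ok");
    }
}
```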

Q: Are there any other blockers to testing success?

Check with your team and investigate what other factors and concerns may prevent them from forming testing habits.

Product and Product Management


Earlier in my career, my team’s manager requested unit testing of new features. The team’s project managers and the development team lead gathered to discuss what additional effort (in terms of story points) should be allocated to work tickets. The PMs anticipated an additional 10% time, but the developers warned it could be up to 50% of the non-unit-tested effort.

Why was there such a large discrepancy? While it’s true that developers who have mastered TDD thinking can produce the same production code at little or no additional time cost, the majority of the team had little experience in unit testing — several had never used a testing framework in their careers! Evaluate your team’s testing experience, and be realistic about its skill level.

Below is a broad estimate of the additional effort required to add testing to a feature.

  • Developer with Test Driven Development (TDD) mastery: 0–10%
  • Developer doing implementation-first development but skilled in post-development unit testing: 10–30%. The overhead comes from the refactors required to convert dependencies in feature code into testable code; TDD is faster because, with the tests written first, the code that follows is naturally testable.
  • Unskilled/untrained developer: up to 50%. This can be reduced dramatically by having team members experienced in testing regularly pair-program with the others, possibly in a round-robin schedule.

These numbers are certainly intimidating, but the payoff includes reduced future QA bugs, better adherence to the ticket’s intent, and more maintainable code. It can be useful to reduce velocity expectations for several sprints to allow developers to train in testing while working on features.

A Unit Testing Plan


Create a plan to migrate to test-covered code and consult with your team’s technical leaders and project managers. It should involve the following:

  1. Evaluate your team’s testing skill level and create a training plan. This may involve seminars, pair programming, etc.
  2. Consider your team’s skill level and consult with project managers on timeline expectations. Adjust scope or deadlines as necessary.
  3. Examine your software development life cycle and what changes are needed to support automated unit testing. Examine your software release process and ensure tests are executed at the correct milestones.
  4. Evaluate your codebase’s testability. Work with your team to improve coding habits to support testing.
  5. Achieve a team-wide understanding of what code areas/ layers/ modules should be tested.

For more information on setting unit test goals, click here to speak to one of our experts.

I’m Alvin Fong, a Senior Engineering Manager at TribalScale. I have been involved in projects developing software on Android, GCP, and Flutter platforms. My current interests include learning more effective testing techniques, the latest architecture patterns, and how to build a culture of open and clear communication in our remote work world.

TribalScale is a global innovation firm that helps enterprises adapt and thrive in the digital era. We transform teams and processes, build best-in-class digital products, and create disruptive startups. Learn more about us on our website. Connect with us on Twitter, LinkedIn & Facebook!


