Test Driven Development Revisited

Ed Wentworth
Vivid Seats Tech Blog
6 min read · May 16, 2021

Around 2005 or 2006 I came to the realization that test driven development actually was worth it. In particular, the benefits of automated unit testing outweighed its costs, at least for the types of software projects I was involved in. However, it is always worth revisiting long-held beliefs, and I hope I haven’t closed my mind to alternate views in the years since.

To me, Test Driven Development (TDD) is a software process in which automated tests are developed at the same time as the software component itself. In other words, work on a component is not complete until both the component and its tests are written and the tests validate that the component meets the requirements. These automated tests are run frequently, if not continuously, as the component is being developed, and earlier tests are also run as ‘regression tests’ to ensure no defect is introduced into previously written code. The tests can be written first (‘test first’), where the test case is coded and the component is then created or modified until the test passes; or a test can be written shortly after the code to verify it works as required. Either way, it is essential that a component is not considered complete until both the code and the tests are done and the tests verify the requirements.

Most frequently the scope of these tests is the ‘unit test’ — a functional test of a unit of software that can be run in isolation. If a unit has any dependencies, those dependencies are ‘mocked’ with known state so as to focus the test on the functionality of the component itself. These tests should be extremely quick to run, quick to write, and they should cover nearly every functional aspect of the system. Coverage can be computed with code coverage tools (like JaCoCo) that show not only which components are tested and which lines of code are hit, but also which conditional paths are traversed during test execution. The corpus of this test code will be large, likely exceeding the system itself in size. The key, however, is that the tests sufficiently cover the functionality of the system, not that they hit some ratio of test code to production code. Better test frameworks can reduce this ratio without compromising functional coverage.
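
To make this concrete, here is a minimal sketch of such a unit test using JUnit 5 and Mockito (the PriceCalculator component and its FeeService dependency are invented for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.math.BigDecimal;
    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        // Hypothetical unit under test and its dependency.
        interface FeeService {
            BigDecimal feeFor(BigDecimal subtotal);
        }

        static class PriceCalculator {
            private final FeeService fees;
            PriceCalculator(FeeService fees) { this.fees = fees; }
            BigDecimal total(BigDecimal subtotal) {
                return subtotal.add(fees.feeFor(subtotal));
            }
        }

        @Test
        void totalIncludesServiceFee() {
            // The dependency is mocked with known state so the test
            // exercises only the calculator's own logic.
            FeeService fees = mock(FeeService.class);
            when(fees.feeFor(new BigDecimal("100.00"))).thenReturn(new BigDecimal("12.50"));

            assertEquals(new BigDecimal("112.50"),
                    new PriceCalculator(fees).total(new BigDecimal("100.00")));
        }
    }

Because the dependency is mocked, a test like this runs in milliseconds and fails only when the unit’s own logic changes.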

Prior to 2005/2006 I was most concerned about the cost of detailed unit test plans, whether automated or not. To be fully effective, unit tests require a test code base typically significantly larger than that of the units themselves. It would theoretically be better to develop multiple duplicate code bases in complete isolation and run them redundantly, with a voting method to confirm agreement or flag an error when they disagreed. In practice, extensive rewriting and review by multiple coders could do the trick. So I concluded that since the cost of writing and maintaining automated tests is very high, it was likely not worth writing these tests just to verify the initial functionality. But I failed to consider the cost of change over time, and the other benefits of this practice. The key assumption:

Most code will have to be changed over time, and the cost of change can overtake initial development costs very quickly

So it is worth investing in many techniques to mitigate this cost, including automated unit testing.

My rationale:

  • Investing in automated tests means you can easily run regression tests, reducing the cost of change
  • Because the cost of change is reduced, it becomes feasible to refactor code and improve its non-functional aspects, making it more useful, readable, performant, fault tolerant, etc.
  • Writing automated tests, particularly unit tests, also improves documentation, exposing requirements of the component (particularly in negative test cases) not easily seen through code review
  • The very act of writing automated tests while developing code means developers tend to write components that are more testable; and more testable code is more focused, composable and isolatable, which generally makes it more easily understood, used, and even reused
  • A commitment to comprehensive automated testing (and automated build and deploy) is essential to realizing the goals of Continuous Integration (CI) and Continuous Delivery (CD)

Yes, creating automated tests is expensive. But like any good investment, it pays off in the future.

How does the TDD approach work?

The process might look like the following:

  1. Write tests — create a test that verifies aspects of required functionality (test cases), then run tests (2)
  2. Run tests — always run all tests after any change (typically new test cases will fail the first time): if a test fails because the test itself is broken, fix the tests (1); if a test fails because the code is broken, fix the code (3); if all tests pass but the code smells, refactor the code (4); if all tests pass but the work is incomplete (features or coverage still needed), write more tests (1); if all tests pass and the work is complete (including code coverage), deliver (5)
  3. Write code — fix any failing tests, whether new failures or regressions, or write new functionality, then run tests (2)
  4. Refactor code — static code analysis may reveal places to improve the code or design, or non-functional aspects like security or reusability may reveal opportunities to refactor; then run tests (2)
  5. Deliver code — deliver to the next step when complete; this could be delivery for integration and system testing that follows a similar process, or the code meets the definition of done (running in production and accepted)

Repeat: write new tests until the component is verifiably functionally complete. A single pass through this loop is sketched below.
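
As a minimal illustration (the Discount class and its 5% rule are invented for this example), the tests state the requirement and fail first; the simplest implementation then makes them pass:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class DiscountTest {

        // Step 1: the tests state the requirement before the code exists,
        // so the first run fails (red) and drives the implementation.
        @Test
        void ordersOfTenOrMoreTicketsGetFivePercentOff() {
            assertEquals(95.0, Discount.priceFor(10, 10.0), 0.001);
        }

        @Test
        void smallerOrdersPayFullPrice() {
            assertEquals(90.0, Discount.priceFor(9, 10.0), 0.001);
        }

        // Step 3: the simplest code that makes the tests pass (green);
        // step 4 refactors with the passing tests as a safety net.
        static class Discount {
            static double priceFor(int quantity, double unitPrice) {
                double total = quantity * unitPrice;
                return quantity >= 10 ? total * 0.95 : total;
            }
        }
    }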

The cost of developing tests varies depending on several factors:

  • the scope of what is being tested
  • the tools and frameworks used to write the tests
  • how well the code base is designed for testability

Typically, the larger the scope of testing, the more expensive it is to run and maintain the automated tests. To be cost effective, it is better to invest more in automated tests for the smaller, more isolated parts of a system (i.e. unit tests), and steadily decrease coverage as the scope and complexity of what is integrated increases, because the more integrated the code, the slower and harder it is to test. So integration tests may cover fewer cases than unit tests, and system tests fewest of all, focusing only on aspects that cannot otherwise be handled with unit or integration tests. These costs are illustrated in Mike Cohn’s test pyramid.

I began to become more comfortable investing in unit tests with the introduction of Spring and the development of the JUnit testing framework. Using appropriate tools and frameworks is critical to reducing cost. It is also critical that code be written to be testable. The Dependency Injection pattern is a very good way to make code testable, because it makes dependencies easier to mock. For unit tests, it is easiest to write tests in the language of the production code, and a good test runner, particularly one integrated with an IDE, can easily run tests and report results. Good mocking frameworks and assertion utilities are also critical to keeping test code focused and simple.
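
A sketch of what constructor-based Dependency Injection looks like (the ListingService and InventoryClient names are hypothetical):

    // The collaborator is passed in rather than constructed internally,
    // so a test can substitute a mock for the real client.
    interface InventoryClient {
        int availableSeats(long eventId);
    }

    class ListingService {
        private final InventoryClient inventory;

        // In production a framework like Spring would wire in the real
        // client; a unit test simply passes a Mockito mock instead.
        ListingService(InventoryClient inventory) {
            this.inventory = inventory;
        }

        boolean hasAvailability(long eventId) {
            return inventory.availableSeats(eventId) > 0;
        }
    }

Because the collaborator arrives through the constructor, the unit test controls it completely, while the production configuration wires in the real implementation.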

With integration tests it should be simple to connect only the components needed and mock the rest. But in the case of distributed systems this may mean writing test doubles that mock the behavior of external systems and even simulate them to some degree, including controlling some of the mocked system’s state. It may also be necessary to create test harnesses that wrap an integrated component and expose it in a standard way to automated testing. However, the more components are tested together, the more possible paths need to be mocked or controlled. And some parts of the system are likely not under your control and may not have been written to be tested.
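
For example, a test double for an external HTTP dependency can be set up with WireMock along these lines (the endpoint and payload are invented for illustration):

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class PricingApiDouble {
        public static void main(String[] args) {
            // Stand-in for an external pricing system we don't control:
            // the double returns a known payload so the integration test
            // can assert our component's behavior deterministically.
            WireMockServer server = new WireMockServer(8089);
            server.start();

            server.stubFor(get(urlEqualTo("/prices/123"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"eventId\":123,\"minPrice\":42.00}")));

            // The component under test is then pointed at
            // http://localhost:8089 instead of the real service.
        }
    }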

System tests (like end-to-end tests and UI tests) exercise an entire integrated system by simulating the behavior of external clients, such as human users or external systems. Setting up test environments can be expensive, run times can be long, and uncertainty is introduced as the system state changes over time. And all of these levels of automated tests need to be maintained to remain useful.
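
A system-level test might drive a real browser with Selenium, something like this sketch (the URL and element IDs are hypothetical, and a ChromeDriver binary is assumed to be installed):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class SearchSmokeTest {
        public static void main(String[] args) {
            // Drives a real browser against a deployed environment,
            // simulating a human user end to end.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://staging.example.com"); // hypothetical test environment
                driver.findElement(By.id("search")).sendKeys("concert tickets");
                driver.findElement(By.id("search-button")).click();
                // A real system test would assert on the rendered results here.
                System.out.println("Landed on: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }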

At Vivid Seats we use Test Driven Development processes, and our teams are committed to metrics-driven quality processes. Along with code reviews, we leverage SonarQube for static code analysis, including code coverage targets. And we support automated testing using tools and frameworks like JUnit, REST-Assured, WireMock and Selenium; other tools and frameworks support JavaScript and front-end code testing. Once changes are pushed to GitHub, our Jenkins automated CI and CD processes are triggered, integrated with SonarQube and the automated test tools, to support multiple levels of build-deploy-test flows, including component, integration and system levels. Using these processes and tools gives us the confidence to release changes as frequently as possible, and to react quickly to our customers’ needs and wants.


Ed Wentworth
Vivid Seats Tech Blog

30 years in development at all levels, and I still love the challenge. Love art, music and writing too. Can’t wait to see the next 30 years!