Testing Software: What is TDD and should I use it?

Maximilian Beck
Next-Generation Web
8 min read · Jun 11, 2022
Photo by Arnold Francisca on Unsplash

TDD gives you a suite of tests that you trust with your life. And when you have that suite of tests, you can do many other things like cleaning code and cleaning the design. – Uncle Bob Martin

What is TDD?

Test-driven development – here referred to as "TDD" – is a concept in software engineering that encourages the software engineer to write the test before the actual implementation.

"Writing the whole test before the implementation? This seems curious, because when facing complex problems, I mostly don't know the code yet, so I also don't know how to test it." – This is the statement I hear most often when introducing people to TDD for the first time.

While this statement is true, it also indicates a misunderstanding of the concept.

The TDD workflow helps you break these complex problems down and solve them step by step, with each step covered by a test.

What is TDD not?

When TDD is explained as "write the test first", some people might think of integration or acceptance tests: tests that start your service and verify, for example, that your API produces an expected output for a given input.

These tests are easy to write upfront because you can imagine how your API should be designed before jumping into the code. Writing this kind of test before implementing your feature is very important, but it is not TDD at its lowest level. It is a different testing concept (E2E API test, contract test, or integration test) worth covering in a separate article.

TDD is situated on a lower level (but not too low), facing the units (classes or functions) of the software system. That said, the tests don't necessarily have to be coupled to one single function or class: you may write test cases for one piece of functionality (a module or package) that ends up being split into multiple classes or functions behind one facade.

TDD also provides no real guidance on the higher architecture level; you should already have a basic understanding of which components your system needs in order to implement the requirements.
On the lower architecture level, TDD is an extremely helpful mechanism to verify your level of abstraction, because it makes you consider testability every time you change the code. It helps you define and maintain the boundaries of software components by always checking whether the unit is testable independently of other software components.

What does the TDD workflow look like?

The workflow follows a principle called "red-green-refactor". This name indicates the steps the software has to go through while it is under development.

But this is basically just the loop in the middle: there are also some steps at the beginning and usually some at the end.
If you are working on an existing module, you can skip the initial steps and jump straight into the red-green-refactor phase.

The conception:
As said earlier, TDD is about testing modules, and therefore you should have an initial understanding of how your module should be shaped. So the first step is to make some design of what you will need to implement your feature and how the classes are layered.
1. How should the interface the consumers of the new functionality communicate to be shaped?
2. What business logic do you need?
3. Do you need some additions to your persistence system or call any other third-party API?
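As an illustration, suppose the feature is calculating a checkout total (a hypothetical example, not prescribed by the workflow): the conception step might produce nothing more than a sketch of the unit's public interface.

```python
# Hypothetical design sketch produced in the conception step:
# only the shape of the public interface, no behavior yet.

class PriceCalculator:
    """Business logic: turns a cart of (name, price) items into a total."""

    def total(self, items):
        """Return the combined price of all items in the cart."""
        raise NotImplementedError
```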

Once you have a rough understanding of how you want to build the feature, it is time to start with the implementation.
For that, I'd recommend starting with the business logic of your feature, because this ensures good decoupling of your domain from the other application layers.

Starting with the implementation, you create the first file for your test and the first file for your component (class/function).
The goal here is just to do the basic setup so you can get started writing some functionality. In my opinion, the order in which you create the files doesn't matter much at this step.

I mostly like to create the outline of the class and a simple test which asserts that the subject under test is an instance of the class I just created, simply to verify that my code compiles and the test suite is set up properly.
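A minimal sketch of that initial scaffold, continuing the hypothetical PriceCalculator example (the names are illustrative):

```python
import unittest


class PriceCalculator:
    """Empty outline of the unit under test; no behavior yet."""


class PriceCalculatorTest(unittest.TestCase):
    def test_subject_can_be_instantiated(self):
        # Smoke test: proves the code compiles and the suite is wired up.
        self.assertIsInstance(PriceCalculator(), PriceCalculator)
```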

With the groundwork done, the red-green-refactor phase can start.

Red: You start writing a test case for a low-level functionality that should be covered by the unit.
When writing this test, remember that tests are the perfect documentation for your software because they never get stale — they will/should fail when they are not up-to-date anymore.

The overall rule in this phase is: focus on writing the test first, and write only the bare minimum of code to make the test compile (not pass, just compile).
- If you want to call a function that does not exist yet, write only the bare minimum of that function to make the code compile (mostly just the declaration and some kind of mocked return value).
- If you need some kind of DTO/ValueObject/Entity to assert on, just write the bare minimum of the new class with just enough to do the assertion or return it from the function you want to test.
- Another hint is to always keep an eye on your software layers and to use proper abstractions when you are reaching the boundaries of the current layer you are testing.

When your test is complete and the code compiles, execute it. If everything is done correctly, the test should fail.
Why is it important to execute a test you know will fail?
Seeing the test fail proves that your code compiles and that the test actually tests something. False-positive tests occur surprisingly often (e.g. when components are not properly mocked or the wrong assertion method has been picked) and are hard to find once they are in your code base. You can see this as a manual test that your automated test is working.
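A hypothetical illustration of such a false positive: the assertion below checks the bound method object (which is always truthy) instead of its return value, so the test passes even though the implementation is broken.

```python
import unittest


class PriceCalculator:
    def total(self, items):
        return 0.0  # Broken: ignores the items entirely.


class PriceCalculatorTest(unittest.TestCase):
    def test_total_looks_tested_but_is_not(self):
        calc = PriceCalculator()
        # Bug: asserts on the method object, not on calc.total(...),
        # so this assertion can never fail.
        self.assertTrue(calc.total)
```

Watching each test fail once before making it pass is exactly what catches mistakes like this.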

Now that you have a solid, properly written failing test, it is time to make it green:
This step is all about making the software fulfill the requirement demanded by the test you just wrote. The code written in this step should be only the bare minimum needed to make the test pass. It may be cluttered or barely readable (as long as you can still read it 10 minutes later), but it should really only include the change needed to make your tests pass.
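For the hypothetical PriceCalculator, the green step might add nothing more than the crude loop below: it makes the test pass and is deliberately left unpolished.

```python
class PriceCalculator:
    def total(self, items):
        # Just enough to turn the test green; cleanup comes in the
        # refactor step.
        result = 0.0
        for item in items:
            result = result + item[1]
        return result
```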

Now that all the tests are passing, the fun part begins. The next step is to refactor your code if necessary:
With unit tests that verifiably cover your actual requirements (as explained in the steps above) backing you up, you can comfortably move code around inside the unit as you see fit.
Possible refactorings include splitting a method into several smaller ones for better readability, or grouping several methods into composite classes (which won't need extra tests, as they are covered by the tests of the component's interface you've already written).
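Continuing the hypothetical example, a refactoring might extract a helper for readability; the existing tests stay untouched and keep passing because the public total() interface is unchanged.

```python
class PriceCalculator:
    def total(self, items):
        return sum(self._price_of(item) for item in items)

    def _price_of(self, item):
        # Extracted helper: covered by the existing total() tests,
        # so it needs no test of its own.
        _name, price = item
        return price
```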

If your refactoring changes the public interface of the tested component, which would be a breaking change for other components, the test will notify you and, of course, needs to be adjusted (this is why a little design up front is necessary).

The refactoring step is not limited to the logic: also think about refactoring the tests if needed. As said earlier, well-maintained tests are the best documentation of your software for other developers. Such refactorings could include renaming test cases, moving duplicated code into setup and teardown phases, or regrouping test cases into separate contexts by moving them into separate files as the number of tests grows (and, if composite classes were created, perhaps moving tests along with them).
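A sketch of such a test refactoring, again with the hypothetical example: the duplicated construction moves into setUp, and the test names document the behavior.

```python
import unittest


class PriceCalculator:
    def total(self, items):
        return sum(price for _name, price in items)


class PriceCalculatorTest(unittest.TestCase):
    def setUp(self):
        # Shared fixture moved out of the individual test cases.
        self.calc = PriceCalculator()

    def test_total_of_an_empty_cart_is_zero(self):
        self.assertEqual(self.calc.total([]), 0.0)

    def test_total_sums_the_item_prices(self):
        self.assertEqual(
            self.calc.total([("apple", 1.50), ("bread", 2.25)]), 3.75
        )
```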

After the refactoring step, celebrate your success and commit the change. Once the change is committed, repeat the cycle starting at the next "red" step, writing the next failing test for the next small requirement. Keeping each cycle in one small but dedicated commit helps identify breakages later, e.g. with git bisect.

While everything revolves around the red-green-refactor cycle, it is important to think about the bigger picture between the cycles.
Pair programming helps with this: while the driver focuses on the current cycle, the navigator keeps an eye on the bigger picture and evaluates whether the direction is still appropriate.

If you are not pairing, you will have to re-evaluate your approach from time to time and check whether the planned design needs to be adjusted. This will naturally happen as the implementation becomes more and more concrete.

Conclusion

This is a lot of process for developing an application. Hearing about it for the first time, you may think the process is bloated, time-consuming, and won't work as described.
It is true that TDD takes a lot of practice to get good at and feels more exhausting than "just coding". You will most likely not do it perfectly in the beginning. Like most software engineering techniques, it is learned by doing: you go through hell in the beginning, but get really good at it if you stick with it over a longer period of time.

It will also be more time-consuming at first, but this decreases as you keep practicing the technique.
Practicing TDD consumes more time in the initial development than "just coding" or writing the tests after the implementation. But TDD (if done right) will save you a lot of time in the long run. As reported, for example, in Effects of Test-Driven Development: A Comparative Analysis of Empirical Studies by Simo Makinen and Jürgen Münch: "One of the promising effects was the increased maintainability and reduced effort it took to maintain code later".

Practicing TDD facilitates software design as you have to think about abstractions (along with testability) and the usage of your components before you actually write them. Once written, the tests will help prevent regressions as a high coverage will automatically be achieved when sticking to this workflow.

TDD goes hand in hand with other testing and extreme programming techniques such as ATDD, BDD, or pair-programming just to name a few. All with the goal of providing faster feedback cycles, safer integration, and releases for long-living maintainable software by raising the quality standard to a high level.

So I assume you can guess my opinion: I would encourage everyone to try TDD and use it.
It doesn't need to be applied to every single piece of functionality, because some classes (for example adapters to third-party services, such as storage repositories or adapters calling external APIs) are hard to unit-test by design (they are ideally kept as humble objects) and are better covered by other kinds of testing (like contract or integration tests).
I would definitely use TDD for every kind of business logic or data transformation my services perform, because these are the core parts of the service.

How can TDD be learned and practiced?

There are many great resources out there to get started with TDD; https://cyber-dojo.org, for example, is a testing playground. The best way to learn TDD is, in my opinion, to try the technique in your next project (preferably a side project), or to ask more experienced developers on your team to pair up and practice TDD together.


CTO of WEBTEAM LEIPZIG — Writes about tech, software architecture, company structure, and overall experiences from software consultancy. https://bmaximilian.dev