TDD as an essential consequence

Roberto Gallea
6 min read · Dec 2, 2019


Testing code is the only way we can verify the correctness of our software. Everybody does it. However, some ways are better than others.

For example, let’s tell the story of two guys: Manuel and Ted.

Manuel’s development workflow

Manuel is a student. He started learning to code a couple of years ago, and his approach to testing is manual. Basically, he does the following:

Manuel’s development workflow based on manual testing

He starts writing a new functionality, then tests it manually. Of course, something goes wrong at first, so he edits the code and tests it manually again. He repeats this loop until he achieves the desired behavior.
Of course, such an approach is anything but good: it is easy to see that, as more features and functionalities are added and requirements change, complexity grows and regressions and malfunctions are introduced. With only one possible consequence:

Consequences of manual development: continuous spotting and fixing of problems, which generates even more problems

Development very soon becomes a vicious circle. Problems arise, and trying to solve them generates even more problems; even worse, he has no way to identify bugs other than manually testing the whole system by hand.

Even though working this way is intuitively bad, many experienced developers still rely on manual tests.

Ted’s development workflow

Ted, instead, works in a medium-sized software house, where automated testing is a constant part of the work cycle.

Ted’s development workflow based on a-posteriori automated testing

In a nutshell, he goes a step further than Manuel. When he completes a feature, he writes an automated test which verifies what he has just built. In this way, every time he needs to refactor a piece of code or change a feature, he is confident that if something breaks he will immediately get feedback and can roll back to a stable codebase.

This concept is good and in theory it would work. Unfortunately, following this approach is often not feasible, because the actions are simply taken too late. Existing code turns out to be untestable because it was not designed to be tested, and making it testable would require too much work. Refactoring also has to be done without confidence, due to the lack of supporting tests (which do not exist yet). All of these problems make writing tests after production code impractical, if not totally unfeasible. As a consequence, a-posteriori automated tests end up fragile, incomplete, and rarely useful.
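To make this concrete, here is a minimal, hypothetical sketch (the function names and the config path are invented for illustration) of the kind of code that resists after-the-fact testing, next to a version written with testing in mind:

```python
# Hard to test after the fact: the dependency on the file system is
# hidden inside the function and the result is only printed, so a test
# can neither control the input nor observe the output.
def apply_discount():
    with open("/etc/shop/discount.cfg") as f:   # hidden dependency
        rate = float(f.read())
    print(100.0 * (1 - rate))                   # result never returned


# The same logic written to be testable: inputs are passed in and the
# result is returned, so a test can exercise it directly.
def apply_discount_testable(total: float, rate: float) -> float:
    return total * (1 - rate)


def test_apply_discount_testable():
    assert apply_discount_testable(100.0, 0.2) == 80.0
```

Retrofitting tests onto the first version means either touching the real file system or rewriting the function, which is exactly the extra work Ted cannot afford.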

Priorities in development

So, what is the solution?

Test-driven development to the rescue!

Before introducing it, let’s focus on what development priorities are:

  1. Make working code: code has to do what it is expected to, of course.
  2. Make maintainable code: code has to be maintainable, especially in the long run.
  3. Make testable code: as the previous discussion showed, code has to be both effectively and efficiently testable.

That seems quite obvious, and indeed it is. However, these priorities are generally fulfilled in the following order:

Working → Maintainable → Testable

This leads to Ted’s errors.

Let’s instead start from the bottom, writing code that is testable by design:

Testable → Maintainable → Working

It turns out that writing code that is testable by design easily produces more maintainable code, and getting to working code becomes little more than a “mere detail”.

Test-driven development

Testability by design is the single focus of test-driven development. It flips the order in which production and test code are written: tests come before production code, so fulfilling the tests becomes the main requirement of the production code. Of course, tests should reflect the actual feature requirements, so testability is just one further requirement.

The TDD work-cycle is very simple, as it has just three phases:

Test-Driven Development work-cycle

Red phase: the developer writes a new failing test, referencing functionality and behavior that do not exist yet.

Green phase: the phase where the production code that satisfies the test requirements is written. It is complete when the current test and all the other tests in the suite are passing.

Refactor phase: optional within a single TDD cycle; existing code may be refactored, reorganized, and cleaned up to be more readable or optimized, with the confidence that the tests will warn the developer if something breaks.
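As an illustration, here is one possible pass through the cycle in Python with pytest-style tests; the `fizzbuzz` function and its requirement are invented for the example, and in practice the test and the production code would live in separate files:

```python
# Red phase: write a failing test first. When this test is first run,
# fizzbuzz does not exist yet, so the test fails.
def test_fizzbuzz_returns_fizz_for_multiples_of_three():
    assert fizzbuzz(3) == "Fizz"


# Green phase: write just enough production code to make the current
# test (and the rest of the suite) pass.
def fizzbuzz(n: int) -> str:
    if n % 3 == 0:
        return "Fizz"
    return str(n)


# Refactor phase (optional): rename, reorganize or clean up while the
# passing suite confirms that the behavior is preserved.
```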

That’s all; the rest is a handful of simple principles and a few guidelines. And experience, a lot of experience.

TDD principles

TDD has three simple principles:

  1. Write production code only if a failing test requires it. Any new production code must be written with the purpose of making the one currently failing test pass.
  2. Do not have more than one failing test. At any time, there should be no more than a single failing test. This allows the developer to concentrate on one aspect at a time.
  3. Do not write any more production code than necessary. This is actually a corollary of the first principle: the developer should not anticipate anything. For example, do not introduce an “if-then-else”, a special case, or exception throwing unless a failing test requires it (see the sketch after this list).
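Here is a minimal sketch of the third principle, with an invented `add` function: the simplest code that passes a single test may even be hard-coded, and only a new failing test justifies generalizing it.

```python
# Step 1 (red): a single failing test.
def test_add_two_and_three():
    assert add(2, 3) == 5


# Step 2 (green): the least production code that passes could even be
#     def add(a, b):
#         return 5
# Step 3 (red again): a second test makes the hard-coded version fail...
def test_add_ten_and_four():
    assert add(10, 4) == 14


# ...and only now is the general implementation actually required.
def add(a: int, b: int) -> int:
    return a + b
```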

TDD guidelines

Guidelines for TDD are not absolute. Developers can adopt their own, and teams should agree on a common set. However, here are some I have found useful:

  1. Focus on the requirements. Don’t waste your mental energy by starting to write code without knowing where you are going. Stick to a single requirement, preferably in small steps.
  2. Make a mental map and a list of required tests. Think beforehand about what you want to achieve (not how), and make a list of the tests you need to write in order to accomplish your work.
  3. Make the test pass. This should be your sole purpose during the green phase. Pass the test in the simplest and quickest way possible.
  4. Write self-explanatory tests. Writing self-explanatory code is a well-known clean-code prescription. Adopt the same for test code: for example, use descriptive names for tests, variables, and everything else (see the sketch after this list).
  5. Tests are clients. Don’t forget that tests are the first client of your system, even while it doesn’t exist yet. Thus, if the tests are simple to understand and use, the resulting system will be simple to understand and use as well.
  6. Refactor often. Refactoring allows your code to evolve more easily. Consider refactoring often, both production and test code.
  7. Tests are not proofs. Don’t mistake testability for proof of correctness. Having a fully passing test suite doesn’t mean your system is free of errors, bugs, or malfunctions. Whenever an issue is identified, find the conditions generating it and write a failing test that replicates them. Then fix the problem by making that test pass, without breaking the other tests.
  8. Don’t drift from the discipline. If you choose to follow TDD, never drift from it. When under pressure or facing a deadline, you may be tempted to skip writing tests to save time. Don’t do it! You would be taking a shortcut to ruin. Missing tests become technical debt you will probably never get the chance to pay back: new deadlines will approach, and more tests will be missing. Soon you will lose all confidence in refactoring or experimenting with your code, and with it any advantage you had gained.
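As a small illustration of guidelines 4 and 7, a self-explanatory regression test for a hypothetical bug report might look like this (the function and the scenario are invented for the example):

```python
# Hypothetical production code: trim surrounding whitespace and lowercase.
def normalize_username(name: str) -> str:
    return name.strip().lower()


# Guideline 4: the test name states the expected behavior, so a failure
# message alone tells the reader what broke.
def test_normalize_username_strips_surrounding_whitespace():
    assert normalize_username("  Alice ") == "alice"


# Guideline 7: this test replicates the exact conditions of a (hypothetical)
# bug report that inner spaces were being removed, so the fix stays fixed.
def test_normalize_username_keeps_inner_spaces_intact():
    assert normalize_username("Mary Ann") == "mary ann"
```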

Conclusion

Since automated testing is required for every project, and a-posteriori automated testing is generally not feasible, TDD emerges as a worthwhile approach. Contrary to what one might believe, the additional work required for writing tests is soon paid back in terms of the maintainability and robustness of the resulting code.

If you plan to investigate it further, I suggest starting with a small side project that you can extend and evolve. You will soon grasp the true advantages and benefits TDD can offer.

