Test-Driven Development (TDD) is a programming technique in which automated software tests are written before the code that will pass them. I was introduced to the concept very early in my learning process. Initially, I rejected it — why write more code when things already work? Soon after, with professional experience, I began writing tests for work that I would submit. Finally, I decided to go full TDD. Why did my view change? Quite simply, I realized that it works: TDD just makes sense to me. Here is why.
All software will be tested. In order to build any software, at any level, in any language, there must be some method of verifying that it works as intended. Perhaps it will be compiled and run. Perhaps automated tests will be run. Perhaps, unfortunately, end users will try to use it and find bugs. Whatever the method, software will be tested.
Consider the following process:
A developer, tasked with building a feature for a project, writes a few lines of code. The developer compiles and runs the program, then tries out several possible scenarios in the running program. Incorrect behavior is discovered and corrected.
This process makes sense to most people. There is nothing obviously wrong with it. It is the way many, many people write software, and it will get software out the door. There is a problem, though: how will the developer know, six months from now, that the program still works? The step of outlining and testing possible scenarios will have to be repeated by hand.
A person with a programmer’s mindset automates repetition. The whole purpose of programming is to replace repetition with software. A programmer should never have to do something more than once.
Naturally, this should be applied to the process above:
A developer, tasked with building a feature for a project, writes a few lines of code. The developer details a few scenarios and writes the expected outcome for each as a test. The developer compiles the program and runs these tests. Incorrect behavior is discovered and corrected.
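As a concrete sketch of that process, suppose the feature is a hypothetical `apply_discount` function (the name and scenarios here are illustrative, not from any particular project). The scenarios the developer would otherwise check by hand are written down once as an automated test:

```python
# Hypothetical feature: apply a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never dropping below zero."""
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)

# Each scenario the developer would have tried by hand in the running
# program, captured once as an automated test.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0    # ordinary case
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(100.0, 100) == 0.0    # full discount
    assert apply_discount(50.0, 200) == 0.0     # clamped, never negative

test_apply_discount()
```

Six months later, rerunning `test_apply_discount` takes an instant, and the scenario list does not have to be reconstructed from memory.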
This new process should take about the same amount of time as the old. Now, however, repeating the tests in six months will take an instant instead of an afternoon. The developer does not have to outline the scenarios again and does not have to step through each of them at a human pace. This leaves one question, however: did the new code make the program work, or was something else going on?
False positives can have many causes. A naming conflict or a misplaced or missing `=` might cause a frustrating surprise later, and these can be hard to track down. Tests should at least rule out the possibility that the behavior was already present before the new code was written, and they should surface unintended behavior immediately. A better solution, therefore, is to test first.
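Here is one hedged, hypothetical illustration of a false positive: a test written only after the code, which passes regardless of what the code under test does (the function and bug below are invented for the example):

```python
# Hypothetical buggy feature: tax is never actually added.
def add_tax(price: float) -> float:
    return price  # bug: the tax calculation was never written

def test_add_tax():
    scenarios = []  # bug: the scenario list was never filled in
    for price, expected in scenarios:
        assert add_tax(price) == expected

test_add_tax()  # passes silently, proving nothing
```

Because the loop body never runs, the test passes no matter how wrong `add_tax` is. A test that has never been seen to fail has never demonstrated anything, which is exactly why the test-first process begins by watching the new test fail.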
Apply this, once again, to the scenario above:
A developer, tasked with building a feature for a project, writes a test for a single, basic scenario and verifies that it fails. The developer writes the code to pass this test. The program is compiled and the tests are run to verify. The developer continues this cycle until the feature is complete.
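A minimal sketch of that cycle, using a hypothetical `is_leap_year` feature (the function and scenarios are assumptions made for illustration):

```python
# Step 1 (fail first): the test below is written before is_leap_year
# exists, so running it raises a NameError. That failure confirms the
# behavior is not already present somewhere in the program.

def is_leap_year(year: int) -> bool:
    # Step 2 (pass): just enough code to satisfy the scenarios so far.
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

# Step 3 (verify): run the tests, then pick the next scenario and repeat.
assert is_leap_year(2024) is True    # divisible by 4
assert is_leap_year(2023) is False   # not divisible by 4
assert is_leap_year(1900) is False   # century, not divisible by 400
assert is_leap_year(2000) is True    # divisible by 400
```

Each assertion was added one at a time, and each failed before the code that satisfies it was written, so every branch of the function is known to be doing real work.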
Every new piece of functionality was tested. Because each test failed before the corresponding code was written, the program did not already have the behavior, and the new code is necessarily what implemented it. The tests can be run again at any time to verify that the program behaves as intended. Following this process, the developer enjoys confidence that the program works and is free to improve its design through refactoring.
Note that the descriptions of these scenarios are all about the same length. For any single behavior, Test-Driven Development may take more time. However, for a program of any substance, it will make up for this almost immediately — and pay for itself every time functionality must be extended. That is why TDD works.