TDD? A dated post about a touchy subject

Update: I wrote this blog post years ago and only recently moved it to Medium. Testing has evolved so much since then that this is a little dated now. The way developers write tests has moved on, especially following trends in web development, APIs and micro-services. BDD evolved from TDD, and it is now very popular for developers to use BDD tooling to exercise their APIs. Often these are coupled with unit tests to provide comprehensive test coverage, especially as API testing can often include communicating with your datastore and other dependencies. Developers are more and more responsible for unit testing, acceptance tests through BDD, and even elements of exploratory or property-based testing through fuzzing and other tools, e.g. go-fuzz. Dev teams are more and more seen as something akin to the two-pizza teams made popular at Amazon: the team is responsible top to bottom for the product/service, including all kinds of testing, deployment, monitoring etc. This is especially prevalent when teams are responsible for services in a wider service-oriented system. What testing approach do you take to a micro-service architecture, where value is provided by several services combined, where you want to test the resilience of services to other service failures, and to test services in isolation? TDD as a development technique for unit testing is a minor detail, a footnote, in systems like these.
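(For anyone curious what that kind of fuzzing looks like for a developer today, here is a small sketch using the fuzzing support that is now built into Go's standard testing package, the spiritual successor to tools like go-fuzz. The Reverse function and the package name are made up purely for illustration, not taken from any real project.)

```go
// A property-based fuzz test, kept in one _test.go file for brevity.
// Run with: go test -fuzz=FuzzReverse
package stringsx

import (
	"testing"
	"unicode/utf8"
)

// Reverse reverses a string rune by rune (illustrative implementation).
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

// FuzzReverse feeds randomly mutated strings into Reverse and checks two
// simple properties: reversing twice returns the original, and the result
// is still valid UTF-8.
func FuzzReverse(f *testing.F) {
	f.Add("hello, 世界") // seed corpus
	f.Fuzz(func(t *testing.T, s string) {
		if !utf8.ValidString(s) {
			t.Skip("property only holds for valid UTF-8 input")
		}
		rev := Reverse(s)
		if doubleRev := Reverse(rev); doubleRev != s {
			t.Errorf("Reverse(Reverse(%q)) = %q, want the original", s, doubleRev)
		}
		if !utf8.ValidString(rev) {
			t.Errorf("Reverse(%q) produced invalid UTF-8 %q", s, rev)
		}
	})
}
```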

TDD, or Test-Driven Development, prescribes thinking about and writing tests first, then writing the code that passes those tests, then refactoring while continually running the tests as a safety net. I wonder who still actually does TDD exactly as prescribed (did they ever, really?). The true red / green / refactor cycle? Writing tests before any code? Taking baby steps as you add new code, etc.?
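If you have never seen the loop spelled out, here is a minimal sketch in Go (the Add function and the calc package are made-up names, purely for illustration). Red: write the test before the code exists, so it fails. Green: write just enough code to make it pass. Refactor: clean up while the test keeps passing.

```go
// calc_test.go — red: written first, before Add exists, so `go test` fails.
package calc

import "testing"

func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d, want 5", got)
	}
}
```

```go
// calc.go — green: the smallest implementation that makes the test pass.
// The refactor step then happens with the test as the safety net.
package calc

// Add returns the sum of two integers.
func Add(a, b int) int {
	return a + b
}
```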

Not one developer I have ever worked with in any team in 13 years across 8 companies truly followed TDD as prescribed. I tried it religiously for a few months and it slowed me down a bit, but not dramatically (a common complaint from people who have never attempted it). Honestly, though, I felt it gave me absolutely no benefit over not doing TDD. “Code is better designed”, “more testable so more flexible”, “you think like the consumer of your code”. All those things I already think about when coding; I don’t need to add the overhead of the stop / start / stop / start of TDD to help. I write some tests as I go, some after I write my code (Oh the horror! The blasphemy!).

Do you know where TDD, as it’s currently understood, came from? Kent Beck. And he said it helped him (in this video series) just because of the way his brain works. That’s it. He rediscovered it because it works for him. Nothing more. It may help you. It may not. Yet it has a cult-like, almost religious following. And where there are followings and preaching, there are consultants and money to be made!

There are lots of research papers that attempt to prove TDD in some way improves code quality. However, a lot of these papers, most if not all that I’ve read, do not factor in other influences. For example, this thesis takes X open-source projects, where some use TDD and some do not, and then compares things like the cyclomatic complexity of the codebases. But what about the teams, the people who contributed to those projects? Were they just better engineers? Did they just have higher standards of acceptable code and require testing for all new and changed code? Were they more collaborative, with a better design to start off with? Did they perform quality code reviews? Perhaps the teams that try TDD are simply more curious about different ways to improve code quality than the teams that did not? Meaning they might just always be thinking more about code quality, and TDD does not magically create better programming results.

Here are some benefits that the paper lists for TDD, but are these really solely attributable to TDD? I don’t think so at all:

  • Predictability: Beck (2002) suggests that Test-Driven Development allows engineers to know when they are finished because they have written tests to cover all of the aspects of a feature, and all of those tests pass.

Note “suggests”. Also, coverage and passing tests are not just a TDD thing.

  • Learning: Beck (2002) also claims Test-Driven Development gives engineers a chance to learn more about code. He argues that “if you only slap together the first thing you think of, then you never have time to think of a second, better thing”.

Note “claims”. Again, that’s not just a TDD thing. Slapping together the first thing you think of is just bad programming practice. If TDD helps *you* slow down and think about this, then okay, no problem. But don’t claim that without TDD your code will inevitably be slapped together. That’s just bullshit.

  • Reliability: Martin (2002) argues that one of the greatest advantages of Test-Driven Development is having a suite of regression tests covering all aspects of the system. Engineers can modify the program and be notified immediately if they have accidentally modified functionality.

Note “argues”. Yet again coverage is not exclusive to TDD.

  • Speed: A work by Shore and Warden (2007) points out that Test-Driven Development helps develop code quickly, since developers spend very little time debugging and they find mistakes sooner.

Playing devil’s advocate here: first of all, TDD without a doubt adds time to your development. (Yes, it may save time later IF by doing it you write better code with better tests than you would have without TDD.) Second of all, you can write good code and tests without TDD and still see the same benefits of spending little time debugging etc.

  • Confidence: Astels (2003) maintains that one of Test-Driven Development’s greatest strengths is that no code goes into production without tests associated with it, so an organization whose engineers are using Test-Driven Development can be confident that all of the code they release behaves as expected.

You can achieve the same results by setting standards in your team, practising quality code reviews and continuous integration. This is yet again not an exclusive TDD thing.

  • Cost: It is argued by Crispin and House (2002) that, because developers are responsible for writing all of the automated tests in the system as a byproduct of Test-Driven Development, the organization’s testers are freed up to do things like perform exploratory testing and help define acceptance criteria, helping save the company developing the software precious resources.

Developers can be responsible for writing automated tests (and acceptance tests and whatever other tests you can think of) without doing TDD.

  • Scope Limiting: Test-Driven Development helps teams avoid scope creep according to Beck and Andres (2004). Scope creep is the tendency for developers to write extra code or functionality “just in case,” even if it isn’t required by customers. Because adding the functionality requires writing a test, it forces developers to reconsider whether the functionality is really needed.

TDD might help some people with this. It might not help others.

  • Manageability: Koskela (2007) points out that human beings can only focus on about seven concepts at a time, and using Test-Driven Development forces programmers to focus on only a few pieces of functionality at a time. Splitting the big system into smaller pieces via tests makes it easier for programmers to manage complex systems in their heads.

What?

  • Documentation: It is noted by Langr (2005) that Test-Driven Development creates programmer documentation automatically. Each unit test acts as a part of documentation about the appropriate usage of a class. Tests can be referred to by programmers to understand how a system is supposed to behave, and what its responsibilities are.

Again, not exclusive to following TDD…

What I find is mostly conjecture and anecdote: I *think* it works better, I *feel* better. That’s fine. But I reject it as a de facto method everyone should follow that is guaranteed to produce results. I reject TDD because it doesn’t do anything for me. If it works for you — great. Have at it. But I would never force it on a development team. We all want good test coverage along with nice, concise and readable code. There are lots of different ways to get to that point.

Set standards and a culture around quality code, good coverage and useful tests (unit and automated acceptance tests), but don’t prescribe the developer technique to get there. What standards? Example: all new code has corresponding unit tests and AATs, these run on a CI server and must always pass. Another example: we use dependency injection for all object dependencies. Another: here are our API standards. Another: we use this MVC framework, here’s where your controllers go and here’s where your models go. Don’t expect TDD to magically result in better designs without having standards and well-architected systems. If you have problems with poor code that is difficult to test, then you probably have problems in your team and development environment that TDD ain’t gonna magically fix.
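To make the dependency-injection example concrete, here is a rough Go sketch (the Notifier and Signup names are invented for this post, not from any real codebase). The point is that the dependency is passed in as an interface, so the code is easy to test with a fake, whether or not the tests were written first.

```go
package signup

// Notifier is the injected dependency: production wiring passes a real
// email sender, tests pass a fake that records calls.
type Notifier interface {
	Notify(email, message string) error
}

// Signup depends on a Notifier rather than constructing one internally,
// which is what makes it testable in isolation.
type Signup struct {
	notifier Notifier
}

func NewSignup(n Notifier) *Signup {
	return &Signup{notifier: n}
}

// Register would persist the user and then send a welcome notification.
func (s *Signup) Register(email string) error {
	return s.notifier.Notify(email, "welcome aboard")
}
```

A unit test then just hands NewSignup a fake Notifier and asserts it was called. No TDD ceremony required, just a standard the team agrees on.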

P.S. Pet peeve: please don’t use a code coverage percentage as a target to aim for, or to beat developers up with. “We must have 80% coverage.” Why? “100% coverage.” Hmm, okay, let me unit test these getters and setters here… Coverage does not equal quality tests. So just use it as information.
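As a hypothetical illustration (the User struct is made up), this is the kind of test that chasing a percentage produces: it bumps the coverage number and tells you almost nothing worth knowing.

```go
// user.go — a trivial, made-up struct with a getter/setter pair.
package user

type User struct {
	name string
}

func (u *User) SetName(n string) { u.name = n }
func (u *User) Name() string     { return u.name }
```

```go
// user_test.go — 100% coverage of the above, close to 0% useful information.
package user

import "testing"

func TestNameGetterSetter(t *testing.T) {
	u := &User{}
	u.SetName("Ada")
	if u.Name() != "Ada" {
		t.Errorf("Name() = %q, want %q", u.Name(), "Ada")
	}
}
```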
