What is Test-Driven-Development?

Jay Barden
Capgemini Microsoft Blog
10 min read · Aug 20, 2019

“Encourages simple designs and inspires confidence” — Kent Beck, 2003

Kent Beck is often cited as the creator of Test-Driven Development (TDD) but is more accurately its rediscoverer. His work on Extreme Programming (XP), among other things, has helped him promote TDD.

The following is my take on TDD, and why I practice TDD (as well as the rare occasions that I don’t). Like many (if not all!) opinions, not everyone will agree with everything here — I look forward to the discussions this post will inevitably start.

Let me kick off by giving a brief outline of what I believe TDD is, and perhaps more importantly, is not.

What Test-Driven Development Is

  • TDD, in its simplest form, is a mind-set change. After all, when not practising TDD, production code is written and then — maybe — tests are added. In TDD, the first step (after thinking!) is to write a test that fails.
  • TDD provides rapid feedback on the design decisions made before writing the production code. This ensures that you do not spend too long going down a rabbit hole before you find that the design decision was wrong, or at least in need of a change (or two).
  • TDD provides a repeatable suite of tests that can confirm that everything delivers the same result as it did earlier today / yesterday / last month / last year. (If a test fails that has previously passed, you have broken something).
  • TDD is, by definition, test-driven, and so ensures you do not write code that is not testable *

*To be more accurate, TDD forces you to recognise when you are writing code that is untestable (i.e. when consuming frameworks that do not offer a way to mock their implementation). With .NET Core this happens less often, and System.IO.Abstractions and similar packages can greatly simplify testing of otherwise untestable code.
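To illustrate the idea (in Python rather than the article's C#, and with invented names, so treat this as a sketch rather than the author's code), injecting a thin abstraction over the filesystem keeps otherwise-untestable IO code testable:

```python
from unittest import mock

class ReportLoader:
    """Depends on an injected read_text function instead of calling open() directly."""
    def __init__(self, read_text):
        self._read_text = read_text  # abstraction over the filesystem

    def first_line(self, path):
        return self._read_text(path).splitlines()[0]

# In a test, the filesystem is replaced by a stub, so no disk access is needed.
fake_read = mock.Mock(return_value="header\nrow1\nrow2")
loader = ReportLoader(fake_read)
assert loader.first_line("report.csv") == "header"
fake_read.assert_called_once_with("report.csv")
```

The same shape is what System.IO.Abstractions gives you in .NET: the production code depends on an interface, and the test supplies a fake.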

What Test-Driven Development Is Not

  • TDD is not a silver bullet. Simply practising TDD will not guarantee the perfect solution to every problem, every time.
  • TDD does not remove the need to think before writing tests and production code. Before even writing a test, you must think about what the test is for, and what it intends to prove.
  • TDD does not make sense in every scenario. For example, a 10-line console app that will be thrown away after a single use will, almost certainly, not benefit from TDD.
  • TDD is not just a fad that will go away in a year or two — I refer you to Kent’s quote (particularly the date!) at the top of this post for a simple demonstration of why I say that.
  • TDD is not a euphemism for Unit Testing. This is a popular misconception — especially in the C#-development world. As we’ll see, TDD covers the full range of test levels, not just Unit Tests.

The Testing Pyramid

There are many versions of the test pyramid, including some with as many as 7 levels of testing, but below is a simpler version that usually suffices:

An example of the testing pyramid

Test Levels — A Little More Detail

I rarely make the distinction between system and integration tests as, for most projects I've worked on, combining them into a single Integration project suffices, but your mileage may vary.

  • Acceptance tests — AKA UI tests. These are the slowest and most brittle of the automated tests and, as a result, are usually the smallest in number. Whether developing a UI or an API, I believe good acceptance tests can, and should, exist at this level. In an ideal world, these are pre-written in GWT (Given…When…Then…) format and can be copied directly from the User Story. These tests can be viewed as confirmation that the public interface is as expected — hence any API should be included.
  • Integration tests — these will test more than a unit test. This may be the result of several classes interacting with each other or separate modules interacting. On rare occasions, Integration Tests may also include external dependencies but, for me, those are more System Tests. As a result, the run-time for an Integration Test is longer.
  • System tests — not always implemented separately from Integration Tests, but they test the system's functionality. This usually excludes any UI but covers the system and its interaction with external dependencies.
  • Unit tests — usually considered the starting point of TDD (which is incorrect, and why I've listed them last). Ideally, these test a single public method and mock all dependencies. A unit test should complete in < 1 second — preferably a few hundred milliseconds at most. The core purpose of a unit test is fast feedback.
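As a rough sketch of the unit-test level (using Python's built-in unittest.mock here rather than the article's C# stack, with invented names), a single public method is exercised with its dependency mocked, so the test touches no database and runs in milliseconds:

```python
from unittest import mock

class PriceService:
    def __init__(self, repository):
        self._repository = repository  # dependency is injected, so it can be mocked

    def price_with_vat(self, sku):
        net = self._repository.get_net_price(sku)
        return round(net * 1.2, 2)  # assumes a 20% VAT rate for illustration

# Unit test: the repository is a mock, so no real data store is touched.
repo = mock.Mock()
repo.get_net_price.return_value = 10.00
service = PriceService(repo)
assert service.price_with_vat("ABC-1") == 12.00
repo.get_net_price.assert_called_once_with("ABC-1")
```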

What Do I Consider Worth Testing?

  • Everything! *
  • I test all public methods (via the Interface, not directly). This includes any Data Transfer Objects (DTOs) and all properties of said DTOs. Any model or DTO that forms part of the public interface (even for an internal API) exists for a reason and should therefore be tested, even if it is only a get/set property without logic. Too often I've encountered breaking changes that no-one knew about until production. I don't sleep very well at the best of times, but I don't want to be woken at 3am just because a property has been removed from our API response!

* Well, not everything — just the code I write / am responsible for. If I need to write a line of production code, I need to write a test first to confirm the functionality is working as expected. Before I write the code, TDD makes me stop and think about what I want to happen, not how it happens.

What Do I Not Consider Worth Testing?

This is simple, but probably controversial too.

  • Framework code — whether Microsoft or a third-party NuGet package / another source. It may be a rash approach, but I rely on Microsoft etc. to test their own code!
  • When connecting to a remote system that I don’t control, I am happy to assume it knows what it needs to do. Of course, where applicable, I still test the result was as expected.

TDD — An Example Via Statistics

OK, so far I've said a lot about what I consider TDD to be (and not to be), but below are a few statistics to support why I consider TDD to be so valuable to my daily development:

  • A 2-year programme of development
  • 1 WebAPI2 (hosted in Service Fabric)
  • 7 Endpoints
  • 15 Microservices (also hosted in Service Fabric)
  • 3,500 Unit Tests
  • Code coverage from Unit Tests — average of 93% (framework code etc. was excluded)
  • ~100 Integration Tests
  • 50 Acceptance Tests
  • 2 Years in Production, three major releases (and counting!)
  • Thousands of calls to the API / endpoints each day
  • Bugs raised? NONE! 😃

Now, I don’t claim that I wrote all of the above, nor do I claim that there isn’t a single edge-case bug lurking in the code waiting to be discovered. What I can say is that our use of TDD was, in my opinion, the reason for the last bullet point. We were able to refactor existing code, add new functionality and perform almost any change we wanted with the knowledge that, if something broke, we would know almost immediately.

Frameworks I prefer for testing

Like so many developers, I have used nearly all the core frameworks (MSTest, NUnit, xUnit, Jasmine, etc.) — below are my current preferences for testing:

  • xUnit — one of the xUnit family of test frameworks; it performs the heavy lifting of running the tests
  • NSubstitute — used to mock dependencies and thus decouple the test from said dependencies. NSubstitute is similar to Moq etc., but I find it has a more fluent interface, which suits my style / preferences
  • FluentAssertions — the xUnit Assert is fine but it is, in my opinion, not the best way to express the expectation. FluentAssertions, as the name strongly suggests, is more fluent (with a couple of C#-enforced idioms — such as Should() rather than simply the Should that I would prefer)
  • SpecFlow — using SpecFlow greatly simplifies (when used correctly) the development of acceptance tests
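SpecFlow itself expresses acceptance tests in Gherkin's Given/When/Then. As a language-neutral sketch (plain Python, with an invented Basket example rather than anything from the article), the same GWT structure can be mirrored directly in a test body:

```python
class Basket:
    """A toy domain object for demonstrating Given/When/Then structure."""
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

def test_adding_two_items_sums_the_total():
    # Given an empty basket
    basket = Basket()
    # When two items are added
    basket.add("tea", 2.50)
    basket.add("biscuits", 1.25)
    # Then the total is the sum of their prices
    assert basket.total() == 3.75

test_adding_two_items_sums_the_total()
```

In a real SpecFlow project the Given/When/Then lines live in a .feature file and bind to step definitions, but the shape of the test is the same.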

Downside of starting at the top of the test pyramid

How can there be a downside to testing from the top down? After all, that is the original definition of test-driven development, isn’t it? Yes, it is, but, by definition, the test will fail for a lot longer than, say, a Unit Test will. For a complex feature, the Acceptance Test could fail for days (or even weeks!), whereas a Unit Test should fail for seconds-to-minutes at most.

I believe this is why TDD has become synonymous with Unit Tests, and why we have ATDD (Acceptance-Test-Driven Development), DDD (Domain-Driven Design), etc., which are, in reality, names for starting at a different point in the test pyramid or for grouping the data or functions differently.

Can’t I just write the tests afterwards?

Yes. Of course. With at least 2 caveats:

  • If you do, you almost certainly won’t cover all the potential flows through your code. NCrunch (or similar) can help identify the ones you’ve missed, but it is better / easier to get it right up front.
  • Any test written after the production code is, by definition, not a TDD test.

To expand, any test written after the production code (potentially days or weeks later) relies on you remembering why you did something as well as actually coming back to it — how many times have you revisited code that is now live in Production only to find a “ToDo” in it? How many times have you wondered why you wrote the “ToDo”?

If you write the test afterwards, can you be sure that it proves anything and isn’t just achieving the code-coverage statistic you’ve been tasked with meeting? Or, worse, the test proves the code is doing what it is doing rather than what it should be doing…

Can I add tests later?

Isn’t this a duplication of the previous section? No, there are valid reasons for writing tests at a later point in time. In fact, you should add tests later! These extra tests though are, by definition, extra and are usually for one of two reasons:

  • A new edge-case has been discovered and you need to prove the code still works
  • A bug is discovered. Here, the test is created first to prove the bug’s existence and then to prove that it has been resolved (and thus prevent it from being unknowingly re-introduced later).

The second reason (a bug has been found) should be viewed as essential. This will add to your regression suite and give even greater confidence that the minor refactoring you want to make doesn’t cause unexpected side-effects (OK, let’s be honest — another bug!).
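A minimal sketch of such a bug-driven regression test (Python, with an invented example rather than a real incident): the test is written first to reproduce the reported bug, then kept forever to guard against re-introduction:

```python
def median(values):
    """Imagine this originally crashed on an empty list: the bug found in production."""
    if not values:          # the fix, added after the regression test below failed
        return None
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Regression test: written to reproduce the bug (it failed before the fix
# existed) and now part of the permanent suite.
def test_median_of_empty_list_returns_none():
    assert median([]) is None

test_median_of_empty_list_returns_none()
assert median([3, 1, 2]) == 2  # the original behaviour still works
```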

Benefits Of TDD

So far, I’ve said a lot about what TDD is / isn’t, with an example of why I practise it (via some statistics), but I’d like to take a moment to revisit the benefits of TDD:

  • Bug-free code! *
  • A pre-built regression suite — remember, TDD is not just about Unit tests. You may be selective about running certain tests in different environments but, the full suite should be run somewhere to ensure no breaking changes occur outside of the immediate focus of the Project / Solution.
  • Confidence to refactor your implementation whilst not affecting what your method, and by extension, system does
  • A good night’s sleep **

* OK, sadly nothing can guarantee that (other than not writing code at all) but TDD, done properly, can get you very close.

** OK, again I cannot claim that seriously, but releases are stressful enough — even if you are operating full CI/CD (Continuous Integration / Continuous Deployment) to Production. TDD won’t remove the stress that can occur, but it will greatly reduce it. After all, all those Unit, System, Integration and Acceptance Tests say everything works as expected!

Test-driven approach — AKA Red-Green-Refactor

For those unfamiliar with the Red-Green-Refactor mantra, nearly all the test-runners will show a failing test with a red cross or circle and a passing test with a green tick or circle. These two colours are by design and clearly fit the “Red-Green” in Red-Green-Refactor, but what is the Refactor all about?

Whether you start with the tests from the top of the pyramid or the bottom, there always comes a point at which you reach functioning code. But is it the best code it could be? Probably not! This is a good thing though!

In the section above describing the benefits of TDD, I mentioned the confidence to refactor the code. Refactoring can take many forms and entire books have been written on the subject — including Refactoring: Improving the Design of Existing Code by Martin Fowler (with Kent Beck), which I thoroughly recommend — but this is the time we take a step back and review the working code with:

  • the confidence that it really does do what it should do (and we have the tests to prove it), and
  • the intention to improve the implementation without breaking the functionality.
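The full Red-Green-Refactor loop can be sketched in a few lines (Python, and a deliberately toy FizzBuzz example, not code from the article): write the failing test first, make it pass with the simplest code, then refactor while the test stays green:

```python
# Red: write the failing test first (fizzbuzz does not exist yet, so this
# fails when first run).
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Green: the simplest implementation that makes the test pass.
# Refactor: tidy the implementation; the test above stays untouched and
# still passes, which is exactly the safety net TDD provides.
def fizzbuzz(n):
    result = ("Fizz" * (n % 3 == 0)) + ("Buzz" * (n % 5 == 0))
    return result or str(n)

test_fizzbuzz()
```

The key point is that only the implementation changes during the refactor step; a failing test afterwards means the refactor broke behaviour, not that the test needs editing.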

Wrap up / recommendations

I cannot recommend TDD enough. A good suite of tests, covering all of the levels, aids confidence — whether to refactor a particular method or simply that it will work when released.

Books

There are too many books for a comprehensive list of recommendations, but below are a couple that I cannot recommend enough:

  • The Art of Unit Testing with Examples in .NET — Roy Osherove
  • Refactoring: Improving the Design of Existing Code — Martin Fowler (with Kent Beck)

PluralSight training courses

PluralSight is updating so regularly that the below may be out of date before this article is published, but there should be something for everyone / every level of developer:

