Becoming Driven by Tests
We don’t really “do tests”
This quote popped up during a discussion with a family friend while our kids were at a playground, and it stopped me in my tracks. Yes, in Silicon Valley you can witness an argument about unit testing between fathers pushing their kids on swings.
First, let’s get something straight.
Do you write console applications? When you write new code, how do you know it works? Do you run it on the console and visually confirm the behavior is what you expected? That’s testing.
Are you a library developer? When you write your API methods, how do you know they work? Do you write helper code that uses your methods and then inspect the interaction? That’s testing.
If you verify your newly written code works, you’re “doing tests”.
The difference is how you test it.
What is doing the assertion? Your eyes and brain? Or code?
If you do everything above manually, then all of that testing is thrown away as soon as you are done. If you change behavior, you need to retest manually. As a project grows, the amount of testing required and the chance of regression both increase substantially. Visual inspections don’t scale.
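The difference between the two can be sketched with a tiny hypothetical example (the `add` function and its values are invented for illustration):

```python
# Hypothetical example: the same check done by eye versus by code.
def add(a, b):
    return a + b

# Manual testing: run it and read the console yourself.
print(add(2, 3))  # you look at the output and confirm it says 5

# Automated testing: the code does the assertion, and it can be
# re-run after every change with no extra effort.
assert add(2, 3) == 5, "add(2, 3) should equal 5"
```

The `assert` line encodes the same judgment your eyes were making, but it never gets thrown away.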
You can test with any of the myriad automated tools out there, and that sounds great, but if you come from a background of manual testing like the above, it’s not obvious where to start. Wedging a test framework into barely testable code that is loaded with side effects is cumbersome, has limited value, and gives people the idea that tests are more effort than they are worth.
This is how most people get to the point where they say they don’t “do tests.”
How to become test driven
The most difficult part of testing is knowing how to write testable code. The first step is to ask yourself the following question.
What should this code do?
It sounds simple. Every programmer has that question float around in their brain at some point, though there may not be a fully formed answer before code starts getting written.
This question forces you to think about the API and how it should work before you implement it.
Becoming test driven means answering this question and programmatically asserting it beforehand. Test Driven Development (TDD) actually means that you should write a failing test first. That is, write a test that you know will fail because you are using code that doesn’t exist or doesn’t work yet.
This may sound strange and silly, but bear with me. You are stating a hypothesis and writing code until your hypothesis is true.
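As a rough sketch of that cycle (the `slugify` function here is invented purely for illustration): the test is written first, against a function that doesn’t exist yet, so it fails; then just enough code is written to make it pass.

```python
# Hypothetical example; slugify is invented for illustration.

# Step 1: state the hypothesis as a test. At this point slugify does
# not exist, so calling test_slugify() fails with a NameError.
def test_slugify():
    assert slugify("Becoming Driven by Tests") == "becoming-driven-by-tests"

# Step 2: write just enough code to make the hypothesis true.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # the hypothesis now holds; the test passes
```

The order matters: the test existed before the implementation, so it describes what the code should do rather than what it happens to do.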
Part 2 in this series will go more in depth as to how to actually do this because examples are the only way to really know what it looks like.
Automate your tests
Tests that aren’t run automatically are ignored. If there is an option to not run tests, they won’t be run. You need to ensure that any tests that exist are run automatically as a gate during the development flow.
This could be a Continuous Integration service like Jenkins or Travis that blocks the merge, a build/compilation step that runs before anything is viewable, or a VCS hook that enforces tests run before committing. One way or another, tests have to run before code is integrated.
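As one small sketch of the hook approach (hypothetical; the `tests` directory layout and the use of `unittest` are assumptions, not something this project prescribes), a pre-commit hook can simply run the suite and abort the commit on failure:

```python
# Hypothetical pre-commit hook sketched in Python. Saved as
# .git/hooks/pre-commit (and made executable), git runs it before every
# commit; a non-zero exit status aborts the commit.
import subprocess
import sys

def run_tests(start_dir="tests"):
    """Run the unittest suite found under start_dir; return its exit code."""
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", start_dir]
    )
    return result.returncode

# The hook's last line would be: sys.exit(run_tests())
# A passing suite exits 0 and the commit proceeds; a failure blocks it.
```

The same idea works with any test runner; the only contract is the exit code.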
Automatically running tests is how you can be sure no one else breaks your code. If a team member submits a change that breaks your assertions and it isn’t caught for a week or two, it will eventually look like your test is broken, and you will be the one asked to find the cause. That puts an undue burden on the well-intentioned and makes the battle for tests more frustrating than it needs to be.
Automating tests ensures that the developer who breaks the test is alerted and is tasked to fix it.
TDD in Practice
The next post will go into detail about what TDD actually looks like in practice. For those who have never worked this way it is definitely alien (and potentially excessive), but you can look at it as similar to language immersion: you dive in deep to figure it out, and then you know it forever.