In the past few years, I’ve done kind of a 180° on unit tests.
There are a lot of really easy ways to rationalize not testing your code, and I’m probably guilty of saying each of them at one point or another.
- “It takes too much time”
- “That’s what QA is for”
- “A passing test suite doesn’t guarantee that you don’t have bugs”
- “I tested it myself before I committed to master”
- “[some feature I’m working on] isn’t really testable”
For some engineers, I think the reluctance to embrace unit testing is basically just FUD. Like so many other things, testing seems scary if you haven’t done it before.
But it’s also really difficult to fully understand the benefits of testing unless you’ve worked on a project that has good tests. So it’s easy to see why — without fully understanding the upside — many developers regard unit testing as an unnecessary step.
It takes too much time
This argument is actually pretty rational if you don’t understand the long-term benefits of testing.
For a lot of the projects I work on, testing a new feature takes longer than actually implementing it. At first glance, that seems like a crazy way to invest my time.
But there’s an important piece of information missing here:
Over the long-term, having good tests will save you a huge amount of time.
The first time I ever refactored a piece of code with good test coverage, I actually felt a little bit guilty. It was almost too easy. Since I knew there was a safety net, I was free to try whatever I wanted. I didn’t have to remember all of the weird edge cases, because if I forgot them, the tests would fail.
Having good test coverage gives you a crazy amount of confidence going into a large refactor. It helps to ensure that you’re not introducing new bugs, which can ultimately save you a whole lot of time in the long run.
That’s what QA is for
Nope nope nope nope nope.
QA does not exist so that we can be lazy and inattentive to detail. They’re a final line of defense against bugs. And they work at a much, much higher level of abstraction than engineers do.
Imagine an app with an off-by-one error. Maybe you’re rendering a list of users, but accidentally omitting the last one.
For QA to catch this, they need to know what the list should look like, and then notice that there’s an entry missing. Essentially, they need to be looking for this exact bug.
Then they need to reproduce it. They need to document it and file an issue. In all likelihood, they need to show their screen to some engineer who doesn’t believe that they could possibly have written code that breaks the app.
And all of that time is wasted, because a simple unit test would have caught the bug. Not just this time, but every time. Instantly and automatically.
A passing test suite doesn’t guarantee that you don’t have bugs
Wearing a seatbelt doesn’t guarantee you won’t be horribly injured in a car accident. Should we abandon any safety measure that isn’t 100% effective?
I tested it myself before I committed to master
Of all these terrible excuses, this is the one I’m probably most guilty of having believed in.
Of course it works. Why would I push code that doesn’t work?
This argument is compelling because there’s a tiny grain of truth to it.
If you’re attentive to detail and thoroughly test your new feature in the browser (or via HTTP or whatever the interface is), that can be functionally equivalent to unit testing.
But it’s hard to be sure that your new code didn’t break some other, unrelated part of the app. And more importantly, nobody else gets the benefit of the manual testing you did. That process happened once, and now it’s essentially lost.
Unit tests, on the other hand, are forever. They don’t just guard against bugs right now, they guard against bugs in the future.
[some feature I’m working on] isn’t really testable
For the past two years, every single application or library I’ve worked on has had 100% coverage.
If you’ve never used a code coverage tool (I like Istanbul for JS), it’s essentially a process that instruments your code and then reports back on whether or not each line was executed while your test suite ran. If a line or expression was never run, it means you didn’t test it.
Anyway, in all of that time, I’ve only encountered one or two functions that couldn’t be written in a way that made them testable. Looking back, I suspect even those could have been handled with dependency injection.
Sometimes you have to rethink the problem. Occasionally you even need to settle on an interface that’s less than ideal. But at the end of the day, it’s extremely unlikely that you’re working on a new feature that is literally impossible to test.
If you’ve been convinced that testing is important but don’t know where to start, I’d recommend looking at some open-source projects. Most high-profile open-source repos have excellent coverage, and they can give you an idea of how to test your code effectively.
If you’re worried about getting buy-in from the rest of your team, here’s my recommendation:
Start writing tests for your own contributions. Don’t even ask permission. Just do it*.
I’d be willing to bet that the rest of your team will start to see the benefits before long.
*I did this at my last job, and it worked brilliantly. Within about a week, people were actually excited about it.