The Five Mantras of Testing

I used to avoid unit testing, and I kept finding excuses. “My code doesn’t fit with testing.” “I can’t unit test this; it needs a running database.” “I can’t test every case, and I don’t know what to test.” “There’s already so much untested code in the codebase, what’s the point in testing it now?”

What I finally understood is that I missed the TDD train back when I should have caught it, and I found it hard to learn as an autodidact.

What finally got me into it was Go. Having the testing framework baked into the stdlib and the toolchain really did it for me. I’ve been unit testing ever since, and reaping the many benefits. If you’ve been facing the same problems with the TDD learning curve, here are a few mantras I’ve been applying every day that changed my life.


The Test Is Your First Client

How do you check that a function or method you’ve written works? Do you run it, log the output, and make sure it matches your expectations? Do you step through it in a debugger to watch what happens to your inputs? Stop doing this. If you have known inputs and expected outputs for your function, don’t verify them by hand: write a test that does it for you. Even if you only have one case, code it; don’t just keep it in your head. It’s much easier to add cases to a pre-existing test than to start writing tests from scratch later.
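In Go, that single known case can go straight into a table-driven test, so adding the next case is a one-line change. Here’s a minimal sketch; the Add function and the calc package are hypothetical, and in practice the test would live in its own calc_test.go file:

```go
package calc

import "testing"

// Add is the (hypothetical) function under test.
func Add(a, b int) int { return a + b }

// TestAdd starts with the one case you would otherwise check by hand.
// Every new case is just another struct literal in the slice.
func TestAdd(t *testing.T) {
	cases := []struct {
		name string
		a, b int
		want int
	}{
		{"two positives", 2, 3, 5},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := Add(c.a, c.b); got != c.want {
				t.Errorf("Add(%d, %d) = %d, want %d", c.a, c.b, got, c.want)
			}
		})
	}
}
```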


If You Can’t Test It, Break It Into Pieces

Unit testing taught me to break behavior into smaller, easier-to-understand pieces. Have you ever looked at a piece of code and thought, “I can’t test this, it does too many things”? That usually means you can (and should) isolate the independent tasks, separate them into “units”, and compose those units to get the expected behavior. Then test each unit independently. You’ll find your code much easier to reason about, understand, read, reuse, and refactor. You’ll even find yourself getting better at thinking in terms of “mutability”, “const-ness”, and “pure functions”, since these concepts have important repercussions on how and what you test.
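Here’s what that splitting can look like in Go. Everything in this sketch is hypothetical: imagine one big function that read a file, parsed it, and summed the values, so it couldn’t be tested without disk I/O. Extracting the pure parts makes each unit trivially testable with a string literal:

```go
package invoice

import (
	"os"
	"strconv"
	"strings"
)

// ParseAmounts turns raw file contents into numbers. It's a pure
// function: testable with a string literal, no filesystem involved.
func ParseAmounts(data string) ([]int, error) {
	var amounts []int
	for _, line := range strings.Split(data, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		n, err := strconv.Atoi(line)
		if err != nil {
			return nil, err
		}
		amounts = append(amounts, n)
	}
	return amounts, nil
}

// Sum is another independent, pure unit.
func Sum(amounts []int) int {
	total := 0
	for _, n := range amounts {
		total += n
	}
	return total
}

// TotalFromFile is now just composition: read, parse, sum.
// Only this thin wrapper still touches the filesystem.
func TotalFromFile(path string) (int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	amounts, err := ParseAmounts(string(data))
	if err != nil {
		return 0, err
	}
	return Sum(amounts), nil
}
```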


In Tests We Trust

Exhaustive testing is impossible, but your end goal should be for your tests to give you a good enough picture that your codebase works as a whole. In particular, a passing test suite should make you 90% sure that you can merge your code. This means that not only should existing tests break when you’ve introduced a bug, but that all code you write should be tested well enough to give the next developer the same kind of assurance. It does not mean you should chase an arbitrary metric like “99% coverage”, or be afraid to commit your code if the tests don’t cover every corner case. In practice, no amount of testing or automation will prevent every bug, but it’s reasonable to work toward a test suite you can rely on to validate the behavior of your program.
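If you do want to see coverage as information rather than as a target, Go’s toolchain reports it with no extra dependencies:

```
go test -cover ./...
```

Treat the number as a conversation starter (“why is this package at 40%?”), not a finish line. Speaking of bugs…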


You Fix It, You Test It

So you tested everything and something still broke in production? Don’t panic; it’s to be expected. But there’s something to learn here: your tests weren’t covering the buggy case. Once you’ve fixed the bug, add a test case for that specific issue. It will verify that your fix actually prevents the bug, and it will keep the bug from quietly reappearing in a later commit. This is called a regression test (you’ll also hear “non-regression test”). The test itself also serves as a historical marker for the project, recording where and when you found a special case in your program’s behavior. The insights gained from fixing a bug are easier to share and discuss when there’s an explicit piece of code guarding against it.
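In Go, it helps to name the regression test after the issue it guards. Everything in this sketch is hypothetical (the bug, the issue number, the ParseAmount function), but the shape is what matters:

```go
package invoice

import (
	"strconv"
	"strings"
	"testing"
)

// ParseAmount parses a single amount. The (hypothetical) production bug:
// surrounding whitespace made parsing fail. The fix: trim before parsing.
func ParseAmount(s string) (int, error) {
	return strconv.Atoi(strings.TrimSpace(s))
}

// TestParseAmount_Issue42 pins the exact input that broke in production.
// If someone later removes the TrimSpace, this test fails and points
// straight back at the original issue.
func TestParseAmount_Issue42(t *testing.T) {
	const input = " -15\n"
	got, err := ParseAmount(input)
	if err != nil {
		t.Fatalf("ParseAmount(%q) returned error: %v", input, err)
	}
	if got != -15 {
		t.Errorf("ParseAmount(%q) = %d, want -15", input, got)
	}
}
```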


F**k it, we’ll do it live

Often, it will seem hard to test code that depends on “live” parts, i.e. dependencies outside of your code: a database, an HTTP API, a filesystem. These pieces are independent from your codebase, while the code you want to test performs, respectively, SQL queries, HTTP requests, and file I/O. You’ll hear a lot about mocking, dependency injection, setup and teardown. These are all important tools in the world of testing. But sometimes setting up a mock of your database is more work than writing the code and the tests combined. Don’t be ashamed to test against a live instance of the database, filled with test values. Just make sure the test setup is well documented and easily reproducible for other developers (see the sketch below). Some will argue this is borderline “integration testing”, and they’ll be right. But is the distinction that important? As long as you have a surefire, reproducible way to test your behavior, you’re one step closer to shipping your code. And good code at that. Remember: you aren’t shipping your tests.
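A minimal sketch of what that can look like in Go, assuming Postgres and the github.com/lib/pq driver (both are arbitrary choices here): the test documents its own setup and skips itself when no live database is configured, so it never breaks a teammate’s plain `go test` run.

```go
package store

import (
	"database/sql"
	"os"
	"testing"

	_ "github.com/lib/pq" // driver choice is an assumption; any SQL driver works
)

// TestInsertAndCount runs against a live database. Reproducible setup:
//
//	docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test postgres
//	export TEST_DATABASE_URL="postgres://postgres:test@localhost/postgres?sslmode=disable"
//	go test
func TestInsertAndCount(t *testing.T) {
	dsn := os.Getenv("TEST_DATABASE_URL")
	if dsn == "" {
		t.Skip("TEST_DATABASE_URL not set; skipping live-database test")
	}
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	// Temp tables are session-scoped in Postgres, so pin the pool to a
	// single connection to keep every statement on the same session.
	db.SetMaxOpenConns(1)

	if _, err := db.Exec(`CREATE TEMPORARY TABLE visits (id serial, url text)`); err != nil {
		t.Fatal(err)
	}
	if _, err := db.Exec(`INSERT INTO visits (url) VALUES ('a'), ('b')`); err != nil {
		t.Fatal(err)
	}

	var n int
	if err := db.QueryRow(`SELECT COUNT(*) FROM visits`).Scan(&n); err != nil {
		t.Fatal(err)
	}
	if n != 2 {
		t.Errorf("got %d rows, want 2", n)
	}
}
```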


These are a few of the guidelines I apply when writing tests. I used to dread testing because I didn’t know where a “unit” began and ended. I used to start writing tests and get “analysis paralysis” trying to cover every possible input and precondition. Following these few rules, I now feel confident delivering robust code, and I even have fun writing tests!

How did you start testing your code? Did you learn it in school? Did you have a mentor? What are your techniques for writing better unit tests? Comment below and let me know.



I am currently working on Panto. Panto is a modern, open-source monitoring solution, built for performance, that bridges the gaps across all levels of your company. The well-being of your infrastructure is everyone’s business. Keep up with the project.