Why You Don’t Write Automated Tests, Even Though You Can and Should

Automated tests. They’re the magical solution that promises to catch many types of bugs before they ever reach production. Each test executes one or more actions in your system and compares expected with actual behavior, way faster than any human tester could. They’re the secret ingredient of any high-quality software product.

Unfortunately, writing tests is so much work, and the business always pushes developers to deliver more features faster, so we don’t have time for them. Actually, hold on. Is that true?

During my first year of working in the industry, my perspective on automated tests changed drastically. I initially perceived the costs to be way higher than they actually were, and I underestimated the benefits. In this article, I want to share what I learned to help you deliver better software with the help of automated tests.

(General sidenote: I’m mostly talking about unit tests in this article. End-to-end tests are a slightly different beast because they test your system through its UI: they’re harder to write, slower to execute and harder to maintain.)

The Big Book of Excuses

If you’re reading this article as a developer, chances are that you care about your craft and are always looking for ways to improve. You’ve probably considered the possibility of writing automated tests before, but for seemingly rational reasons you might have decided against it. Let’s have a look at common reasons not to write tests and see how well they hold up.

“Manually testing my code is much faster!”

Writing tests is a time investment. Therefore, it should be justified. If it takes less effort to test manually, why would you write a program that does what you would do by hand? My first answer to that: regression. Ever seen a bug pop up in testing or production and thought: “But this worked when I wrote the code!” Yes, but then you did some light last-minute refactoring, forgot to retest and now it’s broken.

That’s the thing about manual tests: because you often don’t have time to manually retest every code path that could possibly be affected by a seemingly innocent code change, issues will easily slip past you. They only come up after you’ve already submitted your work for testing, or worse, after the bug has affected production. Fixing this type of bug takes away time you’d have otherwise spent on writing new features. It’s no fun.

Now let’s say you take some time to write automated tests instead. If you run an automated test every time a code change is made, this can easily amount to hundreds or thousands of test runs over the lifetime of your code. At this scale, running automated tests costs a small fraction of what the equivalent manual testing would. Run them while you’re implementing the feature. Run them to verify a quick code change. Run them to verify that heavy refactoring leaves your logic intact. Easy peasy. The cost of rerunning automated tests is so ridiculously low that you’ll run more tests more often and end up with better code.

Although automated tests have an initial setup cost, the cost of running them is negligible. If you intend to test something more than a couple of times over the lifetime of your code, the automated test is much cheaper.

By automating your tests you will avoid costly manual retests, discover issues immediately and spend less time fixing bugs. Developers can then spend their time focusing on building shiny new things. Meanwhile, testers can focus on their human strengths: finding creative ways to break the system instead of executing mundane test scenarios as if they were machines.

“Writing automated tests takes a lot of time!”

I have some good news for you: it gets better. You’re likely near the start of the learning curve. You will learn the common usage patterns of your testing framework and build a routine. You will implement methods that provide good dummy data and reuse them a lot across your test suite. You will find low-hanging fruit and write smoke tests. Start writing tests and you will become very efficient at it.
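
To make the “reusable dummy data” idea concrete, here’s a minimal sketch using Python’s built-in unittest. The Order model and the make_order helper are hypothetical names for illustration:

    import unittest
    from dataclasses import dataclass


    @dataclass
    class Order:
        customer: str
        amount: float

        def total_with_vat(self, rate: float = 0.21) -> float:
            # Return the order amount including VAT, rounded to cents.
            return round(self.amount * (1 + rate), 2)


    def make_order(**overrides) -> Order:
        # Reusable dummy-data helper: sensible defaults, overridable per test.
        defaults = {"customer": "Test Customer", "amount": 100.0}
        defaults.update(overrides)
        return Order(**defaults)


    class OrderTests(unittest.TestCase):
        def test_total_includes_vat(self):
            order = make_order(amount=10.0)
            self.assertEqual(order.total_with_vat(), 12.1)

Once a handful of helpers like make_order exist, each new test only costs a few lines. (You can run this sketch, and the ones below, with python -m unittest.)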

“The business always pushes me to do more in less time!”

Communicating the importance of investing in technical quality to a manager can be tricky. It comes down to this: you cannot have speed without quality. If you hire a contractor to build a house under a tight deadline, you’re gonna end up with crappy electrical wiring and other quality issues. You’ll have to live with safety risks, unexpected maintenance costs and a bad feeling about your investment. You want to focus on buying cool stuff for your new house, but instead your time and money will go towards fixing quality issues.

The business doesn’t want the crappy electrical wiring. They probably just don’t realize that they’re pushing you in that direction. Give them the benefit of the doubt. They know that you need to build your product on a stable foundation to keep customers happy and based on that common ground, you can level with them.

Are you absolutely sure that they’re not interested in building quality products? If that’s the case, abandon ship and get a new job. Building and maintaining bad products will burn you out, so run while you can.

“Automated tests take too long to run!”

You should be able to run a test suite in a decent amount of time. If it takes you more than 15 seconds to run one test, there’s an inefficiency that needs to be fixed.

Most of my experience writing tests comes from developing and maintaining Django applications. When I wrote my first unit test, I noticed that the test runner would take a very long time to set up a clean MySQL database. The slow test performance was discouraging me from writing tests.

You have to find a way to clear these roadblocks because they increase the cost of running your tests. That makes you less likely to run them, which in turn increases the number of bugs that slip past you and hurts the quality of your code. A dev colleague of mine found out that running our tests against a SQLite database drastically improved test performance, so we changed our test configuration to use SQLite. Boom. Roadblock cleared.
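
For the curious, here’s a minimal sketch of that kind of configuration change, assuming a standard Django settings.py (the database name is a placeholder):

    import sys

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "myapp",
            # ... user, password, host ...
        }
    }

    # When the test runner is active, swap in a fast in-memory SQLite
    # database instead of MySQL.
    if "test" in sys.argv:
        DATABASES["default"] = {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": ":memory:",
        }

One caveat: SQLite doesn’t behave exactly like MySQL in every edge case, so you’re trading a bit of production fidelity for a lot of speed.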

“I can’t easily test this piece of functionality!”

This can have a lot of causes, many of which are symptoms of code quality issues. It could be an architectural issue. Perhaps the piece of code you’re trying to test does too many things at once and doesn’t have a proper separation of concerns. Separating concerns will improve your code in lots of areas, including its testability.
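
Here’s a minimal sketch of what that can look like, with hypothetical names: once the pure discount calculation is pulled out of a function that also fetched and saved data, it becomes trivial to test in isolation:

    import unittest


    def calculate_discount(total: float, is_returning_customer: bool) -> float:
        # Pure logic, extracted from a function that used to also fetch
        # the customer and save the order. No database or network needed.
        return round(total * 0.10, 2) if is_returning_customer else 0.0


    class DiscountTests(unittest.TestCase):
        def test_returning_customer_gets_ten_percent(self):
            self.assertEqual(calculate_discount(50.0, True), 5.0)

        def test_new_customer_gets_no_discount(self):
            self.assertEqual(calculate_discount(50.0, False), 0.0)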

If you’re adding tests for legacy code, perhaps you can’t deduce what the expected behavior of a piece of code is. This is a big red flag. If you don’t know what to test for and your colleagues don’t have any clue either, you’re dealing with technical debt. If this happens a lot and you don’t find a way to address the debt, the quality of your product will start declining dramatically. But that’s a whole other topic.

Low testability is not always a code quality issue. For example, integrations with external systems are notoriously tricky to test, regardless of how clean your code is. Automated tests often bring the system into a specific, predetermined state before testing its behavior. Different input can lead to different behavior, meaning the test could randomly pass or fail depending on the input. If you’re testing against an external system, you’ll always have this problem because its behavior and current state are rarely controlled by you. In addition, you need to connect to a “sandbox” version of the external system if you don’t want to risk affecting production with your test suite, and that’s not always an option.

For these reasons, you could consider mocking the communication with the external system. This means that you won’t test the integration between the two systems automatically, but you will still be able to test how your system would react to certain input from the external system.
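
Here’s a minimal sketch of that idea using unittest.mock (the payment gateway and process_order function are hypothetical): the mock pins the external system’s response, so the test is deterministic no matter what the real system would do:

    import unittest
    from unittest.mock import Mock


    def process_order(gateway, amount):
        # Hypothetical code under test: it depends on an external payment
        # gateway whose behavior and state we don't control.
        response = gateway.charge(amount)
        return "failed" if response["status"] == "declined" else "paid"


    class CheckoutTests(unittest.TestCase):
        def test_declined_card_marks_order_failed(self):
            # Replace the external system with a mock that returns a fixed,
            # predetermined response.
            gateway = Mock()
            gateway.charge.return_value = {"status": "declined"}

            self.assertEqual(process_order(gateway, 10.0), "failed")
            gateway.charge.assert_called_once_with(10.0)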

“I have a lot of existing code and I don’t know where to start!”

Start by focusing on the most problematic areas and areas that change most often. Every time you find a bug, write tests for the broken functionality first. Running the tests should produce a failure. By doing this, you can verify that your diagnosis is correct. Now fix the bug and rerun the tests to confirm that the issue has been taken care of.
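
As a sketch of that workflow, consider a hypothetical off-by-one bug in a pagination helper. The test below pins the expected behavior and fails against the buggy code, confirming the diagnosis; once the fix is in, it passes and guards against regression:

    import unittest


    def page_count(item_count: int, page_size: int) -> int:
        # Buggy version: floor division loses the last partial page.
        # The fix is ceiling division: -(-item_count // page_size).
        return item_count // page_size


    class PaginationTests(unittest.TestCase):
        def test_partial_last_page_is_counted(self):
            # 5 items with a page size of 2 should give 3 pages; the buggy
            # version returns 2, so this test fails until the bug is fixed.
            self.assertEqual(page_count(5, 2), 3)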

You can follow a similar workflow with change requests: write tests for the affected functionality, ensure they pass, save your work. Then update the tests to reflect the new behavior. If you run them now, they should fail, because they describe the new behavior while the code still implements the old one. Update your code to match the newly desired behavior and finally rerun the tests to confirm that your changes are correct.

This is just the beginning

Once you get into the flow of automated testing, you’ll probably start asking yourself several practical and ideological questions. Can my IDE visualize which lines of code are covered by an automated test and which ones aren’t? Do I want to aim for 100% test coverage or should I take a more pragmatic approach? Can we integrate test coverage statistics in our code reviews? When are unit tests good enough and when do I want to use full-blown end-to-end tests?

I hope to tackle some of these questions in a future article.

Conclusion

Automated testing looks hard and costly if you’re not used to that way of working. In reality, it helps you deliver stable software faster — and keeps your software stable as it evolves. It reduces the time spent on bugs, which gives developers and testers time to do more interesting things. It improves the technical quality of your product, which keeps the business and your customers satisfied. Everyone wins!
