One guideline to test them all — Part 3

Is testing really necessary?

Mehmet Yatkı
5 min read · Sep 25, 2019

Nope, not always.

Photo by Evan Dennis on Unsplash

Yes, you read that right. It probably looks strange coming from a testing advocate, but I'd like to talk about this topic real quick here. (Also, don't get too excited: it may not be always, but most of the time we'll still need to write tests :)

Now, the next couple of paragraphs might sound a bit controversial and overly general, but they are all based on my observations. Feel free to ignore them if you haven't had a similar experience with testing; I'd also love to hear about your experiences in the comment section.

When I got into software development and eventually learned about testing, I kept hearing things like "testing is very important", "testing improves code quality", "testing makes a development process agile", etc. However, I was involved in many projects that had "tests" but whose code quality wasn't that good, and whose processes weren't that agile either. I also saw brilliant projects without any tests: some were written very well, and some were still agile. This boggled my mind for years :) Why do so many projects struggle to have good tests while everybody agrees testing is important?!

Because, unfortunately, we tend to write tests for the sake of having tests, without putting enough effort into thinking about what value those tests bring, if they bring any at all.

In my opinion, testing by itself doesn't improve code quality. I would say static analysis tools and experience improve code quality; testing just catches bugs and code problems. A piece of code without bugs is not necessarily high-quality code. It's just code that works, which is fine. What I mean by high-quality code is code that follows best practices and is easy to understand and extend. Of course, I'd appreciate it if it also works :)

I would also argue that a project can only be agile with good project planning (proper task definitions/grooming/planning) and good collaboration/automation tools like GitHub/Slack/Jenkins, etc.

So, before going into more detail on anything, I’d like to raise the most important question:

Why do we test our products?

We test our products to ensure we deliver what we promised to deliver, while we are trying to deliver more.

First of all, I'd like to highlight the word "promise" here, because that is what we do each time we publish a product. We promise its users that our product is going to work as described in the documentation. Of course, we can allow users to verify that by making our product open source. However, if we provide a closed-source product, then users have to put their trust in us.

Even if we publish our product as open source and provide full test coverage, users may still want to get some sort of assurance before they trust and use our product.

So what actually matters is gaining and maintaining that trust. Writing tests is just one way to do it. If your product never fails its users, they will trust you more, and it will only get easier to attract new users.

And of course, there are other ways to gain and maintain trust. For example, as the product owner, you may as well assure your users/customers that you'll reimburse their monthly fee and pay an extra $1,000 for any service failure/interruption. Believe me, if their lives don't depend on it, your users will be fine with that. Testing your product before publishing it, however, would be a cheaper solution for you.

Secondly, I'd like to emphasize the "…, while we are trying to deliver more" part. You may have a product that was written without tests and works like a charm. However, one day you decide to add more features. Now, what could go wrong? (Spoiler alert: from now on, this question will be the backbone of this article series.)

There are plenty of things that could go wrong. Even if you are the only developer or user of the product, you can forget the code you wrote a couple of months ago. Remember:

Any code of your own that you haven’t looked at for six or more months might as well have been written by someone else. — Eagleson’s law

You may break something that was working. And depending on the number and complexity of the features you promised to deliver, it will only get harder to keep those promises.
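That "breaking something that was working" scenario is exactly what a regression test guards against. Here is a minimal sketch in Python; the `slugify` helper and its behavior are hypothetical, invented purely for illustration:

```python
def slugify(title: str) -> str:
    """Hypothetical helper: turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify_keeps_its_promise():
    # This test pins down the promised behavior. If a future "improvement"
    # changes the output, the test fails loudly instead of silently breaking
    # every published URL that depends on the old slugs.
    assert slugify("One Guideline To Test Them All") == "one-guideline-to-test-them-all"

test_slugify_keeps_its_promise()
print("ok")
```

The point isn't the helper itself; it's that six months from now, the test still remembers the promise even if you don't.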

If you are part of a development team, you'll definitely break something that somebody else built, because if you don't, they will. This has nothing to do with the competency of the developers; it's the nature of our job. We are humans! Our biggest strength is learning from our mistakes, and we keep repeating them until somebody solves them with automation.

So basically, you may skip writing tests under one of the following conditions:

  • if you are not planning to add more features to your product, and you have confirmed that what you initially delivered works fine.
  • if you are the only user of your product, and you think you know what you are doing.
  • if you follow semver rules (publishing a "pre-release") or give a disclaimer stating that the product may be unstable, when that's the case.
  • if nobody's life or business depends on your product, and a failure would be tolerable and wouldn't hurt its users, you, your company, or your team in any way.
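On the semver point: a pre-release version carries a hyphenated identifier after the MAJOR.MINOR.PATCH part, which is the version string itself telling users "this may be unstable". A rough sketch of recognizing the common forms (real semver allows more general dot-separated identifiers than this simplified regex):

```python
import re

# Simplified check for common semver pre-release tags like
# 1.2.0-alpha, 1.2.0-beta.3, or 1.2.0-rc.1. The full semver grammar
# is broader; this only covers the usual alpha/beta/rc convention.
PRE_RELEASE = re.compile(r"^\d+\.\d+\.\d+-(?:alpha|beta|rc)(?:\.\d+)?$")

print(bool(PRE_RELEASE.match("1.2.0-beta.3")))  # pre-release
print(bool(PRE_RELEASE.match("1.2.0")))         # stable release
```

Tagging a release `1.2.0-beta.3` instead of `1.2.0` is itself a form of disclaimer, which is why it earns a spot on the list above.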

However, not having enough resources or time is never an excuse to avoid writing tests.

The purpose of this chapter was not to give you reasons to avoid writing tests. My goal with this thought experiment is to show you that you are allowed to, and should, question anything.
