This article outlines some test automation best practices and answers the following questions:
- When and why should you implement test automation?
- How do you determine the right testing-technique combination?
- What methods are most common for the different testing levels?
When and Why: Test Automation Best Practices
It rarely makes sense to test a system that hasn’t changed, unless your goal is to learn more about it: you can experiment with test cases to discover behavior that the system’s documentation doesn’t describe.
There are two reasons for testing:
- Testing enables change
- Tests describe the behavior of the system
The ultimate test automation goal is to ship faster and more often, with no bugs. Nobody wants their release delayed because the test cycle took too long, but shipping software without first validating that its acceptance criteria have been met is a bad idea. Keep in mind, though, that when software is being refactored, automated tests can require a lot of maintenance, and building and maintaining them can be too expensive in some cases. Weigh these costs when you decide how to implement your automation.
5 Testing Considerations
No written law dictates when to implement what type of test. No magic formula dictates how to test a particular feature either. Testing an entire feature is usually done using multiple tools. After all, you can’t build a house using only a hammer; you’ll need a saw and a drill, too. And just because you can’t cut a plank with a hammer, doesn’t mean the hammer isn’t a useful tool.
Different types of testing methods solve different kinds of problems, and any of them can perform poorly when applied in the wrong situation. The challenge is to identify the issues at hand and find the right combination of testing techniques that will cover all the functional and technical aspects of the software.
Here are five things to consider when determining your testing method combination:
- TRUST: Is the test trustworthy?
- COST: How much will it cost to build and maintain these tests?
- SPEED: Do the tests run fast?
- RELIABILITY: Is each test reliable?
- TARGETED: Will the test(s) point you in the right direction?
1) TRUST: Is the test trustworthy?
Can you trust the test to tell you when the system is broken, preferably in functional terms? If the test fails, which requirement is no longer met? When the functional needs of the system change, is that block of code still relevant? A trustworthy test helps you decide whether the test itself is wrong and needs to be removed or changed, or whether the system is flawed and you need to fix a bug instead.
2) COST: How much will it cost to build and maintain these tests?
Why automate tests if doing so slows you down? Make a business case. How many tests of this type are needed to cover all the test cases? How much will that cost? How often will they run? And what is the cost of changing them when the software changes?
(Time spent building each test × number of tests) + (times changed × cost of change) = good or bad idea.
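The business case above can be sketched as a small calculation. The function names and all the numbers below are illustrative assumptions, not figures from the article:

```python
# Hypothetical cost model for deciding whether a test type pays off.
# All names and numbers here are illustrative assumptions.

def automation_cost(build_hours_per_test: float, num_tests: int,
                    expected_changes: int, hours_per_change: float) -> float:
    """Total effort: (build time * number of tests) + (changes * cost per change)."""
    return build_hours_per_test * num_tests + expected_changes * hours_per_change

def manual_cost(hours_per_run: float, runs: int) -> float:
    """Effort of running the same checks by hand for every release."""
    return hours_per_run * runs

# Example: 20 tests at 2 hours each, ~10 maintenance rounds of 3 hours,
# versus a 4-hour manual regression pass repeated over 30 releases.
auto = automation_cost(2.0, 20, 10, 3.0)   # 70 hours
manual = manual_cost(4.0, 30)              # 120 hours
print(f"automation pays off: {auto < manual}")  # prints "automation pays off: True"
```

The point is not the exact numbers but the shape of the trade-off: frequent runs favor automation, while frequent change to the software under test drives its maintenance cost up.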
3) SPEED: Do the tests run fast?
As the clean coder “Uncle Bob” (aka Robert C. Martin) said, “What do we do with slow tests? Exactly…We don’t run them.”
You might run slow tests during the build and release, but eventually, you will disable them once their performance becomes an obstacle. Consider how much time it will take to run all your tests together (including integration tests and an end-to-end test). Be prepared to go for a different strategy if it takes longer than five minutes — max!
4) RELIABILITY: Is the test reliable?
Nothing is worse than a flaky test: it fails intermittently, creating a problem that isn’t reproducible. You know how hard those are to fix.
5) TARGETED: Will the test point the team in the right direction?
Testing is not just about pointing out the things that are broken. It’s about determining the state of the system and responding to issues as quickly as possible. To do this, your tests must be targeted and explicit about what’s broken and provide very specific feedback. Create tests that do both by taking trust, cost, speed, and reliability into consideration as well.
5 Testing Methods
Different testing methods can be applied to different testing levels. Here are five common ones:
Unit testing can be applied when testing small pieces of code. So, for example, if you were baking a pie, a unit might refer to the sugar. It’s one functional piece of code that is meaningless outside its context. You could, for example, test whether the white powder you’re about to add to your pie is actually sugar by checking whether it tastes sweet. A unit can (and should) be tested, but that test won’t guarantee apple pie. This test provides documentation on a granular level, which will probably be useful to the developer but meaningless to the product owner.
Run these tests locally and in the build.
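The “is this powder sugar?” example can be sketched as a minimal unit test. The `Ingredient` class and its attributes are invented here for illustration:

```python
# A minimal unit-test sketch of the "is this white powder sugar?" example.
# The Ingredient class and is_sugar() are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Ingredient:
    name: str
    taste: str

def is_sugar(ingredient: Ingredient) -> bool:
    """A unit under test: meaningless on its own, but verifiable in isolation."""
    return ingredient.taste == "sweet"

def test_white_powder_is_sugar():
    assert is_sugar(Ingredient("sugar", "sweet"))
    assert not is_sugar(Ingredient("salt", "salty"))

test_white_powder_is_sugar()
```

Note how granular the assertion is: it documents one property of one unit, and a passing result says nothing about whether the pie as a whole will turn out well.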
Let’s continue with the pie metaphor. You’ve tested the sugar, the apples, the butter, the flour, and the eggs. Will all these ingredients result in a pie after they’ve been stirred and baked? And does it taste like apple pie? This kind of testing doesn’t care about which type of butter or dough you used; it only tests the business value.
Run these tests locally and in the build.
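A test at this level exercises the combined behavior and asserts only on the outcome. The `bake` function below is a stand-in invented for this sketch:

```python
# A sketch of testing combined behavior rather than individual units.
# The bake() function is an illustrative stand-in for real application code.

def bake(ingredients: set[str]) -> str:
    """Combine ingredients; only the right set yields an apple pie."""
    required = {"sugar", "apples", "butter", "flour", "eggs"}
    return "apple pie" if required <= ingredients else "mess"

def test_ingredients_combine_into_apple_pie():
    # The test cares about the outcome (business value),
    # not which brand of butter or type of dough was used.
    assert bake({"sugar", "apples", "butter", "flour", "eggs"}) == "apple pie"
    assert bake({"sugar", "apples"}) == "mess"

test_ingredients_combine_into_apple_pie()
```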
Imagine a bakery that sells all sorts of pies. Will a customer’s request for an apple pie result in them actually receiving an apple pie? The main concern of an end-to-end test is to see whether all components can interact with each other. It tests the entire business flow from beginning to end.
You might conclude from the considerations above that you need to be very careful about when to apply an end-to-end test. Do appreciate that end-to-end testing can be a very effective way to verify that systems interact correctly; just make sure you don’t have too many.
Run these tests in your release pipeline after the system has been deployed, or locally, depending on your architecture.
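An end-to-end test normally drives the deployed system through its real public interface (an HTTP API, a UI, a queue). To keep this sketch self-contained, the bakery’s interface is faked in-process; the `Bakery` class and its `order` method are assumptions for illustration:

```python
# An end-to-end test sketch: exercise the whole business flow through
# the system's public interface. Bakery is an in-process stand-in for
# a deployed system; its API is an illustrative assumption.

class Bakery:
    """Stand-in for the deployed system's public interface."""
    def __init__(self) -> None:
        self._stock = {"apple pie": 3, "cherry pie": 1}

    def order(self, pie: str) -> str:
        if self._stock.get(pie, 0) > 0:
            self._stock[pie] -= 1
            return pie
        return "sold out"

def test_customer_receives_an_apple_pie():
    bakery = Bakery()
    # The full flow: place the order, receive the product.
    assert bakery.order("apple pie") == "apple pie"

test_customer_receives_an_apple_pie()
```

In a real pipeline the `Bakery` object would be replaced by a client talking to the deployed environment, which is exactly what makes these tests slow and worth rationing.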
Smoke tests are shallow, technical tests that run after deploying a new release. Smoke tests identify generic flaws (configuration, permissions, the proper .NET Framework version, among others). If the smoke tests don’t pass, the system is broken and the release should be rejected or rolled back.
Run these tests only in your release pipeline after the system has been deployed. If the different services are unable to communicate with one another, there’s a 90 percent chance the problem is of an infrastructural nature. So there’s no use in running these tests before the system has been deployed.
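A smoke-test stage can be as simple as a handful of shallow environment checks. The check functions and names below are illustrative assumptions, not a prescribed set:

```python
# A smoke-test sketch: shallow checks run right after deployment.
# The specific checks (config file, env var) are illustrative assumptions.

import os

def config_file_present(path: str = "app.conf") -> bool:
    """Was the configuration deployed alongside the application?"""
    return os.path.exists(path)

def required_env_set(name: str = "DATABASE_URL") -> bool:
    """Is the environment wired up (e.g. connection strings)?"""
    return name in os.environ

def run_smoke_tests(checks: dict) -> dict:
    """Run each named check; any failure should fail the release."""
    return {name: check() for name, check in checks.items()}

results = run_smoke_tests({
    "config present": config_file_present,
    "database url configured": required_env_set,
})
release_ok = all(results.values())
```

Because these checks probe the deployed environment itself, running them earlier in the pipeline would only tell you about the build machine, not about the release.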
This very technical type of testing is done to ensure the system’s ability to exist amongst other (external) systems.
Run these tests only in your release pipeline before deploying the system.
Testing can be challenging. No flowchart dictates which tests to implement in a given situation; it takes time and practice to get it right. The right combination of automated tests will run fast, test behavior, and allow for refactoring. It will provide targeted feedback for a faster response time. To determine your test automation approach, analyze every situation and identify the challenges you need to address. By keeping these test automation best practices in mind, your software releases can be as easy as pie!
Great… Now how do I put that into practice?
Keep reading… Different types of testing put together can be an effective test automation strategy. There’s no silver bullet. The strategy that applies depends on the situation. There is a common approach, though. I’ve tried to describe it in this blog: A simple, effective test automation strategy.