All About Test Cases

Sohit kumar
10 min read · Dec 23, 2019


When I started my career I was asked to write test cases for my code. Like everyone, or at least like most developers, I wondered why I needed to write test cases if I had already tested my code. As time passed and I gained experience, I got the hang of writing test cases, but a few questions remained unanswered.


Fast forward a few years: I read books like Clean Architecture, Building Microservices, and The DevOps Handbook. One thing they all had in common was that they spoke about writing test cases.

While working on a few projects I had time to set up a new one, so I put together my thoughts and notes from these books to answer a few questions around testing:

Why do we need test cases?

What are different types of test cases?

What is the scope of the test cases?

How many test cases should we write?

Who takes ownership of the end-to-end test?

How long should test cases take to run?

How do we run and integrate test cases into our pipeline?


Why do we need test cases?

Every organization wants to reduce the lead time of its projects. We want to deploy changes to production as soon as possible, which means we want to continuously integrate our code and verify that everything works as expected.

Test cases give us confidence that when we add new code, both the existing features and the new ones will work in production as expected.

Without automated testing, the more code we write, the more time we need to test it.

If there is a problem, we want to detect it early, when fixes are cheapest and fastest.

Test cases enable our teams to safely develop, test, and deploy code to production, delivering value to the customer.

We want to get feedback from tests as fast as possible and fix problems as soon as they appear.

The key idea is that we should be able to release changes as soon as possible.

What are the different types of test cases?

Business facing

Automated acceptance testing (did we build the right thing?)

Manual testing (how can I break the system?)

Technology facing

Automated unit testing (did we build it right?)

Automated non-functional requirement testing (response time, performance, security)

Unit test cases

A unit test is about testing a single method or function. We want numerous such tests to give us quick feedback and catch most of the bugs, assuring developers that their code behaves as designed.

This also helps us refactor with confidence, knowing the small-scoped tests will catch bugs if we make a mistake.

We should keep unit tests stateless and stub out external dependencies, as in the sketch below.
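
To make this concrete, here is a minimal sketch of such a unit test in Python with pytest-style asserts. The PriceCalculator class and its exchange-rate client are hypothetical examples, not from any specific project; the point is that the external dependency is stubbed, so the test is fast, stateless, and pinpoints a single unit.

```python
# Hypothetical example: a unit test for a single class, with the
# external exchange-rate client stubbed out so the test stays fast
# and stateless.
from unittest.mock import Mock


class PriceCalculator:
    def __init__(self, rate_client):
        self.rate_client = rate_client

    def price_in(self, amount_usd, currency):
        rate = self.rate_client.get_rate("USD", currency)
        return round(amount_usd * rate, 2)


def test_price_is_converted_with_current_rate():
    # Stub the external dependency instead of calling a real service.
    rate_client = Mock()
    rate_client.get_rate.return_value = 0.92

    calculator = PriceCalculator(rate_client)

    assert calculator.price_in(100, "EUR") == 92.0
    rate_client.get_rate.assert_called_once_with("USD", "EUR")
```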

Service Tests/Acceptance Tests

A service test exercises a single service in isolation so we can find and fix problems faster. To achieve isolation, we stub out all of the service's external collaborators.

They have fewer moving parts, so they are less brittle than larger-scoped tests.

Acceptance tests exercise the application as a whole to ensure that higher-level functionality works as expected.

Once the acceptance tests pass, the build is available for manual/integration testing. A sketch of a service test follows below.
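
Here is a rough sketch of a service test, assuming a small Flask service whose only external collaborator (an inventory client) is stubbed. The endpoint and field names are made up for illustration; the essential idea is that the service is exercised through its API boundary while everything outside that boundary is faked.

```python
# Hypothetical sketch: an acceptance/service test that exercises one
# service through its HTTP API, with its external collaborator (an
# inventory service client) replaced by a stub.
from unittest.mock import Mock

from flask import Flask, jsonify


def create_app(inventory_client):
    app = Flask(__name__)

    @app.get("/products/<product_id>/availability")
    def availability(product_id):
        in_stock = inventory_client.stock_level(product_id) > 0
        return jsonify({"productId": product_id, "inStock": in_stock})

    return app


def test_product_availability_endpoint():
    # Stub the collaborator so the service is tested in isolation.
    inventory_client = Mock()
    inventory_client.stock_level.return_value = 3

    client = create_app(inventory_client).test_client()
    response = client.get("/products/42/availability")

    assert response.status_code == 200
    assert response.get_json() == {"productId": "42", "inStock": True}
```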

End-to-End Test/Integration Test

End-to-end tests run against the entire system.

They ensure our application interacts correctly with other real services, as opposed to stubbed interfaces.

If we find that writing acceptance or unit tests is too hard, it usually means our architecture is tightly coupled.
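
For contrast, an end-to-end test might look roughly like the sketch below: it exercises one journey against a fully deployed environment over HTTP, with no stubs. The base URL, endpoints, and order fields are illustrative assumptions, not part of the article.

```python
# Hypothetical sketch: an end-to-end journey test that runs against a
# fully deployed environment (real services, no stubs).
import os

import requests

BASE_URL = os.environ.get("E2E_BASE_URL", "https://staging.example.com")


def test_customer_can_place_an_order():
    # One core journey: create an order, then read it back.
    created = requests.post(
        f"{BASE_URL}/orders",
        json={"customerId": "c-1", "items": [{"sku": "sku-1", "qty": 1}]},
        timeout=30,
    )
    assert created.status_code == 201

    order_id = created.json()["orderId"]
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=30)
    assert fetched.status_code == 200
    assert fetched.json()["status"] in {"CREATED", "CONFIRMED"}
```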

Performance Tests/Non-functional Requirement (NFR) Tests

Performance test cases validate the entire application stack (code, storage, network) as part of the deployment pipeline so we can detect problems early. They include tests for NFRs (e.g. availability, capacity, security, scalability, latency), security checks, and environment configuration checks to ensure everything is correct and works as expected.

We should have a benchmark, for example that performance must not degrade by more than 2% from the baseline, and fail the build if it falls beyond that threshold (see the sketch below). Typical regressions: database query time increasing non-linearly, or a code change increasing the number of database calls.

Build a performance test environment with dedicated resources/hardware before the project starts.
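
A minimal sketch of the threshold idea follows, assuming a hypothetical search endpoint and an agreed p95 latency baseline; the URL and numbers are placeholders. A real performance test would use a proper load-testing tool, but the pass/fail logic is the same.

```python
# Hypothetical sketch: fail the pipeline when measured latency
# regresses more than an agreed threshold against a stored baseline.
import time

import requests

BASELINE_P95_MS = 180.0   # agreed baseline from the performance environment
MAX_REGRESSION = 0.02     # fail if we are more than 2% slower than baseline


def measure_p95_latency_ms(url, samples=50):
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[int(len(timings) * 0.95) - 1]


def test_search_latency_has_not_regressed():
    p95 = measure_p95_latency_ms("https://perf.example.com/search?q=book")
    assert p95 <= BASELINE_P95_MS * (1 + MAX_REGRESSION), (
        f"p95 latency {p95:.1f}ms exceeds baseline {BASELINE_P95_MS}ms "
        f"by more than {MAX_REGRESSION:.0%}"
    )
```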

What is the scope of the test cases?

Increasing the scope of a test gives us more confidence that the given functionality will work. But when it breaks, it is harder to determine which component has broken.

Test case Pyramid

As we go up the pyramid, the scope of the tests increases, and failures become harder to identify and fix.

As we go down the pyramid, tests become faster and we get faster feedback. We find broken functionality sooner, CI builds run faster, and we are less likely to move on to a new task without knowing whether anything is broken.

How many test cases should we write?

Unit test cases are faster and have a smaller scope. Hence, we want many unit test cases capturing many scenarios: they give us faster feedback on what is broken, and we know exactly which functions/classes we need to fix.

The number of test cases should decrease going up in the test pyramid.

Test journeys, not user stories

If we add end-to-end tests for every story, we will end up with a bloated test suite and a poor feedback cycle. Focus on a few core journeys that exercise the whole system. Journeys not covered by end-to-end tests should be tested against services in isolation, as part of unit/acceptance tests.

An inverted test pyramid is an anti-pattern

Projects with many large-scoped tests have a slow test suite, which takes longer to run and has a longer feedback cycle. When a large-scoped test fails, the build can remain broken for a long time while we try to find the root cause of the failure. We then fix the build just before deployment and end up releasing a large amount of untested code, causing production issues.

Once an end-to-end test becomes a bottleneck, replace it with smaller-scoped tests.

How long should test cases take to run?

Keep the unit test suite within 10 minutes.

Integration test cases can take a couple of hours.

The idea is that we want feedback fast and want to know about a problem as soon as possible. The earlier we get the feedback, the easier it is to fix.

We should run the faster test cases first and then the slower ones, in the sequence below:

Unit Tests -> Acceptance Tests -> Integration Tests

As unit test cases are large in number and small in scope, they should run fast enough to tell us about a problem quickly. One simple way to stage this ordering is sketched below.
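
Assuming a Python/pytest codebase, tests can be tagged by scope so the pipeline runs the fast stages first. The marker names below (unit, acceptance, integration) are just a convention chosen for this sketch.

```python
# Hypothetical sketch: tag tests by scope so the pipeline can run the
# fast stages first. The marker names are a convention chosen for this
# example and would be registered in pytest.ini to avoid warnings.
import pytest


def apply_discount(price, rate):
    # Trivial stand-in for production code, only for the example.
    return price * (1 - rate)


@pytest.mark.unit
def test_discount_calculation():
    assert apply_discount(100, 0.25) == 75.0   # fast, no I/O


@pytest.mark.acceptance
def test_order_service_api():
    ...  # service tested in isolation, collaborators stubbed


@pytest.mark.integration
def test_checkout_journey():
    ...  # slowest stage, runs against a real environment


# Run stages fastest-first so feedback arrives as early as possible:
#   pytest -m unit
#   pytest -m acceptance
#   pytest -m integration
```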

End-to-End tests are tricky

These tests have a larger scope and give more confidence, but they are slower and their failures are harder to diagnose.

Let's say that in a microservice architecture we want to release the customer service. Which versions of the other services should we run the tests against? What if another service is being deployed at the same time?

A standard way to solve this is a fan-in model: whenever any service changes, run the end-to-end tests against the latest versions of the other services.

But there is a problem

Flaky and brittle tests?

The many moving parts in an end-to-end test can introduce failures where the functionality under test is fine but something else is broken (like another service being down).

When we detect flaky tests, we should remove them. Otherwise, over time we become accustomed to failing tests and no one fixes them. The build then gets fixed at the last moment, resulting in a large deployment that causes issues in production.

It is better to check whether we can replace a flaky test with smaller-scoped tests.

Who writes end-to-end tests?

We need to own the health of the whole suite. If everyone adds test cases without coordination, we get an explosion of test cases (a test snow cone), and no one cares about a failed test, assuming it is someone else's issue.

Have a separate team for test cases?

No!!

Having a separate team write test cases distances developers from the tests of their own code, leaving them with little insight into the tests. Cycle/lead time increases due to hand-off delays, as developers wait for the test cases to be written.

We don't want to duplicate effort, and we don't want to centralize it either. What should we do?

Treat the end-to-end test cases as a shared codebase with joint ownership. Teams are free to add test cases, and ownership of the health of the test suite is shared across services.

How long for an end-to-end test?

The slowness of end-to-end tests can be a major problem. Combined with flakiness, it is even more harmful, because it delays feedback. If a test breaks a day or two after the change, the developer has to switch context to fix it.

Remove duplicate test cases and test cases that are no longer needed.

If test cases are slow and take long to run, a great "pile-up" can happen: while someone is fixing the existing test cases, other features keep arriving and add to the queue. Hence, the build can remain broken for a long time.

Consumer-driven test cases

Consumer-driven tests define and capture the expectations a consumer has of a service.

We should run these tests as part of the CI pipeline to ensure a change does not break any consumer's contract.

If a service has two consumers, two sets of tests must be created, matching the expectations of each consumer individually. The consumer and provider teams must sit together to collaborate on writing the test cases.

Consumer-driven test cases sit at the same level as service tests but have a different focus. If a test breaks, we know which consumer's contract has been broken; accordingly, we can fix it or start discussing the problem. These tests are faster and can replace end-to-end tests. A hand-rolled sketch of the idea follows.
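
Below is a hand-rolled sketch of a consumer-driven contract test (tools such as Pact formalize this pattern). The consumer name, provider URL, and field names are illustrative assumptions; the key point is that the consumer's expectations are captured as a test that runs in the provider's pipeline.

```python
# Hypothetical sketch of a consumer-driven contract test. The consumer
# team records the fields it relies on; the provider's pipeline runs
# this test against the provider's real response.
import requests

# Expectation captured by the "web checkout" consumer team.
CHECKOUT_CONSUMER_CONTRACT = {
    "endpoint": "/customers/{id}",
    "required_fields": {"id": str, "email": str, "loyaltyPoints": int},
}


def test_customer_service_honours_checkout_contract():
    # The URL would point at the provider deployed in a test
    # environment; the value here is illustrative.
    provider_url = "https://customer-service.test.example.com"
    response = requests.get(f"{provider_url}/customers/c-1", timeout=10)
    assert response.status_code == 200

    body = response.json()
    for field, expected_type in CHECKOUT_CONSUMER_CONTRACT["required_fields"].items():
        assert field in body, f"contract broken: '{field}' missing"
        assert isinstance(body[field], expected_type)
```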

So should we use end-to-end tests?

Many teams running at scale have found that consumer-driven test cases combined with improved monitoring serve the need in place of end-to-end tests. End-to-end tests decrease risk but increase feedback time (the time to identify a problem).

Depending on our risk appetite, we can strike a balance between consumer-driven contract (CDC) tests and end-to-end tests. In either case, we need effective monitoring and remediation in place in production.

Testing before and after production

Many organizations try to reduce the number of defects reaching production but have no plan to mitigate failures in production when they do occur.

Sometimes, putting effort into getting better at production remediation is more beneficial than adding more tests. This is often known as the trade-off between mean time to repair (MTTR) and mean time between failures (MTBF).

Below are a few ways to deploy our software and test it before directing the production load against it.

Blue/green deployment: release and test the new version of the software alongside the old one. Once everything is running fine, direct production traffic to the new version.

Keeping the old version running for the time being decreases the risk, since we can roll back quickly.

Canary releasing: run both versions side by side and compare performance against the baseline by diverting a portion (or a copy) of the production load. A sketch of the traffic split follows below.

Using the above techniques, test different NFRs (non-functional requirements) before and after release, for example the latency of a web page or the number of users the system can support.

We may want to track NFRs for some services differently; for example, the durability of a payment service should be very high.
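
To illustrate the canary idea, here is a toy sketch of a weighted traffic split in Python. In practice this routing lives in a load balancer or service mesh rather than application code, and the version names and weights below are made up.

```python
# Hypothetical sketch: a canary release routes a small percentage of
# production traffic to the new version while the rest stays on the
# stable one.
import random

ROUTES = [
    ("v1.4.2-stable", 0.95),   # 95% of traffic stays on the old version
    ("v1.5.0-canary", 0.05),   # 5% goes to the canary for comparison
]


def pick_backend():
    roll = random.random()
    cumulative = 0.0
    for version, weight in ROUTES:
        cumulative += weight
        if roll < cumulative:
            return version
    return ROUTES[-1][0]


if __name__ == "__main__":
    # Rough check of the split; compare canary metrics against the
    # stable baseline before shifting more traffic.
    sample = [pick_backend() for _ in range(10_000)]
    print(sample.count("v1.5.0-canary") / len(sample))
```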

Keys to enabling fast and reliable automated testing

Always keep our system in a deployable state

Having test cases alone is not enough. We need to integrate them into our deployment pipeline to get continuous feedback on our changes. A few key points for continuously integrating our code and running test cases:

Continuously build, test and integrate our code and environments

Continuously running the automated test suite lets us be sure that we are always in a deployable and shippable state.

After every change, our deployment pipeline validates that our code integrates successfully and can be deployed to a production-like environment.

Continuous integration builds and packages the software, runs automated unit tests, and performs additional validations such as style checking, static code analysis, and test coverage. If successful, it triggers the acceptance stage, where the code is deployed to a production-like environment and acceptance tests are run.

Build a fast, automated, and reliable validation test suite

Run fast and automated tests in test environments so we can catch issues and fix them immediately.

Catch errors as early in our automated testing as possible

As unit tests are faster, have a smaller scope, and run before service/integration tests, we should be able to catch most errors at this early stage of testing.

Service/integration tests run slower and in later stages, so we have to wait longer to catch an issue there. Debugging and fixing are also more time-consuming because of their larger scope.

Ensure test cases run quickly

We need to make sure that test cases run fast and in parallel so that there is no delay in identifying an issue.

For example, run acceptance tests and performance tests in parallel.

Always use the latest build that has passed all the test cases for manual, exploratory testing.

Write test cases before code

Practice TDD (test-driven development), as sketched below.
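
A tiny sketch of the TDD rhythm, using a made-up welcome_discount rule: the test is written first and fails, then just enough code is added to make it pass, and refactoring happens with the test as a safety net.

```python
# Hypothetical sketch of the TDD cycle (red -> green -> refactor).
def test_new_customers_get_welcome_discount():
    # Written before the implementation exists; it fails first.
    assert welcome_discount(order_total=200) == 20


# Minimal implementation added after seeing the failing test.
def welcome_discount(order_total):
    return order_total * 0.10
```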

Automate as many manual test cases as possible

Having a human execute tests that should be automated is a waste of human potential. People should instead work on exploratory testing or on adding the automated tests themselves.
We should remove unreliable tests: a few reliable automated tests are better than many unreliable ones.

Integrate performance tests into our suite

Fail a performance test if performance falls beyond the agreed threshold.

Fix immediately when the deployment pipeline breaks

A developer's job is to run a service, not just to write code

Why do we need to stop everything and fix the breaking test cases first?

Let's say someone checks in code that breaks the build, and no one fixes it. Someone else checks in their code on top of the broken build, which again fails the tests. But no one notices or fixes it, because the pipeline is already broken.

"Why bother? Builds are always broken."

Our tests don't run reliably, so we stop writing new tests. We fix the build at the end of the project, leading to a large batch size and a big-bang integration deployment. We do not test effectively and end up discovering most of the problems in production.
