Automation stories: Key things you should remember when designing a REST API test automation solution

So you have decided to create a new test solution for automating REST API microservices. I was in your place once too. Naturally, when you start a new project, you don’t know what pitfalls may happen. I have already faced some of them, and I will probably face more in the future.

In this article, I want to share some key things you should definitely think about when you start designing an automation test solution, aka a Framework (I still don’t like calling it that). It’s up to you whether you use them or not.

This is just a list of notes from my experience, in no particular order. I simply placed them one after another, and I could move any of them to the top or bottom of the list. One more thing: this is not an article about “How to create your test automation solution”. It is about the key things you should think about when you design your test solution.

Before we start, I want to say thank you to two people who helped me with this article:

So, let’s jump into details.

Split client projects and test projects

The problem is that, in the modern world, many microservices interact with each other. As an example, you may use an Authentication microservice in all your other microservices. Or maybe your main data microservice is used in several different places.

So, if your project includes more than one microservice, it’s a good idea to separate the client implementation from the tests. This way, you can reference the client project from a different test project without pulling in all the tests from another project.

If you have only one microservice now, you will likely have more in the future. So think carefully about whether you want to keep the client part together with the tests. It is easier to design them separately from the start than to split them later.

API test version management

Today, you have only one version of the API. Let’s say “v1”. In the future, you may have versions “v2”, “v3”, … “vN”.

Some of the tests may run on all versions without changes. Some will start in “v2” and will be available only in “v2”. So, you should definitely think about how to manage all your tests during a run.

I resolved this problem by providing specific Tags/Attributes.

As an example, my endpoint appeared in version “v1” and was changed in version “v3”. So, I want to run my test against versions “v1” and “v2”.

I have created a dictionary with the version order.

On each test, I assign specific attributes:

  • A required Category attribute (from NUnit) with the start version of the API, e.g. “v1”. It means this test will not run on a version earlier than “v1”.
  • An optional custom attribute “FinalApiVersion” with a version, e.g. “v2”. It means this test cannot run on a version later than “v2”.

So, in the scope of BeforeEach (the SetUp attribute in NUnit), I take the API version from the configuration and compare it with the test attributes. If the version is not applicable, I just ignore the test with an appropriate comment. By the way, this approach can be applied to the test class itself as well.
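The version check itself can be sketched like this (illustrative Python; the names `ORDERED_VERSIONS` and `should_run` are mine, not from any real framework — in my C# solution this logic lives in the NUnit SetUp and calls Assert.Ignore):

```python
# Ordered list of released API versions; the dictionary gives each one a rank.
ORDERED_VERSIONS = ["v1", "v2", "v3"]
RANK = {version: index for index, version in enumerate(ORDERED_VERSIONS)}

def should_run(start_version, final_version, current_version):
    """Return True if a test tagged with [start_version, final_version]
    is applicable to the API version currently under test."""
    current = RANK[current_version]
    if current < RANK[start_version]:
        return False  # the endpoint does not exist yet in this version
    if final_version is not None and current > RANK[final_version]:
        return False  # the endpoint changed after final_version; skip the old test
    return True
```

For the example above, a test tagged with start version “v1” and FinalApiVersion “v2” runs on “v1” and “v2” but is ignored on “v3”.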

API test request/response model version management

This problem is similar to the previous one. I believe you parse your JSON responses into models in order to retrieve data.

From one API version to the next, a model may be updated or totally changed (hopefully not with breaking changes). You should think about this from the start of your project/solution.

First of all, you may need to create folders for the appropriate model versions.

Second, verify the model differences. Maybe the model was not changed significantly and you can use the first model as a base for the second one.
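As an illustration of the base-model idea (a Python dataclass sketch; `UserV1`/`UserV2` and the `email` field are made-up examples), a version with only additive changes can inherit from the previous one instead of duplicating it:

```python
from dataclasses import dataclass

@dataclass
class UserV1:
    # Model as returned by v1 of a hypothetical endpoint.
    id: int
    name: str

@dataclass
class UserV2(UserV1):
    # v2 only added a field, so we inherit instead of copying the whole model.
    email: str = ""
```

If v2 had renamed or removed fields instead, inheritance would hurt more than help, which is why the differences should be verified first.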

My suggestion here is to be careful and think before creating anything.

There is no need to copy all the tests to the new version if some model part was changed

You have changed a model in a new version of the API. “Oh my god, I need to copy all the tests, because I want the same things in the new version and more.”

Stop! Stop! Stop!

Let’s think from a test design perspective. Probably, I’m not going to verify all the fields/properties in each test. So, some of my previous tests can run on both the previous and the new version, because I’m verifying some logic without field/property verification.

There are a lot of examples: we can have negative tests, or tests that verify search and pagination.

So, the rule is the same: analyze which tests you can keep for both versions.

Do not underestimate logging

This sounds pretty straightforward. A logger will help you many times in the future. As an example, it makes it easy to investigate requests and responses in order to understand the core problem. It also lets you quickly provide bug details.

I suggest adding basic logging to both API requests and database scripts, if you use them in your solution.

In my example, you may see that I log all the important request info, maybe except the headers. But that one is a test endpoint.

Think about how you will resolve parallelization problems

This piece of advice is easier to give than to follow. Anyway, you should try to do your best.

I don’t think anyone in the modern world tries to run tests without parallelization. The problem is that it can cause data conflicts, and you should manage them carefully.

There are a few things you can do:

  • Try to create independent tests. Test data should be specific to your test or test scope.
  • Add additional filters where possible. As an example, if you create some data and then search for a total count, this may cause conflicts. So it may be better to add an additional filter in order to avoid them.
  • Try to apply some parallelization priority, or make some tests NonParallelizable.
  • Remove repeated asserts from the tests. A key mistake I often see is that people like to duplicate asserts from test to test. Even if a test was designed to verify some other logic, people continue to assert all the fields of the response. This is unnecessary: you have already done it in the tests responsible for that logic, so it smells like bad design. Plus, if one test fails with such an issue, this test will fail with the same issue too, even though it was designed to verify something else.
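For the first two points, one common trick is to give every test its own uniquely named data, so parallel tests never touch each other’s records (a Python sketch; the helper name `unique_name` is mine):

```python
import uuid

def unique_name(prefix="autotest"):
    # A short random suffix makes the record name unique per test,
    # so parallel tests searching or deleting by name do not collide.
    return f"{prefix}-{uuid.uuid4().hex[:8]}"
```

Filtering searches by this unique name also helps with the “total count” conflict: an assertion scoped to your own data cannot be broken by a neighbouring test creating records at the same time.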

Use retry logic

Retries can also help you stabilize your tests. In some cases, tests fail not because of a real issue, but because of a parallel run or some other reason. You can implement logic that retries a test run. It can also be applied to some test-level operations, for example your database methods.

On CI, in some cases, we caught database timeout problems when we ran a lot of tests. So, we implemented DB request retry logic, and it made our test runs more stable.
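A minimal retry helper could look like this (a Python sketch under my own assumptions; the attempt count, delay, and the choice of TimeoutError as the retriable error are illustrative):

```python
import time

def retry(action, attempts=3, delay=0.5, retry_on=(TimeoutError,)):
    """Run `action`; on a retriable error, wait and try again,
    up to `attempts` times in total, then re-raise the last error."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except retry_on:
            if attempt == attempts:
                raise
            time.sleep(delay)
```

The important design choice is the `retry_on` filter: retrying only known transient errors (timeouts, deadlocks) keeps real assertion failures from being hidden behind retries.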

Prefer API requests over DB queries

In some cases, you don’t have a choice and must use the database to perform some CRUD operations. But when you do have a choice, it is better to use API requests instead of database queries.

On some production environments, database access may be restricted or limited, and you will not have a chance to run the tests there.

On one of my previous projects, we didn’t have access to the DB at all. We built specific test API endpoints in order to set up some data.

Do not forget about environment parametrization

I think this one is obvious for all test projects. You should parametrize your test project in order to adapt it easily to different environments. Tests should be environment independent.

What can you put in your environment configuration?

  • Environment base URLs
  • Some credentials (database, API, etc.)
  • API version
  • Some specific environment configuration (if you have it)

Note: In my case, I also have a parameter that enables or disables test logging.
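In a .NET test solution this usually ends up in something like a per-environment appsettings file (every value below is made up, just to show the shape):

```json
{
  "BaseUrl": "https://qa.example.com",
  "ApiVersion": "v2",
  "Database": {
    "ConnectionString": "Server=qa-db;Database=app;User Id=tests;"
  },
  "EnableTestLogging": true
}
```

Switching environments then means switching which configuration file is loaded, not touching the tests.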

Use Design Patterns in order to build effective solutions

This step is also required if you want to build an effective solution. You should remember that people other than you may also participate in the design of the automation tests, so it will be easier if everything is structured in some logical order. The good news is that people have already created abstract templates called Design Patterns. It is worth being familiar with some of them and applying them to your testing solution.

One of my previous articles can help you find some useful Design Patterns for test automation solutions:

Don’t forget to clean up

You may delete test data immediately after a test run, after some scope finishes, or at the end of the whole test run. In any case, it’s good practice to clean up after yourself.

Of course, you may have some exceptions in your project. But if you don’t, just do it!!!


This article was written based on my personal experience. It contains some of the mistakes I made at the start of my project, as well as achievements (things I included from the start of the project).

I believe there is more than what I have described here. If you have something to add, please share! I just hope that after reading this, you are thinking about what you can do in your own solution.

Thanks for reading!!! Good luck!!!



Kostiantyn Teltov

From Ukraine with NLAW. Senior QA Automation/QA Tech Lead (C#, TypeScript). Also, I have a dream of developing indie video games with Unity 3D.