Functional REST API Testing for a Search Service

Ana González García
Empathy.co
Dec 21, 2018 · 7 min read

APIs are everywhere, although users are often not aware of them. Social networks, payment connectivity, search sites… All of them have one or several Application Programming Interfaces (APIs) that allow their customers to interact and consume data from different platforms.

EmpathyBroker’s Search API is another example: it lets users run any search feature (scoring, faceting, partials, spell checking, classification and so on) from any environment, with response times of milliseconds. To ensure its reliability, we need to test it independently of the user interface, so that we can isolate potential problems and detect them as early as possible.

Functional Testing

“The testing process to determine the functionality of a software product” is how ISTQB (International Software Testing Qualifications Board) defines this type of testing. Functional tests analyse what the software is supposed to do and make sure it actually does it. They’re included in the group of so-called black box tests:

“The specification-based testing technique is also known as ‘black box’ or input/output driven testing because it considers the software to be a black box with inputs and outputs.”

In other words, these tests are carried out without knowing what is happening inside the component or software that we are going to test.

This group includes many other types of tests, such as unit tests, smoke tests, UI tests, API tests and integration tests. Most of them can be performed either manually or in an automated way.

In the test automation pyramid that Mike Cohn details in his book Succeeding with Agile, three levels of tests are specified, ordered by the number of automated tests desirable at each level: the largest number at the base (unit tests) and the smallest at the top (UI tests).

Ideal Test Automation Pyramid

At the second level sit the service-layer or API tests; component and integration tests are also included here. This is the most overlooked layer in functional test automation. Often, tests that exceed the scope of a unit test are run as end-to-end scenarios through the user interface. That solution may work, but it is a sledgehammer to crack a nut: the tests become very complex, difficult to maintain, and slow to execute.

Therefore, it’s good to have an intermediate level between the unit tests and the GUI tests, where integration, API or component tests are performed.

API Testing

APIs typically allow communication between machines using standard languages or file formats. They free end users from the limits of using a pre-determined graphical interface and allow inputs and outputs to be integrated into their own custom applications.

Photo by Fabian Grohs on Unsplash

Functional testing of REST APIs verifies that the responses and behaviour of the service conform to its specification. It also includes checks for particular functionalities: specific scenarios that ensure the API behaves correctly within the planned parameters. As mentioned previously, API testing is an important activity on which test teams should focus. In addition, it offers a number of advantages that are worth considering…

Early Bug Detection

Testing early and independently of the GUI means feedback is obtained sooner and team productivity improves. The core functionality can be tested to expose small errors and to evaluate the strength of each build.

Language Independence

Data is exchanged via XML and JSON, so any language can be used to automate the tests, regardless of the languages used to develop the application. There are numerous libraries that support comparing data in these formats, such as jsonschema or schema for Python, which let us validate a JSON object against a previously defined schema.
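
As an illustration, here is a minimal sketch of how jsonschema could be used for that kind of check; the product fields below are hypothetical, not the actual contract of any particular API.

from jsonschema import validate, ValidationError

# Hypothetical schema: every product must have a name (string) and a price (number)
product_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
    },
    "required": ["name", "price"],
}

def is_valid_product(product):
    """Return True if the JSON object matches the expected schema."""
    try:
        validate(instance=product, schema=product_schema)
        return True
    except ValidationError:
        return False

print(is_valid_product({"name": "camisa", "price": 19.99}))  # True
print(is_valid_product({"name": "camisa"}))                  # False: price is missing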

Better Coverage

Most APIs have specifications that allow us to create automated tests that follow them, and therefore achieve high coverage.

On the other hand, there are, of course, potential drawbacks too:

Time Spent

These tests are slower and more complex than unit tests because they may need access to a database or other components.

As we’ve seen, APIs are essential for both software and websites, and are therefore one of the most important things to test continually.

How?

As seen above, API testing sits at the intermediate level of Mike Cohn’s pyramid, which means it is both sensible and practical to have enough automated tests at this level to ensure the service’s functionality.

Usually, the test design for the service layer follows the same scheme (a minimal sketch in code follows the list):

  • Use the test data to identify the entries
  • Determine what the expected result should be based on those entries
  • Execute test cases with the appropriate entries
  • Compare expected results with actual results
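
The sketch below maps those four steps onto a single pytest function; the URL and the response fields are placeholders used for illustration, not a real API.

import requests

def test_search_returns_requested_number_of_rows():
    # 1. Use the test data to identify the entries
    params = {"q": "shirt", "rows": 10}
    # 2. Determine what the expected result should be based on those entries
    expected_rows = 10
    # 3. Execute the test case with the appropriate entries
    response = requests.get("https://example.com/api/search", params=params)
    # 4. Compare expected results with actual results
    assert response.status_code == 200
    assert len(response.json()["docs"]) == expected_rows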

Starting to test at this level can be hard, especially without prior experience. First, we must select the tool that best suits our needs. Postman, Insomnia and Paw are similar tools that let us test APIs in a simple way; in this article, however, the examples will be API tests written in Python.

Python is a general-purpose, dynamic and flexible language. From a tester’s perspective, it has readily available modules and libraries that make it easy to write test scripts. Tests can be written as xUnit-style classes or as plain functions. It provides a complete test automation solution for any type of project and is capable of unit, functional, system and BDD testing.
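
As a quick illustration of those two styles, the snippet below tests a trivial stand-in function (add), which is not part of the Search API; both forms are collected and run by pytest.

import unittest

def add(a, b):
    return a + b

# xUnit-style test class
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# plain test function
def test_add_function():
    assert add(2, 3) == 5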

One of the most important testing frameworks for Python is pytest. It offers a single solution for unit, functional and acceptance testing. The following example corresponds to a very simple test of a search API. Simply put, REST APIs are consumed through HTTP requests such as GET, POST and DELETE. In this case, we are going to test that EmpathyBroker’s Search API returns a collection of products that match a specified query. For this example, we use the requests library to execute the requests.

This Python library allows us to send all kinds of HTTP requests and analyse the responses. The test verifies, through assertions, that the response code is 400 when the request is sent without one of the mandatory parameters of our API. We could also verify that, with the right inputs, all the expected errors occur, and check that the message returned is the correct one. Then, after sending the request with all the necessary parameters, we get a 200 OK response. Finally, we verify that using an invalid endpoint of the API returns a 404 error.

# src/test_case.py
import pytest
import requests


def test_errors():
    url = 'https://api.empathybroker.com/search/v1/query/ebdemo/{endpoint}?'
    # params of the URL, without the mandatory 'q' parameter
    params = {'lang': 'es', 'rows': 24}
    path = url.format(endpoint='search')
    # make a GET request without the mandatory parameter
    response = requests.get(url=path, params=params)
    assert response.status_code == 400, 'Error code incorrect'
    # add the mandatory query parameter and repeat the request
    params['q'] = 'camisa'
    path = url.format(endpoint='search')
    response = requests.get(url=path, params=params)
    assert response.status_code == 200, 'Error code incorrect'
    # request an endpoint that does not exist
    path = url.format(endpoint='searchv1')
    response = requests.get(url=path, params=params)
    assert response.status_code == 404, 'Error code incorrect'

Pytest also allows us, among other things, to parameterise the tests in a simple way. This enables us to test a number of different parameter combinations of our API.

In the following example, the test is parameterised with two different combinations of parameters for the request (the part after “?” in the URLs). Therefore, the test will run twice, once with each.

@pytest.mark.parametrize("lang, query, rows", [('es', 'camisa', 24),('es', 'pantalon', 18)])
def test_pagination(lang, query, rows):
path = 'https://api.empathybroker.com/search/v1/query/ebdemo/search?'
params = {'lang': lang,'q': query, 'rows': rows}
# make the request and get the response data
response = requests.get(url=path, params=params)
json = response.json()
# check that the number of product returned is as expected
assert len(json['content']['docs']) == rows, 'Rows are incorrect'

The parameterised tests are useful to specify input parameters and the expected results, as shown in the previous example.

In the following example, the first parameter passed, params, corresponds to the input parameters of the request. Here we want to check that certain attributes are present in the response, specifically for each of the products returned in the ‘docs’ list. As mentioned before, this kind of JSON response check can be done more simply with external libraries.

params1 = {'lang': 'es', 'q': 'camiseta', 'rows': '24'}
params2 = {'lang': 'es', 'q': 'shirt', 'rows': '35'}


@pytest.mark.parametrize("params, rows", [(params1, 24), (params2, 35)])
def test_response_structure(params, rows):
    path = 'https://api.empathybroker.com/search/v1/query/ebdemo/search?'
    # make the request and get the response data
    response = requests.get(url=path, params=params)
    json = response.json()
    # check the number of rows
    assert len(json['content']['docs']) == rows, 'Number of rows is incorrect'
    # check the attributes of every product in the response
    for product in json['content']['docs']:
        assert 'price' in product, 'Price not in the product'
        assert 'name' in product, 'Name not in the product'
        assert 'image' in product, 'Image not in the product'

If we execute these tests with the command pytest -v src/test_case.py we get the following result:

src/test_case.py::test_errors PASSED                                    [ 20%]
src/test_case.py::test_pagination[es-tv-24] PASSED                      [ 40%]
src/test_case.py::test_pagination[es-movil-18] PASSED                   [ 60%]
src/test_case.py::test_response_structure[es-tv-24] PASSED              [ 80%]
src/test_case.py::test_response_structure[es-movil-18] PASSED           [100%]

This way we can test the functionality of each of the endpoints that make up the API. It is also easy to run these tests after each build of the service and to use them as a regression suite, re-running them against each functionality after every change.
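
One way to wire this into a build pipeline, sketched below, is to tag such tests with a pytest marker; the marker name regression is an assumption for illustration, not an existing project convention.

import pytest
import requests

@pytest.mark.regression  # hypothetical marker used to select the regression suite
def test_search_smoke():
    # quick check that the search endpoint answers at all after a new build
    path = 'https://api.empathybroker.com/search/v1/query/ebdemo/search?'
    response = requests.get(url=path, params={'lang': 'es', 'q': 'camisa', 'rows': 1})
    assert response.status_code == 200, 'Error code incorrect'

Tagged tests can then be selected after each build with pytest -m regression -v src/ (registering the marker in pytest.ini avoids pytest’s unknown-marker warning).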

Conclusion

Bypassing the graphical user interface when testing an application is a good way of creating tests that are smaller in scope than full end-to-end ones, while still covering a large part of the application.

This can be particularly useful when testing through the application’s web interface is especially difficult. Sometimes we do not even have a user interface of our own, but instead serve a REST API that must work properly regardless of whatever user interface consumes it.

Either way, testing just below the GUI can take us very far without compromising on reliability.
