API Tests using Rest Assured and Cucumber

Bitso QA Team
Published in bitso.engineering · 5 min read · Dec 18, 2023

Background

API test automation is vital to Bitso’s software development life cycle. Due to a shift in our language preference, we moved away from Pytest and evaluated alternatives like Spock, Micronaut and Rest Assured. Rest Assured emerged as the most practical choice in this exploration, combining complete functionality with simplicity.

A key strength of Rest Assured lies in its unique ability to engage developers actively in the testing process. Its Java-centric approach fosters developer involvement and establishes a seamless connection with the application’s codebase. This integration results in more effective testing practices, elevating the quality of the delivered software.

The declarative syntax of Rest Assured is another standout feature, significantly improving the readability and flow of our API test scripts. This clarity simplifies collaboration within the team, empowering members to understand and contribute to developing test scripts efficiently. Rest Assured’s approach to simplicity and robustness, coupled with a thorough analysis of alternative tools, firmly positions it as the cornerstone of our API test automation strategy at Bitso.

Cucumber

The decision to transition away from using Pytest as the stack for integration tests allowed us to explore alternatives that could enhance our testing processes. We aimed to address specific challenges, including:

  • Collaboration: Given that the team consisted mainly of Java developers, there was a learning curve for them to contribute effectively to the test implementation.
  • Readability: The way tests were written made it difficult for non-technical team members, such as product managers, to understand them quickly.
  • Reusability: While Pytest allowed for the reuse of specific methods, there were cases where duplicating tests and steps became necessary.
  • Costs: As a general guideline, we sought open-source tools to manage costs effectively.

Considering these criteria, we evaluated various tools, and Cucumber emerged as a fitting choice for our needs. Cucumber is an open-source testing framework that has gained prominence in software development and quality assurance, particularly in Behavior-Driven Development (BDD). BDD is an agile methodology that promotes collaboration between non-technical stakeholders and technical teams throughout the software development process.

At its core, Cucumber provides the capabilities to express and automate software testing, ensuring that it aligns with the desired behavior and functionality. What sets Cucumber apart is its use of the Gherkin language, a domain-specific language that allows the creation of plain-text descriptions of software behavior. These descriptions, known as “feature files,” are the basis for automated tests.

With Cucumber and BDD, it’s possible to write test scenarios in collaboration with business analysts and other stakeholders, fostering a common language that both technical and non-technical team members can understand. This improves communication and results in more effective and maintainable automated tests that reflect the true intent of the software requirements.

Test Structure and Organization

Feature Files:

Feature files contain the test descriptions that are the building blocks of Cucumber test scenarios. They serve as a blueprint for how the software should behave under different conditions. These specifications use the Gherkin syntax, a highly readable language designed to be understood by non-technical stakeholders. It uses keywords like Given, When, Then, And, and But to define test steps, making it easy for everyone involved in the development process to contribute to testing scenarios. For example:

Given a user already has a valid quote ID
When the user accepts the quotation
And a user wants to know the status of the transfer
Then the response returns the status of the transfer
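In a Cucumber-JVM project, feature files like this one are wired to the build through a runner class. A minimal JUnit 4 runner might look like the following sketch; the feature path, glue package, and class name are illustrative assumptions, not Bitso's actual configuration:

```java
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Hypothetical runner: points Cucumber at the feature files and the
// package containing the step definitions ("glue"), and emits a JSON report.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features",
    glue = "com.example.steps",
    plugin = {"pretty", "json:target/cucumber-report.json"}
)
public class CucumberTestRunner {
}
```

Running this class with JUnit discovers every scenario under the features directory and matches each Gherkin step to its step definition.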

Step Definitions

Step definitions are a vital component of the Cucumber testing framework: they are the code implementation of the plain-language steps defined in the feature files. Each step definition is the action that corresponds to a Given, When, Then, And, or But step in the feature file, establishing the link between the plain language of the feature file and the actual code execution.
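As an illustration, step definitions for the quote/transfer scenario above might look like the following sketch. The class name, endpoint, and payload fields are assumptions for illustration, not Bitso's actual code:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import io.restassured.response.Response;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

// Hypothetical step definitions matching the Gherkin steps in the feature file.
public class TransferSteps {
    private String quoteId;
    private Response response;

    @Given("a user already has a valid quote ID")
    public void aUserAlreadyHasAValidQuoteId() {
        quoteId = "quote-123"; // in practice, obtained from a quoting endpoint
    }

    @When("the user accepts the quotation")
    public void theUserAcceptsTheQuotation() {
        // Send the acceptance request and keep the response for later steps.
        response = given()
                .contentType("application/json")
                .body("{\"quote_id\": \"" + quoteId + "\"}")
            .when()
                .post("/transfers");
    }

    @Then("the response returns the status of the transfer")
    public void theResponseReturnsTheStatusOfTheTransfer() {
        response.then().body("payload.status", notNullValue());
    }
}
```

The annotation text must match the step wording in the feature file, which is what binds the plain language to the executable code.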

Assertions

Assertions are essential for test automation frameworks, serving as critical checkpoints to validate expected outcomes against actual results. There are different ways to handle test assertions, such as JUnit’s assertion methods; we decided to go with Rest Assured and Hamcrest, since they provide built-in methods to assert conditions such as the HTTP status code, response headers, and the response body’s content. Rest Assured assertions are less verbose and give direct access to response data; they provide idiomatic methods for arranging and asserting on HTTP APIs, helping to ensure the readability and maintainability of the tests. Example:

user.getResponse().then()
.assertThat()
.statusCode(201)
.body("success", equalTo(true))
.body("payload.parameter_a", equalTo("1"))
.body("payload.parameter_b", notNullValue())
.body("payload.parameter_c", equalTo("API Test"));
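For context, a full arrange/act/assert flow in Rest Assured chains the request and the assertions together. The sketch below is illustrative; the base URI, path, and payload are assumptions, not Bitso's actual API:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;

import io.restassured.http.ContentType;
import io.restassured.response.Response;

public class QuoteApiSketch {
    // Hypothetical helper: accepts a quote and asserts on the response in one chain.
    public static Response acceptQuote(String quoteId) {
        return given()
                .baseUri("https://stage.example.com") // illustrative stage host
                .contentType(ContentType.JSON)
                .body("{\"quote_id\": \"" + quoteId + "\"}")
            .when()
                .post("/api/v1/transfers")
            .then()
                .assertThat()
                .statusCode(201)
                .body("success", equalTo(true))
                .body("payload.transfer_id", notNullValue())
                .extract().response();
    }
}
```

The `body("payload…", matcher)` calls use Rest Assured's JSON path support to reach nested fields directly, which is what keeps these assertions compact.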

We configure our test executions to run in the stage environment. The tests are automatically triggered upon successful deployment to the stage environment and clearance of all build and unit tests. This process signifies the readiness of the changes for further evaluation in a production-like setting.

Here’s an overview of when our tests take place:

A sequential flow: Successful deployment of PR in stage → Execution of Cucumber tests → Upload of test results report to GitHub Actions → Addition of results message to PR

Executing Cucumber Tests
The successful deployment of a PR to the stage environment automatically triggers our Cucumber tests. We don’t block the PR merge on their outcome, as external circumstances could cause test errors; still, running them on every PR adds a layer of evaluation before any changes transition to the production environment.

Uploading Test Results
As part of our testing process, we upload these test results to GitHub Actions, ensuring transparency and maintaining a comprehensive record of test outcomes. This helps us keep an eye on the quality of the changes and makes sure the software stays stable.

Commenting on PR
Upon the conclusion of the testing phase, an automated comment is promptly appended to the PR. This comment serves as a concise summary of the test results, facilitating communication within the team and ensuring that all members are well-informed about the test outcomes.

Comments attached to the PR reporting the test results: one informing that the tests failed and another informing that the tests passed.

This approach enhances our quality by ensuring that new code won’t break core features. The tests run basic scenarios and offer essential confidence that the software remains stable, ultimately contributing to a better developer and deployment experience.

Next Steps

In our upcoming steps, we plan to adopt this flow and solution in more projects. This expansion will ensure consistent testing practices and promote the scalability and versatility of our testing framework. On this journey, we will assess whether the solution we have developed so far can integrate smoothly with different projects.

Additionally, we aim to centralize shared steps within a library, such as authentication and endpoint status checks. This library will be accessible for implementation in any of our projects, streamlining the testing process and enhancing efficiency.
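A shared step such as authentication could be sketched as a small helper class published in that library. Everything here — class name, endpoint, and payload shape — is a hypothetical illustration of the idea, not Bitso's actual implementation:

```java
import static io.restassured.RestAssured.given;

// Sketch of a reusable authentication step that could live in a shared library
// and be called from the step definitions of any project.
public final class AuthSteps {
    private AuthSteps() {}

    public static String obtainToken(String user, String password) {
        return given()
                .contentType("application/json")
                .body("{\"user\": \"" + user + "\", \"password\": \"" + password + "\"}")
            .when()
                .post("/auth/token") // illustrative token endpoint
            .then()
                .statusCode(200)
                .extract()
                .path("payload.token"); // pull the token out of the JSON body
    }
}
```

Centralizing steps like this one means each project's feature files can reuse the same Given-style wording without duplicating the underlying code.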

This post was created by Daniel Mariotti, Erick Reyes and Nelson Gutierrez.
