Building a Robust API Testing Framework in Playwright (TypeScript Version)

Orozbek Askarov
11 min read · Mar 17, 2024

Rest API

API testing is a critical aspect of ensuring the robustness and reliability of back-end services. With the advent of Playwright, a powerful and versatile testing framework, performing API testing has become more efficient and developer-friendly. In this article, we’ll explore the ins and outs of Rest API testing with Playwright, discussing authorization and authentication, parameterization and Data-Driven testing, debugging, and CI integration providing you with a comprehensive guide to elevate your testing practices.

This piece is a continuation of my earlier discussion on building a robust automation framework in Playwright.

Framework Architecture and Test Execution Flow

“A picture is worth a thousand words”. Let’s walk through the key components, utilities, and the overall flow of how Playwright orchestrates the execution of tests.

Playwright API Test Execution Flow

We organize our code into three distinct categories: global setup and teardown, utilities, and tests. Initially, we establish the test environment, execute the tests, and subsequently perform a cleanup of the environment. To enhance the simplicity, readability, and reusability of our tests, we adopt a modular approach by creating utility functions.

You may wonder about the management of logs, reports, assertions, browser handlers, and more. The answer is straightforward — we don’t need to handle them manually. Playwright seamlessly manages all these aspects internally, alleviating the need for manual intervention. This ensures a streamlined and efficient testing process, allowing developers to focus on crafting robust tests without the burden of manual handling for auxiliary functionalities.

Global Setups

As shown in the diagram, the flow starts with global setups. There are two ways to configure a global setup:

  1. Using a global setup file and setting it in the config under globalSetup:
    This method runs a designated function before anything else in the Playwright project, before any test is initialized. In it we can modify the Playwright configuration and set environment variables. One practical application is obtaining an authorization token: since authorization is not part of the service under test, this approach also keeps its results out of the test report.
  2. Using project dependencies: With project dependencies, you define a project that runs before all other projects. Using dependencies allows global setup to produce traces and other artifacts. For instance, we leverage this setup to test the health check endpoint of the service.

Note: Failure of any global setup means the rest of the tests will be skipped.

Using the project dependencies approach is the recommended way to configure global setup: with project dependencies, your HTML report will show the global setup, the trace viewer will record a trace of the setup, and fixtures can be used (Playwright docs).
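As an illustration, a project-dependencies configuration might look like the sketch below. The project names and the setup file pattern are assumptions for the example, not values from this framework:

```typescript
// playwright.config.ts — a minimal sketch of the project-dependencies approach.
// Project names and the setup file pattern are illustrative assumptions.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    {
      // Runs first: e.g. hits the health check endpoint, acquires tokens
      name: 'setup',
      testMatch: /global\.setup\.ts/,
    },
    {
      // The API tests only run if the 'setup' project succeeds
      name: 'api-tests',
      testMatch: /.*\.spec\.ts/,
      dependencies: ['setup'],
    },
  ],
});
```

With this layout, Playwright runs the setup project first, records its trace and artifacts in the report, and skips the dependent project if setup fails.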

Implementation:
For example, let’s implement a request that gets an OAuth 2.0 access token:

import type { APIResponse } from '@playwright/test';
import { request } from '@playwright/test';

async function globalSetup() {
  // Read secrets from cloud and set as env vars

  // Get the service API auth token
  try {
    const context = await request.newContext();
    const resp: APIResponse = await context.post(process.env.AUTH_TOKEN_URL as string, {
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
      },
      form: {
        client_id: process.env.CLIENT_ID as string,
        scope: process.env.SCOPE as string,
        client_secret: process.env.CLIENT_SECRET as string,
      },
      timeout: 300000,
    });

    const respJson = await resp.json();
    // Set the auth token as an env variable for the tests to reuse
    process.env.AUTH_TOKEN = respJson.access_token;

    await context.dispose();
  } catch (e) {
    console.error('Unable to authenticate. Occurred error: ' + e);
    // Rethrow so global setup fails and the test run is skipped (see note above)
    throw e;
  }
}

export default globalSetup;

TestSuite.spec.ts

We will demonstrate the practical application of our utilities by writing a comprehensive API test. This test will encapsulate all our utility functions, ensuring a modular and organized approach to testing. As we proceed, we will adhere to several key principles, including clear and descriptive test case titles, precision in test implementation, and the separation of logic to enhance readability and maintainability. By parameterizing our requests and adhering to the DRY (Don’t Repeat Yourself) principle, we will ensure that our tests are flexible and easily adaptable to changes. Let’s dive in and witness these principles in action as we craft a precise and effective API test.

// ./src/api/search.spec.ts
import { test, expect } from '@playwright/test';
import { getSearchUrl } from './utils/urlBuilder';
import { getSearchTestInputs, getSearchData } from './utils/dbData';
import { getSearchPayload } from './utils/payloadBuilder';

test.describe('Test Search Endpoint: ', () => {
  const initialInputs: ISearchRequest = {
    // ... contains request default and initial values
    clientId: 'abc123',
  };
  // Build URL: baseUrl + path + apiVersion + ... + endpoint
  const url = getSearchUrl();

  test.beforeAll(async () => {
    // Get test inputs from db
    const testInputs = await getSearchTestInputs();
    initialInputs.clientId = testInputs.id;
  });

  test('validate getting corporate clients', async ({ request }) => {
    // Arrange
    const typeCorporate = 'C';
    const requestInputs: ISearchRequest = { ...initialInputs, type: typeCorporate };
    const payload = getSearchPayload(requestInputs);
    const expectedData: ISearchItem[] = await getSearchData(requestInputs);

    // Act
    const response = await request.post(url, payload);

    // Assert
    await expect(response).toBeOK();

    const body: ISearchItem[] = await response.json();
    // Beware: the ISearchItem[] annotation does not convert the response
    // or throw an error at runtime.
    // Assert schema here

    expect(body.length).toEqual(expectedData.length);
  });
});

NOTE: We are conducting a test for a specific endpoint, which means there is only one URL involved, along with initial data.

initialInputs: We can keep a well-formatted, ideal request body with request defaults and hardcoded static data. This gives us the flexibility to test quickly without relying on, or retrieving, data from the database. It is dynamically updated once we query the needed data in the beforeAll hook.

requestInputs: By changing one or more properties of initialInputs, we can easily create any combination of the request body.

payload: By passing the requestInputs object to payloadBuilder, we get a ready payload that contains the needed headers and the data converted to JSON.
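As a rough illustration, such a builder could look like the sketch below. The ISearchRequest shape, the header set, and the token parameter are assumptions for the example; Playwright’s request.post serializes a plain object passed as data to JSON:

```typescript
// ./utils/payloadBuilder.ts — hypothetical sketch of a payload builder.
// The ISearchRequest shape and header set are illustrative assumptions.
interface ISearchRequest {
  clientId: string;
  type?: string;
  name?: string;
}

// Returns the options object Playwright's request.post() accepts:
// headers plus a `data` object it will serialize to JSON.
export const getSearchPayload = (inputs: ISearchRequest, authToken = '') => ({
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${authToken}`,
  },
  data: inputs,
});
```

Centralizing header construction here means a header change touches one file instead of every test.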

expectedData: By passing the requestInputs object into the getDbData utility, we enable the creation of parameterized SQL queries. This approach allows us to retrieve specific data from the database and apply additional business logic within the getDbData utility to manipulate the data as necessary, ensuring that the results are obtained in the required format. This encapsulation and separation of data logic from the actual test case enhances readability and flexibility, enabling clearer understanding of the test case and facilitating easier maintenance and updates.

We covered the main points on creating a request. The next two sections contain detailed explanations of the utilities and assertions.

Utilities: Framework VS API

Let’s dive into the world of utility functions and understand their significance. A utility function, sometimes referred to as a “helper function,” is a compact, reusable, and versatile piece of programming magic. It serves a specific purpose or provides a particular functionality, without being confined to a specific object or class. Instead, these functions are designed to be adaptable, lending a helping hand in common programming tasks.

Consider an API test scenario — multiple elements like an endpoint URL, payload, and data need attention for construction or response validation. To maintain a focus on the essence of our tests, ensuring simplicity and readability, we employ additional logic encapsulated within utility functions. These functions act as the behind-the-scenes architects, allowing us to concentrate on crafting robust tests without getting bogged down by intricate details.

Framework Utilities are general helper functions such as:

  • Database (DB) utilities: handle configurations, connections, and running queries.
  • URL utilities: simplify managing URLs by modularizing them into the environment, API version, paths, request parameters, etc. Read BASE_URL in only one place and re-use it.
  • General utilities: a place for any generic helper, such as getting a randomly generated number or string, a date in a wanted format, etc.
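A hypothetical sketch of such a URL builder, in the spirit of reading BASE_URL in one place (the env var name, fallback, and path segments are assumptions for the example):

```typescript
// ./utils/urlBuilder.ts — hypothetical sketch; env var name, fallback,
// and path segments are assumptions, not the framework's actual values.
const BASE_URL = process.env.BASE_URL ?? 'https://api.example.com';
const API_VERSION = 'v1';

// Joins the base URL, API version, and endpoint segments with '/'
const buildUrl = (...segments: string[]): string =>
  [BASE_URL, API_VERSION, ...segments].join('/');

export const getSearchUrl = (): string => buildUrl('clients', 'search');
```

If the environment or API version changes, only this module needs updating; every test keeps calling getSearchUrl().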

API Utilities are API-specific helpers:

  • Types: This is specific to the TypeScript environment and contains any type, including request and response blueprints (like C# Data Contracts).
  • Fixtures: A Playwright fixture is a mechanism that enables the setup and teardown of resources or configurations before and after test execution, ensuring a consistent testing environment.
  • Payload Builder: handles parameterized request payloads.
  • Secrets: contains functions that read secrets from cloud key-vaults. The exported functions can be called anywhere in the tests to get the needed secret(s).
  • Data utilities: Often, it’s necessary to implement extra logic to retrieve data from sources in the desired format. It’s advisable to manage this process externally, using a utility like DbData.ts, and to centralize parameterized queries in a file like queries.ts rather than duplicating queries for each request.

Note: How you classify utilities is entirely based on your testing requirements. If there’s a chance of incorporating different testing types, such as End-to-end or Integration Testing (API), it’s beneficial to separate utilities that can be universally applied across all test types.

API Response Validation

When asserting responses in Playwright tests, it’s crucial to follow best practices to ensure your tests are reliable, maintainable, and provide valuable feedback. Here are some best practices for asserting responses in Playwright:

— Test Coverage: Make sure to cover all combinations (positive and negative scenarios) that can change your response.

— Always ensure the request succeeds before parsing the response body to JSON. Utilize await expect(response).toBeOK() for validation. This method not only checks that the status code falls within the range of 200 to 299 but also provides comprehensive request information in case of assertion failures. This includes the request URL, headers, status code, status message, and more, enhancing the debugging process and facilitating quicker resolution of issues.

Schema Validation: TypeScript’s type checking is effective during the development phase but doesn’t extend to runtime. To ensure that our data conforms to expected formats, types, and structures beyond development, we employ schema validators. These validators serve as a runtime mechanism to validate incoming data against predefined schemas, ensuring that our application receives the expected data shape and format regardless of the runtime environment. There are a number of JSON schema validator tools to choose from.
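To make the runtime-versus-compile-time point concrete, the sketch below hand-rolls a minimal structural check; in practice you would typically reach for a dedicated validator such as Ajv or Zod. The ISearchItem fields are assumptions for the example:

```typescript
// A minimal, hand-rolled structural check to illustrate runtime schema
// validation. In practice, prefer a library such as Ajv or Zod.
// The ISearchItem fields here are illustrative assumptions.
interface ISearchItem {
  id: string;
  name: string;
  type: string;
}

// Type guard: true only if `value` actually has the expected shape at runtime
const isSearchItem = (value: unknown): value is ISearchItem => {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === 'string' &&
    typeof v.name === 'string' &&
    typeof v.type === 'string'
  );
};

// Usage in a test, after parsing the body:
// expect(body.every(isSearchItem)).toBe(true);
```

Unlike the `const body: ISearchItem[] = await response.json()` annotation, this check actually fails when the API returns an unexpected shape.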

Validation Order: first assert request success with toBeOK() (or check the exact status with expect(response.status()).toBe(200) if precision is needed), then validate the headers, schema, etc., and then continue with data validation.

Negative scenario validation involves adhering to API design best practices, which emphasize the use of a standardized error response format. The format typically includes error, message, code, etc. When handling general 400 errors, validating both the response status and message proves sufficient. This approach ensures consistency in error handling and facilitates clearer communication between the API and its consumers. Creating parameterized tests is the best way to exercise input validation.
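To make the rule concrete, the standardized error shape and the id constraint exercised by the parameterized test can be sketched as follows; both the interface and the regex are illustrative assumptions:

```typescript
// Hypothetical standardized error response shape, per the API design
// best practices above. Field names are illustrative assumptions.
interface IErrorRes {
  code: string;
  message: string;
  error?: string;
}

// The id rule under test: alphanumeric, between 9 and 12 characters.
// The regex is an assumed restatement of the API's validation logic.
const isValidId = (id: string): boolean => /^[a-zA-Z0-9]{9,12}$/.test(id);
```

Each invalid id in the data-driven test below violates exactly one clause of this rule: a disallowed character, too short, and too long.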

Let’s validate the error message for a specific input by using a parameterized, data-driven approach:

const invalidIds = ['@bc', 'abcdefgh', 'abc0123456789'];

for (const id of invalidIds) {
  test(`validate the error: id (${id}) must be alpha-numeric with min length of 9 and max length of 12`, async ({
    request,
  }) => {
    // Arrange (td: test data object initialized elsewhere)
    const rd: IEndpointReq = {
      id: id,
      categoryId: td.categoryId,
    };

    // Act
    const resp = await request.get(url, {
      params: rd as Record<string, string | number | boolean>,
    });

    // Assert
    expect(resp.status()).toBe(400);
    const body: IErrorRes[] = await resp.json();
    expect(body[0].message).toContain(
      'Must be alphanumeric with a length of 9 or 12'
    );
  });
}

Test Data Management

API test data management encompasses the tasks of defining, generating, and maintaining test data to facilitate thorough testing scenarios. Implementing effective practices in test data management empowers teams to streamline testing processes, enhance test coverage, and identify potential issues early in the development lifecycle. Particularly in dynamic multi-environment setups, the most effective approach to data management often revolves around dynamically retrieving data. In this section, we will delve into the optimal methods for achieving this, focusing specifically on SQL databases.

It’s important to note that we neither save nor generate test data; instead, all data is retrieved directly from the database, underscoring the importance of well-crafted queries. By parameterizing our SQL queries, we can efficiently obtain the expected data using a single query, thus ensuring precision and efficiency in our testing efforts.

// /api/data/queries.ts

// Get clients data by request params
export const getSearchExpectedDataQuery = (reqInputs: ISearchReq) => {
  // Note: interpolating values into SQL is acceptable for trusted internal
  // test inputs; prefer driver-level parameterization otherwise.
  const query = `
    SELECT Clients.ID as id, Clients.Name as name, Clients.Type as type,
           Orders.OrderDate as orderDate
    FROM Clients
    INNER JOIN Orders ON Orders.ClientsID = Clients.ClientsID
    WHERE 1=1
    ${reqInputs.type ? "AND Clients.Type = '" + reqInputs.type + "'" : ''}
  `;

  return query;
};

export const getSearchTestInputsQuery = (reqInputs: ISearchReq) => {
  const query = `
    SELECT TOP(1) Clients.ID as id, Clients.Name as name, Clients.Type as type,
           Orders.OrderDate as orderDate
    FROM Clients
    INNER JOIN Orders ON Orders.ClientsID = Clients.ClientsID
    WHERE Clients.Type IS NOT NULL
  `;

  return query;
};
// /api/data/getDbData.ts
import type { IRecordSet } from 'mssql';
import { IGenericObject, ISearchReq } from 'api/types/type';
import { runQuery } from 'utils/sqlDb';
import { getSearchTestInputsQuery, getSearchExpectedDataQuery } from './queries';

export const getSearchExpectedData = async (reqData: ISearchReq) => {
  const res = await runQuery(getSearchExpectedDataQuery(reqData), 'dbServer2');

  // Implement additional logic here:
  // manipulate data to get the expected results

  const results = res?.recordset as IRecordSet<IGenericObject>;
  return results;
};

export const getSearchTestInputs = async (rd: ISearchReq) => {
  const res = await runQuery(getSearchTestInputsQuery(rd), 'dbServer2');

  return res?.recordset[0];
};

Are there any drawbacks to this approach?

While it offers complete control and flexibility, it also introduces complexity. One of the major disadvantages lies in data validation. In real-world scenarios, SQL queries may become overly intricate for Quality Engineers (QEs) to grasp and update in response to changes. Additionally, many APIs involve more than just fetching data from a database and sending it as a response. Complex business logic implemented within the API can pose challenges in validating everything solely based on database data.

So, what alternatives do we have? Replicating complex business logic within the automation framework often proves impractical. Doing so can lead to overly complicated tests, increased maintenance efforts, and decreased stability in the face of changes. Instead, relying on schema validation and validating critical scenarios with data can offer a more viable solution. I will be happy to discuss this topic in the comments section.

Complex API. Credit: GIPHY API

Additional Recommendations:

— Read environment variables in one place; avoid using process.env.VAR everywhere. For example, UrlBuilder reads any URL-related env vars.

— Modularize and extend the framework by using UTILITIES.

— Use FIXTURES to create reusable commands/steps between test files: define them once and use them in all your tests.

— Naming Convention: Have a standard naming convention across the framework.
A test file name can be method.endpoint.spec.ts, or method.endpoint.public.spec.ts for public tests.
For example, start types with ‘T’ and interfaces with ‘I’:

type TCompany = 'C' | 'L' | 'P';

interface ISearchReq {
  clientId: number;
  type: TCompany;
  name: string;
}

Conclusion:

In mastering Rest API testing with the Playwright framework, we’ve uncovered a powerful approach to ensuring the reliability and robustness of backend services. Through our exploration, we’ve discussed key components such as authorization and authentication, parameterization, data-driven testing, debugging, and CI integration. By leveraging Playwright’s capabilities, testers can streamline testing processes and uncover potential issues early in the development lifecycle.

This article serves as a continuation of our discussion on building a robust automation framework in Playwright, where we emphasized the importance of global setups and modular test organization. We’ve demonstrated the significance of utilities in managing various aspects of testing, from database interactions to URL management. Additionally, we’ve explored API response validation best practices, ensuring comprehensive coverage and accuracy in test assertions.

Through effective test data management practices, particularly in dynamic multi-environment setups, we’ve highlighted the importance of dynamically retrieving data from SQL databases. By parameterizing SQL queries, we can efficiently obtain expected data, enhancing precision and efficiency in our testing efforts. These insights and best practices empower testers to elevate their testing practices and deliver high-quality software products.
