Test Automation Framework · Reporting & Observability (Part 2/5)

Damian Moga
Globant
Published in
4 min read · Sep 12, 2022

Representing the Test Execution

As explained in the first article, the starting point for successfully applying reporting and observability to the test automation framework and its processes is to capture the execution information and test details and present them with a specific structure and data relationship.

Thus, there is a data contract for test execution that must be adhered to so that the test automation framework can create and update the information properly at runtime.

This information is divided into at least three parts: the test execution, its test details, and the defect details per test.

Test Execution

Test execution information is compiled as soon as the test run is triggered, whether locally, remotely, or through a CI tool. Each execution record is linked to its tests, so one run contains multiple test details.

This information can include both mandatory and optional fields, and the exact set depends on the implementation and its scope; many more fields can be added as the requirements grow.

Some fields, such as ‘id’, ‘date’, and ‘environment’, are set immediately when the run starts. Others, like ‘rate’, are computed once the execution completes, by evaluating each test result.
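As a sketch of such a data contract, the record below models a test execution with the fields mentioned above (‘id’, ‘date’, ‘environment’, ‘rate’); the remaining field names and the `complete` helper are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical data contract for one test execution. 'id', 'date', and
# 'environment' are set when the run starts; 'rate' is computed at the end.
@dataclass
class TestExecution:
    id: str                # unique run identifier, set at start
    date: datetime         # start timestamp, set at start
    environment: str       # target environment, set at start
    total: int = 0         # number of tests in the run
    passed: int = 0        # number of passing tests
    rate: float = 0.0      # pass rate, computed after the run

    def complete(self, results: list) -> None:
        """Evaluate each test result once the execution finishes."""
        self.total = len(results)
        self.passed = sum(1 for r in results if r == "passed")
        self.rate = self.passed / self.total if self.total else 0.0
```

A run triggered from CI would create this record first, then call `complete` with the collected statuses once every test has finished.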

Test Detail

When a test is executed, a test detail is associated with the test execution information. These details are important for representing the test, accessing its information, and analyzing its result.

Several fields are relevant here. A particularly important one is the ‘id’, which links each automated test to its entry in the test management tool so that the integration can later update every test result that is part of the execution.
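A minimal test-detail record could look like the following; only the ‘id’ linkage to the test management tool comes from the article, while the other field names (name, status, duration, failure) are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative test-detail record. 'id' mirrors the test case key in the
# test management tool so results can be pushed back after the run.
@dataclass
class TestDetail:
    id: str                         # key in the test management tool
    name: str                       # human-readable test name
    status: str = "pending"         # e.g. passed | failed | skipped
    duration_ms: int = 0            # execution time in milliseconds
    failure: Optional[dict] = None  # populated only when the test fails
```

Each record of this shape is attached to the execution record above it in the hierarchy, giving one run a list of test details.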

Failure Detail

The failure details are an important element of the test details. If a test fails, its detail must contain failure information and evidence that allow analysis and categorization, so that results can later be reported with complete evidence.

Fields such as ‘Reason’, ‘Error Type’, and ‘Error Message’ are extremely important for grouping errors, as it is very common for similar errors to occur within the same execution.

Performing this analysis and grouping errors into categories by reason improves the visibility of test results and simplifies their analysis. Error categorization also helps to create custom messages for test notifications and for the alerting system.
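The grouping step can be sketched as follows. The ‘Reason’, ‘Error Type’, and ‘Error Message’ fields come from the article; the `evidence` field and the `group_failures` helper are illustrative assumptions:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical failure record attached to a failed test's detail.
@dataclass(frozen=True)
class FailureDetail:
    reason: str         # high-level category, e.g. "environment issue"
    error_type: str     # exception class, e.g. "TimeoutError"
    error_message: str  # raw message for the report
    evidence: str       # path to a screenshot or log attachment

def group_failures(failures):
    """Bucket failures by error type to surface repeated causes."""
    groups = defaultdict(list)
    for f in failures:
        groups[f.error_type].append(f)
    return groups
```

Grouping by `error_type` (or by `reason`) lets the alerting system send one aggregated message per category instead of one message per failed test.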

Capture the Test Execution · Hooks and Listeners

Now, how can we ensure that this information is collected at runtime? The answer lies in using a listener and/or hooks to collect test information at various stages of execution.

The test runner manages the execution and runs all the tests; hooks and listeners are implemented on top of it to collect information or perform specific actions while the tests run.

  • A hook allows functions to be executed before or after each step in the test execution lifecycle. For example, before and after the whole execution, or before and after each test.
  • A listener responds to specific events in the test execution lifecycle and performs actions. For example, an event could fire when a test completes, allowing the listener to process its result.
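The two mechanisms above can be sketched with a toy runner. Real frameworks (pytest, TestNG, Cucumber, etc.) expose equivalent extension points; the runner, hook parameters, and listener interface here are illustrative assumptions:

```python
# Minimal sketch of hooks and a listener around a test runner.

class ResultListener:
    """Reacts to the 'test finished' event and records each result."""
    def __init__(self):
        self.results = []

    def on_test_finished(self, name, status):
        self.results.append((name, status))

def run_tests(tests, listener, before_each=None, after_each=None):
    """Run (name, fn) pairs, firing hooks and listener events."""
    for name, fn in tests:
        if before_each:
            before_each(name)     # hook: runs before every test
        try:
            fn()
            status = "passed"
        except AssertionError:
            status = "failed"
        if after_each:
            after_each(name)      # hook: runs after every test
        listener.on_test_finished(name, status)  # listener event
```

In practice the listener is where the test detail and failure detail records are built, since it sees every result as it happens.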

Finally, test execution runtime information is handled and organized in two separate contexts: one representing the entire execution and one representing each individual test. Managing the information this way makes it easy to process all the data and store it in an in-memory structure or in external data sources:

  • Test execution context: the entry point that holds the set of test contexts and provides a thread-safe environment, accounting for parallel execution and concurrent access to the data.
  • Test context: represents the detail of each test collected during execution; it is associated with a test execution context and lets the test store information in that context.
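A minimal sketch of these two contexts, assuming a lock-based approach for thread safety (class and method names are illustrative):

```python
import threading

class TestContext:
    """Holds the information one test collects while it runs."""
    def __init__(self, test_id):
        self.test_id = test_id
        self.data = {}

class TestExecutionContext:
    """Entry point to the set of test contexts; safe for parallel runs."""
    def __init__(self):
        self._lock = threading.Lock()
        self._contexts = {}

    def context_for(self, test_id):
        # Create the test context on first access, guarded by the lock
        # so concurrent workers never create duplicates.
        with self._lock:
            if test_id not in self._contexts:
                self._contexts[test_id] = TestContext(test_id)
            return self._contexts[test_id]

    def all_contexts(self):
        # Snapshot for post-run processing (reports, persistence).
        with self._lock:
            return list(self._contexts.values())
```

After the run, `all_contexts` gives the reporting layer everything that was collected, whether it is then kept in memory or pushed to an external data source.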

This process captures, processes, and organizes the information for later persistence and sharing through various reporting, alerting, and monitoring channels.

Now that we know how test execution data is captured and organized at runtime, the next article will focus on visualizing it by sending custom alerts and generating test reports.
