Test Automation Framework · Reporting & Observability (Part 5/5)

Damian Moga · Published in Globant · 5 min read · Oct 17, 2022

Monitoring test data, its details, and its progress over time improves visibility into how test activities are performed, how the product is tested to reach the expected level of quality, and how KPIs and other project metrics evolve, supported by a process for implementing improvements or remediating situations that impact test activities. Without data, and without a process for monitoring that data in real time, the test automation process can always be challenged and is never fully transparent.

So do not assume that a test report and a spreadsheet are enough to drive significant change. Always consolidate test data in a way that enables data-driven decisions through real-time dashboards and monitoring.

There are various tools on the market with different features and at different costs. Choosing the right one depends on how often the data is updated and how it needs to be displayed. In some cases, a custom dashboard in the test management tool is sufficient for data visualization; in other cases, this is not enough and more powerful and dynamic dashboards are needed. For this reason, selecting the right tool is very important.

Below is a practical example of how test execution information is stored in a database on a daily basis and per execution. The historical information is analyzed against predefined models using machine learning, trends, and pre-analyzed data. The “Smart Data” is organized and sent to the visualization tool according to its type and characteristics.
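As a lightweight illustration of what such trend analysis can look like, the sketch below fits a least-squares line to the daily pass rate and uses the slope as a degradation signal. The function name and the idea of flagging a negative slope are assumptions for illustration, not part of the framework described here:

```python
def pass_rate_trend(daily_pass_rates):
    """Fit a least-squares line to daily pass rates and return its slope.

    daily_pass_rates: floats in [0.0, 1.0], one per day, oldest first.
    A clearly negative slope suggests quality is degrading over time.
    """
    n = len(daily_pass_rates)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_pass_rates) / n
    # Ordinary least squares: slope = cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, daily_pass_rates))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Pass rate dropping over five days -> negative slope, flag it on the dashboard.
rates = [0.98, 0.96, 0.95, 0.93, 0.90]
assert pass_rate_trend(rates) < 0
```

A real setup would run richer models over the historical data in BigQuery, but even this simple slope is enough to attach a warning flag to a dashboard tile.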

Real-life example of using machine learning models to analyze test execution information and visualize it in real-time dashboards

1. Preparation

There are some important considerations before you send Smart Data and enable its visualization in dashboards.

First, you need to choose the right data source to keep the data, either temporarily or for a defined retention period. This choice depends on how you want to store the data but, more importantly, on how you want to access it using specific query filters. Since Data Studio is the visualization tool, BigQuery is a good complement: it provides external access to Google's Dremel technology, a scalable, interactive ad hoc query system for analyzing nested data, along with functions for managing and querying data, integrating with other tools, and running machine learning models.
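For example, a dashboard tile showing the daily pass rate can be fed by a simple aggregate query. The table and column names below are illustrative, not the article's actual schema; the snippet runs the same query shape against an in-memory SQLite database purely so it can be executed as-is — in practice you would run it as BigQuery standard SQL:

```python
import sqlite3

# Hypothetical schema for the execution-results table (in BigQuery this
# would be a dataset table, e.g. project.qa_metrics.test_executions).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_executions (
    run_date TEXT, team TEXT, status TEXT)""")
conn.executemany(
    "INSERT INTO test_executions VALUES (?, ?, ?)",
    [("2022-10-01", "payments", "PASSED"),
     ("2022-10-01", "payments", "FAILED"),
     ("2022-10-01", "search", "PASSED")])

# The same aggregate shape works in BigQuery standard SQL.
query = """
SELECT run_date,
       SUM(CASE WHEN status = 'PASSED' THEN 1 ELSE 0 END) * 1.0
           / COUNT(*) AS pass_rate
FROM test_executions
GROUP BY run_date
ORDER BY run_date
"""
for run_date, pass_rate in conn.execute(query):
    print(run_date, round(pass_rate, 2))  # prints: 2022-10-01 0.67
```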

Secondly, you should provide an API endpoint that receives the Smart Data as JSON, with a Cloud Function that processes it internally and stores the data in the BigQuery table.
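A minimal sketch of the receiving side, assuming a Python Cloud Function and a hypothetical payload schema: the handler validates the incoming JSON and shapes it into a row. The BigQuery streaming call itself is shown as a comment, since it needs credentials and a real project; all field names here are assumptions to adapt to your own table:

```python
import datetime

# Hypothetical required fields of the Smart Data payload.
REQUIRED_FIELDS = {"suite", "total", "passed", "failed", "environment"}

def to_bq_row(payload: dict) -> dict:
    """Validate the Smart Data JSON and shape a BigQuery-ready row."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"payload missing fields: {sorted(missing)}")
    return {
        "suite": payload["suite"],
        "environment": payload["environment"],
        "total": int(payload["total"]),
        "passed": int(payload["passed"]),
        "failed": int(payload["failed"]),
        "pass_rate": int(payload["passed"]) / max(int(payload["total"]), 1),
        "ingested_at": datetime.datetime.utcnow().isoformat(),
    }

# Inside the Cloud Function, the row would then be streamed to BigQuery, e.g.:
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   client.insert_rows_json("project.dataset.test_executions", [row])
```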

2. Visualization

Normally, a dashboard should be created before sending information, but in any case, the information can be sent, stored, and later visualized in specific dashboards.

Depending on how the data should be visualized and what KPIs or OKRs a project has, one or more dashboards can be created with different sections and charts.

  • Test distribution: how test cases are distributed per team, depending on the test type, category (Smoke, Sanity, Regression, Environment, Artifact, Priority) and, of course, whether each test is executed manually, scheduled for automation, or already automated.
  • Test coverage: how test coverage is distributed by business unit, critical function, and team, based on the number of automated versus manual tests and their evolution over time.
  • Test execution: how the tests are executed, how coverage is distributed by environment, artifact, and test type, and the number of tests executed per team at any given time of day.
  • Test effectiveness: an indicator of defects per environment, category, priority, or severity, and the number of open defects that impact the business. This includes the bug detection rate: how often bugs are found, in which environment, and whether they are detected by automated testing.
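To make two of these indicators concrete, here is one way automation coverage and the automated bug detection rate could be computed from raw records. The record shapes and field names are illustrative assumptions, not the framework's actual schema:

```python
def automation_coverage(test_cases):
    """Share of automatable test cases that are already automated."""
    automatable = [t for t in test_cases if t["automatable"]]
    if not automatable:
        return 0.0
    automated = [t for t in automatable if t["status"] == "automated"]
    return len(automated) / len(automatable)

def automated_bug_detection_rate(defects):
    """Share of defects first caught by automated tests."""
    if not defects:
        return 0.0
    by_automation = [d for d in defects if d["found_by"] == "automation"]
    return len(by_automation) / len(defects)

cases = [
    {"automatable": True, "status": "automated"},
    {"automatable": True, "status": "planned"},
    {"automatable": False, "status": "manual"},  # excluded from the ratio
]
print(automation_coverage(cases))  # -> 0.5
```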

With Data Studio you can do this and more, using multiple charts (bar charts, line charts, maps, density maps, scatter plots, Gantt charts, and bubble charts), data controls (data lists, text boxes, advanced filters, and other controls), raw data tables, and additional features to polish the presentation.

Below, we present some of these dashboards as examples of test information visualization that allow the analysis of coverage in different domains (not just at the test level), as well as execution results, frequency, and evolution over time. All data comes from BigQuery and was previously captured and processed by the test automation framework from test executions.

The first dashboard describes the distribution of test cases, overall and per project (in terms of distribution per team). It also visualizes relevant information such as the percentage of manual test cases compared to automatable ones, the distribution of test cases per type, and the evolution of the testing process per period (accumulated test cases, TCs created per week, accumulated manual TCs, accumulated automatable TCs, number of obsolete TCs, etc.).

Total distribution of test cases and per team (filtering)

The second dashboard shows test automation coverage overall and also per project. It provides coverage indicators that consider the percentage of test cases that can be automated, automated test cases, and distribution by test type. It also displays other relevant information such as cumulative automated TCs, the total number of automated TCs per week, and the automated WIP trend.

Total test automation coverage distribution and per team (filtering)

The third dashboard breaks down coverage per layer and per test type, in this case, only coverage related to API tests and for artifacts deployed in a given environment that have direct and integrated coverage in the CI pipeline. This information is valuable to understand how the coverage and number of tests are distributed across the application layers and also allows searching for specific artifacts without coverage (in a separate dashboard). Furthermore, based on the test execution information obtained, additional information can be displayed, such as success rates per artifact, evolution over time (adding warnings or flags when coverage decreases in a given period), and the information can be organized not only per layer, but also per team or affected area.
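The "artifacts without coverage" search and the coverage-decrease warning mentioned above can both be reduced to simple set and delta checks before the data reaches the dashboard. The artifact names and the 5% threshold below are illustrative assumptions:

```python
def artifacts_without_coverage(deployed, covered):
    """Deployed artifacts with no API test coverage at all."""
    return sorted(set(deployed) - set(covered))

def coverage_drops(previous, current, threshold=0.05):
    """Artifacts whose coverage fell by more than `threshold` between periods.

    previous/current: dicts mapping artifact name -> coverage ratio.
    Returns {artifact: (old_coverage, new_coverage)} for each drop.
    """
    return {
        artifact: (previous[artifact], cov)
        for artifact, cov in current.items()
        if artifact in previous and previous[artifact] - cov > threshold
    }

deployed = ["orders-api", "users-api", "billing-api"]
covered = ["orders-api", "users-api"]
print(artifacts_without_coverage(deployed, covered))  # -> ['billing-api']
```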

Artifacts with continuous testing (API testing) in the CI pipeline

In conclusion, it is worth remembering that Reporting & Observability are complementary approaches supported by industry best practices, cutting-edge test automation solutions and frameworks, intelligent real-time monitoring, and data-driven decision-making. It is a continuous process of execution, data collection, and visualization in which teams and the entire organization can feel the value of testing by gaining proper visibility into, and transparency of, every activity performed. As a result, with Observability in test automation, time to market is accelerated, the human effort spent analyzing information is reduced, and the entire quality process is ensured intelligently and automatically.

I’m Damian from Argentina, working at Globant as a Tech Director.