And you? Do you know how to assess the footprint of your automated functional tests?

Julien BARLET
Published in Decathlon Digital
Sep 1, 2020 · 6 min read

When we talk about unit tests on a project, we readily look at the unit test count, but above all at the footprint of these tests on the application code: this is the notion of code coverage. This notion allows developers to set a coverage objective, giving them a target to follow throughout the life of the project. The team commits to a coverage value and sticks to it.

When we talk about automated functional tests (with Selenium, Cypress, or another test automation framework), we very often lose this notion of the footprint of our tests on our application. Partly because it is difficult to measure, and partly because these are functional tests: their footprint has to be assessed on the functionalities and use cases, not on the application code.

For teams with high test maturity, two measurement approaches are usually preferred:

  1. The theory-based one, using Cohn’s pyramid, where the team applies a factor per level of testing. For example, with a factor of 10, 1,000 unit tests imply 100 service tests and 10 UI tests (a minimal sketch of this arithmetic follows the list). The advantage of this method is that it keeps a balance between the different levels of testing; the disadvantage is that it gives no clear measure of coverage of the functionalities.
  2. The one based on test coverage of User Stories. Less theoretical than the previous one, this coverage is clearly oriented toward functionalities. However, it reaches its limits over time, as User Stories do not constitute functional documentation that stays up to date with the application.
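
To make the first approach concrete, here is a minimal sketch of its arithmetic in TypeScript (the factor and counts are just the example values above, not a recommendation):

```typescript
// Cohn's pyramid with a factor per level: each level up the pyramid
// has "factor" times fewer tests than the level below it.
const factor = 10;
const unitTests = 1000;

const serviceTests = unitTests / factor; // 100 service tests
const uiTests = serviceTests / factor;   // 10 UI tests

console.log({ unitTests, serviceTests, uiTests });
```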

Since these two methods are not entirely satisfactory, let’s imagine a new one and give ourselves the means to measure frequently and over the long term!

In this article, we will present the genesis of our method at Decathlon and the tool that allows us to measure it.

Continuous testing at Decathlon: focus on the method

The time to convince people of the value of having automated and industrialized tests is over. However, opinions can diverge on how to implement these tests throughout the life of the application, from the start of the project until it is retired from production.

I would like to share how, on our e-commerce program at Decathlon (a French sporting goods retailer and, with over 1,500 stores in 57 countries, the largest in the world), we have developed our automated testing strategy according to the constraints that arise at different times in the life of the project.

First, let’s set the context. We are fifteen teams of ten people, all Agile, mainly using Scrum. Each team owns a specific functional area of e-commerce (product supply management, customer account, logistics and payment, etc.). Some teams are transversal: front-end, operations, and build. Except for the operations and build teams, each team includes all the Agile roles (Scrum Master, Developers, Product Owner… and QA).

Team organization

In the beginning, a strict application of the BDD methodology: our acceptance tests linked to the user stories

Regarding the method, all the teams are Agile and use Behavior Driven Development. The “Definition of Ready” requires teams to have acceptance tests for each User Story before committing to implementation. Among these acceptance tests, the most critical ones (very likely use cases with strong business impact) are automated.
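
As an illustration (this is not our actual test code; the page, selectors, and story ID are hypothetical), such an automated acceptance test could look like this with Cypress:

```typescript
// Hypothetical Cypress acceptance test for a critical use case of a
// User Story, e.g. "US-1234: a customer can add a product to the cart".
describe('US-1234 - Add a product to the cart', () => {
  it('adds an in-stock product and updates the cart counter', () => {
    cy.visit('/products/running-shoes');     // hypothetical product page
    cy.get('[data-cy=add-to-cart]').click(); // the acceptance action
    cy.get('[data-cy=cart-count]').should('contain', '1'); // the acceptance criterion
  });
});
```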

Test mapping on user stories

As a result, each team can provide an indicator of automated test coverage of their User Stories. By consolidating the results, you can obtain this coverage measure for the whole e-commerce solution.

It worked well at the start of the project. The automated tests kept pace with the development effort and, above all, with the project’s growing functional scope. The coverage indicator was there to ensure that the effort on automated tests matched the effort put into development.

The User Story, a volatile piece of information. Make way for feature mapping

However, User Stories have a level of granularity specific to each team. Once accepted by the tests, they only have historical value. As such, they constitute volatile information and cannot be considered a reference for requirements.

We had to revise our automated test coverage indicator so that it would no longer be based on User Stories. To that end, we built a feature mapping. Features ultimately remain the product of user stories, with the difference that, unlike user stories, they can evolve over the life of the application, and the associated tests evolve with them.

Test mapping on features

Concretely, the acceptance tests previously associated with our user stories became associated with our features. This means that an acceptance test can cover the acceptance criteria of one or more user stories of the feature. In practice, this led us to refactor our tests, which were generally enriched with more actions and assertions. On the automated testing side, this saved execution time, but it also forced us to take care of the robustness of our tests, because the more steps a test has, the higher the chance of a false positive.
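
As a sketch of this refactoring (the feature, selectors, and story references are illustrative), a single feature-oriented test can chain the acceptance criteria that previously lived in several story-level tests:

```typescript
// Hypothetical refactored test: one scenario covers acceptance criteria
// coming from several user stories of the same "Cart management" feature.
describe('Feature: Cart management', () => {
  it('adds a product, updates its quantity, and shows the total', () => {
    cy.visit('/products/running-shoes');
    cy.get('[data-cy=add-to-cart]').click();               // criterion from a first story
    cy.get('[data-cy=cart-count]').should('contain', '1');

    cy.visit('/cart');
    cy.get('[data-cy=quantity-input]').clear().type('2');  // criterion from a second story
    cy.get('[data-cy=cart-total]').should('be.visible');   // criterion from a third story
  });
});
```

One scenario means fewer browser sessions and a faster run, but every extra step is one more chance of a false positive, hence the attention to robustness.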

ARA: an open source tool to support and develop our feature mapping

This mapping, which became living, non-volatile documentation, had to be stored in a place accessible to all the actors of the project. It had to be easy for project stakeholders to update, provide the test coverage index for these features in real time, and so on.

To meet this need, we decided to upgrade one of the tools in our ecosystem. This tool was originally designed by our IT teams with the promise of making it easier to exploit our automated test results and of providing continuous measurements of our quality. Adding the ability to manage the feature mapping therefore made a lot of sense (illustration below).

An illustration of this mapping in the tool — ARA tool

All the features are organized in a tree; the leaves of the tree are the features themselves. Their granularity is managed by the team responsible for them. For each of them, we have several pieces of associated information, including:

  • The responsible team
  • Criticality
  • Code of the automated test …
An automated test referenced in the feature mapping — ARA tool
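
For illustration only (this is not ARA’s actual data model; the names are assumptions), a node of such a feature tree could be modeled like this:

```typescript
// Rough sketch of a feature-tree node: leaves are features, inner
// nodes only group them. Illustrative, not ARA's real model.
type Criticality = 'HIGH' | 'MEDIUM' | 'LOW';

interface FeatureNode {
  name: string;             // e.g. "Cart > Add a product"
  team: string;             // the responsible team
  criticality: Criticality;
  coveringTests: string[];  // identifiers of the automated tests
  children?: FeatureNode[]; // present on non-leaf (grouping) nodes
}
```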

The tool also lets you get the details of the associated tests by clicking on the cell where the number of tests is shown (illustration above). This is very helpful for getting the test details associated with a feature.

Finally, the tool provides you, in real time, with the level of coverage of your tests on your functionalities overall, by team, and by criticality level (illustration below).

Coverage value by team / criticality level
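
As a hedged sketch of how such an indicator can be computed (ARA’s real computation may differ), the coverage is simply the share of features covered by at least one automated test, grouped by team or by criticality:

```typescript
// Illustrative coverage computation: percentage of features that have
// at least one automated test, grouped by an arbitrary key.
interface Feature {
  team: string;
  criticality: 'HIGH' | 'MEDIUM' | 'LOW';
  coveringTests: string[]; // identifiers of the automated tests
}

function coverageBy(
  features: Feature[],
  key: (f: Feature) => string,
): Map<string, number> {
  const stats = new Map<string, { total: number; covered: number }>();
  for (const f of features) {
    const k = key(f);
    const s = stats.get(k) ?? { total: 0, covered: 0 };
    s.total += 1;
    if (f.coveringTests.length > 0) s.covered += 1;
    stats.set(k, s);
  }
  return new Map([...stats].map(([k, s]) => [k, (100 * s.covered) / s.total]));
}

// e.g. coverageBy(features, f => f.team) or coverageBy(features, f => f.criticality)
```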

ARA, for Agile Regression Analyzer, has become essential in our software quality system. Beyond its primary function, which is to support the teams in the continuous exploitation of automated test results and to provide real-time measurements of software quality, the tool allowed us to carry out this change of method and converge our tests toward a feature mapping.

The tool, developed by Decathlon’s IT teams, is open source; you can use it and contribute to it freely via the GitHub repository: https://github.com/Decathlon/ara

Thanks to Guillaume Troyon, Alissia Gelabert, and the Decathlon QA Community for their reviews and advice.

Julien BARLET
Decathlon Digital

Engineering Manager @Decathlon. Passionate about quality engineering.