From zero (tests) to hero

Ricardo Lopes
Published in Feedzai Techblog
8 min read · Jan 17, 2022

A story of cross-team collaboration to scale test automation.

Illustration source: https://2.flexiple.com/scale/all-illustrations

In this article, we present the strategy that allowed Feedzai to scale test automation and improve test coverage across different projects through a cross-team approach. We focused on developing a library for automated tests guided by several principles, with people collaboration and reusability at its core.

A single platform, multiple use cases

The Feedzai ecosystem is composed of a set of different products that together enable our clients to fight financial crime at different levels, such as during Account Opening, Anti-Money Laundering (AML), or Transaction Fraud for Retail Banking.

One of the platforms used is called Case Manager (CM). CM enables fraud analysts to perform a post-factum analysis over received transactions. When transactions reach CM they have already been automatically scored by a real-time streaming engine that integrates Machine Learning models and Rules. However, in some cases there is still a need for human analysis. That’s when CM comes into play.

Example of Case Manager User Interface

Each financial crime domain contains very specific operational flows for analysts (CM’s main users). This means that the operational flows for an AML use case are completely different from those for a Transaction Fraud use case. As a result, CM needs to be highly configurable. Essentially, our engineers can configure almost every CM screen via configuration files to best match the needs of the use case.

Example of different widgets used across Solutions use cases

CM is developed by RiskOps, a cluster of multiple teams focused on improving analysts’ and ops managers’ efficiency. On the other hand, the metadata and configuration layer of CM is developed by the Solutions cluster, a set of teams fully focused on offering out-of-the-box product configurations to address different use cases.

As a result, there are differentiated releases for the CM platform itself and the business layer (out-of-the-box Solutions customization).

Initial status: 0 automated tests for the UI

The CM team enforces a set of quality practices during the development process and validates the platform at different test levels: unit tests (back-end and front-end), integration testing (back-end and front-end), API testing, and UI testing.

The Solutions team has also applied different levels of testing in its codebase, but until now there were no automated UI tests (normally the top layer of the testing pyramid). Therefore, the team relied heavily on unit, integration, and API testing.

Comparison between Case Manager and Solutions automated testing pyramids

During the Solutions release process, and in order to mitigate the risk of wrong configurations or bad integration between CM and the Solutions customizations, we executed a set of manual smoke tests to build our confidence in the platform before the release.

However, this approach did not allow us to reduce our feedback loop. We work to automate our releases as much as possible, and as such it was clear that this gap needed to be addressed in the short term.

A Shared Testing Library: Motivations

In order to bootstrap this effort, the teams from both clusters were involved in discussions about what would be the benefits of developing a shared testing library.

Reuse: Work alongside the teams and promote the reuse of the same library instead of building yet another framework from scratch.

Focus on writing new tests: Enable QA engineers and front-end developers to easily write new tests covering the main user flows, rather than investing time implementing the framework logic themselves.

Avoid maintenance costs: A shared library ensures that problems get fixed much faster than in a library owned and maintained by a single team.

The next step was to analyze which UI testing framework to use as the underlying technology for the library. We focused on two tools that were already used internally: Selenium and Cypress.

But… instead of focusing on the technical aspects of these tools, we focused on people, and on what would be considered highly performant for the entire team: people over tools.

After this analysis, we moved along with Cypress for the following reasons:

  • The entire front-end team from CM (more than 7 engineers) has experience in Cypress (it was used in their integration tests) and can contribute to the library definition.
  • Most QA engineers have little exposure to Selenium. Therefore, the learning curve would be similar for both tools (Cypress and Selenium).
  • For the reasons above, Cypress makes it possible to scale E2E UI test automation across different profiles (front-end and QA engineers), whereas moving along with Selenium would mean betting on a tool familiar only to QA engineers.
  • Cypress was already being used for the Front-end Integration Test layer, so a lot of common ground already existed, which enables code reuse and refactoring.

A Shared Testing Library: Development Process

After the agreement on the framework to use, different teams started to work on the testing library.

In terms of the governance model, we created a new repository called case-manager-cypress-commands, which was owned by the CM team but anyone from outside was able to contribute.

While developing the library, a set of software design principles were followed:

Keep It Simple, Stupid! (KISS) We preferred to define simple Cypress commands focused on interacting with the UI Components instead of designing a more complex logic using the Page Object Model, as advocated by Gleb Bahmutov, former VP of Engineering at Cypress.io.
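As a sketch of what this looks like in practice (the selector below is an assumption for illustration, not the library’s actual implementation), a simple component-focused command can be registered through Cypress’ custom-command API instead of wrapping screens in Page Objects:

```javascript
// A minimal sketch of the KISS approach: one small command per UI
// component, registered via Cypress.Commands.add, instead of a full
// Page Object Model. The selector is a hypothetical example.
Cypress.Commands.add('getButton', (label) => {
  // Find a CM button by its visible label.
  return cy.contains('button', label);
});

// Usage inside a test:
// cy.getButton('Save').click();
```

The appeal of this style is that each command stays tiny and composable, so tests read as a sequence of user interactions rather than navigating a page-class hierarchy.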

Don’t Repeat Yourself. (DRY) We applied this both by refactoring most of the already existing logic in the integration test layer (which was moved to the library) and by promoting a culture where every command developed in the scope of Solutions tests should be moved to the library repository.

You Aren’t Gonna Need It. (YAGNI) — We focused on developing commands only when they were needed in the scope of tests rather than having a Big Bang approach of creating all the commands and implementing tests afterwards, which would result in a lot of unused Cypress commands. Basically, our tests define what is needed.

Additionally, QA engineers were focused on writing new tests while the frontend engineers were focused on abstracting the CM Cypress Commands and extending the framework in order to accommodate the needs from the new tests that were being created.

This allowed front-end engineers to absorb some of the testing best practices from QA engineers. On the other hand, QA engineers were able to quickly improve their knowledge of Cypress by learning from the expertise of front-end engineers.

Below we have a test scenario snippet written in Cypress, which uses the testing library. Commands such as getButton, getSelectInput, or getField are Cypress Commands that enable interaction with some of the UI elements of CM.
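The original snippet is published as an image; the following is a hypothetical reconstruction using the command names mentioned above (getButton, getSelectInput, getField). All labels, routes, and values are illustrative assumptions, not the real CM UI:

```javascript
// Illustrative sketch only: a Cypress scenario built on the shared
// CM commands. Field labels, the route, and the decision values are
// assumptions for the sake of the example.
describe('Transaction Fraud - case review', () => {
  it('allows an analyst to resolve a case', () => {
    cy.visit('/case-manager');

    // Interact with CM widgets through the shared library commands.
    cy.getField('Case ID').should('be.visible');
    cy.getSelectInput('Decision').select('Fraud');
    cy.getButton('Resolve Case').click();

    cy.getField('Status').should('contain', 'Resolved');
  });
});
```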

Zero to hero (tests): Impact

  • We were able to adopt the library in 3 different Feedzai Solutions codebases.
  • No more need to perform manual regression tests in every release to ensure that the CM out-of-the-box configuration is production-ready.
  • Across the different codebases, the UI test stage added 25 minutes of build time to the pipelines.

Before this initiative, the Solutions teams did not have automated tests at the UI testing layer, which meant we needed to perform manual tests in every release to ensure that the CM out-of-the-box configuration was production-ready.

With the introduction of the CM Cypress Commands, we are now able to validate the main user flows in CM reliably by applying the same strategy across codebases.

If we take the example of a single Solution (Transaction Fraud for Retail Banking), for which we issued around 20 releases last year, we can do the following calculation of the effort saved in manual validations:

Wrapping up

To conclude, let’s wrap up the main takeaways from the initiative.

Enabling cross-functional roles: Front-end and QA engineers working alongside and learning from each other.

Promote Team Collaboration: Leave your own island and break silos across teams.

Improved Team Morale: Build a productive company culture based on common goals, one that emphasizes sharing over re-doing.

Quality as a Shared Responsibility: Recognize that quality is not a single person’s or a single role’s responsibility, but a cross-team responsibility.

People Over Tools: Selecting a testing tool is not all about the specs. Focus on the people you have and how they will be able to extract the best work using it. Remember: ‘a fool with a tool is still a fool’.

Feedback Loop and Test Coverage: The initiative’s main goal was achieved. Collaboration drove us to an improved feedback loop and an increased test automation coverage.

Final thoughts

As a final thought, and since one of the main goals of this initiative was people collaboration, we would like to call out all the people who contributed to the testing library and promoted its usage across different projects: Bernardo Maciel, Bruno Lages, Emanuel Correia, João Dias, João Duarte Pinto, João Cesar, João Pina, José Sousa, Liliana Fernandes, Lucas Casanova, Luís Moura, Mickael Costa, Pedro Correia, Rodrigo Graças, Telma Correia, Vanda Barata.
