Roberto Moschini
Globant
Dec 11, 2020 · 4 min read


Banking Digital Transformation — A Quality Engineering perspective

The end goal of Digital Transformation is to build memorable experiences in a never-ending cycle that increases the ability to retain and convert users and to enrich the relationship between businesses and their customers. Transformation should never be seen as a goal, but as a commitment to a new way of doing business where information technology is used in ever-more sophisticated ways to generate business value.

In the banking arena we face transformations of different sizes, from first initial steps to 100% digital banks with no physical branches. In this article we will focus on lessons learned on the large ones: those shaped as long-term programs to design, build, and implement new digital platforms supporting an overhauled customer journey and business model. These solutions involve new public and private websites, mobile apps, and APIs, making extensive use of big data and machine learning as well as best practices from e-commerce. In my experience, programs of this type can involve more than 65 quality engineering professionals across different teams, representing about 20% of the staff, performing functional, mobile, automated, and performance testing.

In this particular industry there are several quality dimensions to address, and it is critical to ensure that nothing is at risk for the hundreds of thousands of clients operating the systems on a daily basis. To accommodate these challenges and goals, additional test stages were set up, from the development phase through a transition step with several testing activities prior to release to production.

In these notes I would like to highlight some of the key points that helped us succeed in getting a robust product to market. All of them share the same high-level methodological approach: Assess — Plan — Deliver.

Since these types of programs have several teams working together, or in the same time frame, standardized procedures, templates, and activities are a must. Work with your Quality Continuous Improvement or QA areas to tailor them, and publish these artifacts in the team's collaboration platform as early as possible.

Speak the same language as your client. A program like this will have hundreds of features to deliver, and about the same number of Change Requests, which you will break down into thousands of User Stories and tasks. When the final stages and UAT come, those have to be presented in a simple list that your customer can follow and approve. Find a way to keep this list updated and linked to your Product Backlog; a specific workflow for these features can probably be implemented in your issue tracking application, which will also allow dependency mapping.
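One way to keep such an approval list in sync with the backlog is to roll user-story status up to the feature level automatically. A minimal sketch, assuming a simple export from the issue tracker (the feature names, story IDs, and status labels below are illustrative, not any specific tool's API):

```python
from collections import defaultdict

# Hypothetical backlog export: each user story carries its parent feature
# and a workflow status. Real data would come from your issue tracker's API.
stories = [
    {"feature": "FEAT-101 Account opening", "id": "US-1", "status": "Done"},
    {"feature": "FEAT-101 Account opening", "id": "US-2", "status": "In UAT"},
    {"feature": "FEAT-102 Wire transfers",  "id": "US-3", "status": "Done"},
]

def uat_checklist(stories):
    """Roll story statuses up into a feature-level list the client can approve."""
    by_feature = defaultdict(list)
    for s in stories:
        by_feature[s["feature"]].append(s["status"])
    # A feature is presented for sign-off only when every story under it is Done.
    return {
        feature: ("Ready for approval"
                  if all(st == "Done" for st in statuses) else "In progress")
        for feature, statuses in by_feature.items()
    }
```

Regenerating this list from the backlog on every reporting cycle, rather than maintaining it by hand, is what keeps the customer-facing view and the team-facing view from drifting apart.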

Engage early in automated web services testing. As web services are the most common way to communicate with legacy systems, they will be used across the board, and early detection of issues here can save tons of downstream work. If a web service is incomplete or returns misleading responses, that can also be anticipated. Another key note: web services are typically provided and maintained by third parties, so it is highly recommended to have a specific Defect Management System to handle issues with them.
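The kind of early check that catches an incomplete or misleading response can be as simple as validating each response against the expected contract. A minimal sketch, assuming a hypothetical account-balance service (the field names and types are invented for illustration):

```python
# Expected contract for a hypothetical third-party account service response.
REQUIRED_FIELDS = {"accountId": str, "balance": float, "currency": str}

def validate_response(status_code, payload):
    """Return a list of contract violations; an empty list means the response passes."""
    problems = []
    if status_code != 200:
        problems.append(f"unexpected HTTP status {status_code}")
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field '{field}'")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"field '{field}' has wrong type")
    return problems

# A complete response passes; an incomplete one is flagged early,
# before any web or mobile work is built on top of it.
good = validate_response(200, {"accountId": "A-1", "balance": 10.5, "currency": "USD"})
bad  = validate_response(200, {"accountId": "A-1"})
```

Running checks like this against every newly delivered service, before UI integration starts, is what turns "the web service is not complete" from a downstream surprise into a third-party defect ticket.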

For some time we worked grouped by platform or specific technology area; later we reorganized the work to get a complete view of each feature, working it through from web services to web and mobile applications. Even though this requires more expertise from the tester, it ensured not only an integrated vision of each feature but also that the functionality was tackled seamlessly by all the teams involved.

Performance testing has some particular considerations. Test every single microservice as soon as it becomes available; if possible, test it in every available environment, and establish test timeframes for production-like environments. The other two key areas are the public website sections and all functionality consuming third-party services; these can have a very negative impact if they do not perform well.
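Even before a full load-testing tool is in place, a per-microservice latency check can run as soon as the service is available. A minimal sketch using only the standard library, with a stand-in for the real call and an assumed 500 ms p95 budget (both the budget and the stub are illustrative):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Stand-in for one microservice request; replace with a real HTTP call."""
    time.sleep(0.01)  # simulated service latency

def measure_latencies(n_requests=50, concurrency=10):
    """Fire n_requests through a thread pool and record per-request latency in seconds."""
    def timed_call(_):
        start = time.perf_counter()
        call_service()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_call, range(n_requests)))

latencies = measure_latencies()
p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th-percentile latency
assert p95 < 0.5, f"p95 latency {p95:.3f}s exceeds the assumed 500 ms budget"
```

Running the same script in every environment the service is deployed to, and comparing the percentiles, is often the quickest way to spot an environment-specific regression before the scheduled production-like test window.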

A special note has to be made regarding test data. It might be difficult to have an integrated test environment with all legacy systems synchronized and running, so you will need to work out how to get the required data. I recommend having someone lead and coordinate these efforts and the test data strategy, not only with the client and the test teams but also with the development and business analysis teams. In this process you will face scenarios that cannot be tested due to technical constraints (environment, data, interfaces, etc.); it is a good opportunity to flag them, seek the client's agreement on the constraints, and keep all actors informed about the situation.
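Flagging blocked scenarios works best when the data and interface dependencies are written down explicitly rather than discovered mid-execution. A minimal sketch of such a catalog (the scenario names and dependency labels are invented for illustration):

```python
# Each scenario declares what data or interfaces it needs; anything missing
# from the integrated test environment makes the scenario a flagged constraint.
scenarios = [
    {"name": "Domestic transfer",  "needs": {"checking_account"}},
    {"name": "International wire", "needs": {"checking_account", "swift_gateway"}},
    {"name": "Loan pre-approval",  "needs": {"credit_bureau_feed"}},
]

# Data and interfaces actually available in the integrated test environment.
available = {"checking_account"}

def partition_scenarios(scenarios, available):
    """Split scenarios into testable ones and blocked ones, with the missing items listed."""
    testable, blocked = [], []
    for sc in scenarios:
        missing = sc["needs"] - available
        (blocked if missing else testable).append((sc["name"], sorted(missing)))
    return testable, blocked

testable, blocked = partition_scenarios(scenarios, available)
```

The blocked list, with its missing items spelled out, is exactly the artifact to bring to the client when seeking agreement on the constraints.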

Regarding automation, aside from the common challenges I would like to mention two very important benefits that we obtained. One was the availability of test sets of different durations, from sanity/smoke and light regression to full regression; the other was being able to easily switch between execution environments.
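Both benefits come down to keeping tier and environment out of the tests themselves and in configuration. A minimal sketch of the idea, with hypothetical test names, tier labels, and environment URLs:

```python
import os

# Each automated check is tagged with the smallest suite that should run it.
SUITE_LEVELS = {"smoke": 1, "light": 2, "full": 3}
TESTS = [
    ("login",            "smoke"),
    ("balance_inquiry",  "light"),
    ("statement_export", "full"),
]

# Execution environment is switched by configuration, not by editing tests.
ENVIRONMENTS = {
    "dev":     "https://dev.example-bank.internal",
    "staging": "https://staging.example-bank.internal",
}

def select_tests(tier):
    """Every tier includes the tiers below it: full includes light includes smoke."""
    level = SUITE_LEVELS[tier]
    return [name for name, t in TESTS if SUITE_LEVELS[t] <= level]

def base_url():
    """Pick the target environment from a TEST_ENV variable, defaulting to dev."""
    return ENVIRONMENTS[os.environ.get("TEST_ENV", "dev")]
```

With this shape, the nightly job can run `select_tests("full")` against staging while a pre-merge check runs `select_tests("smoke")` against dev, with no change to the test code itself.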

Once you start UAT or community-driven testing, have your users enter issues directly into your Defect Tracking System. It sounds obvious, but using mailboxes or other communication channels quickly becomes unmanageable.

Finally, I recommend establishing a governance layer across the test organization so you can keep the strategic and tactical vision of the program on track. Keep in mind that these transformations are extremely challenging and in most cases represent a paradigm shift.
