How We Built a Digital Wallet

Anselmo Abadía
Published in Flux IT Thoughts · 7 min read · Aug 7, 2020


I wrote this article with my colleagues Juan Borda (DevOps engineer at Flux IT) and Matías Navarro (project leader at Flux IT). Enjoy the reading!

When we build any kind of digital product, there’s a context and a set of premises that will guide us during its evolution.

Building a digital wallet that lets users log in, check their account balance, and transfer money seems, at first glance, easy to implement. However, as we start delving into the development, things become more complex. In this article, we’ll present the main challenges we faced while developing this kind of app for a well-known Argentine financial entity, as well as the project’s details.

The Challenges

Growing by Leaps and Bounds

When we started, there were only a few of us. Building a team and making it grow is not an easy task: new members bring along their own work culture and they have to assume responsibilities that no one had before and take part in processes that are not clearly defined.

Building a Financial Architecture From Scratch

This requires many decisions: defining the stack, the infrastructure that will support it, and the hiring capacity we will have as a consequence of those technical choices, among others.

“How a Bank Should Work”

Building a financial application tempts us to follow previous experiences, based on “how a bank should work”. It’s key to identify that things may be different and adapt or change usual practices at all levels.

To tackle this development, we had to make a lot of decisions, and quickly. Of course, we made mistakes, learned, and made the necessary changes.

Broadly speaking, we built a 100% cloud solution, using several SaaS tools with a strong focus on the DevOps culture, structuring our team through the squad model.

The Project

Organization

  • As regards methodology, we started with a four-member team in charge of designing the application’s architecture, configuring the necessary tools, and developing the app’s core.
  • Almost simultaneously, a three-member UX team was put together with the mission of designing a native product. That adds the challenge of designing an application with typical Android and iOS components that looks the same on both platforms but also has its own brand identity.
  • While that happened, different teams were put together in order to implement the squad model, known for its success at Spotify. This methodology entails building teams (called “squads”) that are autonomous enough to develop functionality without dependencies, yet interrelated enough to comply with the same standards and guidelines.
  • What binds the teams together? “Chapter leads”: people with the same role across different squads who define their area’s guidelines, which all squads then follow (for example, the architecture or UX chapter lead).
  • Each team was responsible for the development of an application module: onboarding, payments, foreign currency purchase/sale, among others.
  • As the squads were put together, the company’s growth accelerated, since the need for other areas to support the squads increased. In no more than a year, the company grew from 4 to 100 people across the development, security, human resources, operations, and finance teams.

The Application’s Architecture

  • We developed a microservices-oriented architecture with cross-channel functionality, plus a BFF (backend for frontend) for each channel whose particular job is to orchestrate the other microservices.
  • We divided the development stacks in two, Node.js/Express and Java/Spring Boot, in order to limit the spread of different technologies, in line with the bank developers’ capacity. We also started testing Go for some microservices that required high concurrency.
  • Persistence was specific to each business domain and microservice: we used Redis, RDS, DynamoDB, and S3. In hindsight, we believe the better choice would have been fewer available options in the stack and more governance.
  • We used AWS API Gateway to expose our BFFs. At this point, we defined the hosts to expose, used SSL termination, and added token validation for authentication with Auth0.
  • In addition to microservices, we implemented some Lambda functions to analyze the pros and cons of migrating our whole microservices scheme to a serverless architecture. We tried Step Functions but didn’t have a good experience. However interesting the technology is, given the company’s size and everything we had already built in pursuit of automation, once we weighed the cost and effort, serverless was the wrong choice for us.
  • The solution, built on AWS, integrated with other clouds, such as Azure, through site-to-site VPNs in order to use some third-party services.
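The BFF-per-channel idea above can be sketched in a few lines. This is a minimal illustration in Node.js/TypeScript, not the actual codebase: the service calls are stubbed in-process (in the real system they would be HTTP calls to microservices behind the API Gateway), and all names and shapes are hypothetical.

```typescript
// Sketch of a channel BFF orchestrating two cross-channel microservices.
// The BFF's job: fan out to the services its screen needs, then shape
// one response tailored to that channel's client.

type Balance = { accountId: string; amount: number; currency: string };
type Profile = { customerId: string; displayName: string };

// Stand-ins for the accounts and customers microservices (hypothetical).
async function fetchBalance(accountId: string): Promise<Balance> {
  return { accountId, amount: 1500, currency: "ARS" };
}
async function fetchProfile(customerId: string): Promise<Profile> {
  return { customerId, displayName: "Ana" };
}

// Mobile BFF endpoint: one round trip for the app's home screen,
// calling both microservices in parallel.
export async function homeScreen(customerId: string, accountId: string) {
  const [profile, balance] = await Promise.all([
    fetchProfile(customerId),
    fetchBalance(accountId),
  ]);
  return {
    greeting: `Hola, ${profile.displayName}`,
    balance: `${balance.amount} ${balance.currency}`,
  };
}
```

The key property is that the microservices stay channel-agnostic; only the BFF knows what the mobile (or web) client needs in a single call.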

Security

  • We resorted to Auth0 with OAuth2 to handle the app’s authentication and authorization. It let us quickly stand up an OAuth2-based solution, with the added possibility of plugging in different authentication providers.
  • In order to have a 100% digital onboarding, we tried several biometrics solutions as complements to the Auth0 process: we went from an ad-hoc solution to testing a few products in pursuit of the highest success rate.
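To make the “token validation” step above concrete, here is a toy, self-contained illustration of signing and verifying a JWT using only Node’s crypto module. Note the simplification: real Auth0 tokens are RS256-signed and should be verified against Auth0’s published JWKS keys with a maintained library; this sketch uses HS256 with a shared secret purely to show the mechanics.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// base64url encoding as used by JWTs (no padding, URL-safe alphabet).
const b64url = (buf: Buffer) =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Issue a minimal HS256 JWT: header.payload.signature
export function signToken(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

// Verify the signature and return the decoded claims, or null if invalid.
export function verifyToken(token: string, secret: string): object | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Constant-time comparison to avoid timing side channels.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64").toString());
}
```

In production this check (plus expiry and audience claims) sits at the API Gateway and BFF layer, so no microservice trusts an unverified token.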

GitOps

  • Pipelines: We undertook the task of unifying and streamlining the integration and deployment pipelines. Many teams had taken part in the first implementation, but the current teams didn’t have the control or knowledge to handle or improve them. Thus, we set out to simplify the pipeline creation process so that new pipelines were easy to read and share among cells, and so that the changes introduced could be tracked.
  • Infrastructure automation: We worked with the DEV-SEC-OPS team to modularize the environment deployment on AWS. We chose Terraform, which allowed us to modularize the environment construction so that cross components stayed under operations’ control, while the cells were given independence to build the resources their apps needed (databases, Redis caches, API Gateway endpoints) without having to rebuild or compromise the common component base, prioritizing security and the reuse of processes.

In this regard, we were able to build an environment from scratch in less than an hour, which represented a valuable operational advantage for the team.
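That split between a shared base and per-squad resources could look something like the fragment below. This is a hedged sketch, not the team’s actual Terraform: every module path, variable, and output name here is hypothetical.

```hcl
# Operations owns the shared base (networking, cluster, gateway);
# each squad composes only the resources its app needs on top of it.

module "base" {
  source      = "./modules/base" # hypothetical: VPC, EKS cluster, API Gateway
  environment = var.environment
}

module "payments" {
  source   = "./modules/app"       # hypothetical per-squad module
  name     = "payments"
  vpc_id   = module.base.vpc_id    # consumes the base, never rebuilds it
  redis    = true                  # e.g. an ElastiCache Redis instance
  dynamodb = ["payments-events"]   # e.g. DynamoDB tables for this squad
}
```

Because squads only instantiate the app module with their own parameters, a new environment is a matter of re-running the same configuration, which is what makes the under-an-hour rebuild mentioned above possible.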

  • Continuous integration and continuous deployment were orchestrated on GitLab CI, combining workers dedicated to builds for each technology we used (Node.js, Java, and Go) and to building Docker images, able to scale with any component’s demand.

Along with the versioning scheme and the deployment strategies that we’ll focus on later, we introduced the shift-right concept for building Docker images: build once, then promote the same image through the different environments changing only the configuration, which accelerates deployments and makes them more secure.
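The build-once idea above implies that the application reads everything environment-specific from injected configuration rather than baking it into the image. A minimal sketch of that pattern, with illustrative (not actual) variable names:

```typescript
// The same Docker image runs in dev, staging, and production; only the
// environment variables injected at deploy time differ. The code never
// branches on "which environment am I in".

type Config = { apiBaseUrl: string; logLevel: string };

export function loadConfig(env: Record<string, string | undefined>): Config {
  return {
    // Hypothetical variable names; defaults make local runs work out of the box.
    apiBaseUrl: env.API_BASE_URL ?? "http://localhost:8080",
    logLevel: env.LOG_LEVEL ?? "info",
  };
}
```

At startup the app calls `loadConfig(process.env)`; a deployment to another environment changes the Kubernetes manifest or secret, never the image.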

  • Deployment strategies: For application and microservice deployment, we chose to work with GitOps: the AWS environments reflect the changes made to the source code. For this purpose, we built Docker images responsible for deployment, with all the tools needed to operate the Kubernetes clusters where the application ran.
  • Versioning schemes/branching: To ensure that all processes were homogeneous and that developers and architects could contribute and move between squads, we undertook the task of establishing a Git flow that would adapt to the needs of each product and its life cycle. In the first stages, we set the goal of creating flows for three major development groups: microservices, libraries, and mobile applications.

For libraries, we adopted a topic-branch-plus-master flow (simple and easy to iterate); for microservices and apps, we used GitFlow, which, combined with the deployment strategies, covered the life cycle of these types of development.

That is how, within the scope of a DEV-SEC-OPS strategy, we achieved a correspondence between environments and code versioning that was easy to read and understand when making adjustments or tracking bugs, keeping the whole application a dynamic but observable object: any team could go to the source code and know exactly what had been deployed.

  • As regards traceability and monitoring, we chose Datadog, and we worked on it at several levels. First, we focused on definitions, establishing common parameters and formats to trace calls from the mobile app’s entry point, through the microservices, down to the calls to integrations, of which there were a lot!

Then, we worked on proofs of concept and usage examples for the different languages we used: Node.js, Java, Go, etc. Lastly, we worked on deploying and automating the Datadog daemons in the Kubernetes clusters across the different environments to support the libraries’ implementation.

All this work allowed us to build dashboards and alarms and to use the Datadog APM to trace the developments that adopted the logging standards. Although adoption was gradual, each new team had the tools available to implement it.
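The “common parameters and formats” mentioned above typically boil down to structured logs carrying a correlation ID that travels with the request from the mobile entry point through every microservice. A plausible sketch follows; the field names are assumptions, not the team’s actual standard, and a real setup would propagate the ID via HTTP headers and the Datadog tracer.

```typescript
import { randomUUID } from "crypto";

// Structured JSON logger sharing one trace_id across a request's lifetime,
// so log lines from different microservices can be stitched together.
// Field names (service, trace_id, ...) are illustrative assumptions.
export function makeLogger(service: string, traceId: string = randomUUID()) {
  return {
    traceId,
    log(level: "info" | "error", message: string, extra: object = {}) {
      return JSON.stringify({
        ts: new Date().toISOString(),
        service,
        trace_id: traceId,
        level,
        message,
        ...extra,
      });
    },
  };
}
```

A downstream service would create its logger with the trace ID received from the caller instead of generating a fresh one, which is what makes end-to-end traces possible.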

  • GitLab/Bitrise: we chose these two tools to implement CI/CD. For the mobile apps, we chose Bitrise, a SaaS that allows builds for iOS and Android with integrations to the stores. This choice brought us great results, mainly because of its simple configuration and the capacity to easily provide test versions of the app for the QA teams without having to go through Apple TestFlight or Play Beta.

For everything else (code and build management), we selected GitLab, hosted on AWS in a Kubernetes cluster, because of its simplicity, TCO, scalability, and features.

Building something from scratch and growing rapidly in all areas and aspects poses a huge challenge, since it’s difficult to fit the new pieces together harmoniously under a time to market that presses upon us.

In that context, as we’ve seen in many of the decisions we’ve made, it’s key not to complicate things by “reinventing the wheel” (unless we can build a much faster and stronger wheel). Today, there are cloud services that can quickly solve difficulties for us.

It is also key that, as we develop, all these services remain interchangeable. The worst thing we can do in pursuit of quick results is fall into a vendor lock-in that is difficult to reverse. That is why concepts such as Cloud Native and the 12-Factor App are crucial in this kind of development: they safeguard our product’s wellbeing and its ability to change in the future.
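The interchangeability argument above usually takes a simple shape in code: call sites depend on a narrow interface, and the concrete cloud service is chosen by wiring. A hedged sketch (all names hypothetical):

```typescript
// Callers depend on this interface, never on an S3 or Azure SDK directly,
// so swapping providers changes the wiring, not the business code.
export interface ObjectStore {
  put(key: string, data: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// In-memory stand-in; an S3- or Azure-Blob-backed class would implement
// the same interface without touching any caller.
export class MemoryStore implements ObjectStore {
  private items = new Map<string, string>();
  async put(key: string, data: string): Promise<void> {
    this.items.set(key, data);
  }
  async get(key: string): Promise<string | undefined> {
    return this.items.get(key);
  }
}

// Business code: knows nothing about which cloud is behind the store.
export async function saveReceipt(store: ObjectStore, id: string, body: string) {
  await store.put(`receipts/${id}`, body);
}
```

The in-memory implementation doubles as a test fake, which is a nice side effect of designing against the interface.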

Know more about Flux IT: Website · Instagram · LinkedIn · Twitter · Dribbble · Breezy
