Tech at N26 — The Bank in the cloud

If you are in Europe, chances are that you have heard of N26. It’s a #NoBullshit bank that gives you complete control over your finances from your smartphone. It’s one of Europe’s most successful startups, and it is what a bank in 2018 should be.

Also, it is a bank that is completely hosted in the cloud!

I joined N26 some time back as a developer in the core backend team. I had worked with microservices, cloud, CI/CD, containerisation and so on before, but I imagined that a bank would be more “traditional”; I am glad my assumptions were wrong. Since then, I have been amazed by the sheer brilliance of the architecture and the technologies that work together to make it all possible. Here are a few technologies that are used extensively at N26.

Bonus points if you recognise most of these

I think it is worth sharing our experiences, our learnings and how we do things, so this post is a mix of what I have learnt here in a short time.

In the beginning, there was a Monolith (and rumour has it, there still is!). As we started growing, we quickly realised that we needed services that can scale independently. Taking a cue from Conway’s law, it became essential to have cross-functional teams centred around features. As our understanding of the domain matured, we started slicing the monolith into smaller logical services.

Currently, there are about 15 cross-functional teams responsible for 60+ microservices. These teams are structured around business sub-domains and functionality, and each manages its corresponding services. This gives us the freedom to choose the right tools and to focus on how we want to build a particular service, and it lets teams experiment with new ideas quickly and iteratively.

Coming to tech — for a bank, we do things differently. Here are the things we are proud of:

CI/CD pipeline: As soon as code is pushed to a branch of the repository, the CI server kicks in, runs the automated tests and reports the build status. Every merge to master triggers the CD pipeline. Here is how a typical pipeline looks, with its different stages.

Once all the tests pass and there are no security flaws (and we have the necessary business approvals), the application is deployed to the different environments in a blue/green manner.
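To make the stages concrete, here is a rough sketch of what such a pipeline definition could look like. This is an illustrative fragment, not our actual configuration; the stage names and tooling syntax are hypothetical.

```yaml
# Hypothetical pipeline sketch, not our actual configuration
stages:
  - build            # compile and run unit tests on every push
  - integration      # run integration tests against a disposable environment
  - security-scan    # dependency checks and static analysis
  - deploy-staging   # automatic on every merge to master
  - deploy-live      # blue/green switch, gated on business approval
```

The key property is that every merge to master flows through the same automated gates, so a live deployment is just the last step of an ordinary build.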

There are about 900 builds and 100+ live deployments every week.


Infrastructure as code: The N26 Core Teams (Security, SRE and TechOps) live by the discipline that infrastructure provisioning should be automated. We are big fans of Hashicorp and use:

  • Nomad and Saltstack to provision and run infrastructure. They enable us to quickly create and destroy entire servers and environments as and when needed.
  • Consul for service discovery and configuration management. It lets services resolve the other services they depend on without knowing their IPs, which is very powerful when services scale up and down.
  • Vault for managing secrets. It prevents unauthorised access and enforces a strict, least-privilege, need-to-know security model.
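To give a flavour of how these pieces fit together, here is a minimal sketch of a Nomad job. The job name, image, datacenter, health-check path and Vault policy are all made up for illustration; the point is that one declarative file wires a service into Consul for discovery and into Vault for secrets.

```hcl
# Hypothetical Nomad job sketch; all names and values are illustrative
job "accounts-service" {
  datacenters = ["eu-central"]

  group "api" {
    count = 3

    task "app" {
      driver = "docker"
      config {
        image = "registry.example.com/accounts-service:1.4.2"
      }

      # Registers the task in Consul so other services can
      # resolve it by name instead of by IP
      service {
        name = "accounts-service"
        port = "http"
        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }

      # Grants the task access to its secrets in Vault
      vault {
        policies = ["accounts-service-read"]
      }
    }
  }
}
```

Because the desired state lives in code, scaling to five instances or moving to a fresh environment is a one-line change and a redeploy, not a manual provisioning exercise.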

Containerisation and autoscaling services: The services are built as Docker containers, which ensures consistency across environments. Configuration is injected dynamically, and the load balancers are configured to autoscale service instances up and down based on the load.

All services are designed so that they can be brought up or killed at any time. Our servers are cattle, not pets. Containerisation also eases development and local testing: setting up a dev machine identical to the other environments is usually a matter of minutes.
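The environment-consistency point can be sketched with a Dockerfile like the one below. It is a hypothetical example for a JVM service (the base image and paths are made up), but it shows the pattern: the image contains only the application, while all configuration arrives at runtime.

```dockerfile
# Hypothetical Dockerfile for a JVM service; image, paths and port are illustrative
FROM openjdk:8-jre-alpine

# The same immutable image runs in every environment; configuration is
# injected at runtime (e.g. via Consul), never baked into the image
COPY build/libs/service.jar /app/service.jar

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/service.jar"]
```

Since nothing environment-specific is baked in, the exact artefact that passed the pipeline is the one that runs in production, and the same image boots on a laptop in minutes.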


Monitoring and logging: When you have a distributed system, you need to have an eye on everything happening in the system.

  • Dashboards that capture metrics on user actions, inbound and outbound HTTP traffic, requests, users, response statuses and times, and everything else become essential. Our monitoring and analytics system gives us an eye into the running system and helps us pinpoint anomalies, track latencies against thresholds and verify our SLAs.
  • Centralised logging through ELK: the services send their logs to Logstash, which pumps them into a massive Elasticsearch cluster that we query through Kibana.
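What makes this pipeline searchable is that services emit structured, one-object-per-line logs rather than free text. Here is a minimal sketch of the idea; the field names are illustrative, not our actual log schema, and in practice a library (e.g. Logback with a JSON encoder) does this for you.

```java
// Minimal sketch of structured logging for ELK ingestion.
// Field names are illustrative; a real setup would use a logging
// framework's JSON encoder instead of hand-built strings.
public class StructuredLog {
    // Escape backslashes and quotes so the output stays valid JSON
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    // One JSON object per log line: easy for Logstash to parse and for
    // Kibana to filter by service, level or request id
    static String logLine(String service, String level,
                          String message, String requestId) {
        return String.format(
            "{\"service\":\"%s\",\"level\":\"%s\",\"message\":\"%s\",\"requestId\":\"%s\"}",
            escape(service), escape(level), escape(message), escape(requestId));
    }

    public static void main(String[] args) {
        System.out.println(
            logLine("accounts", "INFO", "transfer initiated", "req-42"));
    }
}
```

Tagging every line with a request id is what lets you follow a single user action as it hops across half a dozen microservices.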

We love to experiment: a good company is not just about the tech but also about the culture it maintains. Yes, we are a bank. Yes, everything we do needs to be audited and has to pass through multiple gates of security. And yes, we are not afraid to try out new things!

Every 6 weeks, there are Getting Stuff Done Days: 2 days when everyone in the company is allowed to work on anything they want. At the company level, that is about 7500 hours of creative energy, with motivated people doing whatever they feel improves things. We believe that if you have the right people and you trust them to do the right thing, it goes a long way.

As part of my first GSDD, I evaluated the newly released Spring Boot 2 against Spring Boot 1 and published my findings here; the post was later featured in the weekly Spring newsletter.

Another thing that came out of GSDD: someone experimented with Kotlin and shared the learnings with others, and now we are moving towards adopting Kotlin more broadly across the company (read why N26 loves Kotlin).


There is so much more to cover, and I have only just begun my journey here. The areas I have touched on above are oceans in themselves. I will continue learning and sharing.

There is so much that has already been done, and still so much more to do. We are expanding to the UK and the US soon. We are growing, and if any of the above gets you excited, check out the positions you can fill here.