Web security at N26

Hugo Giraudel · Published in InsideN26 · Feb 18, 2019

There is a popular saying that goes, “The web is accessible by default; it’s our job not to screw it up.” In many ways, security feels like the opposite. It’s poor by default, and you really have to work hard to make it decent, and even harder to make it good.

N26 being a bank, this is a topic with which we’re not messing around. Over the last year, the Security Team and the Web Team have been working closely together to dramatically increase the robustness and resilience of the N26 Web platform. This article is a tour of some things we do to protect our customers from various attacks and risks.

Protecting against DDoS

As a solid banking contender in Europe — and soon to be in the United States — N26 is more and more a potential target for Distributed Denial of Service attacks (DDoS for short). These attacks usually consist of unleashing a vast amount of traffic onto a server in the hope of overloading it and shutting it down. Maintaining such an attack over a long period of time is a common tactic to extort companies into paying ransoms.

The first line of defense against denial of service attacks is to impose some sort of rate limiting. That’s technical wording for blocking someone (an IP) from performing too many requests within a certain time frame. This ensures someone cannot automate thousands of requests per second or minute against a server.
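As an illustration, here is a minimal sketch of a fixed-window rate limiter written as an Express-style middleware. The window size and request budget are made-up values, and a production setup would track state in a shared store rather than in memory.

```ts
import type { Request, Response, NextFunction } from "express";

const WINDOW_MS = 60_000; // length of the counting window (assumed value)
const MAX_REQUESTS = 100; // allowed requests per window (assumed value)

const hits = new Map<string, { count: number; windowStart: number }>();

export function rateLimit(req: Request, res: Response, next: NextFunction) {
  const ip = req.ip ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip);

  // Start a fresh window if this IP is new or its window has expired.
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return next();
  }

  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    // Over budget for this window: reject without doing any work.
    return res.status(429).send("Too Many Requests");
  }

  next();
}
```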

The reason this is not enough is that proper DDoS attempts are unlikely to come from a single IP. A more advanced attacker might get their hands on hundreds, if not thousands, of low-end devices (sometimes called “a botnet”): basically, anything that can access the internet and still ships with a default password, such as fridges, baby monitors, walkie-talkies, and whatnot; welcome to the Internet Of Shit™. With this remotely-controlled army, they can hammer a server with hundreds of thousands of requests coming from hundreds of IPs.

These types of attacks are far more difficult to defeat, especially if every device keeps its request rate just below the restricted limit. To help us counter these invasions, we have a proxy acting as a funnel in case of overload: it will slow down, abort or deflect requests so only a certain amount reach our services in a certain time frame. Think of it as a digital bouncer. On top of this and various other defense mechanisms, which we won’t discuss here, we have horizontal scaling in place for all services: as the load goes higher, we deploy more machines and balance the load across all of them, effectively reducing the load on each individual machine.

Rate limiting, however, no matter how well implemented, might not be enough to prevent DDoS attacks entirely. Another way to slow down a server to the point where it can crash is by finding a speed bottleneck. Consider a request performing heavy image processing, running validation on a given value, or requesting a lot of interconnected data. It might be a compute-intensive request with a slow response time. What happens if someone keeps performing such a request, not necessarily even very fast, so many times that the server gets overwhelmed?

To prevent such scenarios, we run some smoke checks on the shape of the request. Is it awkwardly large? Does it have an abnormal number of values? Are some values absurdly long? If we find something suspicious, we abort the request without processing it. Unfortunately, GraphQL is also sensitive to such attacks. Imagine querying information about a user. One property of a user might be their contacts. Each contact is another user entity. Which has contacts. Which are other user entities, and so on. A single GraphQL query could try to query this dozens or hundreds of levels deep, draining CPU resources with it. One way we prevent that is by measuring the depth of incoming GraphQL requests; if they are more than a few levels deep, we abort them right away.
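To make the idea concrete, here is a small sketch of such a depth check using the reference `graphql` package. The threshold is an assumed value, and fragments are ignored for brevity.

```ts
import { parse, type SelectionSetNode } from "graphql";

const MAX_DEPTH = 5; // assumed threshold

// Recursively compute the deepest field nesting in a selection set.
// Fragment spreads are ignored here for brevity.
function depthOf(selectionSet: SelectionSetNode | undefined): number {
  if (!selectionSet) return 0;
  let max = 0;
  for (const selection of selectionSet.selections) {
    if (selection.kind === "Field") {
      max = Math.max(max, 1 + depthOf(selection.selectionSet));
    }
  }
  return max;
}

export function assertQueryDepth(query: string): void {
  const document = parse(query);
  for (const definition of document.definitions) {
    if (
      definition.kind === "OperationDefinition" &&
      depthOf(definition.selectionSet) > MAX_DEPTH
    ) {
      throw new Error("Query too deep: aborting request");
    }
  }
}
```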

In our case, the vast majority of CPU resources are consumed by server-side rendering with React. Server-side rendering not only helps us reduce the Time-To-First-Meaningful-Paint, it also enables us to provide the full N26 experience to users without JavaScript (whether disabled or failing). During attacks, however, we might have to reduce CPU consumption in order to stay up. Turning off server-side rendering to fall back on a pure client-side single-page application is one way to do so.

To enable this, we have some configuration that we can edit at runtime. When rendering a page, the server fetches that configuration to know whether it should pre-render the markup before sending it to the client. If server-side rendering is disabled to reduce CPU load, the server basically renders an empty <body> element alongside the JavaScript bundle so that everything happens in the browser only.
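A minimal sketch of what such a toggle could look like, assuming a hypothetical `fetchRuntimeConfig` helper and an illustrative HTML shell:

```ts
import { createElement } from "react";
import { renderToString } from "react-dom/server";
import App from "./App"; // hypothetical root component

// Assumed helper that reads the runtime configuration mentioned above.
declare function fetchRuntimeConfig(): Promise<{ ssrEnabled: boolean }>;

export async function renderPage(): Promise<string> {
  const { ssrEnabled } = await fetchRuntimeConfig();
  // With SSR disabled, ship an empty shell and let the bundle render everything.
  const markup = ssrEnabled ? renderToString(createElement(App)) : "";
  return `<!doctype html><html><body><div id="root">${markup}</div><script src="/bundle.js"></script></body></html>`;
}
```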

Protecting against impersonation

Lack of availability is obviously not the only risk we have to face. Many attacks — especially ones targeting banks — aim to impersonate users in order to steal their data. These attacks are usually far more creative, and their feasibility and impact depend very much on the system they target. At the end of the day, all defense mechanisms boil down to ensuring trust. Is the person performing an action really who they claim to be? It turns out this is a harder question than expected.

Locks on the cookie jar

When using N26 for Web, the user’s session (such as their key to communicate with the API) is stored in cookies. There is technically no concept of sessions per se on our web server, which means we are not vulnerable to session fixation attacks.

All sensitive cookies are defined in a way that they cannot be tampered with in the browser; they do not show up in `document.cookie` and cannot be altered. This is important to make sure no one can steal these cookies through a hypothetical remote JavaScript execution. These cookies are also signed, which provides a little extra protection against “man-in-the-middle” attacks. Should a server in between be able to intercept our requests, it could theoretically tamper with their cookies. Signing makes this impossible by ensuring the values match the signature produced with our server’s secret signing key.
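In an Express application, this combination of flags could look like the following sketch. The cookie name and value are illustrative, and signing assumes cookie-parser is configured with a secret.

```ts
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
// The secret is what cookie-parser uses to sign and verify cookies.
app.use(cookieParser(process.env.COOKIE_SECRET));

app.post("/login", (_req, res) => {
  res.cookie("session", "opaque-session-token", {
    httpOnly: true, // never exposed via document.cookie
    secure: true, // only ever sent over HTTPS
    sameSite: "strict",
    signed: true, // tampering invalidates the cookie
  });
  res.sendStatus(204);
});
```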

CSRFing the security wave

Cross-Site Request Forgery has been around for basically as long as the web. The concept is simple: a CSRF attack consists of performing an operation on behalf of someone else by taking advantage of authentication cookies. What’s tricky to understand is that an attacker does not actually need to be able to read or modify cookies in order to use them to their advantage.

If you sign into any website, you usually get an authentication cookie that is restricted to a specific domain. This cookie — which is sent along with requests — is what authenticates and identifies you to the server. Now, someone on another website could create a form posting to this API without you knowing, and because cookies are forwarded with the request, you’d technically be authenticated despite having originally started on another website.

In our case, it means that if you authenticated on N26 for Web and then visited a malicious website, it could trick you into submitting a form that actually hits the N26 API, therefore performing actions on your behalf without you even realising. While that would be quite difficult to actually exploit, it would still be a problem. We prevent that.

The mitigation against such attacks always looks the same: the server generates a pair of unique, associated keys. One key is set in an HTTP-only cookie (neither readable nor tamperable with JavaScript), and one is sent to the document to be used in forms or sent as a header with XHR requests. When receiving requests, the server makes sure both values are part of the same pair. This works because one of the values (the one in the secure cookie) remains immutable.
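Here is a sketch of that token pair in Express-style code. The cookie name, header name, and the use of `crypto.randomBytes` are illustrative choices, not necessarily what N26 uses.

```ts
import crypto from "node:crypto";
import type { Request, Response, NextFunction } from "express";

// On page render: issue a token in an HttpOnly cookie and expose the same
// value to the document (e.g. a hidden form field or meta tag).
export function issueCsrfToken(_req: Request, res: Response): string {
  const token = crypto.randomBytes(32).toString("hex");
  res.cookie("csrf", token, { httpOnly: true, secure: true, sameSite: "strict" });
  return token; // embedded in the page, later sent back as a header
}

// On mutating requests: both values must match.
// Assumes cookie-parser is installed, which populates req.cookies.
export function verifyCsrf(req: Request, res: Response, next: NextFunction) {
  const fromCookie = req.cookies?.csrf;
  const fromHeader = req.get("x-csrf-token");
  if (!fromCookie || fromCookie !== fromHeader) {
    return res.status(403).send("CSRF token mismatch");
  }
  next();
}
```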

Encryption within the browser

So far, one could say everything is just Security 101. Performing encryption on the client before sending requests was quite a first for me though. In fact, when our security team suggested it, I didn’t fully understand what they meant, and whether it was even realistically achievable in the first place. It turns out it is, although it is certainly a challenge.

In practice, front-end encryption works like this: on start, the server generates two keys, a public one which makes its way to the client in a cookie, and a private one which stays on the server. In the browser, the public key is used to encrypt a certain payload before sending it to the server via an XHR request. On the server, upon receiving the request, the payload is decrypted using the private key. It’s important that the private key remains a secret and never gets leaked, as it is the only way to decrypt the data.
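On the browser side, the standard Web Crypto API can perform this kind of asymmetric encryption. The sketch below assumes the public key arrives as SPKI/DER bytes and uses RSA-OAEP; the actual algorithm and key transport at N26 may differ.

```ts
// Encrypt a payload in the browser with a server-provided public key.
async function encryptPayload(
  publicKeyDer: ArrayBuffer,
  payload: string,
): Promise<ArrayBuffer> {
  // Import the server's public key (assumed to arrive as SPKI/DER bytes).
  const publicKey = await crypto.subtle.importKey(
    "spki",
    publicKeyDer,
    { name: "RSA-OAEP", hash: "SHA-256" },
    false,
    ["encrypt"],
  );

  // Only the server's private key can decrypt the result.
  return crypto.subtle.encrypt(
    { name: "RSA-OAEP" },
    publicKey,
    new TextEncoder().encode(payload),
  );
}
```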

From afar, it looks like it does the same job as TLS (the technology behind HTTPS): it encrypts the data between the client and the server. That’s true for the most part. However, while TLS is an absolute wonder, it can fail in unlikely and extreme circumstances. Remember the Heartbleed vulnerability? More importantly, front-end encryption ensures that only the service that needs to access a request’s payload can access it; any services in between cannot.

For very sensitive information, such as the secret certification PIN, we perform a similar operation before hitting the backend API. The client sends a request encrypted with the public key to our GraphQL layer, which we decrypt on the server. Then, with a different set of keys and a slightly different mechanism, we encrypt the sensitive information and pass it in a header towards the backend API. The latter can decrypt it with the private key on that server.

The idea is to have TLS protection during transfers between parties (client to server, server to server), and still have encrypted data (payloads and sensitive headers) when TLS ends (“TLS termination”).

Headers in the cloud

Over the last few years, many standards have been introduced to improve web security, many of which rely on HTTP headers to make transactions between clients and servers more reliable.

With the X-Content-Type-Options header, we prevent some browsers from trying to guess the type of a resource. This reduces the risk of unsolicited downloads and of malicious user-uploaded content being interpreted as a different content type (so-called “MIME sniffing” attacks).

The X-Frame-Options header protects us against “clickjacking” attacks. Such an attack consists of loading the target website in an iframe, then layering invisible elements on top of it to capture clicks, keystrokes and other sorts of information. This header makes it impossible to embed any of our websites within an iframe.

The web has a long history of providing insecure content, even though certificates are now cheap and easy to set up. Fortunately, it is possible to instruct browsers to force content to be served over HTTPS with the Strict-Transport-Security header (HSTS for short). You can read more about how we use HSTS at N26.
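Put together, these headers could be set by a tiny middleware like the sketch below; the exact values here are illustrative, not N26’s actual configuration.

```ts
import type { Request, Response, NextFunction } from "express";

export function securityHeaders(_req: Request, res: Response, next: NextFunction) {
  // Forbid MIME sniffing: resources are only ever treated as their declared type.
  res.setHeader("X-Content-Type-Options", "nosniff");
  // Forbid embedding our pages in iframes to prevent clickjacking.
  res.setHeader("X-Frame-Options", "DENY");
  // Tell browsers to only ever talk to us over HTTPS (max-age assumed).
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  next();
}
```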

Protecting against XSS

We’ve been hearing about XSS (cross-site scripting) for basically forever. It’s an umbrella term that encompasses any sort of malicious client-side code execution (usually JavaScript). The idea is to find a way to execute malicious code on a website, whether to steal information, trick users into performing actions, redirect to malicious websites, and more. As a project gets bigger, the attack surface grows with it, and preventing all attack vectors of this type at scale has proven to be a challenge.

For once, browsers can help us mitigate XSS. All our domains enable their built-in protection mechanisms with the X-XSS-Protection header.
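That header is a one-liner; it could slot into the same kind of middleware sketched earlier (the value below is the common “block rendering on detection” setting):

```ts
import type { Request, Response, NextFunction } from "express";

export function xssProtection(_req: Request, res: Response, next: NextFunction) {
  // Ask the browser to stop rendering the page if a reflected XSS is detected.
  res.setHeader("X-XSS-Protection", "1; mode=block");
  next();
}
```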

Moreover, React does the heavy lifting for us. Not only does it escape values embedded in JSX, it also prevents spoofed React elements from being rendered. On top of that, we have a strict no-`dangerouslySetInnerHTML` policy and make sure our data hydration from server to client is not subject to scripting attacks either. We also made our link component ensure the URL’s protocol is safe to render (therefore preventing `javascript:` execution or similar shenanigans). Similarly, all redirects go through a sanitiser to ensure they look legit and do not leave our domain.
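Such a protocol check can be as simple as an allow-list. A minimal sketch, with an assumed base origin and an illustrative set of safe protocols:

```ts
const SAFE_PROTOCOLS = new Set(["http:", "https:", "mailto:"]);

export function isSafeHref(href: string): boolean {
  try {
    // Resolve against a base origin so relative URLs are accepted too.
    const url = new URL(href, "https://n26.com");
    // Rejects javascript:, data:, and other dangerous schemes.
    return SAFE_PROTOCOLS.has(url.protocol);
  } catch {
    // Unparsable URLs are treated as unsafe.
    return false;
  }
}
```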

Enforcing rules with CSP

The interesting thing about XSS attacks is that a mechanism has been designed and widely implemented to prevent them once and for all: Content Security Policies (CSP for short). A CSP is basically a contract between the server and the browser, in which the former instructs the latter on what can be safely loaded. It covers all types of resources — from images, CSS and scripts to embedding/embedded documents, fonts and XHR requests. For each type, it defines a list of allowed domains. The browser will decline requests for resources from any domain that is not explicitly allowed.
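A policy is a single header listing one directive per resource type. The sketch below shows a deliberately strict policy with illustrative domains; it is not N26’s actual policy.

```ts
import type { Request, Response, NextFunction } from "express";

export function contentSecurityPolicy(_req: Request, res: Response, next: NextFunction) {
  res.setHeader(
    "Content-Security-Policy",
    [
      "default-src 'self'", // safe fallback for every resource type
      "script-src 'self' https://static.example.com", // illustrative script origin
      "img-src 'self' data:",
      "connect-src 'self' https://api.example.com", // XHR/fetch targets
      "frame-ancestors 'none'", // complements X-Frame-Options
    ].join("; "),
  );
  next();
}
```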

In practice however, it is quite challenging to implement a strict security policy, especially when many actors and third parties are involved in a project’s lifecycle. Take the monstrosity that is Google Tag Manager, for instance. It basically gives marketing teams a backdoor into their website’s internals by allowing resource loading and script execution. Unfortunately, GTM and similar systems are usually necessary in large companies to collect data and drive business. Authoring a secure policy around such a system is tricky, because it fundamentally relies on loading a bunch of scripts and images from a bunch of third parties in order to work. We are still tweaking our content security policy on a regular basis and are working towards making it as strict as possible in order to prevent all sorts of XSS injections.

Along the same lines, we recently introduced a feature policy, which is a similar concept but for device features such as vibration, geolocation, notifications, camera, and so on. This header ensures we limit access to the customer’s device capabilities to the bare minimum required.
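As a sketch, such a header could look like this (the directives chosen here are illustrative):

```ts
import type { Request, Response, NextFunction } from "express";

export function featurePolicy(_req: Request, res: Response, next: NextFunction) {
  // Deny device features the application never needs.
  res.setHeader(
    "Feature-Policy",
    "geolocation 'none'; camera 'none'; microphone 'none'; vibrate 'none'",
  );
  next();
}
```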

Protecting against ourselves

There are a lot of things we do to make sure the code running on our servers and in our customers’ browsers is safe, yet a lot of risks live entirely outside of this scope. One way for us to account for them is to embed security into our daily work, and stop treating it as a fire-fighting skill or an afterthought.

In the last few years, we started performing regular threat modeling sessions for all projects in which we determine risks, mitigations and trust boundaries. What happens if a dependency gets compromised? What happens if an employee goes rogue? What happens if a service gets shut down? While unlikely, all scenarios are technically possible.

Sometimes, our investigation translates into technical implementation. For instance, one way we minimise the risk of our npm dependencies serving as an attack vector is by auditing them when building our project. It certainly won’t catch everything, but it definitely can prevent some bizarre attack scenarios. If we find a vulnerability in a dependency, our build gets immediately aborted and we have to resolve it.
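The audit step can be as simple as letting `npm audit` fail the build, since it exits with a non-zero code when vulnerabilities are found. A sketch of such a build script (the severity threshold is an assumed choice):

```ts
import { execSync } from "node:child_process";

try {
  // Fails (non-zero exit) when vulnerabilities of high severity or above are found.
  execSync("npm audit --audit-level=high", { stdio: "inherit" });
} catch {
  console.error("Vulnerable dependency found: aborting build.");
  process.exit(1);
}
```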

Similarly, after building our project with Webpack, we audit all our JavaScript bundles for private keys and secret variables. If a bundle contains something that should not make its way to the client, the build gets aborted and we need to fix the problem.
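A sketch of such a check, assuming the bundles land in a `dist` folder and the secrets are known via environment variables (both assumptions for illustration):

```ts
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Values that must never appear in client-side code (illustrative names).
const FORBIDDEN = [process.env.PRIVATE_KEY, process.env.SIGNING_SECRET].filter(
  (value): value is string => Boolean(value),
);

for (const file of readdirSync("dist")) {
  if (!file.endsWith(".js")) continue;
  const contents = readFileSync(join("dist", file), "utf8");
  for (const secret of FORBIDDEN) {
    if (contents.includes(secret)) {
      console.error(`Secret found in ${file}: aborting build.`);
      process.exit(1);
    }
  }
}
```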

Wrapping up

Security on the web is not easy. It takes a lot of time, it takes a lot of effort and more than anything else, it requires a company to shift from battling security issues to proactively preventing them. N26 has gone through this transition over the last few years, and has thus become one of the most secure web banking platforms.

We are always looking for skilled people to join our security team. If you’re into puzzles, you can give our security challenge a go. If the prospect of joining us is not sparking joy in you, but you’d like to make a few bucks, we are on HackerOne, ready to receive your security-related bug reports.

If you would like to read more about security at N26, may I recommend our article on social engineering and our InsideN26 blog.

