Mitigating OWASP Top 10 API Security Threats with an API Gateway — Part 2

Broken User Authentication, Excessive Data Exposure, Lack of Resources and Rate Limiting

Sanjula Madurapperuma
API Integration Essentials
9 min read · Apr 21, 2020


Figure 1 — API calls account for around 83% of all web traffic

In Part 1 of this series, you learned about what OWASP is, why we need to worry about the OWASP Top 10, what an API Gateway is, and detailed analysis about the first threat defined in the OWASP Top 10 list: Broken Object Level Authorization (BOLA).

This article will continue from there and go on to describe 3 more threats defined in the OWASP Top 10 list: Broken User Authentication, Excessive Data Exposure and Lack of Resources and Rate Limiting.

If you haven’t read Part 1 of this series yet, please do so here.

2 — Broken User Authentication

Authentication is a critical part of any application, but even seemingly solid authentication mechanisms can fall short in basic credential management functions, including password changes, account profile updates, and other related operations.

Authentication vulnerabilities may exist if an application lacks proper protection mechanisms (i.e., API endpoints that handle authentication must be protected with extra layers of security compared to regular API endpoints) or if the authentication mechanism is implemented incorrectly (i.e., it is used without considering the possible attack vectors, or it was built for the wrong use case).

The danger of such vulnerabilities is that attackers can gain access to users’ accounts and data, perform potentially harmful actions such as placing orders or withdrawing money, or use account privileges to gain access to confidential information in a system.

One example of an application with broken or weak authentication mechanisms is one where the client’s username and password are passed as parameters in the URL. This means the user credentials are stored in the browser’s history even after the user has signed out. Should an attacker come across this client’s browser history, they would be able to gain access to the application via the exposed credentials. This scenario is illustrated diagrammatically below.

Figure 2 — An attacker exploiting a client by replaying a request from the browser’s history
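As a minimal sketch of why this is dangerous (the URL, username, and password here are purely hypothetical), compare passing credentials in the query string with passing them in a request header:

```python
import base64
from urllib.parse import urlencode

# Hypothetical login URL: credentials sent as query parameters become part of
# the URL, so they end up in browser history, server logs, and proxy logs.
unsafe_url = "https://shop.example.com/login?" + urlencode(
    {"username": "alice", "password": "s3cret"}
)

# Safer: send credentials in an Authorization header (or the request body),
# neither of which is recorded as part of the URL.
token = base64.b64encode(b"alice:s3cret").decode()
safe_headers = {"Authorization": f"Basic {token}"}
```

Even header-based credentials must, of course, only be sent over TLS.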

Another example is an application that allows attackers to perform credential stuffing attacks without being detected. Credential stuffing involves injecting large numbers of breached username/password pairs until one matches an existing account in the system, which the attacker can then use to perform potentially destructive activities. Credential stuffing attacks have risen dramatically over the last year after a collection of credentials from prior breaches (known as Collections #1–5) was released in plain-text format for free. To put the scale of the breach into context, the first release contained around 773 million unique emails and 21 million passwords, while the other four collections held around 2.2 billion emails, which highlights the importance of preventing credential stuffing attacks.

Another important authentication vulnerability is the lack of proper session timeout mechanisms, which increases the risk of attackers breaking into an application through a user’s exposed active session. This can be done via session sniffing, client-side attacks such as XSS, malicious JavaScript, and Trojans, or man-in-the-middle and man-in-the-browser attacks.
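A minimal sketch of an idle-session timeout check; the function name and the 15-minute limit are illustrative assumptions, not values from any particular gateway:

```python
import time
from typing import Optional

# Invalidate sessions that have been idle for 15 minutes (illustrative limit).
IDLE_TIMEOUT_SECONDS = 15 * 60

def is_session_valid(last_activity: float, now: Optional[float] = None) -> bool:
    """Return False once a session has been idle longer than the timeout."""
    now = time.time() if now is None else now
    return (now - last_activity) <= IDLE_TIMEOUT_SECONDS
```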

How can this be mitigated?

To be safe against exploitation of Broken User Authentication vulnerabilities, it is recommended to bring in an intermediary such as a Gateway between the client and the back-end. This effectively thwarts targeted attacks by adding a second layer of security between the client and the back-end.

Take, for example, a simple e-commerce application where the client and back-end interact directly with each other, and a user’s username and password are used to authenticate a request to the /order endpoint, as shown below (Figure 3).

Figure 3 — Client interacting with back-end directly

This means that the username and password required for basic authentication against the back-end are stored on the client. Given that there are many clients for a particular back-end, having credentials saved on those clients poses a severe security vulnerability.

What makes this even more evident is that if one of those clients is hacked or hijacked, as shown below (Figure 4), the credentials saved on it are also exposed. This in turn means the back-end remains easily penetrable until the attack on the client is identified and the necessary measures are taken to revoke the credentials.

Figure 4 — Attacker using credentials from a hacked client to exploit application

A good solution to this problem is to place a Gateway between the clients and the back-end. Let’s take a look at how this would work.

Figure 5 — Using a Gateway to add another layer of security

As shown in the diagram above, the client first sends a POST request to the /order endpoint with an OAuth2 token provided by the Gateway. If the provided OAuth2 token is valid, the username and password credentials stored within the Gateway are retrieved and the intended request is passed on to the back-end with a basic authentication header. The back-end then processes the request and returns an API response, which the Gateway passes on to the client. This way, the back-end credentials are never exposed to clients, as they are stored only on the Gateway.
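The gateway step just described could be sketched roughly as follows; the token store, credentials, and function name are hypothetical placeholders for what a real Gateway would manage internally:

```python
import base64

VALID_TOKENS = {"token-abc"}                      # OAuth2 tokens issued by the Gateway
BACKEND_CREDENTIALS = ("backend_user", "s3cret")  # stored only on the Gateway

def forward_order_request(oauth2_token: str, body: dict) -> dict:
    """Validate the client's OAuth2 token, then forward with Basic auth."""
    if oauth2_token not in VALID_TOKENS:
        return {"status": 401, "error": "invalid token"}
    user, password = BACKEND_CREDENTIALS
    basic = base64.b64encode(f"{user}:{password}".encode()).decode()
    # In a real gateway, this header goes on the outbound HTTP call to the
    # back-end /order endpoint; the client never sees the Basic credentials.
    outbound_headers = {"Authorization": f"Basic {basic}"}
    return {"status": 200, "headers": outbound_headers, "body": body}
```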

3 — Excessive Data Exposure

Excessive data exposure is more prevalent in modern API-based applications, since APIs return vast amounts of data to the client for filtering. This allows attackers to sniff API traffic and analyze the responses for sensitive information.

Many APIs rely on clients to perform data filtering, because developers tend to implement endpoints in a generic manner while disregarding the sensitivity of the data being transmitted. As a result, irrelevant but sensitive data may also be uncovered. This type of vulnerability is difficult to mitigate with automated tools, as it is hard to distinguish between sensitive data and data the API legitimately needs to return.

A scenario of exploitation of this vulnerability would be when an attacker identifies the API endpoint that is used as a data source for a comments section of an application:
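For illustration, such a back-end might serialize entire User objects alongside each comment; the endpoint shape, fields, and values below are hypothetical:

```python
# Hypothetical response from a /comments endpoint: the whole author object is
# returned, leaking the password and address along with the comment text.
response = {
    "comments": [
        {
            "text": "Great article!",
            "author": {
                "name": "alice",
                "email": "alice@example.com",
                "password": "s3cret",            # should never leave the server
                "address": "221B Baker Street",  # irrelevant to this endpoint
            },
        }
    ]
}
```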

This also returns sensitive data about the authors of the comments, such as their passwords and addresses. Such a situation can occur if the endpoint implementation returns the User objects containing the comments to be filtered on the client-side instead of just returning the comments as demonstrated diagrammatically below.

Figure 6 — Attacker exploiting the excessive data exposure vulnerability

How can this be mitigated?

An effective way to mitigate this threat is to enforce message mediation policies at the API Gateway level to filter any data that is being returned from an API call, which will ensure that unnecessary, sensitive data will not be exposed to the client.

These mediation policies will contain rules or references to code that should be executed while having access to all parts of a message being passed through the Gateway.

Fully-fledged API Management solutions would allow users to configure policies for incoming, outgoing and fault messages being passed through the Gateway.

Let’s take an example of an outgoing message mediation policy (also known as a sequence) and see how it would work.

Figure 7 — Using mediation policies to overcome excessive data exposure

First, the client intends to send a GET request to the /comments endpoint. This is processed by the Gateway and the back-end.

However, the back-end returns a response containing sensitive information about the author of a comment (email, password, and address) in addition to information about the comment itself. This poses a high security risk, as the sensitive information could be leaked if the client happens to be compromised.

This can be eliminated by configuring a mediation policy at the Gateway level, which filters the response returned from the back-end according to predefined rules and returns the filtered data to the client. In this case, all the sensitive data about the author of a comment is removed before the response is sent to the client, ensuring that sensitive or excess data never leaves the Gateway.
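A gateway-side filter of this kind could be sketched as follows; the sensitive field names are illustrative assumptions, not a specific gateway's policy language:

```python
# Sketch of the outbound mediation step: strip sensitive author fields from
# the back-end response before it reaches the client.
SENSITIVE_FIELDS = {"email", "password", "address"}

def mediate_comments_response(backend_response: dict) -> dict:
    """Return a copy of the response with sensitive author fields removed."""
    filtered = []
    for comment in backend_response.get("comments", []):
        author = comment.get("author", {})
        safe_author = {k: v for k, v in author.items() if k not in SENSITIVE_FIELDS}
        filtered.append({**comment, "author": safe_author})
    return {"comments": filtered}
```

Real API management products express such policies declaratively (e.g., as mediation sequences) rather than as hand-written code, but the effect is the same.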

Another way to get rid of this vulnerability is to have the Gateway perform certain processing when it is not feasible to change the back-end implementation.

For example, assume there is an application that must know whether a user is over 18 years of age. In the current implementation (shown below in Figure 8), the client sends a GET request to the /age endpoint for a particular user and then runs a client-side process to check whether the user is over 18 based on the age returned by the back-end. This method is inefficient and also leaks unnecessary data from the back-end, since all the application actually needs is a boolean value indicating whether the user is over 18.

Figure 8 — Exposing unnecessary data is both inefficient and dangerous

This can be mitigated by placing a Gateway between the client and the back-end (shown below in Figure 9) in cases where it is not feasible to change the back-end implementation. The client requests the boolean value, while the Gateway calls the /age endpoint, performs the processing required to calculate the boolean value, and returns that value to the client.

Figure 9 — Using a Gateway to run certain processes instead of clients

Offloading such processing to the Gateway instead of the client means there is minimal exposure of unnecessary data.
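The gateway-side transformation could be sketched as follows, with fetch_age standing in for the real back-end call to the /age endpoint (the function names and sample ages are hypothetical):

```python
def fetch_age(user_id: str) -> int:
    # Placeholder for the Gateway's outbound GET to the back-end /age endpoint.
    return {"u1": 25, "u2": 16}.get(user_id, 0)

def is_over_18(user_id: str) -> bool:
    """Gateway-side check: only a boolean reaches the client, never the age."""
    # Treating 18 and above as adult; the exact boundary is a policy choice.
    return fetch_age(user_id) >= 18
```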

4 — Lack of Resources and Rate Limiting

API requests consume finite resources such as network, memory, CPU, and storage. Therefore, if few or no limits are imposed on the usage of such resources, APIs become vulnerable to attacks such as denial of service, which in turn lead to endpoint outages.

A simple scenario would be when an attacker exploits vulnerable queries that retrieve data from the server. Let’s say an application whose UI has a limit of 200 users per page uses a query such as the following:
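A hypothetical query of this kind, with a client-controlled size parameter, might look like the first line below; a simple gateway- or server-side defence is to clamp the requested page size:

```python
# Hypothetical paginated request: the client fully controls "size".
query = "/api/users?page=1&size=200"

MAX_PAGE_SIZE = 200  # matches the UI's per-page limit

def clamp_page_size(requested: int) -> int:
    """Never let a single request fetch more rows than the UI limit."""
    return max(1, min(requested, MAX_PAGE_SIZE))
```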

If the attacker changes the size parameter to 200,000, it can cause performance issues on the database and make the API totally unresponsive, crashing the application, especially if the hosting servers cannot handle such a large request at once.

How can this be mitigated?

It is recommended that rate-limiting (or throttling) is properly enforced on a Gateway to prevent the misuse of precious resources required for API services to run smoothly. Throttling allows API Developers to limit the number of successful hits to a particular API during a given time period.

Let’s take a look at how enforcing throttling policies on a Gateway works.

Figure 10 — Throttling policy at work

As shown above, there are four clients, each looking to execute 50 requests per minute. However, a throttling policy has been enforced on the Gateway that limits the number of incoming requests to 150 per minute. Given that Clients 1, 2, and 3 execute their requests simultaneously, Client 4 will experience failing requests within the same minute due to the limit enforced by the throttling policy. This protective measure, in turn, ensures that valuable back-end resources such as CPU and memory are not misused.
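The scenario above can be sketched with a minimal fixed-window throttle; real gateways use more sophisticated algorithms (sliding windows, token buckets), so this is only an illustration:

```python
class FixedWindowThrottle:
    """Allow at most `limit` requests per fixed time window."""

    def __init__(self, limit: int, window_seconds: int = 60):
        self.limit = limit
        self.window = window_seconds
        self.window_start = 0.0
        self.count = 0

    def allow(self, now: float) -> bool:
        if now - self.window_start >= self.window:
            self.window_start = now  # start a new window
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # request is throttled

# 200 simultaneous requests against a 150-per-minute limit: the first 150
# succeed, the remaining 50 are rejected until the next window opens.
throttle = FixedWindowThrottle(limit=150)
results = [throttle.allow(now=0.0) for _ in range(200)]
```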

In this article, you went through an analysis of the second, third and fourth threats on the OWASP Top 10 list and how you can easily mitigate them with some precautionary action and help from an API Gateway.

If you want to know more about the first 5 OWASP Threats, then watch this video:

Stay tuned for Part 3 of Mitigating OWASP Top 10 API Security Threats with an API Gateway, where you will learn about more threats on the list and how to mitigate them using an API Gateway!

Update: You can now read Part 3 over here.

