An Overview on API Security

Chathura Ekanayake
WSO2 Solution Architecture Team Blog
17 min read · Oct 13, 2020


APIs are the entry point for accessing an organization’s functions and data. However, exposing an API to unintended parties can cause considerable damage to an organization’s digital assets and could result in the leakage of sensitive information. Therefore, security aspects related to APIs are a primary concern when implementing a digital transformation project.

We considered API authentication, which is also related to API security, in a previous article. This article looks at other important aspects of API security and possible methods of implementing them.

Access control in API invocations

First, let’s consider access control in APIs, which ensures that only intended parties can access APIs. The de facto standard for API access control is OAuth 2.0. Figure 1 highlights the two main activities in an OAuth based API invocation. In simple terms, an application has to obtain a token from a valid identity provider (IDP) and send it to the API gateway with each API invocation. Therefore, many access control scenarios can be built around this token. Note that IDP capabilities can be built into the API control plane (as shown in figure 1), or a separate IDP can be used with the API deployment.

Figure 1: Simplified view of token based API access control

A common method of implementing access control is based on scopes. Any number of scopes can be associated with an OAuth token. For example, if we consider a warehouse management system, relevant scopes may be list_items and order_item. We can then have a warehouse API with the following two methods:

  • GET hmart.com/warehouse/items — required scope: list_items
  • POST hmart.com/warehouse/orders — required scope: order_item

Now there can be token_1 which has only the list_items scope and token_2 which has both list_items and order_item scopes. Therefore, an application can only invoke the hmart.com/warehouse/items method with token_1. However, an application with token_2 can invoke both API methods.
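
To make the scope check concrete, the following is a minimal sketch of how a gateway could map API methods to required scopes; the route table and helper function are illustrative and not tied to any particular gateway product.

```python
# A minimal sketch of scope-based access control at an API gateway.
# The route-to-scope mapping and helper names are illustrative.

REQUIRED_SCOPES = {
    ("GET", "/warehouse/items"): "list_items",
    ("POST", "/warehouse/orders"): "order_item",
}

def is_authorized(http_method: str, path: str, token_scopes: set[str]) -> bool:
    """Allow the call only if the token carries the scope bound to this route."""
    required = REQUIRED_SCOPES.get((http_method, path))
    if required is None:
        return False  # unknown route: deny by default
    return required in token_scopes

# token_1 carries only list_items, token_2 carries both scopes
print(is_authorized("GET", "/warehouse/items", {"list_items"}))                   # True
print(is_authorized("POST", "/warehouse/orders", {"list_items"}))                 # False
print(is_authorized("POST", "/warehouse/orders", {"list_items", "order_item"}))   # True
```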

Now we can focus on how an application can obtain a token. This procedure is described in the OAuth 2.0 specification under grant types. Two useful grant types are the authorization code grant type and the client credentials grant type. When the authorization code grant is used, applications have to provide application credentials as well as user credentials to obtain a token. When the client credentials grant is used, a token can be obtained by providing only application credentials. In both cases, required scopes can be specified with the request.
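
As a rough illustration of the client credentials grant, the sketch below requests a token using only application credentials. It assumes the Python `requests` library; the token endpoint URL and credentials are placeholders, while the `grant_type` and `scope` parameters follow the OAuth 2.0 specification.

```python
# A minimal sketch of obtaining a token with the client credentials grant.
# The token endpoint and client credentials are hypothetical placeholders.
import requests

TOKEN_ENDPOINT = "https://idp.hmart.com/oauth2/token"  # hypothetical IDP endpoint

def get_client_credentials_token(client_id: str, client_secret: str, scopes: list[str]) -> str:
    response = requests.post(
        TOKEN_ENDPOINT,
        data={"grant_type": "client_credentials", "scope": " ".join(scopes)},
        auth=(client_id, client_secret),  # HTTP Basic auth with the application credentials
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# token = get_client_credentials_token("my_app_id", "my_app_secret", ["list_items", "order_item"])
```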

With an understanding of tokens, scopes and grant types, we can now consider how tokens can be used for API access control. Token based access control can be enforced at the following two stages: (1) token issuance and (2) API invocation.

Access control during token issuance

In a basic scenario, we can have role-based access control by restricting scopes to certain roles. For example, we can have an access control policy stating that both the list_items and order_item scopes are allowed for the warehouse_admin role, while only the list_items scope is allowed for the warehouse_staff role. We can have more advanced access control policies as well. As an example, we can enforce the following three conditions in order to issue a token with the order_item scope: (1) the user is in the warehouse_admin role, (2) the user works at the HMart head office and (3) the source IP address belongs to Australia. XACML can be used to implement such advanced authorization policies.
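
In practice such rules would be expressed in XACML and evaluated by a policy engine, but the following simplified Python stand-in shows the kind of attribute based decision involved; the attribute names are illustrative.

```python
# A simplified stand-in for an attribute-based (XACML-style) policy deciding
# whether a token with the order_item scope may be issued.

def may_issue_order_item_scope(user_roles: set[str], office: str, source_country: str) -> bool:
    return (
        "warehouse_admin" in user_roles
        and office == "HMart head office"
        and source_country == "Australia"
    )

print(may_issue_order_item_scope({"warehouse_admin"}, "HMart head office", "Australia"))  # True
print(may_issue_order_item_scope({"warehouse_staff"}, "HMart head office", "Australia"))  # False
```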

As it is possible to extend standard grants or introduce new grants, many access control scenarios can be supported during the step of obtaining tokens. For example, we can introduce approval workflows for issuing tokens so that a token request has to be approved by certain users before the IDP sends the token to the application. In this case, usually the token has to be sent to the application via a separate channel due to the long running nature of workflows. Figure 2 shows the use of access control policies and approval workflows for issuing a token.

Figure 2: Attribute based and workflow based authorizations during token issuance

Another possibility is that API consumers can request scopes in advance, prior to sending the actual token request. In that case, the scope request can be processed by the IDP based on some authorization policies or an approval workflow. In either case, if the scope request is granted, the IDP can maintain a mapping stating that the given scope is allowed for the corresponding user. Then, when an application requests a token with that scope on behalf of a user, the IDP can look up the mapping and decide whether to issue a token with the requested scope.

Access control during API invocation

As mentioned earlier, application credentials and user credentials (where applicable) usually have to be provided to obtain a token. Therefore, an application and a user account can be associated with each token. When a token is sent to the API gateway during an API invocation, the gateway can authorize the request with the help of the IDP and other relevant components. This authorization step can enforce many access control policies that are not possible during the token issuance stage.

In the simplest case, the gateway can check whether the token is valid (e.g. based on its signature), whether the token has expired and whether the token has the necessary scopes to invoke the API method. However, it is also possible to enforce more complex policies that depend on runtime data. For example, there can be a policy stating that a certain API method is allowed only for the IP range x-y and only during office hours. Similar to the token issuance stage, such policies are usually implemented in XACML, and the API gateway has to contact a XACML engine with the relevant details for each API request to perform the authorization.
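
As an illustration of the basic checks, the following sketch validates a token at the gateway, assuming the token is a self-contained JWT carrying a space separated scope claim and that the PyJWT library is available; opaque tokens would instead be introspected against the IDP.

```python
# A minimal sketch of token validation at the gateway: verify the signature,
# reject expired tokens and check the required scope. Assumes a JWT access
# token with a space-separated "scope" claim, signed with RS256.
import jwt  # PyJWT

def validate_token(token: str, public_key: str, required_scope: str) -> bool:
    try:
        # Verifies the signature and rejects expired tokens (exp claim).
        claims = jwt.decode(token, public_key, algorithms=["RS256"])
    except jwt.InvalidTokenError:
        return False
    granted_scopes = claims.get("scope", "").split()
    return required_scope in granted_scopes
```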

Furthermore, it is possible to combine throttling capabilities to implement more advanced policies. As an example, it is possible to state that users working at the HMart regional office are allowed to invoke the warehouse/orders method only 10 times per hour. The API gateway, IDP and throttling component have to work together to enforce such policies. API throttling policies can also be used to enforce some level of protection against application layer denial of service (DoS) attacks, if the throttling component provides functionality to limit traffic bursts (e.g. 2000 requests per day are allowed, subject to a burst limit of 100 requests per second). Figure 3 shows the use of throttling policies and access control policies to secure APIs at the API invocation stage. Note that in this case a separate throttling component is used, while the policy evaluation engine is built into the API control plane.

Figure 3: Scope, request context and traffic volume based policy evaluations during API invocations
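
The sketch below illustrates the burst limit idea mentioned above by combining a daily quota with a per second limit. The in-memory counters are only for illustration; a real throttling component would use a shared, distributed store.

```python
# A minimal sketch of combining a long-term quota with a burst limit
# (e.g. 2000 requests per day, at most 100 requests per second).
import time
from collections import defaultdict

DAILY_LIMIT = 2000
BURST_LIMIT = 100  # requests per second

daily_counts = defaultdict(int)   # key: (app_id, day)
burst_counts = defaultdict(int)   # key: (app_id, second)

def allow_request(app_id: str) -> bool:
    now = time.time()
    day_key = (app_id, int(now // 86400))
    second_key = (app_id, int(now))
    if daily_counts[day_key] >= DAILY_LIMIT or burst_counts[second_key] >= BURST_LIMIT:
        return False  # throttled: quota or burst limit exceeded
    daily_counts[day_key] += 1
    burst_counts[second_key] += 1
    return True
```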

Another security requirement is to prevent attacks launched through web applications that use APIs. When a single-page application uses APIs, it may store API tokens in the browser’s local storage or session storage. If a user then opens a malicious web page (from another site), it may be able to access API tokens and invoke APIs. Furthermore, if an API gateway requires API tokens to be sent in HTTP cookies, a malicious web page opened in the same browser session can simply send a request to the target API. Another possibility is that a malicious web page opened in the same browser session performs an OAuth token grant flow with the IDP to obtain a valid token. The best method to prevent the above attacks is to enforce Cross Origin Resource Sharing (CORS) policies for APIs. CORS policies allow API developers to state which domain names, HTTP methods, HTTP headers, etc. are allowed for API invocations. For example, the HMart API may have a CORS policy stating that only hmart.com and abcstore.com are allowed to make API calls, so that web browsers will block requests if the attacker.com site tries to make an API call to the HMart API.
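
The following sketch shows the essence of such a CORS policy: only requests whose Origin header is in the allowlist receive the CORS response headers, so browsers block responses for pages served from other origins. The allowed origins come from the example above; the methods and headers are illustrative.

```python
# A minimal sketch of a CORS policy check at the gateway.
ALLOWED_ORIGINS = {"https://hmart.com", "https://abcstore.com"}
ALLOWED_METHODS = "GET, POST"
ALLOWED_HEADERS = "Authorization, Content-Type"

def cors_headers(origin: str) -> dict[str, str]:
    if origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers: the browser blocks the cross-origin response
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ALLOWED_METHODS,
        "Access-Control-Allow-Headers": ALLOWED_HEADERS,
    }

print(cors_headers("https://hmart.com"))     # allowed origin
print(cors_headers("https://attacker.com"))  # {} -> blocked by the browser
```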

Protecting messages

Another key aspect of API security is protecting the messages flowing through the API layer. As all interactions with an organization occur via the API layer, we can ensure that unintended parties don’t receive sensitive information and that intended parties receive correct information by enforcing message level policies.

TLS is the main method of achieving confidentiality and integrity at the transport level. By enabling TLS between client applications and the API layer, as well as between the API layer and backend services, we can guarantee that message content is not modified or exposed to unintended parties during transit.

However, there can be situations where we need more fine-grained content protection. As an example, patients’ history details should only be visible to their assigned doctors, while full name, email, etc. can be viewed by general hospital staff. In such situations, it should be possible to implement policies in the API layer so that a patient’s history details are removed from the response payload if the API call is not made by a relevant doctor. This type of selective removal of information from message payloads is also useful in protecting Personally Identifiable Information (PII) when complying with regulations such as GDPR. Selective exposure of information can also be supported by encrypting certain parts of messages so that only authorized applications can read those details.
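
A minimal sketch of such selective content removal is shown below; the field names, roles and record structure are purely illustrative.

```python
# A minimal sketch of removing sensitive fields from a response payload
# when the caller is not an assigned doctor.
SENSITIVE_FIELDS = {"history"}

def filter_patient_record(record: dict, caller_roles: set[str]) -> dict:
    if "assigned_doctor" in caller_roles:
        return record
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

record = {"name": "A. Perera", "email": "a@example.com", "history": ["visit 1", "visit 2"]}
print(filter_patient_record(record, {"hospital_staff"}))   # history removed
print(filter_patient_record(record, {"assigned_doctor"}))  # full record
```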

Another key point related to payload protection is validating payloads against defined policies. One such use case is ensuring that payloads contain all relevant fields in a certain format. For example, the warehouse item listing response must contain the item ID, unit count and unit price of each item. This type of message format validation is usually performed with XML schema or JSON schema validation. In addition to schema validation, the API layer can also protect payloads by blocking harmful content such as SQL injections, PHP injections, JavaScript injections, etc. Such protections can be implemented as a set of regular expression validations at the API gateway. Figure 4 illustrates example message level protections enforced at the API gateway.

Figure 4: Payload protection at the API gateway based on JSON schemas, content removal policies and TLS
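
To make the schema and injection checks concrete, the sketch below validates a warehouse item payload against a JSON schema and scans it with a couple of naive injection patterns. It assumes the Python `jsonschema` library; the schema and regular expressions are illustrative and far simpler than what a production gateway would apply.

```python
# A minimal sketch of payload validation: a JSON schema check for required
# fields plus a regular-expression scan for obviously harmful content.
import re
from jsonschema import validate, ValidationError

ITEM_SCHEMA = {
    "type": "object",
    "properties": {
        "itemId": {"type": "string"},
        "unitCount": {"type": "integer"},
        "unitPrice": {"type": "number"},
    },
    "required": ["itemId", "unitCount", "unitPrice"],
}

INJECTION_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),  # naive SQL injection signature
    re.compile(r"(?i)<script\b"),           # naive JavaScript injection signature
]

def is_payload_acceptable(item: dict) -> bool:
    try:
        validate(instance=item, schema=ITEM_SCHEMA)
    except ValidationError:
        return False
    text = str(item)
    return not any(p.search(text) for p in INJECTION_PATTERNS)

print(is_payload_acceptable({"itemId": "I-42", "unitCount": 5, "unitPrice": 9.5}))  # True
print(is_payload_acceptable({"itemId": "1 UNION SELECT *", "unitCount": 5, "unitPrice": 9.5}))  # False
```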

Furthermore, the API layer can also ensure that message content is not modified in transit by signing the payload with the API gateway’s private key. As API consuming applications as well as backend services trust the API gateway’s certificate, the gateway’s signature is sufficient in most cases to confirm the integrity of messages. If TLS is used, message integrity is ensured by the transport layer for the entire payload. However, if integrity validation is required only for certain sections of the payload, the API layer can sign selected payload sections identified using XPath or JSONPath expressions.

Security in backend services

Previous sections looked at security policies enforced at the API layer. However, backend services, which are exposed by the API layer, may also have various security mechanisms. For example, a backend service can itself be an API protected with OAuth. In that case, the API layer should act as an OAuth client and provide a valid token with each backend call. One approach would be to embed a permanent token within the API layer. However, if tokens have an expiry time, the API layer has to perform token refresh flows with the relevant IDP and renew the token when necessary (see figure 5). In a clustered deployment of API gateways, such backend tokens have to be made available to all gateway nodes in the cluster via a mechanism such as shared storage.

Figure 5: API layer accessing backend services secured by OAuth 2.0
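
The following sketch shows one way the API layer could cache a backend token and renew it before expiry using a client credentials flow, assuming the `requests` library; the token endpoint, credentials and in-process cache are placeholders (a clustered deployment would keep the token in shared storage instead).

```python
# A minimal sketch of caching and renewing a backend OAuth token in the API layer.
import time
import requests

BACKEND_TOKEN_ENDPOINT = "https://backend-idp.example.com/oauth2/token"  # hypothetical

_cached_token = None
_expires_at = 0.0

def get_backend_token(client_id: str, client_secret: str) -> str:
    global _cached_token, _expires_at
    if _cached_token is None or time.time() >= _expires_at - 60:  # renew 60s before expiry
        resp = requests.post(
            BACKEND_TOKEN_ENDPOINT,
            data={"grant_type": "client_credentials"},
            auth=(client_id, client_secret),
            timeout=10,
        )
        resp.raise_for_status()
        body = resp.json()
        _cached_token = body["access_token"]
        _expires_at = time.time() + body.get("expires_in", 3600)
    return _cached_token
```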

The API layer can perform access control based only on information derivable from the request, such as the API method, user details, source IP and timestamp. If additional access control decisions based on backend data are required, those policies have to be implemented within the backend services.

Analytics based security

The API layer is the central point for exposing all functions of an organization. Therefore, a large amount of information can be captured during the various API operations performed at the API layer. This information can be used to gain security insights and predict possible threats.

First, we can consider auditing aspects. Various user groups perform multiple operations through the API layer. API creators use the API layer to create and publish APIs. Administrative users may create different policies to be applied to APIs. Application developers may subscribe to APIs and generate keys. Management level users may approve certain operations related to APIs. The API layer can record all these operations with the involved users, timestamps and other relevant details to create audit logs. Whenever a security breach occurs, it is possible to track details such as who created the API, who approved it and which applications were using it based on these audit logs. Such audit logs can be written to files or databases, which can later be processed by built-in auditing components. Furthermore, it is also possible to push these audit logs to analytics systems such as ELK or Splunk.

Figure 6: Using API invocation data to summarize security related information and to trigger notification for security related events

In addition to the API operations mentioned above, another important piece of information we can capture at the API layer is API invocation details. The number of API invocations is much higher than that of other API operations, which usually necessitates separate and more scalable analytics components for capturing and processing these events. As the number of API invocations can be millions or billions per month, we are usually interested in summarizations and predictions based on these events. For example, we may observe that an application has been using IP range X during the past 3 months and it suddenly sends a request from IP address y, which is outside of X. In this case, security analytics components can detect the change in the usual pattern and block the request or send a notification to administrators. Similarly, if an API has been receiving 20 requests per minute during the past 6 months and suddenly starts to receive 1500 requests per minute, analytics components can notify administrators about the change in the usual pattern. This type of pattern based security scenario can be pre-defined within the analytics components (e.g. notify if the request count on an API increases 200% compared to the average request count during the past 6 months). It is also possible to integrate ML modules with API analytics so that they can learn API invocation patterns and take an action if an invocation occurs outside of the learnt patterns.
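
As a simple illustration of a pre-defined pattern check, the sketch below raises an alert when the current request rate exceeds the historical average by more than 200%; the notification hook is a placeholder for a real alerting integration.

```python
# A minimal sketch of a threshold-based anomaly check over summarized
# invocation data: alert when the current rate is more than 200% above average.
def check_traffic_anomaly(api_name: str, current_rpm: float, average_rpm: float) -> None:
    if average_rpm > 0 and current_rpm > 3 * average_rpm:  # more than a 200% increase
        notify_admins(f"{api_name}: request rate jumped from ~{average_rpm:.0f} to {current_rpm:.0f} req/min")

def notify_admins(message: str) -> None:
    print(f"[ALERT] {message}")  # placeholder for a mail/chat/incident integration

check_traffic_anomaly("warehouse", current_rpm=1500, average_rpm=20)
```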

Governing APIs

An organization may be involved with APIs in two ways. First, as an API provider, the organization may expose its functions as APIs to internal and external consumers. Second, as a consumer, various applications used within the organization may consume internal and external APIs. When considering the security of APIs and applications, it is critical to track all published APIs and the API dependencies of all applications.

Let’s consider two examples: one as an API provider and one as an API consumer. Assume that the sales department of an organization maintains an API for its partners to place bulk orders. The department has improved this API over time and created multiple versions with various added features. As the organization has many partners and not all partners are ready to adopt the latest version instantly, it has to maintain multiple API versions in an active state. Now let’s assume that the central IT division of the organization introduces a new policy stating that all partner APIs should only allow the IP address ranges used by the corresponding partners. If there is no central place to track all APIs offered to partners and their active versions, it is very hard to enforce this security policy for all relevant APIs, making it easy to miss some APIs and leave security vulnerabilities open.

For the API consumer scenario, we can take the example of an application developer implementing a health insurance claims handling application for an insurance company. During development, the developer uses sandbox versions of multiple APIs, such as a CRM API and a payment percentage computation API. When the claims handling application is moved to the production environment, the developer may forget to switch some dependency APIs to their production versions. As production level security policies are not enforced on sandbox APIs, this can result in leaking sensitive health related information of the company’s customers.

The main method of dealing with these problems is API governance. API governance policies may state that all internally published APIs must be approved by a manager of the corresponding department, and externally published APIs must be approved by a manager and the central IT team. They may also state that all APIs must comply with a certain list of security guidelines. Furthermore, according to the policy, all APIs may have to be published into a central portal, which facilitates tagging, searching and categorizing of APIs. Therefore, when it is necessary to introduce a new security policy, all active APIs can be easily discovered via the central API portal. In order to govern API consumption by applications, an organization may mandate that all dependency APIs be registered in a central API portal. Thus, applications have to subscribe to APIs in the central API portal and obtain tokens in order to use them. This allows administrators and IT teams to easily track which applications use which APIs and to enforce policies based on dependencies (e.g. an application is not allowed to be deployed in the production environment if it depends on a non-production version of any API).

Figure 7: API governance with API management platform features and central registries

Figure 7 highlights the main components of APIM platforms related to API governance. A key feature for API governance is support for extensible API life cycle management. API life cycle management features allow administrators to define the various life cycle stages of APIs (e.g. created, reviewed, published, deprecated, retired) and their transitions. Furthermore, it is possible to associate workflows with state transitions, so that, for example, two manager level users have to approve an API before it moves into the published state. In addition, it is possible to call external systems within state transition workflows, which allows us to publish APIs into a central API portal (or a central registry) in situations where multiple API management deployments are used.
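
The sketch below illustrates the life cycle idea with a small transition table and an approval gate on the move into the published state; the state names and the two-approval rule follow the example above, while everything else is illustrative.

```python
# A minimal sketch of API life cycle management: allowed state transitions
# plus an approval requirement on the transition into "published".
LIFECYCLE_TRANSITIONS = {
    "created": {"reviewed"},
    "reviewed": {"published", "created"},
    "published": {"deprecated"},
    "deprecated": {"retired"},
}

REQUIRED_APPROVALS = {"published": 2}  # two manager-level approvals before publishing

def can_transition(current: str, target: str, manager_approvals: int) -> bool:
    if target not in LIFECYCLE_TRANSITIONS.get(current, set()):
        return False
    return manager_approvals >= REQUIRED_APPROVALS.get(target, 0)

print(can_transition("reviewed", "published", manager_approvals=1))  # False
print(can_transition("reviewed", "published", manager_approvals=2))  # True
```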

In addition to API life cycle management features, sophisticated API portals play an important role in API governance. Portals targeting application developers can track the API dependencies of applications and enforce policy based or workflow based authorizations for API subscriptions by applications. Portals used by API developers can control who can create, review or publish APIs, who is allowed to view and edit APIs, to which API gateways an API can be published based on the roles of the creator, and so on. If an API management platform does not support adequate governance features, it may be necessary to utilize an external registry for governance. However, in such situations, a considerable amount of integration may have to be performed between the API management platform and the registry.

Securing API deployments

APIs cannot be secured only by using an API management platform or an API gateway. Deploying API platform modules, backend services and other components according to a secure architecture is also a key task in API security. Figure 8 shows an example API deployment.

Figure 8: Deployment with API management components, backend services, multiple identity providers and connections to cloud services

Each node shown in the diagram is usually a cluster of two or more instances. The API layer consists of the following components: API gateways, the API control plane, the key manager, the API analytics module and the integration module.

API gateways act as proxies between backend services and client applications; therefore, API invocation level security enforcement is performed at the gateways. The API control plane has features for publishing APIs, defining policies and subscribing to APIs. The key manager is used to issue and validate API tokens; therefore, token issuance level security enforcement is handled by the key manager. The analytics module collects API invocation data from gateways and can be used to evaluate rate limiting policies. The integration module can connect with multiple backend and cloud services, and can perform the necessary message transformations, protocol matching, message validations and service orchestrations.

The API layer components described above and their responsibilities are just one example of a deployment, and vendors may implement them in different ways. For example, it is possible to combine the key manager into the control plane and to implement the analytics module as a set of extensions for an existing analytics platform such as ELK. Furthermore, it is also possible to implement integration features in the API gateway itself without having a separate module. However, having separate modules increases the flexibility of the deployment and allows us to scale individual components as necessary.

Coming back to deployment aspects, all API layer components can be deployed within the internal network of the organization, so that no direct outside traffic is allowed to any component. We can then place a load balancer in the DMZ and allow load balancer to API gateway traffic through the firewall.

However, an organization may have multiple types of API consumers. First, there can be public consumers (e.g. customers of an online shopping portal), who access APIs over the internet. Then there can be partner organizations which also access APIs over the internet. However, as there will be a limited number of partner organizations, it is possible to obtain the IP address ranges used by those organizations. In addition, there can be branch offices located in different regions or countries, which can be connected via VPN. We may want to expose different sets of APIs to these consumers. Therefore, it is possible to use a separate gateway cluster for each consumer type and deploy only the APIs required for that consumer into the corresponding gateway cluster, as shown in figure 8. Then we can have firewall rules stating that public traffic is allowed only for gateway 1, and gateway 2 is restricted to the source IP ranges of partners. Furthermore, gateway 3 is restricted to VPN connections from branch offices.

The key manager component is responsible for issuing tokens and evaluating advanced runtime policies. For example, there can be a policy stating that only users belonging to the warehouse_admin role are allowed to invoke the “warehouse/add_item” method after 6.00 PM. In order to evaluate such policies and perform user attribute based token issuance, it is necessary to associate a user store with the key manager. This can be an LDAP store or a database backed custom user store. Furthermore, an organization may have deployed a central Identity and Access Management (IAM) system to manage all user details. In such situations, the API key manager should be able to federate with the central IAM system so that users provisioned within the IAM system can seamlessly access APIs. In addition, an organization may also want its users to access APIs using their Google or Facebook credentials. In order to fulfill such requirements, the key manager has to federate with such cloud identity providers using protocols such as OpenID Connect, SAML or custom protocols used by those providers.
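
A simplified stand-in for such a runtime policy is sketched below: after 6.00 PM the warehouse/add_item method is allowed only for users in the warehouse_admin role. A real deployment would express this in XACML or a similar policy language evaluated by the key manager or policy engine.

```python
# A simplified stand-in for a role- and time-based runtime policy.
from datetime import datetime

def is_invocation_allowed(resource: str, user_roles: set[str], now: datetime) -> bool:
    if resource == "warehouse/add_item" and now.hour >= 18:  # after 6.00 PM
        return "warehouse_admin" in user_roles
    return True  # otherwise defer to the usual scope and token checks

print(is_invocation_allowed("warehouse/add_item", {"warehouse_staff"}, datetime(2020, 10, 13, 19, 0)))  # False
print(is_invocation_allowed("warehouse/add_item", {"warehouse_admin"}, datetime(2020, 10, 13, 19, 0)))  # True
```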

Now we can consider backend services. These services can be deployed within the organization’s internal network or in a separate network. In either case, such services should not be exposed to external parties without going through the API gateway or some other protection mechanism. If services are deployed in a separate network, some type of secure connection (e.g. VPN) between the API layer and the second network has to be available, as shown in figure 8. Furthermore, some of these services could be cloud based services or services offered by a partner company. In such situations, those services will have some kind of protection mechanism such as OAuth, and the API layer should act as an OAuth client as discussed under security in backend services.

As discussed in this article, it is necessary to consider multiple areas in order to properly secure APIs. Some of these security features are supported out of the box by API management platforms, while others have to be plugged into those platforms as extensions. Furthermore, it may also be necessary to integrate API management platforms with external tools in order to align with the security policies of an organization, while there can also be security policies that relate to the deployment of products rather than the products themselves. Therefore, it is necessary to consider all these aspects in order to effectively secure APIs.
