
Principles of an API Gateway

Microservices and API Gateway on turnoff.us

1. Scalability and Performance

If your platform serves millions or billions of requests a day, it makes sense to invest in technology that can support high traffic without degrading performance. Asynchronous, non-blocking I/O (NIO) based frameworks are best suited for such cases. One of the preferred frameworks is Netty, and using it as the HTTP container has been proven to provide the desired performance boost. Typically, the API Gateway makes calls to multiple backend services for each incoming request. If your backend microservices take a long time to process, the thread resources in your Gateway may get depleted, causing starvation and impacting the Gateway’s ATB (Availability to Business). Writing the API code declaratively using a reactive coding pattern is therefore the right choice for such applications. For example, one can use Java 8’s CompletableFuture as the reactive abstraction. To conclude, for a high-traffic application such as an API Gateway, it is best to use an NIO framework and a reactive programming model for optimal utilization of resources, which yields high performance. The following graph compares the latency (in ms) of the Gateway application using Tomcat and Netty as the HTTP container.

API Gateway — Latency comparison (Tomcat vs Netty)
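The reactive fan-out described above can be sketched with Java 8’s CompletableFuture. The service names and canned responses below are hypothetical stand-ins for real async HTTP calls; the point is that no thread blocks while the two backends are in flight:

```java
import java.util.concurrent.CompletableFuture;

public class ReactiveFanOut {
    // Hypothetical backend calls; real versions would issue
    // asynchronous HTTP requests via a non-blocking client.
    static CompletableFuture<String> callRiskService() {
        return CompletableFuture.supplyAsync(() -> "risk:LOW");
    }

    static CompletableFuture<String> callComplianceService() {
        return CompletableFuture.supplyAsync(() -> "compliance:PASS");
    }

    // Combine both responses declaratively; no thread is held
    // while either backend is still working.
    static CompletableFuture<String> merged() {
        return callRiskService()
                .thenCombine(callComplianceService(), (r, c) -> r + "," + c);
    }

    public static void main(String[] args) {
        System.out.println(merged().join());
    }
}
```

Because `thenCombine` registers a callback instead of blocking, a small, fixed thread pool can keep many such requests in flight at once.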

2. Request orchestration

One of the key functionalities of a gateway application is to fan out API requests to many backend microservices. To do this effectively, one needs to consider the following crucial aspects:

  • Serial/parallel invocation patterns: The API Gateway should allow downstream services to be invoked serially, in parallel, or in a mix of both. It should also support conditional routing.
  • Configurable routing: Creating a new route to a backend microservice should require no new code (or minimal code); in other words, service routing should be driven by configuration. This keeps the Gateway codebase lean as new APIs are added.
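A configuration-driven route supporting both patterns can be sketched as below. The route table, service names, and responses are illustrative assumptions, not the actual Gateway’s configuration format: each route is an ordered list of groups, where calls within a group run in parallel and groups run serially.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class ConfigDrivenRouter {
    // Hypothetical route config: API name -> ordered groups of backend calls.
    // Calls within a group fan out in parallel; groups execute serially.
    static final Map<String, List<List<String>>> ROUTES = Map.of(
            "checkout", List.of(
                    List.of("riskService", "fraudService"), // parallel group
                    List.of("decisionService")              // runs after the group above
            ));

    // Hypothetical backends, keyed by name so new routes need config only.
    static final Map<String, Supplier<String>> BACKENDS = Map.of(
            "riskService", () -> "risk:ok",
            "fraudService", () -> "fraud:ok",
            "decisionService", () -> "decision:approve");

    static List<String> route(String api) {
        List<String> results = new ArrayList<>();
        for (List<String> group : ROUTES.get(api)) {
            // Fan out the whole group, then wait before the next group starts.
            List<CompletableFuture<String>> futures = new ArrayList<>();
            for (String svc : group) {
                futures.add(CompletableFuture.supplyAsync(BACKENDS.get(svc)));
            }
            for (CompletableFuture<String> f : futures) {
                results.add(f.join());
            }
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(route("checkout"));
    }
}
```

Adding a new API here means adding an entry to `ROUTES`, not writing new orchestration code, which is the point of configurable routing.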

3. Response translation

The API Gateway may orchestrate requests to multiple downstream services, as noted in the section above. The client, however, may need a single response. In such scenarios, the API Gateway should take responsibility for merging the different responses and dispatching a single, meaningful one. One effective way to achieve this is a lookup table, which defines the possible permutations of responses and lays out the final outcome for each combination. For example, if a gateway service calls n microservices, it can consolidate the n responses into one by looking up the outcome that matches the response combination.
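A minimal sketch of such a lookup table for two hypothetical backends (a risk decision and a compliance check) might look like this; the outcome names are assumptions for illustration:

```java
import java.util.Map;

public class ResponseLookup {
    // Hypothetical lookup table: each key is a combination of backend
    // outcomes, each value is the single consolidated response.
    static final Map<String, String> LOOKUP = Map.of(
            "APPROVE|PASS", "ALLOW",
            "APPROVE|FAIL", "REVIEW",
            "DECLINE|PASS", "DENY",
            "DECLINE|FAIL", "DENY");

    static String consolidate(String riskOutcome, String complianceOutcome) {
        // Unknown combinations fall back to a safe default.
        return LOOKUP.getOrDefault(riskOutcome + "|" + complianceOutcome, "REVIEW");
    }

    public static void main(String[] args) {
        System.out.println(consolidate("APPROVE", "PASS"));
    }
}
```

The table makes the merge logic data-driven: adding a backend or changing a policy edits the table, not the merge code.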

4. Enabling Fallback

The fallback pattern routes a request to a lightweight version of your service when the primary service fails. This routing pattern prevents a complete failure of your business and acts as a last line of defense. If building a lightweight service is too expensive, the API Gateway can simply route the call to a default function, enabling your service to provide a best-effort response rather than fail completely.
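In the CompletableFuture style discussed earlier, the fallback fits naturally into the pipeline via `exceptionally`. A minimal sketch, with hypothetical primary and fallback suppliers standing in for real service calls:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class FallbackRoute {
    // If the primary call fails for any reason, serve the fallback
    // (a lightweight service or a default function) instead.
    static CompletableFuture<String> callWithFallback(Supplier<String> primary,
                                                      Supplier<String> fallback) {
        return CompletableFuture.supplyAsync(primary)
                .exceptionally(ex -> fallback.get()); // last line of defense
    }

    public static void main(String[] args) {
        String result = callWithFallback(
                () -> { throw new RuntimeException("primary down"); },
                () -> "default-response").join();
        System.out.println(result);
    }
}
```

The happy path is untouched; the fallback only runs when the primary future completes exceptionally.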

5. Rate limiting and Access Control

While the API Gateway enables your business to expose its operations via APIs, it is also responsible for protecting your microservices from traffic bursts and unauthorized access. Building a rate limiter into the Gateway ensures that sudden traffic spikes do not hurt your application, protecting it from intentional or unintentional DDoS attacks. Certain APIs in your platform may be sensitive, and you may want to govern access to them; the API Gateway can restrict API access and allow only authorized applications to call your services.
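One common way to implement such a rate limiter is a token bucket: requests spend tokens, tokens refill at a steady rate, and bursts beyond the bucket’s capacity are rejected. A minimal sketch of the technique (not necessarily the algorithm used in the Gateway described here):

```java
public class TokenBucket {
    private final long capacity;        // max burst size
    private final double refillPerNano; // steady refill rate
    private double tokens;
    private long last;

    TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.last = System.nanoTime();
    }

    // Returns true if the request may proceed, false if it should
    // be rejected (e.g., with HTTP 429).
    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - last) * refillPerNano);
        last = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

Capacity bounds the burst a client can send, while the refill rate bounds its sustained throughput; both can be configured per client or per API.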

6. Monitoring and Analytics

For any platform, it is crucial to have a monitoring dashboard that displays vital system and application metrics. Since the API Gateway is the platform’s facade, it makes sense to build monitoring and alerting features at this layer. A few examples of important metrics are ATB (Availability to Business), API latency, and API error counts, which provide insight into the health of the API Gateway itself. In addition, the API Gateway can emit useful information from API requests and responses to an offline store (such as Hadoop) for detailed business analytics.
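A per-API metrics accumulator for the counters named above might be sketched as follows; the class and method names are illustrative, and a production system would use a metrics library with percentile histograms rather than a simple average:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ApiMetrics {
    private final AtomicLong total = new AtomicLong();
    private final AtomicLong errors = new AtomicLong();
    private final AtomicLong latencySumMs = new AtomicLong();

    void record(long latencyMs, boolean success) {
        total.incrementAndGet();
        latencySumMs.addAndGet(latencyMs);
        if (!success) {
            errors.incrementAndGet();
        }
    }

    // ATB: fraction of requests served successfully.
    double availability() {
        long t = total.get();
        return t == 0 ? 1.0 : 1.0 - (double) errors.get() / t;
    }

    double avgLatencyMs() {
        long t = total.get();
        return t == 0 ? 0.0 : (double) latencySumMs.get() / t;
    }
}
```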

API Gateway for Risk Platform at PayPal

In conclusion, in a world of many diverse microservices, an API Gateway is a natural choice for exposing capabilities in a well-defined fashion. Several factors must be considered when designing one: the right technology for scalability, design patterns supporting flexible orchestration and response consolidation, fallback services, rate limiters, access controls, and ample monitoring and analytics. Our team at PayPal has built an API Gateway on these principles that serves as the single point of entry to all Risk APIs in PayPal. At present, it serves many millions of API calls a day with a latency of just 1.4 ms on average and 3 ms at the 99th percentile. CPU and memory utilization remain low even at peak traffic, so we can serve growing transaction volumes without adding horizontal capacity.

PayPal Engineering

The PayPal Engineering Blog

Starakjeet Nayak

Written by

Product Manager @ PayPal
