How To Choose an API Gateway
At Cloudcall we’re beginning the process of migrating from our legacy monolithic API to a microservice architecture. In the early days, it was prudent to build a single API service which provided many of the interfaces necessary to configure and use our product. However, as the complexity of the service and range of functionality has grown, this has become a barrier to further progress.
Monolithic applications can be successful, but they increasingly cause frustration, especially as more applications are deployed to the cloud. Change cycles are tied together: a change to a small part of the application requires the entire monolith to be rebuilt and deployed. Over time it is often hard to maintain a good modular structure, making it harder to keep changes that ought to affect only one module within that module. Scaling requires scaling the entire application rather than just the parts that need greater resources (see Martin Fowler on microservices: https://martinfowler.com/articles/microservices.html).
The microservices architecture has developed in response to these issues. Microservice applications can be deployed and scaled independently, and the approach encourages modular service design and supports efficient, focussed team effort.
A critical component of a Microservices architecture is an API Gateway. This is a reverse proxy which sits between API services and their consumers (typically elsewhere on the internet). Within the gateway, functionality common to all APIs can be abstracted away, further simplifying the implementation and expansion of the APIs.
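At its core, the gateway's job is to match an incoming request path to a back-end service and forward the request. A minimal sketch of that routing step, with hypothetical path prefixes and service addresses:

```python
# Minimal sketch of the routing table at the heart of an API Gateway.
# The path prefixes and service addresses are hypothetical examples.
ROUTES = {
    "/users": "http://user-service:8080",
    "/billing": "http://billing-service:8080",
}

def resolve_backend(path: str):
    """Return the back-end base URL whose prefix matches the request path,
    or None if no service owns the path (the gateway would answer 404)."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend
    return None
```

Everything that follows — authentication, logging, rate limiting — hangs off this proxying step, which is why so much shared functionality can live here.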
API Gateways offer a great many benefits in a microservices environment. However, these benefits must be weighed in the context of the current technology landscape within the organisation. We must recognise not only the “end-game” benefits but also the early-stage requirements, and identify a clear migration path from one to the other. Given this, I believe the following factors should be carefully considered when evaluating an API Gateway product.
An API Gateway can provide an authentication layer for back-end services, ensuring that only authenticated requests are passed on to back-end APIs. This allows authentication capabilities to be expanded without needing to make changes to back-end APIs. It also saves having to reimplement authentication in each back-end service.
The API Gateway may inject appropriate authentication headers (e.g. username, id) when proxying requests to back-end API services, which may use this information when servicing requests.
The API Gateway should be capable of feature-parity integration with our current authentication models, and accommodate the necessary transitional steps as we move towards expanding our authentication framework.
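As a sketch of how this might look, the gateway validates a bearer token and, on success, injects identity headers before proxying. The token store and the X-User-* header names here are illustrative assumptions, not our actual scheme:

```python
# Illustrative token store; in practice this would be a lookup against an
# identity provider or session service, not an in-memory dict.
TOKENS = {"abc123": {"id": "42", "name": "alice"}}

def authenticate(headers: dict):
    """Return the authenticated identity, or None (reject with 401)."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return None
    return TOKENS.get(auth[len("Bearer "):])

def inject_identity(headers: dict, user: dict) -> dict:
    """Copy the request headers, adding identity headers for back-end APIs.
    The X-User-* header names are assumptions for this sketch."""
    forwarded = dict(headers)
    forwarded["X-User-Id"] = user["id"]
    forwarded["X-User-Name"] = user["name"]
    return forwarded
```

Back-end services then trust these headers rather than re-validating credentials, which is what lets authentication evolve in one place.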
As with authentication, the benefit an API Gateway can provide for authorisation is in abstracting common complexity away from back-end services. Authorisation may be applied to any request attribute (e.g. time, location) but in most cases will apply to “who” is making the request; in this case, authentication is a prerequisite for authorisation.
The API Gateway may inject appropriate authorisation headers (e.g. role, group) when proxying requests to back-end API services, which may use this information when servicing requests.
In the near term, it is expected that applications will continue to manage authorisation internally, though may need to be updated to keep in-step with advances in the authentication framework.
Longer term, service-level authorisation might be implemented at the API Gateway level. In this arrangement access to broad features will be centrally managed, while more fine-grained, locally scoped authorisation will be provided by individual services.
The API Gateway should support current authorisation mechanisms; some services may require modification to achieve this. The Gateway should support migration to a service-level authorisation model over time.
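A coarse, service-level check at the gateway might look like the following sketch, where paths with no policy fall through to the service's own fine-grained authorisation (the role names and path policy are hypothetical):

```python
# Hypothetical service-level policy: which roles may reach which path prefix.
ROLE_ACCESS = {
    "/billing": {"admin", "finance"},
}

def authorise(path: str, roles: set) -> bool:
    """Gate broad feature access centrally; paths without a policy are
    allowed through and left to the service's own fine-grained checks."""
    for prefix, allowed in ROLE_ACCESS.items():
        if path.startswith(prefix):
            return bool(allowed & roles)
    return True
```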
Sitting as a reverse proxy between clients and APIs, the API Gateway is in an excellent position to provide consistent logging across all service endpoints. This is not to say that back-end services do not need to log their behaviour — they should — but the API Gateway can provide a uniform, default logging capability for all.
To enable logs from multiple services to be analysed together, the API Gateway should provide a transaction id injected into the request headers, so downstream services can include it when logging their activities.
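A sketch of that injection, assuming an X-Transaction-Id header name (the exact header is a convention we would need to agree on):

```python
import uuid

def ensure_transaction_id(headers: dict) -> dict:
    """Copy the request headers, generating a transaction id only when the
    caller has not already supplied one, so an existing id survives the hop."""
    forwarded = dict(headers)
    forwarded.setdefault("X-Transaction-Id", str(uuid.uuid4()))
    return forwarded
```

Preserving a caller-supplied id matters once requests pass through more than one proxy, or when services call each other and want one id across the whole chain.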
As with logging, an API Gateway would be well positioned to provide uniform, default monitoring across all services. Again, back-end services should provide appropriate monitoring for themselves, in addition to the high-level monitoring provided by an API Gateway.
The API Gateway should integrate with a comprehensive monitoring solution to track request/response times and service availability.
Monitoring at the Gateway level introduces the possibility of auto-scaling back-end services in response to use: monitoring may detect unusually high service activity and start additional processes to meet demand and improve service reliability. Conversely, it may reduce the number of active processes at quieter times, which can yield cost savings on pay-per-use infrastructure.
An API Gateway would not be expected to provide auto-scaling, but should integrate well with services which provide this capability.
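An external autoscaler consuming gateway metrics might apply a simple target-tracking rule along these lines (the target throughput and replica bounds are invented for illustration):

```python
import math

def desired_replicas(requests_per_sec: float,
                     target_per_replica: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Scale a service towards one replica per `target_per_replica` req/s,
    clamped to sane bounds. A real autoscaler would also smooth the input
    metric over time to avoid flapping."""
    needed = math.ceil(requests_per_sec / target_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```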
As a reverse proxy an API Gateway would be well positioned to support a sophisticated resource caching strategy.
An API Gateway should be able to inject the necessary headers for configuring client-side asset caching. It should also be expected to provide reasonable caching behaviour on the server, though for more sophisticated cache profiles a separate product would likely be required.
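Client-side caching could be configured by the gateway stamping Cache-Control headers per path, along these lines (the cache profiles are hypothetical, and a back-end service's own header takes precedence):

```python
# Hypothetical cache profiles keyed by path prefix.
CACHE_PROFILES = {
    "/static": "public, max-age=86400",
    "/api": "no-store",
}

def apply_cache_headers(path: str, response_headers: dict) -> dict:
    """Copy the response headers, adding a Cache-Control directive for the
    first matching path prefix; unmatched paths are left untouched."""
    out = dict(response_headers)
    for prefix, directive in CACHE_PROFILES.items():
        if path.startswith(prefix):
            out.setdefault("Cache-Control", directive)
            break
    return out
```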
Related to authentication and authorisation, rate-limiting restricts access to APIs by permitting only a certain number of requests within a set time frame (e.g. 1000 requests per hour). This helps to reduce load on services and to prevent misuse.
It may be desirable to rate-limit third-party access to APIs, which opens up the possibility of paid-for access to higher rate limits as a revenue stream.
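The classic mechanism here is a per-client request counter; a fixed-window limiter is the simplest sketch (a production gateway would more likely use a token bucket with state shared across gateway nodes):

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client.
    The clock is injectable so the behaviour is easy to test."""

    def __init__(self, limit: int, window: float, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self._state = {}  # client -> (window_start, count)

    def allow(self, client: str) -> bool:
        now = self.clock()
        start, count = self._state.get(client, (now, 0))
        if now - start >= self.window:  # window expired: start a fresh one
            start, count = now, 0
        if count >= self.limit:
            self._state[client] = (start, count)
            return False
        self._state[client] = (start, count + 1)
        return True
```

For instance, `FixedWindowLimiter(1000, 3600)` implements the 1000-requests-per-hour example above; tiered limits per client would support the paid-access model.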
Payload transformation is the capability to modify request and response payloads on the fly. This capability is an integral component of many of the above requirements. It is critical that an API Gateway should permit highly bespoke payload transformations.
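As a trivial sketch of what “bespoke” can mean in practice, a response transformation might strip internal-only fields before a payload leaves the gateway (the field names are invented for this example):

```python
# Hypothetical fields that back-end services include but clients must not see.
INTERNAL_FIELDS = {"internal_id", "shard"}

def transform_response(payload: dict) -> dict:
    """Drop internal-only fields from an outgoing JSON-like payload."""
    return {k: v for k, v in payload.items() if k not in INTERNAL_FIELDS}
```

Real transformations are often richer — renaming fields across API versions, or rewriting URLs in response bodies — which is why arbitrary, scriptable transforms matter in product selection.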
As the public interface to services, the API Gateway is a critical infrastructure component, and must support being run in a high-availability architecture. It should support load balancing, shared state and ideally auto-scaling. As a reverse-proxy, the Gateway should provide a meaningful response to clients when back-end services are unavailable.
As services grow and develop in response to business needs, it may become necessary to make breaking changes across versions, refactor functionality, or decommission old services and replace them anew. This process of lifecycle management is greatly eased by an API Gateway abstracting the public interface of a service from its internal mechanisms.
With a single monolithic application, updating one part requires deploying the whole. Feature deprecation is challenging, and publicly versioning a monolithic API is difficult. Enhancements are mostly achieved by making changes in the codebase, and you may be firmly wedded to environments (e.g. development, QA, production) for the whole service stack.
Moving to a microservices model will allow more intelligent decisions to be made about how to manage the service lifecycle. It will be possible to publicly provide multiple versions of an API, allowing greater flexibility in early-life testing of new features and end-of-life support for old ones. When planning new feature developments, it will be possible to make non-breaking changes within a version, breaking changes across versions, and to introduce new capability in an entirely new service. All of this gives us scope to design more appropriate solutions to problems.