Managing Production-grade Asynchronous APIs

How can you utilise an API Management solution to manage asynchronous APIs at scale? What benefits will you gain?

Dunith Danushka
Tributary Data
7 min read · Aug 21, 2021


Photo by Lars Kienle on Unsplash

Asynchronous APIs are quite different from their traditional RESTful counterparts. While RESTful APIs have established practices for managing them in production, asynchronous APIs are slowly building up their ecosystem to get there.

This article discusses the importance of using an API Management solution to expose an asynchronous streaming service to receive production traffic from the Internet.

If you are new to the world of asynchronous APIs, I suggest you read my previous article for a primer.

Use Case — Streaming service for server telemetry

Imagine a metrics service that collects and streams telemetry data from a server cluster over a WebSocket connection. The service allows its consumers to subscribe to different “channels” to receive information about a specific server in the cluster.

Current solution in place

For example, a subscription to the ws://localhost:808/metrics/1/memory URL results in a continuous flow of health data coming from server 1.

wscat -c ws://localhost:808/metrics/1/memory
{"heap":67893392,"nonHeap":36260800,"timestamp":1614803952066}
{"heap":72591160,"nonHeap":37250808,"timestamp":1614803953067}
{"heap":72591160,"nonHeap":37251544,"timestamp":1614803954064}

Currently, this service sits behind the firewall, restricted to internal use. But the goal is to expose it to the Internet so that third-party developers can consume it from the applications they build.

To make things simple, can we front it with a load balancer to make it public-facing and call it a day?

Absolutely not!

We need to think things through carefully before the service accepts production traffic from an untrusted place like the Internet. The rest of the article explains some critical factors to consider and the process of making it a production-ready streaming API.

Coupling it with an API Management solution

Today, only a handful of people access the metrics service. But if you open it up to the Internet, things will be different.

The biggest concern is security. Can you trust every consumer coming from the outside? If they are not properly authenticated, data breaches are inevitable. Also, you’ll have to scale the service infrastructure to serve web-scale traffic, and that might break the bank. You can, however, use techniques like rate limiting to shape incoming traffic so that it won’t crash the service. Besides, concerns like version management, analytics, API documentation, and monetization become “must-have” features when external consumers start using your API.

Therefore, the recommendation is to use an API Management solution to expose the service to the public, as such solutions have the necessary features built in to address the concerns above.

Recommended solution: an API Management layer that takes away many challenges from service developers.

Making it production-ready

In this section, we’ll look at the process of making our metrics service ready to accept production traffic from the Internet.

From here onwards, I will use WSO2 API Manager as the example API Management solution. It is an open-source platform for building, integrating, and exposing digital services as managed APIs.

The API management features discussed here are standard across many modern-day API Management platforms. I chose WSO2 API Manager as the reference because it is simple to get started with, but you are free to go ahead with any solution of your choice.

Document the service with AsyncAPI specification

If you are making a service production-ready, it must be well documented. That way, the service consumers will find vital information about the service without depending on a human. You can use the AsyncAPI specification for that.

The AsyncAPI specification allows developers, architects, and product managers to define the interfaces of an async API, much like OpenAPI (formerly Swagger) does for REST APIs. In our case, we will create the metrics-spec.yml file that tells the consumer where to discover the running service instance, what parameters are expected, and what the response schema would look like.

The AsyncAPI schema for the metrics service would look like this.
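A minimal sketch, assuming an AsyncAPI 2.0 document with a single memory channel parameterised by server ID (the payload fields follow the sample output shown earlier; the title, server URL, and descriptions are illustrative):

# metrics-spec.yml: illustrative AsyncAPI definition for the metrics service
asyncapi: '2.0.0'
info:
  title: Server Metrics Service
  version: '1.0.0'
  description: Streams memory telemetry for each server in the cluster.
servers:
  internal:
    url: localhost:808
    protocol: ws
channels:
  /metrics/{serverId}/memory:
    parameters:
      serverId:
        description: Identifier of the server to observe.
        schema:
          type: string
    subscribe:
      summary: Receive memory readings for the given server.
      message:
        payload:
          type: object
          properties:
            heap:
              type: integer
              description: Heap memory usage in bytes.
            nonHeap:
              type: integer
              description: Non-heap memory usage in bytes.
            timestamp:
              type: integer
              description: Epoch time of the reading in milliseconds.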

You can learn more about the AsyncAPI specification from here.

Creating the Metrics API

The next step is to create an API proxy (or simply an API) to make the metrics service public-facing. In WSO2 API Manager, you can create a new API from an already existing AsyncAPI specification.

For information about creating a new API, see Create a Streaming API from an AsyncAPI Definition.

WSO2 API Manager allows you to create an API from an already existing AsyncAPI specification.

Applying a rate-limiting policy

We have now created the API, but it is still in the CREATED lifecycle state. We need to pick a rate-limiting policy before transitioning it to the PUBLISHED state.

Rate limiting allows you to limit the number of successful hits to an API during a given period, typically in cases such as the following:

  • To protect your APIs from common types of security attacks, such as certain types of denial-of-service (DoS) attacks.
  • To regulate traffic according to infrastructure availability.
  • To make an API, application, or a resource available to a consumer at different service levels, usually for monetization purposes.

How does rate limiting work for streaming APIs?

In our example, the client application initiates a WebSocket connection with the WSO2 API Manager. Then the incoming data will be proxied to the metrics service.

Once the initial handshake is done, communication over a WebSocket connection happens via WebSocket frames, which can be sent from client to server and vice versa. Each WebSocket frame is counted as an event. Frames will be throttled out once the aggregate number of frames sent in both directions reaches the limit defined in the rate-limiting policy.

To activate a rate-limiting policy for the metrics API, you need to select a Business Plan.

A business plan acts as a rate-limiting policy

Managing the API lifecycle

Once a business plan is selected, you can promote the Metrics API to the PUBLISHED state. That deploys the API to the gateway, making it ready to receive external traffic.

Lifecycle transitions to PUBLISHED

If you are not happy with the current version of the API, you can always create multiple revisions and deploy again. That helps, especially when you want to introduce incremental updates to the API.

Create revisions for APIs

Making the API discoverable

Once you publish an API, it will appear in the Developer Portal.

The Developer portal acts as a public API catalog, allowing external consumers to search for APIs, read the documentation, try them out, and finally subscribe via an application.

In our case, developers can browse the AsyncAPI specification of the Metrics API from the Developer Portal. Besides, they can engage with the community through the built-in forum to raise any questions related to the API and clarify their doubts.

Listing APIs in the Developer Portal is crucial if you want the APIs to be adopted by a broader developer community.

API documentation can be hosted in the Developer Portal

Securing the API

API security is a critical factor once an API is released to the public.

By default, the WebSocket APIs created in API Manager are secured with OAuth2, meaning the consumer MUST provide an OAuth2 access token along with the WebSocket API invocation request.

curl http://localhost:8280/telemetry/1.0.0/metrics/1 -H "Authorization: Bearer [accesstoken]"

When developers are satisfied with the Metrics API in the Developer Portal, they’ll decide to subscribe to it through an application. WSO2 API Manager provides the necessary components for developers to create applications, generate consumer keys, and programmatically obtain an OAuth token using them.
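As an illustration, the consumer-side flow could look like the following, using the client_credentials grant. The token endpoint shown (https://localhost:8243/token) is the WSO2 API Manager default, the consumer key, secret, and access token are placeholders, and the gateway URL follows the earlier curl example, so adjust these to match your deployment.

# Exchange the application's consumer key and secret for an OAuth2 access token
curl -k -u <consumerKey>:<consumerSecret> -d "grant_type=client_credentials" https://localhost:8243/token

# Open the WebSocket connection, passing the returned token in the Authorization header
wscat -c ws://localhost:8280/telemetry/1.0.0/metrics/1 -H "Authorization: Bearer <accessToken>"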

In our example, we can offload the heavy lifting of API security to API Manager, allowing us to focus more on the metrics service implementation rather than fiddling with authentication schemes.

API analytics and monetization

Last but not least, analytics and monetization are not critical to launching your streaming API to the public, but they are helpful for determining whether the API is returning the investment you have already put in.

WSO2 API Manager provides a cloud-based analytics portal that displays runtime insights of APIs, such as usage, subscription count, and latency. It can also generate alerts if something has gone wrong with the API.

API Analytics provides a broad range of API runtime insights

The built-in API monetization allows you to generate revenue based on API call volume. Developers subscribe to the Metrics API with a business plan (also called the rate-limiting policy), and based on their usage, they’ll be charged monthly. WSO2 API Manager ships with a sample billing implementation that uses Stripe to handle payments for API monetization. However, you can plug any custom implementation into WSO2 API Manager’s API Monetization capabilities based on your requirements.

Takeaways

Deploying a streaming API to production goes beyond the load balancer level configurations. It requires you to think about scalability, security, and other critical aspects as well.

Using an API Management solution that can receive traffic on behalf of your streaming service is recommended if you plan to open it for a broader audience.

In this post, I just highlighted the benefits you would get from a typical API Management solution. I will provide a step-by-step guide in another post that explains the design, deployment, and management aspects of such an API.
