How to deploy WSO2 API Manager gateway cluster with synchronization

Chanaka Fernando
WSO2 Best Practices
11 min read · Nov 24, 2020


Introduction

WSO2 API Manager is one of the leading API management platforms according to various analyst reports, including Forrester. It provides full API lifecycle management capabilities under a business-friendly open-source license that lets users try out the product without spending a dime. The product has gone through several iterations of architecture improvements so that it captures the essential needs of its users and goes beyond them. The 3.x series of WSO2 API Manager started off with several changes to its feature set, revolving around:

  • UI/UX improvements by moving to the React framework
  • API gateway security improvements with self-contained tokens, API keys, and basic auth support
  • Multiple protocol support (GraphQL, gRPC, WebSocket)
  • API monetization improvements

The latest version of the 3.x series, 3.2.0, was released a couple of months ago (2020 Q3) with several changes to the architecture of the product and to how it behaves in areas such as propagating changes across components and validating user requests. Most of the changes focus on using event-driven architecture to reduce the dependencies between components and make it a truly cloud-native, self-contained product. You can read more about these architecture improvements in the post below, written by Shiroshika (Shiro).

https://wso2.com/library/articles/wso2-api-manager-embracing-event-driven-architecture/

With these changes, one of the areas that need more attention is the deployment aspect of the WSO2 API Manager. In this article, I’m going to discuss setting up a WSO2 API Manager deployment with multiple gateways that can automatically synchronize the artifacts across nodes and scale based on the need.

What are we going to build?

With the latest improvements to the WSO2 API Manager product, the gateway has become the one component that needs to scale in most cases. That does not mean that other components like the publisher, key manager, and traffic manager never need to scale, but their rate of scaling is quite low compared with the gateways. Let’s take a look at what we are going to build.

Figure: Deployment architecture for artifact synchronization

As depicted in the above figure, we are going to build a deployment with 3 WSO2 API Manager nodes. One node acts as the all-in-one profile that contains all the management and control plane functionality (node1), and the 2 other nodes provide gateway functionality (node2, node3). The idea is to configure this setup so that any change we make to APIs or subscriptions on the master node gets reflected on the gateway nodes automatically. Let’s go ahead and build it.

Before we start

You have to download the latest version of WSO2 API Manager with the updates from the following link.

https://wso2.com/api-management/#

Just tick the license agreement box and download the package you need. I would recommend the binary package (zip archive) for this exercise. But you can download any other package as well.

Once you download the package, extract it into a folder and make 3 copies of the extracted folder, namely node1, node2, and node3. I’m going to use this convention for the rest of the article. We will use node1 as the all-in-one node, while node2 and node3 are considered gateways.
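As a quick sketch (assuming the downloaded archive is named wso2am-3.2.0.zip; adjust the file name to match what you actually downloaded), the copies can be created like this:

# extract the distribution and make three copies named node1, node2, and node3
unzip wso2am-3.2.0.zip
for n in node1 node2 node3; do cp -r wso2am-3.2.0 "$n"; done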

In this deployment, we are going to use 2 options that are available to deploy gateways with specific APIs.

Use API Gateway as a separate environment

In this model, we are going to define the node2 as a separate gateway environment within the publisher node of node1 and select that when publishing APIs.

Use API Gateway with labels

In this model, the API gateway has its own label and does not need to be added to the publisher node as a separate environment. When publishing APIs from the publisher, the label is selected instead. In this scenario, the label needs to be created through the admin portal; we will create it by logging in to the admin portal after starting node1.

Let’s configure

Configuring Node1

Let’s start with the node1 since that is where we have most of the configurations. Open the file APIM_HOME/repository/conf/deployment.toml and add the following changes.

Add node2 as a gateway environment in the publisher component

You can achieve this by adding a gateway environment to the toml file as mentioned below.

[[apim.gateway.environment]]
name = "env1"
type = "production"
display_in_api_console = true
description = "This is the gateway that handles production token traffic."
show_as_token_endpoint_url = true
service_url = "https://localhost:9493/services/"
username = "admin"
password = "admin"
ws_endpoint = "ws://localhost:9149"
wss_endpoint = "wss://localhost:8149"
http_endpoint = "http://localhost:8330"
https_endpoint = "https://localhost:8293"

Please note that the port values are offset by 50 since we are going to use offset=50 for node2.
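For example, with offset=50 the default HTTPS servlet port 9443 becomes 9493, the HTTP/HTTPS pass-through ports 8280 and 8243 become 8330 and 8293, and the WebSocket ports 9099 and 8099 become 9149 and 8149, which is what the endpoints above reflect.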

Configure throttling endpoints for the gateway that is running on node1

Given that we are using the traffic manager component running on node1 for event handling, we need to configure that for the gateway on node1 to use as the event hub.

[apim.throttling]
enable_data_publishing = true
enable_policy_deploy = true
enable_blacklist_condition = true
enable_persistence = true
throttle_decision_endpoints = ["tcp://localhost:5672", "tcp://localhost:5672"]
service_url = "https://localhost:9443/services/"

You also need to add the throttling URL group configuration as shown below.

[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://localhost:9611"]
traffic_manager_auth_urls = ["ssl://localhost:9711"]

Enable artifact publishing in the publisher node of node1

Once artifacts such as APIs are created in the API Publisher, they need to be deployed to the gateways. The default mode is to publish the artifacts directly to the connected gateways. Given that we are configuring the gateways to synchronize using the built-in event-based mechanism, we are going to use the database-based model instead.

[apim.sync_runtime_artifacts.publisher]
artifact_saver = "DBSaver"
publish_directly_to_gateway = "false"

Configure the gateway within node1 to retrieve updates from the database

Once the artifacts are created and the gateways receive events about the new changes, the gateways have to connect to the traffic manager and fetch the updates from the database. This is done with the following configuration.

[apim.sync_runtime_artifacts.gateway]
gateway_labels = ["Production and Sandbox"]
artifact_retriever = "DBRetriever"
deployment_retry_duration = 15000
data_retrieval_mode = "sync"
event_waiting_time = 5000

That’s all for the node1. This node will act as the Gateway1, Publisher, Developer Portal, Traffic Manager, Key Manager, and Admin portal.

Now go ahead and start the server with the following command running from the base directory of node1.

sh bin/wso2server.sh
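If you want to watch the startup progress, you can tail the carbon log from the node1 directory (the standard log location in a WSO2 distribution):

tail -f repository/logs/wso2carbon.log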

Once the server is started, log on to the admin portal at the following URL and create a label for node3 with the following details.

Admin portal URL: https://localhost:9443/admin

Click on the “Gateway” tab on the left panel and create a label.

Gateways -> Add Gateway Label

Figure: Add gateway label with the admin portal

Name: qa

Description: QA server

Host 1: http://localhost:8380

Host 2: https://localhost:8343

Now you have configured the node1 with the publisher, gateway (for node1), and the settings required for node2 and node3 functionality. Let’s go ahead and configure the node2 and node3 as well.

Configuring Node2

As we mentioned earlier, node2 is considered as a new gateway environment with the name “env1”. Let’s go ahead and configure it. Open the APIM_HOME/repository/conf/deployment.toml file and make the following changes.

Set the port offset to 50 since you are starting all the nodes on the same computer.

offset=50
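As a minimal sketch (the hostname value shown is simply the default; keep whatever is already in your file), the offset goes under the existing [server] section of deployment.toml rather than as a standalone top-level entry; node3 later gets 100 in the same way:

[server]
hostname = "localhost"
offset = 50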

Configure throttling endpoints for the gateway that is running on node2

Given that we are using the traffic manager component running on node1 for event handling, we need to configure that for the gateway on node2 to use as the event hub.

[apim.throttling]
enable_data_publishing = true
enable_policy_deploy = true
enable_blacklist_condition = true
enable_persistence = true
throttle_decision_endpoints = ["tcp://localhost:5672", "tcp://localhost:5672"]
service_url = "https://localhost:9443/services/"

You also need to add the throttling URL group configuration as shown below.

[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://localhost:9611"]
traffic_manager_auth_urls = ["ssl://localhost:9711"]

Configure the gateway within node2 to retrieve updates from the database

Once the artifacts are created and the gateways receive events about the new changes, the gateways have to connect to the traffic manager and fetch the updates from the database. This is done with the following configuration. We are using “env1” as the gateway label here.

[apim.sync_runtime_artifacts.gateway]
gateway_labels = ["env1"]
artifact_retriever = "DBRetriever"
deployment_retry_duration = 15000
data_retrieval_mode = "sync"
event_waiting_time = 5000

That’s all for node2. Now go ahead and start the node2 with the following command.

sh bin/wso2server.sh

Configuring Node3

As we mentioned earlier, node3 is configured through the label-based approach, and we already created the label “qa” when configuring node1. Open the APIM_HOME/repository/conf/deployment.toml file and make the following changes.

Set the port offset to 100 since you are starting all the nodes on the same computer.

offset=100

Configure throttling endpoints for the gateway that is running on node3

Given that we are using the traffic manager component running on node1 for event handling, we need to configure that for the gateway on node3 to use as the event hub.

[apim.throttling]
enable_data_publishing = true
enable_policy_deploy = true
enable_blacklist_condition = true
enable_persistence = true
throttle_decision_endpoints = ["tcp://localhost:5672", "tcp://localhost:5672"]
service_url = "https://localhost:9443/services/"

You also need to add the throttling URL group configuration as shown below.

[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://localhost:9611"]
traffic_manager_auth_urls = ["ssl://localhost:9711"]

Configure the gateway within node3 to retrieve updates from the database

Once the artifacts are created and the gateways receive events about the new changes, the gateways have to connect to the traffic manager and fetch the updates from the database. This is done with the following configuration. We are using “qa” as the gateway label here.

[apim.sync_runtime_artifacts.gateway]
gateway_labels = ["qa"]
artifact_retriever = "DBRetriever"
deployment_retry_duration = 15000
data_retrieval_mode = "sync"
event_waiting_time = 5000

That’s all for node3. Now go ahead and start the node3 with the following command.

sh bin/wso2server.sh

Trying out the setup

Before we start

An API always sits in front of a backend service running in your environment. If you don’t have a backend service to test the functionality, you can set up a simple REST service in 30 seconds with json-server, as described in the link below.

https://www.npmjs.com/package/json-server#getting-started

Let’s assume that you have this default API running at http://localhost:3000 with a few resources. We’ll use this as the backend.
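As a quick sketch (the db.json content below is the sample from the json-server getting-started guide and matches the responses shown later in this article):

# install json-server and create a sample database file
npm install -g json-server
echo '{ "posts": [ { "id": 1, "title": "json-server", "author": "typicode" } ] }' > db.json
# start the backend on the default port 3000
json-server --watch db.json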

Let’s create an API and publish

Now we have 3 API Manager nodes up and running with port offset values 0, 50, and 100. Let’s log into the API Publisher and create an API from the UI.

https://localhost:9443/publisher

You can log in with default credentials (admin: admin) and create a new API with the basic information.

Create API -> Design a new REST API

Figure: Create a new API

We are using the minimum set of information to create the API. Once you do that, it is still in the Created state. Now, let’s make some modifications to match our backend service. Click on the “Resources” tab on the left panel.

Resources ->

Delete the existing /* resource paths by clicking on the delete-all button, add a new URI pattern “/posts”, select the HTTP verbs GET, POST, PUT, and DELETE, and click on the “+” icon. Then click on the “Save” button.

Figure: Add API resources

Let’s go to the “Environments” section and select the gateway environments that the API is going to be published to.

Environments ->

You can select the gateway environment “env1”, which is node2 of our deployment, and the gateway label “qa”, which is node3. Then click on the “Save” button.

Figure: Select API gateway environments

Now we have created the API and configured the relevant parameters. Let’s go ahead and publish it to the gateway environments. You can do that by going to the “Overview” tab and clicking on the “Publish” button.

Figure: Publish the API

Once you do that, you should see that the API is deployed immediately on the gateway nodes, with a log similar to the one below on each node.

[2020-11-24 14:00:18,339] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2020-11-24 14:00:18,339+0530]
[2020-11-24 14:00:18,377] INFO - DependencyTracker Local entry : b5c4ec64-d1d3-451d-a2dc-366283bf0876 was added to the Synapse configuration successfully
[2020-11-24 14:00:18,382] INFO - DependencyTracker Endpoint : PostsAPI--vv1_APIsandboxEndpoint was added to the Synapse configuration successfully
[2020-11-24 14:00:18,383] INFO - DependencyTracker Endpoint : PostsAPI--vv1_APIproductionEndpoint was added to the Synapse configuration successfully
[2020-11-24 14:00:18,427] INFO - DependencyTracker API : admin--PostsAPI:vv1 was added to the Synapse configuration successfully
[2020-11-24 14:00:18,427] INFO - API {api:admin--PostsAPI:vv1} Initializing API: admin--PostsAPI:vv1

Let’s subscribe to the API and consume it via different gateways

If you see a similar log on all 3 gateways, we can assume that the API is deployed to all 3 gateways. Let’s log into the Developer Portal and consume the API.

https://localhost:9443/devportal

Once you log into the developer portal, you can select the API we just created and click on the “Subscribe” option. This allows you to subscribe to the API with the default application that comes with the product.

Figure: Subscribe to the API with the default application

Now you should see that the subscription is done and you can generate access tokens for that subscription. You can click on the “PROD KEYS” option in the “Subscriptions” section.

Figure: Generate production keys
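As an alternative to the UI, here is a hedged sketch of generating a token from the command line against the standard token endpoint exposed by the node1 gateway (replace the placeholders with the consumer key and secret shown for your application):

# request an access token using the client credentials grant
curl -k -d "grant_type=client_credentials" -u <consumer_key>:<consumer_secret> https://localhost:8243/token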

Once you generate the keys, you can copy the key and click on the “Try Out” tab on the left panel. Here you can paste the token that you just created and select the environment that you want to test (You can select the node2 gateway from here since we added that as an environment).

Figure: Try out the API with the token

If you select the default environment (“Production and Sandbox”), that should send the request to node1 gateway and respond with the following details.

[
  {
    "id": 1,
    "title": "json-server",
    "author": "typicode"
  }
]

You can change the environment to “env1” from the interface itself and try out the node2 gateway as well. You should see that the URL of the server is changed when you change the environment from the drop-down list.

Figure: Try out different environment from UI

Let’s try out the same request on the node3 gateway. To do this, we have to use a REST client tool like Postman, SoapUI, or cURL. You can copy the cURL command from the UI itself and change the port to redirect the request to the node3 gateway.

curl -k -X GET "https://localhost:8343/new-posts/v1/posts" -H "accept: */*" -H "Authorization: Bearer <XXXXXXXXXXX>"

Make sure you replace the <XXXXXXXXXXX> section with the token that you generated. The token is included in the copied command, so you don’t need to copy it separately. You should get the same result here as well.

[
  {
    "id": 1,
    "title": "json-server",
    "author": "typicode"
  }
]

If needed, you can access the other 2 gateways with the same cURL command by changing the port number in the URL.

Node1

curl -k -X GET "https://localhost:8243/new-posts/v1/posts" -H "accept: */*" -H "Authorization: Bearer <XXXXXXXXXXX>"

Node2

curl -k -X GET "https://localhost:8293/new-posts/v1/posts" -H "accept: */*" -H "Authorization: Bearer <XXXXXXXXXXX>"

Summary

This article discussed a possible approach to deploying WSO2 API Manager gateways in a multi-node cluster with automatic artifact synchronization, in a self-contained manner. In a production deployment, the traffic manager can be deployed as a separate, highly available component since it plays a pivotal role in the overall deployment.
