A step-by-step guide to WSO2 IS worker-manager clustering with load balancing.

Yasara Yasawardhana · Published in CodeX · Aug 22, 2021 · 5 min read

Multiple nodes of WSO2 products can be deployed in cluster mode to achieve greater scalability, increased resource availability, strategic resource usage, and simplified management. Using this approach in production environments improves performance by seamlessly distributing requests across nodes, achieving high cluster throughput. Simply stated, if one server in the cluster fails, the other servers can pick up its workload. A load balancer is used to distribute requests among the nodes in the cluster; in this blog, I will be using an NGINX load balancer for this purpose.

What is a worker-manager cluster setup?

A worker-manager cluster is a deployment model that consists of ‘worker’ nodes and ‘management’ nodes. Basically, a worker node serves requests received from clients, whereas a management node is responsible for management-related tasks such as deploying and configuring artifacts.

In the following sections, I will guide you through setting up a 2-node worker-manager cluster with load balancing, using MySQL databases.

1. Setting up 2 nodes of WSO2 IS

1.1 Download the latest WSO2 Identity Server from here. For details on running the Identity Server, see Running the Product. I will refer to this pack as the ‘manager’ node from here on.

1.2 Make a copy of the downloaded WSO2 IS pack, referred to as the ‘worker’ node from here onwards, as shown in the example below.
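For example, assuming the pack was extracted as wso2is-5.11.0 (the version number is illustrative), you can simply duplicate the directory:

cp -r wso2is-5.11.0 wso2is-5.11.0-worker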

1.3 Add the following configuration to the deployment.toml file of both nodes, located at <IS_HOME>/repository/conf, in order to enable clustering between them. I have used the Well-Known Address (WKA) membership scheme for clustering.

You need to specify the IP address and port used to communicate cluster messages. The port number should be unique to each node.

[clustering]
membership_scheme = "wka"
local_member_host = "<IP_Address_of_the_editing_node>"
local_member_port = "<Port_number>"
members = ["<IP_Address_of_the_editing_node>:<Port_number>", "<IP_Address_of_the_other_node>:<Port_number>"]

For example, below are my configurations.

For the manager node:

[clustering]
membership_scheme = "wka"
local_member_host = "127.0.0.1"
local_member_port = "4001"
members = ["127.0.0.1:4000", "127.0.0.1:4001"]

For the worker node:

[clustering]
membership_scheme = "wka"
local_member_host = "127.0.0.1"
local_member_port = "4000"
members = ["127.0.0.1:4000", "127.0.0.1:4001"]

You can learn more about clustering membership schemes from here, and decide on the scheme best suited to your requirements.

1.4 Change the <IS_HOME>/repository/conf/deployment.toml file to access the servers using a hostname instead of the raw IP address. This hostname should resolve to the load balancer's front-end IP address.

[server]
hostname = "is.wso2.com"

Add this hostname mapping to the /etc/hosts file too.
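For example, on a local setup where NGINX runs on the same machine as the nodes, the mapping would look like this (the IP is illustrative; point it at your load balancer's front-end IP otherwise):

127.0.0.1   is.wso2.com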

1.5 We need to set a different port offset for one IS node, since both nodes will be running on the same machine. Thus, add the following configuration to the worker node only, in its <IS_HOME>/repository/conf/deployment.toml file. With an offset of 1, the worker node's default ports shift by one (for example, HTTPS port 9443 becomes 9444).

[server]
offset = 1

1.6 Navigate to the <IS_HOME>/repository/conf/deployment.toml file and set the proxy ports as shown below. These are the load balancer's front-end HTTP and HTTPS ports.

[transport.http.properties]
proxyPort = 80
[transport.https.properties]
proxyPort = 443

2. Configuring Databases

2.1 Navigate to <IS_HOME>/repository/conf in both nodes and configure the two deployment.toml files to point to the same databases.

Following is a sample configuration.

[user_store]
type = "database"

[database.identity_db]
type = "mysql"
hostname = "localhost"
name = "regdb?verifyServerCertificate=false&useSSL=false&requireSSL=false"
username = "root"
password = "root"
port = "3306"

[database.shared_db]
type = "mysql"
hostname = "localhost"
name = "regdb?verifyServerCertificate=false&useSSL=false&requireSSL=false"
username = "root"
password = "root"
port = "3306"

Refer to this documentation for more information on working with databases in WSO2 IS.

2.2 Next, add the MySQL JDBC driver to the <IS_HOME>/repository/components/lib directory.
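For example, assuming you have downloaded Connector/J as mysql-connector-java-8.0.26.jar (the version is illustrative):

cp mysql-connector-java-8.0.26.jar <IS_HOME>/repository/components/lib/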

2.3 Finally, you need to execute the DB scripts in order to create the database schema. This can be done either automatically during server startup using the -Dsetup option, or by manually running the scripts with a client such as MySQL Workbench, DBeaver, or the mysql command line.
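For example, here is a minimal command-line sketch, assuming both identity_db and shared_db point to the single 'regdb' schema used in the sample configuration above:

# create the database and run the product's MySQL scripts against it
mysql -u root -p -e "CREATE DATABASE regdb CHARACTER SET latin1;"
mysql -u root -p regdb < <IS_HOME>/dbscripts/mysql.sql
mysql -u root -p regdb < <IS_HOME>/dbscripts/identity/mysql.sql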

3. Configuring NGINX load balancer

Now that we have finished setting up the cluster, let’s start fronting it with an NGINX load balancer.

3.1 First, you need to install NGINX on your local machine. NGINX comes in two versions, NGINX Plus and the open-source community version; I have used the community version in this demonstration.
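For example, on Debian/Ubuntu (the package manager commands vary by platform):

sudo apt-get update
sudo apt-get install nginx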

3.2 Navigate to the /etc/nginx/conf.d directory, create a VHost file, and name it is.http.conf.

Add the following configuration and save the file. With this configuration, NGINX will direct HTTP requests arriving at http://is.wso2.com/ on port 80 to the two IS nodes.

upstream wso2.is.com {
    server 127.0.0.1:9763;
    server 127.0.0.1:9764;
}

server {
    listen 80;
    server_name is.wso2.com;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass http://wso2.is.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

3.3 In the same manner, let's configure NGINX to direct HTTPS requests too, via port 443, using https://is.wso2.com/.

Create another VHost file, save it as is.https.conf in the same /etc/nginx/conf.d directory, and add the following configuration into it.

upstream ssl.wso2.is.com {
    server 127.0.0.1:9443;
    server 127.0.0.1:9444;
}

server {
    listen 443 ssl;
    server_name is.wso2.com;
    ssl_certificate /usr/local/etc/nginx/ssl/wso2.com.crt;
    ssl_certificate_key /usr/local/etc/nginx/ssl/wso2.com.key;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://ssl.wso2.is.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
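The ssl_certificate and ssl_certificate_key entries above expect a certificate and key at the given paths. For local testing, a self-signed certificate can be generated with OpenSSL; a minimal sketch:

sudo mkdir -p /usr/local/etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /usr/local/etc/nginx/ssl/wso2.com.key \
  -out /usr/local/etc/nginx/ssl/wso2.com.crt \
  -subj "/CN=is.wso2.com"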

Tip: Add the ip_hash directive to the upstream block and reload NGINX when trying to access the Management Console; this enables sticky sessions, so that each client's requests keep going to the same node.

upstream ssl.wso2.is.com {
    ip_hash;
    server 127.0.0.1:9443;
    server 127.0.0.1:9444;
}

3.4 Open a terminal and run the command below to reload the NGINX server after adding the above two configurations for HTTP and HTTPS requests.

sudo service nginx reload
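Tip: you can check the configuration files for syntax errors before reloading:

sudo nginx -t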

To read more about NGINX's load-balancing capabilities and further configuration options, visit here.

Now that we have finished setting up the necessary configurations, let’s start the Manager and Worker node servers.

  • Navigate to the <IS_HOME>/bin folder of the Manager node, open a terminal, and run the following command.
sh wso2server.sh

Once the server starts, you’ll see a log in the terminal as below.

INFO {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} - Elected this member [xxxxx-xxxxx-xxxxx-xxxxx] as the Coordinator node
  • Similarly, navigate to the <IS_HOME>/bin folder of the Worker node, and run the same command in a terminal.
sh wso2server.sh

After starting the Worker node, a log similar to the following will be printed on the Manager node (which is acting as the Hazelcast coordinator), indicating that the Worker node has successfully joined the cluster.

INFO {org.wso2.carbon.core.clustering.hazelcast.wka.WKABasedMembershipScheme} - Member joined [xxxxx-xxxxx-xxxxx-xxxxx]: /127.0.0.1:4000
  • Also, try shutting down the Manager node; you'll see the remaining Worker node elected as the new coordinator of the cluster, as shown below.
INFO {org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent} - Elected this member [xxxxx-xxxxx-xxxxx-xxxxx] as the Coordinator node
INFO {org.wso2.carbon.core.clustering.hazelcast.wka.WKABasedMembershipScheme} - Member left [xxxxx-xxxxx-xxxxx-xxxxx]: /127.0.0.1:4001

If you observe the above behaviors, you have successfully set up a worker-manager cluster. Congratulations :)

I hope you now have a more comprehensive idea of how reliability and availability concerns can be addressed by adopting a worker-manager clustering mode, instead of relying entirely on a single instance of a product.

For further information, do visit WSO2 Identity Server Documentation.

Thank you for reading!
