Configuring a Clustered Deployment Setup for WSO2 Enterprise Store
WSO2 Enterprise Store is capable of managing and provisioning the entire lifecycle of all enterprise assets. WSO2 ES combines a publisher, which allows users to create and manage assets, with a central multi-tenant store that increases the visibility of enterprise assets by allowing users to search for and discover them.
This article describes how to deploy a WSO2 Enterprise Store (v2.1.0) cluster with two publisher nodes and two store nodes. The deployment also uses Nginx as the load balancer, WSO2 Identity Server as the identity provider for the system, and MySQL as the database server.
Setting Up the Database
Install MySQL Server and run the following database script. Replace [ES_HOME] with the root directory path of the WSO2 Enterprise Store distribution.
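As a minimal sketch of such a script: the database names, user name, and password below are hypothetical placeholders, and the path of the social-database script should be verified against the dbscripts directory shipped with your ES distribution.

```sql
-- Hypothetical database and user names; adjust to your environment.
CREATE DATABASE es_registry_db;
CREATE DATABASE es_user_db;
CREATE DATABASE es_social_db;

-- A dedicated user for the ES nodes; the '%' host allows remote connections.
CREATE USER 'wso2user'@'%' IDENTIFIED BY 'wso2password';
GRANT ALL PRIVILEGES ON es_registry_db.* TO 'wso2user'@'%';
GRANT ALL PRIVILEGES ON es_user_db.* TO 'wso2user'@'%';
GRANT ALL PRIVILEGES ON es_social_db.* TO 'wso2user'@'%';

-- Create the tables using the scripts shipped with the distribution.
USE es_registry_db;
SOURCE [ES_HOME]/dbscripts/mysql.sql;
USE es_user_db;
SOURCE [ES_HOME]/dbscripts/mysql.sql;
-- Assumed location of the social-database script; check your distribution.
USE es_social_db;
SOURCE [ES_HOME]/dbscripts/social/mysql.sql;
```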
Once the databases and the tables are created, configure the server to allow remote connections by commenting out the following line in the my.cnf file.
#bind-address = 127.0.0.1
Make sure to restart the database server once the above changes have been applied.
Download the MySQL JDBC driver and copy the JAR to the [ES_HOME]/repository/components/lib directory on all the nodes.
Datasource and Registry Mounting Configurations
In order to share the same registry space across all the nodes of the cluster, we should configure the data sources as follows and mount the registries to the registry database created earlier. Add the following data source configurations to [ES_HOME]/repository/conf/datasources/master-datasources.xml.
The username and password should be the credentials of the MySQL database user.
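As a sketch of one such data source (the registry database), following the standard Carbon data source format; the JNDI name, host, database name, and credentials are placeholders, and a similar block is needed for the user management database:

```xml
<datasource>
    <name>WSO2_REGISTRY_DB</name>
    <description>Shared registry database</description>
    <jndiConfig>
        <name>jdbc/WSO2RegistryDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://[MYSQL_HOST]:3306/es_registry_db</url>
            <username>wso2user</username>
            <password>wso2password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```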
Also add the following configuration to [ES_HOME]/repository/conf/datasources/social-datasources.xml
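A sketch of the social data source, assuming the same placeholder host, database name, and credentials as the other data sources; the data source name and JNDI name should be checked against the defaults in your distribution:

```xml
<datasource>
    <name>WSO2_SOCIAL_DB</name>
    <description>Data source for the ES social (ratings and reviews) features</description>
    <jndiConfig>
        <name>jdbc/WSO2SocialDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://[MYSQL_HOST]:3306/es_social_db</url>
            <username>wso2user</username>
            <password>wso2password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```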
Once the data sources have been added successfully, configure the user management data source in the [ES_HOME]/repository/conf/user-mgt.xml file.
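In user-mgt.xml, the dataSource property under the realm configuration points at the JNDI name of the user management data source. A sketch, assuming the user database was registered under the hypothetical JNDI name jdbc/WSO2UserDB:

```xml
<Realm>
    <Configuration>
        <!-- Point user management at the shared MySQL user database;
             jdbc/WSO2UserDB is the JNDI name assumed for that data source. -->
        <Property name="dataSource">jdbc/WSO2UserDB</Property>
    </Configuration>
</Realm>
```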
Then add the following mounting configurations to [ES_HOME]/repository/conf/registry.xml in all the nodes to make them point to the same registry space. In this configuration, replace [INSTANCE_IP:PORT] with the IP address of the current instance and the port of the Carbon server (typically, the value of the PORT is 9443).
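A sketch of such a mounting configuration, following the standard Carbon registry mount format; the dbConfig name, instance id, JNDI name, and target paths are assumptions and should be kept consistent across all nodes:

```xml
<dbConfig name="mounted_registry">
    <dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://[INSTANCE_IP:PORT]/registry">
    <id>instanceid</id>
    <dbConfig>mounted_registry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>wso2user@jdbc:mysql://[MYSQL_HOST]:3306/es_registry_db</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/esnodes</targetPath>
</mount>

<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```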
More information about mounting and remote instances can be found here.
Configuring Clustering in Nodes
To configure clustering for the nodes, the following changes have to be made in [ES_HOME]/repository/conf/axis2/axis2.xml. These configurations should be applied to all the nodes in the cluster.
First of all, the Hazelcast clustering agent should be enabled, as the WSO2 Carbon platform inherits its clustering functionality from Apache Axis2.
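This is done by setting enable="true" on the clustering element in axis2.xml, which in Carbon-based products references the Hazelcast agent class:

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- clustering parameters described in the following steps go here -->
</clustering>
```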
Then the membership scheme should be configured, which defines how the nodes are identified and how they connect to the cluster. Two options can be used as the membership scheme: 'multicast' or 'wka' (well-known address). As information about the other nodes is available in this setup, set the membership scheme to 'wka'.
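Inside the clustering element, this is a single parameter:

```xml
<parameter name="membershipScheme">wka</parameter>
```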
The domain (group) should also be configured so that there is no interference from nodes in different domains. The domain can be any value, but it must be consistent throughout the cluster.
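For example, using a hypothetical domain name:

```xml
<parameter name="domain">wso2.es.domain</parameter>
```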
Then the IP address bound to the network interface used for cluster communication should be specified as localMemberHost, and the TCP port of the current node as localMemberPort.
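A sketch, where the IP is a placeholder and 4000 is only an example port (each node on the same host must use a distinct port):

```xml
<parameter name="localMemberHost">[INSTANCE_IP]</parameter>
<parameter name="localMemberPort">4000</parameter>
```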
As ‘wka’ is used as the membership scheme, we need to specify the host and port of the members that this member should contact first to learn about the network group. Members should be specified as follows, and at least one publisher node and one store node should be specified as well-known members in order to fully discover the network.
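A sketch of the members section, with placeholder IPs and ports that must match the localMemberHost/localMemberPort values of the corresponding nodes:

```xml
<members>
    <!-- One publisher node and one store node as well-known members. -->
    <member>
        <hostName>[PUBLISHER_NODE_IP]</hostName>
        <port>4000</port>
    </member>
    <member>
        <hostName>[STORE_NODE_IP]</hostName>
        <port>4000</port>
    </member>
</members>
```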
Then comment out the subDomain property, as this setup does not use worker/manager separation.
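That is, the property under the clustering properties parameter is left commented:

```xml
<!--<property name="subDomain" value="worker"/>-->
```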
Finally, we have to configure the host names for each node in the [ES_HOME]/repository/conf/carbon.xml file as follows.
For publisher nodes,
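A sketch, assuming the hypothetical publisher host name es.pub.wso2.com (use whatever host name you map to the load balancer):

```xml
<HostName>es.pub.wso2.com</HostName>
<MgtHostName>es.pub.wso2.com</MgtHostName>
```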
For store nodes,
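A sketch, assuming the hypothetical store host name es.store.wso2.com:

```xml
<HostName>es.store.wso2.com</HostName>
<MgtHostName>es.store.wso2.com</MgtHostName>
```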
Configuring SSO with WSO2 Identity Server
Register the following two service providers in [IS_HOME]/repository/conf/identity/sso-idp-config.xml.
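A sketch of the two service provider entries; the issuer names, host names, and assertion consumer URLs are assumptions and must match the SSO configuration of the publisher and store apps:

```xml
<SSOIdentityProviderConfig>
    <ServiceProviders>
        <ServiceProvider>
            <Issuer>publisher</Issuer>
            <AssertionConsumerService>https://es.pub.wso2.com/publisher/acs</AssertionConsumerService>
            <SignResponse>true</SignResponse>
        </ServiceProvider>
        <ServiceProvider>
            <Issuer>store</Issuer>
            <AssertionConsumerService>https://es.store.wso2.com/store/acs</AssertionConsumerService>
            <SignResponse>true</SignResponse>
        </ServiceProvider>
    </ServiceProviders>
</SSOIdentityProviderConfig>
```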
Then, for the publisher nodes, the following configuration should be added to [ES_HOME]/repository/deployment/server/jaggeryapps/publisher/config/publisher.json
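A sketch of the SSO section; the property names follow the general shape of the ssoConfiguration block in the ES Jaggery apps and should be verified against the publisher.json shipped with your distribution, and the Identity Server host is a placeholder:

```json
"ssoConfiguration": {
    "enabled": "true",
    "issuer": "publisher",
    "identityProviderURL": "https://[IS_HOST]:9443/samlsso",
    "responseSigningEnabled": "true"
}
```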
For the store nodes, the following configuration should be added to [ES_HOME]/repository/deployment/server/jaggeryapps/store/config/store.json
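The same sketch for the store app, again with property names that should be checked against the store.json in your distribution and a placeholder Identity Server host:

```json
"ssoConfiguration": {
    "enabled": "true",
    "issuer": "store",
    "identityProviderURL": "https://[IS_HOST]:9443/samlsso",
    "responseSigningEnabled": "true"
}
```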
Configuring NGINX as the load balancer for the cluster
An installation guide for NGINX Plus can be found here.
To configure NGINX Plus to direct the requests to the publisher nodes, create a VHosts file (es.pub.conf) in the /etc/nginx/conf.d directory and add the following configurations into it.
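A sketch of such a configuration, assuming hypothetical node IPs, certificate paths, and the host name es.pub.wso2.com; the sticky learn directive for JSESSIONID-based session persistence is NGINX Plus specific:

```nginx
upstream es_publisher {
    server [PUB_NODE1_IP]:9443;
    server [PUB_NODE2_IP]:9443;
    # NGINX Plus session persistence based on the JSESSIONID cookie.
    sticky learn create=$upstream_cookie_jsessionid
                 lookup=$cookie_jsessionid
                 zone=pub_sessions:1m;
}

server {
    listen 443 ssl;
    server_name es.pub.wso2.com;

    ssl_certificate     /etc/nginx/ssl/pub.crt;
    ssl_certificate_key /etc/nginx/ssl/pub.key;

    location / {
        proxy_pass https://es_publisher;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```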
Similarly, to configure NGINX Plus to direct the requests to the store nodes, create a VHosts file (es.store.conf) in the /etc/nginx/conf.d directory and add the following configurations into it.
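The equivalent sketch for the store nodes, with the same placeholder caveats and the hypothetical host name es.store.wso2.com:

```nginx
upstream es_store {
    server [STORE_NODE1_IP]:9443;
    server [STORE_NODE2_IP]:9443;
    sticky learn create=$upstream_cookie_jsessionid
                 lookup=$cookie_jsessionid
                 zone=store_sessions:1m;
}

server {
    listen 443 ssl;
    server_name es.store.wso2.com;

    ssl_certificate     /etc/nginx/ssl/store.crt;
    ssl_certificate_key /etc/nginx/ssl/store.key;

    location / {
        proxy_pass https://es_store;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```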
Add the following host entries to your DNS or to the /etc/hosts file (on Linux) on all the nodes of the cluster. The host names must be mapped to the IP address of the load balancer.
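For example, assuming the hypothetical host names es.pub.wso2.com and es.store.wso2.com and the load balancer IP as a placeholder:

```
[LOAD_BALANCER_IP] es.pub.wso2.com
[LOAD_BALANCER_IP] es.store.wso2.com
```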
The next step is to start all the ES servers. The console output will indicate that members are joining the cluster.