A quick start guide to deployment of IBM Integration Bus on IBM Cloud Private using Helm charts

Amar Shah
Published in IBM Cloud · 8 min read · Mar 8, 2018

In this article we will discuss the minimum configuration required to set up an IBM Cloud Private Community Edition (CE) cluster for the purpose of deploying IBM Integration Bus using Helm charts from the ICP catalog. We split our discussion into two parts:

  1. Setting up the IBM Cloud Private cluster
  2. Deployment of IBM Integration Bus

Part 1 : Setting up the IBM Cloud Private cluster on Ubuntu

An IBM® Cloud Private cluster has four main classes of nodes:

  • boot
  • master
  • worker
  • proxy

You determine the architecture of your IBM Cloud Private cluster before you install it. After installation, you can add or remove only worker nodes from your cluster.

Prepare each node for installation

For the demonstration in this article we use three Ubuntu Linux VMs: one acts as the master/proxy node and the other two act as worker nodes.

  • Configure the /etc/hosts file on each node in your cluster.
  1. Add the IP addresses and host names for all nodes to the /etc/hosts file on each node.
  • Important: Ensure that the host name is listed by the IP address for the local host. You cannot list the host name by the loopback address, 127.0.0.1.
  • Host names in the /etc/hosts file cannot contain uppercase letters.
  • If your cluster contains a single node, you must list its IP address and host name.

  2. Comment out the line of the file that begins with 127.0.1.1. For our cluster, with a combined master/proxy node and two worker nodes, the /etc/hosts file resembles the example shown below.
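A minimal sketch of such a file follows; the IP addresses and host names are placeholders that you replace with the values for your own VMs:

127.0.0.1                localhost
# 127.0.1.1              <commented_out_original_entry>
<master_proxy_node_IP>   master01
<worker_node_1_IP>       worker01
<worker_node_2_IP>       worker02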

  • On each node in your cluster, confirm that a supported version of Python is installed. Python versions 2.6 to 2.9.x are supported. To install Python 2.7 on Ubuntu, run this command:

sudo apt install python

To check the Python version: $ python --version

  • Ensure that an SSH client is installed on each node.
  • On master nodes, ensure that the vm.max_map_count setting is at least 262144.

Determine the value of the vm.max_map_count parameter:

sudo sysctl vm.max_map_count

If the vm.max_map_count value is not at least 262144, run the following command:

sudo sysctl -w vm.max_map_count=262144

To ensure that this value is maintained across sessions and system restarts, add this setting to the /etc/sysctl.conf file. As a root user, run the following command:

echo "vm.max_map_count=262144" | tee -a /etc/sysctl.conf

  • Determine your authentication method between cluster nodes. You configure the authentication during the IBM® Cloud Private installation. We use SSH keys for secure connections between the cluster nodes.
  • Secure Shell (SSH) keys are used to allow secure connections between hosts in an IBM® Cloud Private cluster. Before you install an IBM® Cloud Private cluster, you configure authentication between the cluster nodes: you can generate an SSH key pair on your boot node and share that key with the other cluster nodes. To share the key with the cluster nodes, you must have access to an account with root access on each node in your cluster.
  • Follow the procedure given at: Sharing SSH keys among cluster nodes. Copy the public key to each node and restart sshd (a key-generation example follows these commands):
  • ssh-copy-id -f -i ~/.ssh/id_rsa.pub root@node1_IP
    ssh-copy-id -f -i ~/.ssh/id_rsa.pub root@node2_IP
    ssh-copy-id -f -i ~/.ssh/id_rsa.pub root@node3_IP
    sudo systemctl restart sshd
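The ssh-copy-id commands above assume that an RSA key pair already exists on the boot node. If you have not generated one yet, you can create it first, for example with an empty passphrase:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""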
  • Install python and pip

$ apt-get install -y python-setuptools
$ easy_install pip

  • Install the ntp service

On the master node, execute: $ apt-get install -y ntp
To check that ntp is working, execute: ntpq -p

Installing Docker

  • Update ubuntu repositories

apt-get update

  • Install Linux image extra packages

apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual

  • Install additional needed packages

apt-get install -y apt-transport-https ca-certificates curl software-properties-common

  • Add Docker’s official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

  • Verify that the key fingerprint is 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88

apt-key fingerprint 0EBFCD88

  • Set up the Docker stable repository and update the local cache

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

apt-get update

  • Install Docker

apt-get install -y docker-ce

  • Test Docker to make sure it is working

docker run hello-world

  • On all nodes, ensure that the Docker engine is started. Run the following command:

sudo systemctl start docker
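Optionally, so that the Docker engine also comes up automatically after a reboot, you can enable the service on each node:

sudo systemctl enable docker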

Installing ICP

  • Log in to the boot node as a user with root permissions. The boot node is usually your master node. During installation, you specify the IP addresses for each node type.
  • Download the IBM Cloud Private-CE installer image. For Linux® 64-bit, run this command:

sudo docker pull ibmcom/icp-inception:2.1.0.1
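You can confirm that the installer image downloaded successfully by listing the local Docker images:

sudo docker images ibmcom/icp-inception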

  • Create an installation directory to store the IBM Cloud Private configuration files in and change to that directory.

For example, to store the configuration files in /opt/ibm-cloud-private-ce-2.1.0.1, run the following commands:

mkdir /opt/ibm-cloud-private-ce-2.1.0.1
cd /opt/ibm-cloud-private-ce-2.1.0.1

  • Extract the configuration files. For Linux® 64-bit, run this command:

sudo docker run -e LICENSE=accept \
-v /opt/ibm-cloud-private-ce-2.1.0.1:/data ibmcom/icp-inception:2.1.0.1 cp -r cluster /data

  • A cluster directory is created inside your installation directory.

For example : /opt/ibm-cloud-private-ce-2.1.0.1/cluster

  • Add the IP address of each node in the cluster to the /installation_directory/cluster/hosts file (installation_directory is the directory path where you chose to store the ICP configuration files, /opt/ibm-cloud-private-ce-2.1.0.1 in our example). The file should have the following structure:
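As a sketch for our three-VM cluster, with placeholder values standing in for the real IP addresses (the combined master/proxy VM appears under both the [master] and [proxy] sections), the hosts file looks something like this:

[master]
<master_proxy_node_IP>

[worker]
<worker_node_1_IP>
<worker_node_2_IP>

[proxy]
<master_proxy_node_IP>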

If you use SSH keys to secure your cluster, in the /installation_directory/cluster folder, replace the ssh_key file with the private key file that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes.

Run this command:

sudo cp ~/.ssh/id_rsa ./cluster/ssh_key

  • Change to the cluster folder in your installation directory.

cd /installation_directory/cluster

  • Deploy the environment. For Linux® 64-bit, run this command:

sudo docker run -e LICENSE=accept --net=host \
-t -v /opt/ibm-cloud-private-ce-2.1.0.1/cluster:/installer/cluster \
ibmcom/icp-inception:2.1.0.1 install

  • Verify the status of your installation. If the installation succeeded, the access information for your cluster is displayed:

https://master_ip:8443

The default username/password is admin/admin, where master_ip is the IP address of the master node for your IBM Cloud Private-CE cluster.

The ICP dashboard displays the health of the nodes that are part of the cluster.

In the navigation menu, Platform -> Nodes shows all the Linux nodes that are part of the ICP cluster and their roles.
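If you have also installed the kubectl CLI and configured it with the cluster credentials from the ICP console, you can list the same nodes and their roles from the command line:

kubectl get nodes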

Part 2 : Deploying IBM Integration Bus using Helm Charts

Configure Properties for IIB deployment

From the navigation menu, click Catalog -> Helm Charts.

Select ibm-integration-bus-dev

Enter the parameters as illustrated below:

Click Finish after completing the details as shown above.

From the navigation menu, click Workloads -> Deployments.

A deployment for the IIB instance is created with the name that you specified. Initially the Ready and Available columns appear blank, but as the container deployment completes, the status changes to Ready = 1 and Available = 1.

Because we specified ‘replicas=1’ in the configuration wizard when creating the IIB container, the IIB container is deployed to only one of the worker nodes.

The following screen capture shows the status and other metadata about the deployed IIB container image.

If the container state is not ‘Running’, you can inspect the errors in the ‘Events’ and ‘Logs’ tabs as shown above.

When you have confirmed that the IIB container is running, you can access the IIB node’s web user interface (WebUI) by obtaining its port number as shown below.

Navigate to Network Access -> Services

From the list of services, select the IIB container service. The IIB WebUI and other listener service port numbers can be seen as shown below:

The Node port section shows the mapping of the port numbers for the respective listeners.

The WebUI port for the integration node is 4414, but it is mapped to node port 32740. So if you want to access the WebUI from a browser, you specify the URL as http://9.124.112.29:32740/. Similarly, if you have deployed an HTTP-based message flow to the integration server, the integration server’s HTTP listener port 7800 is mapped to node port 30242, as shown above.

Note: The actual port numbers will vary in your environment. The numbers above are for illustration purposes only.
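If kubectl is configured for the cluster, the same node port mappings can also be read from the service definition; the service name below is a placeholder for the name of your IIB release’s service:

kubectl get service <iib_service_name>
kubectl describe service <iib_service_name>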

Scaling the deployment to additional worker nodes

To scale your integration instances to other worker nodes, go to the navigation menu

Workloads -> Deployments.

Select the deployment that you want to scale up.

Specify the number of instances that you want the IIB deployment to be scaled to.

When the container is deployed to the other worker nodes, the deployment status changes to Ready = 2, Available = 2.

The following screen capture shows that the pod is now running on the second worker node.
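The same scaling operation can also be performed from the command line with kubectl; the deployment name below is a placeholder for the name shown under Workloads -> Deployments:

kubectl scale deployment <iib_deployment_name> --replicas=2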

Verifying your container is running correctly

Whether you are using the image as provided or you have customised it, here are a few basic steps that will give you confidence that your image has been created and deployed properly.

Running administration commands

You can run any of the Integration Bus commands by attaching a bash session to your container and executing commands as you normally would:

docker exec -it <container_id> /bin/bash

You can get the container id using the docker ps command.

At this point you will be in a shell inside the container and can source mqsiprofile and run your commands.
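For example, once inside the container shell you might run the following; the installation path is a placeholder because it depends on the IIB version baked into the image:

source <iib_install_dir>/server/bin/mqsiprofile
mqsilist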

This image also configures syslog, so when you run a container, your integration node writes messages to /var/log/syslog inside the container. You can access this log by attaching a bash session as described above or by using docker exec. For example:
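A minimal illustration, with the container ID as a placeholder:

docker exec <container_id> tail -n 50 /var/log/syslog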

At this point, your container is running and you can deploy integration solutions to it using any of the supported methods.

Originally published at developer.ibm.com on March 8, 2018.
