IBM Master Data Management on OpenShift — Deployed MDM — Part I

Chitra Ananthanarayanan
5 min read · Sep 10, 2023


We can deploy containerized MDM on the OpenShift Container Platform. Deploying MDM on OpenShift gives a number of advantages, including:

  • Reduced cost and skills required to deploy MDM
  • Faster deployment and upgrades
  • Auto-healing and auto-scaling
  • Security

In this article, let us focus on how we can deploy the MDM Sample on OpenShift.

Note: Deployed MDM is a sample and cannot be used in production.

Requirements

We can deploy MDM v12.0.0.1 on OpenShift using Ansible. Using the commands below, install Ansible on the Linux box from which you will configure OpenShift:

yum install python38
pip3 install openshift
pip3 install ansible
ansible-galaxy collection install community.kubernetes
ansible-galaxy collection install operator_sdk.util
pip3 install Jinja2
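
To confirm that the tooling is in place before proceeding, you can optionally run a few standard checks (these verify the packages and collections installed above):

ansible --version
ansible-galaxy collection list | grep -E "community.kubernetes|operator_sdk.util"
pip3 show openshift Jinja2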

Note: In this sample, OpenShift v4.10.59 is used.

After the above commands are executed, we can start invoking the ansible-playbook utility, which is used to deploy MDM.

Note: YAML files to install IBM MDM on OpenShift are also provided.

Download MDM OpenShift Ansible files

Download the MDM OpenShift Ansible files from Passport Advantage.

The compressed file that gets downloaded contains sample playbooks, roles, a folder holding the SSL certificates for the MDM images, and a config.yaml file that holds the variables.

Deploy MDM with DB2 on OpenShift

In this sample, let us deploy MDM along with DB2, MQ and Clientapps on OpenShift. Here, MDM is configured to work with DB2 and MQ.

Note: Deployed MDM is a sample meant to demonstrate how MDM can work on OpenShift.

Create Project

Let us first create a project in OpenShift that will hold the MDM resources. Create a new project (namespace) com-ibm-mdm using the command below.

oc new-project com-ibm-mdm --display-name="Master Data Management"

Note: Ensure that you provide two consecutive hyphens and plain (straight) double quotes; some editors convert them when copying.

The oc client will now use the project com-ibm-mdm. You can confirm this with the command below:

oc project
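
If your oc context changes later (for example after logging in again), you can switch back to the project explicitly:

oc project com-ibm-mdm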

View config.yaml and prov_mdm_db2.yaml

When you extract the compressed Ansible file, you get the MDM_12.0.0.1_Ansible folder. Change into the mdm-ansible folder inside it.
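
For example (the archive name depends on what Passport Advantage delivers, so adjust it to match your download):

tar -xvf <downloaded-archive>
cd MDM_12.0.0.1_Ansible/mdm-ansible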

Open config.yaml. This file has sections corresponding to common, mdmdb, mdm, mdmmq, clientapps, mdmui, mdmml, bpmdb and mdmisc. Note that these are the sub-folders of the roles folder.

In the common section, ensure that the namespace is the same as the project you created (com-ibm-mdm) and that the value of status is present. Since we are using the container images of deployed MDM from IBM, we need not edit the values for docker_user, docker_password and email_id.
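
As an illustration, the common section might look roughly like the sketch below. The exact keys and structure depend on the file shipped in your download, so treat this only as a guide to the attributes discussed above:

common:
  namespace: com-ibm-mdm      # must match the OpenShift project created earlier
  status: present             # the status attribute carrying the value "present"
  docker_user: ""             # can be left untouched when using the IBM sample images
  docker_password: ""
  email_id: ""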

Let us go with the default values for all attributes in the other sections.

Now let us take a quick look at the file prov_mdm_db2.yaml. In this file we can see that common, mdmdb, mdmmq, mdm and clientapps will be included.
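
Conceptually, the playbook pulls these pieces together along the lines of the outline below (illustrative only; the shipped playbook may structure the includes differently):

# prov_mdm_db2.yaml (outline)
- hosts: localhost
  roles:
    - common
    - mdmdb
    - mdmmq
    - mdm
    - clientapps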

Execute ansible-playbook

Now execute the command below from the same folder:

ansible-playbook -e "@config.yaml" prov_mdm_db2.yaml

Here we are passing the playbook prov_mdm_db2.yaml and specifying that the variable values should be taken from the config.yaml file.

From the output, we can see that the ansible-playbook utility has created all the resources defined in the prov_mdm_db2.yaml file in a specific order.
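
If you want more detail while the playbook runs, the standard ansible-playbook options apply, for example:

ansible-playbook -e "@config.yaml" prov_mdm_db2.yaml --list-tasks
ansible-playbook -e "@config.yaml" prov_mdm_db2.yaml -vv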

List all resources in the com-ibm-mdm namespace

Once the command completes execution, execute the statement

oc get all

to find out whether the OpenShift resources corresponding to DB2, MQ, MDM and Clientapps are available.


The Pods, ReplicationControllers, Services, HorizontalPodAutoscalers, DeploymentConfigs and Routes in the OpenShift namespace com-ibm-mdm are displayed. These resources were created by the ansible-playbook command that we executed.
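
If the full listing is too noisy, you can also list individual resource types using the standard oc short names:

oc get pods -n com-ibm-mdm
oc get dc,svc,routes,hpa -n com-ibm-mdm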

Examining Pods

DB2

The database is available in the DeploymentConfig db2host. Inside db2host, there is a ReplicationController named db2host-1, which maintains a Pod, db2host-1-mqqcv.

Note that the DB2 container is a sample and is used only for this demonstration.

Let us check the content in the APPSOFTWARE table present in the MDMDB database.

Logging in to the db2host

Note: The best approach to work inside a Pod is to use oc rsh pod/<pod_name>

Inside the container, we switch from the root user to the db2inst1 user, connect to the MDMDB database and look at the value in the APPSOFTWARE table.
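
A sketch of these steps, assuming the pod name shown above and that the table is visible from the db2inst1 session (add the MDM schema qualifier if your database requires it):

oc rsh pod/db2host-1-mqqcv
su - db2inst1
db2 connect to MDMDB
db2 "select * from APPSOFTWARE"
db2 connect reset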

MDM

Now let us check the logs of an MDM Pod.
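
Replace the pod name below with the MDM Pod name reported by oc get pods; both forms are standard oc usage:

oc logs <mdm-pod-name>
oc logs -f <mdm-pod-name>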

Logs from MDM Pod

Note the values of the uid and the group. In OpenShift, the container is run by a user with an arbitrarily assigned ID. In our startup script we have provided a name (ws9admin) and a group (root) for this user. We then execute the startServer command. The startup script checks whether any custom CBAs have been added, and then displays the content of SystemOut.log.

Now let us get into an MDM Pod and check the contents of the MDM container.

We notice that MDM is installed on WAS Base at /opt/IBM/MDM and that the WAS Profile Home is /opt/IBM/WebSphere/AppSrv01. Explore the sub-folders in the profile to figure out where the deployed applications are placed.

Note that this container has the DB2 JDBC jars, db2jcc4.jar and db2jcc_license_cu.jar, at /opt/IBM/db2/java.
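
For example, you can browse these locations from inside the Pod (paths as mentioned above):

oc rsh <mdm-pod-name>
ls /opt/IBM/MDM
ls /opt/IBM/db2/java
ls /opt/IBM/WebSphere/AppSrv01

Under the profile home, installedApps is the usual place where WAS keeps deployed applications, though the layout of this image may differ.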

Try executing the IVT by switching to the /opt/IBM/MDM/IVT directory and running the command below:

./verify.sh db2inst1 db2inst1 mdmadmin mdmadmin true /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/config/cells/WASHOSTCell01/nodes/WASHOSTNode01/trust.p12 MDMWebAS

Then inspect the response files in /opt/IBM/MDM/IVT/testCases/xml/response and /opt/IBM/MDM/IVT/testCases/xml_virtual/response folders.
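
For a quick look, you can list the response folders and scan them for obvious failures (an informal check only; the exact file contents are whatever the IVT produces):

ls /opt/IBM/MDM/IVT/testCases/xml/response
ls /opt/IBM/MDM/IVT/testCases/xml_virtual/response
grep -ril error /opt/IBM/MDM/IVT/testCases/xml/response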

We will look at the clientapps and mq DeploymentConfigs in the next article.
