Deep Dive on IBM ACE on K8s

Victor Paulo
Sep 1, 2019 · 9 min read


Deep dive on Integration with IBM App Connect Enterprise on Containers
IBM App Connect Enterprise

IBM App Connect Enterprise v11 (ACE) is the successor to IBM Integration Bus v10 (IIB), which was released back in 2015. IIB v10 was a stable evolution of the product line, steadily adding new features over previous versions.

The goal of this post is to cover IBM ACE running on top of the container orchestrator Kubernetes (aka k8s). I'll assume you have some knowledge of Helm, Docker and Kubernetes.

You can build a container image containing one of the following combinations/flavours:

  1. IBM App Connect Enterprise
  2. IBM App Connect Enterprise with IBM MQ Advanced
  3. IBM App Connect Enterprise for Developers with IBM MQ Advanced for Developers
  4. IBM App Connect Enterprise for Developers

Getting started

I am going to use Helm as the starting point. Helm is a package manager for Kubernetes; you can find more information at helm.sh.

First, let's add the Helm repository for IBM products.

  • Searching for the IBM ACE
$ helm repo add ibm 'https://raw.githubusercontent.com/IBM/charts/master/repo/stable'
$ helm search ace
helm charts for IBM ACE
  • Installing the IBM ACE using Helm charts
$ helm install ibm/ibm-ace-server-dev --name ace-dev --set license=accept --set persistence.enabled=true

The following picture shows the output of the helm install command:

The output from helm install command
  • Checking the status of our pods
Checking the status of the pods

By default, the helm chart for ACE creates a deployment with 3 replicas. You can change this behaviour by providing the parameter below as part of the helm install command.

# defining the number of replicas
--set aceonly.replicaCount=1
# To inspect all possible parameters, run the following command
$ helm inspect ibm/ibm-ace-server-dev

If it's taking a long time to start the pods, you can inspect the status by issuing the command below:

$ kubectl describe pod ace-dev-ibm-ace-server-dev-5c7478dd5c-xqwnx
Error creating the pod

In this case, the problem was due to a lack of CPU. You can scale down (from 3 to 1 replica) or try to run in an environment with more resources 😤. The default resource limits and requests for ACE are 1 CPU and 1GB of memory (2GB of memory for the ACE+MQ flavour). You can also start your pods with fewer resources, as shown below:

$ helm install ibm/ibm-ace-server-dev --name ace-dev \
--set license=accept \
--set persistence.enabled=true \
--set aceonly.replicaCount=1 \
--set aceonly.resources.limits.cpu=0.5 \
--set aceonly.resources.limits.memory=256Mi \
--set aceonly.resources.requests.cpu=0.5 \
--set aceonly.resources.requests.memory=256Mi

At this point, IBM ACE is up and running.

As a side note, there is another way to update values on a Helm release that were set previously. These values are passed to the helm chart and then applied to the Kubernetes resources (Deployment, StatefulSet, DaemonSet, etc.).

$ helm get values ace-dev > values.yaml
$ helm upgrade ace-dev -f values.yaml

Customising an IBM ACE container image

  • Installing the IBM ACE toolkit

First, let's download the IBM ACE Toolkit to create a sample application to be packaged in our container.

https://developer.ibm.com/integration/docs/app-connect-enterprise/get-started/

  • Creating a simple message flow

The goal is to get a .bar file, which is the deployable unit of an ACE application, to demonstrate the creation of the custom ACE docker image.

App Connect Enterprise — Message flow

If you don't want to install the IBM ACE Toolkit, you can get the sample .bar files from Open Technologies for Integration (ot4i), here.

Git repository setup

We are going to use a Git repository as our source of truth to store and version all the artefacts created in this demo.

  • Creating the Git repository on Bitbucket
Bitbucket repository
  • Cloning the repository
$ git clone git@bitbucket.org:bobdylan/ibm-ace-lab.git
  • Creating a dev branch
$ cd ibm-ace-lab
$ git checkout -b dev
  • Create a Dockerfile with the content below
FROM ibmcom/ace:11.0.0.5-amd64

ENV BAR1=API.bar
ENV OVERRIDE_FILE=override.properties

# Copy the override properties file to the ace-server overrides directory
COPY --chown=aceuser $OVERRIDE_FILE /home/aceuser/ace-server/overrides

# Copy in the bar file to a temp directory
COPY --chown=aceuser $BAR1 /tmp

# Unzip the BAR file; need to use bash to make the profile work
RUN bash -c 'mqsibar -w /home/aceuser/ace-server -a /tmp/$BAR1 -c'

# Switch off the admin REST API for the server run, as we won't be deploying anything after start
RUN sed -i 's/adminRestApiPort/#adminRestApiPort/g' /home/aceuser/ace-server/server.conf.yaml
Project structure
  • Building the docker image
# docker build -t <docker_registry>/<img_name>:<version>
$ docker build -t localhost:5000/ace-dev:1.0 .
# I've configured the docker registry locally (localhost:5000)
  • IBM ACE container image size

If you get the ACE image from Docker Hub, the size is around 1 to 1.5GB, but there are also smaller (experimental) images, as shown below.

$ docker images | grep ace
REPOSITORY   TAG              IMAGE ID       CREATED        SIZE
ibmcom/ace   11.0.0.5-amd64   1f7537a53e99   2 months ago   1.49GB
# List of experimental images and their sizes:
https://github.com/ot4i/ace-docker/blob/master/experimental/sizes.txt
  • Configuring the container registry locally

Run the command below to create a local docker registry to push your images to. The registry is needed so that k8s can pull the images from it.

$ docker run -d -p 5000:5000 \
--restart=always \
--name registry \
--env REGISTRY_STORAGE_DELETE_ENABLED="true" \
-v /vagrant/registry:/var/lib/registry registry:2
# Login to create an entry in .docker/config.json
$ docker login localhost:5000

To push a container image into the repository use the following command:

$ docker push localhost:5000/ace-dev:1.0
# Querying the private registry
$ curl -X GET http://localhost:5000/v2/_catalog

If you get the error outlined below, do the following:

Error: Private registry push fail: server gave HTTP response to HTTPS client

$ echo "{'insecure-registries':['localhost:5000']}" >> /etc/docker/daemon.json# Restart docker daemon
$ service restart docker
Makefile for building, pushing and deleting images
# make <target>
$ make build && make push
Docker push ACE container to local registry

Due to the size of the ACE image, the push to a local registry takes some time to finish.

# If you want to remove the large image from the local registry 👇
$ make del-docker-registry
  • Configuring Kubernetes to use a local registry

If you are using minikube or a single-node Kubernetes cluster just for test purposes, you can use a local registry. If your registry is remote, just change localhost to the DNS name of the registry.

# Create a secret
$ kubectl create secret docker-registry local-registry --docker-server=localhost --docker-username=admin --docker-password=admin --docker-email=bob.dylan@gmail.com
# Inspecting the password
$ kubectl get secret local-registry -o 'go-template={{index .data ".dockerconfigjson"}}' | base64 --decode
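Kubernetes authenticates to the registry through the imagePullSecrets field of the pod spec, which the chart can wire up for you. A minimal sketch of how the secret ends up referenced (the container name and image are just this demo's values):

# Sketch: pod spec referencing the registry secret created above
apiVersion: v1
kind: Pod
metadata:
  name: ace-pull-test
spec:
  containers:
    - name: ace
      image: localhost:5000/ace-dev:1.0
  imagePullSecrets:
    - name: local-registry   # the docker-registry secret created above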

At this point we have Helm, a customised ACE docker image containing our application, and a local docker registry (with the image uploaded) that Kubernetes can pull from.

How to configure a helm chart for your customised ACE docker image?

Let's export the ACE chart provided by IBM and then customise it to point to our new image.

$ helm fetch ibm/ibm-ace-server-dev --untar
# Edit the values.yaml file
$ cd ibm-ace-server-dev
$ vim values.yaml

Edit the license, image name and tag properties as shown below.

Values.yaml
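Since the screenshot isn't reproduced here, the relevant entries look roughly like the sketch below. The field names follow the ibm-ace-server-dev chart's values.yaml; treat them as an assumption and verify against your fetched copy.

# values.yaml (sketch — verify field names against the fetched chart)
license: accept
image:
  repository:
    aceonly: registry.local.tld:5000/ace-dev
  tag: "1.0"
  pullPolicy: IfNotPresent
  pullSecret: local-registry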

I created an entry in /etc/hosts to map my host IP to registry.local.tld, but you can use localhost if you have everything set up on one box.

  • Deploying the helm chart
$ helm install --name my-ace-dev . --version v1.0

At this moment you have the ACE server plus application running on k8s.

http://<nodePort_IP>:7600
ACE console
  • It's time to commit the changes and push the artifacts we created to the Git repository.
$ cd ibm-ace-lab 
$ git add .
$ git commit -a -m "first commit for my ACE image"
$ git push origin dev
Git repository after commit

Scaling your ACE infrastructure

  • Manually scaling your infrastructure

You can increase the number of replicas of your k8s deployment to support your load.

# kubectl scale deployment <deployment name> --replicas=3
$ kubectl scale deployment my-ace-dev --replicas=3
Manual scaling
  • Automatically scaling with HPA

The Horizontal Pod Autoscaler (HPA) automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization.

The YAML definition below configures an HPA for ACE with a maximum of 10 replicas, scaling when CPU utilisation reaches 80%.

HPA definition for ACE
Kubernetes HPA configuration
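The screenshot isn't reproduced here, so this is a minimal sketch of an equivalent HPA manifest; the HPA and deployment names are assumptions based on this demo's release.

# hpa.yaml (sketch — names are assumptions)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-ace-dev-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-ace-dev
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

Note that the HPA relies on CPU requests being set on the pods (we set them during helm install) and on a metrics source such as metrics-server being available in the cluster.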

Canary deployment

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.

Canary Deployment
  • Configuring Canary Deployment
  1. Create a new version of the application and generate a new API.bar
  2. Change the version in the Makefile from v1.0 to v2.0, build and push the new image to the registry
$ make build && make push

3. Change the ACE chart values.yaml tag from v1.0 to v2.0

4. Deploy the new helm release

$ helm install --name my-ace-dev-v2 . --version v2.0
ACE versions v1 & v2

5. Deploy Nginx ingress on k8s

Note: I decided to use Nginx ingress, but you can use Istio ingress which is based on Envoy proxy.

$ helm install stable/nginx-ingress --namespace=default --name=nginx-ingress

6. Create ingress rules for v1 and v2 with the traffic weights, as sketched below.

Nginx Ingress rules v1 and v2
$ kubectl create -f ingress-v1.yaml
# Create a canary ingress in order to split traffic: 90% to v1, 10% to v2
$ kubectl create -f ingress-v2-canary.yaml
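The rules themselves appear as an image in the original; below is a sketch of what ingress-v2-canary.yaml could look like using the NGINX canary annotations. The backend service name and port are assumptions based on the chart's naming and ACE's default HTTP port (7800).

# ingress-v2-canary.yaml (sketch — serviceName/servicePort are assumptions)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ace-dev-v2-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send 10% of traffic to v2
spec:
  rules:
    - host: my-ace-app.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-ace-dev-v2-ibm-ace-server-dev
              servicePort: 7800

ingress-v1.yaml has the same shape, pointing at the v1 service with no canary annotations.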

7. Testing the canary deployment

Getting the ingress address

Ingress entry point
$ nginx_ingress_service=10.106.176.201:31848
$ while true; do curl "$nginx_ingress_service" -H "Host: my-ace-app.com"; sleep 1; done
# Now you should see that the traffic is being split

# When you are happy, delete the canary ingress
$ kubectl delete ingress my-ace-dev-v2-canary
# Then finish the rollout, set 100% traffic to version 2
$ kubectl apply -f ./ingress-v2.yaml
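ingress-v2.yaml is again the same rule with the canary annotations removed and the backend pointed at the v2 service, applied over the main ingress so that v2 takes 100% of the traffic (names remain assumptions):

# ingress-v2.yaml (sketch)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ace-dev   # same name as the v1 ingress, so apply replaces its backend
spec:
  rules:
    - host: my-ace-app.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-ace-dev-v2-ibm-ace-server-dev
              servicePort: 7800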

Observability

Observability comprises distributed tracing, monitoring/alerting, and log aggregation and analytics.

  • Configure Prometheus and Grafana
$ helm install --name prometheus stable/prometheus
$ helm install --name grafana stable/grafana

After installing both of the above, Prometheus will automatically scrape the IBM ACE endpoint and start collecting data. For Grafana, you have to configure a datasource pointing to your Prometheus server, as sketched below.

Prometheus ACE targets
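If you prefer provisioning the datasource over clicking through the Grafana UI, a sketch of a Grafana provisioning file follows; the URL assumes the stable/prometheus chart's default service name in the default namespace.

# prometheus-datasource.yaml (sketch — Grafana datasource provisioning)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-server.default.svc.cluster.local
    isDefault: true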
  • IBM ACE Prometheus metrics

IBM ACE metrics are enabled by default when installing with Helm.

IBM ACE Prometheus Metrics
  • Grafana dashboard
Prometheus/Grafana — Dashboard Monitoring ACE

This is the end of the post. I hope you have enjoyed it; please let me know your opinion so that we can learn together.

References:

https://hub.helm.sh/charts/ibm-charts/ibm-ace-dashboard-dev

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

https://github.com/ContainerSolutions/k8s-deployment-strategies

https://github.com/ot4i/ace-docker

https://martinfowler.com/bliki/CanaryRelease.html

