Containerizing WSO2 Middleware

Imesh Gunaratne
Sep 20, 2016 · 7 min read
Figure 1: A resource usage comparison between a VM and a container

The Containerization Process

Deploying WSO2 middleware in a container is quite straightforward; it is a matter of following a few steps:

  1. Create a new container image from a base OS image by adding the Oracle JDK, the product distribution, and the required configurations.
  2. Export the JAVA_HOME environment variable.
  3. Make the entry point invoke the bash startup script found in the $CARBON_HOME/bin folder.
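The steps above can be sketched as a Dockerfile. The base image, product name, and file paths below are assumptions for illustration only; WSO2's official Dockerfiles [11] differ in detail:

```dockerfile
# Assumed base OS image and file names -- adjust for your environment
FROM ubuntu:16.04

# Step 1: add the JDK, the product distribution and the required configurations
COPY jdk1.8.0/ /opt/java/
COPY wso2am-2.0.0/ /opt/wso2am-2.0.0/

# Step 2: export JAVA_HOME
ENV JAVA_HOME /opt/java
ENV CARBON_HOME /opt/wso2am-2.0.0

# Step 3: make the entry point invoke the startup script in $CARBON_HOME/bin
ENTRYPOINT ["/opt/wso2am-2.0.0/bin/wso2server.sh"]
```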

Configuration Management

Ideally, software systems that run in containers should have two types of configuration parameters, according to the twelve-factor app [6] methodology:

  1. Environment specific configurations

What if a CM is run inside the container, as is done with VMs?

Yes, technically that would work, and most people familiar with VMs would tend to do this. However, in my opinion containers are designed to work slightly differently from VMs. For example, compare the time it takes to apply a new configuration or software update by running a CM inside a container versus starting a new container with the new configuration or update already baked in: the latter is extremely fast. Configuring a WSO2 product with a CM takes around 20 to 30 seconds, whereas bringing up a new container takes only a few milliseconds. The server startup/restart time is the same in both approaches. Since the container image build process is efficient and fast thanks to layered images, this works well in most scenarios. Therefore the total configuration and software update propagation time is much shorter with the second approach.
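To make the second approach concrete, here is a rough sketch using the Docker CLI; the image and container names are illustrative assumptions:

```shell
# Build a new image layer containing the updated configuration
docker build -t wso2am:2.0.0-v2 .

# Replace the running container with one based on the new image;
# starting the new container itself takes only milliseconds
docker stop wso2am && docker rm wso2am
docker run -d --name wso2am wso2am:2.0.0-v2
```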

Choosing a Configuration Manager

There are many different configuration management systems available for applying configurations at container build time, to name a few: Ansible [7], Puppet [8], Chef [9] and Salt [10]. WSO2 has been using Puppet for many years and currently uses it for containers as well. We have simplified the way we use Puppet by incorporating Hiera to separate configuration data from manifests. Most importantly, the container image build process does not use a Puppet master; instead it runs Puppet in masterless mode (puppet apply). Therefore these artifacts can be used easily even without much knowledge of Puppet.
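As a rough illustration of the masterless mode, a single puppet apply run provisions the image at build time, with configuration data resolved from Hiera YAML files; the paths below are assumptions, not WSO2's exact layout:

```shell
# Apply manifests locally without a Puppet master
# (run inside the image build; Hiera data is read from the local hieradata files)
puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp
```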

Container Image Build Automation

WSO2 has built an automation process for building container images for WSO2 middleware based on the standard Docker image build process and Puppet provisioning. This simplifies managing configuration data with Puppet, executing the build, defining ports and environment variables, applying product-specific runtime configurations, and finally optimizing the container image size. WSO2 ships these artifacts [11] for each product release. Therefore it is much more productive to use them than to create Dockerfiles on your own.

Choosing a Container Cluster Manager

Figure 2: Deployment architecture for a container cluster manager based deployment
Figure 3: Container cluster management platforms supported for WSO2 middleware

Strengths of Kubernetes

Kubernetes was born out of Google’s experience of running containers at scale for more than a decade. It covers almost all the key requirements of container cluster management in depth. Therefore it is my first preference as of today:

  • Container orchestration
  • Container to container routing
  • Load balancing
  • Auto healing
  • Horizontal autoscaling
  • Rolling updates
  • Mounting volumes
  • Distributing secrets
  • Application health checking
  • Resource monitoring and log access
  • Identity and authorization
  • Multi-tenancy
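As a rough illustration of how a WSO2 product could be deployed on Kubernetes, here is a minimal Deployment sketch; the image name and replica count are assumptions (port 9443 is the standard Carbon management console HTTPS port):

```yaml
# Illustrative Kubernetes Deployment for a WSO2 product
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wso2am
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wso2am
    spec:
      containers:
      - name: wso2am
        image: wso2am:2.0.0
        ports:
        - containerPort: 9443   # Carbon management console HTTPS port
```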

Reasons to Choose DC/OS

As far as I understand, DC/OS currently has fewer features than Kubernetes. However, it is a production-grade container cluster manager that has been around for some time. The major advantage I see in DC/OS is its custom scheduler support for Big Data and Analytics platforms, a feature Kubernetes still lacks. Many major Big Data and Analytics platforms, such as Spark, Kafka and Cassandra, can be deployed on DC/OS with a single CLI command or via the UI.
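For example, installing one of these platforms on DC/OS is a single command with the dcos CLI, pulling the package from the DC/OS Universe catalog:

```shell
# Install Apache Spark on a DC/OS cluster via the Universe package repository
dcos package install spark
```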

The Deployment Process

Figure 4: Container deployment process on a container cluster manager



