Best practices with deploying WSO2 platform on Docker and Kubernetes

Chanaka Fernando
WSO2 Best Practices
7 min read · Dec 4, 2020


Introduction

Deploying software applications on container platforms is becoming more and more common in the enterprise landscape. Because of the advantages containerization brings to enterprise application delivery, we see significant adoption of container-based deployments within enterprise IT. Here are some of the key advantages of containerization.

  • Package Software into Standardized Units for Development, Shipment and Deployment

— Standard

— Lightweight

— Secure

  • Runs everywhere in the same manner. No more “it worked on my machine” reasoning for a bug.
  • Better resource utilization when compared with VMs

— Containers run on top of the same host OS kernel

— Container images provide VM-like isolation with far less overhead

While container platforms bring many advantages to the table, managing a large container deployment can be challenging. This is where container orchestration platforms like Kubernetes come into the picture. They allow users to manage large-scale container deployments without much hassle. Some of the advantages of Kubernetes (and similar orchestration platforms) are:

  • Self-healing
  • Automated scaling
  • Runs everywhere (on-premise, cloud, VM)
  • Automated rollouts and rollbacks
  • Works at planet scale
  • Makes container workloads easier to manage

Docker has become the de facto standard for container runtimes, and Kubernetes has become the de facto standard for container orchestration platforms. Because of this, WSO2 provides the necessary resources to deploy its products on these platforms.

Running WSO2 components in Docker

WSO2 provides easy-to-use Docker images that can be used to run WSO2 products like API Manager, Enterprise Integrator, and Identity Server within a few seconds. These images are available from Docker Hub and from the WSO2 private Docker registry (docker.wso2.com), together with resources for using WSO2 products on the Docker platform.

These Docker images are tagged so that a given version is easy to recognize. The tagging convention depends on which repository you pull the image from, as described below; a few example pull commands follow the list.

Docker Hub

  • {product-version}-{base-os-platform}{base-os-platform-version}

— wso2/wso2is:5.8.0-alpine3.10 (Alpine 3.10 based stable image)

— wso2/wso2is:5.8.0 (Ubuntu-based stable image)

WSO2 Docker repository

  • {product-version}-{base-os-platform}{base-os-platform-version}

— docker.wso2.com/wso2is:5.8.0-alpine3.10 (Alpine 3.10 based stable image)

— docker.wso2.com/wso2is:5.8.0 (Ubuntu-based stable image, full channel)

  • {product-version}.{wum-timestamp}-{base-os-platform}{base-os-platform-version}

— docker.wso2.com/wso2is:5.8.0.1565855673150-alpine3.10 (Alpine 3.10 based unique image)

— docker.wso2.com/wso2is:5.8.0.1567933846116 (Ubuntu-based unique image)

  • {product-version}.{wum-timestamp}.{docker-release-version-number}-{base-os-platform}{base-os-platform-version}

— docker.wso2.com/wso2is:5.8.0.1565855673150.3-alpine3.10 (Alpine 3.10 based source unique image)

— docker.wso2.com/wso2is:5.8.0.1567933846116.3 (Ubuntu-based source unique image)
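
As a quick illustration, pulling a few of the images above looks like this. Note that pulling from docker.wso2.com typically requires logging in with WSO2 subscription credentials.

# Pull the Ubuntu-based stable image of WSO2 Identity Server 5.8.0 from Docker Hub
docker pull wso2/wso2is:5.8.0

# Pull the Alpine 3.10 based variant of the same release
docker pull wso2/wso2is:5.8.0-alpine3.10

# Pull a WUM-updated image from the WSO2 private Docker registry
docker login docker.wso2.com
docker pull docker.wso2.com/wso2is:5.8.0.1565855673150-alpine3.10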

WSO2 also publishes a set of best practices for building Docker images and using them in production environments. Some of them are listed below.

Best practices for using WSO2 Docker images in production

  • Configuration changes need to be applied via a volume mount. Use a dedicated volume mount named wso2-config-volume (an example run command follows this list).
  • Add new artifacts / non-configuration files (e.g. third-party libraries, Carbon extensions in the form of OSGi bundles, Carbon Applications, or security-related artifacts such as Java keystore files) via a volume mount. Use a dedicated volume mount named wso2-artifact-volume.
  • If you are using a product that has patches, add them via a volume mount. Use a dedicated volume mount named wso2-patch-volume.
  • If you need to extend an image, WSO2 recommends using an official WSO2 product Docker image as the base of the extended image.
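
As a rough sketch, running an Identity Server container with dedicated configuration and artifact mounts could look like the command below. The host directories and the container-side mount paths are assumptions for illustration; check the README of the specific product image for the exact paths it expects.

# Minimal sketch: run WSO2 IS 5.8.0 with config and artifact volume mounts.
# The host directories are assumed to contain only the files to override or add.
docker run -d --name wso2is \
  -p 9443:9443 \
  -v "$(pwd)/wso2-config-volume":/home/wso2carbon/wso2-config-volume \
  -v "$(pwd)/wso2-artifact-volume":/home/wso2carbon/wso2-artifact-volume \
  wso2/wso2is:5.8.0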

Best practices followed by WSO2 when building Docker images

  • Minimize container image size:

— The number of Dockerfile instructions that create sizeable image layers is minimized, which reduces the overall image size.

— The product distribution archive and its build-time dependencies are removed so that they do not inflate the Docker image, and only the necessary software packages are installed.

  • Set the default Docker image user to a non-root user. WSO2-released Docker images ship with the non-root ‘wso2carbon’ user for running WSO2 products (a quick way to verify this follows).
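
For example, the default user and the image sizes of a pulled image can be checked with standard Docker commands (the tag below is just an example):

# Print the default user configured in the image (expected: wso2carbon)
docker inspect --format '{{.Config.User}}' wso2/wso2is:5.8.0

# List the locally pulled wso2is tags together with their sizes
docker images wso2/wso2is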

Best practices when building Docker images (for any software)

  • Verify the authenticity of any software installed in the image:

— Check GPG keys, signed files, or valid checksums when installing software from third-party repositories or downloading files from the internet.

  • Always use HTTPS for files added remotely.
  • FROM directive: use a specific, unique tag (never latest).
  • Never run as root.
  • USER directive: always include it so the container does not run as root by default.
  • Drop privileges where necessary.
  • Expose only the required ports (a Dockerfile sketch illustrating these points follows this list).
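
As a hedged sketch, an extended image that follows these practices might be built as shown below. The driver URL, checksum, and in-image path are placeholders for illustration only, not real values.

# Build an extended image from an inline Dockerfile (no build context needed)
docker build -t my-wso2is:5.8.0 - <<'EOF'
# FROM: pin a specific, unique tag instead of "latest"
FROM wso2/wso2is:5.8.0-alpine3.10

# Fetch third-party libraries over HTTPS and verify their checksum
# (URL, target path, and checksum below are placeholders)
USER root
RUN wget -q https://example.com/drivers/mysql-connector-java-8.0.22.jar \
      -O /home/wso2carbon/mysql-connector-java.jar \
 && echo "<expected-sha256>  /home/wso2carbon/mysql-connector-java.jar" | sha256sum -c -

# Never run as root: switch back to the non-root user shipped with the image
USER wso2carbon

# Expose only the ports that are actually required
EXPOSE 9443
EOF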

Best practices on container security

  • WSO2 uses Clair for vulnerability scanning. Clair is an open-source project for the static analysis of vulnerabilities in Docker containers; it scans each container layer and reports vulnerabilities that may be a threat, based on the Common Vulnerabilities and Exposures (CVE) database and similar databases from Red Hat, Ubuntu, and Debian.
  • The Center for Internet Security (CIS) publishes a Docker Benchmark for evaluating the security of a Docker deployment.
  • Docker provides an open-source script called Docker Bench for Security. You can use this script to validate a running Docker host and its containers against the CIS Docker Benchmark (see the example below).
  • Sign images and verify their signatures before running them.
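
For example, Docker Bench for Security can be run directly against a Docker host, and Docker Content Trust can be enabled so that only signed images are pulled:

# Run the CIS-based checks from Docker Bench for Security on this Docker host
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh

# Enable Docker Content Trust so that unsigned images cannot be pulled
# (this pull will fail if the requested tag is not signed)
export DOCKER_CONTENT_TRUST=1
docker pull wso2/wso2is:5.8.0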

Another critical aspect of deploying WSO2 products on a container platform like Docker is the use of a private Docker registry. It gives users a controlled environment with secured Docker images. Here are some of the reasons for using a private Docker registry.

  • You do not have much control over the images stored in an external registry (e.g. Docker Hub): new images can be added and existing images removed at any time.
  • A private container registry with scanning capabilities and role-based access control offers more security, governance, and efficient management.
  • WSO2 recommends using your own private container registry to host all the images required by your deployments, so that you have full control over them. Even in the case of a rollback, a previous version can be recovered at any time with minimal risk and no dependence on outside sources (a push example follows this list).
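
A typical workflow is to pull and verify an image once, retag it, and push it to your own registry. In the sketch below, registry.example.com is a placeholder for your private registry hostname.

# Mirror a verified WSO2 image into a private registry
docker pull wso2/wso2is:5.8.0
docker tag wso2/wso2is:5.8.0 registry.example.com/wso2/wso2is:5.8.0
docker login registry.example.com
docker push registry.example.com/wso2/wso2is:5.8.0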

Let’s move on to Kubernetes.

Running WSO2 components in Kubernetes

Running a WSO2 product on Docker is a fairly trivial task, since it is just a matter of executing a docker command and running a single instance. In a real production deployment, however, users have to run multiple containers according to a deployment pattern suited to the use case. That is where Kubernetes comes into the picture. WSO2 provides a set of Kubernetes resources that can be used in production-scale deployments with minor modifications.

Here is a list of the Kubernetes resources WSO2 makes available; a sample Helm installation follows the list.

  • Helm charts for various deployment patterns of WSO2 products
  • API Manager operator for Kubernetes to deploy API Manager from the command line
  • API operator for Kubernetes to deploy microgateways directly from the command line
  • Resources for building a CI/CD pipeline on Kubernetes
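
As a sketch, installing one of the Helm-based deployment patterns could look like the commands below. The repository URL and chart name are examples and should be checked against the current WSO2 Helm chart documentation.

# Add the WSO2 Helm repository and list the available charts/patterns
helm repo add wso2 https://helm.wso2.com
helm repo update
helm search repo wso2

# Install an API Manager deployment pattern into its own namespace
helm install apim wso2/am-pattern-1 --namespace wso2 --create-namespace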

As mentioned before, running WSO2 components in a Kubernetes environment is not a trivial task, and it requires proper planning of the deployment. Here are some of the prerequisites to fulfill before deploying WSO2 products on Kubernetes.

  • A production grade RDBMS
  • A container registry for managing container images
  • A centralized logging system for Kubernetes
  • A monitoring system for Kubernetes

WSO2 also recommends a set of best practices when deploying its products on the Kubernetes ecosystem. They are listed below, followed by a few illustrative kubectl commands.

  • WSO2 uses Helm as the package manager for Kubernetes artifacts
  • Use Deployments to create pods as and when required
  • Do not use naked pods (they will not be rescheduled when they fail)
  • Use ConfigMaps to apply changes to configuration files
  • WSO2 uses the NFS server provisioner for storage
  • Use PersistentVolumes with the required read/write access
  • Use an Ingress controller when exposing services to external traffic
  • Configure Horizontal Pod Autoscaling (HPA) based on CPU utilization
  • Use Kubernetes rolling updates to update an existing WSO2 product deployment, and use Kubernetes rollouts to roll back to a previous revision of the deployment when needed
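
The following commands illustrate a few of these practices. All resource names, file paths, and image references are examples for this sketch, not the names used by the official WSO2 charts.

# Apply configuration changes through a ConfigMap instead of rebuilding images
kubectl create configmap apim-conf --from-file=./conf/deployment.toml -n wso2

# Autoscale a deployment based on CPU utilization
kubectl autoscale deployment wso2am-gateway --cpu-percent=75 --min=2 --max=5 -n wso2

# Perform a rolling update to a new image and watch its progress
kubectl set image deployment/wso2am-gateway wso2am=registry.example.com/wso2/wso2am:3.1.0 -n wso2
kubectl rollout status deployment/wso2am-gateway -n wso2

# Roll back to the previous revision if the update misbehaves
kubectl rollout undo deployment/wso2am-gateway -n wso2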

Now that we have a good understanding of the resources available for Docker and Kubernetes, and of the associated best practices, let’s design a reference deployment architecture for a complex WSO2 deployment.

A reference WSO2 deployment architecture on Kubernetes

The WSO2 API Manager product comes with a modularized architecture where different functional capabilities are grouped into “profiles” that can be run independently. Because of this flexibility, it can be deployed so that only the required components are scaled up and down based on usage.

Let’s assume a use case where an organization needs to expose a set of APIs for both internal and external use. Since heavy use of identity and access management features is expected, WSO2 Identity Server is also brought into the picture. The diagram below shows the deployment architecture of WSO2 API Manager and WSO2 Identity Server for this use case.

Figure: WSO2 Kubernetes deployment architecture

As depicted in the above figure,

  • Kubernetes “Services” are used to expose internal components for interconnectivity
  • Kubernetes “Deployments” are used to create pods of similar components and scale them automatically
  • Kubernetes “Pods” are used to create running instances of the products and profiles
  • Persistent Volumes and Persistent Volume Claims are used to share files across pods
  • An “Ingress Controller” is used to expose functionality to external consumers (a minimal manifest sketch follows this list)
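
As a minimal sketch of these building blocks for a single profile, a Deployment and its internal Service could be created as shown below. The image, labels, and port values are illustrative only and differ from the official chart values.

kubectl apply -n wso2 -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wso2am-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wso2am-gateway
  template:
    metadata:
      labels:
        app: wso2am-gateway
    spec:
      containers:
      - name: wso2am-gateway
        image: registry.example.com/wso2/wso2am:3.1.0
        ports:
        - containerPort: 8243   # gateway HTTPS port
---
apiVersion: v1
kind: Service
metadata:
  name: wso2am-gateway-service
spec:
  selector:
    app: wso2am-gateway
  ports:
  - port: 8243
    targetPort: 8243
EOF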

The above architecture was designed for AWS infrastructure, but a similar approach can be used for any other infrastructure option as well.

Finally, you can find a comprehensive guide on using containers with the WSO2 platform on GitHub (see the references below).

References:

WSO2 Docker tutorial series

Deploying WSO2 products on GKE

