Transforming WebSphere to Open Liberty on OpenShift: An Operations point of view — Part 3: A simplified overview of OCP

Karri Carlson-Neumann
AI+ Enterprise Engineering
10 min read · May 29, 2021

This is Part 3. You can also review Part 1 and Part 2.

This series is devoted to introducing traditional WebSphere Application Server Network Deployment (WAS ND) architects and runtime operations teams to a first look at Open Liberty on OpenShift Container Platform (OCP).

In this third installment of our series, we will focus on OCP and Open Liberty, setting the groundwork for the terminology and points of focus from which we will later expand. This will help clarify the relevant basics of where we are starting (WAS ND) and the important facts, for this audience, about where we are going (Open Liberty on OCP).

Please note that all storytelling has been greatly simplified. The immediate goal of this section is to set the footings for some bridges between some of the major concepts.

This document will not dive into the fine details, because that can go on forever. It is hoped that once enough of the higher-level translations are established, you will be empowered to draw your own conclusions about the relationships of the finer details. You are encouraged to dig into deeper details in the appropriate product documentation.

Because this installment talks about Kubernetes and OCP, it will be helpful to have a little bit of background knowledge about containerization. The linked article is getting a bit dated (2019), but it provides a pretty good technical history: https://www.ibm.com/cloud/learn/containerization

The key bullets about the container engine to keep in mind as you proceed to read this installment are:

  • Container engines, such as Docker or CRI-O, provide the container runtime for pods in Kubernetes.
  • Kubernetes provides orchestration of containerized runtimes.

OCP is a comprehensive container application platform based on Kubernetes. Basically, instead of rolling your own Kubernetes stack from scratch, OCP has architected a lot of pieces together for you and extended a few concepts and capabilities.

Simplified overview of OCP

OCP can be installed to bare metal, on virtual machines (VMs), or utilized as-a-Service from many cloud providers. While OCP itself is rather agnostic to the underlying IaaS, to maintain as much similarity as possible to the WAS ND descriptions from part 1 of this series, this installment will generalize the underlying infrastructure to VMs.

A Kubernetes cluster is the administrative domain of the Kubernetes environment.

For the sake of brevity, here we will focus on the control plane (includes what was formerly known as master nodes) and worker nodes.

When you are ready for a more detailed introduction, a good place to start is in the Kubernetes documentation at https://kubernetes.io/docs/concepts/overview/components/

The control plane is similar to the WAS ND Deployment Manager in the sense that it is the administrative hub and knows the configuration, and different in the sense that it is more scalable and controls more activity (i.e., where the work gets scheduled to run). The control plane includes an API server (kube-apiserver), a backing store (etcd), the means to schedule the workloads that run on the worker nodes (kube-scheduler), and a means to run the controllers (kube-controller-manager). Essentially, the Kubernetes cluster is a state machine, made up of a lot of parts with their own desired states, and a lot of controllers that are constantly checking and trying to move things to their desired state.

The worker nodes each have an agent (kubelet), which is similar to, but more evolved than, the node agent process that we’re accustomed to in WAS ND. Each worker node also has a kube-proxy, which basically routes requests to the correct service.

In Kubernetes, a “service” is how an application (pod) exposes its endpoints. Basically, what the previous paragraph says is that something that wants to consume an app doesn’t need to know the IP address or even which node that app is currently running on. The consumer only needs to get to the kube-proxy, and the kube-proxy keeps track of where to go (which node, which IP) to find the service. This is good, because the pod where the app was running might have been on worker node 1, then died, then got rescheduled to run on worker node 2. In that case, the kube-proxy automatically knows that the service is now on worker node 2. In WAS ND/IHS terms, try to imagine generating and propagating the plugin.xml fast enough to keep up with WAS ND cluster members being created and deleted constantly! This is a good enough explanation to start with, but be aware that when it comes time to talk about inbound HTTPS traffic to OCP, the more detailed discussions will also include “ingress”.
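
To make that concrete, here is a minimal sketch of a Service definition, using hypothetical names (inventory-app, namespace-1) and Open Liberty’s default HTTP port. The selector is what ties the Service to whichever pods currently carry the matching label, wherever they happen to be running.

```
apiVersion: v1
kind: Service
metadata:
  name: inventory-app          # hypothetical application name
  namespace: namespace-1       # hypothetical namespace
spec:
  selector:
    app: inventory-app         # matches the label on the app's pods, wherever they run
  ports:
    - port: 9080               # port that consumers of the service use
      targetPort: 9080         # container port (Open Liberty's default HTTP port)
```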

Most importantly, the worker nodes are where the workload (pods) runs.

An OCP cluster. This simplified view shows the API server, etcd, Controller Manager, and Scheduler in the Master nodes (Control Plane). Unless you are talking about a very small cluster with very low requirements, there are generally 3 or more master nodes. This view also indicates the kubelet and kube-proxy in each worker node, so that you are aware that they exist. However, subsequent diagrams in this document are not going to include that level of detail because the focus will be on the pods and workload. As for inbound HTTPS traffic, there are typically additional considerations for Routes and Ingress, which are not shown here, and are certainly important to the discussion.

Namespaces are logical clusters or partitions of resources. In OCP, a project is a namespace plus additional administrative capabilities, including setting resource quotas per project, and security context constraints. For simplicity, this overview will primarily refer to these constructs as namespaces.
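
As a sketch of those “additional administrative capabilities”, a resource quota applied to a project might look something like the following; the project name and the numbers are purely illustrative.

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a            # hypothetical project/namespace
spec:
  hard:
    requests.cpu: "4"          # total CPU the project's pods may request
    requests.memory: 8Gi       # total memory the project's pods may request
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                 # maximum number of pods in the project
```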

The concept of namespaces in OCP can be reasonably mapped to WAS ND clusters for logical reasons (such as isolation or grouping of workload). However, they are physically very different things:

  • a WAS ND cluster is a very specifically defined group of server JVMs, and each JVM exists quite permanently on a specific WebSphere node
  • an OCP namespace is a logical concept, and the pods deployed within the namespace might come and go on any OCP worker node that has capacity and hasn’t otherwise disallowed the specific workload.
A Kubernetes cluster illustrating several namespaces. Namespace 1 has a single Pod A. Pod A has 3 instances. The fact that it has one instance on every node might indicate this is a DaemonSet (ensures all nodes run a copy of the pod). Alternatively, Pod A might simply be a Deployment that has been scaled up to have 3 instances of the pod, plus specify some anti-affinity rule so that those instances should not be active on the same node. Namespace 2 has a single pod B. If Pod B in worker node 2 were to terminate, it might be scheduled/restarted in either worker node 1, 2, or 3. Namespace 3 has 2 pods (C and D). Pod C has 3 instances, while Pod D has only 1 instance. Namespace 4 has three pods (E, F, G). Pod G has been scaled up to have 6 instances.
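
For the “Pod A as a Deployment plus an anti-affinity rule” interpretation described in the figure above, a sketch of the relevant pieces might look like this (the names and image are hypothetical):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-a                  # hypothetical, mirrors "Pod A" in the figure
  namespace: namespace-1
spec:
  replicas: 3                  # three instances of the pod
  selector:
    matchLabels:
      app: pod-a
  template:
    metadata:
      labels:
        app: pod-a
    spec:
      affinity:
        podAntiAffinity:       # keep the instances on different worker nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: pod-a
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: example/pod-a:1.0   # hypothetical image
```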

The application code that is deployed into the OCP environment is packaged as a container image and deployed via a resource object that describes the app’s lifecycle. Usually this is a Deployment that rolls out a ReplicaSet, or a StatefulSet, or a DaemonSet. At runtime you will see the units of workload as pods. Each pod has certain resource requirements and can be scheduled to run on a node only if there are sufficient resources available on that node to meet the pod’s requirements.

For example, the scheduler wants to schedule the workload for Pod X somewhere. Pod X has metadata that says it is going to request 1.5 GiB memory and 1 (1000m) CPU. The scheduler looks out at the worker nodes to see if any of them have at least that much memory and CPU available. If such a node is found, say Node 1, and it doesn’t conflict with any other deployment considerations (such as any anti-affinity settings that may be in place), then the scheduler will schedule Pod X on Node 1.
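
In YAML terms, that metadata for Pod X is simply a resources stanza on the container; the pod name and image below are hypothetical:

```
# fragment of a pod template inside a Deployment
containers:
  - name: pod-x                # hypothetical, mirrors "Pod X" above
    image: example/pod-x:1.0   # hypothetical image
    resources:
      requests:
        memory: "1536Mi"       # 1.5 GiB, what the scheduler looks for on a node
        cpu: "1000m"           # 1 full CPU
      limits:
        memory: "2Gi"          # optional ceilings above the requests
        cpu: "2000m"
```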

You can imagine how useful it is to be able to create scheduling policies to help guide the scheduler as to where (which nodes) and under what conditions or selection criteria (there’s lots of possibilities) you want certain workloads to be scheduled to run.

Kubernetes documentation for these points:

  • “Managing Resources for Containers” https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  • “Assigning Pods to Nodes” https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
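
As one small example of such a scheduling policy, a pod template can carry a nodeSelector so that the pod only lands on nodes that an admin has labeled for that kind of workload (the label below is hypothetical):

```
# fragment of a pod template
spec:
  nodeSelector:
    workload-type: liberty-apps   # hypothetical label applied to a subset of worker nodes
```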

Do not become overly ambitious trying to limit or specify where every last shred of workload lands. Part of the beauty of Kubernetes is letting the scheduler place workload where it fits.

The preceding words were quite a bit of setup to get to the punchline of “scaling”. Two differences between app/pod scaling in WAS ND clusters and in OCP are the granularity of scaling and the degree of available automation. Although we’ve primarily been talking about OCP considerations so far, the table below includes a row about JVM size, which is more of an Open Liberty consideration. That detail is included here for the context of the scaling discussion.

Links to cluster autoscaling and Horizontal Pod Autoscaler as mentioned above

Note that the autoscaling approaches supported by OCP can be configured to occur automatically, which means reaction times in seconds, whereas the WebSphere options require a human interaction, which may or may not be assisted by automation.
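
As a sketch of what that automation looks like, a Horizontal Pod Autoscaler targeting a hypothetical Deployment named pod-a might be defined as follows; note that the autoscaling apiVersion varies by Kubernetes/OCP release:

```
apiVersion: autoscaling/v2         # may be autoscaling/v2beta2 on older clusters
kind: HorizontalPodAutoscaler
metadata:
  name: pod-a-hpa
  namespace: namespace-1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pod-a                    # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add/remove pods to hold ~70% average CPU
```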

Quick takeaway about OCP as compared to WAS ND

The concepts of scaling and management aren’t unique to WAS ND. The “Network Deployment”-ness of WAS ND could be grossly summarized as the plumbing that let WAS admins create and scale clusters. The ability to create and scale deployments is what we’ve largely been talking about with respect to OCP and Kubernetes.

The “WebSphere Application Server”-ness of the WAS ND cluster member JVMs and the applications running on them is the parallel to capabilities in the Open Liberty images.

Open Liberty

Up to this point in the Open Liberty on OCP overview, this series has primarily mentioned Kubernetes and OCP, but not many details that are unique to Open Liberty.

From the OCP admin’s perspective, Open Liberty is just another workload in the OCP environment: it exposes services and needs ingress, storage, secrets, … all very standard Kubernetes/OCP behavior.
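
For example, inbound HTTPS traffic to an Open Liberty Service is typically handled with an OpenShift Route; a sketch, again using hypothetical names, might look like this:

```
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: inventory-app
  namespace: namespace-1
spec:
  to:
    kind: Service
    name: inventory-app          # hypothetical Service in front of the Liberty pods
  port:
    targetPort: 9080
  tls:
    termination: edge            # TLS terminated at the OCP router
```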

This view of Open Liberty as just another workload is a shift in perspective from the WAS ND admin’s point of view. Now, instead of looking at an entire cell’s worth of resources.xml files, there is only the server.xml file, and it is baked into the Open Liberty image. This means that the app developer and the runtime ops team must coordinate about details in the server.xml file further back in the pipeline, or even as part of the development process.

The above does not mean that you have to hardcode all of those values in the server.xml, but rather that you have to plan to know what will be interesting to the containerized applications in the target environments. You could parameterize almost any value in there, and then utilize environment variables and default values as needed. Transformation Advisor (mentioned further below) can give you a jump start as to how the WAS ND config will map into your new server.xml files. Note that the configuration generated by TA is a direct mapping, and further efforts to decide which parts of that direct mapping to parameterize are based on your own determinations.
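
As a sketch of that parameterization, assuming a server.xml that references variables such as ${env.DB_HOST}, the Deployment can supply the environment-specific values (all of the names below are hypothetical):

```
# fragment of a pod template in a Deployment
containers:
  - name: inventory-app
    image: example/inventory-app:1.0     # hypothetical Open Liberty image
    env:
      - name: DB_HOST                    # referenced in server.xml as ${env.DB_HOST}
        value: db.example.internal
      - name: DB_PASSWORD                # sensitive values pulled from a Secret
        valueFrom:
          secretKeyRef:
            name: inventory-db-credentials
            key: password
```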

The Open Liberty story is more interesting than simply shrinking down a traditional WebSphere application into a container. Open Liberty moves the game to a properly cloud-manageable approach. An example of this is MicroProfile, which is a modular approach to developing and deploying cloud-native Java applications. This is interesting to the Ops teams because of the runtime capabilities and concepts that are now embraced, such as readiness and liveness checks (mpHealth), metrics endpoints (mpMetrics), generating OpenTracing spans (mpOpenTracing), and API documentation (mpOpenAPI). Simply by adding a few annotations, applications can take advantage of capabilities and frameworks in the Kubernetes environment, which reduces the time that operations teams need to figure out and configure these capabilities from scratch. This is true for both net-new development and for containerization efforts.
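
For instance, once mpHealth is enabled, the /health/ready and /health/live endpoints it provides can be wired straight into Kubernetes probes; the container name, image, and timings below are illustrative:

```
# fragment of a pod template
containers:
  - name: inventory-app
    image: example/inventory-app:1.0   # hypothetical Open Liberty image
    readinessProbe:
      httpGet:
        path: /health/ready            # provided by mpHealth
        port: 9080
      initialDelaySeconds: 10
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /health/live             # provided by mpHealth
        port: 9080
      initialDelaySeconds: 30
      periodSeconds: 10
```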

You’ve likely at least heard the term “Operators” in Kubernetes conversations, and of course Open Liberty has one. The ops team can use the Operator provided with Open Liberty and then author the Custom Resource (CR) YAML that describes the OCP environment-specific secrets and dependencies requested by the application.
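
A minimal sketch of such a CR, assuming a hypothetical image and namespace (the apiVersion differs between releases of the Open Liberty Operator):

```
apiVersion: apps.openliberty.io/v1       # varies by operator release
kind: OpenLibertyApplication
metadata:
  name: inventory-app
  namespace: namespace-1
spec:
  applicationImage: example/inventory-app:1.0   # hypothetical image
  replicas: 3
  expose: true                           # ask the operator to create a Route
  env:
    - name: DB_HOST
      value: db.example.internal
```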

Operators are a fantastic Kubernetes concept that will help you, the runtime admin, manage applications and their components both at deployment time and with Day 2 activities. That said, if you are reading this document truly as an introduction to OCP concepts, you should be warned that “Operators” are not a beginner-friendly topic. As you are easing into this space, you can refer to the “Operator pattern” documentation from Kubernetes. For the specific example, you can read the Open Liberty Operator documentation.

The focus in these articles is around the administration and operations. There are numerous other good sources that outline the journey that application developers will take as they transform their source code and developer owned artifacts from that of the traditional WebSphere programming model to that of Open Liberty. Transformation Advisor offers a good way to get started from an application developer’s perspective by providing insights on necessary code changes to move from traditional WebSphere to Open Liberty running in k8s.


Transformation Advisor has been described in the Cloud Engagement Hub's Medium stories (including this piece by Ryan Claussen), and continues to grow and be used throughout IBM.

  • Transformation Advisor in WebSphere Application Server Knowledge Center
  • Cloud Transformation Advisor in IBM Documentation (It can do more than just WAS!)

Tying the WAS ND and Open Liberty on OCP overviews together

The overviews in the preceding sections were not brief. Some of the main points are arranged in the tables below.

Table of WAS to OCP infrastructure concepts
Table of WAS applications to Open Liberty applications concepts

The intent of Part 3 was to establish, for current WAS ND admins and architects, some translation of Open Liberty on OCP concepts and terminology back to WAS ND concepts and terminology. This is barely a dent in the mountain of overall learning and skills transfer that will ultimately need to occur. However, it is hoped that this information will help kickstart the learning process and be more helpful than being left on your own to pick up OpenShift Container Platform.

One aspect to take away is that this is a happy story. Ultimately, when you move to OCP, you are not beholden to perpetuate the original reasons for your old cluster topologies or the priorities of certain old requirements. You have an excellent opportunity to re-examine how the workload and requirements have changed over time and make adjustments before deploying to OCP. You are going to arrive at a more scalable, more automated, newer technology that you will enjoy using much, much more (after you scale the initial learning curve!).

Coming up next…

In Part 4 of this series, we will overview some considerations for mapping WAS ND topologies to OCP topologies.

The author would like to thank the trusted people who reviewed this article, including Eric Herness, John Alcorn, and Ryan Claussen.
