Transforming WebSphere to Open Liberty on OpenShift: An Operations point of view — Part 4: Mapping WAS ND topologies to OCP topologies

Karri Carlson-Neumann
AI+ Enterprise Engineering
11 min read · Jul 2, 2021

This is Part 4. You can also review Part 1, Part 2, and Part 3.

Introduction

This series is devoted to giving traditional WebSphere Application Server Network Deployment (WAS ND) architects and runtime operations teams a first look at Open Liberty on OpenShift Container Platform (OCP).

In this fourth installment of our series, we will focus on considerations for mapping WAS ND topologies to OCP topologies. The basic terminology and infrastructure of WAS ND were described in Part 2, and those of OCP in Part 3. Please read Part 2 and Part 3 before reading Part 4 because, for brevity, there is a lot that won’t be repeated here. Here in Part 4, we will describe high-level details for you to consider as you begin to visualize a potential mapping of your WAS ND cells to new OCP clusters.

Please note that all storytelling has been greatly simplified. The immediate goal of this section is to set the footings for bridges between the major concepts. You are encouraged to dig into deeper details in the appropriate product documentation.

A simple example mapping

Let’s start with a very basic example. In the diagram below, there is a WAS ND cell on the left and an OCP cluster on the right. The WAS ND cell contains two clusters. Two namespaces for applications have been created in the OCP cluster. There are no applications included yet; those will be added soon.

WAS ND cell on the left, OCP cluster on the right. The only details we see here are the host VMs, the WAS cell, node, clusters, and cluster members, and the OCP cluster, nodes and namespaces. We’re not getting down to the networking layer, or very deep at all, in this article.

Simply by looking at these side-by-side pictures, you can observe a few likenesses: both have a concept of a management plane, nodes where work will happen, and an approach to organizing the work that will happen on those nodes. The terminology (described in Part 2 and Part 3) and implementations of these things are different, but recognizing the basic structural similarities is a first step.

One of the immediately observable differences is that the WAS ND cell has a lot of Java Virtual Machines (JVMs), illustrated as the light green rounded rectangles. As a WAS ND administrator, you have likely done a lot of work to size and tune each of the JVMs in your environments in preparation for the workload to be deployed there. There are no application server JVMs in the OCP infrastructure. Later (and not shown above), when the Open Liberty applications are deployed, if you look inside their pods you will find a JVM in each containerized image. The JVM is now part of the containerized application and is no longer permanently baked into the environment.

In this simple example, at a high level it seems like the WAS ND cell’s Cluster 1 will map to the OCP cluster’s Namespace 1, and that Cluster 2 will map to Namespace 2. That may turn out to be correct, but there is more to be considered before making decisions.

A few more variations

The simple example illustrated a single WAS ND cell with two clusters mapping to a single OCP cluster with 2 namespaces created for applications. There are a few easily imaginable variations illustrated below. While these examples use very small numbers of WAS ND clusters and OCP namespaces for simple visualization, in reality you are likely dealing with environments that are many times larger.

WAS ND clusters are not required to map 1:1 to OCP namespaces.

WAS ND cells are not required to map 1:1 to OCP clusters

These illustrations are lovely but are also meaningless at this point. No mappings can be usefully architected unless we also consider the workload.

Consider the workload

Appreciating the workload and what "modernizing it" means is, in reality, a fantastic high-level story with a LOT of details to be learned underneath it. This is especially true, and especially challenging, for implementation-minded people such as WAS administrators who are freshly learning about Kubernetes. While this series barely gets into those details, there are many other articles you can check out for further experiences and guidance on the broader topic. The Cloud Engagement Hub has published many articles that focus on modernization. John Alcorn has a number of articles that focus on the Stock Trader application, and they include a lot of illustrative snippets.

The earlier parts of this WAS ND to OCP mapping article probably felt hollow because there was no discussion of the applications or their requirements and details. Without a doubt, there should have been some discussion about:

  • which lines of business own each of the apps,
  • what the apps do,
  • their runtime requirements and dependencies,
  • whether we are talking about development, test, or production environments,
  • whether some of the applications are at end of life, and
  • which applications have greater availability and recoverability requirements.

This kind of information helps us to understand why the applications are organized the way they are in the WAS ND cells and clusters.

We’re going to set aside those massively critical considerations for a moment and zoom out to the 100,000-foot view. I have added some applications to the simple example’s illustration:

On the left is a WAS cell with 2 clusters. Cluster 1 has one instance of a single app in each of its 3 cluster members. Cluster 2 has one instance each of 2 apps in each of its 3 cluster members. On the right is an OCP cluster that has two namespaces. App 1 is now deployed in Namespace 1. App 2 and App 3 are deployed in Namespace 2. You are not required to have the same number of instances of each pod in OCP as there were instances (cluster members) in the WAS environment, especially if the demand does not require it.
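To make this mapping concrete, here is a minimal sketch of what "App 1 deployed to Namespace 1" could look like as a Kubernetes Deployment. The namespace, image name, port, and resource values are hypothetical illustrations, not taken from the diagram; the key points are that the container image bundles the application, the Open Liberty runtime, and its JVM, and that the replica count is chosen by demand rather than by the old cluster member count.

```yaml
# Hypothetical sketch: App 1 deployed into Namespace 1.
# The container image bundles the application, Open Liberty, and a JVM,
# so no application server JVM is pre-installed on the worker nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: namespace-1        # hypothetical namespace, analogous to WAS Cluster 1
  labels:
    app: app1
spec:
  replicas: 2                   # need not match the 3 WAS cluster members
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: registry.example.com/apps/app1-liberty:1.0   # hypothetical image
        ports:
        - containerPort: 9080
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"
          limits:
            cpu: "1"
            memory: "1Gi"
```

In OCP you would typically apply a manifest like this with `oc apply -f`, and the scheduler decides which worker nodes the pods land on.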

The diagram illustrates an assumption that the application in WAS Cluster 1 will be deployed to OCP Namespace 1, and that the two applications in WAS Cluster 2 will be deployed to OCP Namespace 2. This is a good starting assumption, but the simple example might have different outcomes based on what is discovered about the applications. A few directions this might take are:

  • Consolidating the apps from multiple WAS clusters into fewer OCP namespaces
  • Pruning down / not carrying forward some WAS applications that have come to the end of their usefulness
  • Dispersing the apps from fewer WAS clusters across more OCP namespaces
  • Becoming more selective about which applications are allowed to be scheduled on particular OCP nodes

The example on the left illustrates the consolidation of two WAS clusters into a single namespace in OCP. The variation on the right illustrates the case where one of the applications has reached its end of life and will not require a namespace in the new OCP environment.

The example on the left illustrates the case where it has been determined that App 2 and App 3 each require their own unique namespace. The example on the right illustrates a case where the WAS clusters map well to OCP namespaces, but App 1 has a unique requirement that necessitates separate worker nodes.

Now let’s zoom in closer and bring back those very critical considerations.

  • Are the applications simple microservices?
  • What are the dependencies of each application?
  • Are they exposed to the external internet, or are these internal only applications?
  • Are they business critical apps?
  • Do they all have similar availability requirements?
  • Do the apps all require similar security controls?
  • Which applications are approaching their end of life?
  • Where are all the development, various testing, and production environments for each of the applications?
  • Which applications are owned by which LoBs?

An additional consideration that may affect placement of the workload is which applications may be appropriate for deployment to a public cloud provider versus which applications must be hosted locally. An example of this type of consideration is data gravity, where applications need to stay close to certain data.

These answers for each application will help you group applications in ways that may influence the details of the target OCP namespaces or clusters.

Examining the details of your applications and sorting them into groups will help you determine appropriate placement into the new OCP environments, and it will help you assess which applications will be able to move sooner than others (aka, a wave plan).

In addition to the general considerations listed above, sometimes there were simply reasons for the original arrangement of applications and WAS clusters. Using the simple example as a thought basis, there are many possible reasons why application 1 was deployed to a separate cluster from applications 2 and 3, including:

  • Perhaps they support different parts of the business, but back at that time everything was getting deployed into a shared WAS cell.
  • Perhaps application 1 is simply a very large app and uses all the memory of its host JVM.
  • Perhaps application 1 has or had a less-than-reliable track record and you can’t afford to let it go crazy in the same JVM as other applications.

Depending on you or your administrative and architectural predecessors, the original reasons for these decisions may or may not have been documented. Some of the original reasons may have even changed over time.

You’re going to need to revisit those original reasons and reassess whether they are still applicable going forward.

Things that may have changed include:

  • Some of the applications may have arrived at end-of-life and do not need to be modernized.
  • Some of the applications that had previously shared a WAS ND cluster with other applications may have grown in complexity and should get their own namespaces.
  • The availability requirements of some applications have grown, while others have become less critical.

Now that some requirements and grouping considerations of the workload are generally acknowledged, let’s return to the runtime pieces.

Big parts of the diagrams

Looking over at the OCP side of the runtime mapping pictures, the biggest OCP puzzle pieces are OCP clusters, namespaces, and nodes.

How many OCP clusters?

The total number of OCP clusters locally and/or in public cloud providers largely comes down to meeting workload availability, isolation, and dependency requirements. The challenges to manage that total number of clusters can get interesting.

Instead of letting this article wander too far, I encourage you to later pick up some details about IBM Cloud Satellite. A great reference is the "Journey to a Distributed Cloud" series by Greg Hintermeister.

There are enough sources bombarding you with encouragement to embrace a hybrid deployment strategy and move as much as possible to a public cloud provider, so we’re not going to dive into those details here. Just know that OCP runs both on-premises and in public clouds.

This article has primarily considered production environments. Expanding that to include development, test, and pre-production environments, it is very possible that you will not have the same total number of OCP clusters as you had WAS ND cells. Due to less stringent isolation and availability requirements, many of the lower development and test environments that were previously separate WAS cells might now simply be separate namespaces in a shared OCP cluster.
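As a small sketch of that idea, assuming hypothetical names, the lower environments that used to be separate cells might become labeled namespaces in one shared non-production OCP cluster:

```yaml
# Hypothetical: dev and test environments as namespaces in one shared cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-dev
  labels:
    environment: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-test
  labels:
    environment: test
```

In OCP these are usually created as projects (for example with `oc new-project`), which wrap Kubernetes namespaces.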

How many namespaces?

In most cases, it is reasonable to assume that the motivation for maintaining a degree of isolation between the applications from separate WAS clusters continues to hold true. If so, a good starting assumption is that you will have as many OCP namespaces for applications as you had WAS ND clusters for applications.

The Kubernetes documentation describes some motivations for using namespaces.

The act of creating a namespace itself is trivial. The work to define appropriate resources, policies, and constraints per namespace requires some planning. The details of those configurations will likely follow from the exact reasons why a degree of separation was desired in the first place. For example, you can uniquely define policies to limit user actions in each namespace, and you can set quotas for resource consumption per namespace.
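As a hedged sketch of what those per-namespace guardrails might look like, the following manifests show a ResourceQuota capping resource consumption in a namespace and a Role limiting users in that namespace to read-only actions. All names and values are hypothetical and would be tuned to the reasons you wanted the separation in the first place.

```yaml
# Hypothetical example: cap resource consumption in namespace-2.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-2-quota
  namespace: namespace-2
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "30"
---
# Hypothetical example: a Role granting only read access to common
# workload resources in namespace-2.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-viewer
  namespace: namespace-2
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch"]
```

A RoleBinding would then attach the Role to the appropriate users or groups, and different namespaces can carry different quotas and roles.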

According to the OCP 4.7 documentation “Planning your environment according to object maximums”, you could have up to 10,000 namespaces in most deployments. There are various caveats and warnings about approaching such a number. However, because we are talking about WAS ND clusters, which on average implies a handful to a few dozen clusters per WAS ND cell, the mapping of WAS clusters to OCP namespaces will likely land well under the maximum number of namespaces.

When looking at “Planning your environment according to object maximums”, you will also need to consider the maximum number of nodes, pods, pods per node, and pods per core, among other things listed. You will need to account for the size of each application (and some number of instances of each app) and be aware that your app pods are probably larger than the test pods used to calculate the numbers shown on that page. This Part 4 article does not go into sizing advice. The documented maximums are likely far higher than your total Open Liberty apps will ever need. These maximums should be treated as theoretical boundaries, not goals. Even when you are nowhere close to the maximums, you should break up your WAS cells/clusters into OCP clusters/namespaces based on requirements, not on possible maximums.

How many nodes?

This article does not go into sizing advice and won’t prescribe a number of worker nodes. Worker nodes can vary in size and in other attributes. Some worker nodes can be devoted to specific workloads, and in some cases this is appropriate. This will shape the overall topology picture a little bit. It is also possible that a similar motivating factor existed in the WAS ND cell.

The WAS ND cell on the left has Cluster1 with members on 3 nodes. WAS ND clusters only exist where their cluster members exist. The 3 nodes where Cluster1 exists also have something special about their hosts, for example special attached storage. The OCP cluster on the right has 2 namespaces (which “exist” everywhere) and six nodes. Three of those nodes are on hosts that have something special, for example special attached storage or special processors. The nodes on those 3 hosts may have labels and taints that scheduling policies can use to ensure that the scheduler only places allowed work on those nodes.
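To make the labels-and-taints idea concrete, here is a hedged sketch, revisiting the earlier Deployment sketch for App 1, of how its pods might be steered onto those special nodes. The label and taint key/value (storage=fast) and all names are hypothetical; the pattern is that a taint keeps ordinary pods off the special nodes, while a matching toleration plus a nodeSelector directs App 1's pods onto them.

```yaml
# Hypothetical: App 1 needs the special nodes. Assumes the cluster
# administrator has already labeled and tainted those nodes, e.g.:
#   oc label nodes <node> storage=fast
#   oc adm taint nodes <node> storage=fast:NoSchedule
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: namespace-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      nodeSelector:
        storage: fast            # schedule only onto nodes carrying this label
      tolerations:
      - key: "storage"
        operator: "Equal"
        value: "fast"
        effect: "NoSchedule"     # tolerate the taint that keeps other pods away
      containers:
      - name: app1
        image: registry.example.com/apps/app1-liberty:1.0   # hypothetical image
```

Pods without the toleration are repelled by the taint and are scheduled onto the remaining general-purpose worker nodes.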

Generically, though, one of the goals of Kubernetes is to take advantage of density. As mentioned in Part 3 of this series, the Kubernetes scheduler will try to schedule workload wherever it fits. Worker nodes that have available memory and vCPU, and aren’t otherwise constrained by specific labels or taints, can host pods from many namespaces.

If you go to the extreme to manually constrain every bit of workload to run on its own specific nodes, you have gone out of your way to defeat Kubernetes.

Summary of topology mapping

The big-picture pieces map from WAS cells, nodes, and clusters to OCP clusters, nodes, and namespaces. How they map depends on the requirements of the applications.

Generally, the WAS ND clusters might map pretty well to OCP namespaces. However, you cannot take that as a given. WAS ND clusters are not required to map 1:1 to OCP namespaces. Variations include:

  • one WAS cluster to one OCP namespace
  • one WAS cluster to many OCP namespaces
  • multiple WAS clusters to one OCP namespace
  • multiple WAS cells, multiple clusters to one OCP namespace
  • one WAS cell, multiple clusters to multiple OCP clusters, multiple namespaces

To determine what the OCP environments should really look like, you will need to account for the workloads that will run there. One approach is to “bucket” the applications into groups based on their requirements, dependencies, and target deployment goals:

  • Type of application pattern (for example, simple microservices)
  • Common dependencies
  • Exposed to the external internet or internal only
  • Business criticality
  • Availability requirements
  • Deploy to local datacenter or to public cloud provider

With that knowledge, you can more clearly see if some reorganization into namespaces and/or across different OCP clusters may be necessary.

Coming up next

In Part 5, we will expand on the basic terminology and concepts by stepping through a number of operational tasks. This will allow us to compare tasks performed in a WAS ND environment to the roughly equivalent tasks needed to care for and feed Open Liberty applications in an OCP environment.

The author would like to thank the trusted people who reviewed this article, including Eric Herness, Greg Hintermeister, John Alcorn, and Ryan Claussen.
