Portability of applications across Kubernetes distributions, part 1

John Alcorn
AI+ Enterprise Engineering
9 min read · Dec 17, 2021

Portability experiences with the operator

In an earlier series of articles I wrote about my experiences creating an operator for my cloud-native application, the IBM Stock Trader. Since I worked for IBM at the time, and they had recently purchased Red Hat, I did all of my operator work atop the OpenShift Container Platform (OCP) 4.x. Once I moved to Kyndryl, and we started embracing not just IBM Cloud but also AWS, Azure, and GCP, I wondered how well this operator would work on Kubernetes distributions other than OCP. The short answer is that it worked beautifully, without needing any changes, in environments like the Elastic Kubernetes Service (EKS) on AWS, the Azure Kubernetes Service (AKS), the Google Kubernetes Engine (GKE), and even the Tanzu Kubernetes Grid (TKG).

First, let me say that I do in fact still quite like OpenShift. Not because it is a better base Kubernetes distribution (all of the Kube distributions listed above turned out to be excellent and very compatible), but because of the great job it does integrating the surrounding pieces, like logging and metrics, into the core OCP distribution, and because of Red Hat's ecosystem of tightly integrated add-ons, such as RH SSO (Red Hat's productization of Keycloak), the OpenShift Service Mesh (Red Hat's productization of Istio), and the RH Advanced Cluster Management and RH Advanced Cluster Security extensions to the control plane. It does all of this in exactly the same way, regardless of whether it is used on-premises or in any of the public cloud vendors. It also has an excellent admin console, including the OperatorHub UI and the form-based UI for each operator, which is of special importance in this article, as we'll see shortly.

That being said, if you are running in one of the major cloud vendors, you would often prefer tight integration with that cloud's DevOps, observability, and security/compliance features, rather than sticking solely with what can end up feeling like a least-common-denominator story. For example, if you are on AWS, you'd probably want to use its CloudWatch observability features, so that you can see your application in the same dashboards as the services it uses within that cloud, rather than relying solely on generic open-source projects like Prometheus and Grafana. Using a Kube distribution like EKS gives you that tight integration with the rest of AWS; the same is true of the Kube offerings from the other major cloud vendors. Costs are usually a significant factor as well, as OpenShift licenses add up quickly as you add worker nodes (this is why even IBM maintains its own IBM Kubernetes Service (IKS) distribution, in addition to its OpenShift option).

An update on installing the operator

Before discussing portability of the operator, we need to take a slight detour to talk about changes in the operator since my earlier series of Medium articles on it. What follows is a brief subset of the detailed write-up I did on the topic in chapter 9 of my recent MicroProfile 4.1 book, now available in paperback and Kindle at https://www.amazon.com/Practical-Cloud-Native-Java-Development-MicroProfile/dp/1801078807/.

Though the basic design of my operator hasn't significantly changed, I did decide to integrate it with the Operator Lifecycle Manager (OLM) about a year ago, so that it could appear in your cluster's OperatorHub. This didn't change how the operator creates and configures the various base Kube resources, like Deployments, Services, HorizontalPodAutoscalers (HPAs), ConfigMaps, and Secrets, nor did it change how the CR yaml is defined. But it did change how you get the operator running on your cluster (after that, it's the same as before). Gone are the days of cloning the operator's GitHub repository and running yaml files you find there; nowadays you just add a new CatalogSource, and that makes it appear in OperatorHub just like any other operator from Red Hat or any other vendor.

If you are using the OCP admin console, such as in Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), or Red Hat OpenShift on IBM Cloud, this means going to Administration->Cluster Settings in the left-nav, selecting the Global Configuration tab, and choosing OperatorHub:

OpenShift cluster settings

Once there, click the Sources tab, and then the blue Create Catalog Source button (in this screenshot you can see what it looks like once the new source is added):

OpenShift catalog sources

Then just fill in the form, making sure to specify docker.io/ibmstocktrader/stocktrader-operator-catalog:v0.2.0 for the image location (the other fields can have whatever values you want, as they are just display names):

Defining the Stock Trader catalog source

Once you do this, my operator will show up in your cluster’s OperatorHub:

OperatorHub UI

If you click on it, you’ll get the option to install it — which technically is making a Subscription to it (more on that in a bit). I already have it installed on my cluster, so I see a message saying that, along with an Uninstall button, instead of the blue Install button for the operator you’d see if installing it for the first time:

The install/uninstall page for Stock Trader

OK, you may be asking yourself “why are there all of these OpenShift admin console screenshots if we’re talking about portability across Kube distributions?”. The answer is that you essentially do the same thing — that is, creating the CatalogSource and the Subscription — when installing the operator from the kubectl command line interface (CLI).

Installing the operator via the CLI

In a Kube distribution where the OCP console is not an option, you use kubectl to install the operator. Once you are logged in to your cluster and configured to your desired namespace, there are 3 commands you run:

  1. If your cluster doesn't have OLM installed (it is pre-installed on OCP 4.x), you need to install the Operator SDK (via "brew install operator-sdk" on a Mac) and then run "operator-sdk olm install". You can check whether it's installed via "operator-sdk olm status" (note that either of these commands requires you to be logged in to your cluster and able to run commands like "kubectl get pods"). For further details on installing OLM, see https://olm.operatorframework.io/docs/getting-started/.
  2. Install the CatalogSource by applying the following yaml (available for convenience at https://github.com/IBMStockTrader/stocktrader-operator/blob/master/catalog-source.yaml):
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: cloud-journey-optimization-team
    spec:
      displayName: Cloud Journey Optimization Team
      image: 'docker.io/ibmstocktrader/stocktrader-operator-catalog:v0.2.0'
      publisher: Kyndryl
      sourceType: grpc
  3. Create the Subscription by applying the following yaml (available for convenience at https://github.com/IBMStockTrader/stocktrader-operator/blob/master/subscription.yaml — note you’ll likely want to customize the sourceNamespace, since a non-OCP Kube likely wouldn’t have an openshift-marketplace namespace):
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: stocktrader-operator
    spec:
      channel: alpha
      installPlanApproval: Automatic
      name: stocktrader-operator
      source: cloud-journey-optimization-team
      sourceNamespace: openshift-marketplace  # customize this namespace if in a non-OpenShift Kubernetes
      startingCSV: stocktrader-operator.v0.2.0

With those 3 simple commands, you now have the OLM-enabled Stock Trader operator available in your cluster. Then you can install an instance of a StockTrader via a “kubectl apply -f” of a given CR yaml (which we’ll discuss more in part 2 of this series). You can see an example of such a StockTrader CR yaml at https://github.com/IBMStockTrader/stocktrader-operator/blob/master/config/samples/operators_v1_stocktrader.yaml. Again, see chapter 9 of my recent book for further details.
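For reference, a minimal StockTrader CR is just a few lines of yaml. The sketch below is illustrative only: the apiVersion is a placeholder, and the real set of spec fields comes from the sample CR linked above, so don't treat this as the definitive schema.

apiVersion: operators.ibm.com/v1   # placeholder; check the sample CR linked above for the exact group/version
kind: StockTrader
metadata:
  name: st-aws
spec:
  # Fields under spec configure the individual microservices and their backing
  # services (databases, messaging, and so on); see the sample CR and chapter 9
  # of the book for the full set of options.

Applying it is just a "kubectl apply -f" of that file, and the operator takes it from there.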

Portability issues

The good news is, the above 3 commands work in any Kube distribution. We’ve successfully run them in OCP, EKS, AKS, IKS, GKE, and TKG. In the interest of full disclosure, there was one small change to the ClusterServiceVersion (CSV) file for the operator, to enable “single namespace” mode, since some of the hosted/managed Kube distributions didn’t want to allow a “cluster-wide” (that is, one available in all namespaces in the cluster) operator; note that this change would have even been needed in OCP if you wanted the operator to be isolated to a single namespace.
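In case you're wondering what that CSV change involves: the install modes an operator supports are declared in the installModes list of its ClusterServiceVersion. The snippet below is an illustrative sketch of that stanza, not the exact contents of the Stock Trader CSV; the point is simply that single-namespace support is a matter of flipping the supported flags.

installModes:
  - type: OwnNamespace      # operator watches only the namespace it is installed into
    supported: true
  - type: SingleNamespace   # operator watches one designated namespace
    supported: true
  - type: MultiNamespace
    supported: false
  - type: AllNamespaces     # the "cluster-wide" mode mentioned above
    supported: true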

Now that this change is committed to GitHub and is part of the image you pull from Docker Hub when adding the operator to OperatorHub, the same operator works across all Kube distributions. This was pretty cool; to be honest, I'd feared I might have accidentally done something in the operator that made it, or the Kube objects it creates, work only in OCP, since that was the only Kube I'd tested it in back when I wrote it. I was pleasantly surprised to learn it was so portable.

The one thing that isn't portable is how the UI is exposed so it can be called from outside the cluster. In OCP, I used a Route, but that's an OpenShift-proprietary CRD (with an apiVersion of route.openshift.io/v1). There's a global.route field in the CR yaml that you'll need to make sure is set to false when installing to a Kube other than OCP. My operator also supports an Ingress (with an apiVersion of networking.k8s.io/v1), via the global.ingress boolean field, for environments that support the generic Kube Ingress type. Some environments might instead need a NodePort exposed on the Service. Lastly, when you have Istio enabled (via the global.istio boolean field in the CR yaml), an IngressGateway is used (with an apiVersion of networking.istio.io/v1beta1).
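To make that concrete, here is roughly how those flags would be set in the CR yaml when deploying to a non-OpenShift cluster that supports a generic Ingress. This assumes the dotted names above nest under spec as global.*; check the sample CR linked earlier for the exact structure.

spec:
  global:
    route: false    # Routes only exist on OpenShift, so turn this off elsewhere
    ingress: true   # expose the UI via a generic Kubernetes Ingress instead
    istio: false    # set to true only if the cluster has Istio enabled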

We found that in clusters in some clouds, none of the above options (OCP’s Route, generic Ingress, or Istio’s IngressGateway) that I built in to the operator would work. In such cases, you need to set each of those fields to false in the CR yaml, and you will have to manually create the necessary object after installing the StockTrader CR yaml. For example, in EKS, we found we had to manually create a Service of type LoadBalancer, with an AWS annotation:

apiVersion: v1
kind: Service
metadata:
  name: trader-loadbalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: example-stocktrader-trader
spec:
  ports:
    - name: http
      protocol: TCP
      port: 9080
      targetPort: 9080
    - name: https
      protocol: TCP
      port: 9443
      targetPort: 9443
  selector:
    app: trader
  sessionAffinity: None
  type: LoadBalancer

Note that since such a manually-created Kube object is outside the purview of the operator, if you do a "kubectl delete StockTrader <name>" (for example, if your CR yaml had a name of "st-aws", you'd delete it via "kubectl delete StockTrader st-aws"), the Kube object you created manually will be "orphaned", and you'll have to delete it yourself (via "kubectl delete Service trader-loadbalancer").

Summary

The net is, the OLM-enabled Stock Trader operator works great in whatever Kubernetes distribution you might choose. Though you won’t have OpenShift’s fancy admin console UI, you can install the operator with 3 easy CLI commands, and then when you apply a yaml of type StockTrader, the operator takes care of creating, configuring, and managing the entire Stock Trader application on your behalf.

It doesn’t matter whether you are in AWS, Azure, GCP, or IBM Cloud — you install the operator the same way, and deploy an instance of a Stock Trader in the same way. The only thing to look out for is that you might have to manually create the cloud-vendor-specific Kube object to expose the application to the internet.

Stay tuned for part 2, where we’ll discuss how we made the application portable across different JDBC database providers and different JMS messaging providers, and for part 3, where we’ll discuss how to make Stock Trader use the API gateway and the Functions-as-a-Service (FaaS) environment in different clouds. Thanks for reading, and as always, feedback is welcome!


John Alcorn
AI+ Enterprise Engineering

Member of the Cloud Journey Optimization Team at Kyndryl. Usually busy writing/testing code, or teaching others what I’ve learned.