Using Cincinnati Operator (AKA Update-Service-Operator) in disconnected environments

David Daskalo
7 min read · Dec 2, 2021


In this article I describe the procedure I followed to enable over-the-air updates in my disconnected environment. I also cover re-adding versions that were removed from the upgrade graph.

Do not add versions that are outside the graph unless you must; updating from these versions is not supported by Red Hat!

To check whether your current version and the one you are updating to are in the path, check this site — Red Hat Update Path site. Also consult this doc if you have not yet installed the Update Service Operator.
Another great source for understanding the operator is OpenShift Update Service: Update Manager for Your Cluster (redhat.com)

I will be writing in ACM terms because my environment uses ACM (Advanced Cluster Management for Kubernetes). You do not need ACM installed to use the Update Service Operator.

The Hub is the cluster that serves the graph data.
The managed cluster is the cluster that uses the graph to update itself.

Prerequisites

  • Hub Openshift Container Platform + Update Service Operator
  • Openshift Container Platform — Managed
  • A clone of the Cincinnati graph-data Git repository

We will also use the Cincinnati graph-data image, but that comes later in the article because I am modifying the contents to allow updating from a blocked version. If that is not your use case, follow the image creation procedure found in the Update Service Operator doc.

Graph Data Preparation

Disclaimer: perform this step only if you require a version that does not exist in the official path.

To start, clone the contents of the official Cincinnati repository:

git clone https://github.com/openshift/cincinnati-graph-data.git

Now, to add the version you wish to upgrade to, go into the blocked-edges folder and delete your version's YAML file if it exists there (the screenshot shows files such as 4.6.0 because I already deleted 4.6.7).

Contents of the blocked-edges directory

After removing it, we need to ensure the version is not blocked in the channels. You can use the following command to find the files that mention the version, then edit them accordingly:

grep -rnw '/<source_directory>/cincinnati-graph-data/channels/' -e '<version>'

You want to comment out any tombstone reference, and optionally comment the version out of the candidate or fast channels so it is not offered there as a viable version.

Example for commenting out 4.6.7 from tombstone
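As a concrete sketch of this edit, the commands below build a throwaway channel file (the layout and the tombstones key are simplified stand-ins for the real repo files), locate the version with grep as above, and comment it out with sed:

```shell
# Stand-in channel file; the real ones live in cincinnati-graph-data/channels/
mkdir -p channels
printf 'name: stable-4.6\ntombstones:\n- 4.6.7\nversions:\n- 4.6.8\n' > channels/stable-4.6.yaml

# Locate every channel file that mentions the version (as in the grep above)
grep -rnw 'channels/' -e '4.6.7'

# Comment the tombstone entry out instead of deleting it
sed -i 's/^- 4.6.7$/#- 4.6.7/' channels/stable-4.6.yaml
grep '4.6.7' channels/stable-4.6.yaml
```

Commenting out rather than deleting keeps a record of what you changed when you later rebase onto a fresh clone of the upstream repo.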

To wrap up the changes, we now need to add the version to the stable channel YAML.

It is important to write the version in order with the other versions that exist; do not write 4.6.6, 4.6.5, 4.6.7.

You can now tar and gzip your folder in preparation for building the image.

4.6.7 was manually added
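The packaging step can be sketched like this (the directory here is a stand-in created on the spot; in practice it is the clone you just modified):

```shell
# Stand-in for the modified clone (your real one comes from the git clone above)
mkdir -p cincinnati-graph-data/channels cincinnati-graph-data/blocked-edges

# Package it for the image build; the Dockerfile later copies this tarball in
tar -czf cincinnati-graph-data.tar.gz cincinnati-graph-data/

# Quick sanity check of the archive contents
tar -tzf cincinnati-graph-data.tar.gz
```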

Now we can get to building the image, but before that we need to modify the Dockerfile to use our locally modified repo.
You can copy this Dockerfile which I made and customize it to your needs. It is imperative that you use your local modified repo in the Dockerfile, because the original Dockerfile pulls straight from GitHub.

FROM registry.access.redhat.com/ubi8/ubi:8.1
COPY /<source_directory>/cincinnati-graph-data.tar.gz .
RUN mkdir -p /var/lib/cincinnati/graph-data/
CMD exec /bin/bash -c "tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati/graph-data/ --strip-components=1"

Build your image and ship it to your disconnected environment now.

What does this Dockerfile do?

  • Copies the tarball of the modified directory instead of fetching it from Git
  • Creates a directory to house the contents of the repo
  • Extracts the contents into the new directory while stripping one unnecessary layer

This image is not the graph data service run by the update operator; it is an init container we use because a disconnected environment cannot fetch the graph from the internet. We point the Update Service instance at this init container later in this article by specifying the graphDataImage to pull.

Hub Cluster Preparation

First, if you work in a disconnected environment, you need a registry that supports the Docker V2 API for the graph builder to work (Harbor, JFrog, Quay…).
I will assume you have already installed the Update Service Operator (for a disconnected environment, refer to the guide Disconnected operator installation) and that your cluster has correct ImageContentSourcePolicies.

Ensure you have a separate repo for your ocp-release images; otherwise your pod's RAM consumption will balloon and it will crash.
The release images can be found at
https://quay.io/repository/openshift-release-dev/ocp-release
For disconnected use, when mirroring the version to the local registry, just ensure you create a separate repo for the images that are tagged with only the version number.

Example for release image repository

You also need to follow this guide to add an additional trust bundle for the Update Service Operator.

This will roll out your cluster, so wait for it to finish before proceeding.

Pay attention that you must use the key updateservice-registry. Customize the data to fit your registry if needed; usually, just adding the certificate is enough.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-registry-ca
data:
  updateservice-registry: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  registry-with-port.example.com..5000: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
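For completeness, the trust-bundle guide referenced above also has you point the cluster image config at this ConfigMap (created in the openshift-config namespace). A minimal sketch, assuming the ConfigMap name used here:

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  additionalTrustedCA:
    name: my-registry-ca
```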

If you wish to upgrade the Hub cluster with its own generated path, the CA must exist for a secure connection; if it does not, follow this guide.

This will also roll out your cluster, so wait for it to finish before proceeding.

Now we can create the Update Service instance.

cat <<EOF | oc create -f -
apiVersion: updateservice.operator.openshift.io/v1
kind: UpdateService
metadata:
  name: test
  namespace: open-cluster-management
spec:
  graphDataImage: 'registry/graph-data:v1.0'
  releases: registry/ocp-version/ocp-release
  replicas: 1
EOF

Wait for the pod to finish; the logs should look like this:

[2021-11-30T11:05:44Z TRACE cincinnati::plugins] Running next plugin 'edge-add-remove'
[2021-11-30T11:05:44Z DEBUG cincinnati::plugins::internal::edge_add_remove] Regex '4.5.41' matches version '4.5.41+amd64'
[2021-11-30T11:05:44Z INFO  cincinnati::plugins::internal::edge_add_remove] [4.6.35+amd64]: removing previous 4.5.41+amd64 by regex
[2021-11-30T11:05:44Z DEBUG graph_builder::graph] graph update completed, 77 valid releases

To ensure you have a functioning graph, you can query the endpoint created by the operator with the following curl (sometimes the request can be rejected due to length; modify it if needed).

curl -k --silent --header 'Accept:application/json' '<route>/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.6' | jq '. as $graph | $graph.nodes | map(.version == "<current_version>") | index(true) as $orig | $graph.edges | map(select(.[0] == $orig)[1]) | map($graph.nodes[.])'

The response should look like this if you have a path (in my case, from 4.6.7 to 4.6.35):

[
  {
    "version": "4.6.35",
    "payload": "<registry>/ocp-version/release@sha256:e08fbfda50c222bb41a1908d4d75bd6c426fa55f5f3d3a3f79316cff792acd3b",
    "metadata": {
      "description": "",
      "io.openshift.upgrades.graph.previous.remove_regex": "4.5.41",
      "io.openshift.upgrades.graph.release.channels": "candidate-4.6,eus-4.6,fast-4.6,stable-4.6,candidate-4.7,fast-4.7,stable-4.7,eus-4.8",
      "io.openshift.upgrades.graph.release.manifestref": "sha256:e08fbfda50c222bb41a1908d4d75bd6c426fa55f5f3d3a3f79316cff792acd3b",
      "url": "https://access.redhat.com/errata/RHBA-2021:2410"
    }
  }
]
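If the jq filter in the curl looks opaque, here is what it does, run offline against a tiny hand-made graph (two nodes, one edge) instead of the live endpoint; the versions and payloads below are made up:

```shell
cat > graph.json <<'EOF'
{
  "nodes": [
    {"version": "4.6.7",  "payload": "reg/ocp-release@sha256:aaa"},
    {"version": "4.6.35", "payload": "reg/ocp-release@sha256:bbb"}
  ],
  "edges": [[0, 1]]
}
EOF

# Find the index of the current version's node, then follow every edge that
# starts there to the node it points at, that is, the versions you can reach.
jq '. as $graph | $graph.nodes | map(.version == "4.6.7") | index(true) as $orig
  | $graph.edges | map(select(.[0] == $orig)[1]) | map($graph.nodes[.])' graph.json
```

With this sample graph the filter returns the single 4.6.35 node, which is exactly the shape of the response shown above.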

Great, now that the update service operator is configured, we can move forward and wrap everything up with the managed cluster.

Managed Cluster Preparation

Start by changing the upstream URL that your Cluster Version Operator looks at to the newly generated one.

NAMESPACE=openshift-update-service
NAME=test
POLICY_ENGINE_GRAPH_URI="$(oc -n "${NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "${NAME}")"
PATCH="{\"spec\":{\"upstream\":\"${POLICY_ENGINE_GRAPH_URI}\"}}"
oc patch clusterversion version -p "$PATCH" --type merge
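Since the patch document is built by shell string interpolation, the quoting is easy to break. A quick offline sanity check before running oc patch (the URI below is a made-up example; on a real cluster it comes from the oc query above):

```shell
# Made-up example value standing in for the queried policy engine URI
POLICY_ENGINE_GRAPH_URI="https://updateservice-route.example.com/api/upgrades_info/v1/graph"
PATCH="{\"spec\":{\"upstream\":\"${POLICY_ENGINE_GRAPH_URI}\"}}"

# Confirm the patch is valid JSON and the upstream landed where expected
echo "$PATCH" | jq .spec.upstream
```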

If everything is set up properly, you should see that an upgrade is available instead of the usual error you get in disconnected environments.

If you see an x509 error instead of available updates, repeat the steps in this guide, ensuring that the bundle contains the ingress certificate of the Hub cluster to enable a secure connection.
After the cluster finishes rolling out, you should see a graph with the option to update.

But before pressing the update button (I know it's tempting), there is one small step left.

Apply image signature file

The YAML to apply the image signature file can be found in the folder which holds the version you brought into your disconnected environment, but the procedure has to follow the steps in this guide:

Installing and configuring the OpenShift Update Service | Updating clusters | OpenShift Container Platform 4.9

After finding the signature yaml for the desired version, simply apply it to your cluster.

If, like me, you also have ACM installed on your hub cluster, you can use this policy to apply the signature file: https://gitlab.com/DavidDaskalo/update-policy/-/raw/main/signature-apply.yaml
I used version 4.9.5 here, so just change the name and binary data to fit your version.

Cluster Updating!

Now that we have finished configuring the hub cluster and the managed cluster, we can finally press the magic button!
After clicking update, a window will pop up asking which version you wish to update to; pick the version of the signature file you just applied!

Cluster updating from 4.9.0 to 4.9.5
View from the ACM

Now let the cluster start handling everything for you.

Conclusion

The OpenShift Update Service Operator gives system administrators a very strong capability: managing OpenShift versions and updates from one cluster to many others.
Want a different version? No problem, just add it to the release repository of your graph data and it will show up in the graph.
You no longer have to handle updates for every single cluster; all you have to do is maintain one central hub cluster, and all of the connected clusters can be updated according to the graph you choose!
