How to Run CRI-O 1.9.10 with OpenShift Container Platform 3.9 and Red Hat Enterprise Linux 7.4

Scott McCarty
Published in cri-o
Apr 10, 2018 · 4 min read

This article is a quick update of the blog entry I wrote for testing CRI-O Alpha with OpenShift 3.6. Since then, CRI-O 1.9 system containers and rpms have been released as generally available (GA) with OpenShift 3.9. Since the alpha, things have gotten easier and this guide walks you through how to run the latest bits.

You might ask yourself, why would I want to run CRI-O instead of docker? Well, here are six great reasons. The short explanation is, it’s lean, stable, secure and boring. Yes, boring. It only does what it needs to do. It’s a Dump Truck that can carry two tons of dirt, not a Ferrari with a photorealistic backup camera, and a dump truck is what you want in production. Also, it’s low maintenance (overlayfs), doesn’t require any configuration, and its versions are pegged to Kubernetes releases. Finally, it’s managed with an open governance model and developed in a truly community-driven open source fashion. In short, it fits Kubernetes like a glove.

Now, let’s get to the meat and potatoes. To be clear, this is a generally available release, which means that you can call Red Hat and file tickets for support. I am planning on running CRI-O for my production workloads (ticket system, wiki, blogs, etc). These bits are provided by Red Hat with your OpenShift Container Platform subscription. If you want to test the production bits, and don’t have a subscription, sign up for a free trial here.

Unlike my previous blog entry, I am only going to show you how to install a cluster from scratch — I am not going to highlight how to modify an existing cluster, but you could probably piece it together from the previous post.

Prerequisites

Optional: first, let’s install the binaries. With the GA bits, this is all you have to do by hand. I haven’t had a chance to test the installer option below (openshift_crio_use_rpm=True), but it should remove the need for this step entirely. Once CRI-O is installed, the rest is handled by the Ansible based OCP 3.9 installer:

yum install -y cri-o cri-tools
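
If you want to sanity check that the packages landed before handing things over to the installer, the version flags are a quick way to do it (the crio and crictl binaries come from the cri-o and cri-tools RPMs above; enabling and configuring the service is left to the installer):

# confirm the binaries are present and report their versions
crio --version
crictl --version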

A couple of final notes: at this time, you still have to install the docker container engine and do all of the normal host preparation. This is because, as of OCP 3.9, builds are still done with docker, while production pods run with CRI-O. This actually works out quite nicely. Your Internet-facing production workloads never run under docker, yet you still have a familiar environment for troubleshooting builds. In fact, the experience is quite seamless.
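
For reference, the docker piece of that host preparation is the standard RHEL 7 routine; a minimal sketch looks something like this (the full host preparation steps are in the OCP 3.9 docs):

# install and enable the docker engine used for builds
yum install -y docker
systemctl enable docker
systemctl start docker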

Start The Install

The Ansible based OCP 3.9 installer makes it pretty easy to install and configure CRI-O.

Here are my quick install instructions for OCP on RHEL (mileage may vary). Full OCP 3.9 installation instructions here.
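
If you haven’t used it before, the Quick Installer ships in the atomic-openshift-utils package and is started with a single command (this assumes your hosts are already registered and the OCP 3.9 repos are enabled):

# install the quick installer and kick off the interactive install
yum install -y atomic-openshift-utils
atomic-openshift-installer install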

I have found that the easiest way to install OpenShift is to let the Quick Installer build a basic Ansible inventory for you, make some modifications to it, and then let the rest of the installer complete. Alternatively, once the Ansible inventory is created, you could run an Advanced Installation. Run the first part of the Quick Installer; once the final configuration is written, you will see the text below:

Do the above facts look correct? [y/N]: y
Wrote atomic-openshift-installer config: /root/.config/openshift/installer.cfg.yml
Wrote Ansible inventory: /root/.config/openshift/hosts
Ready to run installation process.
If changes are needed please edit the config file above and re-run.
Are you ready to continue? [y/N]:

Before you press “y” and allow the installer to continue, add the following lines to the installer configuration. They enable CRI-O, tell the installer to use the RPM packages, and point the installer at the right registry for images. (If this is a small test environment that doesn’t meet all of the hardware requirements, you may also need to disable some of the installer’s pre-flight checks.)

vi /root/.config/openshift/hosts

These lines are slightly modified from the OCP 3.9 release notes. The “oreg_url” variable below avoids having to fix a bunch of things post-installation, as we had to when testing CRI-O with OCP 3.6. Also, if it helps, here is a working example Ansible inventory which I used for my production environment:

[OSEv3:vars]
...
openshift_crio_use_rpm=True
openshift_use_crio=True
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
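
For context, those variables live in the [OSEv3:vars] section of a normal OCP 3.9 inventory. The snippet below is just a minimal sketch of the surrounding structure the Quick Installer generates; the hostnames are placeholders, not values from my environment:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
openshift_use_crio=True
openshift_crio_use_rpm=True
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_schedulable=True
node1.example.com openshift_node_labels="{'region': 'infra'}"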

Now, finish the installer:

If changes are needed please edit the config file above and re-run.
Are you ready to continue? [y/N]: y

Once the installer completes, you should have some pods running under CRI-O.

oc get pods
NAME                       READY     STATUS      RESTARTS   AGE
docker-registry-1-lkb3c    1/1       Running     0          54s
registry-console-1-bfj3k   1/1       Running     0          54s
rhel7                      0/1       Completed   0          1d
router-1-vnpz5             0/1       Pending     0          53s

To verify, run the following docker command. Notice that no containers are running. This is because CRI-O is doing all of the work.

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
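
You can also ask CRI-O directly what it is running. The crictl tool installed earlier talks to the runtime over the CRI socket, so on the node, something like this should show the same pods that oc reported (if crictl isn’t already pointed at CRI-O’s socket via /etc/crictl.yaml, pass it explicitly with the --runtime-endpoint flag; exact column layout depends on the crictl version):

# list the pod sandboxes and containers CRI-O is managing
crictl pods
crictl ps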

Now, run a quick test yourself:

oc run --restart=Never --attach --stdin --tty --image registry.access.redhat.com/rhel7/rhel rhel7 bash

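When you exit the shell, the pod completes. If you want to double-check which runtime handled it, the container ID reported by oc describe carries a runtime prefix; under CRI-O it starts with cri-o:// rather than docker:// (a quick sanity check, not a required step):

# the Container ID field shows which runtime ran the container
oc describe pod rhel7 | grep "Container ID"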

Conclusion

The beauty of the modular architecture of Kubernetes is that it is easy to swap out things like the container engine — in this case swapping docker for CRI-O.

The above instructions should give you a very functional, production-like environment. Builds will continue to work by using the docker daemon, but production pods will run under CRI-O. Quite elegant.

Let me know if you have any feedback, and I would be happy to incorporate it!

Scott McCarty
At Red Hat, Scott McCarty is technical product manager for the container subsystem team, which enables key product capabilities in OpenShift & RHEL