How to Test CRI-O with OpenShift Container Platform 3.6 and Red Hat Enterprise Linux 7.4

So, it starts with a thought: what if? We wanted to see how difficult it would be to test CRI-O with RHEL and OCP. The answer: it wasn’t bad at all, especially for alpha software.

You might ask yourself, why would I want to run CRI-O instead of docker? Well, here are six great reasons. The short explanation is: it’s lean, stable, secure, and boring. Yes, boring. It only does what it needs to do. It’s a dump truck that can carry two tons of dirt, not a Ferrari with a photorealistic backup camera, and a dump truck is what you want in production. Finally, it’s managed with an open governance model and developed in a truly community-driven, open source fashion.

Now, let’s get to the meat and potatoes. To be clear, this is for testing only; it is not meant for production yet. But if you are like me, you are probably already familiar with RHEL + OCP, so swapping out one small piece for testing is a lot easier than using all upstream code, where a lot of things can differ from what we are used to. :-)

First, you have two choices:

  1. Convert an existing installation of OCP
  2. Start with a fresh install of OCP

With either method, there are some prerequisites.


First, let’s grab and install the CentOS packages; they will work fine for testing:

rpm -ivh --nodeps
yum install -y cri-o

Now, let’s make a quick change to the CRI-O configuration:

vi /etc/crio/crio.conf


storage_option = [
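Only the key name survives in this excerpt. For reference, the relevant stanza in /etc/crio/crio.conf is TOML, and a typical setup looks like the following; the overlay driver and option values here are assumptions, so adjust them to your host’s storage configuration:

```toml
# [crio] storage settings - example values only, adjust for your host
storage_driver = "overlay"
storage_option = [
        "overlay.override_kernel_check=1"
]
```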

Now, start CRI-O:

systemctl enable --now crio
systemctl status crio

Convert Existing

Alright, we are getting close :-) Now configure OpenShift by passing some arguments to the Kubelet.

vi /etc/origin/node/node-config.yaml

Make the kubeletArguments section match below (the keys here line up with the Ansible variable used in the fresh-install method):

kubeletArguments:
  node-labels:
  - region=infra
  image-service-endpoint:
  - "/var/run/crio.sock"
  container-runtime-endpoint:
  - "/var/run/crio.sock"
  container-runtime:
  - "remote"
  runtime-request-timeout:
  - "15m"
  enable-cri:
  - "true"

Restart the OpenShift Node:

systemctl restart atomic-openshift-node.service

Stop the docker daemon and disable it:

systemctl disable --now docker

Start Fresh

If you don’t have OpenShift installed yet, this is a pretty easy method: all you have to do is modify the Ansible installer configuration and everything happens for you. You never even need to install docker. Woot.

Here are my quick install instructions for OCP on RHEL (mileage may vary); the full OCP 3.6 installation instructions are in the official documentation.

Run the first part of the installer. Once the final configuration is written, you will see the text below:

Do the above facts look correct? [y/N]: y
Wrote atomic-openshift-installer config: /root/.config/openshift/installer.cfg.yml
Wrote Ansible inventory: /root/.config/openshift/hosts
Ready to run installation process.
If changes are needed please edit the config file above and re-run.
Are you ready to continue? [y/N]:

Before you press “y” and allow the installer to continue, add the following lines to the installer configuration. The first sets the kubelet options; the second skips some checks, in case this is a small test environment which doesn’t meet all of the hardware requirements:

vi /root/.config/openshift/hosts

Add the following:

openshift_node_kubelet_args={'image-service-endpoint': ['/var/run/crio.sock'], 'container-runtime-endpoint': ['/var/run/crio.sock'], 'container-runtime': ['remote'], 'runtime-request-timeout': ['15m'], 'enable-cri': ['true']}
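Note that only the first of the two lines appears above; the check-skipping line is missing from this excerpt. In openshift-ansible it would typically look like the following, where the variable name and the list of checks are assumptions you should verify against your installer version:

```ini
# Assumption: skip hardware preflight checks on an undersized test box
openshift_disable_check=memory_availability,disk_availability
```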

Now, finish the installer:

If changes are needed please edit the config file above and re-run.
Are you ready to continue? [y/N]: y

Final Steps For Both Methods

Modify the DeploymentConfiguration objects for the router, registry, and registry console to use the full image path. This is necessary because the docker daemon has a search path, similar to DNS, that it checks for images, while CRI-O does not. This is fixed in current versions of CRI-O (as of about a week ago), but the fix has not made its way into the packages yet.


Router:

oc edit dc router

Add the full path to the image directive:
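For illustration, the edited image directive would look something like this; the exact image name and tag below are assumptions based on the OCP 3.6 defaults, so copy the registry path that matches your own DeploymentConfig:

```yaml
# Hypothetical example; verify the image name and tag in your cluster
image: registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6
```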



Registry:

oc edit dc docker-registry

Add the full path to the image directive:
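Again for illustration, the registry’s directive would look something like this (image name and tag are assumptions based on the OCP 3.6 defaults):

```yaml
# Hypothetical example; verify the image name and tag in your cluster
image: registry.access.redhat.com/openshift3/ose-docker-registry:v3.6
```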

Registry Console:

oc edit dc registry-console

Add the full path to the image directive:
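And for the registry console, something like this (image name and tag are assumptions; in OCP 3.6 the console image lives in the same registry namespace):

```yaml
# Hypothetical example; verify the image name and tag in your cluster
image: registry.access.redhat.com/openshift3/registry-console:v3.6
```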


Delete each of the pods, and things should start working with CRI-O now:

for i in `oc get pods | grep -E 'router|registry' | awk '{print $1}'`; do oc delete pod $i; done
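As a self-contained illustration of the filter in that loop (the sample pod listing below is made up), grep selects the router and registry lines, and awk prints just the pod name from the first column:

```shell
# Pipe a sample 'oc get pods'-style listing through the same filter as above
printf 'docker-registry-1-lkb3c 1/1 Running 0 54s\nrhel7 0/1 Completed 0 1d\nrouter-1-vnpz5 0/1 Pending 0 53s\n' \
  | grep -E 'router|registry' | awk '{print $1}'
# prints: docker-registry-1-lkb3c
#         router-1-vnpz5
```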

You should now have working pods running under CRI-O:

oc get pods
docker-registry-1-lkb3c 1/1 Running 0 54s
registry-console-1-bfj3k 1/1 Running 0 54s
rhel7 0/1 Completed 0 1d
router-1-vnpz5 0/1 Pending 0 53s

And now, take a look with crioctl:

crioctl ctr list

You will see magic:

ID: 5b8f0772358ef031df7af7a7201f135f44da44f1f411225fb3f28570dad9cc3d
Pod: 5d3d39bbc564ba0c45bda3d99ae96f46ac25f1de103cbe09c49100989afb094a
Name: router
Attempt: 0

Finally, configure OpenShift to use the full path for all new image pulls:

vi /etc/origin/master/master-config.yaml

Change the imageConfig section to match below:

latest: false
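For context, the full imageConfig stanza in master-config.yaml normally looks like the following; only the latest line survives in this excerpt, and the format string shown is the OCP 3.6 default fully qualified path, which is an assumption here:

```yaml
imageConfig:
  # Fully qualified registry path; OpenShift substitutes ${component} and ${version}
  format: registry.access.redhat.com/openshift3/ose-${component}:${version}
  latest: false
```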

Restart the master:

systemctl restart atomic-openshift-master.service

Now, run a quick test; this starts an interactive rhel7 pod under CRI-O:

oc run rhel7 --restart=Never --attach --stdin --tty --image=rhel7 -- bash

So, whether you converted an existing test installation or installed a new one, it really wasn’t too bad, right? The beauty of the modular architecture of Kubernetes is that it is easy to swap out parts and try new things.

The above instructions should give you a very functional, production-like environment. Builds won’t work because there is no docker daemon; running without a docker daemon is, in fact, something people often want in production.

Let me know if you have any feedback, and I would be happy to incorporate it!

Story by Scott McCarty.