
Using CRI-O as container runtime for Kubernetes

Arun Prasad
Published in Nerd For Tech
5 min read · Jan 12, 2021


In this post we will see how to set up CRI-O as a container runtime for Kubernetes.

What’s a container runtime?

A container runtime is software responsible for running and managing containers on a node. Docker is the most widely known container runtime, but there are a few others in the market, such as containerd, rkt and CRI-O.

With version 1.20, Kubernetes has deprecated Docker (specifically, the dockershim) as a supported container runtime. For Docker this isn’t a big deal, because Docker is not just a container runtime but a suite of products that can still be used to build and run containers. You can read this post for more information.

CRI-O

CRI-O is a lightweight, OCI-compliant container runtime, which means we can use any compliant registry to store images and run any OCI-compliant container. You can read more about the OCI (Open Container Initiative) at https://opencontainers.org/.

A comparison between Docker, containerd and CRI-O looks like this:

[Image: Docker vs Containerd vs CRI-O comparison]

Installation

In this demo we will be using two EC2 VMs, one as the master node and the other as the worker node. The necessary configuration has already been done at the network layer so that these nodes can communicate with each other for Kubernetes to work.

The whole process is divided into three phases: pre-requisites, installing CRI-O, and installing Kubernetes.

Pre-requisites:

We will have to load two kernel modules on both nodes and set a few sysctl parameters required by Kubernetes networking. Execute the commands below on both nodes:

Note: Please ignore the comments in brackets while copying the commands.

#modprobe overlay  ( enables the overlayFS storage driver )
#modprobe br_netfilter ( enables VxLAN for pod-to-pod communication )
#cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
#sysctl --system
#swapoff -a ( the kubelet requires swap to be disabled )
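One caveat worth noting: the modprobe commands above load the modules only for the current boot. A common companion step (my addition, not part of the original instructions; the filename is arbitrary) is a modules-load.d file so they come back after a reboot:

```
# /etc/modules-load.d/crio.conf — kernel modules to load at boot
overlay
br_netfilter
```

Similarly, swapoff -a is not persistent; to keep swap disabled permanently, comment out the swap entry in /etc/fstab.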

Install cri-o:

We have to set two environment variables on both nodes before we start downloading the packages: OS and VERSION. OS depends on the flavor and version of the operating system running on the nodes. Use the lookup table below to set the value; this information is also available on the CRI-O website.

Operating system        $OS
Ubuntu 20.04            xUbuntu_20.04
Ubuntu 19.10            xUbuntu_19.10
Ubuntu 19.04            xUbuntu_19.04
Ubuntu 18.04            xUbuntu_18.04

To determine the version of Ubuntu, you can run the command below:

#lsb_release -a

Since I am running Ubuntu 18.* on both nodes, I will set the value as below:

#OS=xUbuntu_18.04

Next, set the VERSION of cri-o to download. This should match the version of Kubernetes you are planning to deploy. I will be using Kubernetes 1.20 and hence will set VERSION to 1.20.

#VERSION=1.20
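Before adding the repositories, it can help to sanity-check what the two variables will expand to in the repository URLs (a quick check I am adding here, not part of the original steps):

```shell
# Print the repository paths that OS and VERSION will expand to
OS=xUbuntu_18.04
VERSION=1.20
base="https://download.opensuse.org/repositories"
echo "$base/devel:/kubic:/libcontainers:/stable/$OS/"
echo "$base/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/"
```

If the printed URLs don’t open in a browser, that OS/VERSION combination isn’t published and apt-get update will fail later.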

Now we can execute the commands below to add the package repositories and their signing keys:

#echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
#echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
#curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | apt-key add -
#curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

Update the apt repository and install cri-o, cri-o-runc and cri-tools.

#apt-get update
#apt-get install cri-o cri-o-runc cri-tools

Once the packages are installed, we will edit the /etc/crio/crio.conf file and set conmon = "/usr/bin/conmon". conmon is a small utility that CRI-O uses to monitor each container. By default the conmon setting in crio.conf is blank, and we have to replace it with the path of the conmon binary.
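If you prefer not to open an editor, the same change can be scripted with sed. Here is a sketch demonstrated on a temporary copy of the file (assuming the stock file ships the blank conmon = "" entry); on the nodes you would run the sed line against /etc/crio/crio.conf itself:

```shell
# Demonstrate the conmon fix on a sample of crio.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.runtime]
conmon = ""
EOF

# Point the conmon setting at the binary installed by the package
sed -i 's|^conmon = .*|conmon = "/usr/bin/conmon"|' "$conf"

grep '^conmon' "$conf"   # conmon = "/usr/bin/conmon"
```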

After editing the crio.conf file, enable and start the crio service.

#systemctl enable crio.service
#systemctl start crio.service
#systemctl status crio.service

Run the crictl info command to check whether CRI-O was installed and started properly.

#crictl info
{
  "status": {
    "conditions": [
      {
        "type": "RuntimeReady",
        "status": true,
        "reason": "",
        "message": ""
      },
      {
        "type": "NetworkReady",
        "status": true,
        "reason": "",
        "message": ""
      }
    ]
  }
}
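Both conditions should report "status": true. A quick way to count them with grep — shown here against a saved copy of the JSON above; on the node you would pipe crictl info into grep directly:

```shell
# Count the '"status": true' lines in the crictl info output
ready=$(grep -c '"status": true' <<'EOF'
{
  "status": {
    "conditions": [
      { "type": "RuntimeReady", "status": true },
      { "type": "NetworkReady", "status": true }
    ]
  }
}
EOF
)
echo "$ready"   # 2 means both RuntimeReady and NetworkReady are true
```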

Install Kubernetes:

We are going to follow the standard kubeadm method for setting up a two-node cluster. Follow this installation guide to install kubectl, kubeadm and kubelet (version 1.20). We don’t have to follow the container runtime section of that guide, since we have already set up CRI-O in the steps above.

Once the packages are installed, we have to configure kubelet on both nodes to use systemd as the cgroup driver. By default CRI-O uses systemd as its cgroup driver, whereas kubelet is set to use cgroupfs. We have to edit the kubelet service file and add an extra KUBELET_EXTRA_ARGS line, as shown in the snippet below.

#vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_EXTRA_ARGS=--feature-gates='AllAlpha=false,RunAsGroup=true' --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

Save the file and execute below commands to reload systemd and then restart kubelet service:

#systemctl daemon-reload
#systemctl restart kubelet
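As the comments inside 10-kubeadm.conf themselves suggest, /etc/default/kubelet is the supported place for such overrides. An equivalent, less intrusive alternative to editing the drop-in directly would be a file like the following (a sketch based on the runtime and cgroup flags above, not from the original post), followed by the same daemon-reload and restart:

```
# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint=unix:///var/run/crio/crio.sock --runtime-request-timeout=5m
```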

Note: For this demonstration I will be deploying flannel as the network plugin, so pod-network-cidr has to be set accordingly.

We can now execute the kubeadm init command and wait for the control plane to be up and running.

#kubeadm init --pod-network-cidr=10.244.0.0/16

Once the control plane is up, we can deploy the flannel plugin with the command below. Make sure to copy the kubeconfig file as instructed in the kubeadm output before you run kubectl.

#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once the resources are created, check that all the pods under the kube-system namespace are running, then execute the kubeadm join command (copied from the kubeadm output) on the worker node to join it to the cluster. Execute the commands below on the master node and check that both nodes are up and show the status ‘Ready’:

#kubectl get pods -n kube-system
#kubectl get nodes -o wide

In the above output, check the CONTAINER-RUNTIME column and you will notice that it shows cri-o://1.20.0.

Now you can deploy a sample nginx container and check whether it gets deployed properly. If you haven’t encountered any errors so far, this step should work just fine.

#kubectl run nginx --image=nginx
#kubectl get pods

That’s All, Folks!

Wasn’t that simple! We now have a Kubernetes cluster that uses CRI-O as the container runtime.

So why CRI-O? I am working on setting up Kata Containers, and for that either containerd or CRI-O has to be set as the container runtime. More on this in my next post.

You can find a video tutorial of the above steps on this YouTube channel. Thanks for taking the time to read this.
