Running a Kubernetes cluster on RancherOS

Ivan Mikushin
Mar 17, 2015

RancherOS is the smallest (and arguably the coolest) operating system to run Docker containers. How about running distributed apps, made of lots of containers providing different services to each other? One of the emerging technologies serving that purpose is Kubernetes.

I’m going to show you how to run Kubernetes on RancherOS. We’ll use Vagrant for a quick start.

TL;DR

To begin using Kubernetes on RancherOS, just clone this GitHub repo and follow the README:
https://github.com/imikushin/rancheros-k8s-vagrant
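
If you just want to see it work, the whole thing boils down to something like this (assuming you have Vagrant and VirtualBox installed; see the README for specifics):

git clone https://github.com/imikushin/rancheros-k8s-vagrant
cd rancheros-k8s-vagrant
./scripts/etcd-discovery   # grab a fresh etcd discovery URL (explained below)
vagrant up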

To learn what Kubernetes’ essential parts are and how to deploy them yourself, read on.

Tools

Vagrant

We’ll need a multi-machine cluster of RancherOS VMs. The easiest way to start is Vagrant, so I’ve hacked a Vagrant box and a Vagrantfile to do just that (stay tuned for an upcoming “prequel” post). You can clone the repo, run “vagrant up” in its dir, and you’ll have a few ready-to-use RancherOS VMs, accessible with “vagrant ssh” and able to communicate via a host-only VirtualBox network.

Docker

If you want to run anything on RancherOS, you’re going to run it in a Docker container, or, in our case, a bunch of containers. For running infrastructure services like Kubernetes, RancherOS has a tool called “system-docker”.

All the necessary components are packaged into these containers:

  • imikushin/flannel — contains flannel and etcd binaries
  • imikushin/kubernetes — contains all kubernetes binaries

These are not specific to RancherOS, and you can use them to run Kubernetes anywhere.

Architecture

How exactly does one run Kubernetes? Previously, I’d had it run for me by shell scripts from its distribution or by existing Vagrant configurations like this one. So that was the question I asked myself when I started this little project.

I’ve found a good introduction to Kubernetes on DigitalOcean Community. Basically, Kubernetes is a bunch of processes on master and minion nodes, collaborating to provide a distributed container runtime, manageable via a RESTful API on the master node.

Master processes are:

  • kube-apiserver — you know, the API server
  • kube-controller-manager — handles replication
  • kube-scheduler — assigns workloads to minion nodes

Minion processes are:

  • kubelet — receives commands and work from the master
  • kube-proxy — exposes the deployed containers to clients

Common processes are:

  • etcd — cluster key-value store, needed by almost all other k8s processes
  • flannel — provides cluster-wide overlay network
  • docker daemon, configured to use the flannel overlay network

Kubernetes architecture. Source: Kubernetes project

Let’s build our own Kubernetes cluster. We’ll start with multiple RancherOS machines deployed with Vagrant/VirtualBox:
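
Here’s a minimal sketch of the machine definitions (the repo’s actual Vagrantfile differs in details like the box name and the number of nodes):

# Vagrantfile (a sketch): a few RancherOS VMs on a host-only network,
# named node-01, node-02, ...
Vagrant.configure(2) do |config|
  config.vm.box = "rancheros"   # the hacked RancherOS box mentioned above

  (1..3).each do |i|
    name = "node-%02d" % i
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", type: "dhcp"
    end
  end
end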

We’ll put our host scripts (to be run on the host machine only) in the ‘scripts’ dir and all other sources in ‘src’ (we already have it). To start the VMs, run

vagrant up

So, we have a bunch of VMs with RancherOS started with Vagrant. Now we need these VMs to run the processes according to their roles (master or minion). Let’s consider ‘node-01’ to be the master:
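
One way to do that is to drop a marker file onto node-01 from the Vagrantfile (a sketch, continuing the one above):

# inside the machine-definition loop: mark node-01 as the master
if name == "node-01"
  node.vm.provision "shell", privileged: false,
    inline: "touch /home/rancher/.k8s-master"
end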

We’ll check for the ‘.k8s-master’ file later.

Common processes

etcd

All cluster nodes run etcd, a distributed configuration database, needed by flannel, kubelet, kube-proxy and kube-apiserver.

You want your etcd cluster to configure itself somehow (otherwise you’ll have to provide static addresses of all nodes to every etcd instance). This feature is called discovery. Luckily, there’s a public etcd discovery service, and we’re going to use it. We’re not running this in production, so let’s specify the cluster size=1: the first node to boot (‘node-01’) becomes the only “real” participant, and the others become “proxies”.

In order to provide a fresh discovery URL, we have to run this script on our host every time before bringing up our cluster:

./scripts/etcd-discovery
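
That script can be as simple as this (a sketch; note the size=1 discussed above):

# scripts/etcd-discovery (a sketch): request a fresh discovery URL for a
# one-node etcd cluster and save it for Vagrant to provision
curl -s 'https://discovery.etcd.io/new?size=1' > .etcd-discovery-url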

The ‘.etcd-discovery-url’ file will be provisioned to all our machines by Vagrant on every boot (see the Vagrantfile sketch at the end of this post).

We run etcd with the ‘--discovery’ option set to the generated discovery URL:
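
A sketch of the etcd start script (the repo’s version may differ in flags and image handling):

# start-etcd.sh (a sketch): run etcd on the host network via system-docker,
# joining the cluster through the provisioned discovery URL.
# The repo's script also consults REGISTRY_MIRROR when pulling images.
DISCOVERY_URL=$(cat /home/rancher/.etcd-discovery-url)
sudo system-docker run -d --name etcd --net host \
  imikushin/flannel \
  etcd --discovery "$DISCOVERY_URL"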

For details on the REGISTRY_MIRROR environment variable, see below.

flannel

All cluster nodes will run flannel, an overlay network for communication between deployed containers. There are other options, like Weave and Open vSwitch, but flannel is much easier to start with on a small local cluster.

First, we need to save the flannel configuration to etcd:
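
A sketch of that configuration script, assuming flannel’s default etcd prefix:

# set-flannel-config.sh (a sketch): write the overlay network config to
# the etcd key flannel reads by default
etcdctl set /coreos.com/network/config "{ \"Network\": \"$FLANNEL_NETWORK\" }"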

That script actually runs inside the ‘imikushin/flannel’ container, which has etcdctl installed:
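
Something like this, with the network CIDR as an illustrative value:

# run the config script inside the flannel container, which has etcdctl;
# 10.244.0.0/16 is just an example address space
sudo system-docker run --rm --net host -e FLANNEL_NETWORK=10.244.0.0/16 \
  imikushin/flannel /set-flannel-config.sh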

We also need to provide ‘FLANNEL_NETWORK’, which specifies our overlay network’s IP address space (in CIDR notation).

Now we can run flannel:
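
A sketch (flannel needs host networking and enough privileges to create its tunnel device):

# start-flannel.sh (a sketch): the daemon reads its config from etcd and
# writes the acquired subnet to /var/run/flannel/subnet.env
sudo system-docker run -d --name flannel --net host --privileged \
  -v /var/run/flannel:/var/run/flannel \
  imikushin/flannel flanneld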

Docker daemon

RancherOS has a system container called ‘userdocker’ responsible for running the docker daemon for user containers. We need to replace it with our customized docker daemon (currently, we can’t simply pass ‘userdocker’ the network configuration we need):
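
A sketch of the idea, using docker 1.5-era daemon flags (the repo runs this as a system-docker container, with details omitted here):

# start-docker.sh (a sketch): stop the stock userdocker, then start the
# user docker daemon on the flannel subnet
sudo system-docker stop userdocker
. /var/run/flannel/subnet.env
docker -d --bip="$FLANNEL_SUBNET" --mtu="$FLANNEL_MTU"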

For the machine to be able to participate in a Kubernetes cluster, its docker daemon should be configured to run deployed containers on the overlay network.

We source ‘/var/run/flannel/subnet.env’, produced by the flannel daemon, and use its FLANNEL_SUBNET and FLANNEL_MTU variables to configure the docker daemon for Kubernetes.

Master processes

Now that we have the common processes, we can start the master- and minion-specific processes. But there’s one thing to do before we go on.

Minions need to know the Kubernetes master API endpoint to start the ‘kubelet’ process, so let’s publish it to etcd:
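
For example (the etcd key and MASTER_IP handling are illustrative):

# on the master: publish the API endpoint for minions to pick up;
# MASTER_IP is assumed to hold this node's host-only network address
etcdctl set /kubernetes/master "http://${MASTER_IP}:8080"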

kube-apiserver

Start Kubernetes API server:
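
A sketch, with flag names as they were in the 2015-era Kubernetes binaries:

sudo system-docker run -d --name kube-apiserver --net host \
  imikushin/kubernetes \
  kube-apiserver --address=0.0.0.0 --port=8080 \
    --etcd_servers=http://127.0.0.1:4001 \
    --portal_net=10.0.0.0/16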

About the ‘--portal_net=10.0.0.0/16’ magic constant: see the discussion below (in the kube-proxy section).

kube-controller-manager

Start Kubernetes Controller Manager:
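
Again a sketch, in the same style:

sudo system-docker run -d --name kube-controller-manager --net host \
  imikushin/kubernetes \
  kube-controller-manager --master=127.0.0.1:8080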

kube-scheduler

Start Kubernetes Scheduler:
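
Same pattern (a sketch):

sudo system-docker run -d --name kube-scheduler --net host \
  imikushin/kubernetes \
  kube-scheduler --master=127.0.0.1:8080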

Minion processes

First, let’s retrieve the Kubernetes API server endpoint saved by the master on boot. We’ll need this value for the ‘kubelet’ service:
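
Assuming the illustrative etcd key from the master section above:

# on a minion: read the endpoint published by the master
KUBERNETES_MASTER=$(etcdctl get /kubernetes/master)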

kube-proxy

Start Kubernetes Proxy:
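
A sketch (kube-proxy programs iptables, hence the extra privileges):

sudo system-docker run -d --name kube-proxy --net host --privileged \
  imikushin/kubernetes \
  kube-proxy --master="$KUBERNETES_MASTER"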

Just a quick note about the ‘--portal_net’ magic constant above (in the kube-apiserver section). It is the network address space for services exposed by pods deployed to the Kubernetes cluster. kube-proxy uses it to expose these services to clients (primarily, other deployed pods).

Service proxying details. Source: Kubernetes project

kubelet

Start Kubelet service:
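
A sketch; kubelet drives the user docker daemon, so it gets the docker socket (flag names are 2015-era):

sudo system-docker run -d --name kubelet --net host --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  imikushin/kubernetes \
  kubelet --address=0.0.0.0 --api_servers="$KUBERNETES_MASTER"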

Register minion with the master

There’s one last thing we need to do: register the just-created minion node with the API server. One would expect kubelet to register itself with the API server on start (it has the API server’s address, right?), but that doesn’t happen. So we must step in and do it ourselves:
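
A sketch of the registration call; the field names follow the 2015-era v1beta1 API, and NODE_IP is assumed to hold this VM’s host-only network address:

# register-minion.sh (a sketch): POST this node to the master's API
curl -s -X POST "$KUBERNETES_MASTER/api/v1beta1/minions" -d "{
  \"kind\": \"Minion\", \"apiVersion\": \"v1beta1\",
  \"id\": \"$NODE_IP\", \"hostIP\": \"$NODE_IP\"
}"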

Almost there

Now, because we have the ‘.k8s-master’ file provisioned to our master node, it’s trivial to decide whether to run ‘start-k8s-master.sh’ or ‘start-k8s-minion.sh’:
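
For instance (the dispatcher’s own name is illustrative):

# start-k8s.sh (a sketch): pick the role based on the marker file
if [ -e /home/rancher/.k8s-master ]; then
  sh ./start-k8s-master.sh
else
  sh ./start-k8s-minion.sh
fi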

Just a few more things left to do.

Remove stale containers

When writing your start scripts on RancherOS, beware of old stopped (but not removed) containers. RancherOS won’t remove them for you, so it’s a good idea to remove them before running new instances, e.g. like this:
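
For example, for the containers we’ve started in this post:

# remove stale stopped containers before starting fresh instances;
# errors for names that don't exist yet are ignored
for c in etcd flannel kube-apiserver kube-controller-manager \
         kube-scheduler kube-proxy kubelet; do
  sudo system-docker rm "$c" 2> /dev/null
done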

Make sure you do this for all named containers you run (not only those with the ‘-d’ flag): Docker won’t let you run another container with the same name.

REGISTRY_MIRROR

You might want to run a Docker registry mirror on your host machine to reduce container image downloads:
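
With the 2015-era registry (v1) image, a mirror on the host looks something like this:

docker run -d --name registry-mirror -p 5000:5000 \
  -e STANDALONE=false \
  -e MIRROR_SOURCE=https://registry-1.docker.io \
  -e MIRROR_SOURCE_INDEX=https://index.docker.io \
  registry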

Here’s a script to get the registry mirror endpoint on a cluster node:
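
A sketch, relying on the fact that on a VirtualBox NAT interface the host is reachable via the VM’s default gateway (typically 10.0.2.2):

# registry-mirror.sh (a sketch): derive the mirror endpoint from the
# default gateway
GATEWAY=$(ip route | awk '/^default/ { print $3; exit }')
echo "http://${GATEWAY}:5000"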

Detach the start script

Let’s make our start script execute in the background and move on:
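
A sketch of the detaching wrapper:

# run the start script in the background so provisioning returns
# immediately; keep a log for later inspection
nohup sh ./start-k8s.sh > /home/rancher/start-k8s.log 2>&1 &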

Finish the Vagrantfile

Make sure you provision all your scripts to cluster nodes and launch the start script:
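
Continuing the Vagrantfile sketch from above (paths and names are illustrative):

# inside the machine-definition loop: provision the discovery URL and the
# scripts, then kick off the detached start script on every boot
node.vm.provision "file", run: "always",
  source: ".etcd-discovery-url", destination: ".etcd-discovery-url"
Dir.glob("src/*").each do |f|
  node.vm.provision "file", source: f, destination: File.basename(f)
end
node.vm.provision "shell", run: "always", privileged: false,
  inline: "cd /home/rancher && nohup sh ./start-k8s.sh > k8s.log 2>&1 &"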

That’s it: now we have all the necessary services in place and can use our shiny new Kubernetes cluster (something like this):

./scripts/etcd-discovery; vagrant up

Okay, let’s try something: you can explore the Kubernetes REST API with “swagger-ui”. My MASTER_IP is 172.28.128.3, so I’ll open the following URL in my browser:
http://172.28.128.3:8080/swagger-ui/

Looks like master is accessible.

Okay, here’s another check. Make sure you have kubectl installed on your local machine and the KUBERNETES_MASTER environment variable set to

http://${MASTER_IP}:8080

where MASTER_IP is your cluster master (‘node-01’) IP address.
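
Then list the cluster nodes (with the 2015-era kubectl; on current clusters this would be ‘kubectl get nodes’):

kubectl get minions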

All right, looks like we have the minions ready.

Now, go ahead and try some Kubernetes examples with your new cluster. Smooth sailing!

Resources

RancherOS project.

Vagrant documentation.

I’ve already mentioned An Introduction to Kubernetes on DigitalOcean Community.

kubernetes-vagrant-coreos-cluster, found on the Kubernetes Getting Started on CoreOS page, was of great help in understanding the moving parts.
