Kubernetes with OpenStack Cloud Provider: Current state and upcoming changes (part 1 of 2)

Arthur Miranda
5 min read · Oct 20, 2017


Introduction

This guide is intended to help you understand the role of the cloud provider in a Kubernetes cluster and the Kubernetes architecture itself, introducing the changes coming in upcoming releases and walking through a practical example that uses OpenStack as the cloud provider.

First of all, you need to understand a little bit about the Kubernetes core architecture to know which components interact with the cloud.
Kubernetes integrates with cloud providers to get information about available nodes, to create load balancers for services and to configure persistent volumes. These actions are performed by multiple Kubernetes components: kube-controller-manager and kube-apiserver, running on the Kubernetes masters, and kubelet, running on the nodes.

  1. Kubernetes Controller Manager (KCM): In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the API server and makes changes attempting to move the current state towards the desired state. The KCM embeds the core control loops shipped with Kubernetes (e.g. replication controller, endpoints controller, namespace controller, and serviceaccounts controller).
  2. Kube API Server: It validates and configures data for the API objects (pods, services, deployments etc), and provides the REST operations to manage them.
  3. Kubelet: It is the primary node agent, reporting the node status, running the pod’s containers, mounting the pod’s volumes, downloading the pod’s secrets, and executing container liveness probes.

My setup has four Kubernetes nodes: one master and three worker nodes. The master also runs kubelet and kube-proxy, because it is itself a cluster node; it just happens to run only the Kubernetes master components, which run as pods too.
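
Once the cluster is up (we will create it just below), you can see this layout for yourself; the paths below assume a kubeadm-based install:

ls /etc/kubernetes/manifests/   # static pod manifests for the master components (apiserver, KCM, scheduler, etcd)
systemctl status kubelet        # kubelet itself runs as a host service on every node, including the master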

Initializing the Kubernetes core components

How do you create a Kubernetes cluster with the OpenStack cloud provider? Well, there are many ways to do it; for this one, you only need a working OpenStack environment and the kubeadm tool, whose function is to bootstrap clusters magically. I hope to write a more advanced post about it in the near future. Follow this guide to get it.
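
For reference, on Ubuntu the kubeadm installation from that guide boils down to something like this (the package repositories may have moved since this was written, so treat it as a sketch and prefer the official guide):

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y docker.io kubelet kubeadm kubectl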

Tip: as you follow the guide above, you will need to open some ports for the master nodes and others for the worker nodes. Create the security groups “k8s-master” and “k8s-worker” on OpenStack (Access & Security -> Security Groups, on Horizon), opening the required ports. You can also create a custom script to do everything else described in the guide, so that your instances are ready for Kubernetes as soon as they launch.
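
If you prefer the OpenStack CLI over Horizon, creating the groups looks roughly like this; the ports below are only a sample of what the kubeadm guide asks for, not the complete list:

openstack security group create k8s-master
openstack security group rule create --protocol tcp --dst-port 6443 k8s-master        # Kubernetes API server
openstack security group rule create --protocol tcp --dst-port 2379:2380 k8s-master   # etcd
openstack security group create k8s-worker
openstack security group rule create --protocol tcp --dst-port 10250 k8s-worker       # kubelet API
openstack security group rule create --protocol tcp --dst-port 30000:32767 k8s-worker # NodePort services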

Once you’ve set up kubeadm, you can start it on the master node. I’m using the v1.8.1 Kubernetes version.

$ kubeadm init --pod-network-cidr=192.168.0.0/16
[...]
Your Kubernetes master has initialized successfully!
< The output lists three follow-up steps. See below >

I will use Calico as the overlay network, enabling the pods to communicate with each other, so I need to pass the flag --pod-network-cidr=192.168.0.0/16 to kubeadm init. For more information/options, see the docs.

After this command, three steps are shown in the output. The first one is to create your $HOME/.kube/config file; basically, this file contains the IP and port of your API server and the auth information needed to access it.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
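
To confirm kubectl is actually picking that file up, you can run:

kubectl cluster-info           # prints the API server address taken from $HOME/.kube/config
kubectl config view --minify   # shows the cluster, user and context currently in use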

The second one is to add an overlay network; as I said above, I will use Calico:

kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
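
It can take a minute for the network add-on to come up, and kube-dns stays Pending until it does. You can watch the progress with:

kubectl get pods -n kube-system -w   # wait until the calico-* and kube-dns pods are Running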

The third one is to join your worker nodes to the cluster, so on each worker node you should run:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
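
If you no longer have the exact command printed by kubeadm init, the token and the CA certificate hash can be recovered on the master with the standard recipe from the kubeadm docs:

kubeadm token list   # bootstrap token(s) created by kubeadm init
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'   # gives the value for sha256:<hash>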

At this point, the kubeadm tool has created the Kubernetes core components and your cluster is bootstrapped.

$ kubectl get pods -n kube-system
NAME READY STATUS
calico-etcd-qhf7m 1/1 Running
calico-kube-controllers-6ff88bf6d4 1/1 Running
calico-node-5m2hd 1/2 Running
calico-node-9dn7h 2/2 Running
calico-node-cx2gq 1/2 Running
calico-node-wrhm2 1/2 Running
etcd-art-m1 1/1 Running
kube-apiserver-art-m1 1/1 Running
kube-controller-manager-art-m1 1/1 Running
kube-dns-545bc4bfd4-kgphs 3/3 Running
kube-proxy-2pgvq 1/1 Running
kube-proxy-9tqpx 1/1 Running
kube-proxy-jfsjc 1/1 Running
kube-proxy-wtzpv 1/1 Running
kube-scheduler-art-m1 1/1 Running
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
art-m1 Ready master 7m v1.8.1
art-w1 Ready <none> 19s v1.8.1
art-w2 Ready <none> 20s v1.8.1
art-w3 Ready <none> 19s v1.8.1

Creating the cloud-config file

Each cloud provider has its own configuration; here is a simple example for OpenStack. You should create this file on all nodes, including the master.

[Global]
auth-url=https://<Keystone endpoint>:5000/v3
domain-id=<your-domain-id>
tenant-id=<your-project-id>
username=<name>
password=<pass>
[LoadBalancer]
subnet-id=<your-nodes-subnet-id>
floating-network-id=<public-network-id>

You can find all the options here.
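
If you are not sure about the IDs referenced above, the OpenStack CLI can help you find them (the resource names will obviously differ in your cloud):

openstack project list      # tenant-id (project ID)
openstack subnet list       # subnet-id of the subnet your nodes are attached to
openstack network list      # floating-network-id of the external/public network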

Configuring KCM and Kube API Server

By default, kubeadm creates the manifest files in /etc/kubernetes/manifests; in this path you can find kube-controller-manager.yaml and kube-apiserver.yaml. You should edit both of them on the master node.

Basically, you should add the cloud-provider and cloud-config args and mount a volume pointing to your cloud-config file.

[...]
spec:
  containers:
  - command:
    - kube-controller-manager (kube-apiserver)
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud.conf
    [...]
    volumeMounts:
    - mountPath: /etc/kubernetes/cloud.conf
      name: cloud-config
      readOnly: true
  [...]
  volumes:
  - hostPath:
      path: /etc/kubernetes/cloud.conf
      type: FileOrCreate
    name: cloud-config
[...]

Once you edit these files correctly, the KCM and the Kube API Server will be restarted. If you receive the message The connection to the server x was refused - did you specify the right host or port?, don't worry: the Kube API Server is restarting, and it's normal to lose the connection to it for a moment.

To confirm that these components have been configured, you can run:

$ kubectl describe pod kube-controller-manager -n kube-system | grep '/etc/kubernetes/cloud.conf'
      --cloud-config=/etc/kubernetes/cloud.conf
      /etc/kubernetes/cloud.conf from cloud-config (ro)
      Path:          /etc/kubernetes/cloud.conf
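
The same check works for the Kube API Server; its pod name carries the node name as a suffix (art-m1 in my case):

kubectl describe pod kube-apiserver-art-m1 -n kube-system | grep cloud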

Configuring Kubelet

This step is needed on all nodes, including the master. Edit the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, adding the --cloud-provider=openstack and --cloud-config=/etc/kubernetes/cloud.conf parameters to the KUBELET_CONFIG_ARGS environment variable, and restart the kubelet service using:

systemctl daemon-reload
systemctl restart kubelet
kubectl get nodes
NAME STATUS ROLES AGE VERSION
art-m1 Ready <none> 8s v1.8.1
art-w1 Ready <none> 8s v1.8.1
art-w2 Ready <none> 9s v1.8.1
art-w3 Ready <none> 9s v1.8.1
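
As a quick sanity check that kubelet really picked the flags up, the running command line should now include --cloud-provider=openstack:

ps -ef | grep [k]ubelet    # the kubelet command line should show the two new flags
systemctl status kubelet   # the flags also appear in the unit's CGroup section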

Good news! You already have your Kubernetes cluster running with the in-tree OpenStack cloud provider.

Good news again! All in-tree cloud providers will be pushed out of the Kubernetes core! But don't be sad: there is a great refactoring in progress, introducing a new architectural element called the Cloud Controller Manager, which I will address in the next part (2/2), making Kubernetes increasingly pluggable with any cloud provider. See ya.

Any question or suggestion: artmr@lsd.ufcg.edu.br
