Deploying Kubernetes On-Premise with RKE and Running OpenFaaS on It — Part 1
I’m a big fan of Rancher and am very excited about how their RKE (Rancher Kubernetes Engine) is going to evolve and ease the way I deploy Kubernetes. As I’m heavily investing my time in OpenFaaS (an open source serverless platform), I’d like an easy way to deploy it on top of a Kubernetes cluster built by RKE. In this post I’d like to show:
- How to deploy a Kubernetes cluster with 2 nodes (1 master and 1 worker) using RKE
- How to deploy OpenFaaS via Helm on the Kubernetes cluster you deployed
The following diagram shows a simple relation of the components:
Prerequisites
- 2 hosts that can run Docker (ver. 1.12 to 17.03); I’ll be using 2 Ubuntu 16.04 hosts with Docker 17.03-ce
- Each of my hosts has 1 CPU core and 1GB RAM
Deploy a Kubernetes Cluster with RKE
If you haven’t read “Announcing RKE, a Lightweight Kubernetes Installer” already, take a look and try it out. In addition, if you have time you should watch “Managing Kubernetes Clusters with Rancher 2.0 — November 2017 Online Meetup” as it explains the newest features of Rancher 2.0 and about RKE as well.
Download RKE
You can download RKE from here. It’s a simple CLI tool to deploy Kubernetes. If you’re using OSX you’ll be downloading rke_darwin-amd64. Rename it to rke and don’t forget to give it execution permissions via chmod +x rke. From here on I’m assuming that you’ve added rke to your PATH. Confirm that you can execute rke. You should see something similar to this:
NAME:
   rke - Rancher Kubernetes Engine, Running kubernetes cluster in the cloud

USAGE:
   rke [global options] command [command options] [arguments...]

VERSION:
   v0.0.8-dev

AUTHOR(S):
   Rancher Labs, Inc.

COMMANDS:
     up       Bring the cluster up
     remove   Teardown the cluster and clean cluster nodes
     version  Show cluster Kubernetes version
     config   Setup cluster configuration
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d    Debug logging
   --help, -h     show help
   --version, -v  print the version
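For reference, the download-and-install steps can be scripted as below. The version tag and release-URL pattern are assumptions based on the usual GitHub releases convention — check the rancher/rke releases page for the actual latest binary name before running it.

```shell
# Hypothetical version tag -- check the rancher/rke releases page for the latest.
RKE_VERSION="v0.0.8"
# Pick the binary for your OS: rke_darwin-amd64 (OSX) or rke_linux-amd64 (Linux).
RKE_BINARY="rke_darwin-amd64"
URL="https://github.com/rancher/rke/releases/download/${RKE_VERSION}/${RKE_BINARY}"
echo "$URL"

# Download, rename, and make executable (uncomment to actually run):
# curl -sSL -o rke "$URL"
# chmod +x rke
# sudo mv rke /usr/local/bin/
```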
Create RKE Config
Now that we can use rke, let’s create a config in order to deploy Kubernetes to our hosts. Execute rke config and you’ll be prompted to answer some questions. The following diagram describes my hosts and their roles in Kubernetes. Basically, Host1 will be a master node and Host2 will be a worker node. With this in mind, my rke config answers look like this:
Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
Number of Hosts [3]: 2
SSH Address of host (1) [none]: 203.104.214.176
SSH Private Key Path of host (203.104.214.176) [none]:
SSH Private Key of host (203.104.214.176) [none]:
SSH User of host (203.104.214.176) [ubuntu]: root
Is host (203.104.214.176) a control host (y/n)? [y]: y
Is host (203.104.214.176) a worker host (y/n)? [n]: n
Is host (203.104.214.176) an Etcd host (y/n)? [n]: y
Override Hostname of host (203.104.214.176) [none]:
Internal IP of host (203.104.214.176) [none]:
Docker socket path on host (203.104.214.176) [/var/run/docker.sock]:
SSH Address of host (2) [none]: 203.104.227.60
SSH Private Key Path of host (203.104.227.60) [none]:
SSH Private Key of host (203.104.227.60) [none]:
SSH User of host (203.104.227.60) [ubuntu]: root
Is host (203.104.227.60) a control host (y/n)? [y]: n
Is host (203.104.227.60) a worker host (y/n)? [n]: y
Is host (203.104.227.60) an Etcd host (y/n)? [n]: n
Override Hostname of host (203.104.227.60) [none]:
Internal IP of host (203.104.227.60) [none]:
Docker socket path on host (203.104.227.60) [/var/run/docker.sock]:
Network Plugin Type [flannel]: calico
Authentication Strategy [x509]:
Etcd Docker Image [quay.io/coreos/etcd:latest]:
Kubernetes Docker image [rancher/k8s:v1.8.3-rancher2]:
Cluster domain [cluster.local]:
Service Cluster IP Range [10.233.0.0/18]:
Cluster Network CIDR [10.233.64.0/18]:
Cluster DNS Service IP [10.233.0.3]:
Infra Container image [gcr.io/google_containers/pause-amd64:3.0]:
Quick Note: My hosts only had a root user, so I’m using root, but any user who can use docker could be set. Additionally, I’m using calico for networking, but flannel and canal are supported as well (weave is probably coming in the next release based on this PR from my co-worker).
This generates a cluster.yml file whose contents should look like this:
nodes:
- address: 203.104.214.176
internal_address: ""
role:
- controlplane
- etcd
hostname_override: ""
user: root
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ""
- address: 203.104.227.60
internal_address: ""
role:
- worker
hostname_override: ""
user: root
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ""
services:
etcd:
image: quay.io/coreos/etcd:latest
extra_args: {}
kube-api:
image: rancher/k8s:v1.8.3-rancher2
extra_args: {}
service_cluster_ip_range: 10.233.0.0/18
kube-controller:
image: rancher/k8s:v1.8.3-rancher2
extra_args: {}
cluster_cidr: 10.233.64.0/18
service_cluster_ip_range: 10.233.0.0/18
scheduler:
image: rancher/k8s:v1.8.3-rancher2
extra_args: {}
kubelet:
image: rancher/k8s:v1.8.3-rancher2
extra_args: {}
cluster_domain: cluster.local
infra_container_image: gcr.io/google_containers/pause-amd64:3.0
cluster_dns_server: 10.233.0.3
kubeproxy:
image: rancher/k8s:v1.8.3-rancher2
extra_args: {}
network:
plugin: calico
options: {}
auth:
strategy: x509
options: {}
addons: ""
system_images: {}
ssh_key_path: ~/.ssh/id_rsa
Install Docker on the Hosts
I’ll use Docker 17.03-ce for this post, but any version Kubernetes supports should work (at the moment, this post deploys Kubernetes 1.8.3). One of the easiest ways to install Docker is with the version-specific install scripts provided by Rancher Labs. The following command should work for Docker 17.03-ce:
curl https://releases.rancher.com/install-docker/17.03.sh | sh
Confirm that your Docker version is correct with docker version:
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64

Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
Register authorized_keys
Be sure you can access your hosts via an SSH key. Suppose you’re going to access a host with the private key ~/.ssh/id_rsa. You will likely have the corresponding public key at ~/.ssh/id_rsa.pub, so cat the file and copy its content. On each of your hosts, paste the content into ~/.ssh/authorized_keys. Confirm that you can access your hosts via ssh.
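A runnable local sketch of what this registration amounts to — on the real hosts the target file is /root/.ssh/authorized_keys, and the paths here are stand-ins so the sketch is safe to run anywhere:

```shell
# A temp directory stands in for the host's ~/.ssh so the sketch runs anywhere.
HOST_SSH_DIR=$(mktemp -d)
# Stand-in for the content of your workstation's ~/.ssh/id_rsa.pub:
PUBKEY="ssh-rsa AAAAB3...example user@workstation"

# The actual "registration" is just an append plus tightened permissions:
echo "$PUBKEY" >> "$HOST_SSH_DIR/authorized_keys"
chmod 600 "$HOST_SSH_DIR/authorized_keys"
cat "$HOST_SSH_DIR/authorized_keys"

# On a real host, ssh-copy-id (where available) does all of this in one step:
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@203.104.214.176
```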
Turn Swap Off in your Hosts
If you’re using your own on-premise machine, it’s likely that swap is on. The kubelet will fail to start, saying something like:
error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained:
Either disable swap with:
sudo swapoff -a
or set fail-swap-on: false in your cluster.yml like this:
kubelet:
image: rancher/k8s:v1.8.3-rancher2
extra_args:
fail-swap-on: false
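Note that swapoff -a only lasts until the next reboot. To keep swap disabled permanently you also need to comment out the swap entries in /etc/fstab. The sketch below demonstrates the edit on a synthetic copy so it’s safe to run anywhere:

```shell
# Synthetic fstab copy so the sketch doesn't touch the real /etc/fstab:
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 none swap sw 0 0\n' > fstab.copy

# Comment out any swap entry (idempotent -- already-commented lines stay commented):
sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^#*/#/' fstab.copy
cat fstab.copy

# On a real host (as root), the equivalent is:
#   swapoff -a
#   sed -i '/[[:space:]]swap[[:space:]]/ s/^#*/#/' /etc/fstab
rm -f fstab.copy fstab.copy.bak
```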
Deploy Kubernetes!
You’re all set now! Confirm that you’re in the directory with cluster.yml and execute rke up. Yes, that’s it! You should see rke deploying the components for Kubernetes on the specified hosts.
INFO[0000] Building Kubernetes cluster
INFO[0000] [ssh] Setup tunnel for host [203.104.214.176]
INFO[0000] [ssh] Setup tunnel for host [203.104.214.176]
INFO[0001] [ssh] Setup tunnel for host [203.104.227.60]
INFO[0002] [certificates] Generating kubernetes certificates
INFO[0002] [certificates] Generating CA kubernetes certificates
INFO[0002] [certificates] Generating Kubernetes API server certificates
INFO[0002] [certificates] Generating Kube Controller certificates
INFO[0002] [certificates] Generating Kube Scheduler certificates
INFO[0003] [certificates] Generating Kube Proxy certificates
INFO[0003] [certificates] Generating Node certificate
INFO[0004] [certificates] Generating admin certificates and kubeconfig
INFO[0004] [reconcile] Reconciling cluster state
INFO[0004] [reconcile] This is newly generated cluster
INFO[0004] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0023] Successfully Deployed local admin kubeconfig at [./.kube_config_cluster.yml]
INFO[0023] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0023] [etcd] Building up Etcd Plane..
INFO[0023] [etcd] Pulling Image on host [203.104.214.176]
INFO[0028] [etcd] Successfully pulled [etcd] image on host [203.104.214.176]
INFO[0028] [etcd] Successfully started [etcd] container on host [203.104.214.176]
INFO[0028] [etcd] Successfully started Etcd Plane..
INFO[0028] [controlplane] Building up Controller Plane..
INFO[0028] [controlplane] Pulling Image on host [203.104.214.176]
INFO[0086] [controlplane] Successfully pulled [kube-api] image on host [203.104.214.176]
INFO[0087] [controlplane] Successfully started [kube-api] container on host [203.104.214.176]
INFO[0087] [controlplane] Pulling Image on host [203.104.214.176]
INFO[0089] [controlplane] Successfully pulled [kube-controller] image on host [203.104.214.176]
INFO[0089] [controlplane] Successfully started [kube-controller] container on host [203.104.214.176]
INFO[0090] [controlplane] Pulling Image on host [203.104.214.176]
INFO[0092] [controlplane] Successfully pulled [scheduler] image on host [203.104.214.176]
INFO[0092] [controlplane] Successfully started [scheduler] container on host [203.104.214.176]
INFO[0092] [controlplane] Successfully started Controller Plane..
INFO[0092] [worker] Building up Worker Plane..
INFO[0092] [worker] Pulling Image on host [203.104.214.176]
INFO[0095] [worker] Successfully pulled [kubelet] image on host [203.104.214.176]
INFO[0095] [worker] Successfully started [kubelet] container on host [203.104.214.176]
INFO[0095] [worker] Pulling Image on host [203.104.214.176]
INFO[0097] [worker] Successfully pulled [kube-proxy] image on host [203.104.214.176]
INFO[0098] [worker] Successfully started [kube-proxy] container on host [203.104.214.176]
INFO[0098] [worker] Pulling Image on host [203.104.227.60]
INFO[0103] [worker] Successfully pulled [nginx-proxy] image on host [203.104.227.60]
INFO[0103] [worker] Successfully started [nginx-proxy] container on host [203.104.227.60]
INFO[0103] [worker] Pulling Image on host [203.104.227.60]
INFO[0156] [worker] Successfully pulled [kubelet] image on host [203.104.227.60]
INFO[0156] [worker] Successfully started [kubelet] container on host [203.104.227.60]
INFO[0156] [worker] Pulling Image on host [203.104.227.60]
INFO[0159] [worker] Successfully pulled [kube-proxy] image on host [203.104.227.60]
INFO[0159] [worker] Successfully started [kube-proxy] container on host [203.104.227.60]
INFO[0159] [worker] Successfully started Worker Plane..
INFO[0159] [certificates] Save kubernetes certificates as secrets
INFO[0177] [certificates] Successfuly saved certificates as kubernetes secret [k8s-certs]
INFO[0177] [state] Saving cluster state to Kubernetes
INFO[0177] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
INFO[0177] [network] Setting up network plugin: calico
INFO[0177] [addons] Saving addon ConfigMap to Kubernetes
INFO[0177] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
INFO[0177] [addons] Executing deploy job..
INFO[0183] [addons] Setting up KubeDNS
INFO[0183] [addons] Saving addon ConfigMap to Kubernetes
INFO[0183] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
INFO[0183] [addons] Executing deploy job..
INFO[0188] [addons] KubeDNS deployed successfully..
INFO[0188] [addons] Setting up user addons..
INFO[0188] [addons] No user addons configured..
INFO[0188] Finished building Kubernetes cluster successfully
How long the initial deploy takes will depend on your network connection, since rke needs to pull the Docker images; apart from that, it finishes in a couple of minutes. After you see Finished building Kubernetes cluster successfully, you should find a file called .kube_config_cluster.yml. You can use kubectl with this config. Confirm that your nodes are working with the following command:
kubectl --kubeconfig .kube_config_cluster.yml get all --all-namespaces
You should see information about all the resources in your Kubernetes cluster.
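To avoid typing --kubeconfig on every call, you can point kubectl at the generated file via the KUBECONFIG environment variable for the rest of your shell session:

```shell
# Point kubectl at the kubeconfig RKE generated in the current directory:
export KUBECONFIG="$(pwd)/.kube_config_cluster.yml"
echo "$KUBECONFIG"

# Now plain kubectl commands target the new cluster:
# kubectl get nodes
# kubectl get all --all-namespaces
```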
Wrap Up
Pretty simple, wasn’t it? You can easily create a Kubernetes cluster even with an environment at your house (which is what I did). Part 1 focused on creating a Kubernetes cluster; in Part 2, I’d like to deploy OpenFaaS on top of it.