Install Kubernetes on bare-metal CentOS7

Lorenz Vanthillo · Published in ITNEXT · 6 min read · Aug 21, 2018

Nowadays most cloud providers offer a managed solution to run a Kubernetes cluster in their environment. Although it’s very easy to install Kubernetes in the cloud, there is still a need for non-cloud-based Kubernetes setups.
In this blog I’ll explain how to install a Kubernetes cluster on CentOS7 machines using Ansible and Kubespray, which automate most of the work.

Server setup

We need to create three virtual machines for our cluster:

  • Operating system: CentOS7/RHEL7
  • 2048 MB RAM, 2 CPUs and 20 GB disk (minimum) per node
  • Internet access on the eth0 interface
  • A user with root privileges

I’ve created three VMs with the minimum requirements in VMware Fusion on my local machine, using the CentOS-7-x86_64-Minimal-1805-01.iso image.

The user I’ve created is called centos with password supersecret. You can use any username/password but root permissions are required.

The servers have OpenSSH preinstalled. The IP of my first server is 192.168.140.101.

We need to be sure that we can SSH to every server of our cluster:

$ ssh centos@192.168.140.101
centos@192.168.140.101's password:
[centos@localhost ~]$ exit
$ ssh centos@192.168.140.102
centos@192.168.140.102's password:
[centos@localhost ~]$ exit
$ ssh centos@192.168.140.103
centos@192.168.140.103's password:
[centos@localhost ~]$ exit

Server configuration

Our three servers need to be configured before we use Kubespray to install the Kubernetes cluster. There are some prerequisites:

  • Up-to-date packages
  • The same date and time on all servers (ntpd)
  • Firewall disabled
  • Swap disabled
  • Passwordless sudo
  • Passwordless SSH (recommended)

We could configure each server separately by executing all the necessary commands on every node, but to automate those steps I’ve written a basic Ansible playbook, which is available on my GitHub. This playbook will configure our CentOS7 machines.
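To give a rough idea of what such a playbook does, the prerequisite steps could be expressed as tasks like these. This is only a sketch with module choices and task names of my own; the actual playbook is the one in my GitHub repository.

```yaml
# Sketch: prerequisite tasks for each CentOS7 node (not the actual playbook)
- hosts: all
  become: true
  tasks:
    - name: Update all packages
      yum:
        name: '*'
        state: latest

    - name: Install ntp
      yum:
        name: ntp
        state: present

    - name: Start and enable ntpd
      service:
        name: ntpd
        state: started
        enabled: true

    - name: Stop and disable firewalld
      service:
        name: firewalld
        state: stopped
        enabled: false

    - name: Disable swap for the running system
      command: swapoff -a

    - name: Keep swap disabled after reboot
      replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'
```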

Run Ansible Playbook to meet prerequisites

Install Ansible on the machine from which you will run the playbook. I’m working on macOS.

$ brew install ansible
$ ansible --version
ansible 2.6.2

The playbook will configure passwordless SSH by using a public key. This public key is stored in ~/.ssh/id_rsa.pub, but the location can be customized. As shown at the beginning of this blog, I’m using the user centos, which is in the group centos and has the password supersecret. Those names can be customized too.
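If you don’t have a key pair yet, one quick way to create one at the default location (this is a sketch; adjust the path if you use a custom pubkeypath):

```shell
# Create ~/.ssh if needed and generate an RSA key pair without a passphrase,
# but only if no key exists yet at the default path.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
ls ~/.ssh/id_rsa.pub
```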

Now clone the repository which contains the playbook.

$ git clone https://github.com/lvthillo/ansible-centos7-kubespray.git
$ cd ansible-centos7-kubespray

Update the hosts.ini file and add the IPs of your servers.

[all]
node1 ansible_host=192.168.140.101
node2 ansible_host=192.168.140.102
node3 ansible_host=192.168.140.103

Now we will run the playbook and define our variables. Again, make sure you can SSH to each server with your user and password. My command looks like this (update the variables!):

$ ansible-playbook -i hosts.ini -u centos -k playbook.yml --extra-vars "ansible_sudo_pass=supersecret user=centos group=centos pubkeypath=~/.ssh/id_rsa.pub"

The playbook will check, among other things, whether the ntpd service is running. If it isn’t, this causes an error which the playbook ignores; the ntpd service is then installed and enabled.

At the end the servers are rebooted, after which they meet all the prerequisites.

Run Ansible Playbook to install Kubernetes

Now we can finally install Kubernetes. For this we will use Kubespray, which is itself a set of Ansible playbooks. We will use the latest release at the time of writing (v2.6.0).
Kubespray will configure our cluster and install Docker, etcd, the Calico network plugin (the default), and more.

$ git clone https://github.com/kubernetes-incubator/kubespray
$ cd kubespray
$ git checkout v2.6.0

Follow the instructions described on the GitHub README of Kubespray.

# Install dependencies from ``requirements.txt``
$ sudo pip install -r requirements.txt

# Copy inventory/sample as inventory/mycluster
$ cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder
$ declare -a IPS=(192.168.140.101 192.168.140.102 192.168.140.103)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters under inventory/mycluster/group_vars
$ cat inventory/mycluster/group_vars/all.yml
$ cat inventory/mycluster/group_vars/k8s-cluster.yml
# Optional: modify the hosts.ini file to your needs.
$ vi inventory/mycluster/hosts.ini
[all]
node1 ansible_host=192.168.140.101 ip=192.168.140.101
node2 ansible_host=192.168.140.102 ip=192.168.140.102
node3 ansible_host=192.168.140.103 ip=192.168.140.103
# possible to add additional masters
[kube-master]
node1
[kube-node]
node2
node3
# possible to add additional etcd's
[etcd]
node1
[k8s-cluster:children]
kube-node
kube-master
[calico-rr]
[vault]
node1
node2
node3
# Deploy Kubespray with Ansible Playbook. In my case with the centos user.
$ ansible-playbook -u centos -b -i inventory/mycluster/hosts.ini cluster.yml

Install the kubectl CLI on your local machine.

$ brew install kubernetes-cli

Now configure the CLI to use your Kubernetes cluster.

$ ssh centos@192.168.140.101 sudo ls /etc/kubernetes/ssl/
$ ssh centos@192.168.140.101 sudo cat /etc/kubernetes/ssl/admin-node1-key.pem > admin-key.pem
$ ssh centos@192.168.140.101 sudo cat /etc/kubernetes/ssl/admin-node1.pem > admin.pem
$ ssh centos@192.168.140.101 sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
$ kubectl config set-cluster default-cluster --server=https://192.168.140.101:6443 --certificate-authority=ca.pem
$ kubectl config set-credentials default-admin \
  --certificate-authority=ca.pem \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system
$ kubectl version
$ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-cbptm                       1/1     Running   0          9m
calico-node-q5mlp                       1/1     Running   0          9m
calico-node-qtvrf                       1/1     Running   0          9m
kube-apiserver-node1                    1/1     Running   0          9m
kube-controller-manager-node1           1/1     Running   0          10m
kube-dns-69f4c8fc58-mpwz6               3/3     Running   0          9m
kube-dns-69f4c8fc58-zpffn               3/3     Running   0          9m
kube-proxy-node1                        1/1     Running   0          10m
kube-proxy-node2                        1/1     Running   0          10m
kube-proxy-node3                        1/1     Running   0          10m
kube-scheduler-node1                    1/1     Running   0          10m
kubedns-autoscaler-565b49bbc6-227rh     1/1     Running   0          9m
kubernetes-dashboard-6d4dfd56cb-phvzd   1/1     Running   0          9m
nginx-proxy-node2                       1/1     Running   0          10m
nginx-proxy-node3                       1/1     Running   0          10m

Deploy a basic application in Kubernetes

By deploying a very basic Python application we can verify that the Kubernetes cluster is up and working. We will deploy two pods which print their IP, plus a service of type NodePort. The application is available on my GitHub.

The deployment.yaml:
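The original embed is not reproduced here, but a manifest for this setup could look roughly as follows. This is a sketch: the container image is a placeholder, while the names, replica count and ports follow the kubectl output shown later.

```yaml
# Sketch of a Deployment plus NodePort Service; the container image
# is a placeholder for the Python app from the GitHub repository.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: your-registry/my-python-app:latest  # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30956
```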

Create the deployment and service.

$ kubectl create -f deployment.yaml
deployment.apps/my-app created
service/my-app created
$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
my-app-5c7694f7f8-b4lrm 1/1 Running 0 2m
my-app-5c7694f7f8-p58hj 1/1 Running 0 2m
$ kubectl get svc -n default
NAME         TYPE        CLUSTER-IP     PORT(S)          AGE
kubernetes   ClusterIP   10.233.0.1     443/TCP          37m
my-app       NodePort    10.233.62.17   8080:30956/TCP   2m

Now visit http://192.168.140.101:30956 and refresh the page until you’re routed to the other pod.

The pods are running on our nodes.

$ kubectl describe pod my-app-5c7694f7f8-b4lrm | grep 'Node:'
Node: node3/192.168.140.103
$ kubectl describe pod my-app-5c7694f7f8-p58hj | grep 'Node:'
Node: node2/192.168.140.102

Conclusion

We have created a Kubernetes cluster with one master and two nodes on three bare-metal CentOS7 machines. Installing Kubernetes is pretty complex, but by using Ansible and Kubespray we were able to set up the cluster without much manual interaction. Don’t forget to configure the firewall rules of the cluster; that was out of scope for this tutorial.
Hope you enjoyed it!


