Production-Ready Kubernetes Installation with Kubespray

Cagri Ersen
Apr 1, 2019 · 8 min read


Kubernetes (k8s) has been one of the most popular container orchestration platforms for the last few years, and almost all cloud providers have native support for it. For instance, Google Cloud Platform offers GKE (Google Kubernetes Engine), which lets you deploy a k8s cluster with a few clicks. AWS EKS and DigitalOcean Kubernetes Service were also introduced last year.

So you can easily build a Kubernetes cluster on many cloud platforms; but what if you need a production-ready, fully redundant cluster in your on-premise environment?

In this situation, there are a bunch of solutions that can be used to build such a cluster, and I personally prefer kubespray.

Kubespray is a project that contains a set of Ansible playbooks which deploy Kubernetes clusters in an automated way. It is a very suitable installation, configuration and maintenance method for on-premise environments, since you can use it to deploy Kubernetes clusters on different Linux distros such as Ubuntu, CoreOS and CentOS.

So, in this document I want to explain how to install, configure and maintain a CentOS 7 based Kubernetes cluster with kubespray.

Requirements

In order to deploy a fully redundant k8s cluster, you should ideally have at least nine hosts: three for masters, three for workers and three for the etcd cluster. However, for the sake of simplicity we are going to use only three nodes on which all components will be installed, so each node will have the master, worker and etcd roles.

But again, don’t forget that if your cluster will be used as a production environment running a critical workload, the k8s components should be installed on separate host groups.

Also, since kubespray uses Ansible, there should be a host with an Ansible installation that holds the playbooks and configurations and is responsible for running the playbooks against the k8s nodes over SSH to deploy the cluster. This host can be a separate machine, or you can use one of the k8s nodes for this purpose. In our example we’ll use three hosts named k8s-host-0{1,2,3}, and the Ansible host will be k8s-host-01.

There are two main preparation parts in this document:

  • Ansible host preparations
  • k8s node preparations

After the preparation steps we’ll run kubespray’s cluster.yml playbook to deploy our first cluster.

So let’s get started with the Ansible preparation tasks.

Preparing Ansible

As we’ve mentioned above, our Ansible host will be k8s-host-01, and all necessary steps in this section will be done on this host.

First of all, disable SELinux and firewalld:
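
A minimal sketch for CentOS 7 (run as root); the original commands are not shown in the text, so adapt to your environment:

    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
    systemctl stop firewalld
    systemctl disable firewalld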

Then install the packages required by kubespray:
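
One possible package set on CentOS 7 (exact package names are an assumption and may differ in your environment):

    yum install -y epel-release
    yum install -y git python36 python36-pip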

Now we’ll create a user named kubespray and add it to sudoers (as NOPASSWD). Actually this is not a must, but I think isolating kubespray from the root user is a good practice.
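
For example (run as root; the sudoers drop-in file name is just a convention):

    useradd kubespray
    echo 'kubespray ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/kubespray
    chmod 0440 /etc/sudoers.d/kubespray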

Switch to the kubespray user and create an SSH key pair. We’ll use this key pair to establish passwordless connections to the k8s nodes.
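
Something like the following works (an RSA key without a passphrase, for simplicity):

    su - kubespray
    ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa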

Get the latest kubespray release from https://github.com/kubernetes-incubator/kubespray/releases and extract it. (As of the date this document was written/updated, the latest version was 2.13.1.)
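
One way to fetch and unpack that release (version number per the text above; the tarball URL is the standard GitHub archive form):

    curl -LO https://github.com/kubernetes-incubator/kubespray/archive/v2.13.1.tar.gz
    tar xzf v2.13.1.tar.gz
    cd kubespray-2.13.1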

Now we install the required Python packages by using pip:
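
Kubespray ships a requirements.txt in its repository root; depending on your Python setup the command may be pip3 instead of pip:

    sudo pip install -r requirements.txt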

The next steps are related to cluster configuration, and there are some variables which should be defined correctly for your environment.

By default, when you initiate a cluster deployment via kubespray, it sets the cluster name to cluster.local; we’ll change it via environment variables. So please change the $CLUSTER_NAME and $CLUSTER_DOMAIN variables according to your needs.
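
For example (the values below are placeholders, pick your own):

    export CLUSTER_NAME="prod-cluster"
    export CLUSTER_DOMAIN="prod.example.com"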

Create a directory that represents your cluster name. This directory will contain the cluster-specific Ansible inventory and variable files, and we get those files by copying them from the sample directory.
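
From the kubespray directory, this is the usual way to copy the sample inventory:

    cp -rfp inventory/sample inventory/$CLUSTER_NAME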

Now our Ansible environment is ready to be configured for our specific needs. There are two main configuration files, named all.yml and k8s-cluster.yml, and both are placed in the “inventory/$CLUSTER_NAME/group_vars/” directory.

Let’s change the cluster name first:
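
The variable is cluster_name in k8s-cluster.yml; one way to change it (the exact path of the file may vary slightly between kubespray versions):

    sed -i "s/^cluster_name: .*/cluster_name: $CLUSTER_DOMAIN/" \
      inventory/$CLUSTER_NAME/group_vars/k8s-cluster.yml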

And set the token auth modes to true:
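
As far as I can tell, the relevant variables in k8s-cluster.yml are kube_token_auth and kube_basic_auth (the latter is needed for the default kube user used later in this post):

    kube_token_auth: true
    kube_basic_auth: true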

I prefer to use Weave as the CNI (for further reading, here is a good comparison: https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/), so we set the kube_network_plugin variable to weave and define a password for encryption.
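
In k8s-cluster.yml, something like the following (the password value is a placeholder; where weave_password lives can differ between kubespray versions):

    kube_network_plugin: weave
    weave_password: "ChangeMeToAStrongPassword"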

Also, the kube_read_only_port variable should be enabled in order for metrics-server to work.
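
In k8s-cluster.yml this variable ships commented out; uncomment it (10255 is the default kubelet read-only port):

    kube_read_only_port: 10255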

Rename the hosts.ini file to inventory.cfg:
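
For example:

    mv inventory/$CLUSTER_NAME/hosts.ini inventory/$CLUSTER_NAME/inventory.cfg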

The inventory.cfg file is the main configuration file that defines the k8s host roles, so you need to set it up properly. In our example we have three nodes named k8s-host-0{1,2,3}, so our inventory.cfg should look like below:
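
A sketch with placeholder IP addresses (group names follow the kubespray sample inventory of that era):

    [all]
    k8s-host-01 ansible_host=10.0.0.1 ip=10.0.0.1
    k8s-host-02 ansible_host=10.0.0.2 ip=10.0.0.2
    k8s-host-03 ansible_host=10.0.0.3 ip=10.0.0.3

    [kube-master]
    k8s-host-01
    k8s-host-02
    k8s-host-03

    [etcd]
    k8s-host-01
    k8s-host-02
    k8s-host-03

    [kube-node]
    k8s-host-01
    k8s-host-02
    k8s-host-03

    [k8s-cluster:children]
    kube-master
    kube-node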

Note: You also need to set the bastion and ip variable options if your network requires it. There is also another section related to the nginx ingress controller, but since we’ll deploy the ingress controller separately in the next post (in a more customized way), we don’t use the ingress section of the inventory; simply delete it.

A few notes about connectivity:

  • Your nodes should be able to resolve each other’s hostnames, so create proper DNS records or add them to the hosts file.
  • Since we use a custom user for SSH connections, you need to set kubespray as remote_user in the [defaults] section of the ansible.cfg file, as below:
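
The ansible.cfg file ships in the kubespray repository root; a minimal example of the relevant setting:

    [defaults]
    remote_user = kubespray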

Note: If you use a different port for sshd, you also need to change the remote_port definition.

Ansible host preparation is done, and now we will configure our k8s nodes.

Preparing k8s nodes

These steps should be done on all k8s nodes.

Disable SELinux and firewalld:
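
The same sketch as on the Ansible host (run as root on each node):

    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
    systemctl stop firewalld
    systemctl disable firewalld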

Swap should be disabled on all nodes, otherwise kubespray will fail. So let’s disable it:
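
For example (turn swap off now and comment out the swap entries so it stays off after a reboot):

    swapoff -a
    sed -i '/ swap / s/^/#/' /etc/fstab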

And add the kubespray user and set up sudoers for it:
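
Same as on the Ansible host (run as root on each node):

    useradd kubespray
    echo 'kubespray ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/kubespray
    chmod 0440 /etc/sudoers.d/kubespray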

Now you need to distribute the Ansible host’s (k8s-host-01) public key to the k8s nodes’ kubespray user as an authorized key. You also need to put the public key into k8s-host-01’s own authorized_keys file, since Ansible will make an SSH connection to that host as well, using its primary IP address.
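
For example, as the kubespray user on k8s-host-01 (note that the host itself is included):

    ssh-copy-id kubespray@k8s-host-01
    ssh-copy-id kubespray@k8s-host-02
    ssh-copy-id kubespray@k8s-host-03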

Deploy kubernetes cluster from the ansible instance

OK, we are ready to deploy the cluster. It’s a very simple step; just run the cluster.yml playbook as follows:
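
A sketch of the invocation from the kubespray directory (-b enables privilege escalation via sudo):

    cd ~/kubespray-2.13.1
    ansible-playbook -i inventory/$CLUSTER_NAME/inventory.cfg -b -v cluster.yml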

After the deployment process finishes, the cluster should be installed properly.

Accessing the Kubernetes API from a k8s node with kubectl

By default, the Kubernetes API is accessible via kubectl from any k8s host.

Let’s check it on host-01:
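
For example, as root on k8s-host-01 (kubespray configures kubectl for root on the master nodes):

    kubectl get nodes -o wide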

As you can see, our k8s cluster is ready and all nodes act as master and worker (and also as etcd hosts).

Accessing the Kubernetes API from a remote kubectl

When you initiate a cluster deployment, by default kubespray adds an admin user named “kube” to the cluster, and its password is written to the “inventory/$CLUSTER_NAME/credentials/kube_user.creds” file.

In the next section we’ll create a user and join it to a new admin group in order to do more granular access control (that user will also interact with the Kubernetes API by using its PKI key pair instead of basic auth). However, if you still want to use the default user to connect to Kubernetes, you can follow the steps below:

Create a kubeconfig file on your local machine.

Note: Your local machine should meet the requirements listed below:

  • It should be able to access the Kubernetes API at https://kubernetes.default.svc.$CLUSTER_FQDN:6443
  • The masters’ TCP port 6443 needs to be accessible from your client.
  • $CLUSTER_FQDN resolves to one of the masters’ primary IP addresses (or you can load balance them).

And paste the snippet:
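
A sketch of such a kubeconfig using basic auth; the server URL follows the requirement listed above, and $KUBE_USER_PASSWORD is a placeholder explained below:

    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: $CLUSTER_CA_CRT_BASE64_IN_ONE_LINE
        server: https://kubernetes.default.svc.$CLUSTER_FQDN:6443
      name: $CLUSTER_NAME
    contexts:
    - context:
        cluster: $CLUSTER_NAME
        user: kube
      name: kube@$CLUSTER_NAME
    current-context: kube@$CLUSTER_NAME
    preferences: {}
    users:
    - name: kube
      user:
        username: kube
        password: $KUBE_USER_PASSWORD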

And change the variables listed below:

  • $CLUSTER_FQDN: should be changed to your cluster FQDN.
  • certificate-authority-data: has to contain the content of /etc/kubernetes/ssl/ca.crt in one-line base64 encoded format. To grab it in the required format, you can run something like this on one of your master nodes:
    cat /etc/kubernetes/ssl/ca.crt | base64 -w 0
    then copy the output and paste it in place of $CLUSTER_CA_CRT_BASE64_IN_ONE_LINE.
  • password: needs to be changed to the password stored in the “inventory/$CLUSTER_NAME/credentials/kube_user.creds” file.

If you prepared the kubeconfig properly, you can set this file as the kubectl config via the KUBECONFIG variable:
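
For example (the path is just an illustration, use wherever you saved the file):

    export KUBECONFIG=$HOME/.kube/$CLUSTER_NAME.kubeconfig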

Now you can test your connectivity like below:
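
For example:

    kubectl cluster-info
    kubectl get nodes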

Well, our cluster is formed and we have connected to it by using the basic authentication method. Now we’ll do some customization, like creating a new admin user which will use PKI instead of basic auth, as mentioned above.

Additional Configurations

In this chapter we’ll cover the additional configuration steps listed below:

  • Create an admin group and a user
  • Dashboard Settings
  • Metrics Server Deployment
  • Nginx Ingress Controller Deployment

Create a user with administrator privileges

This section consists of four phases:

  1. Create a “ClusterRole” for the custom admin group
  2. Create a “ClusterRoleBinding” for our newly created “ClusterRole”
  3. Create a user certificate for kubectl
  4. kubeconfig configuration

1. Create a ClusterRole

First of all, we’re going to create a cluster-wide role with full permission on any API group. In our example the ClusterRole name is cluster-admins-cr, so each member bound to this ClusterRole will be able to access any Kubernetes API group.

On one of your master hosts, run:
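
A sketch of such a ClusterRole manifest, applied via a heredoc (wildcards grant access to all API groups, resources and verbs):

    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: cluster-admins-cr
    rules:
    - apiGroups: ["*"]
      resources: ["*"]
      verbs: ["*"]
    EOF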

2. Create a ClusterRoleBinding

Next, we create a “ClusterRoleBinding” to grant the “ClusterRole” named cluster-admins-cr to members of the cluster-admins group.
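
A sketch of the binding, again applied on a master host:

    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cluster-admins-crb
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admins-cr
    subjects:
    - kind: Group
      name: cluster-admins
      apiGroup: rbac.authorization.k8s.io
    EOF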

This creates a ClusterRoleBinding named cluster-admins-crb, binds it to the cluster-admins-cr ClusterRole, and grants the ClusterRole’s privileges (which allow all API group calls) to every user who presents a certificate issued by this Kubernetes cluster’s CA. Also, as we’ll see in the next section, the user certificate’s organization (O) field must be set to the group name in order to access the cluster with admin privileges.

3. Create a user certificate

When you create a user certificate, there are two important variables that you need to define first:

  • $USERNAME: This is the username to be created
  • $GROUP_NAME: This is the group name that we bound to the admin “ClusterRole” in the last section. In our example it is “cluster-admins”.

So let’s define them first as environment variables:
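
For example (values match the examples used later in this post):

    export USERNAME="my-admins-username"
    export GROUP_NAME="cluster-admins"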

Now we’re ready to issue a certificate for our user by using our cluster’s CA. We’ll run the commands as root and create a hidden folder in root’s home directory to place the generated certificates, which should be kept as secure as possible.
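
A sketch of the commands, assuming the cluster CA key is available next to ca.crt at /etc/kubernetes/ssl/ca.key on the master:

    mkdir -p ~/.k8s-user-certs && chmod 700 ~/.k8s-user-certs
    cd ~/.k8s-user-certs

    # generate a private key and a CSR whose O (organization) is the admin group
    openssl genrsa -out $USERNAME.key 4096
    openssl req -new -key $USERNAME.key -out $USERNAME.csr \
      -subj "/CN=$USERNAME/O=$GROUP_NAME"

    # sign the CSR with the cluster CA
    openssl x509 -req -in $USERNAME.csr \
      -CA /etc/kubernetes/ssl/ca.crt -CAkey /etc/kubernetes/ssl/ca.key \
      -CAcreateserial -out $USERNAME.crt -days 3650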

Our user certificate should now be issued; let’s check it:
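
For example:

    openssl x509 -in ~/.k8s-user-certs/$USERNAME.crt -noout -subject -issuer -dates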

Now we can configure our kubeconfig by using these PKI files.

4. kubeconfig Configuration

Create a kubeconfig file on your local machine, populate it as follows, and change the variables according to your setup.
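
A sketch of the kubeconfig using client certificates; the placeholders are explained in the list below, and the server URL follows the same convention as in the basic auth example:

    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: $CLUSTER_CA_CRT_CONTENT
        server: https://kubernetes.default.svc.$CLUSTER_FQDN:6443
      name: $CLUSTER_FQDN
    contexts:
    - context:
        cluster: $CLUSTER_FQDN
        user: $USERNAME
      name: $USERNAME@$CLUSTER_FQDN
    current-context: $USERNAME@$CLUSTER_FQDN
    preferences: {}
    users:
    - name: $USERNAME
      user:
        client-certificate-data: $USERNAME.CRT_CONTENT
        client-key-data: $USERNAME.KEY_CONTENT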

In order to set up the kubeconfig properly, these variables should be defined:

  • $CLUSTER_CA_CRT_CONTENT:
    This is your Kubernetes cluster’s CA public key, which is located at /etc/kubernetes/ssl/ca.crt on any master host. It has to be present in one-line base64 encoded format in the kubeconfig. To grab it in the correct format, run: # cat /etc/kubernetes/ssl/ca.crt | base64 -w 0 and copy the output to paste it as the certificate-authority-data value.
  • $CLUSTER_FQDN:
    This is our cluster name variable, which we defined in the installation section.
  • $USERNAME: In our example this is my-admins-username.
  • $USERNAME.CRT_CONTENT: This is our user’s certificate content, and it should also be in one-line base64 encoded format. You can convert it as follows: # cat ~/.k8s-user-certs/my-admins-username.crt | base64 -w 0 and use the output as client-certificate-data.
  • $USERNAME.KEY_CONTENT: This is our user’s private key content, and it must also be in one-line base64 encoded format:
    # cat ~/.k8s-user-certs/my-admins-username.key | base64 -w 0 then copy and paste the output as client-key-data.

If you populated the kubeconfig, you can set this file as the kubectl config via the KUBECONFIG variable:
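
For example (the path is just an illustration):

    export KUBECONFIG=$HOME/.kube/$USERNAME.kubeconfig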

Now test your connectivity:
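
For example (the second command should answer “yes” for our admin user):

    kubectl get nodes
    kubectl auth can-i '*' '*'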

Kubernetes Dashboard Settings

Kubespray installs the dashboard by default (since k8s 1.7), and you can authenticate by using the default kube admin user. However, we will create a service account with admin privileges and use its token to access the dashboard.

Create a ServiceAccount named dashboard-admin:
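
For example, in the kube-system namespace:

    kubectl create serviceaccount dashboard-admin -n kube-system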

Then create a ClusterRoleBinding for the dashboard-admin ServiceAccount and assign it to the cluster-admin ClusterRole.
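
For example:

    kubectl create clusterrolebinding dashboard-admin \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:dashboard-admin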

Now you can extract the bearer token from the dashboard-admin ServiceAccount to use for dashboard authentication.
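
One way to print the token of the ServiceAccount’s secret (the secret name contains a generated suffix, hence the lookup):

    kubectl -n kube-system describe secret \
      $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')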

Now run the kubectl proxy command, then visit http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

In the login screen you can select the token-based authentication option and provide your bearer token to log in.

Metrics Server Deployment

Please see:

Nginx Ingress Controller Deployment

The nginx ingress controller deployment post will be published soon.
