Bootstrap Kubernetes the hard way on Vagrant on Local Machine

Paul Zhao · Paul Zhao Projects · May 29, 2020

Throughout this project, we'll bootstrap 5 VMs with Vagrant on a local machine. Since I'll be using a Mac, the steps in this project are written for Mac users; however, resources for users of other operating systems are provided as well.

Prerequisites

VM Hardware Requirements

8 GB of RAM (preferably 16 GB) and 50 GB of disk space

Virtual Box

Download and Install VirtualBox on any one of the supported platforms:

  • Windows hosts
  • OS X hosts
  • Linux distributions
  • Solaris hosts

Installation on Mac

Download VirtualBox
Installed VirtualBox

Vagrant

Once VirtualBox is installed you may choose to deploy virtual machines on it manually. Vagrant provides an easier way to deploy multiple virtual machines on VirtualBox more consistently.

Download and Install Vagrant on your platform.

  • Windows
  • Debian
  • Centos
  • Linux
  • macOS
  • Arch Linux

Installation on Mac terminal

First, we will make a directory for the whole project, then cd into it:

$ mkdir kubernetes-project
$ cd kubernetes-project/
Copy link location for macOS

Make a directory for Vagrant under kubernetes-project and cd into it, then download Vagrant using the link location copied above:

$ mkdir vagrant
$ cd vagrant/
$ wget https://releases.hashicorp.com/vagrant/2.2.9/vagrant_2.2.9_x86_64.dmg # This is the link copied as shown above

Using Homebrew to ease your install

To install Linuxbrew on your Linux distribution, first install the following dependencies as shown.

--------- On Debian/Ubuntu --------- 
$ sudo apt-get install build-essential curl file git
--------- On Fedora 22+ ---------
$ sudo dnf groupinstall 'Development Tools' && sudo dnf install curl file git
--------- On CentOS/RHEL ---------
$ sudo yum groupinstall 'Development Tools' && sudo yum install curl file git

Once the dependencies are installed, you can use the following script to install the Linuxbrew package in /home/linuxbrew/.linuxbrew (or in your home directory at ~/.linuxbrew) as shown.

$ sh -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"

Next, you need to add the directories /home/linuxbrew/.linuxbrew/bin (or ~/.linuxbrew/bin) and /home/linuxbrew/.linuxbrew/sbin (or ~/.linuxbrew/sbin) to your PATH and to your bash shell initialization script ~/.bashrc as shown.

$ echo 'export PATH="/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin/:$PATH"' >>~/.bashrc
$ echo 'export MANPATH="/home/linuxbrew/.linuxbrew/share/man:$MANPATH"' >>~/.bashrc
$ echo 'export INFOPATH="/home/linuxbrew/.linuxbrew/share/info:$INFOPATH"' >>~/.bashrc

Then source the ~/.bashrc file for the recent changes to take effect.

$ source  ~/.bashrc

Check the version to confirm if it is installed correctly.

$ brew --version
Homebrew 2.2.16
Homebrew/homebrew-core (git revision e3e5c; last commit 2020-05-15)
Homebrew/homebrew-cask (git revision 152a4; last commit 2020-05-15)

Now we can easily install VirtualBox and Vagrant with two commands

$ brew cask install virtualbox
$ brew cask install vagrant

Verify VirtualBox and Vagrant

$ virtualbox --version
DEBUG: issetugid_for_AppKit was called by 0x7fff50ed105c /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit::_NSCheckForIllegalSetugidApp+0x1d (via 0x7fff50ed0c5a)
# The VirtualBox application opens
$ vagrant --version
Vagrant 2.2.7

Provisioning Compute Resources

Download this GitHub repository and cd into the vagrant folder

$ git clone https://github.com/mmumshad/kubernetes-the-hard-way.git

CD into vagrant directory

$ cd kubernetes-project/vagrant

Note: the vagrant command must be executed in the kubernetes-project/vagrant folder to provision resources

Initialize Vagrant

$ vagrant init hashicorp/precise32

Run Vagrant up

$ vagrant up

Verify the Vagrant installation (a Vagrantfile should be listed)

$ ls
Vagrantfile vagrant_2.2.9_x86_64.dmg

Verify these 5 nodes using VirtualBox

Verifying VirtualBox

The Vagrantfile does the following:

  • Deploys 5 VMs: 2 masters, 2 workers and 1 loadbalancer, named 'kubernetes-ha-*'
  • These are the default settings and can be changed at the top of the Vagrantfile
  • Sets IP addresses in the 192.168.5.x range
  • Adds a DNS entry to each of the nodes to access the internet
  • DNS: 8.8.8.8
  • Installs Docker on the worker nodes
  • Runs the below command on all nodes to allow network forwarding in iptables, which is required for Kubernetes networking to function correctly (a quick verification follows below)
sysctl net.bridge.bridge-nf-call-iptables=1
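
As a quick sanity check (optional; assuming you are logged in to one of the nodes after vagrant up), confirm the setting took effect:

sysctl net.bridge.bridge-nf-call-iptables
# Expected output: net.bridge.bridge-nf-call-iptables = 1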

SSH to the nodes

There are two ways to SSH into the nodes:

1. SSH using Vagrant

From the directory you ran the vagrant up command, run vagrant ssh <vm>, for example vagrant ssh master-1.

Note: use the VM field from the above table and not the VM name itself.

2. SSH Using SSH Client Tools

Use the above IP addresses. Username and password based SSH is disabled by default. Vagrant generates a private key for each of these VMs. It is placed under the .vagrant folder (in the directory you ran the vagrant up command from) at the below path for each VM:

Private Key Path: .vagrant/machines/<machine name>/virtualbox/private_key

Username: vagrant
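
For example, to connect to master-1 directly (a sketch; it assumes the default IP assignment and that you run it from the directory containing the .vagrant folder):

$ ssh -i .vagrant/machines/master-1/virtualbox/private_key vagrant@192.168.5.11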

Verify Environment

  • Ensure all VMs are up
All VMs are up
  • Ensure VMs are assigned the above IP addresses

After logging in to each VM, check its IP address:

$ ip a
  • Ensure you can SSH into these VMs using the IP and private keys

Generate a key pair on the master-1 node:

$ ssh-keygen

Leave all settings to default.

View the generated public key:

$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD......8+08b vagrant@master-1

Add the master's public key to ~/.ssh/authorized_keys on every other VM:

$ cat >> ~/.ssh/authorized_keys <<EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD......8+08b vagrant@master-1
EOF

Log in to the other nodes from master-1 (the master node):

vagrant@master-1:~$
.
.
.
vagrant@worker-2:~$

Note: when you SSH to the loadbalancer node, use ssh lb.

  • Ensure the VMs can ping each other
$ ping worker-1
PING worker-1 (192.168.5.21) 56(84) bytes of data.
64 bytes from worker-1 (192.168.5.21): icmp_seq=1 ttl=64 time=0.514 ms
64 bytes from worker-1 (192.168.5.21): icmp_seq=2 ttl=64 time=0.981 ms
64 bytes from worker-1 (192.168.5.21): icmp_seq=3 ttl=64 time=0.596 ms
.
.
.
  • Ensure the worker nodes have Docker installed on them (version 18.06 or later)

Verify it after logging in to node worker-1

vagrant@worker-1:~$ docker --version
Docker version 19.03.9, build 9d988398e7

Install kubectl

The kubectl command line utility is used to interact with the Kubernetes API Server. Download and install kubectl from the official release binaries:

Reference: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Linux

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/

Verification

Verify kubectl version 1.13.0 or higher is installed:

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Provisioning a CA and Generating TLS Certificates

In this lab you will provision a PKI Infrastructure using the popular openssl tool, then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.

Where to do these?

You can run these steps on any machine that has openssl installed, as long as you can copy the generated files to the provisioned VMs. Alternatively, just run them from one of the master nodes.

In our case we do it on the master-1 node, as we have set it up to be the administrative client.

Certificate Authority

In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.

Create a private key for the CA, generate a Certificate Signing Request (CSR) with it, and self-sign the CSR to produce the CA certificate:

# Create private key for CA
$ openssl genrsa -out ca.key 2048
# Comment line starting with RANDFILE in /etc/ssl/openssl.cnf definition to avoid permission issues
$ sudo sed -i '0,/RANDFILE/{s/RANDFILE/\#&/}' /etc/ssl/openssl.cnf
# Create CSR using the private key
$ openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
# Self sign the csr using its own private key
$ openssl x509 -req -in ca.csr -signkey ca.key -CAcreateserial -out ca.crt -days 1000

Results:

$ ls
ca.crt ca.csr ca.key

Reference : https://kubernetes.io/docs/concepts/cluster-administration/certificates/#openssl

The ca.crt is the Kubernetes Certificate Authority certificate and ca.key is its private key. You will use the ca.crt file in many places, so it will be copied to many locations. The ca.key is used by the CA for signing certificates and must be stored securely. In this case our master node(s) also act as the CA server, so we will store it on the master node(s); there is no need to copy this file anywhere else.
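
Optionally, you can inspect the generated CA certificate to confirm its subject and validity period (a quick check, not part of the original lab steps):

# The subject should show CN = KUBERNETES-CA and the validity should span roughly 1000 days
$ openssl x509 -in ca.crt -noout -subject -dates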

Client and Server Certificates

In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes admin user.

The Admin Client Certificate

Generate the admin client certificate and private key:

# Generate private key for admin user
$ openssl genrsa -out admin.key 2048
# Generate CSR for admin user. Note the O field (the group).
$ openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr
# Sign certificate for admin user using the CA server's private key
$ openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 1000

Note that the admin user is part of the system:masters group. This is how we are able to perform any administrative operations on the Kubernetes cluster using the kubectl utility.

Results:

$ ls
admin.crt admin.csr admin.key

The admin.crt and admin.key files give you administrative access. We will configure them to be used with the kubectl tool to perform administrative functions on Kubernetes.
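
To double-check that the group was encoded correctly, you can print the certificate subject (optional):

# The subject should contain CN = admin and O = system:masters
$ openssl x509 -in admin.crt -noout -subject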

The Kubelet Client Certificates

We are going to skip certificate configuration for Worker Nodes for now. We will deal with them when we configure the workers. For now let’s just focus on the control plane components.

The Controller Manager Client Certificate

Generate the kube-controller-manager client certificate and private key:

$ openssl genrsa -out kube-controller-manager.key 2048
$ openssl req -new -key kube-controller-manager.key -subj "/CN=system:kube-controller-manager" -out kube-controller-manager.csr
$ openssl x509 -req -in kube-controller-manager.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-controller-manager.crt -days 1000

Results:

$ ls
kube-controller-manager.csr kube-controller-manager.key
kube-controller-manager.crt

The Kube Proxy Client Certificate

Generate the kube-proxy client certificate and private key:

$ openssl genrsa -out kube-proxy.key 2048
$ openssl req -new -key kube-proxy.key -subj "/CN=system:kube-proxy" -out kube-proxy.csr
$ openssl x509 -req -in kube-proxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-proxy.crt -days 1000

Results:

$ ls 
kube-proxy.csr kube-proxy.key kube-proxy.crt

The Scheduler Client Certificate

Generate the kube-scheduler client certificate and private key:

$ openssl genrsa -out kube-scheduler.key 2048
$ openssl req -new -key kube-scheduler.key -subj "/CN=system:kube-scheduler" -out kube-scheduler.csr
$ openssl x509 -req -in kube-scheduler.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-scheduler.crt -days 1000

Results:

$ ls
kube-scheduler.csr kube-scheduler.key kube-scheduler.crt

The Kubernetes API Server Certificate

The kube-apiserver certificate requires all names by which the various components may reach it to be included in its alternate names. These include the different DNS names and IP addresses, such as the master servers' IP addresses, the load balancer's IP address, the kube-api service IP address, etc.

The openssl command cannot take alternate names as a command-line parameter, so we must create a config file for it:

cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.5.11
IP.3 = 192.168.5.12
IP.4 = 192.168.5.30
IP.5 = 127.0.0.1
EOF

Generate the certificate for kube-apiserver:

$ openssl genrsa -out kube-apiserver.key 2048
$ openssl req -new -key kube-apiserver.key -subj "/CN=kube-apiserver" -out kube-apiserver.csr -config openssl.cnf
$ openssl x509 -req -in kube-apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-apiserver.crt -extensions v3_req -extfile openssl.cnf -days 1000

Results:

$ ls
kube-apiserver.crt kube-apiserver.csr kube-apiserver.key
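
You can also confirm that the alternate names made it into the signed certificate (optional check):

$ openssl x509 -in kube-apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"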

The ETCD Server Certificate

Similarly, the etcd server certificate must include the addresses of all servers that are part of the etcd cluster.

The openssl command cannot take alternate names as a command-line parameter, so we must create a config file for it:

cat > openssl-etcd.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.5.11
IP.2 = 192.168.5.12
IP.3 = 127.0.0.1
EOF

Generate the certificate for etcd:

$ openssl genrsa -out etcd-server.key 2048
$ openssl req -new -key etcd-server.key -subj "/CN=etcd-server" -out etcd-server.csr -config openssl-etcd.cnf
$ openssl x509 -req -in etcd-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out etcd-server.crt -extensions v3_req -extfile openssl-etcd.cnf -days 1000

Results:

$ ls
etcd-server.csr etcd-server.key etcd-server.crt

The Service Account Key Pair

The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.

Generate the service-account certificate and private key:

$ openssl genrsa -out service-account.key 2048
$ openssl req -new -key service-account.key -subj "/CN=service-accounts" -out service-account.csr
$ openssl x509 -req -in service-account.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out service-account.crt -days 1000

Results:

$ ls 
service-account.csr service-account.key service-account.crt

Distribute the Certificates

Copy the appropriate certificates and private keys to each controller instance:

for instance in master-1 master-2; do
scp ca.crt ca.key kube-apiserver.key kube-apiserver.crt \
service-account.key service-account.crt \
etcd-server.key etcd-server.crt \
${instance}:~/
done

The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files in the next lab. These certificates will be embedded into the client authentication configuration files. We will then copy those configuration files to the other master nodes.

Generating Kubernetes Configuration Files for Authentication

In this lab you will generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.

Client Authentication Configs

In this section you will generate kubeconfig files for the controller manager, kube-proxy, scheduler clients and the admin user.

Kubernetes Public IP Address

Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the load balancer will be used. In our case it is 192.168.5.30

LOADBALANCER_ADDRESS=192.168.5.30

The kube-proxy Kubernetes Configuration File

Generate a kubeconfig file for the kube-proxy service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://${LOADBALANCER_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.crt \
--client-key=kube-proxy.key \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

Results:

kube-proxy.kubeconfig

Reference docs for kube-proxy: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
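
You can inspect the generated file to confirm the server address and that the certificates are embedded (optional; kubectl redacts the embedded certificate data in the output):

kubectl config view --kubeconfig=kube-proxy.kubeconfig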

The kube-controller-manager Kubernetes Configuration File

Generate a kubeconfig file for the kube-controller-manager service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.crt \
--client-key=kube-controller-manager.key \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}

Results:

kube-controller-manager.kubeconfig

Reference docs for kube-controller-manager here

The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the kube-scheduler service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.crt \
--client-key=kube-scheduler.key \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}

Results:

kube-scheduler.kubeconfig

Reference docs for kube-scheduler here

The admin Kubernetes Configuration File

Generate a kubeconfig file for the admin user:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}

Results:

admin.kubeconfig

Reference docs for kubeconfig here

Distribute the Kubernetes Configuration Files

Copy the appropriate kube-proxy kubeconfig files to each worker instance:

for instance in worker-1 worker-2; do
scp kube-proxy.kubeconfig ${instance}:~/
done

Copy the appropriate admin.kubeconfig, kube-controller-manager and kube-scheduler kubeconfig files to each controller instance:

for instance in master-1 master-2; do
scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

Generating the Data Encryption Config and Key

Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.

In this lab you will generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.

The Encryption Key

Generate an encryption key:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
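
A quick sanity check (optional): the decoded key should be exactly 32 bytes, i.e. a 256-bit AES key.

echo $ENCRYPTION_KEY | base64 --decode | wc -c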

The Encryption Config File

Create the encryption-config.yaml encryption config file:

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF

Copy the encryption-config.yaml encryption config file to each controller instance:

for instance in master-1 master-2; do
scp encryption-config.yaml ${instance}:~/
done

Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data

Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in etcd. In this lab you will bootstrap a two node etcd cluster and configure it for high availability and secure remote access.

Prerequisites

The commands in this lab must be run on each controller instance: master-1, and master-2. Log in to each of these using an SSH terminal.

Running commands in parallel with tmux

tmux can be used to run commands on multiple compute instances at the same time. See the Running commands in parallel with tmux section in the Prerequisites lab.

Bootstrapping an etcd Cluster Member

Download and Install the etcd Binaries

Download the official etcd release binaries from the coreos/etcd GitHub project:

wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"

Extract and install the etcd server and the etcdctl command line utility:

{
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
}

Configure the etcd Server

{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.crt etcd-server.key etcd-server.crt /etc/etcd/
}

The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address of the master(etcd) nodes:

INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)

Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:

ETCD_NAME=$(hostname -s)
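
A quick sanity check before writing the unit file (optional); on master-1, for example, this should print master-1 and 192.168.5.11:

echo "${ETCD_NAME} : ${INTERNAL_IP}"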

Create the etcd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/etcd-server.crt \\
--key-file=/etc/etcd/etcd-server.key \\
--peer-cert-file=/etc/etcd/etcd-server.crt \\
--peer-key-file=/etc/etcd/etcd-server.key \\
--trusted-ca-file=/etc/etcd/ca.crt \\
--peer-trusted-ca-file=/etc/etcd/ca.crt \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start the etcd Server

{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}

Remember to run the above commands on each controller node: master-1, and master-2.
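
If the verification below does not return both members, checking the service status and recent logs on each master is a good first step (optional troubleshooting commands):

sudo systemctl status etcd --no-pager
sudo journalctl -u etcd --no-pager | tail -n 20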

Verification

List the etcd cluster members:

sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.crt \
--cert=/etc/etcd/etcd-server.crt \
--key=/etc/etcd/etcd-server.key

output

45bf9ccad8d8900a, started, master-2, https://192.168.5.12:2380, https://192.168.5.12:2379
54a5796a6803f252, started, master-1, https://192.168.5.11:2380, https://192.168.5.11:2379

Reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#starting-etcd-clusters

Bootstrapping the Kubernetes Control Plane

In this lab you will bootstrap the Kubernetes control plane across 2 compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.

Prerequisites

The commands in this lab must be run on each controller instance: master-1, and master-2. Login to each controller instance using SSH Terminal. Example:

Running commands in parallel with tmux

tmux can be used to run commands on multiple compute instances at the same time. See the Running commands in parallel with tmux section in the Prerequisites lab.

Provision the Kubernetes Control Plane

Create the Kubernetes configuration directory:

sudo mkdir -p /etc/kubernetes/config

Download and Install the Kubernetes Controller Binaries

Download the official Kubernetes release binaries:

wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl"

Reference: https://kubernetes.io/docs/setup/release/#server-binaries

Install the Kubernetes binaries:

{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}

Configure the Kubernetes API Server

{
sudo mkdir -p /var/lib/kubernetes/
sudo cp ca.crt ca.key kube-apiserver.crt kube-apiserver.key \
service-account.key service-account.crt \
etcd-server.key etcd-server.crt \
encryption-config.yaml /var/lib/kubernetes/
}

The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:

INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)

Verify it is set

echo $INTERNAL_IP

Create the kube-apiserver.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-swagger-ui=true \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/ca.crt \\
--etcd-certfile=/var/lib/kubernetes/etcd-server.crt \\
--etcd-keyfile=/var/lib/kubernetes/etcd-server.key \\
--etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
--service-cluster-ip-range=10.96.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \\
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Controller Manager

Copy the kube-controller-manager kubeconfig into place:

sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/

Create the kube-controller-manager.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=192.168.5.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/ca.key \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.crt \\
--service-account-private-key-file=/var/lib/kubernetes/service-account.key \\
--service-cluster-ip-range=10.96.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Scheduler

Copy the kube-scheduler kubeconfig into place:

sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/

Create the kube-scheduler.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
--address=127.0.0.1 \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start the Controller Services

{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}

Allow up to 10 seconds for the Kubernetes API Server to fully initialize.

Verification

kubectl get componentstatuses --kubeconfig admin.kubeconfig

output

NAME                 STATUS    MESSAGE              ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}

Remember to run the above commands on each controller node: master-1, and master-2.
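
If any component shows as Unhealthy, checking the service status on that node can help narrow it down (optional):

sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler --no-pager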

The Kubernetes Frontend Load Balancer

In this section you will provision an external load balancer to front the Kubernetes API Servers. In our setup, the loadbalancer VM (192.168.5.30) takes on this role, running HAProxy.

Provision a Network Load Balancer

Log in to the loadbalancer instance using an SSH terminal.

#Install HAProxy
loadbalancer# sudo apt-get update && sudo apt-get install -y haproxy
loadbalancer# cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
frontend kubernetes
bind 192.168.5.30:6443
option tcplog
mode tcp
default_backend kubernetes-master-nodes
backend kubernetes-master-nodes
mode tcp
balance roundrobin
option tcp-check
server master-1 192.168.5.11:6443 check fall 3 rise 2
server master-2 192.168.5.12:6443 check fall 3 rise 2
EOF
loadbalancer# sudo service haproxy restart
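
Before testing, you can optionally validate the HAProxy configuration file syntax; it should report that the configuration is valid:

loadbalancer# sudo haproxy -c -f /etc/haproxy/haproxy.cfg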

Verification

Make an HTTP request for the Kubernetes version info:

curl  https://192.168.5.30:6443/version -k

output

{
"major": "1",
"minor": "13",
"gitVersion": "v1.13.0",
"gitCommit": "ddf47ac13c1a9483ea035a79cd7c10005ff21a6d",
"gitTreeState": "clean",
"buildDate": "2018-12-03T20:56:12Z",
"goVersion": "go1.11.2",
"compiler": "gc",
"platform": "linux/amd64"
}

Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap 2 Kubernetes worker nodes. We already have Docker installed on these nodes.

We will now install the Kubernetes components.

Prerequisites

The certificates and configuration files are created on the master-1 node and then copied over to the workers using scp. Once this is done, the commands are to be run on the first worker instance: worker-1. Log in to the first worker instance using an SSH terminal.

Provisioning Kubelet Client Certificates

Kubernetes uses a special-purpose authorization mode called Node Authorizer, that specifically authorizes API requests made by Kubelets. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.

Generate a certificate and private key for one worker node:

On master-1:

master-1$ cat > openssl-worker-1.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = worker-1
IP.1 = 192.168.5.21
EOF
openssl genrsa -out worker-1.key 2048
openssl req -new -key worker-1.key -subj "/CN=system:node:worker-1/O=system:nodes" -out worker-1.csr -config openssl-worker-1.cnf
openssl x509 -req -in worker-1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out worker-1.crt -extensions v3_req -extfile openssl-worker-1.cnf -days 1000

Results:

worker-1.key
worker-1.crt

The kubelet Kubernetes Configuration File

When generating kubeconfig files for Kubelets the client certificate matching the Kubelet’s node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.

Get the kube-apiserver load balancer IP.

LOADBALANCER_ADDRESS=192.168.5.30

Generate a kubeconfig file for the first worker node.

On master-1:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://${LOADBALANCER_ADDRESS}:6443 \
--kubeconfig=worker-1.kubeconfig
kubectl config set-credentials system:node:worker-1 \
--client-certificate=worker-1.crt \
--client-key=worker-1.key \
--embed-certs=true \
--kubeconfig=worker-1.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:worker-1 \
--kubeconfig=worker-1.kubeconfig
kubectl config use-context default --kubeconfig=worker-1.kubeconfig
}

Results:

worker-1.kubeconfig

Copy certificates, private keys and kubeconfig files to the worker node:

On master-1:

master-1$ scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/

Download and Install Worker Binaries

Going forward all activities are to be done on the worker-1 node.

On worker-1:

worker-1$ wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet

Reference: https://kubernetes.io/docs/setup/release/#node-binaries

Create the installation directories:

worker-1$ sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes

Install the worker binaries:

{
chmod +x kubectl kube-proxy kubelet
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
}

Configure the Kubelet

On worker-1:

{
sudo mv ${HOSTNAME}.key ${HOSTNAME}.crt /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.crt /var/lib/kubernetes/
}

Create the kubelet-config.yaml configuration file:

worker-1$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.crt"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.96.0.10"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
EOF

The resolvConf configuration is used to avoid loops when using CoreDNS for service discovery on systems running systemd-resolved.

Create the kubelet.service systemd unit file:

worker-1$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.crt \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}.key \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Proxy

On worker-1:

worker-1$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

Create the kube-proxy-config.yaml configuration file:

worker-1$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "192.168.5.0/24"
EOF

Create the kube-proxy.service systemd unit file:

worker-1$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Start the Worker Services

On worker-1:

{
sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy
}

Remember to run the above commands on worker node: worker-1
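
As with the control plane, you can check that both services came up cleanly (optional):

worker-1$ sudo systemctl status kubelet kube-proxy --no-pager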

Verification

On master-1:

List the registered Kubernetes nodes from the master node:

master-1$ kubectl get nodes --kubeconfig admin.kubeconfig

output

NAME       STATUS     ROLES    AGE   VERSION
worker-1 NotReady <none> 93s v1.13.0

Note: It is OK for the worker node to be in a NotReady state. That is because we haven’t configured Networking yet.

Optional: At this point you may run the certificate verification script to make sure all certificates are configured correctly. Follow the instructions here

TLS Bootstrapping Worker Nodes

In the previous step we configured a worker node by

  • Creating a set of key pairs for the worker node ourselves
  • Getting them signed by the CA ourselves
  • Creating a kube-config file using this certificate ourselves
  • Every time the certificate expires, we must follow the same process to renew it ourselves

This is not a practical approach when you have thousands of nodes in the cluster, with nodes being added and removed dynamically. With TLS bootstrapping:

  • The Nodes can generate certificate key pairs by themselves
  • The Nodes can generate certificate signing request by themselves
  • The Nodes can submit the certificate signing request to the Kubernetes CA (Using the Certificates API)
  • The Nodes can retrieve the signed certificate from the Kubernetes CA
  • The Nodes can generate a kube-config file using this certificate by themselves
  • The Nodes can start and join the cluster by themselves
  • The Nodes can renew certificates when they expire by themselves

So let’s get started!

What is required for TLS Bootstrapping

Certificates API: The Certificates API (as discussed in the lecture) provides a set of APIs on Kubernetes that help us manage certificates (create a CSR, get it signed by the CA, retrieve the signed certificate, etc.). The worker nodes (kubelets) have the ability to use this API to get certificates signed by the Kubernetes CA.

Pre-Requisite

kube-apiserver: ensure bootstrap token based authentication is enabled on the kube-apiserver.

--enable-bootstrap-token-auth=true

kube-controller-manager: the certificate requests are ultimately signed by the kube-controller-manager, which requires the CA certificate and key to perform these operations.

--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/ca.key

Note: We have already configured these in our setup in this course

Copy the ca certificate to the worker node:

scp ca.crt worker-2:~/

Step 1 Configure the Binaries on the Worker node

Download and Install Worker Binaries

wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet

Reference: https://kubernetes.io/docs/setup/release/#node-binaries

Create the installation directories:

sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes

Install the worker binaries:

{
chmod +x kubectl kube-proxy kubelet
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
}

Move the ca certificate

sudo mv ca.crt /var/lib/kubernetes/

Step 2 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke the Certificates API

For the workers (kubelets) to access the Certificates API, they first need to authenticate to the Kubernetes API server. For this we create a bootstrap token to be used by the kubelet.

Bootstrap Tokens take the form of a 6-character token ID followed by a 16-character token secret, separated by a dot, e.g. abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}
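
For illustration only, a token in this format could be generated with openssl (a sketch; this lab uses the fixed token defined in the secret below instead):

# Hypothetical example -- not used in this lab
TOKEN_ID=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
echo "${TOKEN_ID}.${TOKEN_SECRET}"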

Bootstrap Tokens are created as a secret in the kube-system namespace.

cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
# Name MUST be of form "bootstrap-token-<token id>"
name: bootstrap-token-07401b
namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
# Human readable description. Optional.
description: "The default bootstrap token generated by 'kubeadm init'."
# Token ID and secret. Required.
token-id: 07401b
token-secret: f395accd246ae52d
# Expiration. Optional.
expiration: 2021-03-10T03:22:11Z
# Allowed usages.
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
# Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
auth-extra-groups: system:bootstrappers:worker
EOF
kubectl create -f bootstrap-token-07401b.yaml

Things to note:

  • expiration: make sure it is set to a date in the future.
  • auth-extra-groups: the group the worker nodes are part of. It must start with "system:bootstrappers:". This group does not exist already; it is associated with this token.

Once this is created, the token to be used for authentication is 07401b.f395accd246ae52d
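
You can confirm that the secret was created (optional):

kubectl get secret bootstrap-token-07401b -n kube-system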

Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-token-secret-format

Step 3 Authorize workers (kubelets) to create CSRs

Next we bind the group we created before to the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.

kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
--------------- OR ---------------
cat > csrs-for-bootstrapping.yaml <<EOF
# enable bootstrapping nodes to create CSR
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: create-csrs-for-bootstrapping
subjects:
- kind: Group
name: system:bootstrappers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:node-bootstrapper
apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f csrs-for-bootstrapping.yaml

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr

Step 4 Authorize workers (kubelets) to approve CSRs

kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
--------------- OR ---------------
cat > auto-approve-csrs-for-group.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: auto-approve-csrs-for-group
subjects:
- kind: Group
name: system:bootstrappers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f auto-approve-csrs-for-group.yaml

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval

Step 5 Authorize workers (kubelets) to auto-renew certificates on expiration

We now create the ClusterRoleBinding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the system:bootstrappers group here any more, since by the renewal period the node should already be bootstrapped and part of the cluster. All nodes are part of the system:nodes group.

kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
--------------- OR ---------------
cat > auto-approve-renewals-for-nodes.yaml <<EOF
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f auto-approve-renewals-for-nodes.yaml

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval

Step 6 Configure Kubelet to TLS Bootstrap

It is now time to configure the second worker to TLS bootstrap using the token we generated

For worker-1 we started by creating a kubeconfig file with the TLS certificates that we manually generated. Here, we don’t have the certificates yet. So we cannot create a kubeconfig file. Instead we create a bootstrap-kubeconfig file with information about the token we created.

This is to be done on the worker-2 node.

sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap

Or

cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority: /var/lib/kubernetes/ca.crt
server: https://192.168.5.30:6443
name: bootstrap
contexts:
- context:
cluster: bootstrap
user: kubelet-bootstrap
name: bootstrap
current-context: bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
user:
token: 07401b.f395accd246ae52d
EOF

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration

Step 7 Create Kubelet Config File

Create the kubelet-config.yaml configuration file:

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.crt"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.96.0.10"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
EOF

Note: we are not specifying the certificate details (tlsCertFile and tlsPrivateKeyFile) in this file

Step 8 Configure Kubelet Service

Create the kubelet.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--cert-dir=/var/lib/kubelet/pki/ \\
--rotate-certificates=true \\
--rotate-server-certificates=true \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Things to note here:

  • bootstrap-kubeconfig: Location of the bootstrap-kubeconfig file.
  • cert-dir: The directory where the generated certificates are stored.
  • rotate-certificates: Rotates client certificates when they expire.
  • rotate-server-certificates: Requests for server certificates on bootstrap and rotates them when they expire.

Step 9 Configure the Kubernetes Proxy

In one of the previous steps we created the kube-proxy.kubeconfig file. Check here if you missed it.

sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

Create the kube-proxy-config.yaml configuration file:

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "192.168.5.0/24"
EOF

Create the kube-proxy.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Step 10 Start the Worker Services

{
sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy
}

Remember to run the above commands on worker node: worker-2

Step 11 Approve Server CSR

kubectl get csr

NAME                                                   AGE   REQUESTOR                 CONDITION
csr-95bv6 20s system:node:worker-2 Pending

Approve

kubectl certificate approve csr-95bv6

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubectl-approval

Verification

List the registered Kubernetes nodes from the master node:

master-1$ kubectl get nodes --kubeconfig admin.kubeconfig

output

NAME       STATUS   ROLES    AGE   VERSION
worker-1 NotReady <none> 93s v1.13.0
worker-2 NotReady <none> 93s v1.13.0

Note: It is OK for the worker node to be in a NotReady state. That is because we haven’t configured Networking yet.

Configuring kubectl for Remote Access

In this lab you will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.

Run the commands in this lab from the same directory used to generate the admin client certificates.

The Admin Kubernetes Configuration File

Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

Generate a kubeconfig file suitable for authenticating as the admin user:

{
KUBERNETES_LB_ADDRESS=192.168.5.30
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_LB_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
}

Reference doc for kubectl config: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

Verification

Check the health of the remote Kubernetes cluster:

kubectl get componentstatuses

output

NAME                 STATUS    MESSAGE             ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}

List the nodes in the remote Kubernetes cluster:

kubectl get nodes

output

NAME       STATUS   ROLES    AGE    VERSION
worker-1 NotReady <none> 118s v1.13.0
worker-2 NotReady <none> 118s v1.13.0

Note: It is OK for the worker node to be in a NotReady state. Worker nodes will come into Ready state once networking is configured.

Provisioning Pod Network

We chose Weave as our CNI networking option.

Install CNI plugins

Download the CNI plugins required for Weave on each of the worker nodes (worker-1 and worker-2):

wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz

Extract it to /opt/cni/bin directory

sudo tar -xzvf cni-plugins-amd64-v0.7.5.tgz --directory /opt/cni/bin/

Reference: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni

Deploy Weave Network

Deploy the Weave network. Run this only once, on the master node:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Weave uses POD CIDR of 10.32.0.0/12 by default.

Verification

List the Weave pods in the kube-system namespace from the master node:

master-1$ kubectl get pods -n kube-system

output

NAME              READY   STATUS    RESTARTS   AGE
weave-net-58j2j 2/2 Running 0 89s
weave-net-rr5dk 2/2 Running 0 89s

Reference: https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/#install-the-weave-net-addon

RBAC for Kubelet Authorization

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.

This tutorial sets the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.

Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
EOF

Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole

The Kubernetes API Server authenticates to the Kubelet as the kube-apiserver user, using the client certificate defined by the --kubelet-client-certificate flag.

Bind the system:kube-apiserver-to-kubelet ClusterRole to the kube-apiserver user:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kube-apiserver
EOF

Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding

Deploying the DNS Cluster Add-on

In this lab you will deploy the DNS add-on which provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.

The DNS Cluster Add-on

Deploy the coredns cluster add-on:

kubectl apply -f https://raw.githubusercontent.com/mmumshad/kubernetes-the-hard-way/master/deployments/coredns.yaml

output

serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created

List the pods created by the kube-dns deployment:

kubectl get pods -l k8s-app=kube-dns -n kube-system

output

NAME                       READY   STATUS    RESTARTS   AGE
coredns-699f8ddd77-94qv9 1/1 Running 0 20s
coredns-699f8ddd77-gtcgb 1/1 Running 0 20s

Reference: https://kubernetes.io/docs/tasks/administer-cluster/coredns/#installing-coredns

Verification

Create a busybox deployment:

kubectl run --generator=run-pod/v1  busybox --image=busybox:1.28 --command -- sleep 3600

List the pod created by the busybox deployment:

kubectl get pods -l run=busybox

output

NAME                      READY   STATUS    RESTARTS   AGE
busybox-bd8fb7cbd-vflm9 1/1 Running 0 10s

Execute a DNS lookup for the kubernetes service inside the busybox pod:

kubectl exec -ti busybox -- nslookup kubernetes

output

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
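
Optionally, you can also look up the fully qualified service name to confirm that the cluster DNS suffix resolves end to end:

kubectl exec -ti busybox -- nslookup kubernetes.default.svc.cluster.local

This should return the same 10.96.0.1 service address as above.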

Smoke Test

In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.

Data Encryption

In this section you will verify the ability to encrypt secret data at rest.

Create a generic secret:

kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"

Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:

sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C

output

00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 78 cd 3c 33 3a 60 d7 |:v1:key1:x.<3:`.|
00000050 4c 1e 4c f1 97 ce 75 6f 3d a7 f1 4b 59 e8 f9 2a |L.L...uo=..KY..*|
00000060 17 77 20 14 ab 73 85 63 12 12 a4 8d 3c 6e 04 4c |.w ..s.c....<n.L|
00000070 e0 84 6f 10 7b 3a 13 10 d0 cd df 81 d0 08 be fa |..o.{:..........|
00000080 ea 74 ca 53 b3 b2 90 95 e1 ba bc 3f 88 76 db 8e |.t.S.......?.v..|
00000090 e1 1e 17 ea 0d b0 3b e3 e3 df eb 2e 57 76 1d d0 |......;.....Wv..|
000000a0 25 ca ee 5b f2 27 c7 f2 8e 58 93 e9 28 45 8f 3a |%..[.'...X..(E.:|
000000b0 e7 97 bf 74 86 72 fd e7 f1 bb fc f7 2d 10 4d c3 |...t.r......-.M.|
000000c0 70 1d 08 75 c3 7c 14 55 18 9d 68 73 ec e3 41 3a |p..u.|.U..hs..A:|
000000d0 dc 41 8a 4b 9e 33 d9 3d c0 04 60 10 cf ad a4 88 |.A.K.3.=..`.....|
000000e0 7b e7 93 3f 7a e8 1b 22 bf 0a |{..?z.."..|
000000ea

The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
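
As an additional optional check before cleaning up, confirm that the API server can still decrypt and serve the secret even though it is stored encrypted in etcd:

kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' | base64 --decode

This should print mydata.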

Cleanup: kubectl delete secret kubernetes-the-hard-way

Deployments

In this section you will verify the ability to create and manage Deployments.

Create a deployment for the nginx web server:

kubectl create deployment nginx --image=nginx

List the pod created by the nginx deployment:

kubectl get pods -l app=nginx

output

NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-6lxg2   1/1     Running   0          10s
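
If you want to exercise the Deployment controller a little further (optional), scale the deployment, watch a second pod appear, then scale it back down:

kubectl scale deployment nginx --replicas=2
kubectl get pods -l app=nginx
kubectl scale deployment nginx --replicas=1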

Services

In this section you will verify the ability to access applications remotely using port forwarding.

Create a service to expose deployment nginx on node ports, then capture the assigned node port in a variable:

kubectl expose deploy nginx --type=NodePort --port 80
PORT_NUMBER=$(kubectl get svc -l app=nginx -o jsonpath="{.items[0].spec.ports[0].nodePort}")

Test by viewing the NGINX welcome page from each worker node:

curl http://worker-1:$PORT_NUMBER
curl http://worker-2:$PORT_NUMBER

output

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
# Output Truncated for brevity
<body>
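
Since this section is about reaching the application remotely, you can also skip the NodePort entirely and use port forwarding (an optional alternative; run the curl from a second terminal while the forward is active, then stop the forward with Ctrl+C):

kubectl port-forward deployment/nginx 8080:80
curl --head http://127.0.0.1:8080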

Logs

In this section you will verify the ability to retrieve container logs.

Retrieve the full name of the nginx pod:

POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")

Print the nginx pod logs:

kubectl logs $POD_NAME

output

10.32.0.1 - - [20/Mar/2019:10:08:30 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"
10.40.0.0 - - [20/Mar/2019:10:08:55 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"
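
To watch new requests arrive in real time (optional), follow the log stream and stop it with Ctrl+C when done:

kubectl logs $POD_NAME -f --tail=5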

Exec

In this section you will verify the ability to execute commands in a container.

Print the nginx version by executing the nginx -v command in the nginx container:

kubectl exec -ti $POD_NAME -- nginx -v

output

nginx version: nginx/1.15.9
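
You can also open an interactive shell in the container for ad-hoc inspection (optional); type exit to leave it:

kubectl exec -ti $POD_NAME -- /bin/bash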

Run End-to-End Tests

Install Go

wget https://dl.google.com/go/go1.12.1.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
export GOPATH="/home/vagrant/go"
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
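
Before proceeding, it is worth confirming the Go toolchain is on the PATH; it should report the version you just downloaded (go1.12.1 here):

go version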

Install kubetest

go get -v -u k8s.io/test-infra/kubetest

Note: This may take a few minutes depending on your network speed

Extract the Version

kubetest --extract=v1.13.0
cd kubernetes
export KUBE_MASTER_IP="192.168.5.11:6443"
export KUBE_MASTER=master-1
kubetest --test --provider=skeleton --test_args="--ginkgo.focus=\[Conformance\]" | tee test.out

This could take about 1.5 to 2 hours. The number of tests run and passed will be displayed at the end.
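
When the run completes, the results are also saved to test.out. A quick way to pull out the summary lines afterwards (the exact wording can vary between kubetest and Ginkgo versions) is:

grep -E "Passed|Failed" test.out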

Conclusion

In this project, we started by installing VirtualBox and Vagrant. You can do this manually, but Homebrew is preferred since it installs a variety of tools quickly and consistently.

Then we provisioned 5 VMs using Vagrant, SSHed into each node, and verified the environment.

After that, we installed the client tools. First, we generated a key pair on the master node, copied the public key to the other nodes to enable SSH connections, and tested those connections. kubectl installation and verification followed.

In the next step, a CA was provisioned and TLS certificates were generated for etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy; these certificates were then signed and distributed.

Then we generated Kubernetes configuration files for authentication, including the kube-proxy, kube-controller-manager, kube-scheduler, and admin kubeconfig files, and distributed them to the worker-1 and worker-2 instances.

Next, we generated the data encryption config and key, and bootstrapped the etcd cluster, the Kubernetes control plane, and the worker nodes. We then TLS-bootstrapped the worker nodes automatically.

After this, we configured kubectl for remote access, provisioned the pod network, authorized the Kubelet API using RBAC, and deployed the DNS cluster add-on.

Finally, smoke tests and end-to-end (E2E) tests were conducted.

The whole project lays a solid foundation for bootstrapping Kubernetes in a local environment.
