How to Secure Kubernetes the Easy Way

How to use Terraform and Kubeadm to bootstrap your Kubernetes cluster

Gaurav Agarwal
Apr 21, 2020 · 9 min read
Photo by Patrick Schneider on Unsplash

Kubernetes has become the standard for running and managing container-based applications, with more and more organisations adopting it within their landscape. This story is a follow-up to “How to Secure Kubernetes the Hard Way”. We will build a similar setup, the only difference being that we will use kubeadm to automate most of the steps for us. We will use Terraform to spin up the required infrastructure on Google Cloud Platform, and the infrastructure is the same as in “How to Secure Kubernetes the Hard Way”. If you are running on-premises, you can skip the Terraform part and use the remaining steps to set up the cluster in your environment.

We require a multi-master setup for high availability, with each master in a separate zone so the cluster keeps running if one master is lost to a zone outage. We will use an NGINX load balancer in front of the master nodes (control plane), and NGINX ingress controllers to route external traffic into the cluster.

Cluster Architecture

In this scenario, we will bootstrap a multi-master Kubernetes cluster with a stacked etcd topology, managed by kubeadm.

Cluster architecture diagram

We will have:

  • Three master nodes (master01, master02, and master03), each running in a different zone.
  • Two worker nodes (node01 and node02) running in two different zones.
  • Two NGINX load balancers (masterlb and masterlb-dr) running in two different zones in an active-standby configuration. I will utilise a static IP with aliasing to ensure that, at any given time, all nodes communicate with the static IP bound to the active load balancer node (see the sketch after this list).
  • A bastion host running in one zone (we can create another bastion host in case of a zone outage).
  • All servers apart from the bastion host and the load balancers are internal; they don’t have an external IP attached.
  • Since the nodes need outbound internet connectivity, I have utilised a Cloud NAT gateway for egress traffic from the internal servers.
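Purely for illustration, shifting an alias IP between instances with the gcloud CLI looks roughly like the commands below; the zones and the address are assumptions, and the gcp-failoverd scripts used later perform this failover for you automatically.

# Hypothetical illustration only; gcp-failoverd handles this automatically.
# Detach the alias IP from the failed active load balancer ...
gcloud compute instances network-interfaces update masterlb \
  --zone europe-west2-a --aliases ""
# ... and attach it to the standby so that clients keep talking to the same IP.
gcloud compute instances network-interfaces update masterlb-dr \
  --zone europe-west2-b --aliases "10.154.0.100/32"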

Firewall rules

Setting up the correct firewall rules is very important, as we do not want to leave loose ends. We will open only the required ports and allow only the required traffic to reach the nodes.

Terraform will apply the firewall rules for you; the full rule set (the “Firewall Rules” table) is defined in the repository.
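For a sense of what such a rule looks like, an equivalent rule created with the gcloud CLI might be the following; the rule name, network, tag, and source range are assumptions, and the authoritative definitions live in the Terraform code.

# Hypothetical example of one such rule: allow the Kubernetes API server port
# (6443) from the internal subnet to the master nodes only.
gcloud compute firewall-rules create allow-k8s-apiserver \
  --network k8s-network \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:6443 \
  --source-ranges 10.154.0.0/20 \
  --target-tags master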

Setup Infrastructure

Follow the “Spinning up Infrastructure with Terraform” section of the “How to Secure Kubernetes the Hard Way” guide to set up your infrastructure, as the infrastructure is the same; use https://github.com/bharatmicrosystems/kubeadm-terraform.git for this task.

Log In to Bastion Host

Terraform will also spin up a Bastion Host for you to securely manage your cluster. SSH into it from your local system by running the gcloud compute ssh command.

gcloud compute ssh bastion --zone europe-west2-a

Setup Load Balancer

SSH into the masterlb and masterlb-dr nodes

gcloud compute ssh masterlb --internal-ip
sudo su -
yum install -y git
git clone https://github.com/bharatmicrosystems/gcp-terraform.git
cd gcp-terraform/scripts

The nginx.conf file assumes that the master nodes are named master01, master02, and master03, and the worker nodes node01, node02, and node03. Edit the nginx.conf file and replace the server names with the names of your nodes, and add any additional worker nodes to the load balancer configuration as shown in the section below. You can start by adding further nodes here:

upstream stream_node_backend_80 {
    server ip_node_01:80;
    server ip_node_02:80;
    server ip_node_03:80;
    # ...
}

Also, configure a health-check endpoint by adding the following:

server {
    listen 127.0.0.1:8080;
    server_name 127.0.0.1;
    location /nginx_status {
        stub_status;
    }
}

Once you have modified everything, run the following:

cd gcp-terraform/scripts
rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum install -y telnet nginx
cp nginx.conf /etc/nginx/nginx.conf
setenforce 0
sed -i 's/enforcing/permissive/g' /etc/selinux/config

The above steps will set up Nginx, but do not start it yet.

Setup High Availability between your Load Balancers

Since we have provisioned an active and a standby instance of the NGINX load balancer, you need to set up high availability between them. To do so, run the following:

git clone https://github.com/bharatmicrosystems/gcp-failoverd.git
cd gcp-failoverd
git checkout develop
cp -a scripts/ exec/
cd exec/
sh -x setup-gcp-failoverd.sh -i nginx-internal-vip -e nginx-external-vip -l masterlb,masterlb-dr -c nginx-kubernetes -h :80

SSH into the masterlb node and run the following

gcloud compute ssh masterlb --internal-ip
pcs status
systemctl status nginx

At this point, NGINX should be running on the masterlb node.
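To verify the health-check endpoint configured earlier, you can query the stub_status page locally on masterlb; this quick check is an addition to the original steps.

# Should print active connection counters from the stub_status module.
curl http://127.0.0.1:8080/nginx_status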

Bootstrap the Kubernetes Cluster

Setup the master nodes

SSH into the master01 node

Install Docker

LOAD_BALANCER_IP=masterlb #Replace this with the relevant hostname for the load balancer
sudo su -
cat <<EOF > /etc/yum.repos.d/centos.repo
[centos]
name=CentOS-7
baseurl=http://ftp.heanet.ie/pub/centos/7/os/x86_64/
enabled=1
gpgcheck=1
gpgkey=http://ftp.heanet.ie/pub/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-\$releasever - Extras
baseurl=http://ftp.heanet.ie/pub/centos/7/extras/x86_64/
enabled=1
gpgcheck=0
EOF
yum -y update
yum -y install docker
systemctl enable docker
systemctl start docker

Install kubelet, kubeadm, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/enforcing/permissive/g' /etc/selinux/config
yum -y install kubelet kubeadm kubectl
systemctl start kubelet
systemctl enable kubelet

Switch off swap and configure iptables and IP forwarding

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
echo 1 > /proc/sys/net/ipv4/ip_forward
swapoff -a
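Note that swapoff -a and the echo into /proc only last until the next reboot. To make both settings persistent, you can additionally run the following (an addition to the original steps; adjust the sed pattern to your /etc/fstab layout):

# Comment out swap entries so swap stays off after a reboot.
sed -i '/ swap / s/^/#/' /etc/fstab
# Persist IP forwarding alongside the bridge settings.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
sysctl --system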

Initialise the control plane using kubeadm

LOAD_BALANCER_PORT=6443
kubeadm init --control-plane-endpoint "${LOAD_BALANCER_IP}:${LOAD_BALANCER_PORT}" --upload-certs --pod-network-cidr=10.244.0.0/16

If everything is OK, you will get a message like the one below:

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join masterlb:6443 --token hwe4u6.hy79bfq4uq3myhsn \
--discovery-token-ca-cert-hash sha256:7b437ae3463c1236e29f30dc9c222f65f818d304f8b410b598451478240f105a \
--control-plane --certificate-key b38664ca2d82e7e4969a107b45d2be83767606331590d7b487eaad1ddbe8cd26

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join masterlb:6443 --token hwe4u6.hy79bfq4uq3myhsn \
--discovery-token-ca-cert-hash sha256:7b437ae3463c1236e29f30dc9c222f65f818d304f8b410b598451478240f105a

Copy this into a text editor, as we will use it later on.
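If you lose the join command or the token expires (join tokens are valid for 24 hours and the uploaded certificates for two hours), you can regenerate both on master01; this tip is not part of the original output above.

# Print a fresh worker join command with a new token.
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate key;
# append it to the join command as --control-plane --certificate-key <key>.
kubeadm init phase upload-certs --upload-certs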

We need to copy the Kubernetes admin config into the user’s home directory so that kubectl can authenticate with the API server.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

This file gives you super-admin access to your Kubernetes cluster, so you should not share it with your users. Read “How to utilise X509 Client Certificates & RBAC to secure Kubernetes” to learn more about controlling access to your Kubernetes cluster.

Set up a pod network so that Kubernetes resources can communicate with each other internally. We will use Weave Net:

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
kubectl get nodes

We will now attempt to join the master02 and master03 nodes as additional control planes.

SSH into the other master nodes, viz. master02 and master03, and repeat the following on each.

Install Docker

LOAD_BALANCER_IP=masterlb #Replace this with the relevant hostname for the load balancer
sudo su -
cat <<EOF > /etc/yum.repos.d/centos.repo
[centos]
name=CentOS-7
baseurl=http://ftp.heanet.ie/pub/centos/7/os/x86_64/
enabled=1
gpgcheck=1
gpgkey=http://ftp.heanet.ie/pub/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-\$releasever - Extras
baseurl=http://ftp.heanet.ie/pub/centos/7/extras/x86_64/
enabled=1
gpgcheck=0
EOF
yum -y update
yum -y install docker
systemctl enable docker
systemctl start docker

Install kubelet, kubeadm, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/enforcing/permissive/g' /etc/selinux/config
yum -y install kubelet kubeadm kubectl
systemctl start kubelet
systemctl enable kubelet

Switch off swap and configure iptables and IP forwarding

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
echo 1 > /proc/sys/net/ipv4/ip_forward
swapoff -a

Join the cluster as a control-plane node using the join command from the master01 output. The following is just an example; use the output generated when you set up master01.

kubeadm join masterlb:6443 --token hwe4u6.hy79bfq4uq3myhsn \
--discovery-token-ca-cert-hash sha256:7b437ae3463c1236e29f30dc9c222f65f818d304f8b410b598451478240f105a \
--control-plane --certificate-key b38664ca2d82e7e4969a107b45d2be83767606331590d7b487eaad1ddbe8cd26

Setup the worker nodes

SSH into the worker nodes (node01 and node02) and do the following on each worker node.

Install Docker

LOAD_BALANCER_IP=masterlb #Replace this with the relevant hostname for the load balancer
sudo su -
cat <<EOF > /etc/yum.repos.d/centos.repo
[centos]
name=CentOS-7
baseurl=http://ftp.heanet.ie/pub/centos/7/os/x86_64/
enabled=1
gpgcheck=1
gpgkey=http://ftp.heanet.ie/pub/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-\$releasever - Extras
baseurl=http://ftp.heanet.ie/pub/centos/7/extras/x86_64/
enabled=1
gpgcheck=0
EOF
yum -y update
yum -y install docker
systemctl enable docker
systemctl start docker

Install kubelet, kubeadm, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/enforcing/permissive/g' /etc/selinux/config
yum -y install kubelet kubeadm kubectl
systemctl start kubelet
systemctl enable kubelet

Switch off swap and configure iptables and IP forwarding

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
echo 1 > /proc/sys/net/ipv4/ip_forward
swapoff -a

Join the cluster as a worker node using the join command from the master01 output. The following is just an example; use the output generated when you set up master01.

kubeadm join masterlb:6443 --token hwe4u6.hy79bfq4uq3myhsn \
--discovery-token-ca-cert-hash sha256:7b437ae3463c1236e29f30dc9c222f65f818d304f8b410b598451478240f105a
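Back on master01, all the nodes should report Ready once the pod network pods are running on them; this quick check is an addition to the original steps.

# Run on master01: should list the three masters and the worker nodes as Ready.
kubectl get nodes -o wide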

Setup Nginx Ingress Controller on the cluster

An NGINX ingress controller helps us route and manage traffic within the Kubernetes cluster and is the means to expose your workloads externally using Ingress resources.

On the master01 node (clone the gcp-terraform repository there first, as we did on the load balancer):

cd gcp-terraform/scripts
kubectl apply -f mandatory.yaml
kubectl apply -f service-nodeport.yaml
kubectl get all -n ingress-nginx

Test the ingress setup

kubectl apply -f apple.yaml
kubectl apply -f banana.yaml
kubectl apply -f ingress.yaml
#Wait for pods to come up
curl -kL http://masterlb/apple
curl -kL http://masterlb/banana

You should get output like the below:

apple
banana
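For reference, the ingress.yaml applied above typically looks something like the sketch below, based on the common apple/banana echo example; the service names, port, and API version are assumptions, and the actual manifest ships with the repository. Applying it again over the existing resource is harmless.

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Rewrite /apple and /banana to / before forwarding to the echo services.
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
      - path: /banana
        backend:
          serviceName: banana-service
          servicePort: 5678
EOF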

Setting up the Kubernetes Dashboard

Generate a certificate, key, and CSR for your dashboard domain and create a Kubernetes secret from them. The secret is then used by the dashboard so that it serves traffic with the provided certificate and key pair.

cd gcp-terraform/scripts/kubernetes-dashboard
dashboardDomain=<add the domain where you want to host your dashboard>
mkdir $HOME/certs
cd $HOME/certs/
openssl genrsa -out dashboard.key 2048
openssl rsa -in dashboard.key -out dashboard.key
openssl req -sha256 -new -key dashboard.key -out dashboard.csr -subj "/CN=${dashboardDomain}"
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
kubectl create ns "kubernetes-dashboard"
kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs

Create a service account for the dashboard and bind it to the cluster-admin role with a ClusterRoleBinding. You may wish to grant more granular permissions and allow only the required access.

kubectl apply -f dashboard-sa.yaml
kubectl apply -f dashboard-crb.yaml
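The exact contents of dashboard-sa.yaml and dashboard-crb.yaml are in the repository; a minimal sketch of what such a service account and binding usually look like is below. The admin-user name is an assumption that matches the token lookup further down.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF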

Apply the dashboard deployment and the Ingress resource to expose the Kubernetes dashboard through the ingress controller using the provided domain.

kubectl apply -f recommended.yaml
sed -i "s/DASHBOARD_DOMAIN/${dashboardDomain}/g" kubernetes-dashboard-in.yaml
kubectl apply -f kubernetes-dashboard-in.yaml

The dashboard should be accessible at https://${dashboardDomain} if everything runs smoothly.

You will need an access token to log in to the dashboard. Run the following to retrieve it:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Setting up the metrics server

The metrics server is necessary for the horizontal pod autoscaler and Prometheus to work. The following steps install the metrics server.

Install the metrics server

kubectl apply -f scripts/metrics-server/deploy/1.8+/
kubectl top nodes

Install Prometheus and Grafana

Prometheus and Grafana are widely used open-source monitoring and alerting tools and can be used to monitor the Kubernetes cluster.

kubectl create ns monitoring
kubectl apply -f scripts/prometheus-grafana/grafana-pv.yaml
kubectl apply -f scripts/prometheus-grafana/grafana-pvc.yaml
kubectl apply -f scripts/prometheus-grafana/manifests-all.yaml
kubectl apply -f scripts/prometheus-grafana/ingress.yaml

You can now access Prometheus on prometheus.localhost and Grafana on grafana.localhost. Log in to Grafana using the default admin/admin credentials.

Cleaning up

You might want to destroy the resources at the end, especially if you set up the infrastructure temporarily for learning. To destroy the Terraform-managed resources, run the following from your Terraform workspace:

terraform destroy

Further Reading

Thank you for reading through. I hope you enjoyed the story. If you are interested in learning more, check out my other stories, as they might be of interest to you.
