Kubernetes with KOPS on AWS - Clusters, Metrics Server, Dashboard (Web UI), Scaling HPA, Disruption Budget

Sanjay Chauhan
Jun 23, 2019


Kubernetes is an open-source platform for managing containerized workloads on cloud infrastructure, with powerful container deployment, management and scaling capabilities.

It is very easy to create and maintain Kubernetes clusters using kops.

Here, we are going to describe how to use kops to install and manage a Kubernetes cluster on AWS.

Installation & Configuration

Install kops :-

Launch an EC2 instance and SSH into it; this machine will be used to run kops and kubectl.

curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
sudo chmod +x ./kops
sudo mv ./kops /usr/local/bin/

Install kubectl :-

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Set up an IAM user called kops (programmatic access) and a group called kops, then grant it the required access to services such as EC2, S3 and VPC.
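If you prefer the CLI over the console, a sketch like the following should work (the exact policy set is an assumption; kops typically also wants IAM and Route 53 permissions when DNS is used):

# create the group and attach managed policies for EC2, S3 and VPC
aws iam create-group --group-name kops
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
# create the programmatic user, add it to the group and generate access keys
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops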

Log in to the EC2 instance again and configure the AWS CLI:

aws configure

Enter the access key, the secret access key and the region you are using.

Set up environment variables

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

Configure DNS if needed. If you don't want to set up DNS, append ".k8s.local" to your cluster name and kops will use gossip-based discovery instead of Route 53.

Create the S3 bucket that kops will use as its state store. Here it is created with the name "youbucketname".
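For example, from the CLI (using the bucket name and region from this walkthrough; enabling versioning is recommended for the kops state store):

# create the bucket in ap-south-1 and turn on versioning
aws s3api create-bucket --bucket youbucketname --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1
aws s3api put-bucket-versioning --bucket youbucketname --versioning-configuration Status=Enabled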

Creating your first cluster

Prepare environment

At this stage, you are ready to create your first cluster. Make the process straightforward by setting up a few environment variables.

export NAME=myfirstcluster.example.com
export KOPS_STATE_STORE=s3://prefix-example-com-state-store

If you don't want to set up DNS and are using the k8s.local suffix instead:

export NAME=yourclustername.k8s.local
export KOPS_STATE_STORE=s3://youbucketname

We can always pass these values with the --name and --state flags if we don't want to use environment variables.

Create cluster configuration

Here, one needs to note the available zones. Currently we are deploying this cluster to the ap-south-1 region.

aws ec2 describe-availability-zones --region ap-south-1

Below is the create cluster command.

kops create cluster --name=yourclustername.k8s.local --zones=ap-south-1a --node-count=2 --node-size=t2.micro --master-size=t2.medium

You can check all instances created by kops within ASG (Auto Scaling Groups).

You will get a warning saying that you need to create an SSH public key:

ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa
kops create secret --name ${NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub

To spin up the cluster and its nodes from the configuration we created with kops create, run the commands below.

kops update cluster --name ${NAME} --yes
kops validate cluster
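Validation can take a few minutes. Once it passes, a quick sanity check with kubectl might look like this (kops normally exports the kubeconfig for ${NAME} when the cluster is updated):

kubectl get nodes                    # the master and two worker nodes should show as Ready
kubectl get pods -n kube-system      # core system pods should be Running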

Some important commands:-

To edit master configuration (instance type, label, min, max etc)

kops edit ig master-ap-south-1a --name ${NAME}

To edit nodes configuration (instance type, label, min, max, etc)

kops edit ig nodes --name ${NAME}

To see the cluster configuration and the instance groups in it:

kops get --name ${NAME}
kops edit cluster ${NAME}

It will take a couple of minutes for the cluster to start, so wait until validation passes.

The update cluster command will create the ELB that points to the master, and it will also create the worker nodes.

It will also create the Auto Scaling groups for the master and the nodes.
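One way to confirm this from the CLI (shown as an illustration; the group names are generated by kops from the cluster and instance group names):

# list the Auto Scaling groups kops created in the region
aws autoscaling describe-auto-scaling-groups --region ap-south-1 --query "AutoScalingGroups[].AutoScalingGroupName"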

Metrics Server Configuration

Deploy the metrics server in the cluster; it provides metrics via the Resource Metrics API.

https://github.com/kubernetes/kops/tree/master/addons/metrics-server

In order to deploy metrics-server in your cluster, run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/metrics-server/v1.8.x.yaml

Then run kops edit cluster (to fix issues with metrics-server connecting to the kubelet API) and add the parameters below (anonymousAuth, authenticationTokenWebhook, authorizationMode) under the kubelet section:

kubelet:
  anonymousAuth: false
  authenticationTokenWebhook: true
  authorizationMode: Webhook
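Kubelet settings only take effect once the change is pushed out, so a rolling update is usually needed; once the metrics-server pod is running you can verify it with kubectl top (a sketch, assuming the cluster name is still in ${NAME}):

kops update cluster --name ${NAME} --yes
kops rolling-update cluster --name ${NAME} --yes
kubectl top nodes                     # node CPU/memory should appear once metrics flow
kubectl top pods --all-namespaces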

Dashboard

The dashboard is an important part of this setup; make sure you have it installed. It only takes a few simple steps.

Install the dashboard using:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
kubectl cluster-info

Take the "Kubernetes master" URL from the kubectl cluster-info output and append the following to it:-

/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

The browser will prompt for a username and password; to get them, run the command below.

kubectl config view

The username and password appear in the command's output.

Next, we need a token.

Create an admin user by following the steps at this URL:-

https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
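The manifests on that page look roughly like the following sketch: a ServiceAccount plus a ClusterRoleBinding to the cluster-admin role. Save it (for example as admin-user.yml, a name chosen here for illustration) and apply it with kubectl apply -f.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system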

Once the user is created with admin access, get the token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Horizontal Pod Autoscaler

The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization.
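The HPA needs a target deployment whose pods declare CPU requests, otherwise it has nothing to compute utilization against. The article assumes a deployment called webapp already exists; a minimal sketch (the image and values are placeholders) could be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx            # placeholder; any HTTP-serving image works
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m           # required for CPU-based autoscaling
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80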

Let's try it using the commands below.

kubectl autoscale deployment webapp --cpu-percent=10 --min=1 --max=4
kubectl get hpa

To test:-

kubectl run -i --tty load-generator --image=busybox -- /bin/sh

Hit enter to get a command prompt, then generate load against the service that exposes the autoscaled deployment:

while true; do wget -q -O- http://webapp.default.svc.cluster.local; done
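In another terminal you can watch the autoscaler react; once average CPU rises above the 10% target, the replica count should climb towards 4 (how quickly depends on the metrics pipeline):

kubectl get hpa webapp -w           # the HPA created by kubectl autoscale is named after the deployment
kubectl get deployment webapp -w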

Disruption Budget

A Pod Disruption Budget lets you limit the number of concurrent voluntary disruptions your application experiences, so that during node drains or upgrades a minimum number of pods stays available.

Create a YAML file pdfex.yml:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: budget-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: webapp

kubectl apply -f pdfex.yml
kubectl get poddisruptionbudgets

To test, drain one of the worker nodes; evictions will be blocked while they would leave fewer than two available webapp pods:-

kubectl drain <node-name> --ignore-daemonsets

Delete Cluster

In case you want to delete the complete cluster, run the delete command with the cluster name and the --yes flag.

kops delete cluster --name ${NAME} --yes

To check the details of the resources that will be deleted, run the delete command without the --yes param.
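For example, this dry run only prints what would be removed:

kops delete cluster --name ${NAME}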
