Amazon Elastic Container Service for Kubernetes (Amazon EKS) — 100daysoflearning (Part 1)

Day 21, 22, 23 & 24

It's time to move on with Kubernetes and learn about the managed Kubernetes service from AWS, called Amazon EKS. What that means is that we do not have to control the master ourselves: AWS is responsible for controlling, scaling and maintaining it. That eliminates the effort of setting up disaster recovery and high availability for the Kubernetes control plane ourselves.

My post is based on the Linux Academy course Amazon EKS Deep Dive, my notes, and the official AWS documentation.
Let's start…
We will go through a variety of topics, but this post covers setting up the EKS cluster and its prerequisites, the worker nodes, and the Kubernetes dashboard. The good part is that I will show you a trick for accessing the dashboard which you probably won't find in the Linux Academy course or in the AWS documentation.
EKS is costlier than ECS or Elastic Beanstalk, but since it is Kubernetes it will be the most widely accepted service going forward. It is secure by default. As I mentioned earlier, EKS manages the master, i.e. the control plane: by default that is 3 masters and 3 etcd nodes, with backups, snapshots and autoscaling included. The worker nodes are EC2 instances which have to be managed by the user/customer.
Amazon EKS uses a CNI (Container Network Interface) plugin which is responsible for providing individual IP addresses to the pods via the kubelet. What happens is that the kubelet calls the CNI plugin, the plugin picks an IP address from the VPC in which the cluster sits, and assigns it to the EC2 instance acting as the worker node (EC2.AssociateAddress()). So your cluster sits in an Amazon VPC and stays within that VPC, using IP addresses from that VPC's own range (I hope this isn't leading to any confusion).
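Once the cluster is running, a quick way to see this in action (a sketch; the VPC ID placeholder is whatever your VPC stack, created below, outputs):
- kubectl get pods --all-namespaces -o wide    # the IP column shows addresses allocated from the VPC
- aws ec2 describe-vpcs --vpc-ids <your-vpc-id> --query 'Vpcs[0].CidrBlock'    # the pod IPs fall inside this range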
EKS-Optimized AMI: an AWS-supplied AMI based on Amazon Linux 2, pre-configured with Docker, the kubelet and the AWS IAM Authenticator. It allows a worker node to connect to the cluster automatically, without you manually joining it. It is built using Packer. Different regions have different AMI IDs, and they keep changing, so you need to keep an eye on the documentation.
Also, you can bid for Spot Instances by setting the SpotPrice property on the NodeLaunchConfig resource in the worker-node CloudFormation template, to take advantage of AWS Spot Instances, which offer discounts of up to 90% in some cases (a sketch follows).
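Purely as a sketch: SpotPrice is a real property of AWS::AutoScaling::LaunchConfiguration, but the bid value and the trimmed-down resource below are illustrative; the actual NodeLaunchConfig in the AWS template carries several more properties.

NodeLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    SpotPrice: "0.05"              # illustrative maximum bid in USD per hour; omit to use On-Demand
    ImageId: !Ref NodeImageId
    InstanceType: !Ref NodeInstanceType
    # ...remaining properties unchanged from the AWS-supplied template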

Now that you have some theoretical idea about AWS EKS, let's go ahead and start creating one:
The first step is to create an EKS role: go to the IAM service >> select Roles >> Create role >> select EKS.
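If you prefer the CLI, here is a minimal sketch of the same role (the role name eksServiceRole is my own choice; the two managed policies are the ones the console flow attaches):

cat > eks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name eksServiceRole --assume-role-policy-document file://eks-trust.json
aws iam attach-role-policy --role-name eksServiceRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eksServiceRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy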

Now that the role is set up, the next step is to set up a VPC for AWS EKS. For that there is already a CloudFormation template that can be used directly.
Go to the CloudFormation service >> Create stack >> and point it at the template below:

url : https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml
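Or from the CLI (the stack name EKS-VPC is my own choice):

aws cloudformation create-stack \
  --stack-name EKS-VPC \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml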

Now our VPC is set and we are ready to create the EKS cluster.
Go to the EKS service >> Create cluster >>

Click Next and pass the required information: the cluster name, the Kubernetes version, the EKS role created earlier, and the VPC, subnets and security group created by the VPC stack.

Once you click on Create, the cluster starts creating and you are all set to use it. If you prefer the CLI, the equivalent call is sketched below.
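A minimal sketch (the cluster name EKS-Cluster matches the one used later in this post; the account ID, subnet IDs and security group ID are placeholders you take from the role and VPC stack created above):

aws eks create-cluster --name EKS-Cluster \
  --role-arn arn:aws:iam::<account-id>:role/eksServiceRole \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,<subnet-3>,securityGroupIds=<sg-id>
aws eks describe-cluster --name EKS-Cluster --query cluster.status    # poll until this returns ACTIVE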
The next step is to provision the worker nodes for the Kubernetes cluster. For that there is a CloudFormation template using the AWS EKS AMI (different for each region), and the worker nodes it creates will join the cluster automatically.

Go to the CloudFormation service >> Create stack >> and enter the S3 URL below:
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml
Then specify the required details. The cluster name has to be exactly the same as the name given at cluster-creation time, so that the nodes can automatically join the cluster, and the node image ID differs per region; you can pick it from the AWS documentation. The rest of the details are simple to fill in. The stack name should be <clustername>-WorkerNodes. A CLI sketch follows.
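A hedged CLI sketch (the parameter names below follow the AWS-supplied template as I recall it, so verify them against the template itself; --capabilities CAPABILITY_IAM is required because the stack creates the node instance role):

aws cloudformation create-stack \
  --stack-name EKS-Cluster-WorkerNodes \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=ClusterName,ParameterValue=EKS-Cluster \
               ParameterKey=NodeGroupName,ParameterValue=EKS-Cluster-nodes \
               ParameterKey=NodeImageId,ParameterValue=<region-specific-ami> \
               ParameterKey=KeyName,ParameterValue=<ec2-keypair-name> \
               ParameterKey=VpcId,ParameterValue=<vpc-id> \
               ParameterKey=Subnets,ParameterValue='<subnet-1>\,<subnet-2>\,<subnet-3>' \
               ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=<sg-id>
               # commas inside the Subnets list value must be escaped with backslashes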

Once you have created the stack, the nodes get created as EC2 instances and will join the cluster. The next step is to set up kubectl and try to connect to the dashboard.
Below are the commands you need to run in order to set up kubectl on an EC2 machine.

Kubectl setup:
- mkdir $HOME/bin
- curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl
- chmod +x ./kubectl
- cp ./kubectl $HOME/bin/kubectl
- export PATH=$HOME/bin:$PATH
- echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
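A quick sanity check that the binary is on the PATH:
- kubectl version --short --client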

Now kubectl is set up and you have provisioned the worker nodes as well, but they won't have joined the cluster yet. For that you need to configure the AWS CLI and aws-iam-authenticator, and then apply the aws-auth-cm.yaml file. Before that, just note down the NodeInstanceRole from the Outputs section of the worker-node CloudFormation stack:

Next, let's go ahead with the installation of aws-iam-authenticator, upgrade the AWS CLI, and then deploy the YAML file.

aws-iam-authenticator:
- curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator
- chmod +x ./aws-iam-authenticator
- cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
- echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
- aws-iam-authenticator help

aws cli upgrade:
- curl -O https://bootstrap.pypa.io/get-pip.py
- python get-pip.py --user
- pip install awscli --upgrade --user
- export PATH=$HOME/.local/bin:$PATH
- echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc
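Verify the upgrade took effect:
- aws --version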

aws configure:
aws configure
AWS Access Key ID [None]: xxxxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]:
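Important: configure the CLI with the credentials of the same IAM user that created the cluster, because until the aws-auth ConfigMap is applied that identity is the only one allowed to talk to the cluster. You can confirm which identity the CLI is using with:
- aws sts get-caller-identity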

setting up the cluster config and deploying the yaml file:
- aws eks update-kubeconfig --name <cluster name>
- curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-11-07/aws-auth-cm.yaml
- vi aws-auth-cm.yaml, enter the noted NodeInstanceRole and save (a reference copy of the file follows this list)
- kubectl apply -f aws-auth-cm.yaml
- kubectl get nodes --watch and see the nodes coming up in the Ready state
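For reference, this is roughly what the downloaded aws-auth-cm.yaml looks like; the <NodeInstanceRole ARN> placeholder is what you replace with the value noted from the stack outputs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes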

Now that the cluster is set up and the worker nodes are connected to it, we will go ahead with the setup of the Kubernetes dashboard.
We will apply some YAML files on the EC2 instance to get more information into the dashboard, but to view it we will install kubectl locally, change the config file, and log in with a token:
Steps:
First run the following commands for installing dashboard, heapster and influxdb and setting up RBAC authentication:
- Kubernetes dashboard: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
- heapster: kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
- influxdb: kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
- heapster cluster role binding: kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
- eks-admin service account and cluster role binding: create a file eks-admin-service-account.yaml with the contents below:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system

- kubectl apply -f eks-admin-service-account.yaml
- getting the token to connect to the cluster:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
- now if you run kubectl proxy (on the EC2 instance), you should be able to see the dashboard at this URL: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Hold on: you still won't be able to see it, because kubectl proxy is running on the EC2 instance, not on your machine. The better way is to view the dashboard locally: just download kubectl, create a config file (for example in C:\Users\saipatha\.kube\config on Windows) and copy into it the contents of /root/.kube/config from the EC2 instance. In that config file, change one section:

users:
- name: arn:aws:eks:us-east-1:360343814661:cluster/EKS-Cluster
  user:
    token: <enter token value here>

Now run kubectl proxy locally, open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login, log in via the token, and you will be able to see the dashboard.

Happy Learning
Saiyam Pathak
https://www.linkedin.com/in/saiyam-pathak-97685a64/
https://twitter.com/SaiyamPathak
