Creating & Deploying New Revisions on EKS (Elastic Kubernetes Service)

Ankan-Devops · Published in codelogicx · Jul 17, 2022

Amazon Elastic Kubernetes Service (EKS) is an AWS managed service that runs Kubernetes on AWS and on-premises. It is used to deploy, manage, and scale containerized applications using Kubernetes on Amazon Web Services.

Now, we will see how we can deploy EKS in AWS :

Creating EKS Cluster.

An EKS cluster can be created through the AWS console, eksctl or the AWS CLI.
We will follow the 'eksctl' option. The process involves :
Creating an IAM user -> Installing the AWS CLI, eksctl & kubectl on a terminal -> Deploying EKS using eksctl

IAM user with permissions to create and manage EKS cluster

  1. Create a policy named 'eks-policy' and attach the following JSON :

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ekspolicy",
          "Effect": "Allow",
          "Action": [
            "eks:*",
            "iam:CreatePolicy",
            "iam:CreateRole",
            "iam:AttachRolePolicy",
            "iam:DeleteRolePolicy",
            "iam:DetachRolePolicy",
            "iam:PutRolePolicy",
            "iam:DeleteRole",
            "iam:GetRole",
            "iam:GetPolicy",
            "iam:PassRole",
            "iam:TagRole",
            "iam:UntagRole",
            "iam:UpdateOpenIDConnectProviderThumbprint",
            "iam:UntagServerCertificate",
            "iam:TagPolicy",
            "iam:CreateOpenIDConnectProvider",
            "iam:DeleteOpenIDConnectProvider",
            "iam:UntagUser",
            "iam:ListOpenIDConnectProviders",
            "iam:ListOpenIDConnectProviderTags",
            "iam:TagServerCertificate",
            "iam:UntagPolicy",
            "iam:GetGroupPolicy",
            "iam:UntagOpenIDConnectProvider",
            "iam:GetOpenIDConnectProvider",
            "iam:UntagInstanceProfile",
            "iam:TagOpenIDConnectProvider",
            "iam:TagInstanceProfile"
          ],
          "Resource": "*"
        }
      ]
    }

    * We will attach this policy to the IAM user in the next step.
  2. Create an IAM user with only programmatic access.
  3. Attach the following permissions :
    - eks-policy (created above)
    - AmazonEKSServicePolicy
    - AWSCloudFormationFullAccess
    - AmazonEC2FullAccess
    - AmazonEC2ContainerRegistryPowerUser (to be used later for ECR image permission)
[Image: IAM user permissions for EKS]

Save the AWS access & secret keys, as they will be used in the next step.
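If you prefer the CLI over the console, the same steps can be sketched with the AWS CLI as below. This is only a sketch : the file name eks-policy.json, the user name eks-admin and the account ID 111122223333 are hypothetical placeholders.

# Create the custom policy from the JSON above (saved as eks-policy.json).
aws iam create-policy --policy-name eks-policy --policy-document file://eks-policy.json

# Create a user for programmatic access and attach the policies.
aws iam create-user --user-name eks-admin
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::111122223333:policy/eks-policy
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser

# Generate the access & secret keys to save for the next step.
aws iam create-access-key --user-name eks-admin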

Installing AWS CLI, eksctl & kubectl

  1. Installing AWS CLI : Used to configure the AWS account on which EKS is created.
    - Follow the link below to install it on your specific OS :
    https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
  2. Installing eksctl : Used to create the EKS cluster.
    - https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html
  3. Installing kubectl : Used to deploy new versions of the deployment in EKS. (Example : deploying a new version of a docker image)
    - https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
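To verify the tools are installed, check their versions (the exact output varies by version and OS) :

aws --version
eksctl version
kubectl version --client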

Creating EKS cluster through eksctl

  1. Configure the AWS CLI
    - Run 'aws configure' in the terminal and enter the details.
[Image: Add the details on the specific fields]
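A sample 'aws configure' session might look like the sketch below; the key values and region shown are dummies :

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json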

2. An EKS cluster can be created in a new VPC or in an existing VPC.

  • To create EKS in a new VPC, run :
    eksctl create cluster -n <cluster name> -r <aws region> --instance-types <type, ex: t2.micro> --node-volume-size <EBS volume size> --ssh-access=true --ssh-public-key=<SSH pem file name> --kubeconfig=~/.kube/config --vpc-cidr <new VPC CIDR range>
  • To create EKS in an existing VPC, run :
    eksctl create cluster -n <cluster name> -r <aws region> --instance-types <type, ex: t2.micro> --node-volume-size <EBS volume size> --ssh-access=true --ssh-public-key=<SSH pem file name> --kubeconfig=~/.kube/config --vpc-public-subnets <Public-subnet1,Public-subnet2> --vpc-private-subnets <Private-subnet1,Private-subnet2>

Note :
* You can remove the SSH flags if SSH access is not needed.
* When entering multiple public or private subnets, it is better to have them in different AZs.
* Make sure all the VPC configurations are done when creating EKS in an existing VPC.
- The public subnet must have an Internet gateway attached to its route table.
- Auto-assign public IP should be turned on.
* A filled-in example of the new-VPC command is sketched below.
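For illustration, a filled-in version of the new-VPC command might look like this; every value (cluster name, region, volume size, key name, CIDR) is a placeholder to replace with your own :

eksctl create cluster -n demo-cluster -r us-east-1 \
  --instance-types t2.micro \
  --node-volume-size 20 \
  --ssh-access=true --ssh-public-key=demo-key \
  --kubeconfig=~/.kube/config \
  --vpc-cidr 10.10.0.0/16

# Once the cluster is up, the worker nodes should report Ready
kubectl get nodes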

[Image: Command to create EKS using eksctl]
[Image: Output on successful EKS cluster creation]
[Image: 2 node groups (EC2 instances) created as mentioned in the eksctl command]

Creating EKS deployment yaml & service yaml

Deployment yaml

The deployment yaml is needed to deploy the docker image to the nodes. It also specifies the number of pods of that particular docker image to run as a ReplicaSet.

A sample deployment.yaml is given below :

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployapp
  labels:
    app: deployapp
spec:
  replicas: <ex: 2>
  selector:
    matchLabels:
      app: deployapp
  template:
    metadata:
      labels:
        app: deployapp
    spec:
      containers:
      - name: deployapp
        image: <ECR repository name>:${BITBUCKET_BUILD_NUMBER}
        ports:
        - containerPort: <ex: 80>

Breakdown :

  • The .spec.replicas field refers to the number of replicas to be created of a particular pod. It is an integer value.
  • The .spec.selector.matchLabels field is used to match the label of the pod for which the ReplicaSet is defined.
    - In the example above the matchLabels entry 'app: deployapp' matches the 'app: deployapp' label under the spec.template.metadata section.
    - The .spec.selector.matchLabels field is a map of {key,value} pairs. It has to match the pod labels exactly : all requirements from both matchLabels and matchExpressions must be satisfied in order to match.
  • The pod's labels are also used to match with the services for EKS.
  • The .spec.template.spec.containers section defines the docker image that needs to be deployed in the pods.
    - The image field specifies the ECR image name along with the tag.
    - The containerPort defines the port on which the docker container is exposed.
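Assuming the manifest above is saved as deployment.yaml, it can be applied and verified with kubectl like this (deployapp is the metadata.name used in the sample) :

kubectl apply -f deployment.yaml
# Wait until the desired number of replicas are available.
kubectl rollout status deployment/deployapp
# List the pods created by the ReplicaSet, selected by the pod label.
kubectl get pods -l app=deployapp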

Service & Ingress yaml

A service is used to make the pods easily accessible. It brings the pods under a single resource and provides a stable IP address/domain. It can be used to make the pods accessible to the outside world or to configure loadbalancers for routing to the pods.

The ingress resource configures the ALB/NLB to route HTTP or HTTPS traffic to different pods within the cluster. The AWS Load Balancer Controller is needed to create a loadbalancer of type Application or Network.

Services are mainly of 3 types :

  • ClusterIP (default): Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
  • NodePort: Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
  • LoadBalancer: Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.

In the following steps we will show three ways to create a LoadBalancer service :
1. Classic Loadbalancer (needs : service.yaml {type: LoadBalancer})
2. Network Loadbalancer (needs : service.yaml {type: LoadBalancer}, AWS LB Controller)
3. Application Loadbalancer (needs : service.yaml {type: NodePort}, AWS LB Controller, ingress.yaml)

1. Creating Classic LoadBalancer service :

A sample service.yaml is given below :

apiVersion: v1
kind: Service
metadata:
  name: serviceapp
spec:
  selector:
    app: deployapp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32216

Breakdown :

  • The .spec.selector is used to match the pod labels defined in deployment.yaml. This tells EKS for which pods the service is being created.
    - Here, the 'app: deployapp' under spec.selector matches the spec.template.metadata label in the deployment.yaml.
  • The service 'type' can be chosen as ClusterIP/NodePort/LoadBalancer/ExternalName.
    - Here it is chosen as LoadBalancer.
  • The ports section is defined as follows :
    - protocol : Defines the protocol, like TCP/UDP, to use.
    - port : The port number exposed internally in the cluster. Other pods within the cluster can communicate with this service on the specified port.
    - targetPort : The port on the pod that the request gets sent to, i.e. the exposed port on which the docker container listens.
    - nodePort : Exposes the service outside the cluster by means of the target node's IP address and the NodePort. This setting makes the service visible outside the Kubernetes cluster.
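After applying the service, the Classic Loadbalancer's DNS name appears in the EXTERNAL-IP column; the jsonpath query is just one convenient way to print the hostname alone :

kubectl apply -f service.yaml
kubectl get service serviceapp -o wide
# Print only the loadbalancer hostname (it can take a minute to populate).
kubectl get service serviceapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'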

2. Changing to Network Loadbalancer Service

By default, the loadbalancer created in EKS is a 'Classic Loadbalancer'. We can change it to a Network Loadbalancer for higher performance; the 'Classic Loadbalancer' is also deprecated in AWS. A Network Loadbalancer operates at layer 4 of the OSI model.

Pre-requisite :

The AWS Load Balancer Controller must be installed on the cluster for these annotations to create a Network Loadbalancer (it is the 'AWS LB Controller' listed for this option above).
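A minimal install sketch with Helm is shown below, assuming the IAM/OIDC prerequisites from the AWS documentation (an OIDC provider and an IAM service account for the controller) are already in place; my-cluster is a placeholder :

# Add the EKS charts repo and install the AWS Load Balancer Controller.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
# Assumes a service account 'aws-load-balancer-controller' already exists
# in kube-system (created via eksctl/IRSA as per the AWS docs).
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller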
A sample service.yaml is given below :

apiVersion: v1
kind: Service
metadata:
  name: serviceapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: deployapp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32216

Breakdown :

The following annotations are added to service.yaml for network loadbalancing :

annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
  service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
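To confirm the new loadbalancer is a Network one rather than Classic, you can query the ELBv2 API (a sketch; the output depends on your account) :

# ELBv2 only lists ALBs/NLBs, so the service's loadbalancer should
# appear here with Type "network".
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].{Name:LoadBalancerName,Type:Type,DNS:DNSName}'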

3. Creating Application Loadbalancer Service

An Application loadbalancer supports path-based routing. It operates at layer 7 of the OSI model. With an ALB we can use only one loadbalancer to route to different services by defining the HOST and PATH. This is the advantage of the Application loadbalancer. The disadvantage is that an ALB is slower than an NLB.

Pre-requisites :

To create an Application Loadbalancer service we need :
- service.yaml of type NodePort
- ingress.yaml of type alb
- the AWS Load Balancer Controller (installed as in the previous section)

Creating Service.yaml of type NodePort :

apiVersion: v1
kind: Service
metadata:
  name: serviceapp
spec:
  selector:
    app: deployapp
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Creating ingress.yaml of type alb :

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: serviceapp
            port:
              number: 80

Breakdown :

  • The service type used for application loadbalancing is 'NodePort'.
  • The ingress.class annotation defined in ingress.yaml is used to refer to the AWS loadbalancer controller through which the ALB is deployed.
  • The scheme annotation is defined as 'internet-facing' because an internal LB is created by default.
  • ALB path-based routing is configured under the spec.rules section of ingress.yaml.

Note :
- The port.number in ingress.yaml should match the 'port' in service.yaml.
- You can add multiple rules with different path prefixes pointing to different services, as sketched below.
- Any path that doesn't match a path given in ingress.yaml will return a 404 error, unless a defaultBackend is defined.
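As an illustration of the notes above, the sketch below routes /api to a hypothetical second service (api-service) and sends anything unmatched to serviceapp via defaultBackend; both backends and paths are placeholders :

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  defaultBackend:
    service:
      name: serviceapp
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
EOF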

Adding SSL to EKS services.

We can configure an EKS web application to use the HTTPS protocol. As a result, the loadbalancers can use port 443 to route web traffic over the internet.

Pre-requisites :

Creating AWS SSL certificate using AWS Certificate Manager.

  1. Request certificate -> Request a public certificate.
  2. Enter the domain name for which the SSL certificate has to be created. Adding a * wildcard before the domain (ex: *.devllops.com) makes the SSL certificate available to all subdomains.
  3. Validate the certificate using the DNS validation method. Add the given CNAME records to the domain registrar's DNS records.
  4. After the certificate is validated, copy the ARN of the certificate. It will be used later for the HTTPS configuration in the EKS service yaml.
[Image: Adding the domain for which the public certificate is to be created]
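The same request can be sketched with the AWS CLI; the wildcard domain reuses the example above and the returned ARN is what you copy in step 4 :

# Request a wildcard certificate with DNS validation.
aws acm request-certificate --domain-name '*.devllops.com' --validation-method DNS
# Fetch the CNAME records to add at the registrar, and later the status/ARN.
aws acm describe-certificate --certificate-arn <certificate ARN from the previous output>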

Configuring HTTPS routing for : Classic & Network Loadbalancer

  1. Add the ACM SSL details in the service.yaml

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <AWS SSL ARN>
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  2. Add the HTTPS port for the loadbalancer in the service.yaml
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
  3. The modified service.yaml looks like :
apiVersion: v1
kind: Service
metadata:
  name: serviceapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <AWS SSL ARN>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # the next 3 annotations are not needed for classic LB
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: deployapp
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80

Configuring HTTPS routing for : Application Loadbalancer

  1. Add the ACM SSL details in the ingress.yaml

    annotations:
      alb.ingress.kubernetes.io/certificate-arn: <AWS SSL ARN>
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/ssl-redirect: '443'
  2. The modified ingress.yaml looks like :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: <AWS SSL ARN>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  rules:
  - host: <Domain name, ex: devllops.com>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: serviceapp
            port:
              number: 80

Note :
- Any domain or path that doesn't match the host or path given in ingress.yaml will return a 404 error, unless a defaultBackend is defined.
- We can add multiple hosts in the ingress.yaml rules section for domain-name-based routing.

Deploying new revisions of a web application to EKS using Bitbucket-pipelines

The bitbucket-pipelines.yml file is used to set up continuous deployment of revisions to EKS.

  • The process involves creating a new version of the docker image.
  • The docker image is pushed to ECR.
  • The deployment.yaml is edited to use the new version of the docker image by changing the tag value.
  • The new deployment.yaml is deployed to EKS using kubectl command.

Prerequisites :

  1. Create a private ECR and save the private registry and repository name. This is the container registry where the docker images are to be pushed for deploying to EKS.
  2. Save the following secrets in the bitbucket repository variables :
    - AWS_KEY
    - AWS_SECRET
    - AWS_REGION
    * These are the AWS keys for the IAM user created before.
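Creating the private repository can also be sketched with the CLI; myapp is a placeholder name, and the repositoryUri in the output gives the <ECR registry name> and <ECR repository name> used in the pipeline below :

aws ecr create-repository --repository-name myapp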

bitbucket-pipelines.yml

A sample bitbucket-pipelines.yml is given below :

image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          script:
            - apt-get update
            - apt-get install -y curl unzip
            - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
            - unzip awscliv2.zip
            - ./aws/install
            - REG=<ECR registry name>
            - REPO=<ECR repository name>
            - CLUSTER=<EKS cluster name>
            - IMAGE=$REG/$REPO
            - TAG=${BITBUCKET_BUILD_NUMBER}
            - aws configure set aws_access_key_id "${AWS_KEY}"
            - aws configure set aws_secret_access_key "${AWS_SECRET}"
            - aws configure set default.region "${AWS_REGION}"
            - aws configure set default.output json
            - aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin $REG
            - docker build -t $IMAGE:$TAG .
            - docker push $IMAGE:$TAG
            - envsubst < eks_deployment/deployment.yaml > eks_deployment/deployment1.yaml
            - curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
            - chmod +x ./kubectl
            - mv ./kubectl /usr/bin/kubectl
            - aws eks --region ${AWS_REGION} update-kubeconfig --name $CLUSTER
            - kubectl apply -f eks_deployment/deployment1.yaml
            - kubectl apply -f eks_deployment/service.yaml # only when the service needs to be modified
            - kubectl apply -f eks_deployment/ingress.yaml # only for ALB, and only when the ingress needs to be modified
            - sleep 5
            - kubectl get services -o wide

Breakdown :

  • In the 1st part the AWS ECR registry name, repository name & the EKS cluster name are declared :
    - REG=<ECR registry name>
    - REPO=<ECR repository name>
    - CLUSTER=<EKS cluster name>
    - IMAGE=$REG/$REPO
  • ${BITBUCKET_BUILD_NUMBER} is a unique number generated every time the bitbucket pipeline runs. It is used to create a unique TAG for the new version of the docker image pushed to ECR.
  • The AWS CLI is configured so the docker image can be pushed to AWS ECR and so the kubectl command can deploy a new version of the deployment in EKS.
  • A new docker image with a unique tag is built and pushed to ECR :
    - aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin $REG
    - docker build -t $IMAGE:$TAG .
    - docker push $IMAGE:$TAG
  • The envsubst command substitutes ${BITBUCKET_BUILD_NUMBER} into the image tag in the deployment.yaml file (refer to the deployment.yaml created above), so EKS pulls the latest docker image for the new revision. A local illustration is sketched after this list.
  • The following commands in the bitbucket pipeline install & configure kubectl :
    - curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - mv ./kubectl /usr/bin/kubectl
    - aws eks --region ${AWS_REGION} update-kubeconfig --name $CLUSTER
  • Deploying the new version of the docker image to EKS :
    - kubectl apply -f eks_deployment/deployment1.yaml
    - kubectl apply -f eks_deployment/service.yaml (Note : applying service.yaml is only necessary at the beginning and when there are new changes to be made to the service)
    - kubectl apply -f eks_deployment/ingress.yaml (Note : only needed when deploying an Application Loadbalancer, and only necessary at the beginning and when there are new changes to be made to the ingress)
  • 'eks_deployment/deployment1.yaml' is the path to the deployment file as stored in the repository.
  • kubectl get services -o wide prints the loadbalancer URL created by the service, through which the web application can be accessed.
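To see locally what the envsubst step does, you can fake the build number and inspect the rendered image line; the grep is only for illustration :

export BITBUCKET_BUILD_NUMBER=42
envsubst < eks_deployment/deployment.yaml > eks_deployment/deployment1.yaml
# The ${BITBUCKET_BUILD_NUMBER} placeholder in the image tag is now the literal build number.
grep 'image:' eks_deployment/deployment1.yaml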
[Image: Breakdown of bitbucket-pipelines.yaml]
