Deploy an Application on Kubernetes Using Jenkins: Multicloud with AWS, Jenkins & EKS
Organizations can avoid dependence on a single cloud vendor by adopting a multi-cloud strategy. It also makes it easier for customers to negotiate better rates and service level agreements with providers. Because each cloud provider's data centers are spread across different regions, a multi-cloud approach lets businesses split workloads among providers, cutting latency and improving the experience for users in different geographies.
In this architecture, we show how quickly an application can be deployed to multiple clouds using Jenkins. Today's businesses are also switching from monolithic to microservice architectures to improve their operations.
High-level Steps
- AWS and GCP infra by Aniket Kumavat.
- EKS creation and Jenkins pipeline setup in AWS by Bhavesh Dhande.
- Jenkins pipeline setup in GCP by Siddhesh Patil.
- GKE creation and routing traffic between GKE and EKS using Route 53 by Rushabh Mahale.
Note: Beginning with step 2, I'll set up the Jenkins pipeline in AWS and create an EKS cluster.
Prerequisites -
- A VPC in AWS needs to be created (link).
- You should have the access key & secret key of your AWS account.
- A webhook connection with your GitHub repository.
Why do we use Jenkins Pipeline?
Jenkins Pipeline is a popular open-source tool that allows developers to define and manage continuous integration and continuous delivery (CI/CD) pipelines as code.
Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. A continuous delivery pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Here are some of the reasons why we use Jenkins Pipeline:
- Automation
- Scalability
- Flexibility
- Visibility
- Collaboration
Jenkins Pipeline Configuration For ECR
What are Jenkins Plugins?
Jenkins plugins are software components that extend the functionality of the Jenkins automation server. Plugins can be installed and configured in Jenkins to add new features or integrate with other tools and systems. Jenkins plugins can perform a wide range of tasks, including:
- Integrating with source control management tools, such as Git or Subversion, to enable automated builds and tests.
- Facilitating deployment and release management, such as deploying to cloud services like AWS or Azure, or orchestrating containerization with tools like Kubernetes or Docker.
- Enabling integration with other third-party tools, such as JIRA, SonarQube, or Selenium.
To integrate Docker and ECR with Jenkins, we have installed some extra plugins:
- Amazon ECR plugin
- CloudBees Docker Build and Publish plugin
- Docker plugin
- Docker Pipeline plugin
AWS Credentials for Jenkins
AWS credentials can be used in Jenkins to access AWS services such as EC2 instances, S3 buckets, and more. Here we use them to push our image to Amazon Elastic Container Registry (ECR) with the help of the Amazon ECR plugin.
Jenkins Job configuration:
Step 1.1 - Create a job in Jenkins.
Step 1.2 - Select the pipeline job type.
Step 1.3 - Select 'GitHub hook trigger for GITScm polling' to use the GitHub webhook.
Step 1.4 - Go to the GitHub repository, create a webhook, and check Recent Deliveries to confirm the connection is established.
Step 1.5 - In the Jenkins job's pipeline section, select Pipeline script and paste the script.
Code Explanation -
In the checkout stage, the job pulls the source code from the GitHub repository.
stages {
    stage('checkout') {
        steps {
            checkout scmGit(branches: [[name: '*/AWS']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/bhaveshdhande/multicloud-deploy.git']])
        }
    }
}
In the build stage, a Docker image is built, tagged with the current build ID, and pushed to Amazon ECR using AWS credentials.
stage('build') {
    steps {
        script {
            docker.withRegistry('https://<AWS-account-id>.dkr.ecr.ap-south-1.amazonaws.com/ecr-ap-south1', 'ecr:ap-south-1:awsmaster') {
                def customImage = docker.build("<AWS-account-id>.dkr.ecr.ap-south-1.amazonaws.com/ecr-ap-south1:${env.BUILD_ID}")
                /* Push the container to the custom Registry */
                customImage.push('latest')
            }
        }
    }
}
Code -
pipeline {
    agent any
    stages {
        stage('checkout') {
            steps {
                checkout scmGit(branches: [[name: '*/AWS']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/bhaveshdhande/multicloud-deploy.git']])
            }
        }
        stage('build') {
            steps {
                script {
                    docker.withRegistry('https://<AWS-account-id>.dkr.ecr.ap-south-1.amazonaws.com/ecr-ap-south1', 'ecr:ap-south-1:awsmaster') {
                        def customImage = docker.build("<AWS-account-id>.dkr.ecr.ap-south-1.amazonaws.com/ecr-ap-south1:${env.BUILD_ID}")
                        /* Push the container to the custom Registry */
                        customImage.push('latest')
                    }
                }
            }
        }
    }
}
Note: Replace '<AWS-account-id>' with your AWS account ID. 'awsmaster' is the ID of the AWS credentials stored in Jenkins, referenced in the 'ecr:<region>:<credentials-id>' format used by the Amazon ECR plugin.
Step 1.6 - Now build the job, check the console output, and verify whether the image was pushed successfully.
Step 1.7 - After the job finishes successfully, check the stage view.
What is ECR?
Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry service provided by Amazon Web Services (AWS). It is used to store, manage, and deploy Docker container images, which can be used to run applications in the cloud.
After the job runs successfully, check Amazon ECR to confirm the image was pushed.
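As a quick sanity check, the image URI you should see in ECR follows a fixed pattern. A small sketch (the account ID below is a placeholder; the region and repository name match this article's setup):

```shell
# Placeholder account ID -- substitute your own; region and repo follow this article.
ACCOUNT_ID="123456789012"
REGION="ap-south-1"
REPO="ecr-ap-south1"

# ECR image URIs have the form <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest"
echo "$IMAGE_URI"
# -> 123456789012.dkr.ecr.ap-south-1.amazonaws.com/ecr-ap-south1:latest

# With AWS credentials configured, the pushed tags can be listed with:
# aws ecr describe-images --repository-name "$REPO" --region "$REGION"
```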
What is EKS?
EKS stands for Amazon Elastic Kubernetes Service. It is a fully managed container orchestration service provided by Amazon Web Services (AWS) that simplifies the deployment, management, and scaling of containerized applications using Kubernetes.
EKS allows users to run Kubernetes on AWS without having to install, operate, and scale their own Kubernetes clusters. EKS manages the Kubernetes control plane, automates upgrades, and provides a highly available and scalable cluster infrastructure.
With EKS, developers can easily deploy and manage containerized applications on AWS, taking advantage of other AWS services, such as Elastic Load Balancing, Auto Scaling, and CloudWatch, for enhanced availability, scalability, and monitoring.
What is a Cluster?
An Amazon Elastic Kubernetes Service (EKS) cluster is a set of nodes that run containerized applications, along with the Kubernetes control plane components that manage those nodes and the applications running on them.
Steps to create EKS Cluster -
Step 2.1 - To create an EKS cluster, we first have to create an IAM role for the cluster and attach it to the cluster; to create the IAM role, follow this document.
Step 2.2 - In the networking section, select the VPC, at least 2 subnets, and a security group for incoming and outgoing traffic.
Step 2.3 - For this setup, we are using a public EKS endpoint.
Step 2.4 - Leave the remaining settings at their defaults, then review and create the EKS cluster.
EKS cluster successfully created.
Note: EKS offers both managed node groups and self-managed node groups; for this project we are using a managed node group, which the console simply calls a Node Group.
What is Fargate?
Amazon Elastic Kubernetes Service (EKS) Fargate is a serverless compute engine for containers that works with Amazon EKS, a managed Kubernetes service. EKS Fargate enables you to run containers on Amazon Web Services (AWS) without having to manage the underlying infrastructure. With EKS Fargate, AWS handles the server provisioning, scaling, and patching, while you focus on deploying and managing your applications.
What is a Node Group?
In Amazon Elastic Kubernetes Service (EKS), a node group is a set of nodes that share the same configuration and are managed by a single Autoscaling Group (ASG). Node groups are used to add capacity to an EKS cluster, and they allow you to customize the configuration of the nodes based on your requirements.
When you create a node group, you specify the number and type of EC2 instances that you want to use, along with the instance configuration, such as the AMI, instance size, and networking settings. Node groups also allow you to configure node labels, taints, and other Kubernetes settings that are applied to all the nodes in the group.
Steps to create node group -
Step 3.1 - In the Compute tab, select Add node group to create a node group.
Step 3.2 - Configure the node group, then create an IAM role and attach it.
Step 3.3 - In the node group's compute configuration, select the AMI type, capacity type, instance type, and a disk size of up to 20 GiB.
Step 3.4 - In the node group's scaling configuration, set the desired, minimum, and maximum size according to your needs.
Step 3.5 - Select the subnets in which to launch the nodes.
Step 3.6 - Review and create the node group.
The node group has now been created; to see the EC2 instances attached to it, go to the Compute section of the cluster.
You can also see a graphical representation of the cluster's cores, memory, and pod capacity.
EKS Add-on: AWS Load Balancer Controller
To use EKS with a load balancer, we first have to add an add-on to EKS that allows us to use EKS Ingress with an Application Load Balancer (ALB).
To do this, I will follow this document (here).
I have this EKS cluster, and I am using an EC2 instance for deployments.
Use this command on the EKS-master-vm to connect to the EKS cluster:
aws eks update-kubeconfig --region region-code --name my-cluster
Step 4.1 - My cluster is in ap-south-1, so I will use this command:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.7/docs/install/iam_policy.json
Step 4.2 - Now let's use the AWS CLI to create the policy:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
Step 4.3 - Now we will create a role that binds the EKS Kubernetes service account (KSA) with the policy we just created.
First, we have to check whether the cluster already has an IAM OIDC provider associated with it.
oidc_id=$(aws eks describe-cluster --name <EKS name> --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
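To see what the first command is doing, here is the same `cut` pipeline run on a hypothetical issuer URL (the ID below is AWS's documentation example, not a real provider):

```shell
# Hypothetical issuer URL of the shape `aws eks describe-cluster` returns.
issuer="https://oidc.eks.ap-south-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# Splitting on '/' gives: field 1 "https:", 2 "", 3 the host, 4 "id", 5 the OIDC ID.
oidc_id=$(echo "$issuer" | cut -d '/' -f 5)
echo "$oidc_id"
# -> EXAMPLED539D4633E53DE1B71EXAMPLE
```

The second command then greps the list of IAM OIDC providers for that ID; a match means the provider already exists.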
Step 4.4 - I didn't get any output, which means I have to create an IAM OIDC provider, so follow this document (here).
Step 4.5 - So, I will follow these steps.
Step 4.6 - After completing the above steps, you will have an OpenID Connect provider.
Step 4.7 - Now let's run this command again:
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
Now I got the OIDC ID.
Step 4.8 - Let's move ahead with the document.
Now edit this JSON, filling in your account ID, OIDC ID, and region code:
cat >load-balancer-role-trust-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<aws-account-id>:oidc-provider/oidc.eks.<region-code>.amazonaws.com/id/<oidc-id>"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.<region-code>.amazonaws.com/id/<oidc-id>:aud": "sts.amazonaws.com",
                    "oidc.eks.<region-code>.amazonaws.com/id/<oidc-id>:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
                }
            }
        }
    ]
}
EOF
Running this command creates the load-balancer-role-trust-policy.json file.
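Rather than editing the placeholders by hand, `sed` can fill them in. A sketch with made-up values, shown here on a single line of the policy (the same three expressions can be run with `sed -i` against load-balancer-role-trust-policy.json):

```shell
# Made-up values -- substitute your own account ID, region, and OIDC ID.
ACCOUNT_ID="123456789012"
REGION="ap-south-1"
OIDC_ID="EXAMPLED539D4633E53DE1B71EXAMPLE"

# Each -e expression replaces one placeholder everywhere it occurs.
echo '"Federated": "arn:aws:iam::<aws-account-id>:oidc-provider/oidc.eks.<region-code>.amazonaws.com/id/<oidc-id>"' |
  sed -e "s|<aws-account-id>|${ACCOUNT_ID}|g" \
      -e "s|<region-code>|${REGION}|g" \
      -e "s|<oidc-id>|${OIDC_ID}|g"
```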
Step 4.9 - Now let's create the role using the above JSON file:
aws iam create-role \
--role-name AmazonEKSLoadBalancerControllerRole \
--assume-role-policy-document file://"load-balancer-role-trust-policy.json"
Next, we will attach the policy to the newly created role. Apply this command after editing your account ID:
aws iam attach-role-policy \
--policy-arn arn:aws:iam::<aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
--role-name AmazonEKSLoadBalancerControllerRole
Step 4.10 - To apply these changes in EKS, we have to create a YAML file, so edit the following and apply it:
cat >aws-load-balancer-controller-service-account.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<aws-account-id>:role/AmazonEKSLoadBalancerControllerRole
EOF
Step 4.11 - Finally, we apply this command to execute our changes in EKS:
kubectl apply -f aws-load-balancer-controller-service-account.yaml
Step 4.12 - We have now granted all the permissions needed to use the load balancer controller in EKS, so the next steps cover installing the controller.
The controller requires cert-manager; use this command to install it:
kubectl apply \
--validate=false \
-f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
Step 4.13 - Now download the controller manifest using this command:
curl -Lo v2_4_7_full.yaml https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.4.7/v2_4_7_full.yaml
Step 4.14 - Run the following command to remove the ServiceAccount section from the downloaded file:
sed -i.bak -e '561,569d' ./v2_4_7_full.yaml
Step 4.15 - Now we will put our EKS cluster name into the file: replace the 'your-cluster-name' placeholder with 'Eks-cluster-ap-south1-cluster'.
sed -i.bak -e 's|your-cluster-name|Eks-cluster-ap-south1-cluster|' ./v2_4_7_full.yaml
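If you have not used `sed -i.bak` before, it edits the file in place and keeps the original as a `.bak` backup. A throwaway demonstration (the temp file stands in for v2_4_7_full.yaml):

```shell
# Throwaway file standing in for ./v2_4_7_full.yaml.
tmp=$(mktemp)
echo '--cluster-name=your-cluster-name' > "$tmp"

# -i.bak rewrites the file in place and saves the original as <file>.bak
sed -i.bak -e 's|your-cluster-name|Eks-cluster-ap-south1-cluster|' "$tmp"

cat "$tmp"       # -> --cluster-name=Eks-cluster-ap-south1-cluster
cat "$tmp.bak"   # -> --cluster-name=your-cluster-name (untouched backup)
rm -f "$tmp" "$tmp.bak"
```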
Step 4.16 - Finally, apply v2_4_7_full.yaml:
kubectl apply -f v2_4_7_full.yaml
Step 4.17 - Apply these commands:
curl -Lo v2_4_7_ingclass.yaml https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.4.7/v2_4_7_ingclass.yaml
kubectl apply -f v2_4_7_ingclass.yaml
Step 4.18 - Now verify the deployment:
kubectl get deployment -n kube-system aws-load-balancer-controller
Step 4.19 - After this, the most important step is to add tags to your subnets.
Private subnets:
Key: kubernetes.io/role/internal-elb, Value: 1
Public subnets:
Key: kubernetes.io/role/elb, Value: 1
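The console is one way to add these tags; they can also be applied with the AWS CLI. A sketch with a made-up subnet ID (the `aws` call is commented out because it needs credentials and a real subnet):

```shell
# Made-up subnet ID -- replace with a subnet your cluster actually uses.
PUBLIC_SUBNET="subnet-0123456789abcdef0"

# Public subnets get this tag; private subnets use kubernetes.io/role/internal-elb.
TAG="Key=kubernetes.io/role/elb,Value=1"
echo "$TAG"

# With AWS credentials configured:
# aws ec2 create-tags --resources "$PUBLIC_SUBNET" --tags "$TAG"
```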
Step 4.20 - Finally, I will deploy my application to this EKS cluster.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-app
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: eks-app
    spec:
      containers:
      - name: eks-app
        image: 165600868274.dkr.ecr.ap-south-1.amazonaws.com/ecr-ap-south1:latest
        ports:
        - name: http
          containerPort: 80
kubectl apply -f deployment.yaml
Now apply service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: eks-svc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: eks-app
  type: NodePort
kubectl apply -f service.yaml
Now apply ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eks-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
  - http:
      paths:
      - path: /AWS
        pathType: Prefix
        backend:
          service:
            name: eks-svc
            port:
              number: 80
kubectl apply -f ingress.yaml
Conclusion:
Setting up a Jenkins pipeline for an EKS cluster can greatly improve your development process by automating your software deployment and testing workflows. By leveraging tools such as Kubernetes and Jenkins, you can achieve a streamlined and efficient CI/CD process.
In case of any questions regarding this article, please feel free to comment in the comments section or contact me via LinkedIn.
I want to thank my team at Guysinthecloud for all of their help.
Thank You