Server Containerization and Deployment 2

Stanley Ocran
7 min read · Jun 13, 2023


Picking up where we left off, the next order of business is to create the CI/CD pipeline and deploy to EKS on AWS. In summary, the steps you will follow are:

  1. Create an EKS Cluster, IAM Role for CodeBuild, and Authorize the CodeBuild
  2. Deployment to EKS using CodePipeline and CodeBuild

Without further ado, let's begin. Make sure to run all commands from the project directory.

Create an EKS (Kubernetes) Cluster

Create an EKS cluster named “simple-jwt-api” in a region of your choice. In this case, we will use the us-east-2 region.

eksctl create cluster --name simple-jwt-api --nodes=2 --version=1.22 --instance-types=t2.medium --region=us-east-2

The command above will take a few minutes to execute, and create the following resources:

  • an EKS cluster
  • a nodegroup containing two nodes.

You can view the cluster in the EKS cluster dashboard. Take note of the Kubernetes version and use it consistently across your EKS cluster, your local machine, and later in CodeBuild's buildspec.yml file.

After creating the cluster, check the health of your cluster's nodes:

kubectl get nodes

Create an IAM Role for CodeBuild

You will need an IAM role that CodeBuild will assume to access your EKS cluster. Follow the steps below to quickly set up an IAM role.

Get your AWS account id:

aws sts get-caller-identity --query Account --output text
#Returns the AWS account id

Update the trust.json file with your AWS account ID: replace <ACCOUNT_ID> with the value returned by the command above.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCOUNT_ID>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
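If you prefer to generate the file rather than edit it by hand, a small shell sketch like the one below works. The hard-coded ACCOUNT_ID is a placeholder for illustration; in practice, capture it from the aws sts command above.

```shell
# Placeholder account id for illustration; in practice use:
#   ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
ACCOUNT_ID=123456789012

# Write trust.json with the account id substituted in
cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${ACCOUNT_ID}:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

echo "Wrote trust.json for account ${ACCOUNT_ID}"
```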

Create a role, ‘FlaskDeployCBKubectlRole’, using the trust.json trust relationship by running the command below:

aws iam create-role --role-name FlaskDeployCBKubectlRole --assume-role-policy-document file://trust.json --output text --query 'Role.Arn'

The role policy is also a JSON file; it defines the set of actions CodeBuild is permitted to perform.
The policy file, ‘iam-role-policy.json’, allows the following actions: “eks:Describe*” and “ssm:GetParameters”.

Attach the iam-role-policy.json policy to the ‘FlaskDeployCBKubectlRole’ by running:

aws iam put-role-policy --role-name FlaskDeployCBKubectlRole --policy-name eks-describe --policy-document file://iam-role-policy.json

Authorize CodeBuild using EKS RBAC

Bear in mind that you will have to repeat this step every time you create a new EKS cluster. For CodeBuild to administer the cluster, you must add an entry for this new role to the aws-auth ConfigMap, which is used to grant role-based access control to your cluster.

First, let's get the current ConfigMap and save it to a file:

# Mac/Linux - The file will be created at `/System/Volumes/Data/private/tmp/aws-auth-patch.yml` path
kubectl get -n kube-system configmap/aws-auth -o yaml > /tmp/aws-auth-patch.yml
# Windows users can create the aws-auth-patch.yml file in the current working directory
kubectl get -n kube-system configmap/aws-auth -o yaml > aws-auth-patch.yml

Then open the ‘aws-auth-patch.yml’ file in an editor of your choice; I’ll use VS Code:

# Mac/Linux
code /System/Volumes/Data/private/tmp/aws-auth-patch.yml
# Windows
code aws-auth-patch.yml

Add the following group in the data → mapRoles section of this file:

- groups:
    - system:masters
  rolearn: arn:aws:iam::<ACCOUNT_ID>:role/FlaskDeployCBKubectlRole
  username: build

Don’t forget to replace <ACCOUNT_ID> with your AWS account ID. While you can copy-paste the snippet above, be careful with the indentation.
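For context, after the edit the data section of your ConfigMap should look roughly like the sketch below. The first mapRoles entry was created by eksctl for your worker nodes and will differ in your file; leave it untouched and add the new entry beneath it.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::<ACCOUNT_ID>:role/<NODE_INSTANCE_ROLE>   # created by eksctl; leave as-is
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
        - system:masters
      rolearn: arn:aws:iam::<ACCOUNT_ID>:role/FlaskDeployCBKubectlRole
      username: build
```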

Once you have made the change, update your cluster’s configmap using the snippet below:

# Mac/Linux
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
# Windows
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat aws-auth-patch.yml)"

If executed properly, you should receive ‘configmap/aws-auth patched’ as an output message.

Now that CodeBuild has been authorized, we can proceed to create the pipeline. We want this pipeline to track changes to our GitHub repo so that when new code is pushed, a new image is built and deployed to your EKS cluster.

To set this up, we need to generate a GitHub access token, which will allow CodePipeline to monitor when a repo changes. A token is analogous to your GitHub password and can be generated from your GitHub settings (Settings → Developer settings → Personal access tokens). Generate the token with full control of private repositories, and be sure to save it somewhere secure. Once you create a personal access token, you can share it with any service (such as AWS CloudFormation) to allow access to the repositories under your GitHub account.

Create the CodePipeline Resources using a CloudFormation Template

There is a file named ci-cd-codepipeline.cfn.yml provided in the repo. This is the template file that will be used to create your CodePipeline and CodeBuild resources. Open this file, and go to the ‘Parameters’ section. Ensure that the following parameter variables have the appropriate values:

EksClusterName, GitSourceRepo, GitBranch, GitHubUser and KubectlRoleName.
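As a rough sketch, the Parameters section of the template might be filled in like this. All default values below are examples for this walkthrough, not the exact contents of the provided template; substitute your own repo, user, and role names.

```yaml
Parameters:
  EksClusterName:
    Type: String
    Default: simple-jwt-api           # the cluster created earlier
  GitSourceRepo:
    Type: String
    Default: simple-jwt-api           # your forked repo name (example)
  GitBranch:
    Type: String
    Default: master
  GitHubUser:
    Type: String
    Default: your-github-username     # example placeholder
  KubectlRoleName:
    Type: String
    Default: FlaskDeployCBKubectlRole # the IAM role created earlier
```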

Once those values have been entered, review the resources that will be created using this template. Namely :

  • An ECR repository to store your Docker image
  • An S3 bucket to store your pipeline artifacts
  • Custom provisioning logic
  • A Lambda function and its IAM role
  • CodeBuild and CodePipeline resources and their IAM roles

We can now move to deploying this template.

Use the AWS web console to create a stack for CodePipeline using the CloudFormation template file ci-cd-codepipeline.cfn.yml. Go to the CloudFormation service in the AWS console and press the Create Stack button. It will walk you through the following three steps:

  1. Specify template — Choose the options “Template is ready” and “Upload a template file” to upload the template file ci-cd-codepipeline.cfn.yml. Click the ‘Next’ button.

  2. Specify stack details — Give the stack a name. A few fields will be auto-populated from the parameters in the ci-cd-codepipeline.cfn.yml file. Fill in your GitHub access token generated in the previous step. Ensure that the GitHub repo name, IAM role, and EKS cluster name match the ones you created earlier, then click ‘Next’.

  3. Configure stack options — Leave the defaults and create the stack. You can check the stack status in the CloudFormation console. It will take some time (5–15 mins) to create. After the stack is successfully created, you will see the CodeBuild and CodePipeline resources created for you.

The build will trigger, and CodeBuild will execute the commands/steps defined in the buildspec.yml file.

In the buildspec.yml file, use the same kubectl version (or within one minor version) as you used when creating the EKS cluster earlier. You can run kubectl version --short --client in your local terminal to check your local version, then set the version in the buildspec.yml file accordingly.

The buildspec.yml file specifies the different phases of a build, such as an install, pre-build, build, and post-build. Each phase has a set of commands to be automatically executed by CodeBuild.

  • install phase: Install Python, pip, and kubectl, and update the system path
  • pre-build phase: Log in to the ECR repo where CodeBuild will push the Docker image
  • build phase: Build the Docker image
  • post-build phase: Push the Docker image to the ECR repo, update the EKS cluster’s kubeconfig, and apply the configuration defined in simple_jwt_api.yml to the cluster
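As an illustration, a buildspec.yml implementing these phases might look roughly like the following. This is a sketch, not the exact file from the repo: the kubectl version, the $REPOSITORY_URI variable (supplied by the pipeline template), and the file names are assumptions you should replace with your project's values.

```yaml
version: 0.2

phases:
  install:
    commands:
      # Install kubectl (version is a placeholder; match your cluster version)
      - curl -LO https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl
      - chmod +x ./kubectl && mv ./kubectl /usr/local/bin/kubectl
  pre_build:
    commands:
      # Log in to the ECR registry that $REPOSITORY_URI points at
      - aws ecr get-login-password | docker login --username AWS --password-stdin $REPOSITORY_URI
  build:
    commands:
      # Tag the image with the commit id that triggered the build
      - docker build -t $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
      # Point kubectl at the cluster, then roll out the manifests
      - aws eks update-kubeconfig --name simple-jwt-api
      - kubectl apply -f simple_jwt_api.yml
```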

You can see each command being executed in the CodeBuild log console when you trigger a build.

Trigger the build

To trigger a build you need to push code to the corresponding GitHub repository.

## Verify the remote destination.
## It should point to the repo in your account (not my repo).
## Otherwise, FORK the repo, and then clone it locally
git remote -v
## After making changes locally, check which files changed
git status
## Add the changed file to the Git staging area
git add <filename>
## Provide a meaningful commit description
git commit -m "my comment"
## Push the local master branch to the remote master branch
git push

Once the change has been pushed, you can verify from the CodePipeline dashboard. You should see that the build is running, and it should succeed.

After the build succeeds, you can find the endpoint of the dockerized application using this command:

kubectl get services simple-flask-deployment -o wide
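For reference, the external endpoint reported above comes from a Kubernetes Service of type LoadBalancer applied from simple_jwt_api.yml. A minimal sketch of what such a Service definition might look like is below; the selector label and ports are assumptions for illustration, not the exact contents of the repo's file.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-flask-deployment
spec:
  type: LoadBalancer     # provisions an external endpoint on AWS
  selector:
    app: simple-flask    # must match the Deployment's pod labels (assumed)
  ports:
    - port: 80           # port exposed by the load balancer
      targetPort: 8080   # container port the Flask app listens on (assumed)
```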

And that brings us to the end of this walkthrough.

Follow-up

If you run into any difficulties or challenges please feel free to reach out to me via email ‘stanocran@gmail.com’.

You can buy me a coffee too if you find this helpful.


Stanley Ocran

Junior web dev, cloud practitioner, and all things in between.