Published in Geek Culture
Running Terraform via AWS Developer Tools — Complete CI/CD Guide

Learn the complete flow to store Terraform code in AWS CodeCommit & use CodePipeline to deploy that code in CodeBuild to provision resources on AWS.

After working on various DevOps projects, I noticed that CI/CD can be implemented in multiple ways using different services like GitLab, Jenkins, Azure DevOps etc. But AWS Developer Tools has some amazing integration with other AWS services, which makes CI/CD more efficient. Let's go deeper into some of the AWS Developer Tools services & then implement a real industrial project to learn CI/CD using these amazing technologies.

What’s AWS Developer Tools?


These are a set of services, mostly used by DevOps professionals, to automate the Continuous Integration & Continuous Deployment of applications. Services like AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline etc. can be used for different scenarios, such as a complete workflow to build, test & deploy application code, or running code to provision infrastructure.

To learn more: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/developer-tools.html

Intro to CodeCommit:


In very simple terms, it's a cloud-hosted & managed version control system similar to GitHub. Nowadays GitHub is the most popular platform for maintaining code. But if you want a private code repository where you can implement all kinds of AWS security features & integrate it seamlessly with other AWS services, then the better choice is AWS CodeCommit. Once we start the practical, you will notice it's very easy to work with AWS CodeCommit.

Official Documentation Page: https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html

Brief about CodeBuild:


You can think of this service as executing an independent, isolated process which helps you run commands for various purposes — mostly to compile/build code or to run it. CodeBuild can pull a container image & then run that image to create the container process. You can integrate CodeBuild with various AWS services — for example, Elastic Container Registry to pull your own customized container images, or you can give CodeBuild the necessary permissions to store build logs in CloudWatch Logs.

Official Documentation Page: https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html

CodePipeline in a nutshell:


This service has the capability to chain other services into a pipeline so that we can build, test, package & deploy our application code in a continuous flow. For example, CodePipeline can pull code from CodeCommit & then send it to CodeBuild to start the build process. Once the build is done, CodePipeline can take the artifact & store it in CodeArtifact or in S3. Similarly, it can integrate services like SNS & SQS for sending relevant information about the pipeline, or you can even integrate CodePipeline with external code repositories like GitHub, Bitbucket etc. to pull the code.

Official Documentation Page: https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html

Problem Statement:

  • Create one repository on CodeCommit to store the Terraform code. Connect CodeCommit with your local computer for continuous development of the Terraform code.
  • Create two CodeBuild projects — the first to run the terraform plan & the second to apply the changes.
  • Create one CodePipeline which will fetch the code from CodeCommit & run the CodeBuild projects to deploy that Terraform code, so that we can provision other resources.
  • Create the necessary IAM roles & permissions. Also store the logs of the CodeBuild process in CloudWatch Logs.

Sounds fun, right? Let's tackle the problem step by step. But before beginning, let me tell you some basic requirements.

Pre-requisite:

In terms of knowledge, you should know the basics of Git, Terraform, AWS IAM & S3. Everything else I will discuss in detail. You will of course need one AWS account, & I will be performing everything with Administrator access on that account.

If you don’t have a basic understanding of Terraform, then follow the below mentioned blog of mine:

Let’s jump in…

Note: The idea is simple. Initially I am going to run Terraform on my local system to provision the CodeCommit repository, the pipeline in CodePipeline, the CodeBuild projects, the S3 bucket & the IAM roles.

All codes specified in this blog are available on the below mentioned GitHub repository:

Step 1: Provisioning CodeCommit Repository & S3 bucket

The CodeCommit repository will be used to store the Terraform code, which we will use to provision resources other than the ones mentioned above. Let's say you want to set up a complex VPC architecture. This approach will show you how to create a CI/CD pipeline that runs that Terraform code inside CodeBuild to provision the VPC.

The S3 bucket we will use to store the backend state file, so that when CodeBuild runs Terraform, it can look into this state file. Initially it might sound a little bit confusing, but believe me, it's not that hard to understand once you complete the whole setup. Also, CodePipeline is going to use this same bucket to store its artifacts. Again, when we set up CodePipeline you will see how it functions.

Create one workspace folder on your system & put the below mentioned code files inside it. Definitely change the parameters as per your needs.

  • These are very basic Terraform files to provision an S3 bucket called “raktim-infra-vpc-backend” with a private ACL & versioning enabled. The same file also provisions a CodeCommit repository called “infra-vpc-repo”.
  • I created one empty folder in the S3 bucket to store the Terraform backend state file, which will be used while provisioning the VPC in future.
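The original code files aren't embedded in this copy, but a minimal sketch of Step 1 could look like the following (written against the older v3.x AWS provider syntax; the region & file layout are my assumptions — only the bucket & repository names come from the text):

```hcl
# main.tf — minimal sketch of Step 1 (AWS provider v3.x syntax)

provider "aws" {
  region = "us-east-1" # assumption — use your own region
}

# S3 bucket for the Terraform backend state & CodePipeline artifacts
resource "aws_s3_bucket" "backend" {
  bucket = "raktim-infra-vpc-backend"
  acl    = "private"

  versioning {
    enabled = true
  }
}

# Empty "folder" (zero-byte key) that will hold the backend state file
resource "aws_s3_bucket_object" "backend_folder" {
  bucket = aws_s3_bucket.backend.id
  key    = "terraform_backend/"
}

# CodeCommit repository for the VPC Terraform code
resource "aws_codecommit_repository" "infra_vpc" {
  repository_name = "infra-vpc-repo"
  description     = "Terraform code to provision the VPC"
}
```

Note that in AWS provider v4 & later, `acl` & `versioning` move into the separate `aws_s3_bucket_acl` & `aws_s3_bucket_versioning` resources.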

Note: I'm using an AWS IAM user with “AdministratorAccess” to authenticate Terraform via the AWS CLI on my local system.

First run “terraform init” & then “terraform apply” to provision the S3 bucket & the CodeCommit repository.

Now go to your AWS account and check CodeCommit & S3.

Step 2: Setting up required IAM Roles

Again, I'm going to use Terraform Code to create two IAM roles —

  • The first one is for the CodeBuild project, which is going to look at the S3 bucket to fetch the Terraform state file. We also need to grant the CodeBuild project certain permissions so that it can provision resources — in our case, a VPC.
  • The second one is for the pipeline in CodePipeline, which is going to run the CodeBuild projects. We will also grant it permission to fetch the code from CodeCommit.

First, following best practices, I'm adding some more variables in the “variables.tf” file.

Next, we have the code to create the IAM Role for CodeBuild…

  • We are creating one IAM role for the CodeBuild project which has permissions on VPC, on our specified S3 bucket & on CloudWatch Logs — simply because the CodeBuild project is going to provision the VPC for us using Terraform, it will maintain the Terraform state file in S3, & it will store the execution logs in CloudWatch Logs.
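As a hedged sketch (role & policy names are my assumptions; the exact actions in the original repository may be scoped differently), the CodeBuild role could look like this:

```hcl
# IAM role that CodeBuild assumes (names are assumptions)
resource "aws_iam_role" "codebuild_role" {
  name = "infra-vpc-codebuild-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "codebuild.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "codebuild_policy" {
  name = "infra-vpc-codebuild-policy"
  role = aws_iam_role.codebuild_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # VPC provisioning happens through EC2 API calls
        Effect   = "Allow"
        Action   = "ec2:*"
        Resource = "*"
      },
      {
        # Read/write the Terraform state file in the backend bucket
        Effect = "Allow"
        Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
        Resource = [
          "arn:aws:s3:::raktim-infra-vpc-backend",
          "arn:aws:s3:::raktim-infra-vpc-backend/*",
        ]
      },
      {
        # Store build execution logs in CloudWatch Logs
        Effect   = "Allow"
        Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "*"
      },
    ]
  })
}
```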

Run “terraform apply” & check the logs. Also, check on AWS console…

Next, we will write another file to set up the IAM role for CodePipeline…

  • This role has the specific permissions to fetch the Terraform code from CodeCommit. Next, it can run our own CodeBuild projects. It can also store the artifacts in S3 inside our specified bucket. We are giving CloudWatch permission so that in future CodePipeline can be integrated with CloudWatch Events.
  • The reason we are creating one local variable called “account_id” is that we haven't created the CodeBuild projects yet, so Terraform can't access their ARNs. Hence, we construct the ARNs manually using the project names from the variables file & the account ID.
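A sketch of that idea follows — variable names like `var.region` & the two project-name variables are my assumptions, & the repository name comes from Step 1:

```hcl
# Look up the current account ID instead of hard-coding it
data "aws_caller_identity" "current" {}

locals {
  account_id = data.aws_caller_identity.current.account_id
}

resource "aws_iam_role" "codepipeline_role" {
  name = "infra-vpc-codepipeline-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "codepipeline.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "codepipeline_policy" {
  name = "infra-vpc-codepipeline-policy"
  role = aws_iam_role.codepipeline_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Fetch the Terraform code from CodeCommit
        Effect   = "Allow"
        Action   = ["codecommit:GetBranch", "codecommit:GetCommit",
                    "codecommit:UploadArchive", "codecommit:GetUploadArchiveStatus"]
        Resource = "arn:aws:codecommit:${var.region}:${local.account_id}:infra-vpc-repo"
      },
      {
        # Start the CodeBuild projects — ARNs built by hand because
        # the projects don't exist yet when this role is created
        Effect = "Allow"
        Action = ["codebuild:StartBuild", "codebuild:BatchGetBuilds"]
        Resource = [
          "arn:aws:codebuild:${var.region}:${local.account_id}:project/${var.plan_project_name}",
          "arn:aws:codebuild:${var.region}:${local.account_id}:project/${var.apply_project_name}",
        ]
      },
      {
        # Store & fetch pipeline artifacts in the backend bucket
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:GetBucketVersioning"]
        Resource = "arn:aws:s3:::raktim-infra-vpc-backend/*"
      },
    ]
  })
}
```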

Run the terraform command & check IAM in AWS…

Step 3: Provisioning CodeBuild Project

Finally, it's time to create the CodeBuild projects. First look at the below code & after that I will explain how it works.

  • Here we are simply creating two CodeBuild projects — one to run the “terraform plan” command & another to run the “terraform apply” command, so that if the plan command fails, we don't run the apply command later on.
  • Also, we can see that the CodeBuild source & artifacts are managed by CodePipeline. When we invoke CodeBuild, we're going to run some Terraform commands inside the container, & for that we are using the “hashicorp/terraform:latest” container image.
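A sketch of one of the two projects (the plan one) — the apply project would be identical apart from its name & buildspec path; the names, paths & role reference are my assumptions:

```hcl
resource "aws_codebuild_project" "tf_plan" {
  name         = "infra-vpc-plan"                # assumed project name
  service_role = aws_iam_role.codebuild_role.arn # Step 2 role (assumed name)

  # Source code & artifacts are handed over by CodePipeline
  source {
    type      = "CODEPIPELINE"
    buildspec = "buildspec/plan-buildspec.yml"   # assumed file path
  }

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "hashicorp/terraform:latest"  # official Terraform image
    type         = "LINUX_CONTAINER"
  }

  logs_config {
    cloudwatch_logs {
      group_name = "infra-vpc-codebuild-logs"    # assumed log group name
    }
  }
}
```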

Now we are going to write those Terraform commands which we want to run inside the containers. So, in your workspace create one folder called “buildspec” & put the below mentioned two files inside it…
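The two buildspec files can be as small as this sketch (file names are my assumptions) — the plan version runs `terraform plan`, & the apply version swaps only the last command:

```yaml
# buildspec/plan-buildspec.yml — runs inside hashicorp/terraform:latest
version: 0.2

phases:
  build:
    commands:
      - terraform init
      - terraform plan

# buildspec/apply-buildspec.yml is identical except the last command:
#      - terraform apply -auto-approve
```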

We are done. Let's run the terraform command again & check on AWS Console.

Step 4: Provisioning CodePipeline Project

We are very close to completing our whole setup. This is the last file we are going to create to complete this CI/CD setup. See the below code.

  • It's a slightly bigger piece of code, but easy to understand. We are creating one pipeline in CodePipeline which has three stages. Now, it's a constraint in CodePipeline that you have to provide one S3 bucket name where CodePipeline will store its artifacts — simply because each of the CodePipeline stages runs in an independent container, & if you pull code in the 1st stage & want to use that code in the 2nd stage, that can be achieved only if CodePipeline stores the code in some centralized location. Currently that location is S3 for CodePipeline.
  • This is the reason we can notice that whatever output artifacts CodePipeline generates are exactly the same as what we pass as input artifacts in the second & third stages.
  • Next, in the Plan & Deploy stages we are running those two CodeBuild projects we created earlier.
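A hedged sketch of that three-stage pipeline — the stage, action & artifact names, the branch name & the resource references are my assumptions about how the earlier files are laid out:

```hcl
resource "aws_codepipeline" "infra_vpc" {
  name     = "infra-vpc-pipeline"
  role_arn = aws_iam_role.codepipeline_role.arn

  # Mandatory artifact store — the centralized location shared by all stages
  artifact_store {
    location = "raktim-infra-vpc-backend"
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["source_output"]
      configuration = {
        RepositoryName = "infra-vpc-repo"
        BranchName     = "main" # or "master", depending on your repository
      }
    }
  }

  stage {
    name = "Plan"
    action {
      name            = "Plan"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"] # same artifact the Source stage produced
      configuration = {
        ProjectName = "infra-vpc-plan"    # assumed plan project name
      }
    }
  }

  stage {
    name = "Deploy"
    action {
      name            = "Deploy"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]
      configuration = {
        ProjectName = "infra-vpc-apply"   # assumed apply project name
      }
    }
  }
}
```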

Let's run terraform apply for the last time…

Hold on… You might see that your CodePipeline failed — simply because it's picking up the code from the CodeCommit repository, but we haven't put any code there yet.

Step 5: Cloning the CodeCommit Repository

If you know Git & GitHub, then let me tell you: when you create one repository in GitHub without initializing it, it's like a blank folder which hasn't been git-initialized yet. Now there are two techniques…

  • First, you can create a folder on your local system, use git to initialize that folder, put your code in it, set up the remote location to upload the code to the remote repository & finally do a “git push”.
  • The second one is much simpler: initialize your remote repository with a “README.md” file, then clone that repository to your local system, put your code in & upload it.
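For reference, the first technique boils down to these commands (the remote URL is a placeholder — substitute your repository's clone URL):

```shell
git init
git add .
git commit -m "initial commit"
git remote add origin <your-codecommit-repository-url>
git push -u origin main   # or master, depending on your default branch
```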

I'm choosing the second method, so let's go to CodeCommit & initialize the repository with a README.md file.

Go inside your repository → Scroll down & click on “Create file”

Once you create the file, wait for some time & your pipeline will automatically start because of source polling — CodePipeline keeps checking CodeCommit every minute for changes. But again, it will fail, simply because we still haven't put any Terraform code in our repository.

Note: Here I haven't created a CloudWatch Events rule to invoke CodePipeline. But you can easily learn that on your own.

So, let's clone the CodeCommit repository in our local system & start writing Terraform code to develop our VPC in CI/CD manner.

Now, there are various methods available to clone the CodeCommit repository to your local system, but remember one thing: the AWS root account can't clone the repository. Initially I told you that I'm using an AWS IAM user which has “AdministratorAccess”. I used that user's access key & secret key to configure the AWS CLI on my local system. Hence, I'm able to run the Terraform commands.

You can use that same IAM user to clone the repository. For that, install “Python3” on your local system & run the below command to install the “git-remote-codecommit” library.
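The install command itself (assuming pip3 is on your PATH):

```shell
pip3 install git-remote-codecommit
```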

Now, if you have your AWS CLI configured, go to the AWS CodeCommit repository & copy the “HTTPS (GRC)” URL.

Reference Document: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html

Finally run the git clone command & hopefully you will be able to clone the repository.
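With git-remote-codecommit installed & the default AWS CLI profile configured, the clone command looks roughly like this (the repository name comes from Step 1; the `codecommit://` URL scheme is provided by git-remote-codecommit):

```shell
git clone codecommit://infra-vpc-repo
```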

In my case I was using “raktim_cli” as the profile, but this setup requires the default profile. So, I'm going to run “aws configure” again. Once that's done, I will have the default profile, using which I can clone the repository.

Step 6: Setting up the Terraform Backend to use the S3 bucket

Now let's start coding for the VPC. But before that, I request you to go to your S3 bucket & check inside the “terraform_backend” folder. It will be empty.

First, we’re going to set up the “backend” for Terraform & the provider as well, because these Terraform files are going to run in CodeBuild & they need to fetch the state file from a centralized location.

Create the below three code files…

In this provider we don't need to pass any profile, simply because we have already given the CodeBuild project full access to VPC & access to the S3 bucket. So, it can easily do the tasks on those two resources.

This is the file that will help maintain the Terraform state file in the S3 bucket inside the “terraform_backend” folder. One very important thing to always remember: you can't parameterize any values in this file, because the Terraform backend configuration is fixed & doesn't support variables.
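A sketch of that backend file (the key & region are my assumptions; the bucket name & folder come from Step 1):

```hcl
# backend.tf — values are hard-coded because the backend block
# doesn't support variables
terraform {
  backend "s3" {
    bucket = "raktim-infra-vpc-backend"
    key    = "terraform_backend/terraform.tfstate"
    region = "us-east-1" # assumed region
  }
}
```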

Now, let's quickly commit & push the code to CodeCommit & observe what happens…

Git definitely needs to know the author's name & email ID to make a commit, like we passed while creating the README.md file in AWS CodeCommit earlier. But I'm not going to set these two variables globally, because I have other repositories connected to GitHub, GitLab etc.

So, run the config command with the “--local” option to set the username & email for this repository only.
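The commands look like this (substitute your own name & email):

```shell
git config --local user.name "Your Name"
git config --local user.email "you@example.com"
```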

We have successfully uploaded our code in CodeCommit. Now let's check on AWS console.

Now wait for some time & check AWS CodePipeline.

Hurray, Our CodePipeline is working smoothly. Now go to your S3 bucket & you will see your terraform state file is present.

Step 7: Implementing the VPC using CI/CD

Now everything is very much automated & smooth. Go to your local git repository & alongside the “backend.tf” file create another file — let's say a sample Terraform code to create one VPC with one subnet in it.
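A minimal sketch of such a file (the CIDR ranges & tag values are my assumptions):

```hcl
# vpc.tf — sample VPC with one subnet, provisioned via the pipeline
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name      = "infra-vpc"
    CreatedBy = "CodeBuild" # tag to verify who provisioned it
  }
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"

  tags = {
    Name = "infra-vpc-subnet"
  }
}
```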

Push the code to CodeCommit & then after some time check the CodePipeline as well as the VPC dashboard.

From the tags of the VPC, we can easily verify that it was created by the CodeBuild project. Again, one small thing to always remember: CodeBuild was able to provision the VPC because of the CodeBuild IAM role, which has “VPCFullAccess” power.

Final Words:

  • I know, I know — it was a long discussion. But if you go into any industrial project, these learnings are just a very small part of the whole game. I definitely believe in having a strong foundation, & I hope this blog helped you learn a great use case.
  • As always, I request you to share your thoughts in the comments & give this blog a few claps. I have kept writing such blogs for the past 1.5 years & have been in this field for more than 2.5 years.
  • Do connect with me on LinkedIn. I also provide technical consultancy in Cloud & DevOps. Hopefully we might talk to each other someday.

https://www.linkedin.com/in/raktimmidya/

Thanks for reading. That’s all… Signing Off… 😊

Raktim Midya
Technical Content Writer || Exploring modern tools & technologies under the domains — AI, CC, DevOps, Big Data, Full Stack etc.