Creating a Full CI/CD Pipeline on AWS with Jenkins, Slack, and GitHub: Part 4 — AWS setup with Terraform

Rapidcode Technologies
9 min read · Nov 23, 2023


In this fourth part, we're putting the finishing touches on our AWS setup. Here's what's left to create:

S3 Buckets: These are like storage bins for logs and for holding the Jenkins User Data (which we'll tackle in the fifth part).

ECR Repository: This is where we'll upload our Docker images.

IAM Policies: These ensure our EC2 instance has the right access to other AWS resources.

Secrets Manager Secret: This is where we'll stash our GitHub SSH keys (which we'll create). Jenkins needs these keys to pull in the repository at the start of the pipeline. We'll create special Jenkins credentials to handle the private key, so Jenkins can talk to the GitHub repo.

Part 1 (here) → We’ll kick things off by setting up our project. We’ll download a Web App to test our infrastructure and pipeline. We’ll also create and test some Dockerfiles for the project and upload it all to GitHub.

Part 2 (here) → We’ll get Slack in on the action. We’ll create a Bot for Jenkins to keep us posted on how the pipeline’s doing.

Part 3 (here) → It’s time to build the AWS Infrastructure with Terraform. We’ll whip up some EC2 instances, set up SSH keys, create the network infrastructure, and lay the foundation for IAM roles.

Part 4 (Right now) → We’re not done with AWS yet. In this step, we’ll make S3 buckets and ECR repositories, and finish defining the IAM roles with the right policies.

Part 5 (here) → We’ll fine-tune our Jenkins and Web App instances by making sure the user data is just right.

Part 6 (here) → We’ll put the icing on the cake by implementing the pipeline in a Jenkinsfile. We’ll run the pipeline and see everything come together smoothly. Then, we’ll wrap things up with some final thoughts.

Let’s get started!

1. S3 buckets

Create s3.tf in the Terraform root directory and paste the following code:

# S3 Bucket storing logs
resource "aws_s3_bucket" "nodejs-web-app-logs" {
  bucket = "rapidcode-nodejs-web-app-logs"
  acl    = "private"
}

# S3 Bucket storing Jenkins user data
resource "aws_s3_bucket" "jenkins-config" {
  bucket = "rapidcode-jenkins-config"
  acl    = "private"
}

Note that the bucket name needs to be unique across all of AWS! Also note that on AWS provider v4 and later, the acl argument on aws_s3_bucket is deprecated in favor of a standalone aws_s3_bucket_acl resource.
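If you're on a recent provider version, a minimal sketch of the equivalent ACL setup (using the bucket resources defined above):

# Provider v4+ style: ACLs live in their own resource
resource "aws_s3_bucket_acl" "nodejs-web-app-logs" {
  bucket = aws_s3_bucket.nodejs-web-app-logs.id
  acl    = "private"
}

resource "aws_s3_bucket_acl" "jenkins-config" {
  bucket = aws_s3_bucket.jenkins-config.id
  acl    = "private"
}

Depending on your buckets' Object Ownership settings, you may also need an aws_s3_bucket_ownership_controls resource before ACLs can be applied.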

2. ECR repositories

Create ecr.tf in the Terraform root directory and paste the following code:


# Production Repository
resource "aws_ecr_repository" "nodejs-web-app" {
  name                 = "nodejs-web-app"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name = "Elastic Container Registry to store Docker Artifacts"
  }
}

# Staging Repository
resource "aws_ecr_repository" "nodejs-web-app-staging" {
  name                 = "nodejs-web-app-staging"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name = "Elastic Container Registry to store Docker Artifacts"
  }
}

# Test Repository
resource "aws_ecr_repository" "nodejs-web-app-test" {
  name                 = "nodejs-web-app-test"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name = "Elastic Container Registry to store Docker Artifacts"
  }
}
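The three repository blocks are identical except for their names. If you'd rather not repeat yourself, a for_each version like the sketch below could replace all three. Note this changes the Terraform resource addresses, so the references used later in jenkins.tf would become aws_ecr_repository.repos["nodejs-web-app"].repository_url and so on:

# Optional: one block for all three repositories
resource "aws_ecr_repository" "repos" {
  # each.key becomes the repository name
  for_each             = toset(["nodejs-web-app", "nodejs-web-app-staging", "nodejs-web-app-test"])
  name                 = each.key
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name = "Elastic Container Registry to store Docker Artifacts"
  }
}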

3. IAM policies

The Web App EC2 instance only needs access to the ECR repository, so that it can pull down the production image to run. The Jenkins instance needs access to the following resources:

S3 buckets → To be able to upload logs and pull down its user data configuration files;
ECR → To be able to push all the images it produces;
EC2 → To be able to reboot the Web App instance to make it pull down and run the new production image;
Secrets Manager → To be able to retrieve the GitHub SSH private key.

ECR Access Policy

Define the following policy in the iam.tf file:

resource "aws_iam_policy" "ecr-access" {
name = "ecr-access"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:ListTagsForResource",
"ecr:DescribeImageScanFindings",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:PutImage"
],
"Resource": "*"
}
]
}
EOF
}
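This single policy covers both pulling and pushing. Since the Web App instance only ever pulls the production image, you could optionally give it a narrower, pull-only policy instead. A minimal sketch (ecr-pull-access is a new name, not part of the original setup):

# Optional: pull-only policy for the Web App instance
resource "aws_iam_policy" "ecr-pull-access" {
  name   = "ecr-pull-access"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}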

S3 Access

Define the following policy in the iam.tf file:

# Policy: S3 Access
resource "aws_iam_policy" "s3-access" {
  name   = "s3-access"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
EOF
}
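Granting s3:* on every bucket in the account is broader than Jenkins needs. If you want least privilege, here's a sketch scoped to the two buckets we created earlier (both the bucket ARNs and their objects must be listed):

# Optional: same policy scoped to our two buckets
resource "aws_iam_policy" "s3-access" {
  name   = "s3-access"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "${aws_s3_bucket.nodejs-web-app-logs.arn}",
        "${aws_s3_bucket.nodejs-web-app-logs.arn}/*",
        "${aws_s3_bucket.jenkins-config.arn}",
        "${aws_s3_bucket.jenkins-config.arn}/*"
      ]
    }
  ]
}
EOF
}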

EC2 Access

To allow the Jenkins instance to reboot the Web App instance, define the following policy in the iam.tf file:

# Policy: EC2 Reboot Access
resource "aws_iam_policy" "ec2-access" {
  name   = "ec2-reboot-access"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RebootInstances",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

Secrets Manager Access

Finally, we define the policy that will allow the Jenkins instance to read a secret value from AWS Secrets Manager:

# Policy: Secrets Access
resource "aws_iam_policy" "secrets-access" {
  name   = "secrets-access"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "secretsmanager:GetSecretValue",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}
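If you'd rather not grant read access to every secret in the account, you can scope the Resource to the secret we create in the next step. Terraform doesn't care about file order within a module, so the reference resolves even though the secret lives in secrets.tf. A sketch:

# Optional: scoped to the single secret defined in secrets.tf
resource "aws_iam_policy" "secrets-access" {
  name   = "secrets-access"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "secretsmanager:GetSecretValue",
      "Effect": "Allow",
      "Resource": "${aws_secretsmanager_secret.nodejs-web-app.arn}"
    }
  ]
}
EOF
}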

Alright, at this point we need to tell each IAM role which policies to use. This means modifying the aws_iam_role for both the Web App and Jenkins instances, adding a managed_policy_arns field that lists the correct policies. In other words, we attach the policies to each role.

Web-app

resource "aws_iam_role" "nodejs-web-app" {
name = "nodejs-web-app"

assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})

managed_policy_arns = [aws_iam_policy.ecr-access.arn]
}

Jenkins

resource "aws_iam_role" "jenkins" {
name = "jenkins"

assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})

managed_policy_arns = [ aws_iam_policy.ecr-access.arn,
aws_iam_policy.s3-access.arn,
aws_iam_policy.ec2-access.arn,
aws_iam_policy.secrets-access.arn]

}
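One heads-up: recent versions of the AWS provider deprecate managed_policy_arns in favor of standalone attachment resources. If you're on a newer provider, drop the managed_policy_arns fields and attach the policies like this instead (a sketch for the Jenkins role):

# Attach each policy to the Jenkins role via standalone resources
resource "aws_iam_role_policy_attachment" "jenkins" {
  # Literal keys keep for_each happy even though the ARNs are only known at apply time
  for_each = {
    ecr     = aws_iam_policy.ecr-access.arn
    s3      = aws_iam_policy.s3-access.arn
    ec2     = aws_iam_policy.ec2-access.arn
    secrets = aws_iam_policy.secrets-access.arn
  }

  role       = aws_iam_role.jenkins.name
  policy_arn = each.value
}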

4. AWS Secrets Manager

We're left with creating the Secrets Manager secret. Let's create it.

Create a secrets.tf file under the Terraform root directory and paste the following code:

resource "aws_secretsmanager_secret" "nodejs-web-app" {
name = "nodejs-web-app"
}

resource "aws_secretsmanager_secret_version" "nodejs-web-app" {
secret_id = aws_secretsmanager_secret.nodejs-web-app.id
secret_string = jsonencode(var.secrets)
}
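Once you've run terraform apply later in this part, you can sanity-check the stored value from the terminal (assuming your AWS CLI credentials point at the same account and region):

aws secretsmanager get-secret-value --secret-id nodejs-web-app --query SecretString --output text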

Good. Now we have to create some variables so the Jenkins instance can work correctly; they are used to configure the Jenkins instance when its user data runs:

admin-username → The Jenkins admin username, which will be created and used to log in to Jenkins
admin-password → Password of the Jenkins admin
admin-fullname → Full name of the Jenkins admin
admin-email → Email of the Jenkins admin
remote-repo → Name of the GitHub repository
job-name → The name of the MultiBranch Pipeline we are going to create in Jenkins
secrets → The GitHub SSH keys and the Slack Bearer token, which will be stored as secrets in AWS Secrets Manager.

Paste the following code inside Terraform/variables.tf

variable "aws-access-key" {
type = string
}

variable "aws-secret-key" {
type = string
}

variable "aws-region" {
type = string
}

variable "admin-username" {
type = string
}

variable "admin-password" {
type = string
}

variable "admin-fullname" {
type = string
}

variable "admin-email" {
type = string
}

variable "remote-repo" {
type = string
}

variable "job-name" {
type = string
}

variable "secrets" {
type = map(string)
}

Now we also need to modify jenkins-server/variables.tf to add the required variables. Open it up and add:

variable "ami-id" {
type = string
}

variable "iam-instance-profile" {
default = ""
type = string
}

variable "instance-type" {
type = string
default = "t2.micro"
}

variable "name" {
type = string
}

variable "key-pair" {
type = string
}

variable "network-interface-id" {
type = string
}

variable "device-index" {
type = number
}

variable "repository-url" {
type = string
}





variable "repository-test-url" {
type = string
}

variable "repository-staging-url" {
type = string
}

variable "instance-id" {
type = string
}

variable "public-dns" {
type = string
}

variable "admin-username" {
type = string
}

variable "admin-password" {
type = string
}

variable "admin-email" {
type = string
}

variable "admin-fullname" {
type = string
}

variable "bucket-logs-name" {
type = string
}

variable "bucket-config-name" {
type = string
}

variable "remote-repo" {
type = string
}

variable "job-name" {
type = string
}

variable "job-id" {
type = string
}

Now that we have correctly defined these variables in the Jenkins instance module, we need to assign them in the actual implementation of that module. So, let's open up jenkins.tf and paste the following code:

module "jenkins" {
source = "./jenkins-server"

ami-id = "ami-0742b4e673072066f" # AMI for an Amazon Linux instance for region: us-east-1
iam-instance-profile = aws_iam_instance_profile.jenkins.name
key-pair = aws_key_pair.jenkins-key.key_name
name = "jenkins"
device-index = 0
network-interface-id = aws_network_interface.jenkins.id
repository-url = aws_ecr_repository.nodejs-web-app.repository_url
repository-test-url = aws_ecr_repository.nodejs-web-app-test.repository_url
repository-staging-url = aws_ecr_repository.nodejs-web-app-staging.repository_url
instance-id = module.application-server.instance-id
public-dns = aws_eip.jenkins.public_dns
admin-username = var.admin-username
admin-password = var.admin-password
admin-fullname = var.admin-fullname
admin-email = var.admin-email
bucket-logs-name = aws_s3_bucket.nodejs-web-app-logs.id
bucket-config-name = aws_s3_bucket.jenkins-config.id
remote-repo = var.remote-repo
job-name = var.job-name
job-id = random_id.job-id.id
}
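Note that the hard-coded AMI ID above only exists in us-east-1. If you're deploying to another region, a data source can look up the latest Amazon Linux 2 AMI instead (a sketch; you'd then pass ami-id = data.aws_ami.amazon-linux.id to the module):

# Look up the latest Amazon Linux 2 AMI in the current region
data "aws_ami" "amazon-linux" {
  most_recent = true
  owners      = ["amazon"]

  # Official Amazon Linux 2 HVM images
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}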

Perfect. At this point we also need to change the repository-url in application.tf. Open it and set:

repository-url = aws_ecr_repository.nodejs-web-app.repository_url

Now create a random.tf file under the Terraform root directory. It generates a random ID for job-id, which is referenced inside jenkins.tf (terraform init will download the hashicorp/random provider automatically):

resource "random_id" "job-id" {
byte_length = 16
}

5. GitHub SSH Keys and .tfvars

Let’s create the GitHub SSH keys. Hop over to the terminal and after making sure that you are in the nodejs-web-app/Terraform folder, type:

ssh-keygen -t rsa -b 4096 -f github

This produces two files: github (the private key) and github.pub (the public key). Now update the terraform.tfvars file to assign the variables we defined:

admin-username = "rapidcode"
admin-password = "rapidcode"
admin-fullname = "rapidcode"
admin-email = "tech@rapidcode.co.in"
remote-repo = "git@bitbucket.org:<your_bitbucket_user>/simple-web-app>"
job-name = "CI-CD Pipeline"
secrets = {
public = "<github public key>"
private = "<github private key>"
slackToken = "<slack token>"
}

Above, we have to paste the public and private SSH keys that we just created for the GitHub repository.

As far as the private attribute is concerned, we need to be careful with the newline characters in the GitHub key. If you have Python installed, here's what I suggest:

Enter the Python REPL, type three double quotes ("""), paste the contents of the GitHub private key, and close with three more double quotes ("""). When you press Enter, Python will output the string with explicit \n newline characters.

Copy this private key from the Python output and paste it into the private attribute of secrets. Finally, take the Slack Bearer token and the GitHub public key and paste them into the file above.

NOTE: you might need to escape a `\` at the end of your public key. So if you have something like svil\jdoe@202020 at the end of the public key, you need to escape the backslash: svil\\jdoe@202020
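Alternatively, you can sidestep the newline headache entirely by letting Terraform read the key files with file(). A sketch, replacing the secret version in secrets.tf (slack-token is a hypothetical extra variable, and the key files would have to stay on disk, so you'd skip deleting them below):

resource "aws_secretsmanager_secret_version" "nodejs-web-app" {
  secret_id = aws_secretsmanager_secret.nodejs-web-app.id

  # file() preserves the keys' newlines, so no manual escaping is needed
  secret_string = jsonencode({
    public     = file("${path.module}/github.pub")
    private    = file("${path.module}/github")
    slackToken = var.slack-token # hypothetical variable holding the Slack Bearer token
  })
}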

Now, delete the GitHub key files.

We can now apply these changes to our infrastructure and see whether everything works fine. Let's first run terraform init, since we have changed some modules:
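terraform init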

And then apply the changes:
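terraform apply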

Then head over to the AWS Management Console and check that the resources were created: the S3 buckets, the ECR repositories, and the Secrets Manager secret.

All good? Cool!

Now is a good time to add, commit, and push our code to GitHub.

Awesome! Part four is a wrap, and we’ve finished setting up our infrastructure. In the next part, we’re diving into Jenkins. Specifically, we’ll configure the user data, which kicks in when the instance fires up for the first time. This ensures we’ve got all the tools ready to roll for Jenkins.

Catch you in the next part!

Cheers! 🚀👋
