How to Deploy Cloud-Agnostic Jenkins CI/CD Pipelines using Terraform

Tait Hoglund
Published in Nerd For Tech · Mar 30, 2023

Use-Case

Your team would like to start using Jenkins as their CI/CD tool to create pipelines for DevOps projects. They need you to create the Jenkins server using Terraform so that it can be used in other environments and so that changes to the environment are better tracked.

In this tutorial, I am going to walk you through how you can leverage a cloud-agnostic tool like Terraform to automate the provisioning and deployment of a Jenkins CI/CD server.

Note: While I will be using AWS’ Cloud9 IDE and deploying to AWS for this tutorial, you can follow along in your own IDE, and with a few simple tweaks to your main.tf file you can deploy to dozens of other cloud providers.

Let’s begin:

Pre-Requisites

  • Basic knowledge of Terraform commands
  • Comfort in the CLI
  • Terraform installed within your CLI
  • General knowledge of cloud computing services including virtual machines, images, security groups, & virtual networks
  • AWS CLI configured in your IDE to allow Terraform access to resources via your account (a quick check for these last two items is shown below)
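
To sanity-check the last two prerequisites, run these two commands; the first prints your Terraform version and the second prints the AWS identity that Terraform will use:

terraform -version
aws sts get-caller-identity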

Step 1 — Initializing Terraform

Log in to your CLI and create a new directory to hold your project’s Terraform configuration (we will initialize it in Step 3):

mkdir jenkins-terraform
cd jenkins-terraform

Step 2 — Creating our Terraform files

I am going to start by adding a variables.tf file to define the variables that I plan to use in my configuration. It’s perfectly normal to come back and edit this file after you have written your main.tf, once you know the full set of variables you need.

variable "aws_region" {
type = string
default = "us-east-2"
}

variable "instance_type" {
type = string
default = "t2.micro"
}
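
If you later want different values without editing variables.tf, Terraform automatically loads a terraform.tfvars file from the working directory. A minimal sketch (the values here are just illustrations):

aws_region    = "us-west-2"
instance_type = "t3.micro"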

Next, I will create a main.tf file and define the resources I will use. Note that the variables are declared only in variables.tf; declaring instance_type again in main.tf would cause a duplicate-declaration error, so main.tf contains resources only. Each section includes comments that describe what the HCL is doing:

# S3 bucket for Jenkins artifacts
resource "random_id" "random_suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "jenkins-artifacts" {
  bucket = "jenkins-artifacts-${random_id.random_suffix.hex}"
}

# IAM role for EC2 instance
resource "aws_iam_role" "jenkins-s3-role" {
  name = "jenkins-s3-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

# IAM instance profile for EC2 instance
resource "aws_iam_instance_profile" "jenkins-s3-instance-profile" {
  name = "jenkins-s3-instance-profile"
  role = aws_iam_role.jenkins-s3-role.name
}

# IAM policy for S3 access
resource "aws_iam_policy" "jenkins-s3-policy" {
  name = "jenkins-s3-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Object-level read/write applies to the objects inside the bucket
        Sid    = "S3ObjectAccess"
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject"
        ]
        Resource = "${aws_s3_bucket.jenkins-artifacts.arn}/*"
      },
      {
        # ListBucket applies to the bucket itself; needed for 'aws s3 ls' in Step 4
        Sid      = "S3BucketAccess"
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = aws_s3_bucket.jenkins-artifacts.arn
      }
    ]
  })
}

# Attaches IAM policy to IAM role
resource "aws_iam_role_policy_attachment" "jenkins-s3-policy-attachment" {
  policy_arn = aws_iam_policy.jenkins-s3-policy.arn
  role       = aws_iam_role.jenkins-s3-role.name
}

# Defines ACL for S3 bucket
resource "aws_s3_bucket_acl" "jenkins-artifacts-acl" {
  bucket = aws_s3_bucket.jenkins-artifacts.id

  # Sets bucket ACL to private
  acl = "private"
}
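
One caveat: since April 2023, new S3 buckets have ACLs disabled by default, so on current AWS provider versions the aws_s3_bucket_acl resource above may fail with an AccessControlListNotSupported error. A hedged fix, if you hit that, is to enable bucket ownership controls first (the resource name below is arbitrary):

# Allows ACLs on the bucket by setting object ownership
resource "aws_s3_bucket_ownership_controls" "jenkins-artifacts-ownership" {
  bucket = aws_s3_bucket.jenkins-artifacts.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

Then add depends_on = [aws_s3_bucket_ownership_controls.jenkins-artifacts-ownership] inside the aws_s3_bucket_acl resource so the ownership controls are created first.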

# Defines EC2 instance running Jenkins
resource "aws_instance" "jenkins" {
  # Amazon Linux 2 AMI; AMI IDs are region-specific, so this one matches us-east-2
  ami           = "ami-0533def491c57d991"
  instance_type = var.instance_type

  # Installs Java 11 and Jenkins on first boot
  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
    sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
    sudo yum upgrade -y
    sudo amazon-linux-extras install java-openjdk11 -y
    sudo yum install jenkins -y
    sudo systemctl enable jenkins
    sudo systemctl start jenkins
  EOF

  # Assigns IAM instance profile (and thus the S3 role) to the EC2 instance
  iam_instance_profile = aws_iam_instance_profile.jenkins-s3-instance-profile.id

  # Assigns the security group defined below
  vpc_security_group_ids = [aws_security_group.jenkins-sg.id]

  # Tag EC2
  tags = {
    Name = "jenkins-instance"
  }
}
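
Because the user_data script runs via cloud-init on first boot, Jenkins can take a minute or two to come up after the instance shows as running. If it doesn’t, you can connect to the instance (for example with EC2 Instance Connect from the AWS console, since this configuration doesn’t attach an SSH key pair) and check the service and the boot log at their standard Amazon Linux 2 locations:

sudo systemctl status jenkins
sudo tail -n 50 /var/log/cloud-init-output.log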

# Defines security group allowing inbound traffic on ports 22, 8080, and 443
resource "aws_security_group" "jenkins-sg" {
  name_prefix = "jenkins-sg"

  # SSH
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Jenkins web UI
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTPS
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
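
Opening port 22 to 0.0.0.0/0 is fine for a short-lived tutorial but worth tightening for anything real. One hedged option, using a hypothetical admin_cidr variable you would add to variables.tf, is to restrict the SSH rule to your own IP:

variable "admin_cidr" {
  type    = string
  default = "203.0.113.25/32" # replace with your own public IP
}

Then set cidr_blocks = [var.admin_cidr] in the SSH ingress block above.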

In addition to my main.tf and variables.tf files, I need to create a providers.tf file to define which cloud provider I will be deploying into:

provider "aws" {
region = var.aws_region
}
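
Because this configuration uses both the aws provider and random_id (which comes from the separate hashicorp/random provider), it is good practice to pin both. A minimal sketch you could add to providers.tf, with illustrative version constraints:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}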

Step 3 — Deploying our AWS Resources

Having created my three terraform files, I am ready to move forward with deployment.

Every Terraform deployment starts with the ‘terraform init’ command, which initializes the working directory containing your Terraform configuration files and downloads any necessary provider plugins and modules:
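
terraform init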

Next, I run the ‘terraform plan’ command, which shows what changes will be applied to the infrastructure. This step is important because it lets you review and confirm the changes before applying them:
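
terraform plan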

Next, I run the ‘terraform apply’ command, which deploys our infrastructure. If this plan were modifying an existing deployment, Terraform would automatically destroy, create, or update resources to match the changes made to main.tf since the last deployment:
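
terraform apply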

Depending on the size of your deployment, this could take several minutes. Mine completed in about 40 seconds.

Step 4 — Verifying our Jenkins Deployment and S3 Bucket Permissions

Once your Terraform configuration has been applied and your EC2 instance is running, you can access Jenkins by navigating to the public IP address of your instance followed by “:8080” in your web browser:

http://3.137.217.213:8080
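
Rather than hunting for the public IP in the AWS console, you can have Terraform print it after every apply. A small sketch you could append to main.tf (the output name is arbitrary):

output "jenkins_public_ip" {
  value = aws_instance.jenkins.public_ip
}

On your first visit, Jenkins will ask for the initial admin password, which it writes on the instance to /var/lib/jenkins/secrets/initialAdminPassword.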

I can also verify that my Jenkins instance has S3 read and write permissions by logging in to the instance and running a few test commands with the AWS CLI.

First, configure the AWS CLI on the instance. Because the instance profile already supplies credentials through the IAM role, you can leave the Access Key ID and Secret Access Key prompts blank and set only the default region and output format; if you enter long-lived access keys here, you will be testing those keys rather than the instance role:

aws configure
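
To confirm which credentials the CLI is actually using, run the standard identity check; if the instance role is in effect, the returned ARN will show an assumed-role for jenkins-s3-role:

aws sts get-caller-identity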

Next, I am going to create a sample “Hello World” file on my Jenkins instance and copy it to my S3 bucket to test that I configured the permissions on my EC2 instance correctly (your bucket name will end in a different random suffix):

echo "Hello World" > test.txt
aws s3 cp test.txt s3://jenkins-artifacts-f8ce49d3
The Hello World file is created on the EC2 instance and then copied to the S3 bucket.

Next, I will test the read permissions of my EC2 instance by listing the contents of my S3 bucket:

aws s3 ls s3://jenkins-artifacts-f8ce49d3

Last, I will visit my S3 bucket in the AWS console to verify that the copied file is visible there.

Thanks for following along! Remember to tear down your infrastructure when you are done to avoid incurring unnecessary costs! In Terraform, we do this with the ‘terraform destroy’ command:

terraform destroy

If you found my content helpful, then please follow me so that you don’t miss it when I drop another tutorial. You might also like a previous article I wrote about Deploying a LAMP stack Web App on Google Cloud.

Additionally, I would appreciate it if you would leave a couple of ‘claps’ or a comment below. It helps this type of content reach more DevOps folks.

Until next time, Onward!
