Automating with infrastructure as code: Deploying a Jenkins server with Terraform

Aalok Trivedi
18 min read · Apr 2, 2023

Intro

Hey, remember when we had to manually create a VPC, subnets, route tables, route table associations, security groups, and EC2 instances, one by one?… Then we had to manually delete them all, one by one?…

…Yeah, we don't have to do that anymore.

Why Infrastructure as Code & Terraform?

While there are many Infrastructure as Code (IaC) tools out there, such as AWS’s CloudFormation and Azure’s ARM templates, Terraform stands out because it is cloud-agnostic, so we can configure deployments to any cloud platform.

IaC allows us to remove manual steps and prevent human error while providing capabilities to scale quickly and collaborate.

Scenario

Your company, Brainiac, wants to start migrating its application to AWS and use Terraform to maintain and automate the infrastructure. They want to start using Jenkins as their CI/CD tool, so they need to launch a Jenkins server to begin the process.

What we’ll be building

  1. A VPC, two public and two private subnets, an internet gateway, and a route table.
  2. An EC2 instance that will run Jenkins.
  3. An S3 bucket for Jenkins to store artifacts.

Disclaimer: Although we’re starting up a Jenkins server, this is not a ‘how-to’ for Jenkins. It is simply a way to use Terraform to deploy resources, one of which is an EC2 instance with Jenkins installed.

Prerequisites

  1. An AWS account with IAM user access.
  2. Foundational knowledge of Linux systems and commands.
  3. Foundational knowledge of AWS networks (VPCs, subnets, security groups), EC2 instances, IAM, and S3.
  4. AWS CLI installed and configured.
  5. Access to a command line tool.
  6. An IDE, such as VS Code or Cloud9.

GitHub repo

Here is a link to my GitHub repo for this project.

Let’s get started!

Install Terraform

If you haven't already done so, install Terraform however you see fit for your OS/environment. You can find detailed instructions in the Terraform documentation.

File setup

I will be using VS Code as my IDE. If you want to make your life easier, install the official HashiCorp Terraform extension. This will help with autocompletion and syntax highlighting.

In our working directory, let’s create three files, named providers.tf, main.tf, and variables.tf. Make sure each file has the ‘.tf’ extension, so the system knows these are Terraform files.

Step 1: Establish our providers and credentials

Alright, we’re ready to start terraforming! For Terraform to access the correct resources and data, we need to establish a provider — in our case, AWS.

Providers are a logical abstraction of an upstream API. They are essentially a method that allows Terraform to understand API interactions and expose resources. We can see a full list of official providers in the Terraform Registry.

If we go to the AWS page in the registry and click ‘Use Provider’, it will give us the configuration code to define it as a provider.

We’ll paste this code into the providers.tf file and define the region where we want to deploy our resources (I will be using “us-east-1”).

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.60.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

Using variables

One of the biggest benefits of Terraform is its flexibility and modularity. It’s best practice to use variables whenever possible and prevent hardcoding to maximize the reusability of the configurations.

We’ll store all of our variables in the variables.tf file we created earlier. To create a variable, we need a variable block.

variable "var_name" {
  type        = string
  description = "description that describes the variable (optional)"
  default     = "var_value"
}

Let’s create one for the region for our AWS provider.

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

Now, let’s call that variable back in our provider (in the providers.tf file).

provider "aws" {
  region = var.aws_region
}

Terraform automatically knows to look for the var in the variables.tf file. We’ll go back and forth between the variables.tf and the main.tf files as we continue with the tasks.

Formatting tip: You can run terraform fmt in the terminal, and it will automatically format your files so the keys and values are nicely aligned. Make sure you’re in the working directory. If you’re using the Terraform VS Code extension, it should auto-format whenever you save.

Export AWS credentials

To access AWS resources, we need to export our credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) as environment variables. In the terminal, export your keys.

# Linux/macOS/Bash
export AWS_ACCESS_KEY_ID="<YOUR ACCESS KEY>"
export AWS_SECRET_ACCESS_KEY="<YOUR SECRET KEY>"

# Windows PowerShell
$Env:AWS_ACCESS_KEY_ID="<YOUR ACCESS KEY>"
$Env:AWS_SECRET_ACCESS_KEY="<YOUR SECRET KEY>"

# Windows Command Prompt
setx AWS_ACCESS_KEY_ID <YOUR ACCESS KEY>
setx AWS_SECRET_ACCESS_KEY <YOUR SECRET KEY>

Initialize Terraform

We’re ready to init Terraform. The init is a core command that prepares the working directory for Terraform and initializes the providers. Each time a provider is added/edited or the main working directory is changed, we need to re-initialize Terraform.

In the terminal, under the working directory, run the init command.

terraform init

Perfect! We’re ready to build our infrastructure!

Let’s also apply our Terraform code. The apply command is another core command; it deploys any resources we’ve defined. We don’t have any resources yet, but we’ll run it anyway for good practice. Run the command and enter yes to confirm.

terraform apply

Store state and credentials in Terraform Cloud (optional)

Once we apply our code, notice Terraform creates a terraform.tfstate file. This very important file keeps track of the state of each ‘apply’. Every time we run the apply command, the state is updated, and all of the environment information (resources, IDs, IP addresses, credentials, etc.) is stored. You can see how this could become a security risk.

We’re building this locally, so it’s not a major issue for now, but if we’re using GitHub or a CI/CD pipeline, we don’t want this file out in the open, so make sure it’s included in the .gitignore file (GitHub has a Terraform template available).
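A minimal .gitignore for a Terraform project, with patterns along the lines of GitHub’s Terraform template, might look like this:

```gitignore
# Local .terraform directories (provider binaries, modules)
.terraform/

# State files, which may contain sensitive data
*.tfstate
*.tfstate.*

# Crash logs
crash.log

# Variable files that may contain secrets
*.tfvars
```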

Another way to keep this information safe is to use a remote backend, such as Terraform Cloud. Terraform Cloud controls the state remotely and can even store credentials, such as our access keys.

This step isn’t necessary to complete the rest of the tasks, but it’s highly recommended (and free!). For a full walkthrough, follow the tutorial on their official site.

Step 2: Create the network infrastructure

We’re ready to create our network infrastructure that will house our Jenkins server.

Our network will consist of:

  1. A VPC.
  2. Two (2) public and two (2) private subnets.
  3. An internet gateway.
  4. A public route table.

Availability zones

First, in the main.tf file, we need to retrieve the availability zones in our region (us-east-1) using a data block. We’ll use this data when we create the subnets.

# Retrieve the list of AZs in the current AWS region
data "aws_availability_zones" "available" {}
data "aws_region" "current" {}

VPC

First, let’s declare some variables we’ll use for the VPC (variables.tf).

# naming vars
#---------------------------------------
variable "app_name" {
  type        = string
  description = "app name prefix for naming"
  default     = "brainiac"
}

# environment tag (referenced by the tags on every resource below)
variable "environment" {
  type        = string
  description = "deployment environment"
  default     = "dev"
}

# vpc vars
#----------------------------------------
variable "vpc_cidr" {
  type        = string
  description = "VPC cidr block"
  default     = "10.0.0.0/16"
}

variable "enable_dns_hostnames" {
  type        = bool
  description = "enable dns hostnames"
  default     = true
}

# common cidrs
#----------------------------------------
variable "all_traffic" {
  type        = string
  description = "all traffic"
  default     = "0.0.0.0/0"
}

The first variable stores the application name “brainiac,” so we can use it to consistently and dynamically name our resources.

The second stores the CIDR block we’ll use for the VPC. We can also store common CIDRs, so we don’t have to type them out repeatedly.

Now, in the main.tf file (all of our resources will be in this file), we’ll create a resource block for the VPC and assign the variables to their appropriate keys.

# Define the VPC
#----------------------------------------------------
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = var.enable_dns_hostnames

  tags = {
    Name        = "${var.app_name}_vpc"
    Environment = var.environment
    Terraform   = "true"
  }
}

Let’s break this down a bit.

resource "aws_vpc" "vpc":
We are creating a new VPC resource from the AWS provider → "aws_vpc". Then we give it a name → "vpc". This can be anything, but keep it logical. It’s important to know that this ‘name’ is only for the Terraform object, NOT the actual VPC resource name.

Inside the resource block, we define the cidr_block and assign it the variable we created.

Under tags, we’ll give it a Name using our app_name variable (this will ultimately give the VPC the name of brainiac_vpc) and some other tags. Whatever is undefined will end up using the default settings.

Subnets

We’re creating two public and two private subnets, so let’s declare some more variables.

# private subnet vars
#----------------------------------------
variable "private_subnets" {
  type = map(number)
  default = {
    "private_subnet_1" = 0
    "private_subnet_2" = 1
  }
}

# public subnet vars
#----------------------------------------
variable "public_subnets" {
  type = map(number)
  default = {
    "public_subnet_1" = 0
    "public_subnet_2" = 1
  }
}

variable "auto_ipv4" {
  type        = bool
  description = "enable auto-assign ipv4"
  default     = true
}

For both subnet variables, we’re creating a map of key/value pairs: each subnet name paired with an integer index. It seems strange, but it will make sense in the next step.

Now to create the subnet resources (do you see a pattern?).

# deploy the public subnets
resource "aws_subnet" "public_subnets" {
  vpc_id            = aws_vpc.vpc.id
  for_each          = var.public_subnets
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, each.value + 100)
  availability_zone = tolist(data.aws_availability_zones.available.names)[each.value]

  map_public_ip_on_launch = var.auto_ipv4

  tags = {
    Name        = "${var.app_name}_${each.key}"
    Environment = var.environment
    Terraform   = "true"
  }
}

vpc_id = aws_vpc.vpc.id:
We need to attach the subnets to the VPC, but we don’t have the VPC ID…
No worries! We can still reference other resources and get the ‘planned’ VPC ID from the resource we created. When Terraform deploys the VPC resource, it will get the ID and use it for the subnet resources.

for_each = var.public_subnets:
If you know any programming language, you’re probably familiar with iterators or for-loops, which let us step through a collection of values programmatically. We can do the same in Terraform using the for_each key, assigning it our public_subnets map to establish what we’re going to iterate over.

cidr_block = cidrsubnet(var.vpc_cidr, 8, each.value + 100):
Terraform has helper functions to make our lives easier. One of these is the cidrsubnet() function, which calculates and generates a subnet address within a given IP network prefix.

  • We give it the initial CIDR block for our VPC: 10.0.0.0/16.
  • Then the number of ‘newbits’: 8, which extends the /16 prefix to /24 (10.0.0.0/16 → 10.0.x.0/24).
  • Finally, the number to add to the network portion of the address. Here, we iterate over var.public_subnets to get each value (0 and 1) and add 100.
  • "public_subnet_1" = 0 → 0 + 100 → 10.0.100.0/24
  • "public_subnet_2" = 1 → 1 + 100 → 10.0.101.0/24

For more details on the cidrsubnet() function, visit the official documentation.
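If the arithmetic still feels abstract, here’s a rough re-creation of cidrsubnet() using Python’s standard ipaddress module (the function name and signature simply mirror Terraform’s; this is a sketch, not Terraform’s actual implementation):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Mimic Terraform's cidrsubnet(prefix, newbits, netnum)."""
    network = ipaddress.ip_network(prefix)
    # Adding 8 'newbits' to a /16 yields /24 subnets; netnum picks which one.
    subnets = list(network.subnets(prefixlen_diff=newbits))
    return str(subnets[netnum])

print(cidrsubnet("10.0.0.0/16", 8, 0 + 100))  # public_subnet_1  -> 10.0.100.0/24
print(cidrsubnet("10.0.0.0/16", 8, 1 + 100))  # public_subnet_2  -> 10.0.101.0/24
print(cidrsubnet("10.0.0.0/16", 8, 0 + 1))    # private_subnet_1 -> 10.0.1.0/24
```

Running it prints the same /24 blocks Terraform will assign to our subnets.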

availability_zone = tolist(data.aws_availability_zones.available.names)[each.value]:
We can map each public subnet to its own AZ from the data block we created earlier:

  • The tolist() helper function can be used to convert the AZ data we retrieved into a list. We can then iterate over each of the values in var.public_subnets (0 and 1) to map each AZ in the list to a subnet.
  • tolist(data.aws_availability_zones.available.names)[0] = us-east-1a
  • tolist(data.aws_availability_zones.available.names)[1] = us-east-1b

We’ll do the same for the private subnets.

# deploy the private subnets
#----------------------------------------------------
resource "aws_subnet" "private_subnets" {
  for_each          = var.private_subnets
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, each.value + 1)
  availability_zone = tolist(data.aws_availability_zones.available.names)[each.value]

  tags = {
    Name        = "${var.app_name}_${each.key}"
    Environment = var.environment
    Terraform   = "true"
  }
}

Plan & apply

Phew! That was a lot to take in, so let’s take a break from resource creation and deploy what we have so far.

If we run terraform plan, Terraform will essentially create a ‘dry-run’ of our deployment and present the resources about to be created (without actually creating them). This is a great way to double-check our work, check for errors, and validate our deployment.

[terraform plan output: the VPC and public subnets to be created]

Everything looks great! Let’s deploy our VPC and subnets with the terraform apply command.

Starting to see the power of Terraform? Alright, let’s plow through and create our internet gateway and public route table.

Internet gateway

We’ll need an internet gateway for our public subnets, so let’s create a new resource.

# Create Internet Gateway
#----------------------------------------------------
resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name        = "${var.app_name}_igw"
    Environment = var.environment
    Terraform   = "true"
  }
}

Again, we’ll reference the VPC resource to get the ID and use the var.app_name variable.

Route table

Instead of creating a new route table, we can use the table automatically created by our VPC — known as the aws_default_route_table — and add a route to direct all traffic from the public subnets to the internet gateway.

# Edit default route table for public subnets
#----------------------------------------------------
resource "aws_default_route_table" "public_route_table" {
  default_route_table_id = aws_vpc.vpc.default_route_table_id

  route {
    cidr_block = var.all_traffic
    gateway_id = aws_internet_gateway.internet_gateway.id
  }

  tags = {
    Name        = "${var.app_name}_public_rt"
    Environment = var.environment
    Terraform   = "true"
  }
}

Route table associations

Lastly, we need to associate the public subnets with the public route table.

# Create route table associations
resource "aws_route_table_association" "public" {
  depends_on     = [aws_subnet.public_subnets]
  for_each       = aws_subnet.public_subnets
  route_table_id = aws_default_route_table.public_route_table.id
  subnet_id      = each.value.id
}

Here, we can use the depends_on key so Terraform knows to create the subnets before it creates the association.

Again, we can iterate over all of the subnets, so we don’t have to individually create association resources for each subnet.

Plan & apply

Let’s apply these resources by running terraform apply again. Notice how Terraform will detect changes/additions and only apply those changes, rather than redeploying every resource.

Fantastic! We’ve deployed the networking base for our Jenkins infrastructure.

Step 4: Create the Jenkins server

Now that our networking base has been deployed, let’s create the EC2 instance that will run as the Jenkins server.

Our Jenkins server will need:

  1. A security group.
  2. An Amazon EC2 instance.
  3. A script to install Jenkins.

Security group

Before we create our instance, let’s set up the security group. For inbound rules (ingress), we’ll need SSH access from “our IP address” and “all traffic” on port 8080, which is the default port used by Jenkins. We’ll allow all outbound (egress) traffic.

Let’s create a variable for our IP address.

# security group vars
#----------------------------------------
variable "ssh_location" {
type = string
description = "My IP address"
default = "YOUR_PUBLIC_IPV4_ADDRESS/32"
}

Disclaimer: This is for demo purposes only. In an actual production environment, we should not hardcode our local IP address like this; it is not secure, and home IP addresses change frequently. Normally, there would be a CIDR range of approved addresses to SSH from. Make sure you remove your IP address before uploading your files to an open repo like GitHub.

And now for the security group resource.

# deploy security groups
#----------------------------------------------------
resource "aws_security_group" "jenkins_sg" {
  name        = "${var.jenkins_server_name}_sg"
  description = "Allow ssh and traffic on port 8080"
  vpc_id      = aws_vpc.vpc.id

  # ssh
  ingress {
    description = "ssh from IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.ssh_location]
  }

  # jenkins port 8080
  ingress {
    description = "jenkins default port"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = [var.all_traffic]
  }

  # allow all outbound traffic
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = [var.all_traffic]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name        = "${var.app_name}_${var.jenkins_server_name}_sg"
    Environment = var.environment
    Terraform   = "true"
  }
}

EC2 instance

Let’s declare some variables for the instance that will store the name, AMI, type, key pair, and user data.

# ec2 vars
#----------------------------------------
variable "jenkins_server_name" {
  type    = string
  default = "jenkins_server"
}

variable "jenkins_server_ami" {
  type        = string
  description = "Instance AMI: Amazon Linux 2"
  default     = "ami-04581fbf744a7d11f"
}

variable "jenkins_server_type" {
  type    = string
  default = "t2.micro"
}

variable "key_pair" {
  type        = string
  description = "ec2 key pair"
  default     = "YOUR_KEY_PAIR"
}

variable "user_data_file" {
  type        = string
  description = "user data file name"
  default     = "install_jenkins.sh"
}

Before we create the EC2 resource, we need to bootstrap the instance to install and launch Jenkins. We’ll do this by creating a shell script and inserting it as the instance’s user_data.

Create a new file named install_jenkins.sh and insert this script:

#!/bin/bash
# Update all packages
sudo yum update -y

# Add the latest stable Jenkins repo and import the signing key
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

# Install Java first, then Jenkins
sudo amazon-linux-extras install java-openjdk11 -y && sudo yum install jenkins -y
sudo systemctl daemon-reload

# Enable and start Jenkins
sudo systemctl enable jenkins
sudo systemctl start jenkins

NOTE: I’ve had some trouble getting the Jenkins key to import properly, so if Jenkins doesn’t install properly, try this URL instead: https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key

Now, let’s create the EC2 resource "aws_instance” and name it "jenkins_server". We’ll place the instance in public_subnet_1 and point to our script file for the user_data.

# deploy ec2 instance
#----------------------------------------------------
resource "aws_instance" "jenkins_server" {
  ami                    = var.jenkins_server_ami
  instance_type          = var.jenkins_server_type
  subnet_id              = aws_subnet.public_subnets["public_subnet_1"].id
  vpc_security_group_ids = [aws_security_group.jenkins_sg.id]
  key_name               = var.key_pair
  user_data              = file("${path.module}/${var.user_data_file}")

  tags = {
    Name        = "${var.app_name}_${var.jenkins_server_name}"
    Environment = var.environment
    Terraform   = "true"
  }
}

Plan & deploy

Let's deploy these two resources using terraform apply and make sure we can reach Jenkins.

In a web browser, head to the instance's public IP address on port 8080: INSTANCE_PUBLIC_IP:8080.

Success! If you can’t reach Jenkins, you might need to give it a minute or two to fully initialize.

From here, we can SSH into the server, get the admin password, and continue with the setup process.

Step 5: Create the S3 bucket

Our Jenkins server is set up, so let’s create an S3 bucket so Jenkins can store artifacts.

Let’s declare some variables for the bucket name and settings to make our bucket and objects private.

# S3 vars
#----------------------------------------
variable "s3_name" {
  type    = string
  default = "jenkins-artifacts"
}

# S3 private
variable "block_public_acls" {
  type    = bool
  default = true
}

variable "block_public_policy" {
  type    = bool
  default = true
}

variable "ignore_public_acls" {
  type    = bool
  default = true
}

variable "restrict_public_buckets" {
  type    = bool
  default = true
}

Tip: Save yourself 30 minutes of head-smashing and review the S3 bucket name rules so you don’t go insane from errors due to AWS’s inconsistent naming conventions…

Randomize bucket name

Since S3 buckets require a unique global namespace, we can either hope and pray the name we give is unique… OR we can make use of Terraform’s random provider and use the random_id resource to create a randomized alphanumeric suffix.

In the main.tf file, create a new resource.

# deploy S3 bucket
#----------------------------------------------------
# random alphanumeric suffix
resource "random_id" "randomize" {
  byte_length = 8
}

Although we’re not explicitly declaring the new random provider, we still need to re-initialize Terraform. Rerun terraform init so Terraform knows to grab the provider.

Now, we can create a new S3 bucket resource and give it a unique name.

resource "aws_s3_bucket" "jenkins_artifacts_s3" {
  bucket = "${var.app_name}-${var.s3_name}-${random_id.randomize.hex}"

  tags = {
    Name        = "${var.app_name}_${var.s3_name}_s3"
    Environment = var.environment
    Terraform   = "true"
  }
}
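To see how the name composes, here’s a quick Python sketch of the pattern "${var.app_name}-${var.s3_name}-${random_id.randomize.hex}". A random_id with byte_length = 8 exposes .hex as a 16-character string; secrets.token_hex(8) stands in for it here, so the suffix below is illustrative, not the one Terraform will generate:

```python
import secrets

app_name = "brainiac"
s3_name = "jenkins-artifacts"
suffix = secrets.token_hex(8)  # 8 random bytes -> 16 hex characters

bucket_name = f"{app_name}-{s3_name}-{suffix}"
print(bucket_name)  # e.g. brainiac-jenkins-artifacts-<16 hex chars>

# The result satisfies the S3 naming rules: 3-63 characters,
# lowercase letters, digits, and hyphens only.
assert 3 <= len(bucket_name) <= 63
assert bucket_name == bucket_name.lower()
```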

We also need to make sure we set the bucket and objects to “private.”

# set s3 to private
resource "aws_s3_bucket_public_access_block" "s3_private_access" {
  bucket                  = aws_s3_bucket.jenkins_artifacts_s3.id
  block_public_acls       = var.block_public_acls
  block_public_policy     = var.block_public_policy
  ignore_public_acls      = var.ignore_public_acls
  restrict_public_buckets = var.restrict_public_buckets
}

Plan & apply

You know the deal… terraform apply.

Step 6: Create permissions

Home stretch! The last step is to give our Jenkins server permissions to access the S3 bucket.

What we’ll need

  1. An IAM role for the Jenkins server.
  2. An IAM policy that allows the Jenkins server to PutObject, GetObject, and ListBucket in our S3 bucket.
  3. An IAM policy attachment to attach the policy to the role.
  4. An IAM instance profile to attach the role to the Jenkins server.

IAM role

We need to create a role for the EC2 instance to assume, so let’s create an aws_iam_role resource.

First, a variable for the role name.

# IAM vars
#----------------------------------------
# role name
variable "iam_role_name" {
  type        = string
  description = "IAM role name"
  default     = "jenkins_s3_role"
}

And then the resource. We can use the jsonencode() helper function to format the output into JSON.

# create IAM role
#----------------------------------------------------
resource "aws_iam_role" "jenkins_s3_role" {
  name = var.iam_role_name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { Service = "ec2.amazonaws.com" }
        Action    = "sts:AssumeRole"
      }
    ]
  })
}
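The HCL object passed to jsonencode() serializes into the JSON trust policy that IAM actually receives. For reference, a Python sketch of the equivalent document:

```python
import json

# The EC2 service is allowed to assume this role via STS.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```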

IAM policy

The IAM policy should allow the Jenkins server to perform PutObject, GetObject, and ListBucket actions on our S3 bucket.

Let’s create variables that will store the policy name, actions, and resource type.

# policy name
variable "iam_policy_name" {
  type        = string
  description = "IAM policy name"
  default     = "jenkins_s3_policy"
}

# policy resource actions
variable "iam_actions" {
  type        = list(string)
  description = "actions allowed by Jenkins server"
  default = [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListBucket"
  ]
}

# resource type/prefix
variable "iam_resource_type" {
  type        = string
  description = "IAM policy resource type"
  default     = "arn:aws:s3:::"
}

And now, the resource. Notice how we can combine the var.iam_resource_type variable with the output of the S3 bucket name.

# create IAM policy
#----------------------------------------------------
resource "aws_iam_policy" "jenkins_s3_policy" {
  name = var.iam_policy_name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = var.iam_actions
        Resource = [
          "${var.iam_resource_type}${aws_s3_bucket.jenkins_artifacts_s3.bucket}",
          "${var.iam_resource_type}${aws_s3_bucket.jenkins_artifacts_s3.bucket}/*"
        ]
      }
    ]
  })
}

IAM policy attachment

Next, we need to attach the policy to the role.

# create IAM policy attachment
#----------------------------------------------------
resource "aws_iam_role_policy_attachment" "jenkins_s3_policy_attachment" {
  role       = aws_iam_role.jenkins_s3_role.name
  policy_arn = aws_iam_policy.jenkins_s3_policy.arn
}

IAM instance profile

Finally, we need to create an instance profile so we can attach the role to the Jenkins server.

Variable:

# instance profile name
variable "iam_instance_profile_name" {
  type        = string
  description = "instance profile name"
  default     = "jenkins_s3_instance_profile"
}

Resource:

# create IAM instance profile
#----------------------------------------------------
resource "aws_iam_instance_profile" "jenkins_s3_instance_profile" {
  name = var.iam_instance_profile_name
  role = aws_iam_role.jenkins_s3_role.name
}

Remember to go back and add the iam_instance_profile to the Jenkins EC2 instance resource.

# deploy ec2 instance
#----------------------------------------------------
resource "aws_instance" "jenkins_server" {
  ami                    = var.jenkins_server_ami
  instance_type          = var.jenkins_server_type
  subnet_id              = aws_subnet.public_subnets["public_subnet_1"].id
  vpc_security_group_ids = [aws_security_group.jenkins_sg.id]
  key_name               = var.key_pair
  user_data              = file("${path.module}/${var.user_data_file}")
  iam_instance_profile   = aws_iam_instance_profile.jenkins_s3_instance_profile.name

  tags = {
    Name        = "${var.app_name}_${var.jenkins_server_name}"
    Environment = var.environment
    Terraform   = "true"
  }
}

Step 7: Output resource information

Last step! Once Terraform applies our resources, we can tell it to output helpful resource information, such as IDs, resource ARNs, public IPs, etc.

Let’s output some info so we can verify the Jenkins server has proper access to the S3 bucket. In a new file, named outputs.tf, we’ll create output blocks for the Jenkins server public IPv4 address, S3 bucket name, and S3 bucket ARN.

output "jenkins_server_public_ip" {
  description = "public IP address for the Jenkins server"
  value       = aws_instance.jenkins_server.public_ip
}

output "s3_bucket_name" {
  description = "Jenkins S3 bucket name"
  value       = aws_s3_bucket.jenkins_artifacts_s3.bucket
}

output "s3_bucket_arn" {
  description = "Jenkins S3 bucket arn"
  value       = aws_s3_bucket.jenkins_artifacts_s3.arn
}

When the deployment is applied, Terraform will output the information for us to use. Run terraform apply again.

Step 8: Verify Jenkins server → S3 access

Now that we have the correct IAM role and outputs, we need to verify that the Jenkins server can upload to the S3 bucket. We’ll test this through the AWS CLI.

First, let’s SSH into the Jenkins server and create a test file with touch file.txt, and then use the AWS CLI to upload the file.

aws s3api put-object --bucket YOUR-BUCKET-NAME --key file.txt --body file.txt

The --key refers to the key name we’ll give the object. The --body refers to the local path of the file to upload.

Success!

Congrats! We’ve successfully deployed an infrastructure for a Jenkins server and connected it to an S3 bucket using Terraform! It may have seemed like a lot of work upfront, but now, we can reuse this as a template for future deployments.

Destroy resources

Just as quickly as we created the resources, we can tear everything down. Run terraform destroy and confirm the resources to be destroyed.

You may have to manually delete the S3 bucket because it has an object. Alternatively, you can set force_destroy = true on the S3 bucket resource.

Here is the GitHub repo again.

Thank you

Thank you for following me on my cloud engineering journey. I hope this article was helpful and informative. Please give me a like & follow as I continue my journey, and I will share more articles like this!
