Terraform & Using a Jenkins Bootstrap to Automate a CI/CD Pipeline

Katie Sheridan
7 min read · May 27, 2023


Welcome back to another tech write up for Level Up in Tech! This week I’ve been working on my Terraform skills, and I’m totally in love with this service! It streamlines a lot of the work with creating components into a few easy steps. Let’s get into it.

Scenario: Your team would like to start using Jenkins as their CI/CD tool to create pipelines for DevOps projects. They need you to create the Jenkins server using Terraform so that it can be used in other environments and so that changes to the environment are better tracked.

What is IaC? — Simply put, Infrastructure as Code is a way to deploy services from a single file (or a few files) of code. This streamlines the workload and reduces the human error that comes from re-creating components manually. It can also destroy those components, so it is a cost-saving tool too.

What is Terraform? Terraform is an Infrastructure as Code tool that can provision resources across many different cloud providers. This can even be done all within a single configuration.

How does Terraform work? Terraform follows a workflow of write, plan, and apply to move your code into production. It's great because Terraform only changes what needs to change, so you can edit parts of the configuration without tearing down the services already up and running.

Why Jenkins? Jenkins is a service that integrates really well with CI/CD pipelines. It runs automatically and is user-friendly.

In order to do this, I will utilize the standard workflow of write, plan, apply.

Objectives:

  1. Create a single file, called main.tf, which will accomplish the following:
  • 1 EC2 instance in the default VPC.
  • Bootstrap the EC2 instance with a script that will install and start Jenkins. (Official Jenkins documentation.)
  • Create and assign a security group to the Jenkins server that allows traffic on port 22 from your IP and allows traffic on port 8080.
  • Create an S3 bucket for your Jenkins artifacts that is not open to the public.

2. Create two additional files holding all the variables and settings in this project that are likely to change. (This will allow you to re-use the files multiple times; all you might need to change are the specifications, leaving the main file alone.)

3. Apply the Changes and Deploy

4. Verify functionality

5. (optional) Destroy all the components.

Prerequisites:

  • AWS Free Tier Account- with IAM access
  • Cloud based IDE installed- I will be using AWS Cloud 9. One benefit of using Cloud9 versus VS Code is that Terraform is already installed.
  • Basic Knowledge of Terraform

Let’s Do This!

Step 1: Write the Files

I logged into my Cloud9 environment and created a new directory for this project. Then I used touch to create the 3 files quickly.

In the first file, I created the following code. In the first block, I declare the AWS instance, give it a name, and set up a few variables to be referenced.

Inside the main.tf file, I then created the bootstrap that will access the Jenkins service. I used the Jenkins documentation to help with this step.

I also utilized the Terraform documentation as well.
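The EC2 block with its bootstrap might look roughly like this. The resource and variable names are my own sketch, and the user data script is adapted from the official Jenkins install documentation for Amazon Linux 2; treat this as an outline, not the exact code:

```hcl
# Sketch of the EC2 block in main.tf; names and variables are assumptions.
resource "aws_instance" "jenkins_server" {
  ami                    = var.ami_id        # declared in variables.tf
  instance_type          = var.instance_type # e.g. "t2.micro"
  vpc_security_group_ids = [aws_security_group.jenkins_sg.id]

  # Bootstrap script adapted from the official Jenkins docs for Amazon Linux 2:
  # add the Jenkins yum repo, install Java and Jenkins, then start the service.
  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo wget -O /etc/yum.repos.d/jenkins.repo \
        https://pkg.jenkins.io/redhat-stable/jenkins.repo
    sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
    sudo amazon-linux-extras install java-openjdk11 -y
    sudo yum install jenkins -y
    sudo systemctl enable jenkins
    sudo systemctl start jenkins
  EOF

  tags = {
    Name = "jenkins-server"
  }
}
```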

Moving on, I created a security group, named it, and used the vpc variable as a reference. I will define this variable in the variables.tf file.

I also opened ports 22, 443, and 8080 for communication.
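A security group along those lines could be sketched like this (the group name and the placeholder CIDR are assumptions; substitute your own IP in the SSH rule):

```hcl
# Sketch of the security group; replace the SSH CIDR with your own IP.
resource "aws_security_group" "jenkins_sg" {
  name   = "jenkins-sg"
  vpc_id = var.vpc_id # declared in variables.tf

  ingress {
    description = "SSH from my IP only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"] # hypothetical; use your own IP here
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Jenkins web UI"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```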

Additionally, I created a block which will create an S3 bucket to store the artifacts. After this screenshot, I added an acl = "private" line in order to make the bucket private.
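The bucket block might look like the following. Note that in AWS provider v4 and later, the ACL moves out of the bucket resource into its own aws_s3_bucket_acl resource; in older provider versions it is an inline acl argument on the bucket itself. The bucket and resource names here are assumptions:

```hcl
# Sketch of the artifact bucket; S3 bucket names must be globally unique.
resource "aws_s3_bucket" "jenkins_artifacts" {
  bucket = "my-jenkins-artifacts-bucket" # hypothetical name
}

# AWS provider v4+ style; on v3 you would instead put acl = "private"
# directly inside the aws_s3_bucket block.
resource "aws_s3_bucket_acl" "jenkins_artifacts_acl" {
  bucket = aws_s3_bucket.jenkins_artifacts.id
  acl    = "private"
}
```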

Next, I created the variables.tf file.

This file includes all the interchangeable data required for the EC2 instance setup, or other parameters you might specify. It is best practice to use this file for that purpose so you can easily reuse any code you write and modify it for later projects. This is also a place to keep information you may not necessarily want to hardcode in the main.tf file.

To find which AMI ID to list, check the EC2 console under the AMI Catalog. I later went in and used the Amazon Linux 2 AMI and not the one listed for 2023. This was a first step in the debugging that occurs later on.
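A variables.tf along these lines might look like the following. The variable names and defaults are my assumptions, and the AMI ID is a placeholder to look up in the AMI Catalog:

```hcl
# Sketch of variables.tf; swap the defaults for your own values.
variable "ami_id" {
  description = "AMI for the Jenkins server (Amazon Linux 2)"
  type        = string
  default     = "ami-0123456789abcdef0" # placeholder; find yours in the AMI Catalog
}

variable "instance_type" {
  description = "Instance size for the Jenkins server"
  type        = string
  default     = "t2.micro"
}

variable "vpc_id" {
  description = "ID of the default VPC the instance lives in"
  type        = string
}
```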

In the last part of this step, I created the providers file. In this file you would specify any providers you want to work with. If this were going to be a multi-cloud operation, I would list that information here. I also listed my region in this section as well. If you are in another region, this could be one place you could denote that information.
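For an AWS-only setup, a minimal providers file might look like this (the pinned provider version and the region are assumptions; adjust both for your account):

```hcl
# Sketch of providers.tf for a single-cloud (AWS) deployment.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # change to your region
}
```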

Now that the files are written, that wraps up part one. Now it's time to see if it works!

Step 2: Plan the Execution.

The first part of this step is “turning on Terraform.” This can be done by running terraform init.

Then, I ran terraform validate to check that my code was syntactically valid.

Although the code is valid, I made a few changes later on.

I could have also run terraform fmt to clean up any issues with indentation or spacing.

To see what terraform will do, I ran terraform plan.
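Put together, the sequence of commands for this step, run from the project directory, is roughly:

```
terraform init      # initialize the directory and download the AWS provider
terraform validate  # check the configuration for syntax errors
terraform fmt       # (optional) normalize indentation and spacing
terraform plan      # preview exactly what Terraform will create
```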

Now for the actual execution!

Step 3: Apply the Code!

This, in my opinion, is one of the coolest aspects of Terraform: by running terraform apply, you can see all the components spin up in front of your eyes!

And just like that, bam! A whole web service has been created.

Step 4: Verify the Service.

Over on the EC2 console, you can see the new instance has been created with the name you provided in the code.

Additionally, you can select the security groups tab and see that the group you have created has been made.

I then navigated over to the S3 console and saw that the bucket had been created as well.

But, when I tried to access the website from the browser…

I had forgotten to do a couple steps.

I added a 4th file named output.tf.
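A minimal output.tf for this setup could look like the following (the output name and the referenced resource name are assumptions matching my earlier sketch):

```hcl
# Sketch of output.tf; prints the public IP after apply so you can
# browse straight to Jenkins at http://<ip>:8080.
output "jenkins_public_ip" {
  description = "Public IP of the Jenkins server"
  value       = aws_instance.jenkins_server.public_ip
}
```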

I also had forgotten to set up my Cloud9 IDE with my AWS access keys. This is a critical step; without credentials, Terraform cannot authenticate with AWS, so nothing will deploy.

At this point, I also changed the AMI, as well as pulling my user data block over into the variables file. Because I changed the AMI, Terraform replaced the instance, and it came back up with a new IP.

Run the IP in the browser, making sure to add :8080 to the end.

That’s it! Successfully created!

Step 5: Tear it all down!

One of the greatest features of Terraform, which could also be the most deadly, is terraform destroy. No more do you have to dig through a million console pages and manually delete (and verify deletion) of each component. In one command, everything comes down as easily as it went up. This is a HUGE benefit if you’re trying to take down services and avoid unwanted charges.

You can verify that everything is wiped clean over on your console page. That wraps it up!

GitHub

To reference any code used in this write-up, please check out my GitHub page.

Thank you so much for reading this article. If you have any tips, tricks, or questions, please let me know. If you found the demo helpful, please give it a clap and a follow. I'd love to connect with you on LinkedIn as well @KatieSheridan/DevOps


Katie Sheridan

DevOps/Cloud Engineer — I will be using Medium to blog about my projects and any tips or tricks I come across.