Immutable Infrastructure: How to Deploy Auto-Scalable Gitlab Runners in AWS Using Packer, Terraform, and Ansible

A step-by-step guide on how to provision highly scalable specific runners in AWS using Packer, Ansible, and Terraform

Ronie Horca
15 min read · Aug 28, 2023

Overview

Gitlab is not only famous for being version control software, but also for its scalable Gitlab CI service. For years, Gitlab has helped companies of all sizes practice and evangelize DevSecOps to deliver software safely and quickly. Today we will explore and leverage Gitlab Runners, the machines that execute CI/CD jobs.

Currently, there are three types of Gitlab Runners:
a) Shared Runners — runners available to all groups and projects in a Gitlab instance
b) Group Runners — runners available to all projects and subgroups in a group
c) Specific Runners — runners available to specific projects only

We will be focusing on setting up Specific Runners, as these are arguably the most secure type of runner. This article is a step-by-step guide on provisioning highly scalable specific runners in AWS using Packer, Ansible, and Terraform. At the end of this article, we will test the runners with a Gitlab CI pipeline.

Architecture

Architecture of Gitlab Manager and Scalable Runners in AWS

Based on the architecture above, we will provision a gitlab manager instance in a public subnet. The gitlab manager is placed in an auto scaling group with both a minimum and maximum of one instance, so that it recovers itself the moment the server goes down.

The gitlab manager will be responsible for provisioning the specific runners in a private subnet. The runners will continuously poll the Gitlab server through a NAT gateway, which routes traffic to the Internet Gateway. We will use EC2 Spot Instances to minimize cost and an S3 bucket for the runner cache.

With That… Let’s Get It On With The Setup!!!

Prerequisites

a. AWS Account — A fundamental understanding of AWS resources is necessary to fully understand this setup. You should also have an AWS account to work in

b. Docker — We will run Packer, Ansible, and Terraform as Docker images in our setup.
Installation Procedure:
https://docs.docker.com/engine/install/

c. An existing IAM user with AWS access and secret keys

d. Keybase Username — This will be used to secure the Gitlab Manager IAM user access credentials
Installation Procedure:
https://keybase.io/download

e. Gitlab Account — You can register an account and leverage the 30-day free trial. Experience with Git will help you follow along seamlessly

Step-by-Step Procedure

1.) In your Gitlab account, create a repository and clone it locally

2.) Open the cloned repository using your preferred text editor, then prepare the following files and folders

Prepare the bash scripts

terraform > bin > plan, terraform > bin > init, and terraform > bin > apply should all have the same contents shown below

#!/bin/bash

cd `dirname $0`/../

source bin/terraform
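
Since the three scripts are identical, you can write one and copy it twice; a minimal sketch from the repository root:

$ mkdir -p terraform/bin
$ cat > terraform/bin/plan <<'EOF'
#!/bin/bash

cd `dirname $0`/../

source bin/terraform
EOF
$ cp terraform/bin/plan terraform/bin/init
$ cp terraform/bin/plan terraform/bin/apply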

terraform > bin > destroy

#!/bin/bash

cd `dirname $0`/../

# Terraform still needs the instance_ami_name variable on destroy, so re-read
# the AMI name from the Packer build log at the repository root; bin/docker
# passes it through as TF_VAR_instance_ami_name.
INSTANCE_AMI_NAME=$(cat ../packer-build-output.log | grep 'Creating AMI' | awk '{print $5}')
export INSTANCE_AMI_NAME="${INSTANCE_AMI_NAME%%[[:cntrl:]]}"
echo "EC2 AMI NAME = $INSTANCE_AMI_NAME"

source bin/terraform

terraform > bin > docker
We will use the Docker image hxhroniegss/terraform:1.5.4-alp3.17.4arm64 to run terraform. If your computer's platform architecture is amd64, update the script below to use the image hxhroniegss/terraform:1.5.4-alp3.17.4amd64 instead. A quick way to check your architecture is shown after the script.

#!/bin/bash

containerName="terraform_devops"

# If a previous wrapper container is still running, stop and remove it first.
if [ "$(docker inspect -f '{{.State.Running}}' ${containerName} 2>/dev/null)" = "true" ]; then
  echo "Stopping ${containerName} container..."
  docker stop $containerName
  echo "Deleting ${containerName} container..."
  docker rm $containerName
fi

# Run terraform inside the container, passing the AWS credentials and project
# settings through as TF_VAR_* environment variables.
docker run --name $containerName --rm -it \
  -v ~/.ssh:/home/.ssh \
  -v $(pwd):/home/terraform \
  -e "TF_VAR_region=$AWS_DEFAULT_REGION" \
  -e "TF_VAR_access_key=$AWS_ACCESS_KEY_ID" \
  -e "TF_VAR_secret_key=$AWS_SECRET_ACCESS_KEY" \
  -e "TF_VAR_keybase_username=$KEYBASE_USERNAME" \
  -e "TF_VAR_instance_ami_name=$INSTANCE_AMI_NAME" \
  -w /home/terraform/src \
  hxhroniegss/terraform:1.5.4-alp3.17.4arm64 "$@"
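
If you are not sure which image tag matches your machine, a quick check (the mapping below is the usual uname output; adjust if your environment reports differently):

$ uname -m    # arm64/aarch64 -> use the arm64 image tags, x86_64 -> use the amd64 image tags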

terraform > bin > init_backend

#!/bin/bash

cd `dirname $0`/../

echo "
= = = = = = = = = = = = = = = = = = = = = = = = = =
Running terraform init backend ...
= = = = = = = = = = = = = = = = = = = = = = = = = =
"
terraform="bin/docker"

$terraform init -backend-config="access_key=$AWS_ACCESS_KEY_ID" -backend-config="secret_key=$AWS_SECRET_ACCESS_KEY"

terraform > bin > terraform

#!/bin/bash

cmd="$(basename "$(test -L "$0" && readlink "$0" || echo "$0")")"

echo "
= = = = = = = = = = = = = = = = = = = = = = = = = =
Running terraform ${cmd} ...
= = = = = = = = = = = = = = = = = = = = = = = = = =
"
terraform="bin/docker"

$terraform $cmd

terraform > bin > show_creds

#!/bin/bash

cd `dirname $0`/../

echo "
= = = = = = = = = = = = = = = = = = = = = = = = = =
Show Credentials from Terraform Ouput ...
= = = = = = = = = = = = = = = = = = = = = = = = = =
"
terraform="bin/docker"

ACCESS_KEY_ID=$($terraform output -raw gitlab_manager_access_key_id)
SECRET_ACCESS_KEY=$($terraform output -raw gitlab_manager_secret_access_key)

echo "ACCESS_KEY_ID=$ACCESS_KEY_ID"
echo "SECRET_ACCESS_KEY=$(echo $SECRET_ACCESS_KEY | base64 -d | keybase pgp decrypt)"

Prepare the terraform scripts

terraform > src > provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

terraform > src > variables.tf

variable "access_key" {}
variable "secret_key" {}
variable "region" {}
variable "instance_ami_name" {}
variable "keybase_username" {}
variable "vpc_cidr" {
default = "10.0.0.0/16"
}
variable "gitlab_manager_user" {
default = "Gitlab-Manager"
}
variable "project_name" {
default = "gitlab-autoscale"
}

variable "tags" {
default = {
Name = "Devops Gitlab Autoscale"
}
}

terraform > src > output.tf

output "gitlab_manager_secret_access_key" {
sensitive = true
value = aws_iam_access_key.gitlab_manager.encrypted_secret
}

output "gitlab_manager_access_key_id" {
sensitive = true
value = aws_iam_access_key.gitlab_manager.id
}

output "vpc_id" {
value = aws_vpc.main.id
}

output "private_subnet_id" {
value = values(aws_subnet.private)[*].id
}

output "runner_security_group_name" {
value = aws_security_group.gitlab_runner.name
}

output "cache_bucket_name" {
value = aws_s3_bucket.gitlab_runner.bucket
}

terraform > src > network.tf
We will provision private and public subnets, as well as NAT gateways, across all the availability zones of your preferred AWS region

data "aws_availability_zones" "available" {}

resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true

tags = var.tags
}

resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id

tags = var.tags
}

resource "aws_subnet" "public" {
for_each = toset(data.aws_availability_zones.available.names)
vpc_id = aws_vpc.main.id
cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, index(data.aws_availability_zones.available.names, each.key))}"
availability_zone = each.key
map_public_ip_on_launch = true

tags = {
Name = "${var.tags.Name} Public"
}
}

resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id

route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}

tags = {
Name = "${var.tags.Name} Public"
}
}

resource "aws_route_table_association" "public" {
for_each = aws_subnet.public
subnet_id = each.value.id
route_table_id = "${aws_route_table.public.id}"
}

resource "aws_eip" "nat" {
for_each = toset(data.aws_availability_zones.available.names)
domain = "vpc"
depends_on = [aws_internet_gateway.main]
}

resource "aws_nat_gateway" "main" {
for_each = aws_subnet.public
allocation_id = aws_eip.nat[each.key].id
subnet_id = each.value.id
depends_on = [aws_internet_gateway.main]

tags = {
Name = "${var.tags.Name} Nat Gateway"
}
}

resource "aws_subnet" "private" {
for_each = toset(data.aws_availability_zones.available.names)
vpc_id = aws_vpc.main.id
cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, index(data.aws_availability_zones.available.names, each.key) + length(data.aws_availability_zones.available.names))}"
availability_zone = each.key

tags = {
Name = "${var.tags.Name} Private"
}
}

resource "aws_route_table" "private" {
for_each = aws_nat_gateway.main
vpc_id = aws_vpc.main.id

route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = each.value.id
}

tags = {
Name = "${var.tags.Name} Private"
}
}

resource "aws_route_table_association" "private" {
for_each = aws_subnet.private
subnet_id = each.value.id
route_table_id = aws_route_table.private[each.key].id
}

resource "aws_security_group" "gitlab_manager" {
name = "${var.project_name}-manager-sg"
vpc_id = aws_vpc.main.id
description = "Block all inbound"

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = var.tags
}

resource "aws_security_group" "gitlab_runner" {
name = "${var.project_name}-runners-sg"
vpc_id = aws_vpc.main.id
description = "Allow gitlab manager"

ingress {
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = [aws_security_group.gitlab_manager.id]
}

ingress {
from_port = 2376
to_port = 2376
protocol = "tcp"
security_groups = [aws_security_group.gitlab_manager.id]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = var.tags
}
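
If you want to sanity-check the cidrsubnet() math above, terraform console can evaluate it directly. A quick sketch, assuming a terraform binary is available and run from an empty directory so it does not touch your state; with vpc_cidr = 10.0.0.0/16 and three availability zones, the public subnets take netnums 0-2 and the private subnets 3-5:

$ echo 'cidrsubnet("10.0.0.0/16", 8, 0)' | terraform console   # first public subnet
"10.0.0.0/24"
$ echo 'cidrsubnet("10.0.0.0/16", 8, 3)' | terraform console   # first private subnet
"10.0.3.0/24"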

terraform > src > iam.tf

data "aws_iam_policy_document" "main" {
statement {
actions = ["sts:AssumeRole"]

principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}

resource "aws_iam_instance_profile" "main" {
name = "${var.project_name}-profile"
role = aws_iam_role.main.name
}

resource "aws_iam_role" "main" {
name = "${var.project_name}-role"
path = "/"
assume_role_policy = data.aws_iam_policy_document.main.json
}

resource "aws_iam_role_policy_attachment" "ssm" {
role = aws_iam_role.main.name
policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_role_policy_attachment" "cloudwatch" {
role = aws_iam_role.main.name
policy_arn = "arn:aws:iam::aws:policy/CloudWatchFullAccess"
}

resource "aws_iam_group" "gitlab_manager" {
name = "${var.gitlab_manager_user}-Group"
}

resource "aws_iam_group_policy_attachment" "gitlab_manager_ec2" {
group = aws_iam_group.gitlab_manager.name
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}

resource "aws_iam_group_policy_attachment" "gitlab_manager_s3" {
group = aws_iam_group.gitlab_manager.name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

resource "aws_iam_user" "gitlab_manager" {
name = var.gitlab_manager_user
}

resource "aws_iam_access_key" "gitlab_manager" {
user = aws_iam_user.gitlab_manager.name
pgp_key = "keybase:${var.keybase_username}"
}

resource "aws_iam_group_membership" "gitlab_manager" {
name = "${aws_iam_group.gitlab_manager.name}-Membership"
users = [
aws_iam_user.gitlab_manager.name,
]
group = aws_iam_group.gitlab_manager.name
}

terraform > src > s3.tf
Ensure the name of your S3 bucket is globally unique (a quick way to check is shown after the script)

resource "aws_s3_bucket" "gitlab_runner" {
bucket = "${var.project_name}-runner-cache"
tags = var.tags
}

resource "aws_s3_bucket_policy" "gitlab_runner" {
bucket = aws_s3_bucket.gitlab_runner.id

policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "PreventDeletion"
Effect = "Deny"
Principal = "*"
Action = [
"s3:DeleteBucket",
"s3:DeleteObject"
]
Resource = [
aws_s3_bucket.gitlab_runner.arn,
"${aws_s3_bucket.gitlab_runner.arn}/*",
]
},
]
})
}

resource "aws_s3_bucket_public_access_block" "gitlab_runner" {
bucket = aws_s3_bucket.gitlab_runner.id

block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "gitlab_runner" {
bucket = aws_s3_bucket.gitlab_runner.id
versioning_configuration {
status = "Enabled"
}
}

resource "aws_s3_bucket_server_side_encryption_configuration" "gitlab_runner" {
bucket = aws_s3_bucket.gitlab_runner.bucket

rule {
bucket_key_enabled = true
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
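
If you change the project_name (and therefore the bucket name), you can quickly check whether the name is still free before applying; a sketch with the AWS CLI:

# A 404/"Not Found" error means the name is available; 403 means someone else already owns it
$ aws s3api head-bucket --bucket gitlab-autoscale-runner-cache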

terraform > src > backend.tf

The backend is optional but recommended. If you want to set up the terraform backend in your AWS account, you can follow the step-by-step guide below:
https://medium.com/cloud-native-daily/how-to-securely-manage-terraform-state-file-in-aws-using-terraform-7c20b211c9cb

terraform {
  backend "s3" {
    bucket         = "hxhdevsecops-terraform-state" # <-- Change to your s3 bucket
    key            = "common/gitlab_runners"
    region         = "ap-southeast-1"               # <-- Change to your region
    dynamodb_table = "tf_state_locks"               # <-- Change to your dynamodb table
    encrypt        = true
  }
}

3.) Make the bash scripts executable

$ chmod +x ./terraform/bin/*

4.) Export the following environment variables
(change the values to your own)

export AWS_DEFAULT_REGION='<AWS_REGION>'
export AWS_ACCESS_KEY_ID='<ADMIN_USER_ACCESS_KEY_ID>'
export AWS_SECRET_ACCESS_KEY='<ADMIN_USER_SECRET_ACCESS_KEY>'
export KEYBASE_USERNAME='<KEYBASE_USERNAME>'

5.) Provision the resources
Initialize the configuration first

$ terraform/bin/init # Run this if you did not create the backend
# or
$ terraform/bin/init_backend # Run this if you created the backend

It is recommended to run terraform plan first to confirm your configuration is in good shape

$ terraform/bin/plan

Run the following to provision

$ terraform/bin/apply

To show the IAM access key id and secret access key of the Gitlab Manager, run the following script. Take note of the result

$ terraform/bin/show_creds

6.) So far, we have provisioned the required resources for our Gitlab Manager and Runners. Now let's prepare the following scripts to set up the gitlab manager and the runners

ansible > roles > setup_gitlab_manager > tasks > main.yml

---
- name: Install docker
  ansible.builtin.yum:
    name: docker
    state: present

- name: Add ec2-user to docker group
  ansible.builtin.user:
    name: ec2-user
    group: docker

- name: Enable docker
  ansible.builtin.systemd:
    name: docker.service
    state: started
    enabled: true

- name: Download the binary for your system
  ansible.builtin.get_url:
    url: "{{ docker_machine_download_url }}"
    dest: /usr/local/bin/docker-machine
    mode: 'a+x'

- name: Download gitlab runner repository script
  ansible.builtin.get_url:
    url: "{{ gitlab_repository_script }}"
    dest: /opt/gitlab_script.sh
    mode: 'a+x'

- name: Run the script
  ansible.builtin.shell: /opt/gitlab_script.sh
  args:
    executable: /bin/bash

- name: Install gitlab runner
  ansible.builtin.yum:
    name: gitlab-runner
    state: present

- name: Copy gitlab runner manager config
  ansible.builtin.template:
    src: templates/config.toml.j2
    dest: /etc/gitlab-runner/config.toml

ansible > roles > setup_gitlab_manager > templates > config.toml.j2
The following is the configuration (config.toml) of the gitlab manager; adjust it to your needs. As you can see, we will use Spot Instances for the runners to reduce cost, and we will limit the number of runners to 2 for now

concurrent = 10
check_interval = 0

[[runners]]
  name = "gitlab-aws-autoscaler"
  url = "{{ gitlab_url }}"
  token = "{{ gitlab_token }}"
  executor = "docker+machine"
  limit = 2
  [runners.docker]
    image = "alpine"
    privileged = true
    disable_cache = true
  [runners.cache]
    Type = "s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      AccessKey = "{{ gitlab_access_key_id }}"
      SecretKey = "{{ gitlab_secret_access_key }}"
      BucketName = "{{ bucket_name }}"
      BucketLocation = "{{ aws_default_region }}"
  [runners.machine]
    IdleCount = 1
    IdleTime = 1800
    MaxBuilds = 10
    MachineDriver = "amazonec2"
    MachineName = "gitlab-docker-machine-%s"
    MachineOptions = [
      "amazonec2-access-key={{ gitlab_access_key_id }}",
      "amazonec2-secret-key={{ gitlab_secret_access_key }}",
      "amazonec2-region={{ aws_default_region }}",
      "amazonec2-vpc-id={{ vpc_id }}",
      "amazonec2-subnet-id={{ private_subnet_id }}",
      "amazonec2-private-address-only=true",
      "amazonec2-tags=runner-manager-name,gitlab-aws-autoscaler,gitlab,true,gitlab-runner-autoscale,true",
      "amazonec2-security-group={{ security_group_name }}",
      "amazonec2-instance-type={{ gitlab_runner_instance_type }}",
      "amazonec2-request-spot-instance=true",
      "amazonec2-spot-price={{ gitlab_runner_spot_price }}",
      "amazonec2-ami={{ gitlab_runner_ami }}",
      "engine-install-url='{{ docker_engine_url }}'",
    ]
    [[runners.machine.autoscaling]]
      Periods = ["* * 9-17 * * mon-fri *"]
      IdleCount = 2
      IdleTime = 3600
      Timezone = "UTC"
    [[runners.machine.autoscaling]]
      Periods = ["* * * * * sat,sun *"]
      IdleCount = 1
      IdleTime = 60
      Timezone = "UTC"

ansible > roles > setup_gitlab_manager > vars > main.yml
Change the values of vpc_id, private_subnet_id, security_group_name, and bucket_name based on the output of terraform apply (a snippet for looking them up follows this file).
Also change gitlab_runner_ami and gitlab_runner_spot_price based on your AWS region

gitlab_repository_script: https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh
docker_machine_download_url: https://gitlab-docker-machine-downloads.s3.amazonaws.com/v0.16.2-gitlab.15/docker-machine-Linux-x86_64
docker_engine_url: https://releases.rancher.com/install-docker/20.10.21.sh
vpc_id: vpc-01a13723933131475
private_subnet_id: subnet-0af758e3aa5a21b3e
security_group_name: gitlab-autoscale-runners-sg
gitlab_url: "{{ lookup('ansible.builtin.env', 'GITLAB_URL') }}"
aws_default_region: "{{ lookup('ansible.builtin.env', 'AWS_DEFAULT_REGION') }}"
gitlab_token: "{{ lookup('ansible.builtin.env', 'GITLAB_RUNNER_TOKEN') }}"
bucket_name: gitlab-autoscale-runner-cache
gitlab_access_key_id: "{{ lookup('ansible.builtin.env', 'GITLAB_ACCESS_KEY_ID') }}"
gitlab_secret_access_key: "{{ lookup('ansible.builtin.env', 'GITLAB_SECRET_ACCESS_KEY') }}"
gitlab_runner_instance_type: t3.medium
gitlab_runner_spot_price: 0.0528
gitlab_runner_ami: ami-0950bf7d28f290092
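
To fill in these values from your own environment, the following sketch should work; it assumes terraform apply has already finished, reuses the bin/docker wrapper (run from the terraform directory), and uses the AWS CLI for the spot price:

$ cd terraform
$ bin/docker output -raw vpc_id
$ bin/docker output -json private_subnet_id            # a list; pick one subnet id
$ bin/docker output -raw runner_security_group_name
$ bin/docker output -raw cache_bucket_name
$ cd ..

# Recent spot price for the runner instance type in your region
$ aws ec2 describe-spot-price-history --instance-types t3.medium \
    --product-descriptions "Linux/UNIX" --max-items 1 \
    --query 'SpotPriceHistory[0].SpotPrice'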

ansible > playbook.yml

---
- name: Setup Gitlab Runner Manager
  hosts: all
  become: true
  become_method: sudo
  gather_facts: yes
  vars:
    ansible_python_interpreter: /usr/bin/python3
  roles:
    - roles/setup_gitlab_manager
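
Before baking the AMI, you can optionally sanity-check the playbook; a sketch that assumes ansible-core is installed locally (not otherwise required, since the Packer container runs the playbook during the actual build):

$ cd ansible
$ ansible-playbook --syntax-check playbook.yml    # prints "playbook: playbook.yml" when the YAML parses cleanly
$ cd ..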

packer > bin > run
We will run the following script to build the AWS AMI of the gitlab manager instance and then run terraform apply. The script uses the Packer Docker image hxhroniegss/packer:1.10.0-alp3.17arm64 for the arm64 platform architecture. If your computer has an amd64 architecture, use the hxhroniegss/packer:1.10.0-alp3.17amd64 image instead

#!/bin/bash

hcl_file="/home/src/packer/src/amzl2.pkr.hcl"

# Build the gitlab manager AMI inside the Packer container and keep the
# build log so we can extract the resulting AMI name afterwards.
docker run --rm -it \
  -v $(pwd):/home/src \
  -e "AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION" \
  -e "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" \
  -e "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" \
  -e "GITLAB_ACCESS_KEY_ID=$GITLAB_ACCESS_KEY_ID" \
  -e "GITLAB_SECRET_ACCESS_KEY=$GITLAB_SECRET_ACCESS_KEY" \
  -e "GITLAB_RUNNER_TOKEN=$GITLAB_RUNNER_TOKEN" \
  -e "GITLAB_URL=$GITLAB_URL" \
  hxhroniegss/packer:1.10.0-alp3.17arm64 build "${hcl_file}" | tee packer-build-output.log

# Grab the AMI name from the build output and hand it to terraform apply.
INSTANCE_AMI_NAME=$(grep 'Creating AMI' packer-build-output.log | awk '{print $5}')
export INSTANCE_AMI_NAME="${INSTANCE_AMI_NAME%%[[:cntrl:]]}"
echo "EC2 AMI NAME = $INSTANCE_AMI_NAME"

terraform/bin/apply
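
After the build finishes, you can confirm the AMI exists before (re)running terraform apply manually; a sketch with the AWS CLI:

$ aws ec2 describe-images --owners self \
    --filters "Name=name,Values=gitlab-runner-packer-*" \
    --query 'Images[].[ImageId,Name,CreationDate]' --output table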

packer > src > amzl2.pkr.hcl


packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1"
    }
    ansible = {
      source  = "github.com/hashicorp/ansible"
      version = "~> 1"
    }
  }
}

variable "aws_access_key" {
  type    = string
  default = "${env("AWS_ACCESS_KEY_ID")}"
}

variable "aws_secret_key" {
  type    = string
  default = "${env("AWS_SECRET_ACCESS_KEY")}"
}

variable "region" {
  type    = string
  default = "${env("AWS_DEFAULT_REGION")}"
}

variable "ami_username" {
  type    = string
  default = "ec2-user"
}

data "amazon-ami" "amzl2" {
  access_key = "${var.aws_access_key}"
  filters = {
    name = "amzn2-ami-hvm-*-x86_64-ebs"
  }
  most_recent = true
  owners      = ["amazon"]
  region      = "${var.region}"
  secret_key  = "${var.aws_secret_key}"
}

source "amazon-ebs" "gitlab_runner_manager" {
  access_key       = "${var.aws_access_key}"
  ami_description  = "Amazon Linux 2 with Gitlab Runner Manager"
  ami_name         = "gitlab-runner-packer-${formatdate("MM-DD-YY_hh-mm-ss", timestamp())}"
  communicator     = "ssh"
  force_deregister = "true"
  instance_type    = "t3.micro"
  region           = "${var.region}"
  secret_key       = "${var.aws_secret_key}"
  source_ami       = "${data.amazon-ami.amzl2.id}"
  ssh_username     = "${var.ami_username}"
}

build {
  sources = ["source.amazon-ebs.gitlab_runner_manager"]

  provisioner "ansible" {
    extra_arguments = ["--scp-extra-args", "'-O'", "--ssh-extra-args", "-o IdentitiesOnly=yes -o HostKeyAlgorithms=+ssh-rsa -o PubkeyAcceptedAlgorithms=+ssh-rsa"]
    playbook_file   = "/home/src/ansible/playbook.yml"
    user            = "${var.ami_username}"
  }
}
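
Optionally, you can syntax-check the template with the same Packer image before running a full build (a sketch; swap in the amd64 tag if that is your architecture):

$ docker run --rm -it -v $(pwd):/home/src \
    hxhroniegss/packer:1.10.0-alp3.17arm64 validate -syntax-only /home/src/packer/src/amzl2.pkr.hcl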

Also, add the following terraform scripts and userdata for the gitlab manager

terraform > src > asg.tf
We will be using a t3.micro instance here since the gitlab manager does not use a lot of resources. NOTE: keep the instance type consistent with the one used in Packer; at minimum, the CPU architecture must match the AMI

data "aws_ami" "main" {
most_recent = true

filter {
name = "name"
values = [var.instance_ami_name]
}

filter {
name = "root-device-type"
values = ["ebs"]
}
}

resource "aws_placement_group" "main" {
name = "${var.project_name}-pg"
strategy = "partition"
tags = var.tags
}

resource "aws_autoscaling_group" "main" {
name = "${var.project_name}-asg"
desired_capacity = 1
max_size = 1
min_size = 1
vpc_zone_identifier = [for subnet in aws_subnet.public: subnet.id]
launch_template {
id = aws_launch_template.main.id
version = aws_launch_template.main.latest_version
}
instance_refresh {
strategy = "Rolling"
preferences {
min_healthy_percentage = 90
instance_warmup = 120
auto_rollback = true
}
}
lifecycle {
create_before_destroy = true
}
}

resource "aws_launch_template" "main" {
name = "${var.project_name}-template"
ebs_optimized = true

image_id = data.aws_ami.main.id
instance_initiated_shutdown_behavior = "terminate"
instance_type = "t3.micro"
iam_instance_profile {
name = aws_iam_instance_profile.main.name
}
monitoring {
enabled = true
}
network_interfaces {
associate_public_ip_address = true
security_groups = [aws_security_group.gitlab_manager.id]
}
tag_specifications {
resource_type = "instance"
tags = var.tags
}
user_data = filebase64("${path.module}/userdata.sh")
lifecycle {
create_before_destroy = true
}
}

terraform > src > userdata.sh

#!/bin/bash

# Start the gitlab-runner service (its config was baked into the AMI by Ansible)
# and make sure the SSM agent is running so we can connect via Session Manager.
sudo gitlab-runner start
sudo systemctl start amazon-ssm-agent

7.) On Gitlab (Settings > CI/CD > Runners), create a project runner, choose Linux as the operating system, and give it the tags used by the test pipeline later in this article (manager, project, dockermachine) or allow it to run untagged jobs. Take note of the runner token, as you will use it in the next step

8.) Export the following environment variables. The Gitlab Manager access key id and secret access key are the ones revealed by the show_creds script you ran earlier

export AWS_DEFAULT_REGION='<AWS_REGION>'
export AWS_ACCESS_KEY_ID='<ADMIN_USER_ACCESS_KEY_ID>'
export AWS_SECRET_ACCESS_KEY='<ADMIN_USER_SECRET_ACCESS_KEY>'
export KEYBASE_USERNAME='<KEYBASE_USERNAME>'
export GITLAB_ACCESS_KEY_ID='<GITLAB_MANAGER_ACCESS_KEY_ID>'
export GITLAB_SECRET_ACCESS_KEY='<GITLAB_MANAGER_SECRET_ACCESS_KEY>'
export GITLAB_RUNNER_TOKEN='<GITLAB_RUNNER_TOKEN>'
export GITLAB_URL='https://gitlab.com'

9.) Provision the Gitlab Manager

$ packer/bin/run

In a few minutes, you should be able to see the following instances created

The runners are automatically created by the Gitlab Manager (Devops Gitlab Autoscale). You can also connect to the Gitlab Manager using Session Manager and check the logs

The message “Docker is up and running!” means SUCCESS!
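
If you want to poke around, a few useful checks from that Session Manager shell (a sketch; the service name and binary location follow the Ansible role above):

$ sudo gitlab-runner list                    # the registered runner should be listed here
$ sudo journalctl -u gitlab-runner -f        # live logs from the runner manager
$ sudo /usr/local/bin/docker-machine ls      # runner machines created by the manager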

Also, you should have your project runner available in Gitlab with a green status

Now that we have the Runners, It’s TIME to TEST!!!

That was a lot, right? But the fun part is testing it. You could simply create a .gitlab-ci.yml file that runs a few “echo” commands, but we can also run a terraform plan here, which is cooler. For that, make sure you have a terraform backend configured.
First, add the following variables in Gitlab (Settings > CI/CD > Variables)

TF_VAR_access_key, TF_VAR_secret_key, TF_VAR_region, and TF_VAR_keybase_username have the same values as the environment variables that you exported in your local terminal earlier

export AWS_DEFAULT_REGION='<AWS_REGION>'
export AWS_ACCESS_KEY_ID='<ADMIN_USER_ACCESS_KEY_ID>'
export AWS_SECRET_ACCESS_KEY='<ADMIN_USER_SECRET_ACCESS_KEY>'
export KEYBASE_USERNAME='<KEYBASE_USERNAME>'

TF_VAR_instance_ami_name is the name of the Gitlab Manager AMI you provisioned with Packer

Add the following .gitlab-ci.yml file

default:
  image: hxhroniegss/terraform:1.5.4-gitlabci

terraform-init:
  stage: build
  script:
    - echo "Run Terraform Init here!"
    - cd $CI_PROJECT_DIR/terraform/src
    - ls -la
    - terraform init -backend-config="access_key=$TF_VAR_access_key" -backend-config="secret_key=$TF_VAR_secret_key"
  artifacts:
    expire_in: 2h
    paths:
      - terraform/src/.terraform
  tags:
    - manager
    - project
    - dockermachine

terraform-plan:
  stage: test
  dependencies:
    - terraform-init
  script:
    - echo "Run Terraform Plan here!"
    - cd $CI_PROJECT_DIR/terraform/src
    - ls -la
    - terraform plan
  tags:
    - manager
    - project
    - dockermachine

As well as the following .gitignore file

**.terraform
**terraform.tfstate*
**.log*

Finally, push it to your main/master branch.

From the Gitlab Manager logs, you should be able to see something like this

Cleanup!!!

Of course, you can play with this as much as you want, but in case you want to delete everything for now, simply run the script below

$ terraform/bin/destroy

You have to manually terminate the runner instances though, since they are created by the Gitlab Manager rather than by Terraform.

You may also need to clean up resources Terraform does not manage, such as the AMI (and snapshot) built by Packer, and empty the cache bucket before it can be removed
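
A sketch of that manual cleanup with the AWS CLI; the filter relies on the docker-machine tags set in config.toml, and the placeholders are values you look up yourself:

# Find and terminate leftover runner machines
$ aws ec2 describe-instances \
    --filters "Name=tag:runner-manager-name,Values=gitlab-aws-autoscaler" \
              "Name=instance-state-name,Values=pending,running,stopped" \
    --query 'Reservations[].Instances[].InstanceId' --output text
$ aws ec2 terminate-instances --instance-ids <INSTANCE_IDS_FROM_ABOVE>

# Deregister the Packer-built AMI and delete its snapshot
$ aws ec2 deregister-image --image-id <GITLAB_MANAGER_AMI_ID>
$ aws ec2 delete-snapshot --snapshot-id <ASSOCIATED_SNAPSHOT_ID>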

What’s Next?

Now that we have the Gitlab Runners, we can do a lot of experimentation and deployments in the articles to come. We can combine Packer, Ansible, and Terraform in a single pipeline, which will be an exciting project. We can even use this pipeline to deploy Kubernetes applications on an EKS cluster that we will also provision with Terraform in a Gitlab CI/CD pipeline. Stay tuned to see more of these small projects. I hope you learned something of value from this article. Cheers!
