Series: Gitlab pipeline (#2)

HashiCorp Packer.

Automation for creating custom AMIs.

Karishma
Nerd For Tech


This is the 2nd article in the “Gitlab pipeline” series. The aim here is to show the steps needed to create the custom AMIs, why they are needed, and how the whole process is backed by code.

We used HashiCorp Packer to “document”, as code, all the manual steps done to create the AMIs.

Two AMIs were needed: one for the runner manager and one for the job executor.

AMI for runner manager

Launch a t2.nano instance using the latest (EBS-backed) Ubuntu image provided by Amazon as the base AMI. Perform the following steps to create its AMI:

  • Manually create a key-pair that the “runner manager” uses to SSH into the “job executor”. Copy the private key and restrict its permissions to 400.
  • Install docker, gitlab-runner and fleeting-plugin-aws.
  • Create the AWS config and AWS credentials files on the runner-manager EC2 instance; the gitlab-runner process uses them to talk to the ASG.
  • Copy the config.toml file to register the shared Gitlab runner (created via the Admin UI) with gitlab-runner.
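For the first step, the key-pair can also be created from the AWS CLI instead of the console; a sketch, where the key name gitlab-runner-key is a hypothetical example (this needs live AWS credentials, so adapt it to your account):

```shell
# create a key-pair and save the private key locally (key name is an example)
aws ec2 create-key-pair \
  --key-name gitlab-runner-key \
  --query 'KeyMaterial' \
  --output text > privateKey.pem

# restrict the key file to owner read-only, as SSH requires
chmod 400 privateKey.pem
```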

Create a file named runner-manager-ami.pkr.hcl to incorporate the above content:

packer {
  required_plugins {
    amazon = {
      version = ">= 1.2.8"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

variable "AWSAccessKeyID" {}
variable "AWSSecretAccessKey" {}
variable "GitlabRunnerToken" {}

source "amazon-ebs" "ubuntu" {
  ami_name      = "gitlab-runner-manager-packer-ami"
  instance_type = "t2.nano"
  region        = <VALUE>
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }
  ssh_username = "ubuntu"
}

build {
  name = "runner-manager-ami-packer"
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "file" {
    source      = "privateKey.pem"
    destination = "~/.ssh/key.pem"
  }
  provisioner "shell" {
    inline = [
      "chmod 400 ~/.ssh/key.pem"
    ]
  }

  provisioner "shell" {
    script = "install-docker.sh"
  }
  provisioner "shell" {
    script = "install-gitlab-runner.sh"
  }
  provisioner "shell" {
    script = "install-fleeting-plugin-aws.sh"
  }

  provisioner "shell" {
    inline = [
      "mkdir ~/.aws"
    ]
  }
  provisioner "file" {
    source      = "aws-config"
    destination = "~/.aws/config"
  }
  provisioner "file" {
    source      = "aws-creds"
    destination = "~/.aws/credentials"
  }
  provisioner "shell" {
    inline = [
      "sed -i 's|aKeyID|${var.AWSAccessKeyID}|' ~/.aws/credentials",
      "sed -i 's|aSecretAccessKey|${var.AWSSecretAccessKey}|' ~/.aws/credentials"
    ]
  }

  provisioner "file" {
    source      = "config.toml"
    destination = "~/config.toml"
  }
  provisioner "shell" {
    inline = [
      "sudo mv ~/config.toml /etc/gitlab-runner/config.toml",
      "sudo sed -i 's|aRunnerToken|${var.GitlabRunnerToken}|' /etc/gitlab-runner/config.toml"
    ]
  }
}

Two provisioner types have been used above:

  • “shell” executes the specified commands in a shell on the instance.
  • “file” copies the given files to the specified locations.

Notice that three variables have been declared and used in the sed commands; I will explain them in the Execution section. First, here are the shell scripts used by the “shell” provisioners:

Install docker

The essential step is adding the user ubuntu to the group docker, so that gitlab-runner can run docker without sudo. The rest of the commands come from the official Docker installation documentation.
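The script itself is not reproduced above; a sketch of install-docker.sh, following the official Docker apt instructions for Ubuntu and ending with the docker group change (package list per Docker's docs; adjust to your needs), would look like this:

```shell
#!/bin/bash
set -e

# add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# install the Docker engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# let the ubuntu user run docker without sudo
sudo usermod -aG docker ubuntu
```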

Install fleeting plugin

#!/bin/bash

# download the plugin binary, make it executable and move it onto the PATH
# (note: a leading "sudo su -" would open an interactive root shell and block
# the Packer provisioner, so sudo is applied per command instead)
wget https://gitlab.com/gitlab-org/fleeting/fleeting-plugin-aws/-/releases/v0.4.0/downloads/fleeting-plugin-aws-linux-amd64
mv fleeting-plugin-aws-linux-amd64 fleeting-plugin-aws
chmod 744 fleeting-plugin-aws
sudo mv fleeting-plugin-aws /usr/local/bin/

Install gitlab-runner

#!/bin/bash

curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install -y gitlab-runner

Configure the gitlab-runner plugin

concurrent = 2
check_interval = 0
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = <VALUE>
  url = <VALUE>
  id = X
  token = aRunnerToken
  token_obtained_at = 2023-11-30T06:43:05Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker-autoscaler"
  shell = "sh"
  limit = 2
  request_concurrency = 2

  [runners.cache]
    MaxUploadedArchiveSize = 0

  [runners.docker]
    image = "docker:latest"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    privileged = false

  [runners.autoscaler]
    capacity_per_instance = 1
    max_use_count = 0
    max_instances = 2
    plugin = "fleeting-plugin-aws"

    [runners.autoscaler.plugin_config]
      config_file = "/home/ubuntu/.aws/config"
      name = "terraform-asg-gitlab-pipeline"
      credentials_file = "/home/ubuntu/.aws/credentials"
      profile = "default"

    [runners.autoscaler.connector_config]
      username = "ubuntu"
      protocol = "ssh"
      use_external_addr = true
      key = "/home/ubuntu/.ssh/key.pem"
      use_static_credentials = false

  [[runners.autoscaler.policy]]
    idle_count = 0
    idle_time = "10m0s"

The placeholder aRunnerToken gets its value from the sed command in the .hcl file. The actual value is provided through the CLI command (see Execution).

Create AWS config and credentials

The aws-config file:

[default]
region = <VALUE>

The aws-creds file:

[default]
aws_access_key_id=aKeyID
aws_secret_access_key=aSecretAccessKey

The placeholders aKeyID and aSecretAccessKey get their values from the sed commands in the .hcl file. The actual values are provided through the CLI command (see Execution).
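The substitution can be reproduced locally with dummy values (AKIADUMMY and dummySecret below are placeholders, not real credentials):

```shell
# create a sample credentials file containing the placeholders
printf '[default]\naws_access_key_id=aKeyID\naws_secret_access_key=aSecretAccessKey\n' > /tmp/credentials

# replace the placeholders in place, exactly as the Packer shell provisioner does
sed -i 's|aKeyID|AKIADUMMY|' /tmp/credentials
sed -i 's|aSecretAccessKey|dummySecret|' /tmp/credentials

# /tmp/credentials now contains:
# [default]
# aws_access_key_id=AKIADUMMY
# aws_secret_access_key=dummySecret
cat /tmp/credentials
```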

Execution

  • Pre-requisite steps:

cd path_to_packer_directory
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****

  • Create a variable file named variableValues.json (in the same directory). It is not committed to Git due to its sensitive content:

{
  "GitlabRunnerToken": "\"VALUE\"",
  "AWSSecretAccessKey": "VALUE",
  "AWSAccessKeyID": "VALUE"
}

  • Run packer build -var-file=variableValues.json runner-manager-ami.pkr.hcl.
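As a quick sanity check before running packer build, you can verify that the variable file parses as JSON. Note the escaped quotes around the runner token: they end up wrapping the token in quotes once sed substitutes it into config.toml, keeping the TOML valid. A sketch with dummy values (assuming python3 is available):

```shell
# write a variable file with dummy values (never commit real ones)
cat > /tmp/variableValues.json <<'EOF'
{
  "GitlabRunnerToken": "\"glrt-dummy\"",
  "AWSSecretAccessKey": "dummySecret",
  "AWSAccessKeyID": "AKIADUMMY"
}
EOF

# fails with a parse error if the file is not valid JSON
python3 -m json.tool /tmp/variableValues.json
```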

AMI for job executor

The job executor needs an AMI like the one above, but with only docker installed; hence the rest of the steps are not needed. You can create it using packer build launch-template-ami.pkr.hcl.

These AMIs are available privately in your account for your region.


PS: Follow this series, “Gitlab pipeline”, for more articles. In the next article, we will write our own Gitlab pipeline to deploy applications on a K8S cluster.
