HashiCorp Packer: Build an Automated AWS AMI

Aizhamal Nazhimidinova
10 min read · Jan 16, 2024


Table of contents:

What is Packer
Benefits of using Packer
Packer templates
Core Components and Commands of Packer
Packer Workflow
Automate Golden AMI with CI/CD

What is Packer

Packer serves as an open-source tool designed to generate consistent machine images across various platforms using a unified source configuration. Known for its lightweight nature, Packer is compatible with major operating systems, demonstrating high performance by concurrently producing machine images for multiple platforms. It is essential to note that Packer does not act as a substitute for configuration management tools such as Chef or Puppet. Instead, during image creation, Packer seamlessly integrates with tools like Chef or Puppet to facilitate software installations onto the resulting image.

A machine image represents a static and self-contained unit comprising a pre-configured operating system and installed software, enabling the rapid deployment of new operational instances. The format of machine images varies for each platform, encompassing examples like Amazon Machine Images (AMIs) for EC2, VMDK/VMX files for VMware, and OVF exports for VirtualBox.

Benefits of using Packer

  1. Packer facilitates exceptionally rapid infrastructure deployment. With Packer images, fully provisioned and configured machines can be launched within seconds, a stark contrast to the conventional provisioning time of minutes or hours. This acceleration benefits both production and development environments, allowing instantaneous launch of development virtual machines, eliminating the need for prolonged provisioning periods.
  2. The versatility of Packer extends to multi-provider portability. By generating identical images for various platforms, you can seamlessly operate production in AWS, employ a private cloud like OpenStack for staging/QA, and utilize desktop virtualization solutions such as VMware or VirtualBox for development. Each environment runs on an identical machine image, ensuring unparalleled portability.
  3. Packer contributes to enhanced stability by installing and configuring all required software during the image creation process. This proactive approach detects and addresses script bugs early in the development cycle, minimizing the chances of discovering issues minutes after a machine launch.
  4. Furthermore, Packer significantly improves testability. Once a machine image is constructed, it can be swiftly launched and subjected to smoke testing, providing a quick verification of its functionality. This confidence in the image’s integrity extends to any other machines launched from it.

In essence, Packer simplifies the utilization of these advantages, making the process seamless and accessible.

Packer templates

Packer utilizes templates to coordinate the construction of one or more images. In traditional JSON templates, you define a sequence of builders, provisioners, and post-processors to create images. However, HCL2 templates introduce a different approach, where the configuration language allows you to articulate builders through sources and integrate them within build blocks.

JSON

"variables": {
"access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"

HCL2

provider "aws" {
access_key = var.access_key
secret_key = var.secret_key
variable "secret_key" {
type = string
default = var.AWS_SECRET_ACCESS_KEY
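
For context, a complete HCL2 template pairs variables like these with a source block that a build block then consumes, as described above. Here is a minimal sketch; the region, base AMI, and names are placeholder values:

locals {
  # Strip characters that are not allowed in AMI names from the timestamp
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "ubuntu" {
  region        = "us-east-1"
  instance_type = "t2.micro"
  source_ami    = "ami-xxxxxxxx" # placeholder base AMI
  ssh_username  = "ubuntu"
  ami_name      = "packer-example-${local.timestamp}"
}

build {
  sources = ["source.amazon-ebs.ubuntu"]
}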

Core Components and Commands of Packer

Packer works by defining a configuration file that specifies the settings and steps needed to build an image. Here are the core components and commands of Packer:

Core Components:

Builders. Purpose: Defines the platform and type of machine image to build (e.g., Amazon AMI, VirtualBox, Docker).

"builders": [
{
"type": "amazon-ebs",
"region": "us-east-1",
// Other builder-specific settings
}
]

Provisioners. Purpose: Specifies how to install and configure software within the machine image.

"provisioners": [
{
"type": "shell",
"script": "install-web-server.sh"
}
]

Variables. Purpose: Defines input variables that can be used throughout the configuration.

variable "aws_access_key" {}
variable "aws_secret_key" {}

Post-Processors. Purpose: Defines additional steps to be performed after the image is built.

"post-processors": [
{
"type": "docker-tag",
"repository": "my-repo",
"tag": "latest"
}
]
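
In HCL2 templates, post-processors are nested inside the build block. As a sketch, the manifest post-processor records the IDs of the artifacts a build produces (such as the resulting AMI ID); the source name here is hypothetical:

build {
  sources = ["source.amazon-ebs.example"]

  # Write the resulting artifact IDs (e.g., the AMI ID) to manifest.json
  post-processor "manifest" {
    output = "manifest.json"
  }
}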

Core Commands:

1. packer build — Initiates the build process using the provided configuration file.
2. packer validate — Validates the syntax and configuration of a template without building an image.
3. packer inspect — Prints the attributes of a template for debugging and verification.
4. packer fix — Updates a legacy JSON template to fix backwards-incompatible changes introduced in newer Packer versions (to convert a JSON template to HCL2, use packer hcl2_upgrade).
5. packer version — Displays the Packer version currently installed.

Packer Workflow

The Packer workflow involves defining a configuration file, specifying the desired components and settings, and then running Packer to create machine images. Here’s a general overview of the Packer workflow:

1. Define Configuration:
Let’s start by creating our files. Create a configuration file in either JSON or HCL2 format. This file includes details about the platform, builders, provisioners, variables, and any other necessary settings. If you are using the JSON format, you cannot declare required plugins inside the template; you have to install them manually with packer plugins install, as our Jenkins pipeline does later. If you write your template in HCL2, you can declare the plugins in the template itself and install them with packer init.
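
For reference, an HCL2 template declares the plugin in a required_plugins block, which packer init then resolves (the version constraint here is just an example):

packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}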

Example (main.json):

{
  "variables": {
    "ami_name": "",
    "region": "",
    "instance_type": "",
    "ssh_username": "",
    "source_ami": "",
    "vpc_id": "",
    "subnet_id": "",
    "iam_instance_profile": "",
    "aws_account_id_to_share_with": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "profile": "packer",
      "ami_name": "{{user `ami_name`}}-{{timestamp}}",
      "region": "{{user `region`}}",
      "instance_type": "{{user `instance_type`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "source_ami": "{{user `source_ami`}}",
      "vpc_id": "{{user `vpc_id`}}",
      "subnet_id": "{{user `subnet_id`}}",
      "iam_instance_profile": "{{user `iam_instance_profile`}}",
      "ami_users": ["{{user `aws_account_id_to_share_with`}}"]
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "script.sh",
      "destination": "/tmp/script.sh"
    },
    {
      "type": "shell",
      "inline": [
        "chmod +x /tmp/script.sh",
        "/tmp/script.sh"
      ]
    }
  ]
}

2. Since we are using variables, define a variables.json file. You can also set the values directly in the main template; however, if you have many variables, a separate file is more convenient, and you can name the file whatever you want. Note the variables section declared at the top of main.json with empty defaults: Packer’s legacy JSON format expects user variables to be declared in the template, while the var file supplies the actual values.

{
  "source_ami": "your-base-ami-id",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "PackerAMI",
  "region": "us-east-1",
  "vpc_id": "your-vpc-id",
  "subnet_id": "your-subnet-id",
  "iam_instance_profile": "YourInstanceProfileName",
  "aws_account_id_to_share_with": "id"
}
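
For comparison, with HCL2 the same values would live in a *.pkrvars.hcl file as plain assignments, passed with the same -var-file flag (a sketch; each variable must also be declared in the template):

# variables.pkrvars.hcl: HCL2 counterpart of the JSON var file above
source_ami    = "your-base-ami-id"
instance_type = "t2.micro"
ssh_username  = "ubuntu"
ami_name      = "PackerAMI"
region        = "us-east-1"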

3. Let’s define the script.sh file that installs the necessary packages on the OS. For our project, we need the Filebeat and Fluentd agents. The following commands are for Ubuntu; adjust them to your needs.

#!/bin/bash
# Enable the "exit on error" option
set -e

# Install Filebeat
sudo apt-get update
sudo apt-get upgrade -y
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.11.4-amd64.deb
sudo dpkg -i filebeat-8.11.4-amd64.deb
sudo apt-get install -f -y # Automatically fix missing dependencies
sudo service filebeat start

# Install Fluentd
sudo apt-get install -y ruby ruby-dev make gcc g++
sudo gem install fluentd -v '1.12'
sudo mkdir -p /var/log/fluent
sudo chmod 777 /var/log/fluent
sudo fluentd -s /var/log/fluent # Generate a sample Fluentd config in /var/log/fluent

# Clean up
sudo apt-get autoremove -y
sudo apt-get clean
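
Keep in mind that this script runs inside the temporary build instance that Packer launches, so everything it installs is baked into the resulting AMI.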

4. We could go ahead and run the packer build command locally, but we will automate AMI creation with a Jenkins CI/CD pipeline. Let’s define the pipeline as below.

pipeline {
    agent any
    stages {
        stage('Initializing plugins') {
            steps {
                sh "packer plugins install github.com/hashicorp/amazon"
            }
        }
        stage('Validating Packer Code') {
            steps {
                sh "packer validate --var-file=variables.json main.json"
            }
        }
        stage('Build AMI with Packer') {
            steps {
                script {
                    // Run Packer build and pass the variables file
                    sh "packer build --var-file=variables.json main.json"
                }
            }
        }
    }
    post {
        success {
            echo 'Success! Packer AMI has been created!'
        }
        failure {
            echo 'Failure! Please look at the console output!'
        }
    }
}
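
Note that the Jenkins agent executing these stages needs Packer installed locally and AWS permissions, which we grant below via an IAM instance profile.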

5. Once your files are ready, push them to your Bitbucket repository. Don’t forget to integrate Jenkins with Bitbucket using app passwords, and with AWS using an IAM role. There are four ways to integrate with AWS, but an IAM role is the safest.

6. I named the role PackerJenkinsRole; attach it to the server that runs Jenkins. In my case, Jenkins and Packer live on the same instance. Create an instance profile from your role.

provider "aws" {
region = "us-east-1" # Change this to your desired region
}

resource "aws_iam_role" "packer_execution_role" {
name = "PackerExecutionRole"

assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Effect = "Allow",
Principal = { Service = "ec2.amazonaws.com" },
Action = "sts:AssumeRole",
}],
})
}

resource "aws_iam_policy" "packer_execution_policy" {
name = "PackerExecutionPolicy"
description = "Policy for Packer execution"

policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Action = [
"ec2:AttachVolume",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CopyImage",
"ec2:CreateImage",
"ec2:CreateKeypair",
"ec2:CreateSecurityGroup",
"ec2:CreateSnapshot",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:DeleteKeypair",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSnapshot",
"ec2:DeleteVolume",
"ec2:DeregisterImage",
"ec2:DescribeImageAttribute",
"ec2:DescribeImages",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSnapshots",
"ec2:DescribeSubnets",
"ec2:DescribeTags",
"ec2:DescribeVolumes",
"ec2:DetachVolume",
"ec2:GetPasswordData",
"ec2:ModifyImageAttribute",
"ec2:ModifyInstanceAttribute",
"ec2:ModifySnapshotAttribute",
"ec2:RegisterImage",
"ec2:RunInstances",
"ec2:StopInstances",
"ec2:TerminateInstances",
"ec2:RequestSpotInstances",
"ec2:CancelSpotInstanceRequests",
],
Resource = "*",
},
],
})
}

resource "aws_iam_role" "packer_execution_role" {
name = "PackerExecutionRole"

assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Effect = "Allow",
Principal = {
Service = "ec2.amazonaws.com",
},
Action = "sts:AssumeRole",
}],
})
}

resource "aws_iam_instance_profile" "example_instance_profile" {
name = "PackerInstanceProfile"

role = aws_iam_role.packer_execution_role.name
}

resource "aws_iam_role_policy_attachment" "packer_execution_attachment" {
policy_arn = aws_iam_policy.packer_execution_policy.arn
role = aws_iam_role.packer_execution_role.name
}

7. Since I’m sharing my AMI with another account, I need to define a role and policy in Account B as well. I run Terraform code to create them in the dev account (Account B).

provider "aws" {
region = "us-east-1"
alias = "dev"

assume_role {
role_arn = "arn:aws:iam::271858351803:role/Execution-role"
session_name = "Packer"
}
}

# First Destination Account
resource "aws_iam_role" "destination_execution_role_1" {
name = "DestinationPackerExecutionRole1"

assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Effect = "Allow",
Principal = { Service = "ec2.amazonaws.com" },
Action = "sts:AssumeRole",
}],
})

# Specify the provider alias for resources in the dev account
provider = aws.dev
}

resource "aws_iam_policy" "launch_instance_policy_1" {
name = "LaunchInstancePolicy1"
description = "Policy to allow launching EC2 instances in Destination Account 1"

policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Effect = "Allow",
Action = "ec2:RunInstances",
Resource = "*",
},
{
Effect = "Allow",
Action = "ec2:DescribeImages",
Resource = "*",
},
],
})
provider = aws.dev
}

resource "aws_iam_role_policy_attachment" "launch_instance_attachment_1" {
policy_arn = aws_iam_policy.launch_instance_policy_1.arn
role = aws_iam_role.destination_execution_role_1.name

# Specify the provider alias for resources in the dev account
provider = aws.dev
}
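
Because the provider block assumes a role in the dev account, this can be applied from the same machine as the previous configuration; Terraform creates the role and policy in Account B on your behalf.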

8. Once your files are ready, let’s run some commands.

Install Packer on Linux

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install packer

Install Packer on macOS

brew tap hashicorp/tap
brew install hashicorp/tap/packer

Install Amazon plugin

packer plugins install github.com/hashicorp/amazon

Additional plugins can extend Packer’s functionality for specific use cases.

9. Initialize Packer. If an HCL2 template declares required plugins that are not already installed, the packer init command downloads and installs them. Note that packer init works only with HCL2 templates; for a JSON template like ours, use packer plugins install as shown above. You can also run it as part of your Jenkinsfile.

packer init .

10. Run Validation (Optional):
If you want to validate and build your Packer template locally, run the next two commands; otherwise, just run your Jenkins pipeline. The packer validate command checks the syntax and configuration of your template without performing the actual build. Pass the variables file if you are using one.

packer validate --var-file=variables.json main.json

11. Build Machine Image (Optional): Execute the packer build command with your configuration file (and variables file) as arguments to initiate the image-building process.

packer build --var-file=variables.json main.json

12. Configure your Jenkins pipeline (see the Automate Golden AMI with CI/CD section below for more information) and run it. Packer then takes the following steps.

  1. Packer Executes:
    Packer creates a temporary instance based on the specified source AMI and provisions it according to the defined provisioners.
  2. Provisioning:
    Builders and provisioners run in sequence to configure the instance and install software.
  3. Create Machine Image:
    Packer creates a new machine image (AMI, Vagrant box, Docker image, etc.) based on the provisioned instance.
  4. Cleanup (Optional):
    The temporary resources (e.g., instances) created during the build process are automatically terminated and cleaned up.
  5. Output Artifacts:
    If the build is successful, Packer outputs the identifier of the created machine image (e.g., AMI ID) and any additional artifacts specified in post-processors.
  6. Artifact Usage:
    Use the generated machine image in your infrastructure, whether for deploying virtual machines, containers, or other instances.
  7. Iterate and Update:
    If needed, iterate on the configuration file to make changes or improvements, and then repeat the build process.
  8. Version Control:
    Store your Packer configuration files in version control systems (e.g., Git) to track changes and manage infrastructure as code.

Automate Golden AMI with CI/CD

Automating Continuous Integration and Continuous Deployment (CI/CD) for Packer involves integrating Packer builds into your CI/CD pipeline. Below is a general guide on how to automate CI/CD for Packer with a focus on integrating it into a popular CI/CD tool like Jenkins. Adjustments may be needed based on your specific CI/CD tool.

Prerequisites:

1. CI/CD tool: Choose a CI/CD tool like Jenkins, GitLab CI/CD, CircleCI, Travis CI, or others based on your preferences and requirements.

2. Packer template: Have a Packer template (e.g., golden-ami.pkr.hcl) configured for your AMI creation, with the Amazon plugin installed:

packer plugins install github.com/hashicorp/amazon
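
As a reference point, a minimal golden-ami.pkr.hcl might look like the following sketch; the plugin version, base AMI, and names are placeholders to adapt:

packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

locals {
  # Timestamp with characters not allowed in AMI names stripped out
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "golden" {
  region        = "us-east-1"
  instance_type = "t2.micro"
  source_ami    = "ami-xxxxxxxx" # your base AMI
  ssh_username  = "ubuntu"
  ami_name      = "golden-ami-${local.timestamp}"
}

build {
  sources = ["source.amazon-ebs.golden"]

  # Reuse the provisioning script from the earlier JSON example
  provisioner "shell" {
    script = "script.sh"
  }
}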

Steps:
1. Set Up CI/CD Environment:

Install and configure your chosen CI/CD tool on your build server.

2. Version Control:

Store your Packer template and CI/CD scripts in a version control system like Git.

3. Create CI/CD Pipeline:

Define a CI/CD pipeline in your CI/CD tool’s configuration file (e.g., Jenkinsfile for Jenkins).

Adjust the pipeline according to your specific needs, adding stages for testing, deployment, or any other steps.

4. Configure CI/CD Environment Variables:

Set up environment variables for AWS credentials in your CI/CD tool’s settings (e.g., Jenkins Credentials plugin). These credentials should have the necessary permissions for Packer to interact with AWS.

5. Trigger CI/CD Pipeline:

Trigger the CI/CD pipeline manually or configure it to trigger automatically upon code changes.

6. Monitor and Iterate:

Monitor the CI/CD pipeline and review build logs to identify and resolve any issues. Iterate on your Packer template and CI/CD pipeline as needed.

Notes:

Secrets management: Handle sensitive information (e.g., AWS credentials) securely using your CI/CD tool’s built-in secrets management or external solutions.

Parallelization: Depending on your needs, consider parallelizing certain stages of your CI/CD pipeline for faster image creation.

Notifications: Set up notifications to alert stakeholders on successful or failed CI/CD builds.

Artifact management: If you use the generated AMI as an artifact, ensure that it’s versioned and properly managed.

This guide provides a basic outline, and specific configurations will vary based on your CI/CD tool. Adjust the examples and steps to fit your environment and requirements.

Conclusion

Packer simplifies and streamlines the process of creating consistent machine images for various platforms. By automating the creation of golden AMIs and integrating it into CI/CD pipelines, organizations can achieve greater efficiency, reliability, and security in their infrastructure deployment processes. Packer’s multi-platform support and versioning capabilities make it a valuable tool for managing and maintaining a scalable and consistent infrastructure.
