Chaining Machine Image Builds with Packer

How chaining Packer templates can optimize your development of immutable server processes

Sarah Horan Van Treese
The Startup
Apr 30, 2020 · 10 min read


HashiCorp’s Packer is a tool for automating the creation of pre-baked machine images across multiple platforms, enabling server management via infrastructure as code. It is especially helpful in scenarios that intentionally require a high degree of consistency across servers while still needing some degree of variation between them.

In this post, I’ll discuss an approach to machine image building and management that I’ve occasionally heard referred to as an image baking factory or an image bakery, which essentially means chaining image builds such that immutable server images are built upon one another. This post specifically focuses on implementing this approach using Packer.

This approach has big potential benefits in specific scenarios, but it also introduces additional steps and complexity, so I’ll also discuss when you would and wouldn’t benefit from it, compare it to alternative solutions, and then finally dig into some Packer template examples for AWS and Azure.

If you are completely new to Packer, I’d recommend at least reviewing their overview on getting started building images and overview of Packer terminology, since most of these tips assume that general background.

Background

Packer templates have a few basic sections, but the ones most important to this discussion are Builders and Provisioners.

One of the use cases that Packer specifically highlights and covers in depth in their documentation is supporting multi-platform builds from a single template. Some example use cases would be:

  • dev servers hosted as on-premises VMs with test and prod hosted in the cloud, OR
  • distributed server software which certifies support across multiple clouds and on-premises hosting.

In this kind of template, you could use a single template to produce an Amazon Machine Image (AMI) and also a VMware virtual disk to support multi-platform image parity; optionally, certain provisioning steps can be made conditional to specific builders using the “only” and “except” attributes.
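For illustration, a minimal sketch of such a template might look like the following. This is an assumption-laden example, not a production template: the region, base AMI filter, ISO URL, credentials, and script paths are placeholders, and checksum verification is skipped purely to keep the sketch short. Note how the builders are named and the second shell provisioner uses “only” to run against the VMware build alone.

{
  "builders": [
    {
      "name": "aws",
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "base-image-{{timestamp}}",
      "source_ami_filter": {
        "filters": {
          "name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*",
          "virtualization-type": "hvm",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      }
    },
    {
      "name": "vmware",
      "type": "vmware-iso",
      "iso_url": "https://example.com/isos/ubuntu-18.04-server-amd64.iso",
      "iso_checksum": "none",
      "ssh_username": "ubuntu",
      "ssh_password": "ubuntu",
      "shutdown_command": "sudo shutdown -P now"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/common-setup.sh"
    },
    {
      "type": "shell",
      "only": ["vmware"],
      "script": "scripts/vmware-only.sh"
    }
  ]
}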

What about diverging image scenarios within a single platform?

The single-template scenario above focuses primarily on cases where the provisioners are shared but the platform is different. For single-platform use cases where the divergence is in the provisioners, I’ve found chaining image builds to be the optimal approach.

At a high level, chaining your image builds simply means breaking your provisioners out into multiple templates and chaining the builds such that you consume the output from a prior build as the base image to configure in the subsequent build. It’s conceptually very similar to what you do with software builds, but the mechanics and implementation with Packer are very different.

How does this work?

For cloud image builders (AWS AMIs, Azure ARM images, etc.), the process looks something like this:

In the above, image building is split into three Packer templates, each with different provisioning steps. Template 1 references the base image as its input and produces a custom image as output; each subsequent template references the prior output as its input.

What use cases does this apply to?

Chaining your image builds and templates makes sense for single-platform scenarios where the servers themselves have divergent requirements and where each generalized image will be used to generate one or more servers.

This is actually a pretty common use case: provisioners are the mechanism to install and configure software, so if you are supporting different server configurations on the same platform, provisioners are likely to be the area that differs.

Consider the following example use cases with diverging server requirements:

  • dev, test and prod are all hosted within the same cloud but still have some minor differences between servers (e.g. perhaps you install additional software and/or open up additional access for developers in your dev environment), OR
  • an organization with multiple software products or teams which share 80% of their image configuration needs but which each require some additional variations specific to their product.

In the first example, all three images have their own servers; servers built from custom image 3 require all the same configurations as servers from custom image 2 and custom image 1, plus some additional configurations. In the second example, the two sets of servers share some of the same provisioning requirements but each have their own additional provisioning needs.

Benefits vs Alternative Approaches

For the right scenarios, chaining templates and image builds can offer:

a) reduced time spent in image-building processes and image development, and/or

b) maximum maintainability and reuse of infrastructure code across divergent configurations.

To really understand these benefits, let’s discuss alternative solutions. This post is about mechanics specific to Packer, so all alternative solutions use Packer, as well.

Consider the following alternative solutions for handling the latter example scenario from earlier, building images 2A and 2B:

Alternative 1: Create 2 separate templates and include the same provisioner steps in each.

This approach will work, but it is less desirable for a couple of reasons:

  1. Since provisioning steps are copied between templates, you need to maintain the code for any shared provisioning steps in two places. The risks around dual maintenance are somewhat offset when provisioning steps are pointers to shared files (e.g. scripts or configuration files for tools like Chef or Puppet), but there’s still a potential concern if new steps are added or for changes which otherwise require modifying the template directly.
  2. The builds run independently, which doubles the overall build time for the shared steps; this can be significant considering how long builds can sometimes take (if you are building Windows images, updates alone can take 20+ minutes). This time cost is especially heavy during development of provisioning scripts and Packer templates, where you may have to iterate multiple times across multiple templates and incur longer wait times while testing your changes.

Alternative 2: Combine into a single template and use conditional attributes in provisioners to reference builders by name.

Packer supports multiple build definitions of the same builder type within a single template, provided you explicitly name each build definition. That opens up an approach similar to the multi-platform one described earlier: run provisioning steps conditionally with the “only” and “except” attributes, referencing the builds by name, as per the sketch below.
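As a rough illustration (not a definitive implementation), here is what that could look like with two named amazon-ebs builds. The AMI filter, instance settings, and script paths are placeholder assumptions; the point is the named builders and the “only” attribute on the last two provisioners.

{
  "builders": [
    {
      "name": "image-2a",
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "custom-image-2a-{{timestamp}}",
      "source_ami_filter": {
        "filters": { "name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*" },
        "owners": ["099720109477"],
        "most_recent": true
      }
    },
    {
      "name": "image-2b",
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "custom-image-2b-{{timestamp}}",
      "source_ami_filter": {
        "filters": { "name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*" },
        "owners": ["099720109477"],
        "most_recent": true
      }
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "scripts/shared-setup.sh" },
    { "type": "shell", "only": ["image-2a"], "script": "scripts/2a-only.sh" },
    { "type": "shell", "only": ["image-2b"], "script": "scripts/2b-only.sh" }
  ]
}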

This is preferable to alternative 1 from a code maintenance perspective, but it still incurs the same build time costs as alternative 1 and can get messy as the conditional logic grows.

Even with the builds running in parallel, the shared provisioning cost is still incurred during development and during every build.

Time Savings during Development with Chained Templates

Let’s say for the sake of example that the average time to run provisioners for our original example was 20 minutes for the shared provisioners (custom image 1) and 10 minutes each for the provisioners on custom images 2A and 2B.

Consider the ramifications if you need to maintain provisioners specific to 2A.

With either alternative approach, any time maintenance is performed on 2A or 2B, you incur the first 20 minutes of shared provisioning even though those provisioning steps are not the ones changing: roughly a 30-minute build to test a change that only involves 10 minutes of provisioning. With chained templates, custom image 1 is reused as-is, so the same change rebuilds only 2A in roughly 10 minutes. This has significant implications during development cycles, when you may need to run the builds multiple times to troubleshoot, fix, etc.

Time Savings During CI Builds with Chained Templates

Additionally, consider the impact to your ability to maintain triggers in a CI pipeline for automating your Packer builds.

Most CI build systems support triggers based on source control commits that can be filtered by file and folder paths, which allows build optimization through finer-grained control over what changes trigger which builds. Let’s compare the optimal setup and triggers we’d be able to achieve with chaining the builds and with the alternative solutions.

Build Setup with Alternative 1: 2 templates, shared provisioners in both files

In the scenario where the two templates point to the same shared scripts, you’d need to configure two CI builds, each triggered by changes to its own template and scripts/configs, but also triggered by changes to the shared scripts.

In this approach, changes to the template or the scripts/configs specific to 2A or 2B each trigger only their one respective 30-minute build. Changes to the shared scripts trigger two 30-minute builds.

Build Setup with Alternative 2: 1 template, shared and dedicated scripts

In this approach, since the template is shared, you’d trigger a single 30-minute build for changes to any of the files. This means a higher likelihood of changes to 2A requiring a rebuild of 2B, even with the optimized configuration.

You could potentially optimize by creating two CI builds, each running the Packer CLI with a specific named build selected by folder path. However, you would only see benefits if the highest frequency of changes were to the underlying scripts/configs referenced by provisioners rather than to the shared template file.

In that case, any time a change for 2A also touches the shared template (e.g. adding a step or script to run for 2A only), you’d still have to rerun the build for 2B even with this setup.

Although this potentially reduces the frequency of builds and the likelihood of 2A’s maintenance requiring 2B’s rebuild (when no changes are required to the shared template), changes to shared files still kick off two separate builds, which potentially doubles build times in many scenarios. That offsets the optimization except in the specific cases where 2A or 2B scripts change without touching the Packer template file.

Build Setup with Chained Templates

With chained templates, each template AND its relevant provisioning files (scripts, etc.) are unshared, so the source-control triggers fire only for the changes relevant to that template, and changes to the first image in the chain also trigger the two downstream builds.

Final comparison

To sum up the math, consider the build-time impact of the various potential maintenance scenarios across the four approaches.

Each alternative solution has negative impacts on build pipelines. Whereas the 1 template/1 build approach has the lowest overall build time when shared provisioners change, it has the highest frequency of regenerating images that did not change. The 1 template/2 builds approach only limits this when scripts change without touching the shared template. The 2 templates/2 builds approach has the benefit of not regenerating images unnecessarily, but its build times are higher.

The chained template approach with 3 templates and 3 builds has the shortest overall time in builds and never requires regenerating an image unnecessarily.

Should you use this approach?

First, it’s important to understand when NOT to use this approach. The chaining process above doesn’t make sense if the only thing that uses “Custom Image 1” and “Custom Image 2” is another Packer build template. Why not? The output of each step above is a fully generalized image. Unlike server snapshots (of specialized, named servers), where each snapshot stores only the disk difference since the prior snapshot, each generalized image is the size of the full image. If you simply need snapshots or checkpoints within a single server instance, chaining images would result in nearly 3 times the disk size in the earlier example and would not be the best approach for your use case.

Iterative image building really only gives benefits when you would use each generalized image to generate one or more servers OR to generate one or more images in your chain.

What about on-prem virtualization scenarios?

For on-premises virtualization (e.g. VirtualBox, VMware, Hyper-V), the high-level process is a bit more complex, but it can still be accomplished.

For on-premises virtualization, the initial template would build from an ISO and each subsequent template would build from a specialized VM, requiring an additional step between Packer runs to create a running VM from the prior output; a sketch of a downstream template follows below.
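As a rough sketch of what a downstream on-premises template could look like with VMware, the vmware-vmx builder can point source_path at the .vmx file of the dedicated VM created in that pre-build step. The paths, credentials, and script name here are assumptions for illustration only.

{
  "builders": [
    {
      "type": "vmware-vmx",
      "source_path": "vms/custom-image-1/custom-image-1.vmx",
      "ssh_username": "ubuntu",
      "ssh_password": "ubuntu",
      "shutdown_command": "sudo shutdown -P now",
      "output_directory": "output-custom-image-2a"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/2a-setup.sh"
    }
  ]
}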

As with the cloud, the on-premises scenario only makes sense if you need to run server instances from each image. If you use this approach, you would still want to run it as an automated chain: a pre-build step creates a dedicated VM for that build from the prior image, and a post-build step destroys it once the output has been generalized, in order to prevent any configuration drift. This matters particularly if you want to run the build templates independently rather than as a single build process with multiple steps.

Finally… some examples

If you think this approach fits your use case, I’ll leave you with a couple of examples of how to implement it in your Packer templates. The examples below show the syntax you’d use in “Template 1” and “Template 2A” from my earlier scenarios. I’ve included examples for Azure (ARM) and AWS (AMI), but Packer’s documentation covers many ways to specify your own image as the input for other builder types.

Azure

In Azure, your template for Custom Image 1:
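The following is a minimal sketch rather than a production template: the marketplace image, resource group, location, VM size, and script path are placeholder assumptions, and credentials are pulled from environment variables. The final waagent step is the standard Linux deprovision/generalize step from Packer’s azure-arm documentation.

{
  "variables": {
    "client_id": "{{env `ARM_CLIENT_ID`}}",
    "client_secret": "{{env `ARM_CLIENT_SECRET`}}",
    "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
    "tenant_id": "{{env `ARM_TENANT_ID`}}"
  },
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "{{user `client_id`}}",
      "client_secret": "{{user `client_secret`}}",
      "subscription_id": "{{user `subscription_id`}}",
      "tenant_id": "{{user `tenant_id`}}",

      "os_type": "Linux",
      "image_publisher": "Canonical",
      "image_offer": "UbuntuServer",
      "image_sku": "18.04-LTS",

      "managed_image_name": "custom-image-1",
      "managed_image_resource_group_name": "my-images-rg",

      "location": "East US",
      "vm_size": "Standard_DS2_v2"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/shared-setup.sh"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
      ],
      "inline_shebang": "/bin/sh -x"
    }
  ]
}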

And your template for Custom Image 2A, where the image_publisher, image_offer and image_sku parameters are replaced by custom_managed_image_name and custom_managed_image_resource_group_name:
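A matching sketch for 2A, reusing the same placeholder resource group and credentials as above. The custom_managed_image_* parameters point at the output of the previous template, the managed_image_name changes to the new output name, and os_type is still specified explicitly.

{
  "variables": {
    "client_id": "{{env `ARM_CLIENT_ID`}}",
    "client_secret": "{{env `ARM_CLIENT_SECRET`}}",
    "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
    "tenant_id": "{{env `ARM_TENANT_ID`}}"
  },
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "{{user `client_id`}}",
      "client_secret": "{{user `client_secret`}}",
      "subscription_id": "{{user `subscription_id`}}",
      "tenant_id": "{{user `tenant_id`}}",

      "os_type": "Linux",
      "custom_managed_image_name": "custom-image-1",
      "custom_managed_image_resource_group_name": "my-images-rg",

      "managed_image_name": "custom-image-2a",
      "managed_image_resource_group_name": "my-images-rg",

      "location": "East US",
      "vm_size": "Standard_DS2_v2"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/2a-setup.sh"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
      ],
      "inline_shebang": "/bin/sh -x"
    }
  ]
}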

AWS

In AWS, your template for Custom Image 1:
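A minimal sketch, assuming an Ubuntu base; the region, base AMI filter, instance type, and script path are placeholders, and the output AMI name includes a timestamp so repeated builds don’t collide.

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "custom-image-1-{{timestamp}}",
      "source_ami_filter": {
        "filters": {
          "name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*",
          "virtualization-type": "hvm",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/shared-setup.sh"
    }
  ]
}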

And, finally, your template for Custom Image 2A. In the AWS builder, the parameter names don’t change, so you are just swapping the source AMI filter name to “custom-image-1” and setting the owner to “self”:
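Again a minimal sketch under the same placeholder assumptions as the previous template. The filter pattern matches the timestamped name produced above, and most_recent picks the latest build of custom image 1.

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t3.micro",
      "ssh_username": "ubuntu",
      "ami_name": "custom-image-2a-{{timestamp}}",
      "source_ami_filter": {
        "filters": {
          "name": "custom-image-1-*",
          "virtualization-type": "hvm",
          "root-device-type": "ebs"
        },
        "owners": ["self"],
        "most_recent": true
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/2a-setup.sh"
    }
  ]
}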
