Building infrastructure in Terraform means building modules. How can we version those modules without breaking our Monorepo?

Managing Terraform Modules in a Monorepo

A solution for versioning multiple Terraform modules while preserving your Monorepo

Ben Goodman
Jul 17, 2023


Motivation

Our organization uses a Monorepo. Chances are, if you have found this article, you use a Monorepo as well.

While building out our cloud infrastructure modules in Terraform, we realized that we wanted to preserve our Monorepo while still following module best practices. Primarily, we wanted modules that are accessible to developers within our organization for viewing and use, and that have versioned releases, so that we can test and then “promote” infrastructure across development and production environments.

Existing solutions we investigated did not meet both our Monorepo and module-versioning requirements:

  1. Using a remote module registry, like HashiCorp’s Terraform Registry. Placing modules in HashiCorp’s remote registry is great for versioning modules and sharing them, publicly or privately via a private registry. Better yet, it is free! Unfortunately, a module hosted on a public or private registry must live within its own individual repository, thus breaking our Monorepo.
  2. Building modules within the Monorepo. This solution preserves the Monorepo structure, but we lose versioning: modules are simply defined in place and continuously updated. That makes it difficult to confidently test a module update in dev without the change accidentally making its way into prod. Without versioning, every module reference is always fully up to date, even when just testing out new changes. The sketch after this list contrasts the two calling conventions.
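
To make the trade-off concrete, here is a minimal sketch of what each option looks like from the calling side (the module names and registry namespace are illustrative, not from our codebase):

# Option 1: a registry-hosted module supports version constraints,
# but must live in its own repository.
module "api_from_registry" {
  source  = "app.terraform.io/example-org/api/aws"
  version = "~> 1.0"
}

# Option 2: a local module preserves the Monorepo, but callers always
# see the latest code -- local paths do not support a version argument.
module "api_local" {
  source = "../../modules/api"
}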

New Approach — Versioning modules within our Monorepo

The key change was to build modules locally within our Monorepo, but on push to dev, create versioned releases of each module in S3. Then, when we want to actually test a module in dev, we upgrade its module reference. Once satisfied with the latest module version, we promote it to higher environments (staging, production, etc.), as illustrated below.
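
As a hypothetical illustration of that flow (bucket name and versions are made up), the dev and prod roots simply pin different versions of the same module, and promotion is a one-line change to the prod source URL:

# iac/dev/api/main.tf -- trying out the new release in dev first
module "api" {
  source = "s3::https://s3-region.amazonaws.com/my-bucket/api/v1.1.0.zip"
}

# iac/prod/api/main.tf -- prod stays pinned until v1.1.0 proves out in dev
module "api" {
  source = "s3::https://s3-region.amazonaws.com/my-bucket/api/v1.0.0.zip"
}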

Implementation

Our infrastructure directory within our Monorepo looks as follows:

infrastructure
-- iac/
---- dev/
------ api/
....
-------- main.tf
....
---- prod/
------ api/
....
-------- main.tf
....

-- module_deployment/
---- main.py
---- requirements.txt

-- modules/
---- config/
------ versions.yml
---- api/
....
------ main.tf
....

We define all of our modules within a single sub-directory. Then, upon merge to a major branch, a small deployment script run via GitHub Actions deploys the new version of each module to S3.

modules/config/versions.yml:

api: v1.0.0

module_deployment/main.py (note how each module gets its own sub-directory within the configured S3 bucket):

import os
import shutil

import boto3
import yaml

BUCKET_NAME = "your-bucket-name-here"


if __name__ == "__main__":
    # Read in config around versions of the modules to deploy
    with open(
        f"{os.getcwd()}/infrastructure/modules/config/versions.yml", "r"
    ) as config_file:
        config = yaml.safe_load(config_file)

    # Create the S3 client
    boto_session = boto3.Session()
    s3_client = boto_session.client("s3")

    # For each module:
    # 1) Zip the module's configuration folder
    # 2) Upload the zipped folder to the S3 bucket, keyed as <module>/<version>.zip
    for module, version in config.items():
        print(f"Beginning to zip module {module}-{version}.")

        zip_output_name = (
            f"{os.getcwd()}/infrastructure/modules/{module}/{module}-{version}"
        )
        directory_name = f"{os.getcwd()}/infrastructure/modules/{module}/"
        shutil.make_archive(zip_output_name, "zip", directory_name)
        print(f"Done zipping module {module}-{version}.")

        print(f"Uploading {module}-{version} to S3.")
        s3_client.upload_file(
            f"{zip_output_name}.zip", BUCKET_NAME, f"{module}/{version}.zip"
        )
        print(f"Done uploading {module}-{version} to S3.")

GitHub Action to deploy new module versions (essentially nothing more than calling the above-defined Python script within GitHub Actions):

name: module deployment
on:
  push:
    branches:
      - dev
    paths:
      - 'infrastructure/modules/**'

jobs:
  zip-tf-modules-and-send-to-s3:
    runs-on: ubuntu-latest
    container: python:3.10.1-slim-buster
    timeout-minutes: 3

    permissions:
      contents: 'read'
      id-token: 'write'

    steps:
      - name: Checkout branch
        uses: actions/checkout@v3

      - name: Install dependencies
        run: |
          pip3 install -r infrastructure/module_deployment/requirements.txt

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: "arn:aws:iam::MY-PROJECT_ID:role/write_s3"
          aws-region: "MY_REGION"

      - name: ZIP Modules and ship to S3
        run: |
          python3 infrastructure/module_deployment/main.py
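
Note that the id-token: 'write' permission above means the workflow authenticates to AWS through GitHub's OIDC provider rather than long-lived access keys. As a minimal sketch (assuming an OIDC provider for token.actions.githubusercontent.com is already registered in the account, and with a hypothetical org/repo name), the write_s3 role's trust policy could be defined as:

data "aws_iam_policy_document" "github_oidc_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = ["arn:aws:iam::MY-PROJECT_ID:oidc-provider/token.actions.githubusercontent.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    # Only the dev branch of the (hypothetical) monorepo may assume this role.
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:example-org/monorepo:ref:refs/heads/dev"]
    }
  }
}

resource "aws_iam_role" "write_s3" {
  name               = "write_s3"
  assume_role_policy = data.aws_iam_policy_document.github_oidc_assume.json
}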

We now have multiple versioned modules defined and deployed from our Monorepo. To call a versioned module from within Terraform, we simply reference its S3 object URL:

module "google-backend-api" {
source = "s3::https://s3-region.amazonaws.com/my-bucket/api/v1.0.0.zip"

...
other_variable = "XYZ"
}
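
Terraform downloads s3:: sources using the standard AWS credential discovery chain (environment variables, the default shared-credentials profile, and so on), so whoever runs terraform init needs read access to the bucket.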

Possible Downsides

  • Discoverability is potentially limited: With this approach, discoverability comes down to your organization’s ability to document each module well within your Monorepo, and to keyword-search within the repository.
  • Cross-account credentialing can quickly become an issue: If the S3 bucket containing your module .zip files lives in one account, but infrastructure is being deployed to a different account, then the credentials that Terraform uses will require cross-account access (see the bucket-policy sketch after this list).
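
One way to ease the cross-account case is a bucket policy granting read access to the consuming account. A minimal sketch, assuming the hypothetical account ID 222222222222 is where Terraform runs and my-bucket holds the module archives:

resource "aws_s3_bucket_policy" "module_read" {
  bucket = "my-bucket" # hypothetical; the bucket hosting module .zip files

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowModuleReadFromConsumerAccount"
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::222222222222:root" }
      Action    = "s3:GetObject"
      Resource  = "arn:aws:s3:::my-bucket/*"
    }]
  })
}

The identity running Terraform in the consuming account still needs its own s3:GetObject permission on the bucket; cross-account access requires both sides to allow it.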

Conclusion

Using a simple Python script in conjunction with a GitHub Action, we can now support an arbitrary number of versioned modules within our existing Monorepo.

Have questions or see something that we missed? Let us know in the comments!

dragondrop.cloud’s mission is to automate developer best practices while working with Infrastructure as Code. Our flagship OSS product, cloud-concierge, allows developers to codify their cloud, detect drift, estimate cloud costs and security risks, and more — while delivering the results via a Pull Request. For enterprises running cloud-concierge at scale, we provide a management platform. To learn more, schedule a demo or get started today!


Ben Goodman

Senior Site Reliability Engineer @ ROKT. Working on developer tooling