What does a good Terraform Module have?

PR Code - Christopher Pateman
Version 1
Jul 4, 2024

Terraform Modules are great for isolated components you can reuse and plug into your main Infrastructure as Code base. They can also be shared and used by multiple other teams at the same time to reduce repeated code and complexity, and to increase compliance. However, because they are isolated, they are harder to test, maintain, and support with multiple teams working on them. In this post I will go through some general suggestions for keeping a module well formatted, versioned, and tested.

Testing

Modules will be altered and managed by multiple different teams for their own requirements, so we need methods to validate that the module is in a good state to use, readable, and secure. Therefore, we need to implement different levels of testing in the module.

Format

Code quality is something I am a big fan of. Although it doesn’t have any impact on the performance or behaviour of the code, it does make the code more readable for yourself and the next engineer coming to work on the module. It also sets a standard for all engineers to work in the same way, to the same standard.

To do this you can use the built-in command `terraform fmt -recursive` to format the code, and then `terraform fmt -check` to validate it. This will check the Terraform code is well formatted and fail if it is not. However, this is limited to checking the format of the code against the Terraform standard, with limited validation of the actual implemented code.

This is why there are other tools like TFLint (https://github.com/terraform-linters/tflint). TFLint, like others out there, can validate the format with configuration to align it to your standards, such as naming conventions. It also reviews the resource blocks used to make sure they are implemented correctly, similar to what the `terraform validate` command does. Finally, it can check your provider versioning to make sure you are using the latest versions.

TFLint can easily be run using the Docker image:

docker run --rm -v "$(pwd)":/data -t ghcr.io/terraform-linters/tflint

Validate

As mentioned in the previous section, you can use TFLint to validate the resource block implementation against the providers; a more native method is `terraform validate`. This performs the same provider validation as TFLint, but without the additional linting.

To run this, use the simple command `terraform validate`, but note that it can only be run after Terraform has been initialized.
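A minimal sketch of how this could look in a pipeline, assuming the `terraform` CLI is on the PATH; the `-backend=false` flag skips backend setup, so no state or credentials are needed:

```shell
#!/usr/bin/env bash
# Sketch of a CI validate step; assumes terraform is installed on the agent.
set -euo pipefail

validate_step() {
  terraform init -backend=false -input=false  # initialise providers/modules only
  terraform validate                          # check resource blocks against the providers
}
```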

Security

Security testing is where we shift left. Instead of waiting until resources are deployed into the environment, we can validate that they are configured correctly as part of the IaC. This can take into consideration the different cloud providers and their respective standards.

The common tool I use is TFSec (https://github.com/aquasecurity/tfsec), but there are also others like Checkov (https://www.checkov.io/1.Welcome/What%20is%20Checkov.html). TFSec can be run as part of a Docker container and can output a JUnit report.

docker run --rm -t -v "$tfDir":/src tfsec/tfsec ./src --format junit > $reportPath/TFSecReport/junit.xml

This will scan `$tfDir`, the Terraform directory, for any security or best-practice issues with the resources. The outcome is printed to the screen, but there is also the JUnit report output that can be used to publish the results to tools like Azure DevOps.

Code Quality

Something tools don’t generally pick up with Terraform is code quality. There is no perfect way to do this, and there are multiple opinions on the best method. I spoke about this before in https://prcode.co.uk/2022/02/08/terraform-code-quality/, although I have changed my stance on some of those points since writing it.

In general, you are looking for consistency and easy reading, as the aim is to give your colleagues something workable to code against, and also to make things easy to find when you need them. Some quick tips are:

  • A naming convention for files that describes what content is in the file.
  • Use the file naming convention to also order them in a logical way, e.g. numbers or grouped by resource type.
  • A consistent naming convention for resources, data attributes, and variables.
  • No monolithic code files, to reduce scrolling.
  • Smart modules made with clear usage, rather than made for the sake of it.

Documentation

A key part to Terraform Modules is documentation, which should describe:

  • What is the module for?
  • How should it be used?
  • The inputs and outputs of the module.

This should all be put within a README in the root of the module, so it can be easily found and read, using markdown. Instead of hand-writing this you can use tools like terraform-docs (https://github.com/terraform-docs/terraform-docs) that auto-generate all the Terraform details for you.

There are many ways to set this up, but my preference is to use the configuration file `.terraform-docs.yml` that describes how to build the README. In the example below you can see I am setting the format to be markdown, with the header and footer content located in the `docs` folder. The `sort` configuration organises the order of the Terraform outputs, and the `output` option says to put the content in the `README.md` file by replacing its content. Finally, `content` describes how the content should be laid out, with placeholders that are replaced by their respective sections. Of course, there is other configuration and there are other methods you can use with this file.

formatter: "markdown table"
footer-from: "docs/footer.md"
header-from: "docs/header.md"
sort:
  enabled: true
  by: name
output:
  file: README.md
  mode: replace
content: |-

  {{ .Header }}

  This is the module's Terraform documentation.

  ## Example

  ```hcl
  {{ include "examples/_main.tf" }}
  ```

  {{ .Inputs }}

  {{ .Outputs }}

  {{ .Footer }}

Within the placeholders, you have the header and footer located in the `docs` folder, and there is also an example in the `examples` folder. This is great for describing how to use the Terraform module, using HCL code as an example. As shown in the configuration, you add a Terraform file with an example of how to set the module up, and terraform-docs will render the HCL in the markdown.

The file/folder setup would look like this:

  • docs
    • footer.md
    • header.md
  • examples
    • _main.tf
  • .terraform-docs.yml
  • README.md

You could then generate the Terraform documentation using the command:

terraform-docs --config .terraform-docs.yml .

Continuous Integration

The tools and standards above will ensure a secure, well-formatted, documented, and valid Terraform module. The next part is ensuring those standards are upheld as new development happens. To do this we create a Continuous Integration process that executes these tools on Pull Requests to protected branches, and again once merged in.

This is the repeatable design I have come up with, which includes all the validation, support, and documentation mentioned before. Below is a walkthrough of how the process works.

Branching

The `main` branch is protected to not allow any direct commits. This ensures that all new changes go through the Pull Request (PR) flow to validate the changes.

  1. Create a new Feature branch based on the current Main branch.
  2. Make all changes and tests in the Feature branch.
  3. Create a Pull Request with the Build Validation and at least 1 peer review from a colleague.
  4. Once approved and passed, merge into the Main branch.

Build Validation

The Build Validation checks that the code and changes are up to the correct standards. It is triggered when a new Pull Request is created, based on the changed Feature branch. If any of the tests fail, the build fails and in turn fails the Pull Request, stopping it from merging.

  1. Validate the Terraform code.
  2. Validate the format of the code.
  3. Validate the Security of the code.
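The three checks above could be combined into one script run by the PR build. This is a sketch under the assumption that `terraform`, `tflint`, and `tfsec` are all installed on the build agent:

```shell
#!/usr/bin/env bash
# Hypothetical PR build-validation script: any failing check fails the build,
# which in turn blocks the Pull Request from merging.
set -euo pipefail

validate_module() {
  local dir="${1:-.}"
  terraform -chdir="$dir" init -backend=false -input=false
  terraform -chdir="$dir" validate        # 1. validate the Terraform code
  terraform fmt -check -recursive "$dir"  # 2. validate the format
  tflint --chdir="$dir"                   # extra linting against your standards
  tfsec "$dir"                            # 3. validate the security
}
```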

Continuous Integration

This Continuous Integration (CI) pipeline is triggered once a new change is merged into the Main branch. It will update the documentation and tag the branch with a new version.

The reason I run this after the merge is that having it in the PR means every change needs to be re-evaluated and committed back to the feature branch. This would also happen if someone else changed the Main branch while you were still working on the PR. I found this generated a lot of complaints, as it constantly reset the review, causing more work. With this method, the updates only happen once everything is signed off and part of the Main branch.

  1. Generate the automated documentation.
  2. Force commit this back to the Main branch.
  3. Get the latest version number.
  4. Increment the version based on logic.
  5. Tag the latest commit including the documentation change on the Main branch.
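The steps above could be sketched as follows. This is a sketch only: the tag format `vX.Y.Z`, the `[skip ci]` convention, and push permissions to the Main branch are all assumptions about your setup:

```shell
#!/usr/bin/env bash
# Hypothetical post-merge CI steps; assumes terraform-docs and git are available
# and the pipeline identity is allowed to push commits and tags to main.
set -euo pipefail

publish_docs_and_tag() {
  local new_version="$1"  # e.g. 1.3.0, produced by your versioning logic
  terraform-docs --config .terraform-docs.yml .              # 1. generate the docs
  git add README.md
  git commit -m "docs: regenerate README [skip ci]" || true  # skip if unchanged
  git push origin main                                       # 2. commit back to Main
  git tag "v$new_version"                                    # 5. tag the latest commit
  git push origin "v$new_version"
}
```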

The versioning logic can be whatever you see fit, but a general pattern I would advise is Semantic Versioning.

Major.Minor.Patch e.g. 1.2.3

  • Major should be used for any breaking change, like upgrading the provider version.
  • Minor should be used for any new feature, so others can decide when to take them in.
  • Patch should be used for hotfixes, which means people can either pin to an exact version or use the `1.2.*` method to take in all new patches.
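As an illustration, the increment logic could be as simple as a small shell helper (a sketch only; real pipelines often use a dedicated release tool such as GitVersion or semantic-release instead):

```shell
#!/usr/bin/env bash
# Minimal Major.Minor.Patch bump helper.
# Usage: bump <current-version> <major|minor|patch>
bump() {
  local major minor patch
  IFS=. read -r major minor patch <<< "$1"
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "$major.$((minor + 1)).0" ;;
    patch) echo "$major.$minor.$((patch + 1))" ;;
  esac
}

bump 1.2.3 minor   # prints 1.3.0
```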

Conclusion

The goal is to create a repeatable, isolated, and safe method for creating new modules. We do not want to add more work to setting these up each time, as people would then just find ways around it. Therefore, making modules easier to build and maintain is key to adoption and continued support.

Always make sure the modules are valid, secure, readable, and documented with correct versioning.

About the Author:

Christopher Pateman is the Azure Team Lead at Version 1.


I’m an Azure DevOps Engineer with a wide knowledge of the digital landscape. I enjoy sharing hard-to-find fixes and solutions for the wider community to use.