Terraform Testing Made Easy with Python: Exploring tftest

Isar Nasimov
Published in saas-infra
Mar 2, 2023 · 6 min read

tftest is a Python package that provides a testing framework for Terraform modules. It allows users to write automated tests to verify the functionality and behavior of their Terraform code, including the creation and modification of resources. tftest simplifies the testing process by providing an easy-to-use API and a range of helpful utilities, making it an essential tool for any developer or team working with Terraform.

Make sure to check out the GitHub repo with the project files and give it a star if you find it helpful. Link.

Example project

In my root directory, I have test_my_module.py and a my_tf_module directory containing my Terraform module.

Project structure.
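Roughly, the layout looks like this (a sketch; the my_tfvars directory comes up later in the article):

```
.
├── test_my_module.py
├── my_tfvars
│   ├── important.tfvars
│   └── temp.tfvars
└── my_tf_module
    ├── main.tf
    ├── variables.tf
    └── output.tf
```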

In main.tf, this code block creates three local files, each with the content specified in the variable “file_content”, named “<prefix_name>-file1.txt” through “<prefix_name>-file3.txt”, where “prefix_name” is another variable.

main.tf
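A minimal sketch of what this might look like (the resource name is illustrative):

```hcl
resource "local_file" "files" {
  count    = 3
  content  = var.file_content
  filename = "${path.module}/${var.prefix_name}-file${count.index + 1}.txt"
}
```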

And I have my variables.tf and output.tf

variables.tf
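Something along these lines (the defaults are assumptions based on the tests later in the post):

```hcl
variable "prefix_name" {
  description = "Prefix for the generated file names"
  type        = string
  default     = "test"
}

variable "file_content" {
  description = "Content written to each file"
  type        = string
  default     = "file content"
}
```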

Ensuring that your output file contains all pertinent information is crucial for effective testing. Specifically, in this instance, it’s important that the test is able to access the file’s path.

output.tf
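A sketch that exposes the file paths, matching the ‘file_names’ output the tests reference:

```hcl
output "file_names" {
  value = local_file.files[*].filename
}
```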

Let the testing begin!

Let’s run a plan and test whether certain variables and outputs are present.

Plan fixture
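A minimal sketch of the fixture, assuming the test file sits next to the my_tf_module directory:

```python
import pytest
import tftest


@pytest.fixture(scope="module")
def plan():
    # Point tftest at the module directory and run `terraform init`.
    tf = tftest.TerraformTest("my_tf_module")
    tf.setup()
    # Run `terraform plan` and return the parsed plan output.
    return tf.plan(output=True)
```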

By utilizing the default variables from variables.tf, you can execute a simple terraform plan with tftest. This fixture will execute the plan and return the results to the test for evaluation.

In brief, a pytest fixture is a function that provides a fixed baseline for a test, such as setting up and configuring necessary components or data. It is called at the beginning of a test and can be reused across multiple tests.

To test that the variables and outputs are present, I created two tests.

Plan tests
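Roughly, the two tests look like this:

```python
def test_variables(plan):
    # The plan object exposes the variables that were passed to it.
    assert "prefix_name" in plan.variables
    assert "file_content" in plan.variables


def test_outputs(plan):
    # It also exposes the outputs defined in output.tf.
    assert "file_names" in plan.outputs
```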

This test is very simple: I check that prefix_name and file_content are present in the variables passed to the plan.

To enhance the complexity of these tests, let’s introduce custom variables.

Custom variables in plan
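A sketch of the updated fixture (the variable values are illustrative):

```python
vars = {"prefix_name": "Important", "file_content": "file content"}


@pytest.fixture(scope="module")
def plan():
    tf = tftest.TerraformTest("my_tf_module")
    tf.setup()
    # tf_vars passes each key/value pair to terraform as a -var argument.
    return tf.plan(output=True, tf_vars=vars)
```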

Within this fixture, I have incorporated custom variables specific to my module. As demonstrated, we are passing ‘tf_vars=vars’ to tf.plan.

And I have added two more asserts to test those vars.

Test custom variables.
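Something like the following:

```python
def test_prefix_name(plan):
    # The prefix we passed in should come back in the plan's variables.
    assert "Important" in plan.variables["prefix_name"]


def test_first_file_path(plan):
    # The first planned file path should use the prefix and index 1.
    assert "Important-file1.txt" in plan.outputs["file_names"][0]
```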

The first test validates that the prefix of my files will contain the string ‘Important’.
The second test confirms that the path of ‘file-1’ is correctly identified as the first element in the ‘file_names’ output.

Tests passed!

Are you hungry for more? Perhaps you crave the ability to plan with not one, but two or even three sets of variables. Fear not, for fixtures are here to save the day once again!

Adding more vars.
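A sketch of the parametrized fixture:

```python
vars0 = {"prefix_name": "Important", "file_content": "file content"}
vars1 = {"prefix_name": "temp", "file_content": "file content"}


@pytest.fixture(params=[vars0, vars1])
def plan(request):
    tf = tftest.TerraformTest("my_tf_module")
    tf.setup()
    # request.param is vars0 on the first run and vars1 on the second.
    return tf.plan(output=True, tf_vars=request.param)
```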

With this fixture in place, the plan will run once for each set of parameters provided, twice in total.

To break it down, the decorator ‘@pytest.fixture(params=[vars0, vars1])’ is used to indicate that the fixture will receive a set of parameters to operate on. In this specific case, the fixture is designed to execute twice, with ‘vars0’ and ‘vars1’ being the two different sets of parameters passed in. This allows for multiple test cases to be conducted with varying inputs, without the need for duplicating code.

4 tests in total

As you can see, the second dict (the one that was not ‘Important’) fails the tests.

I can almost hear someone grumbling about the hassle of having to create a Python dictionary for thousands of variables. Fear not, my friend, for there is a simpler solution: let’s use a tfvars file instead!

Within the ‘my_tfvars’ directory, you will find both ‘important.tfvars’ and ‘temp.tfvars’ files. These files will then be passed to the tftest plan for execution.

Using tfvars to pass vars to the plan.
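A sketch, assuming tftest’s tf_var_file argument and a my_tfvars directory at the project root:

```python
@pytest.fixture(params=["../my_tfvars/important.tfvars",
                        "../my_tfvars/temp.tfvars"])
def plan(request):
    tf = tftest.TerraformTest("my_tf_module")
    tf.setup()
    # The -var-file path is resolved from the module directory, hence "../".
    return tf.plan(output=True, tf_var_file=request.param)
```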

As you may have noticed, the path to the tfvars file is somewhat unconventional. It is important to note that this path must be relative to the 'main.tf' file and not to the 'test_my_module.py' file.

Let's APPLY our knowledge.

Now that the plan tests have been established, let’s take things up a notch and conduct some testing on actual resources.

To get started, I have created an ‘apply’ fixture.

Apply fixture
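A sketch of the apply fixture:

```python
@pytest.fixture(scope="module")
def output():
    tf = tftest.TerraformTest("my_tf_module")
    tf.setup()
    # Create the real resources and hand their outputs to the tests...
    tf.apply()
    yield tf.output()
    # ...then tear everything down once the tests are done.
    tf.destroy()
```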

In Python, ‘yield’ is a keyword that turns a function into a generator. It operates similarly to ‘return’ in that it provides a value to the caller, but it does so without exiting the function entirely.

Confused?

This is particularly useful in the context of pytest fixtures because it enables the fixture to execute both before and after the test function is run, allowing for more comprehensive setup and teardown procedures. In essence, the fixture can yield the resources or data necessary for testing, and then clean them up once the testing is complete. This capability greatly enhances the flexibility and efficiency of pytest testing.

The tests:

Apply tests
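Roughly:

```python
def test_file_count(output):
    # The module should have produced exactly three file paths.
    assert len(output["file_names"]) == 3


def test_file_content(output):
    # open() raises if a file was not actually created on disk.
    for path in output["file_names"]:
        with open(path) as f:
            assert "file content" in f.read()
```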

The initial test is straightforward: it simply verifies that the output contains three file paths.

For the second test, we delve a bit deeper. Here, we confirm that the files have indeed been created (Python will raise an exception if it is unable to open them). Additionally, we ensure that each file contains the desired string, “file content”. This level of testing provides a more thorough evaluation of the functionality of our IaC.

Test output

Real-life cases.

Imagine that you’ve just completed your latest module — one that deploys EKS with 2 node groups — and you’re eager to ensure that future updates won’t cause any unexpected issues. So, what kind of tests can you run?

Well, first and foremost, it’s essential to confirm that all the important variables and outputs are present and functioning correctly. This can be done in a similar manner to what we’ve already demonstrated here.

Next, you may want to apply your EKS module and test connectivity to the K8S API using the requests Python library. Another option is to use boto3 to check that there are indeed 2 node groups present, as in the sketch below.
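A hedged sketch of that boto3 check, assuming an apply fixture for the EKS module and a hypothetical ‘cluster_name’ output:

```python
import boto3


def test_node_group_count(output):
    # `output` would come from an apply fixture for the EKS module;
    # the `cluster_name` output is a hypothetical example.
    eks = boto3.client("eks")
    nodegroups = eks.list_nodegroups(clusterName=output["cluster_name"])
    assert len(nodegroups["nodegroups"]) == 2
```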

From there, you could deploy a simple helm application using pyhelm or helm-api, and then test connectivity to that application, even retrieving and evaluating responses.

Finally, you can assume the role of a developer and check whether they have access to the kube-system namespace (spoiler alert: they shouldn’t!). Once you’ve finished testing, let the fixture destroy your resources to ensure that everything is cleaned up properly.

By conducting a wide range of tests like these, you can gain a high degree of confidence in the stability and functionality of your module, helping to prevent any future surprises.

Stay tuned for more posts about tftest! In future articles, I plan to delve deeper into this testing framework, exploring more advanced features like testing state files and monitoring changes in resources during testing. With these powerful tools at your disposal, you can take your module testing to the next level and ensure the utmost stability and reliability in all of your deployments.
