Interpreting Terraform Plan Outputs with the Help of an LLM

Kevin Alexis
7 min read · Sep 2, 2024


So, you’ve got a Terraform plan output in front of you. It’s filled with lines that look like a language only your computer understands. We’ve all been there, staring at that screen, wondering if we need a decoder ring to figure out what’s going on. But what if I told you there’s a way to make sense of it all without losing your sanity? Enter LLMs (Large Language Models), the trusty sidekicks that can translate that tech-speak into something that actually makes sense. In this article, we’re diving into how you can use an LLM to make reading Terraform plans feel less like deciphering ancient hieroglyphics and more like reading something in your native tongue. Let’s get started!

This is what a common Terraform plan looks like:

[Image: a simple Terraform plan output]

And at the end, you get a summary of all the changes Terraform would try to make.

While interpreting all these changes isn’t too difficult, it’s easy to imagine a future where applications and integrations leverage LLMs to provide us with more intuitive, human-readable feedback. Imagine getting insights and summaries that are as clear as a conversation with a colleague — now that’s something worth looking forward to!

Let’s build this simple idea of using an LLM as an interpreter:

  • Step 1: Get a plan file out of Terraform
  • Step 2: Prepare the prompt template
  • Step 3: Invoke a local model
  • Step 4: Show the results

I’ll be using Ollama as the framework for calling a local LLM like llama3.1, along with LangChain in case you would like to extend this application.
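If you want to follow along, the setup is small. Assuming you already have Ollama installed and running, pulling the model and installing the two LangChain packages used below looks like this:

ollama pull llama3.1
pip install langchain-core langchain-ollama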

Take a Terraform project and generate a plain-text plan output using these commands:

terraform plan -out=myPlan.plan 
terraform show -no-color myPlan.plan > myPlan.txt

I was initially thinking of converting the binary plan into JSON to get more powerful query and transformation capabilities, like this:

terraform plan -out=plan && \
terraform show -no-color -json plan | jq -r '.resource_changes[] | select(.change.actions[0]=="update" or .change.actions[0]=="create" or .change.actions[0]=="delete") | .address'

But I had no luck with that, since the Terraform plan JSON changes over time along with the Terraform version. Feel free to dig into this idea. Some open source projects tackle parsing the Terraform plan JSON, like https://github.com/lifeomic/terraform-plan-parser

The Terraform documentation itself warns about this: the JSON output format is versioned (via a format_version field) and can change between Terraform releases.
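If you still want to try the JSON route, here is a minimal Python sketch (standard library only) that does roughly what the jq one-liner above does. Treat it as a starting point: resource_changes, change.actions, and address are part of Terraform’s documented JSON output, but the structure can shift between versions, so check format_version before trusting it.

import json
import subprocess

# Render the saved binary plan as JSON (assumes `terraform plan -out=plan` ran first)
raw = subprocess.run(
    ["terraform", "show", "-no-color", "-json", "plan"],
    capture_output=True, text=True, check=True,
).stdout

plan = json.loads(raw)

# The JSON format is versioned; print it so you know what you are parsing
print("format_version:", plan.get("format_version"))

# List the addresses of resources being created, updated, or destroyed
for rc in plan.get("resource_changes", []):
    actions = rc["change"]["actions"]
    if any(a in actions for a in ("create", "update", "delete")):
        print(",".join(actions), rc["address"])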

So, once we have our file set up, let’s create our LLM script.

I’m using LangChain along with Ollama to run an LLM in a local environment:

from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama.llms import OllamaLLM

# Read the plain-text plan we generated earlier
with open('monedero.txt', 'r') as file:
    terraform_plan_text = file.read()

# Template that teaches the model Terraform's action symbols
template = """Question: {question}

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
~ update in-place
- destroy
-/+ destroy and then create replacement

Answer: Provide a summary of the changes Terraform will make, also omit if there are any changes in outputs."""

# The question carries the full plan text as context
question = "Based on the following Terraform plan, what changes will be performed?\n\n" + terraform_plan_text

prompt = ChatPromptTemplate.from_template(template)

# llama3.1 must already be pulled locally via Ollama
model = OllamaLLM(model="llama3.1")

# Pipe the prompt into the model and invoke the chain
chain = prompt | model

result = chain.invoke({"question": question})
print(result)

Let’s break down this code.

Step 1: Read the Terraform Plan File

First things first, we need to grab the content of our Terraform plan. In this example, our plan is stored in a file named monedero.txt (the plain-text plan we generated earlier; adjust the name to match yours). We’ll read this file into a variable so we can pass it to our LLM later.

with open('monedero.txt', 'r') as file:
    terraform_plan_text = file.read()

Step 2: Prepare the Prompt Template

Now that we have the Terraform plan text, we need to prepare a prompt template. This template will guide the LLM on how to interpret the plan. The template includes instructions on what to look for — like the creation, update, or destruction of resources.

template = """Question: {question}

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
~ update in-place
- destroy
-/+ destroy and then create replacement

Answer: Provide a summary of the changes Terraform will make, also omit if there are any changes in outputs."""

Step 3: Formulate the Question

Next, we combine our Terraform plan text with the prompt template to form a specific question that we’ll ask the LLM. This question is essentially, “What’s going to change based on this plan?”

question = "Based on the following Terraform plan, what changes will be performed?\n\n" + terraform_plan_text

Step 4: Create the Prompt and Model

Using ChatPromptTemplate from langchain_core, we can create a prompt from our template. Then, we set up our LLM using the OllamaLLM class, specifying the model we want to use—in this case, "llama3.1".

prompt = ChatPromptTemplate.from_template(template)
model = OllamaLLM(model="llama3.1")

Step 5: Invoke the Model

Finally, we chain the prompt and model together. When we invoke this chain, it processes the question and provides us with a summary of the changes in the Terraform plan.

chain = prompt | model

# Invoke the model with the question
result = chain.invoke({"question": question})
print(result)

After testing out this tiny script with a large Terraform plan, I got a pretty shallow result; it seems the quantized model isn’t able to produce a complete summary of all the changes.

Let’s change our prompt a little bit to improve this.

template = """Question: {question}

The following is a Terraform plan output, detailing the upcoming changes to the infrastructure:
+ create new resources
~ update existing resources in place
- destroy existing resources
-/+ destroy and then create a replacement resource

Please provide a clear and accurate summary of the changes that Terraform will implement. Your summary should be suitable for stakeholders, highlighting the key actions and their impact on the infrastructure. Be precise and avoid technical jargon where possible, ensuring the explanation is understandable for non-technical audiences.

Answer: Provide a stakeholder-friendly summary of the planned infrastructure changes, focusing on what will change, why it matters, and any potential impacts."""

question = "Based on the following Terraform plan, what changes will be performed, and what should stakeholders know about these changes?\n\n" + terraform_plan_text

After running with this new prompt, I’m getting this output!

Here's a clear and accurate summary of the changes that Terraform will implement:

**Key Changes:**

* **New resources:** 10 new AWS resources will be created to support the infrastructure.
* **Updates to existing resources:** 6 existing resources will be modified to ensure they remain up-to-date and functional.
* **Deletion of existing resources:** 2 existing resources will be removed from the infrastructure.

**Impact:**

The changes are aimed at updating and refining the AWS infrastructure to better serve its purpose. The new resources will provide additional functionality, while the updated resources will ensure that existing components continue to work correctly. The removal of outdated resources will help maintain a streamlined and efficient infrastructure.

**Key Components Affected:**

* [XX-iam-role-config] - IAM role for configuration updates
* [pipeline-roles-lambdas] - IAM role for pipeline execution
* [XX] - Lambda function to activate a monedero NIP
* [XX] - Lambda function to update maximum amount
* [XX] - Lambda function to add money to a monedero
* [XX] - Lambda function for creating a new monedero
* [XX] - Lambda function for callback MSM
* [XX] - Lambda function to consult historical balances
* [XX] - Lambda function to create a new monedero
* [XX] - Lambda function to modify NIP of a monedero
* [XX] - Lambda function to obtain constants for a monedero
* [XX] - Lambda function to recover data from a monedero
* [XX] - Lambda function to recover expired monederos
* [XX] - Lambda function to remove money from a monedero
* [XX] - Lambda function to get the next balance
* [XX] - Lambda function to get the expired balance
* [XX] - Lambda function for status of adding money to a monedero
* [XX] - Lambda function for status of creating a new monedero
* [XX] - Lambda function for status of removing money from a monedero
* [XX] - Lambda function to update the wallet client
* [XX] - Lambda function to validate NIP of a monedero
* [XX] - Lambda function for pipeline notification via Telegram
* [XX] - Lambda function to consult balance

That’s better! Since the LLM provided the names of the components in brackets, I had to mask them, but feel free to play around with this concept and make improvements.
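One improvement worth trying for very large plans is a simple map-reduce pass: split the plan text into chunks, summarize each chunk, and then ask the model to merge the partial summaries. Here is a rough sketch that reuses the chain and terraform_plan_text variables from above; the chunk size is a guess you’d tune to your model’s context window.

# Naive map-reduce over a large plan: summarize chunks, then summarize the summaries
CHUNK_SIZE = 4000  # characters; tune to the model's context window

chunks = [terraform_plan_text[i:i + CHUNK_SIZE]
          for i in range(0, len(terraform_plan_text), CHUNK_SIZE)]

# Map: summarize each fragment independently
partial_summaries = [
    chain.invoke({"question": "Summarize the changes in this fragment of a Terraform plan:\n\n" + chunk})
    for chunk in chunks
]

# Reduce: merge the partial summaries into one stakeholder-friendly answer
final_summary = chain.invoke({
    "question": "Combine these partial summaries of a single Terraform plan into one stakeholder-friendly summary:\n\n"
                + "\n\n".join(partial_summaries)
})
print(final_summary)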

Conclusion:

As we’ve seen, leveraging LLMs to interpret Terraform plans is just the beginning of what’s possible. This approach not only simplifies complex outputs but also bridges the gap between technical systems and human understanding. The ability to turn dense, technical data into clear, actionable insights is a game-changer, and I have no doubt that we’re only scratching the surface.

In the near future, we’ll see more and more applications utilizing LLMs to translate the language of machines into something we can all understand. Whether it’s infrastructure plans, logs, or real-time system feedback, LLMs will play a crucial role in making interactions with technology more intuitive and accessible. The days of deciphering cryptic outputs are numbered; soon, communicating with our systems will be as straightforward as chatting with a colleague.

The integration of LLMs into our workflows promises a future where technology works for us — not the other way around. So, if you haven’t already, now’s the time to start exploring how LLMs can enhance your systems and make your work more human-centric.
