12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 3

Manish Warang
Engineered @ Publicis Sapient
8 min read · Jun 5, 2024

In the previous part of this series, we explored the initial steps in applying the 12 Factor App principles to Terraform code. We discussed:

  1. Config: Emphasizing the separation of configuration from code, enabling dynamic and environment-specific settings. This enhances the reliability and scalability of your infrastructure provisioning process.
  2. Backing Services: Treating backing services such as databases as attached resources. This involves securely managing credentials and leveraging tools like AWS Secrets Manager to ensure portability and maintainability.
  3. Build, Release, Run: Strictly separating these stages to prevent configuration drift and maintain consistency across environments. This ensures what you build is exactly what you release and run, fostering a robust deployment process.

These foundational principles help in creating a scalable, maintainable, and secure infrastructure using Terraform.

Processes — Execute the app as one or more stateless processes

Processes are the choreographers of your Cloud ballet — they keep everyone in step and moving in the right direction. But watch out for the prima donna who insists on stealing the spotlight — keep your processes lean and your performance flawless!

Scenario 1: The “Zombie Apocalypse” of Processes
Imagine this: you’re managing a fleet of Cloud servers, each running multiple services to keep your application humming along. But as time goes by, you start noticing something eerie — zombie processes lurking in the shadows. These undead remnants of past deployments refuse to die gracefully, hogging precious resources and haunting your system like ghosts. They slow down your servers, drain your budgets, and turn your Cloud infrastructure into a graveyard of wasted compute power. It’s a nightmare straight out of a horror movie, except instead of brains, these zombies crave CPU cycles.

Scenario 2: The “Whack-a-Mole” Conundrum
Ever felt like you’re playing an endless game of whack-a-mole with your Cloud infrastructure? You squash one pesky process causing trouble, only for another one to pop up somewhere else. It’s like trying to plug leaks in a sinking ship with duct tape — a never-ending cycle of firefighting and frustration. You scale up to handle increased traffic, only to find that your processes aren’t playing nice with each other, leading to bottlenecks and slowdowns. It’s enough to make even the most seasoned DevOps engineer feel like they’re chasing their own tail in a never-ending game of cat and mouse.

Scenario 3: Unmanaged Background Processes

In a typical Cloud infrastructure setup managed through Terraform, engineers often need to provision various resources like virtual machines, databases, and networking components. Sometimes, these resources require background processes for tasks such as data migration, log streaming, or periodic clean-up. However, if these background processes are not properly managed, they can lead to resource wastage, performance issues, or even unexpected downtime.

Consider a scenario where a DevOps team provisions a set of EC2 instances using Terraform to host a microservices architecture. Each instance needs to run a background process for log aggregation and forwarding to a centralized monitoring system. Without proper management, these background processes might consume excessive CPU or memory, impacting the performance of the microservices. Furthermore, if one of these processes crashes or hangs, it can disrupt the entire application’s functionality.
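To make this concrete, here is a minimal Terraform sketch, assuming Amazon Linux 2: instead of launching an ad-hoc log-forwarding script, each instance delegates log shipping to the systemd-supervised CloudWatch agent, so a crashed or hung forwarder gets restarted rather than lingering. The AMI variable, log group name, and instance sizing are illustrative placeholders, and the agent’s JSON configuration and IAM instance profile are omitted for brevity.

```hcl
# A sketch, not a drop-in module: var.ami_id is a placeholder, and the
# CloudWatch agent still needs a config file and an IAM instance profile
# with CloudWatch permissions before it forwards anything.

resource "aws_cloudwatch_log_group" "app" {
  name              = "/example/app" # hypothetical log group
  retention_in_days = 14             # expire old logs instead of hoarding them
}

resource "aws_instance" "app" {
  ami           = var.ami_id # assumed Amazon Linux 2 AMI
  instance_type = "t3.micro"

  user_data = <<-EOF
    #!/bin/bash
    yum install -y amazon-cloudwatch-agent
    # systemd supervises the agent: if it crashes or hangs, it is
    # restarted instead of lingering as a half-dead background job
    systemctl enable --now amazon-cloudwatch-agent
  EOF

  tags = {
    Name = "app-with-supervised-log-agent"
  }
}
```

The detail that matters here is the process model, not the agent: the log forwarder is a disposable, supervised, stateless process. If the instance is replaced, the same user_data recreates it identically, leaving no hand-nursed background job behind to join the zombie horde.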

In the realm of Terraform coding for Cloud infrastructure, the “Processes” principle from the 12 Factor App offers crucial guidance. With its focus on executing tasks as stateless, isolated processes, it serves as a guardrail against resource inefficiencies and potential system failures. Picture this: DevOps teams spinning up EC2 instances, each necessitating background processes for vital functions like log aggregation. Without adherence to this principle, those processes can turn into resource hogs, jeopardizing the performance of our microservices. But by embracing the 12 Factor approach, we ensure that our processes remain lightweight, scalable, and resilient. So, as we code our Terraform configurations, let’s keep the essence of “Processes” alive, safeguarding our infrastructure’s stability and efficiency.

Port Binding — Export services via port binding

Port binding is like musical chairs for your applications — they need a seat at the table to make beautiful music together. But don’t let them fight over the best spot — keep your ports organized and your applications harmonizing like a well-tuned orchestra!

Scenario 1: The “Port Collision” Conundrum

Imagine this scenario: you’re deploying a shiny new microservice into your Cloud environment, complete with all the bells and whistles. Everything seems to be going smoothly until you hit a snag — port collision. You see, in the wild world of Cloud infrastructure, ports are like prime real estate — everyone wants a piece of the action. But when two services try to stake their claim on the same port, chaos ensues. It’s like trying to fit two puzzle pieces into the same slot — it just doesn’t work.

As a DevOps engineer, you find yourself playing referee in this port showdown, trying to untangle the mess without disrupting the flow of traffic. You’re juggling firewall rules, network configurations, and angry service owners, all while praying that your troubleshooting skills are sharper than a katana. It’s a high-stakes game of “Portopoly,” where one wrong move could send your entire infrastructure crashing down like a house of cards.

Scenario 2: The “Ephemeral Port Panic” Predicament

Picture this: you’ve spent weeks fine-tuning your Cloud infrastructure, optimizing every last byte for peak performance. But just when you think you’ve got it all figured out, along comes the dreaded ephemeral port panic. Ephemeral ports are like the wild cards of the networking world — they come and go as they please, leaving chaos in their wake. One minute, your service is happily chugging along, and the next, it’s stuck in port purgatory, unable to communicate with the outside world.

As a Cloud and Infrastructure Architect, you find yourself on the front lines of this ephemeral port panic, desperately trying to keep your services afloat amidst a sea of random port assignments. You’re wrangling load balancers, tweaking security groups, and praying to the demo gods for mercy, all while cursing the ephemeral nature of it all. It’s a wild ride through the port jungle, where the only law is Murphy’s Law — if something can go wrong, it will.

Scenario 3: Managing Port Allocation in Load Balancing Environments

In a load-balanced environment, ensuring proper port binding is crucial for efficient traffic distribution and high availability. Without adhering to the 12 Factor App principle of Port Binding, you might face challenges in managing port allocations across the instances of your application behind the load balancer. For instance, if you’re using Terraform to provision EC2 instances in AWS and configure them behind an Elastic Load Balancer (ELB), hard-coding static ports in your Terraform code can lead to port conflicts or inefficient resource utilization.

Consider a scenario where multiple EC2 instances serve the same application behind an ELB. One process listening on a predefined port, such as port 80, works fine while each instance runs a single copy of the service. But as your application scales and you pack several service containers onto each instance, those containers compete for the same host port, and static port assignments start to collide. By following the Port Binding principle and using dynamic port mapping (a capability of container services like Amazon ECS that you can configure through Terraform), each workload binds to a unique host port dynamically, allowing the load balancer to distribute traffic efficiently without encountering port conflicts.
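As an illustration, here is a minimal sketch of dynamic port mapping using Amazon ECS on EC2 behind a load balancer, configured through Terraform. Setting hostPort to 0 lets Docker assign each container an ephemeral host port, and ECS registers that instance-and-port pair with the target group automatically. The names, the nginx image, and the variables (var.vpc_id, var.cluster_id) are illustrative placeholders, and the load balancer and its listener are assumed to already exist.

```hcl
# A sketch under assumed inputs: var.vpc_id and var.cluster_id must point
# at an existing VPC and an ECS cluster backed by EC2 container instances.

resource "aws_lb_target_group" "app" {
  name        = "app-tg"
  port        = 80 # a default only; ECS overrides it per task
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "instance"
}

resource "aws_ecs_task_definition" "app" {
  family = "app"
  container_definitions = jsonencode([{
    name   = "app"
    image  = "nginx:stable"
    memory = 256
    portMappings = [{
      containerPort = 80
      hostPort      = 0 # 0 = assign an ephemeral host port dynamically
    }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = var.cluster_id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 3 # several copies can share one instance safely

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "app"
    container_port   = 80
  }
}
```

Because the target group tracks instance-and-port pairs rather than one fixed port, several copies of the task can land on the same instance without colliding, and the load balancer spreads traffic across all of them.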

In conclusion, adopting the Port Binding principle within your Terraform code is essential for seamless scalability and efficient resource utilization in Cloud environments. By dynamically assigning ports to the workloads behind your load balancers, you mitigate the risk of port conflicts and ensure optimal traffic distribution. Dynamic port mapping configured through Terraform empowers you to apply this principle effectively, enabling smoother deployments and enhancing the overall reliability of your infrastructure. Embracing Port Binding not only aligns with the 12 Factor App principles but also fosters agility and resilience within your Cloud architecture. So, let’s bind wisely, scale effortlessly, and keep our applications running smoothly in the Cloud.

Concurrency — Scale out via the process model

Concurrency is the juggling act of your infrastructure circus — it’s all about keeping multiple balls in the air without dropping a single one. But watch out for the clown who thinks they can juggle flaming torches — keep your concurrency safe and your audience entertained!

Scenario 1: The “Racecar Deployment” Conundrum

Imagine you’re deploying updates to your Cloud infrastructure, sprinting towards the finish line like a Formula 1 driver. Everything seems to be going smoothly until a curveball arrives: multiple engineers deploying changes simultaneously. Suddenly, it’s less like a race and more like rush-hour traffic in downtown LA. Requests collide, resources clash, and chaos ensues. You find yourself stuck in a deadlock, waiting for one deployment to finish before another can even start. It’s a racecar deployment, where everyone’s trying to be first across the finish line, but nobody’s getting anywhere fast.

Scenario 2: The “Herding Cats” Debacle

Ever tried to coordinate with a group of DevOps engineers, each with their own agenda and timeline? It’s like herding cats on a caffeine bender. One engineer wants to deploy updates to the database, while another insists on rolling out changes to the networking infrastructure. Meanwhile, you’re stuck in the middle, trying to keep everyone moving in the same direction. But just when you think you’ve got everyone on the same page, someone decides to throw a spanner in the works by deploying their changes without warning. It’s a circus of chaos, where coordination is about as likely as finding a unicorn in your server room.

Scenario 3: Resource Collisions during Parallel Execution

Imagine a scenario where a team of DevOps engineers is managing a large-scale infrastructure on AWS using Terraform. They have multiple modules defining various resources like EC2 instances, RDS databases, and security groups. Each engineer works on different modules simultaneously to speed up development and deployment. However, without careful coordination, this concurrent development can lead to resource collisions. For example, two engineers might inadvertently attempt to create resources with the same name or in the same subnet, causing conflicts and potentially breaking the infrastructure.

To address this, engineers must implement strategies to ensure resource isolation and prevent collisions. One approach is to combine Terraform workspaces with the state locking offered by remote backends, such as the S3 backend (which coordinates locks through a DynamoDB table) or HashiCorp Consul. By utilizing these features, engineers can ensure that only one process modifies a given state at a time, preventing conflicts and maintaining the integrity of the infrastructure.
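As a sketch of what this looks like in practice, the backend block below stores state in S3 and coordinates locks through a DynamoDB table. The bucket, key, table, and region are illustrative placeholders, and both the bucket and the table (with a string partition key named LockID) are assumed to already exist.

```hcl
# Hypothetical names throughout: swap in your own bucket, key, and table.
# The DynamoDB table must use a string partition key called "LockID".

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # pre-existing bucket
    key            = "network/terraform.tfstate" # one state file per stack
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # enables state locking
    encrypt        = true
  }
}
```

With this in place, a second terraform apply against the same state waits until the first releases the lock, and terraform workspace new staging carves out an isolated state file per environment under the same bucket, so engineers working in parallel stop trampling each other’s resources.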

In conclusion, while the allure of concurrent development for accelerating Cloud infrastructure deployments is undeniable, it introduces a critical challenge: concurrency. Without proper handling, simultaneous work on Terraform modules can lead to resource collisions, jeopardizing the stability of the infrastructure. Fortunately, by embracing the 12 Factor App principle of Concurrency, DevOps teams can implement strategies like Terraform workspaces and state locking to ensure resource isolation and prevent conflicts. These measures not only maintain the integrity of the infrastructure but also foster smoother collaboration among engineers. So, remember: when it comes to managing Cloud infrastructure with Terraform, prioritizing concurrency management isn’t just a best practice — it’s a necessity for sustained efficiency and reliability.

Further Reading

Part 1

Part 2

Part 4
