
Complete Terraform Tutorial

Building a cloud infrastructure, design first!

--

At Brainboard, we’re deeply committed to continuous learning and collaboration. Every Friday, we dedicate time to learn from one another. Recently, I seized this opportunity to start from scratch and master the design and deployment of a fictional Azure cloud infrastructure using Brainboard. Alongside the whole team, we’ve decided to share our journey with you.

This is an invitation for anyone aspiring to become a cloud architect: join us and learn terraform by designing.

Key Learning Outcomes:

  1. Understand Infrastructure as Code (IaC) and Terraform Basics
  2. Terraform 2.0
  3. Understanding Terraform Providers
  4. Design-First Approach to Building Cloud Infrastructure
  5. Insights from a newly certified Cloud Architect
  6. Resources: Stay Up to Date with the Latest Cloud Computing Trends

Excited? Let’s go!

How to Use this Terraform Guide

The sections that follow cover the nine distinct areas outlined in the Terraform Review Guide. Engage with the provided documentation, work through the tutorials, and explore the extra resources linked in each section.

The content here represents the key insights gathered for each domain, though this guide does not encompass all there is to know. Your familiarity with the specific domain knowledge will determine the extent to which you need to explore the additional resources linked in each section to fully grasp the subject matter.

1. Understand Terraform’s Chaos

  • Evolution of the Cloud
  • Explain what IaC is
  • Diverse Categories of IaC Tools
  • The provisioning tools landscape
  • Advantages of implementing IaC
  • Terraform 1.0
  • Terraform concepts you need to know
  • Group terraform resources
  • Terraform Count versus for_each
  • Setting up Terraform manually
  • Ways to get started
  • Developer workflows
  • GPTs to help
  • Writing terraform code
  • On Brainboard

Evolution of the Cloud

Let’s start when the cloud started.

  • Pre-cloud: Consider what tech companies building web applications needed to do in the early 90s and 2000s. You would come up with your idea, write the software for your application, and then go off and buy a whole bunch of servers, set up a data center somewhere, and handle all of the power management, networking, and operational overhead that comes with running your own data center. This was a very challenging process.
  • Cloud: This shifted quite a bit in the 2010s. You still have your idea, and you program it on a much more modern personal computer, but rather than provisioning your own servers, you deploy to the cloud. This has become pretty much the de facto standard, though you can obviously still buy your own servers and host them yourself. Welcome to the on-demand era!

The major differences:

  • Infrastructure provisioned via APIs
  • Servers created & destroyed in seconds
  • Long-lived + mutable → short-lived + immutable

There are three main approaches for provisioning cloud resources:

  1. GUI: Cloud console
  2. API or command-line interface (CLI)
  3. Infrastructure as Code, that’s our focus today!

Explain what IaC is:

Infrastructure as Code (IaC) represents a transformative approach to infrastructure management, allowing you to define and manage your entire infrastructure through code. This methodology ensures complete transparency and control over the infrastructure, providing an accurate overview of your environment at any given moment. — Insights from “Terraform: Up & Running: Writing Infrastructure as Code” (O’Reilly)

Diverse Categories of IaC Tools

  • Ad hoc Scripts: These are custom scripts written to automate specific tasks or processes within an infrastructure. They are quick to create but might lack consistency and scalability.
  • Configuration Management Tools: Tools like Ansible, Puppet, and Chef are designed to automate the configuration of software and systems on existing servers. They ensure that machines are configured to an exact specification.
  • Server Templating Tools: Tools such as Packer build machine images (for example, AMIs — Amazon Machine Images) that capture the entire server state, including the OS, installed software, and configurations, allowing for rapid provisioning of identical servers.
  • Orchestration Tools: Kubernetes exemplifies this category, managing the deployment, scaling, and operation of containerized applications across clusters of servers.
  • Provisioning Tools: These tools focus on automating the setup of servers and other infrastructure components from scratch.

The provisioning tools landscape is divided based on their cloud compatibility:

  • Cloud-Specific Tools: These are tied to a particular cloud platform, such as AWS CloudFormation, Azure Resource Manager, or Google Cloud Deployment Manager. They offer deep integration with their respective clouds but do not support multi-cloud environments.
  • Cloud-Agnostic Tools: Terraform and Pulumi stand out in this category, offering the flexibility to manage infrastructure across any cloud provider. This approach enables a consistent workflow and tooling, irrespective of the underlying cloud platform.

The advantages of implementing Infrastructure as Code (IaC) are manifold, offering a transformative approach to infrastructure management and scalability.

  • IaC enhances the capacity to efficiently manage and expand your infrastructure.
  • It underpins DevOps methodologies, facilitating swift, uniform deployment and infrastructure management. By automating provisioning and management processes, IaC ensures routines are consistent and replicable, significantly diminishing manual interventions and minimizing the risk of human error.
  • Treating infrastructure configurations as code allows for version control through platforms like Git. This practice supports change tracking, enables reverting to earlier configurations, and promotes team collaboration, streamlining the development and deployment lifecycle.

Terraform 1.0

That’s the cloud infrastructure you will be able to duplicate from the templates’ catalog on Brainboard!

Terraform is a robust tool designed for creating, modifying, and maintaining infrastructure with precision and efficiency. It incorporates best practices for application software directly into infrastructure management, ensuring a streamlined and reliable process. — HashiCorp

  • Cloud Agnostic: Compatible with many clouds and services (anything with an API)
  • Terraform is often used in conjunction with other DevOps tools to create a comprehensive infrastructure automation strategy. This includes: Ansible for configuration management, Packer for server templating or Kubernetes for orchestration.
  • Architecture Overview: At its core, Terraform operates through Terraform Core, a powerful engine that processes configuration files (including state and configuration details). It intelligently interacts with cloud providers’ APIs to align the actual infrastructure state with the desired state outlined in configurations. This is achieved via providers, such as the AWS Terraform Provider or the Cloudflare Terraform Provider, which serve as intermediaries between Terraform and the specific cloud services.

Terraform concepts you need to know

  • terraform init: Terraform reads the terraform block, downloads the code for the selected providers from the Terraform Registry, and installs it in the working directory (under .terraform). The lock file (.terraform.lock.hcl) records the specific provider versions and dependency checksums installed for the workspace.
  • terraform modules: Essentially, modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory. They are the main way to package and reuse resource configurations with Terraform.

We’ve dedicated a section below on Terraform Modules!

Browse module section of the Terraform registry
  • terraform state file: Basically, it is Terraform’s representation of the world: a JSON file containing information about every resource and data object. It can also contain sensitive info (e.g., database passwords) and can be stored locally or remotely.
  • terraform plan: Takes the terraform configuration (desired state) and compares it with the terraform state (actual state), showing the changes required to reconcile the two.
  • Variables & Outputs: There are different kinds of variables you can use in terraform: input variables (var.<name>), local variables (local.<name>), and output variables, which expose values from a configuration or module so they can be referenced elsewhere.
  • Expressions: Template strings, operators, conditionals, for, splat, dynamic blocks, constraints
  • Functions: Numeric, string, collection, encoding, filesystem, date & time, Hash & crypto, IP network, type conversion.
  • Meta-arguments like depends_on: Terraform automatically generates a dependency graph based on references. If two resources depend on each other (but not on each other’s data), depends_on specifies that dependency explicitly to enforce ordering. For example, if software on an instance needs access to S3, creating the aws_instance would fail if it were attempted before the aws_iam_role_policy exists.
  • Meta-arguments like count: Allows the creation of multiple resources/modules from a single block. It’s useful when the needed resources are nearly identical. The for_each meta-argument allows more control to customize each resource than count does.
  • Provisioner: Allows you to perform actions locally or remotely, e.g., file, local-exec, remote-exec, or vendor provisioners (Chef or Puppet).
  • Terraform Environments: Think development, staging, or production environments. Two main approaches: (1) workspaces, which use multiple named sections within a single backend, and (2) breaking things out into different subdirectories within your file system.
  • Code Rot: Refers to the idea that software systems change over time, and code that is not tested and exercised will degrade (out-of-band changes, unpinned versions, deprecated dependencies, unapplied changes).
  • Static Checks: Scan your code base either (1) built in, with terraform fmt, validate, plan & custom rules, or (2) with external tools like tflint, checkov, tfsec, terrascan, terraform-compliance, or HashiCorp Sentinel.
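The depends_on scenario above can be sketched as follows. This is a minimal illustration, not code from the original: the role name, policy contents, and AMI are placeholder values.

```hcl
resource "aws_iam_role" "instance_role" {
  name = "example-instance-role" # illustrative name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "s3_access" {
  name = "s3-access"
  role = aws_iam_role.instance_role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action   = ["s3:GetObject"]
      Effect   = "Allow"
      Resource = "*"
    }]
  })
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI
  instance_type = "t2.micro"

  # Software on this instance needs the S3 permissions at boot time, but no
  # argument references the policy, so Terraform cannot infer the ordering.
  # depends_on makes the dependency explicit:
  depends_on = [aws_iam_role_policy.s3_access]
}
```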

Grouping


This feature is designed to organize and display the structure of your project files, although it’s not technically creating groups but rather categorizing files for better management. For instance, you might start with all your Terraform configurations in a single file, say main.tf. However, to improve organization, you decide to allocate database-related configurations to a distinct folder. To achieve this, you manually select the relevant configurations, right-click, and choose the option to edit the Terraform file name, specifying that these configurations should be moved to a database.tf file. This action effectively creates what we refer to as a "group," though in reality, it's a method of file segmentation.

Following this process, your project will consist of at least two files: the original main.tf, which contains the bulk of your configurations, and the newly created database.tf for database-specific configurations. When navigating the interface, clicking on different sections allows you to view the contents of either main.tf or database.tf, providing a simplified overview of the project's file structure. This feature is particularly useful for highlighting and understanding the organization of your configurations across different "groups" or files.

Terraform Count versus for_each

In a typical scenario, an architecture configured with Terraform might include a set of Azure resources: a single Azure Function, a Service Plan, a Private Endpoint, and a Storage Account. This setup works well for small-scale applications or projects. However, as requirements evolve, there’s often a need to scale these resources. For instance, a project may require not just one, but three Azure Functions, three Service Plans, three Storage Accounts, and three Private Endpoints to support increased load or to provide high availability.

Replicating the configuration manually for each resource is not only time-consuming but also prone to errors. Moreover, it contradicts the DRY (Don’t Repeat Yourself) principle, a fundamental software development practice aimed at reducing repetition.

  1. Manually duplicating resources: With this method, you can efficiently duplicate all relevant elements in one go, and again if needed. However, this approach rapidly complicates the infrastructure, particularly when scaling up to numerous components, such as ten Azure Functions or even a hundred.
  2. Using count for resource management: This method uses variables to manage and scale cloud resources efficiently. For instance, when the goal is to create multiple storage accounts, you can set the count meta-argument to the desired quantity. Setting count to three directs Terraform to create three storage accounts. Similarly, you can define a variable, say stacks, to track the number of stacks being deployed, initializing it with a default value of three.
  3. Using the terraform for_each meta-argument: Where count is suited for creating a predefined number of near-identical resources distinguished only by an index (e.g., appending 1, 2, 3 to resource names), for_each iterates over a map or set of strings, creating one instance per element. Each instance is keyed by a meaningful identifier rather than a position, so individual resources can be customized, and adding or removing one element does not shift the others.
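The difference between the two approaches can be sketched like this. Resource names, the resource group reference, and account settings are illustrative, and the two blocks are alternatives rather than meant to coexist:

```hcl
# count: three near-identical storage accounts, addressed by index
resource "azurerm_storage_account" "by_count" {
  count                    = 3
  name                     = "examplestack${count.index}" # stack0, stack1, ...
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# for_each: one storage account per set element, keyed by a meaningful name;
# removing "staging" later only destroys that one instance
resource "azurerm_storage_account" "by_key" {
  for_each                 = toset(["dev", "staging", "prod"])
  name                     = "examplestack${each.key}"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```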


Setting up Terraform manually

Diagram showing a basic Terraform workflow

Setting up Terraform manually is a step-by-step process:

  1. Begin by installing Terraform to set up the necessary software on your system.
  2. Authenticate with your cloud provider (e.g., AWS) using IAM roles to ensure Terraform has the necessary permissions for resources such as RDS, EC2, IAM, S3, DynamoDB, and Route53. Tools like the AWS Command Line Interface facilitate this process.
  3. Execute the configuration command to retrieve essential credentials, including the access key ID, secret access key, and default region. This step is pivotal for enabling Terraform to interact with your cloud environment.
  4. Develop your Terraform configuration, which defines your infrastructure requirements. This includes specifying the provider, region, and resources such as EC2 instances.
  5. Validate and deploy your configuration using Terraform’s workflow commands: init to prepare your environment, plan to preview changes, and apply to enact the changes. Afterward, the destroy command can be used to cleanly remove deployed resources.
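Steps 4 and 5 might look like the following minimal configuration; the region, AMI, and instance type are placeholder values, not from the original:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # pin the provider version to avoid code rot
    }
  }
}

provider "aws" {
  region = "us-west-2" # credentials come from the AWS CLI configuration
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI
  instance_type = "t2.micro"
}
```

From the directory containing this file, terraform init, terraform plan, and terraform apply complete the workflow; terraform destroy removes the instance again.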

Ways to get started:

  1. Local backend: The state file lives right alongside your code. Downsides include (1) sensitive values stored in plain text, (2) no collaboration, and (3) manual operation.
  2. Remote backend: Separates the individual developer from the state file, which is stored on a remote server somewhere. Terraform Cloud will host your state files for you and manage permissions. You can also self-manage a remote backend using something like Amazon S3. This solution allows sensitive data to be encrypted, and is collaborative and automation-friendly. The downside is increased complexity.
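As a sketch, pointing Terraform at a Terraform Cloud remote backend takes only a few lines; the organization and workspace names here are hypothetical:

```hcl
terraform {
  cloud {
    organization = "example-org" # hypothetical organization

    workspaces {
      name = "my-infra" # hypothetical workspace holding the state
    }
  }
}
```

After adding this block, terraform init migrates the existing local state to the remote backend.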

Developer workflows:

  1. Write / update code
  2. Run changes locally (for dev environment)
  3. Create pull request
  4. Run tests via CI
  5. Deploy to staging via CD (merge to main)
  6. Deploy to production via CD (release)

Additional tools to consider when working with Terraform:

  • Minimize code repetition with templating systems
  • Enable multi-account separation (improved isolation)
  • Cleanup your resources
  • Preventing human errors
  • CICD tools like GitHub Actions, GitLab, Azure DevOps, etc…

GPTs to help

After conducting extensive evaluations on various GPT models specifically designed for infrastructure development, we have compiled the following refined list:

  1. Azure Architect Guide
  2. InfraAI
  3. Azure terraformer vs. Azure Architect vs. Power Platform Helper
  4. DevOps Genie
  5. IaC your cloud
  6. HashiBot

Writing terraform code

resource "aws_instance" "example" {
  tags          = merge(var.tags, {})
  instance_type = "f1.2xlarge"
  ami           = "ami-0c55b159cbfafe1f0"
}

# This is a single-line comment
resource "aws_instance" "example" {
  ami = "ami-0c55b159cbfafe1f0" # Another comment
}

variable "instance_name" {
  description = "The name of the instance" # String
  default     = 5                          # Number
}

output "instance_ip_addr" {
  value = aws_instance.example.public_ip # Expression
}

variable "subnets" {
  type    = list(string)
  default = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
}

variable "tags" {
  type = map(string)
  default = {
    Environment = "Dev"
    Team        = "Backend"
  }
}

variable "image_id" {
  type    = string
  default = "ami-0c55b159cbfafe1f0"
}

resource "aws_instance" "example" {
  tags = {
    Name = "Instance-${var.instance_name}" # Interpolation
  }
}

resource "aws_instance" "example" {
  count = var.instance_count > 0 ? var.instance_count : 0 # Conditional
}

output "lower_instance_name" {
  value = lower(var.instance_name) # Function usage
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.example.id # Dependency on another resource
}

resource "aws_subnet" "example" {
  # Subnet configuration...
}

On Brainboard

That’s the terraform resource you will be able to configure with Brainboard!

2. Terraform 2.0

  • Learn the basics
  • Terraform code structure
  • Improved ways to start with Terraform
  • Design your producer/consumer model
  • Deploy
  • Account Set-up checklist
  • Terraform Variables
  • Terraform Modules
  • Terraform Secrets
  • Terraform Azure use case
  • Azure Landing Zone Masterclass
  • Terraform Best Practices on Brainboard

Learn the basics

Terraform code structure

That’s the cloud infrastructure you will be able to configure with Brainboard!

HCL is much more than just a configuration language: it is a tool to control anything in your infra. Learning blocks, their types, attributes, data types, conditional statements, functions, and resource dependencies can be daunting. Here is a summary of what you should know:


Improved Ways to Start with Terraform

Each method has its unique advantages and considerations:

  1. Starting From Scratch: If you’re beginning with a specific project specification or concept, this method allows you to build your infrastructure from the ground up. Tailor your architecture precisely to your needs and understand each component thoroughly as you add it. Ideal for unique or custom projects where pre-existing templates don’t fit. Requires a solid understanding of both Terraform and your infrastructure needs. It can be time-consuming but highly educational.
  2. Using Pre-Defined Templates: Leveraging existing Terraform templates can jumpstart your project. These templates are pre-configured pieces of code that provide a basic structure for common infrastructure setups. Great for learning and quick deployment. It helps in understanding how different Terraform components fit together in a practical scenario.
  3. Migrating Existing Infrastructure: This involves transitioning your existing infrastructure into Terraform. It’s about converting current setups (possibly manually managed or managed through other IaC tools) into Terraform code. Provides an opportunity to audit and optimize your current setup. You can identify redundant resources or dependencies and streamline your infrastructure. Requires a careful approach to avoid service disruption. It’s a chance to refactor and improve your infrastructure but demands a thorough understanding of both the existing setup and Terraform.
  4. Using AI for Cloud Architecture and Terraform Implementation: Leveraging Brainboard’s AI assistant presents a new solution to the complexities of cloud architecture design and Terraform code generation. This process begins with the user crafting a precise ChatGPT prompt, enabling Bob to conceptualize the cloud architecture. Following this, Bob generates the necessary Terraform code. This streamlines the transition straight to the cloud configuration and deployment stages of the infrastructure, markedly simplifying the entire workflow.

The choice depends on your project requirements, existing infrastructure, and familiarity with Terraform and infrastructure concepts.

Design your producer/consumer model

Deploy

Account Set-up checklist

When learning about cloud architecture and IaC, there are several universally valuable takeaways:

  1. Cloud Architecture Fundamentals: Understanding how to structure and manage cloud projects, environments, and architectures is a core skill in cloud computing, applicable across various platforms and tools.
  2. CI/CD and Infrastructure-as-Code (IaC): Skills in continuous integration and continuous deployment, along with IaC, are critical in modern cloud environments. These concepts are essential for efficient, scalable, and reliable software development and deployment.
  3. Working with Major Cloud Providers: Knowledge of cloud services provided by major platforms like Azure, AWS, Google Cloud Platform (GCP), and Oracle Cloud Infrastructure (OCI) is valuable. Each provider has unique features and services, but understanding one deeply often provides insights into others.
  4. Version Control and Code Management: Proficiency in version control systems, particularly Git, is essential in software development and DevOps. Understanding how to manage code changes, handle merge requests, and maintain version history is crucial.
  5. Security Practices in Cloud Computing: Learning about securing cloud environments, managing sensitive data, and implementing compliance measures is critical. This includes understanding how to configure and use remote backends for state management, encryption, and access control.
  6. Effective Documentation: The ability to create clear, comprehensive documentation for cloud architectures, code, and processes is invaluable. Good documentation ensures that projects are maintainable, scalable, and understandable by others.
  7. Problem-Solving and Analytical Skills: Developing strong problem-solving and analytical skills is beneficial in any technical role. These skills help in troubleshooting, optimizing performance, and designing efficient systems.
  8. Adaptability and Learning New Tools: The cloud computing field is dynamic, with new tools and technologies emerging regularly. Learning one tool or technology often makes it easier to adapt to others, as many share underlying principles.
  9. Module Management and Customization: Understanding how to effectively use, customize, and manage modules (reusable code blocks) is important for efficient cloud architecture development.
  10. Advanced Cloud Management Techniques: Skills like creating loops in configurations, managing dependencies, and handling multi-subscription environments are advanced techniques that enhance your ability to design and manage complex cloud architectures.

Terraform Variables

Terraform Modules

Creating a Terraform Module is like drafting the blueprint for a component of the building. It starts with understanding what resources need to be grouped and how they interact. You define these in a collection of .tf or .tf.json files in a directory, which includes the resources that the module will manage. By defining input variables, you can customize the module's behavior without altering the underlying code, and output values can expose a subset of the resource's attributes to other parts of your Terraform code.

Using a Terraform Module is comparable to using a prefabricated part on the construction site. You reference the module in your Terraform configuration with the module block, and provide values for the input variables. The module's output can then be used by other parts of your infrastructure, fitting seamlessly into the larger design.
No building is an island, and neither is a Terraform Module. Sharing is facilitated through the Terraform Registry, a central repository where you can publish your modules, making them available to the community. You can also use modules shared by others, leveraging the collective knowledge and experience of the community to avoid reinventing the wheel for common infrastructure patterns.

When starting a new infrastructure project, it’s crucial to think about how to decompose your infrastructure into reusable modules. Begin with a ‘designing-first’ approach, considering what components will be needed across different projects and environments. This foresight is like the architect planning for different spaces within the building — each with a specific purpose and requirement, yet all working together to form a cohesive whole.
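A minimal sketch of defining and consuming a module, assuming a hypothetical ./modules/network directory layout (all names and values are illustrative):

```hcl
# modules/network/variables.tf — input variables customize the module
variable "vnet_name" { type = string }
variable "address_space" { type = list(string) }

# modules/network/main.tf — the resources the module manages
resource "azurerm_virtual_network" "this" {
  name                = var.vnet_name
  address_space       = var.address_space
  location            = "westeurope"  # illustrative
  resource_group_name = "example-rg"  # illustrative
}

# modules/network/outputs.tf — expose attributes to the rest of the code
output "vnet_id" { value = azurerm_virtual_network.this.id }

# Root module: consuming the module like a prefabricated part
module "network" {
  source        = "./modules/network"
  vnet_name     = "app-vnet"
  address_space = ["10.0.0.0/16"]
}
```

Elsewhere in the root configuration, the module's output is available as module.network.vnet_id.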

Terraform Secrets

Terraform Azure use case

This diagram and code snippet demonstrate a Terraform configuration that defines a resource group, a virtual network, and a subnet in Azure:

That’s the cloud infrastructure you will be able to create with Brainboard!

Code Structure and Logic:

  1. Provider Declaration: The provider block configures the specified provider, in this case azurerm, which is for Azure Resource Manager. The features {} block is required but can be empty for the Azure provider.
  2. Resource Group: The azurerm_resource_group resource defines a new resource group named "example-resources". Resource groups in Azure are a way to organize Azure assets.
  3. Virtual Network (VNet): The azurerm_virtual_network resource creates a virtual network. It is given a name, an address space (in CIDR notation), and it references the location and the name of the resource group created earlier. This establishes a network within which you can place further Azure services.
  4. Subnet: The azurerm_subnet resource creates a subnet within the Virtual Network. It specifies the name, associated resource group, the VNet it belongs to, and the address range for the subnet. This subnet is where you could place virtual machines or other resources.
  5. Resource Dependencies: Terraform automatically understands resource dependencies. The subnet depends on the virtual network, which in turn depends on the resource group. Terraform creates and manages these in the correct order.
  6. Variable Usage: This example hardcodes values like names and CIDR blocks. In a dynamic real-world scenario, these would typically be replaced with variables for reusability and flexibility.
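A configuration matching the six points above might look like the following sketch. The resource group name mirrors the "example-resources" mentioned earlier; the location and address ranges are illustrative hardcoded values of the kind point 6 suggests replacing with variables:

```hcl
provider "azurerm" {
  features {} # required for azurerm, but can be empty
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe" # illustrative
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  address_space       = ["10.0.0.0/16"] # illustrative CIDR
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "example" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"] # illustrative range
}
```

The references between blocks are what let Terraform order creation correctly: resource group first, then virtual network, then subnet.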

Azure Landing Zone Masterclass

Terraform Best Practices on Brainboard

  1. Self-service model: Create repeatable patterns to standardize infrastructure components with terraform modules and templates. Architectures can be turned into private or public templates.
  2. Consistency Across Environments: Maintain consistency across different cloud environments by promoting architectures from one environment to another.
  3. Terraform Variables: Use terraform variables & locals to ensure customization and flexibility. Define output variables to display useful information.
  4. Remote Backend: Configure a shared remote storage for the Terraform state file to enable team collaboration and prevent concurrent state file edits.
  5. Edit code if needed: Edit the generated terraform code on Brainboard for better flexibility.
  6. Versioning: Enable versioning to keep a history of changes, allowing you to track infrastructure evolution and revert to previous versions if necessary.
  7. Terraform commands: Execute Terraform commands in parallel and ensure resources are provisioned or destroyed in the correct order.
  8. Respect your Standards: Build cost-efficient Terraform infrastructures, monitor for security vulnerabilities, and implement remediation strategies. Continuously test your infrastructure both before and after deployment to ensure it meets requirements and standards.
  9. Webhooks: Use webhooks to trigger external actions as part of your automation strategy.
  10. Code Visualization: Visualize resources when importing Terraform code and keep your terraform code organized by grouping resources into multiple .tf files.
  11. Documentation: Document your infrastructure work and any changes with Readme files, input descriptions, and tags.

3. Terraform Provider 1.1

  • Introduction of Terraform Provider
  • Terraform Provider Block Example
  • Commonly Used Terraform Providers List
  • Mix & Match Terraform Resources
  • Configure Azure on Brainboard
  • Setting Up AWS Infrastructure Using Terraform
  • extra: Website on Azure Cloud with Brainboard
  • Potential Gotchas

Introduction of Terraform Provider

As previously mentioned, Terraform providers are plugins enabling interaction with APIs. Under the hood, they are written in Go and use the Terraform Plugin SDK. They include cloud providers and software-as-a-service providers. Hundreds of providers are available for Terraform, offering versatility that platform-specific languages lack. Providers are maintained by HashiCorp, dedicated partner teams, or community groups.

Provider configurations should be declared in the root module of your Terraform project. They can be declared in any .tf file. I would recommend either putting them in main.tf, creating a providers.tf file specifically for the providers and nothing else, or alternatively a versions.tf file which would include the required_providers block and specify the Terraform version. Child modules receive their provider configurations from the root module.

Terraform Provider Block Example

Terraform providers are set up using a provider block, tailored for specific configurations like AWS access keys and regions. For AWS, replace Azure-specific configurations with AWS-related settings such as access_key, secret_key, and region. Use alias for managing resources across different AWS accounts or regions. Here’s how you’d configure it for AWS:

provider "aws" {
  region     = "us-west-2"
  access_key = "your_access_key"
  secret_key = "your_secret_key"
}

provider "aws" {
  alias      = "east"
  region     = "us-east-1"
  access_key = "your_access_key"
  secret_key = "your_secret_key"
}

resource "aws_instance" "example" {
  provider = aws.east
  # other configuration
}

Commonly Used Terraform Providers List

Many providers exist, but for now we are focusing on the major ones:

  1. Azure: Microsoft Azure is a comprehensive cloud platform by Microsoft, encompassing a broad array of services such as IaaS, PaaS, and SaaS, catering to diverse needs like computing, storage, and application development. In Terraform, Azure Resource Providers facilitate interaction with Azure’s various services, allowing for efficient management and provisioning of Azure resources through Terraform’s infrastructure as code approach.
  2. AWS: Amazon Web Services is a leading cloud platform by Amazon, offering a wide range of scalable and cost-effective cloud services for application development, deployment, and management. In Terraform, AWS services are managed through specific resource providers, facilitating the creation and control of AWS resources directly from Terraform configurations, bridging the gap between code and AWS’s infrastructure services.
  3. GCP: Google Cloud Platform is Google’s extensive suite of cloud services, providing diverse solutions for computing, storage, databases, machine learning, and data analytics. The Google Cloud Terraform Provider, developed by Google and HashiCorp, enables users to manage GCP infrastructure using Terraform, streamlining the deployment and management of GCP resources.
  4. OCI: Oracle Cloud Infrastructure is Oracle Corporation’s cloud platform offering a broad spectrum of services including IaaS, PaaS, and SaaS. Unlike many other cloud providers, the OCI Terraform Provider is managed directly by Oracle’s open source team, rather than HashiCorp, highlighting its unique integration and support for Oracle Cloud resources.

These four providers, with their cloud and data resources, are available natively on Brainboard.

Mix & Match Terraform Resources

aws vs azure vs gcp

Configure Azure on Brainboard

Below is a more complex Terraform example that provisions an Azure Virtual Network, two subnets, a Network Security Group with rules, and an Azure Virtual Machine. The diagram and Terraform configuration demonstrate a more comprehensive setup within Azure:

azure terraform infrastructure diagram
That’s the cloud infrastructure you will be able to import with Brainboard!
  1. A resource group for organizing related resources.
  2. A virtual network with an address space to host subnets.
  3. Two subnets for separating internal and external traffic.
  4. A network security group (NSG) to define a set of network security rules.
  5. A network security rule that allows inbound SSH traffic.
  6. A public IP to be used by a network interface.
  7. A network interface attached to the internal subnet and associated with the public IP.
  8. A virtual machine configured with this network interface, using an Ubuntu Server image.

In this setup, the security rule allows for SSH access (typically port 22), which is useful for initial configuration and management. The virtual machine’s OS disk is defined to be created from an image, specifying its properties like caching and the type of managed disk.
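The eight components above can be sketched in Terraform roughly as follows. This is a condensed, illustrative sketch (one subnet shown instead of two; resource names, CIDR ranges, VM size, image SKU, and SSH key path are all placeholders), not the exact code Brainboard generates:

```hcl
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "West Europe"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_network_security_group" "example" {
  name                = "example-nsg"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  # Inbound SSH rule (port 22) for initial configuration and management.
  security_rule {
    name                       = "allow-ssh"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_public_ip" "example" {
  name                = "example-pip"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  allocation_method   = "Static"
}

resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.example.id
  }
}

resource "azurerm_linux_virtual_machine" "example" {
  name                  = "example-vm"
  resource_group_name   = azurerm_resource_group.example.name
  location              = azurerm_resource_group.example.location
  size                  = "Standard_B1s"
  admin_username        = "adminuser"
  network_interface_ids = [azurerm_network_interface.example.id]

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  # OS disk created from the image, with caching and managed-disk type set.
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```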

Setting Up AWS Infrastructure Using Terraform

Start by exploring the GitHub repository containing Terraform configurations for AWS infrastructure:

  • Backend + Provider Configuration: Configure Terraform backend using an S3 bucket for state management, enabling collaboration and state locking. Set up the AWS provider in Terraform by specifying your desired region and authentication method (e.g., IAM roles, access keys).
  • EC2 Instances: Define aws_instance resources for your virtual servers, selecting appropriate AMI, instance type, and configuring SSH access with a key pair. Attach security groups to manage network access to instances securely.
  • S3 Bucket: Create an S3 bucket with aws_s3_bucket for object storage, configuring access policies and region settings as needed.
  • VPC Configuration: Establish a Virtual Private Cloud (aws_vpc) with a defined CIDR block for resource isolation and network management. Within the VPC, create subnets (aws_subnet) specifying their CIDR blocks and availability zones for regional distribution.
  • Security Groups + Rules: Utilize aws_security_group resources to define inbound and outbound rules, ensuring secure access to and from your AWS resources.
  • Application Load Balancer (ALB): Deploy an ALB (aws_lb) to distribute incoming traffic across multiple targets, specifying necessary subnets and security groups. Set up an ALB target group (aws_lb_target_group) and attach EC2 instances (aws_lb_target_group_attachment) to evenly distribute traffic.
  • Route 53 Zone + Record: Configure a Route 53 hosted zone (aws_route53_zone) for DNS management and domain name services. Create DNS records (aws_route53_record) to link your domain to AWS resources like your ALB, facilitating easy access.
  • RDS Instance: Deploy an RDS instance (aws_db_instance) selecting a database engine and configuring instance size and storage, ensuring data persistence and scalability.
aws terraform infrastructure diagram
That’s the cloud infrastructure you will be able to create with Brainboard!
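The backend, VPC, security group, and EC2 pieces from the checklist above can be sketched like this. Bucket name, AMI ID, CIDR blocks, and region are illustrative placeholders, and the ALB, Route 53, and RDS resources are omitted for brevity:

```hcl
terraform {
  # S3 backend for shared state; bucket name is illustrative.
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "infra/terraform.tfstate"
    region = "us-west-2"
  }
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-west-2a"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  # Allow inbound HTTP; allow all outbound traffic.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web.id]
}
```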

For a detailed walkthrough on manually configuring AWS architecture, including practical examples and best practices, consider watching instructional videos like the one linked: Configure AWS Infrastructure.

Extra: Website on Azure Cloud with Brainboard

As you design, Brainboard automatically updates the Terraform code, reflecting your infrastructure’s current state.

In today’s tutorial, we will embark on constructing a sleek yet robust cloud architecture for hosting a website on Azure, using Brainboard.

  1. Azure Website Infrastructure Overview
  2. Infrastructure as Code with Terraform
  3. CI/CD Workflows and Deployment
  4. Monitoring and Drift Detection
  5. Deployment and Validation
  6. Public Template Available for FREE
I’ve meticulously designed this architecture from the ground up and am excited to share the template with everyone publicly! For those interested in leveraging this design, please visit the templates section to access it.

Potential Gotchas

Here are some potential gotchas with Terraform that can lead to a bad day:

  • Name changes when refactoring
  • Sensitive data in terraform state files
  • Cloud timeouts
  • Naming conflicts
  • Forgetting to destroy test infrastructure
  • Uni-directional version upgrades
  • Multiple ways to accomplish the same configuration
  • Some parameters are immutable (changing them forces resource replacement)
  • Out-of-band changes
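Some of these gotchas have first-class mitigations. For example, "name changes when refactoring" can be handled with a moved block (available since Terraform 1.1) so that a rename is treated as a state move rather than a destroy-and-recreate. A minimal sketch, with illustrative names:

```hcl
# Previously this resource was named aws_instance.old_name.
resource "aws_instance" "new_name" {
  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type = "t3.micro"
}

# Tells Terraform the resource was renamed, not replaced, so the
# existing instance is kept and only its state address changes.
moved {
  from = aws_instance.old_name
  to   = aws_instance.new_name
}
```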

4. Design to Terraform Code

  • Let’s start with Brainboard
  • Tutorial
  • Design to Code methodologies
  • Shift left methodologies
  • Git as source of truth?
  • Resources

Let’s start with Brainboard

  1. A tool to analyze each step of the cloud infrastructure design process.
  2. Same tooling for each step of the process.
  3. A design tool for cloud infrastructure that doesn’t drift from the actual implementation.
  4. A way to track both high-level and low-level designs.
  5. A tool or method to translate designs into code, preferably Terraform.
  6. Standardizations and best practices for writing infrastructure code.
  7. A review process for the code before it’s pushed to the repository.
  8. Testing tools for infrastructure as code, both locally and remotely.
  9. A tool or method to maintain and update local testing plugins.
  10. A pre-commit tool that works seamlessly and doesn’t require extensive maintenance.
  11. A clear review process that considers both the design and the code.
  12. A reliable deployment method post-approval.
  13. A holistic approach to building infrastructure that includes people working on it.

Tutorial

Join Chafik in this enlightening journey on Brainboard, where you’ll learn the ropes of crafting your initial cloud infrastructure setup, designed for effortless scaling down the line. This step-by-step tutorial is your golden ticket to mastering infrastructure as code, specifically tailored for users of Azure with Terraform.

Here’s what you’ll uncover:

  • Function app in Azure with a private endpoint
  • Terraform Resources available with Azure provider
  • Location & variables setup
  • Resource group
  • Linux function app + service plan & storage account
  • Resources Configuration
  • Terraform code autogenerated
  • Embedded documentation for every idcard field
  • Validate whether my Terraform code is valid
  • Terraform init & Terraform Plan
  • Everything is good! Let’s continue adding the private endpoint!
  • Don’t forget the subnet before!
  • Brainboard automatically detected the relationship between vnet and subnet
  • Add the private endpoint; Brainboard detected the subnet ID, the resource group name, and the location, which is a variable.
  • Leverage the CI/CD to create a workflow that allows me to check security, costs, naming conventions, and policies before I deploy.
  • Version my infrastructure
  • Pull request in your preferred repository
That’s the cloud infrastructure you will be able to create with Brainboard!
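The core resources from this tutorial can be sketched as follows. All names are illustrative, the subnet used by the endpoint is assumed to be defined elsewhere, and an Elastic Premium plan is chosen because Consumption plans don't support inbound private endpoints:

```hcl
resource "azurerm_resource_group" "func" {
  name     = "func-rg" # illustrative names throughout
  location = "West Europe"
}

resource "azurerm_storage_account" "func" {
  name                     = "funcstorageacct01"
  resource_group_name      = azurerm_resource_group.func.name
  location                 = azurerm_resource_group.func.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_service_plan" "func" {
  name                = "func-plan"
  resource_group_name = azurerm_resource_group.func.name
  location            = azurerm_resource_group.func.location
  os_type             = "Linux"
  sku_name            = "EP1" # Elastic Premium
}

resource "azurerm_linux_function_app" "func" {
  name                       = "example-func-app"
  resource_group_name        = azurerm_resource_group.func.name
  location                   = azurerm_resource_group.func.location
  service_plan_id            = azurerm_service_plan.func.id
  storage_account_name       = azurerm_storage_account.func.name
  storage_account_access_key = azurerm_storage_account.func.primary_access_key

  site_config {}
}

# Private endpoint placed in a subnet assumed to exist elsewhere
# in the configuration (e.g. azurerm_subnet.endpoints).
resource "azurerm_private_endpoint" "func" {
  name                = "func-pe"
  location            = azurerm_resource_group.func.location
  resource_group_name = azurerm_resource_group.func.name
  subnet_id           = azurerm_subnet.endpoints.id

  private_service_connection {
    name                           = "func-pe-conn"
    private_connection_resource_id = azurerm_linux_function_app.func.id
    subresource_names              = ["sites"]
    is_manual_connection           = false
  }
}
```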

Design to Code methodologies

That’s the cloud infrastructure you will be able to design & deploy with Brainboard!

The “Design First, Code When Needed” methodology revolutionizes software development by prioritizing seamless integration between design and code. It counters the traditional, compartmentalized workflow that often results in a disconnect between these two critical phases, leading to inefficiencies and a disjointed final product. This approach advocates for keeping the design and coding phases in sync, ensuring updates to the code are reflected in the design and vice versa. By doing so, it addresses a common flaw in software development: the failure to maintain updated documentation that matches the actual codebase. Emphasizing the importance of matching design needs with code changes, this methodology facilitates a more cohesive and effective development process.

The introduction of tools like drag & drop designers and Continuous Integration/Continuous Deployment (CI/CD) pipelines further streamlines this process, reducing the feedback loop and enhancing efficiency. While the cost of implementation may vary, the long-term benefits of improved efficiency and a more robust final product highlight the value of adopting this forward-thinking approach.

Shift left methodologies

cloud infrastructure management

The “Shift Left” methodology in cloud infrastructure management emphasizes an early and continuous focus on quality and efficiency throughout the design, coding, testing, and deployment phases. This approach encourages teams to integrate crucial practices such as planning, testing, and review early in the development cycle, thereby identifying and addressing potential issues well before deployment.

By starting with a detailed design phase, teams can visualize and understand the cloud infrastructure’s complexities upfront, reducing the risk of “design drift” — where initial designs become outdated due to rapid changes in cloud infrastructure, leading to discrepancies that are hard to reconcile later.

Transitioning from design to code, the methodology leverages tools like Terraform to translate intricate designs into executable code, emphasizing the importance of code review to ensure infrastructure robustness and reliability. Testing, an essential step in this approach, is conducted to verify security, cost-effectiveness, and functionality, employing various plugins and tools for a thorough assessment.

The “Shift Left” methodology culminates in a review and approval process, ensuring the final cloud infrastructure aligns with the initial design and operates effectively in real-world scenarios. This proactive approach to cloud infrastructure management not only improves the efficiency and reliability of the infrastructure but also significantly reduces the likelihood of issues arising post-deployment, facilitating a smoother implementation process. Through understanding and applying these steps, teams can master cloud infrastructure management, ensuring successful deployments that meet or exceed project requirements and expectations.

Git as source of truth?

Customers typically know whether they require Git integration; in practice we see three main patterns.

  1. A significant majority, approximately 80%, prefer not to deploy directly with Brainboard. Despite planning and designing with Brainboard, these users adhere to strict security protocols and pipeline processes that necessitate the use of Git. This integration allows them to maintain their existing workflows, including code reviews and automated deployments, by pushing their final Brainboard configurations into Git.
  2. A smaller segment of our user base leverages Git as a method for backup. While they conduct their entire planning and deployment process within Brainboard, they opt to synchronize their work with Git. This ensures that, in the event of Brainboard becoming unavailable or experiencing issues, they retain access to their Terraform code externally.
  3. There are users who bypass Git altogether, preferring to apply their configurations directly from Brainboard. This group does not engage in pull requests or external code storage, relying solely on Brainboard for their deployment needs.

5. Insights from a newly certified Cloud Architect

Shifting trends

By 2031, the DevOps market is projected to hit $27.6 billion, with a staggering growth rate of 24.59% from 2024 to 2031. Key drivers and trends include:

  • Kubernetes: Continues as a cornerstone for container orchestration
  • AIOps: Employing AI to automate and optimize IT operations, costs, and drift detection.
  • MLOps: Automating and streamlining the deployment and maintenance of ML models.
  • DevSecOps: Integrating security into development pipelines enhances safety and efficiency.
  • Multi + Hybrid: Managing applications across multiple cloud providers and on-premises environments.
  • GitOps: Using Git as the source of truth.
  • Platform Engineering: Building internal developer platforms that standardize tooling and enable self-service.
  • Serverless: Reduces operational overhead, costs, and complexity, focusing on event-triggered code execution.
  • Low/No-Code Platforms: Enable non-technical users to participate in DevOps processes.
  • Industry-specific: Cloud services tailored to specific industries, such as AWS Healthcare and IBM Cloud for Financial Services.

Is the future AI-driven infrastructure?

Good question. Since generative AI tools went mainstream in early 2023, the landscape of my workforce has undergone a profound transformation. As a leader, my focus has shifted significantly. No longer am I entrenched in the details of writing Terraform code or immersing myself in the nitty-gritty of infrastructure tasks.

Instead, AI has empowered me to elevate my attention to strategic matters, steering my team with a broader vision and deeper insight. This technological shift has not only streamlined our operations but has also reshaped the way we approach our work, allowing us to be more innovative and forward-thinking.

Our vision with Brainboard AI is to facilitate the initial stages of your infrastructure projects, or to provide clarity in complex Terraform configurations. However, it’s important to note that while our AI can assist in these areas, it isn’t designed to autonomously generate valid Terraform code or produce perfectly detailed diagrams on its own.

We stand by the philosophy that having a starting point, even if it’s not complete, is far more advantageous than having nothing at all.

We’re thrilled to share that our AI feature is officially out for beta testers! This is the way to get access.

If you liked this tutorial, don’t forget to 👏 .

If you liked the tool I used, check Brainboard.

If you think Brainboard is a match for your organization, contact us.

--

Mike Tyson of the Cloud (MToC)

Written by Mike Tyson of the Cloud (MToC)

As a growth architect in the cloud (AKA Brainboard), I build scalable solutions to drive business growth and improve efficiency while learning to code.
