Provisioning AWS Infrastructure Using Terraform and Amazon Linux Based EKS Optimized Golden AMI Built by Packer

Paul Zhao
Published in Paul Zhao Projects
71 min read · Feb 10, 2024

This project provides a Terraform template to provision EKS and its resources using an Amazon Linux based EKS optimized Golden AMI built by Packer. It can be tweaked for other purposes.

Github Repos for this project:

Repo: Amazon Linux Based EKS Optimized Golden AMI Built by Packer

Repo: Provisioning-AWS-Infrastructure-using-Terraform-Packer-Kubernetes-Ansible

Objectives:

  • Infrastructure Provisioning Automation: Implement automation for provisioning AWS infrastructure using Terraform to define and manage cloud resources efficiently.
  • Integration of Packer and Terraform: Integrate Packer-built Amazon Linux-based EKS Optimized Golden AMI with Terraform to ensure standardized and efficient deployment of worker nodes within Amazon EKS clusters.
  • Customized Worker Node Configuration: Utilize Terraform to leverage the launch template in Amazon EKS node groups, allowing for the customization of worker nodes’ configuration via userdata, facilitating the implementation of specific configurations or software installations (e.g., Ansible).
  • Enhanced Scalability and Efficiency: Utilize Amazon EKS to automatically manage the scaling and deployment of containerized applications, leveraging the optimized Golden AMI to ensure consistent performance and reliability across worker nodes.
  • Security and Compliance: Implement best practices for security and compliance by ensuring that infrastructure deployments adhere to AWS security standards and policies, leveraging Terraform’s infrastructure as code approach to enforce security controls and configurations consistently.
  • Documentation and Knowledge Transfer: Develop comprehensive documentation detailing the setup, configuration, and usage of the Terraform scripts, Packer-built Golden AMI, and Amazon EKS clusters to facilitate knowledge transfer and enable smooth maintenance and future enhancements.
  • Testing and Validation: Establish testing procedures to validate the correctness and reliability of the infrastructure provisioning process, including testing for infrastructure scalability, fault tolerance, and compatibility with containerized applications.
  • Continuous Integration and Deployment (CI/CD): Integrate the infrastructure provisioning process into CI/CD pipelines to automate the deployment and management of AWS infrastructure changes, ensuring rapid iteration and deployment cycles while maintaining reliability and consistency.

Tools:

Packer

Packer is an open-source tool developed by HashiCorp that automates the creation of identical machine images or artifacts for multiple platforms from a single source configuration. These machine images can be in various formats, such as VirtualBox, VMware, AWS, Azure, Docker containers, and others. Packer allows developers and system administrators to define machine configurations as code, using configuration files written in JSON or HCL (HashiCorp Configuration Language). It then uses this configuration to automatically create machine images, ensuring consistency and reproducibility across different environments. Packer is commonly used in conjunction with other HashiCorp tools like Vagrant, Terraform, and Consul to streamline the development and deployment processes in infrastructure as code workflows.
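
For a feel of what such a configuration looks like, here is a minimal, hypothetical amazon-ebs template in Packer's JSON format (all values are illustrative placeholders, not this project's actual template):

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0cf10cdf9fcd62d37",
      "instance_type": "t3.micro",
      "ssh_username": "ec2-user",
      "ami_name": "my-golden-ami-{{timestamp}}"
    }
  ]
}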

Terraform

Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. It enables users to define and provision infrastructure resources declaratively using a high-level configuration language. With Terraform, you can describe the desired state of your infrastructure in configuration files, specifying the resources and their configurations such as servers, networks, storage, and more.

Key features of Terraform include:

  1. Declarative Configuration: Users define the desired state of their infrastructure in configuration files using a domain-specific language (DSL). Terraform then works to reconcile the current state of the infrastructure with the desired state declared in the configuration.
  2. Resource Graph: Terraform builds a dependency graph of all resources defined in the configuration files, enabling it to determine the order in which resources should be provisioned or updated.
  3. Execution Plans: Before making any changes to the infrastructure, Terraform generates an execution plan showing what actions it will take, such as creating new resources, updating existing ones, or destroying resources.
  4. State Management: Terraform keeps track of the state of the infrastructure it manages, storing this information in a state file. This state file allows Terraform to understand the current state of the infrastructure and make changes accordingly.
  5. Provider Ecosystem: Terraform supports a wide range of cloud providers, infrastructure technologies, and services through provider plugins. These plugins allow Terraform to interact with various APIs to provision and manage resources.
  6. Immutable Infrastructure: Terraform encourages the use of immutable infrastructure patterns, where infrastructure changes are made by creating new resources rather than modifying existing ones. This approach enhances reliability and makes it easier to roll back changes if necessary.

Overall, Terraform simplifies the process of managing infrastructure by treating it as code, enabling automation, versioning, and collaboration in infrastructure provisioning and management workflows.
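
In practice, these features surface through a short command workflow:

terraform init      # download providers and modules
terraform plan      # show the execution plan before any change
terraform apply     # reconcile real infrastructure with the configuration
terraform destroy   # tear everything down when finished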

AWS EKS

AWS EKS stands for Amazon Elastic Kubernetes Service. It is a managed Kubernetes service provided by Amazon Web Services (AWS) that simplifies the deployment, management, and scaling of containerized applications using Kubernetes on AWS infrastructure.

Key features of AWS EKS include:

  1. Managed Kubernetes Control Plane: AWS EKS manages the Kubernetes control plane for you, including the etcd cluster, API server, scheduler, and other components. This relieves users from the operational overhead of managing these components themselves.
  2. Integration with AWS Services: AWS EKS integrates with other AWS services such as Elastic Load Balancing (ELB), Identity and Access Management (IAM), CloudTrail, CloudWatch, and more, providing a seamless experience for deploying and managing Kubernetes applications on AWS.
  3. Security and Compliance: AWS EKS helps users implement security best practices by providing features such as encryption at rest and in transit, IAM integration for fine-grained access control, and support for Kubernetes network policies.
  4. High Availability and Scalability: AWS EKS is designed for high availability and scalability. It runs Kubernetes control plane instances across multiple Availability Zones (AZs) for redundancy and automatically scales to accommodate the workload demands of your applications.
  5. Compatibility with Standard Kubernetes Tools: AWS EKS is compatible with standard Kubernetes tools and APIs, allowing users to leverage existing Kubernetes skills, workflows, and ecosystem tools.
  6. Hybrid and Multi-Cloud Deployments: AWS EKS supports hybrid and multi-cloud deployments, allowing users to run Kubernetes clusters across AWS and on-premises environments, or even across different cloud providers.

Overall, AWS EKS simplifies the process of running Kubernetes clusters on AWS infrastructure, enabling users to focus on building and deploying containerized applications without worrying about the underlying infrastructure management complexities.

Prerequisites:

FYI: This project provides Ubuntu instructions, but Windows and macOS work as well

  1. AWS CLI installation
  2. Configure AWS credentials so Terraform can communicate with AWS
  3. Packer installation

Step by step instructions:

FYI: All instructions assume an Ubuntu server. If you are on a different OS, please adjust accordingly. For instance, on a CentOS server, use yum install instead of apt install

Install AWS CLI:

Official guidance: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

You can install the AWS CLI on Ubuntu with the following commands:

sudo apt install awscli

Check the AWS CLI version

aws --version

Configure AWS credentials so Terraform can communicate with AWS

To configure the AWS CLI with the default profile

aws configure

Only the two values below need to be provided

AWS Access Key ID []:
AWS Secret Access Key []:
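
Once configured, you can confirm the credentials work (sts get-caller-identity is a standard AWS CLI call that echoes back the account and user the CLI is authenticating as):

aws sts get-caller-identity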

Where should you obtain them?

Let us jump into the AWS console, under IAM

Make sure you have a user with admin access (if admin-level access is not allowed, you must be granted all the permissions needed to interact with the resources created in AWS)

FYI: You may need to test along the way if you don’t have admin-level access, since programmatic access is governed by the policies attached to the user

Install Packer:

To build the Amazon Linux based EKS optimized Golden AMI, we need Packer installed.

Install Packer on an Ubuntu server

Official Guidance: https://developer.hashicorp.com/packer/tutorials/docker-get-started/get-started-install-cli

You can install Packer on Ubuntu with the following commands:

Add the HashiCorp GPG key

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -

Add the official HashiCorp Linux repository

sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"

Update and install

sudo apt-get update && sudo apt-get install packer

To verify Packer installation

packer --version

Let us now kick off our project!

Git clone the AMI build repo

This assumes you have git preinstalled. If not, please install it using the guidance here

git clone https://github.com/lightninglife/golden-ami-amazon-eks-optimized-from-aws-official-repo.git

FYI: Please clone my repo rather than the official repo, since I tweaked a few files to serve our needs (I will explain them in detail)

Git clone the Terraform code from the repo

git clone https://github.com/lightninglife/Provisioning-AWS-Infrastructure-using-Terraform-Packer-Kubernetes-Ansible.git

We now build our AMI

FYI: Let me explain the key file in this folder, which holds the variables we need to adjust

eks-worker-al2-variables.json

{
"additional_yum_repos": "",
"ami_component_description": "(k8s: {{ user `kubernetes_version` }}, docker: {{ user `docker_version` }}, containerd: {{ user `containerd_version` }})",
"ami_description": "EKS Kubernetes Worker AMI with AmazonLinux2 image",
"ami_regions": "us-east-1",
"ami_users": "",
"associate_public_ip_address": "",
"aws_access_key_id": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_region": "us-east-1",
"aws_secret_access_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"aws_session_token": "{{env `AWS_SESSION_TOKEN`}}",
"binary_bucket_name": "amazon-eks",
"binary_bucket_region": "us-west-2",
"cache_container_images": "false",
"cni_plugin_version": "v1.2.0",
"containerd_version": "1.7.*",
"creator": "{{env `USER`}}",
"docker_version": "20.10.*",
"enable_fips": "false",
"encrypted": "false",
"kernel_version": "",
"kms_key_id": "",
"launch_block_device_mappings_volume_size": "8",
"pause_container_version": "3.5",
"pull_cni_from_github": "true",
"remote_folder": "/tmp",
"runc_version": "1.1.*",
"security_group_id": "",
"source_ami_filter_name": "amzn2-ami-kernel-5.10-hvm-2.0.20240131.0-x86_64-gp2",
"source_ami_id": "ami-0cf10cdf9fcd62d37",
"source_ami_owners": "137112412989",
"ssh_interface": "",
"ssh_username": "ec2-user",
"ssm_agent_version": "",
"subnet_id": "",
"temporary_security_group_source_cidrs": "",
"volume_type": "gp2",
"working_dir": "{{user `remote_folder`}}/worker"
}

The file above contains all the variables we may tweak.

FYI: Keep this in mind: never change the value below unless you are connecting from China

"binary_bucket_region": "us-west-2"

Do not assume you need to adjust it to match your deployment region; this is the region where AWS hosts the EKS binaries, not the region you apply your infrastructure to

To select the base AMI used for this Golden AMI, set:

"source_ami_filter_name": "amzn2-ami-kernel-5.10-hvm-2.0.20240131.0-x86_64-gp2",
"source_ami_id": "ami-0cf10cdf9fcd62d37",
"source_ami_owners": "137112412989",

It is self-explanatory that the AMI name, AMI ID, and AMI owner need to be supplied.
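
If you prefer the CLI, you can look these values up with describe-images (the filter value mirrors the variable above):

aws ec2 describe-images \
  --owners 137112412989 \
  --region us-east-1 \
  --filters "Name=name,Values=amzn2-ami-kernel-5.10-hvm-2.0.20240131.0-x86_64-gp2" \
  --query 'Images[].{Name:Name,ImageId:ImageId,OwnerId:OwnerId}'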

FYI: Keep in mind this AMI has to be Amazon Linux 2. From the AWS console, we can pinpoint it as shown below

Click Launch instances

As highlighted below, Amazon Linux 2 is selected and the AMI ID is shown. Make sure you double-check the region (AMI IDs are region specific; the same AMI has a different ID in each region)

Search the AMI ID as shown below

Click the Community AMIs tab to find all the info needed

AMI name, AMI ID, and owner of the AMI

FYI: We’re using a community AMI since it’s open to the public without a subscription

Adjust the variable below to whatever region you would like to deploy your resources in:

"ami_regions": "us-east-1"

There are other variables you may update as needed, but the variables above require your attention.

FYI: If you would like to install more tools on the worker nodes, you may add them in this file (I don’t recommend this, since it will mix with your other builds down the road; it’s better to keep this build as a known Amazon official EKS optimized Amazon Linux 2 image)

install-worker.sh

We can now run the command below inside the amazon-eks-ami folder

make k8s=1.29

FYI: To build an AMI with a specific Kubernetes version, adjust k8s=1.29. This is crucial, as an older Kubernetes version may lead to errors
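
Under the hood, the Makefile drives Packer; roughly, it boils down to something like this (a simplified sketch, assuming the template file names from the upstream amazon-eks-ami repo):

# Approximation of what `make k8s=1.29` ends up running
packer build \
  -var-file=eks-worker-al2-variables.json \
  -var kubernetes_version=1.29 \
  eks-worker-al2.json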

We can now watch the build instance in the region where you plan to apply your resources (e.g. us-east-1)

FYI: You may hit errors while building; fix them based on the error output in the terminal

At the final build stage, you may observe that the instance is stopped

From the terminal, you can see the AMI become ready

From ami in AWS console

Now the build is completed

From terminal

From AWS console

Now that the Golden AMI is built, we’re all set for the Terraform deployment

I will go over all the Terraform files below, with troubleshooting notes, before deploying

Folder tree

├── ansible_userdata.tpl
├── assume_role_policy.json
├── data.tf
├── main.tf
├── modules
│ ├── alb
│ │ ├── alb.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── asg
│ │ ├── asg.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── eks
│ │ ├── bastion.tf
│ │ ├── eks.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── iam
│ │ ├── iam.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── outputs.tf
│ ├── rds
│ │ ├── rds.tf
│ │ └── variables.tf
│ ├── sg
│ │ ├── outputs.tf
│ │ ├── sg.tf
│ │ └── variables.tf
│ └── vpc
│ ├── outputs.tf
│ ├── variables.tf
│ └── vpc.tf
├── outputs.tf
├── providers.tf
├── s3.tf
├── terraform.tfvars
├── variables.tf
├── versions.tf
├── web-ec2.pem

Modules

FYI: Most subfolders contain three files (the module’s own .tf file, such as alb.tf, plus outputs.tf and variables.tf); the rds module has no outputs.tf

alb — subfolder

alb.tf

# Create alb for Web Servers
resource "aws_lb" "web" {
name = var.web_alb_name
internal = var.web_alb_internal
load_balancer_type = var.load_balancer_type_web
security_groups = [var.security_group]
subnets = var.subnets
}

resource "aws_lb_target_group" "web_tg" {
name = var.web_tg_name # "web-tg"
port = var.port_80 # 80
protocol = var.protocol_web # "HTTP"
vpc_id = var.vpc_id
}

resource "aws_lb_listener" "web_listener" {
load_balancer_arn = aws_lb.web.arn
port = var.port_80 # "80"
protocol = var.protocol_web # "HTTP"

default_action {
type = var.web_listener_type # "forward"
target_group_arn = aws_lb_target_group.web_tg.arn
}
}

The script above creates an Application Load Balancer for the web servers, with a target group and a listener.

FYI: This is only a template for a basic ALB. Normally there should be listeners for both port 80 and port 443. If a custom URL is needed, an ACM certificate must be requested, approved, and attached to the ALB. Listener rules are also subject to updates based on specific needs, as shown in the sketch below.
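
For reference, a minimal sketch of such a port 443 listener might look like this (assuming a hypothetical var.acm_certificate_arn for a certificate already issued in ACM):

resource "aws_lb_listener" "web_https" {
  load_balancer_arn = aws_lb.web.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = var.acm_certificate_arn # hypothetical variable

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_tg.arn
  }
}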

outputs.tf — alb (empty file)

variables.tf — alb

# ALB Listener for Jenkins variables
variable "protocol_web" {
description = "Protocol for Jenkins ALB Listener"
type = string
}

# ALB for Web Servers variables
variable "web_alb_name" {
description = "Name of the example web ALB"
type = string
}

variable "web_alb_internal" {
description = "Whether the example web ALB is internal"
type = bool
}

variable "load_balancer_type_web" {
description = "Type of the load balancer for example web ALB"
type = string
}

# ALB Target Group for Web Servers variables
variable "web_tg_name" {
description = "Name of the example web Target Group"
type = string
}

# ALB Listener for Web Servers variables
variable "web_listener_type" {
description = "Type of action for example web ALB Listener"
type = string
}

variable "port_8080" {
description = "Port for HTTP access for Jenkins (e.g., 8080)"
type = number
}

variable "port_80" {
description = "Port for HTTP traffic (e.g., 80)"
type = number
}

variable "subnets" {
description = "Subnets"
type = list(string)
}

variable "security_group" {
description = "security_group"
type = string
}

variable "vpc_id" {
description = "vpc id"
type = string
}

Variables needed in alb.tf file

asg — subfolder

asg.tf

resource "aws_launch_template" "web" {
name_prefix = var.aws_launch_template_web_name_prefix # "web-lt"
image_id = var.aws_launch_template_web_image_id # "ami-0c55b159cbfafe1f0" # Replace with a valid AMI ID
instance_type = var.aws_launch_template_web_instance_type # "t2.micro"

user_data = filebase64(var.aws_launch_template_web_user_data) # file("${path.module}/../web_userdata.sh")

block_device_mappings {
device_name = var.aws_launch_template_web_block_device_mappings_device_name # "/dev/sda1"
ebs {
volume_size = var.aws_launch_template_web_block_device_mappings_volume_size # 20
}
}

key_name = var.key_pair_name

network_interfaces {
security_groups = var.aws_launch_template_web_network_interfaces_security_groups
associate_public_ip_address = true
}


lifecycle {
create_before_destroy = true # var.aws_launch_template_web_create_before_destroy # true
}
}


resource "aws_autoscaling_group" "web" {
desired_capacity = var.aws_autoscaling_group_web_desired_capacity # 2
max_size = var.aws_autoscaling_group_web_max_size # 5
min_size = var.aws_autoscaling_group_web_min_size # 1
vpc_zone_identifier = var.aws_autoscaling_group_web_vpc_zone_identifier
launch_template {
id = aws_launch_template.web.id
version = var.aws_autoscaling_group_web_launch_template_version # "$Latest"
}

tag {
key = var.aws_autoscaling_group_web_tag_key # "Name"
value = var.aws_autoscaling_group_web_tag_value # "web-asg-instance"
propagate_at_launch = var.aws_autoscaling_group_web_tag_propagate_at_launch # true
}
}


# ansible

resource "aws_launch_template" "ansible" {
name_prefix = var.aws_launch_template_ansible_name_prefix # "web-lt"
image_id = var.aws_launch_template_ansible_image_id # "ami-0c55b159cbfafe1f0" # Replace with a valid AMI ID
instance_type = var.aws_launch_template_ansible_instance_type # "t2.micro"

user_data = base64encode(var.aws_launch_template_ansible_user_data) # file("${path.module}/../web_userdata.sh")



vpc_security_group_ids = var.aws_launch_template_ansible_vpc_security_group_ids

block_device_mappings {
device_name = var.aws_launch_template_ansible_block_device_mappings_device_name # "/dev/sda1"
ebs {
volume_size = var.aws_launch_template_ansible_block_device_mappings_volume_size # 20
}
}

key_name = var.key_pair_name

lifecycle {
create_before_destroy = true # var.aws_launch_template_web_create_before_destroy # true
}
}

resource "aws_autoscaling_group" "ansible" {
desired_capacity = var.aws_autoscaling_group_ansible_desired_capacity # 2
max_size = var.aws_autoscaling_group_ansible_max_size # 5
min_size = var.aws_autoscaling_group_ansible_min_size # 1
vpc_zone_identifier = var.aws_autoscaling_group_ansible_vpc_zone_identifier
launch_template {
id = aws_launch_template.ansible.id
version = var.aws_autoscaling_group_ansible_launch_template_version # "$Latest"
}

tag {
key = var.aws_autoscaling_group_ansible_tag_key # "Name"
value = var.aws_autoscaling_group_ansible_tag_value # "web-asg-instance"
propagate_at_launch = var.aws_autoscaling_group_ansible_tag_propagate_at_launch # true
}
}

The script above creates two launch templates (one for web servers, one for Kubernetes worker nodes) and two auto scaling groups (one for web servers, one for Kubernetes worker nodes)

aws_launch_template_ansible_user_data

From the script above, you can see that we use a variable named aws_launch_template_ansible_user_data for the userdata, whose value is provided in the terraform.tfvars file.

FYI: An alternative is to use the data "template_file" data source in Terraform. However, our userdata involves a secret access key that I did not want to pass through template_file, so I chose not to use this method. It might still be useful for you in certain circumstances; please refer to the reference pages below for userdata variables

Issue with solutions: https://stackoverflow.com/questions/50835636/accessing-terraform-variables-within-user-data-provider-template-file



You can do this using a template_file data source:

data "template_file" "init" {
template = "${file("router-init.sh.tpl")}"

vars = {
some_address = "${aws_instance.some.private_ip}"
}
}

Then reference it inside the template like:

#!/bin/bash

echo "SOME_ADDRESS = ${some_address}" > /tmp/

Then use that for the user_data:

user_data = "${data.template_file.init.rendered}"

Official terraform page: https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/file

Terraform EC2 userdata and variables: https://faun.pub/terraform-ec2-userdata-and-variables-a25b3859118a

Note that Terraform doesn’t recommend this option either:

Although in principle template_file can be used with an inline template string, we don't recommend this approach because it requires awkward escaping. Instead, just use template syntax directly in the configuration.
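
The modern replacement Terraform recommends is the built-in templatefile() function, which renders a template inline without the data source (a sketch assuming a cluster_name placeholder inside this project's ansible_userdata.tpl):

user_data = base64encode(templatefile("${path.module}/../ansible_userdata.tpl", {
  cluster_name = var.eks_cluster_ansible_name # hypothetical template variable
}))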

The script below creates an auto scaling group with an Ansible-related name.

FYI: I create this ASG with the Ansible launch template to illustrate that the EKS node group will create an EKS-managed ASG on our behalf

resource "aws_autoscaling_group" "ansible" {
desired_capacity = var.aws_autoscaling_group_ansible_desired_capacity # 2
max_size = var.aws_autoscaling_group_ansible_max_size # 5
min_size = var.aws_autoscaling_group_ansible_min_size # 1
vpc_zone_identifier = var.aws_autoscaling_group_ansible_vpc_zone_identifier
launch_template {
id = aws_launch_template.ansible.id
version = var.aws_autoscaling_group_ansible_launch_template_version # "$Latest"
}

tag {
key = var.aws_autoscaling_group_ansible_tag_key # "Name"
value = var.aws_autoscaling_group_ansible_tag_value # "web-asg-instance"
propagate_at_launch = var.aws_autoscaling_group_ansible_tag_propagate_at_launch # true
}
}

outputs.tf — asg

output "launch_template_id" {
value = aws_launch_template.web.id
}

output "launch_template_id_ansible" {
value = aws_launch_template.ansible.id
}

The script above outputs the launch template IDs we are about to create for both web and Ansible
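
For context, the root main.tf can consume these outputs through the module path, along these lines (a sketch; the module block names are assumptions):

module "asg" {
  source = "./modules/asg"
  # ... variables omitted ...
}

module "eks" {
  source = "./modules/eks"
  # feed the Ansible launch template into the EKS node group
  aws_eks_node_group_launch_template_name_prefix_ansible = module.asg.launch_template_id_ansible
  # ... other variables omitted ...
}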

variables.tf — asg

# web
variable "aws_launch_template_web_name_prefix" {
description = "Name prefix for the AWS launch template"
type = string
}

variable "aws_launch_template_web_image_id" {
description = "AMI ID for the AWS launch template"
type = string
}

variable "aws_launch_template_web_instance_type" {
description = "Instance type for the AWS launch template"
type = string
}

variable "aws_launch_template_web_block_device_mappings_device_name" {
description = "Device name for block device mappings in the AWS launch template"
type = string
}

variable "aws_launch_template_web_block_device_mappings_volume_size" {
description = "Volume size for block device mappings in the AWS launch template"
type = number
}

variable "aws_launch_template_web_create_before_destroy" {
description = "Lifecycle setting for create_before_destroy in the AWS launch template"
type = bool
}

variable "aws_autoscaling_group_web_desired_capacity" {
description = "Desired capacity for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_web_max_size" {
description = "Maximum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_web_min_size" {
description = "Minimum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_web_launch_template_version" {
description = "Launch template version for the AWS Auto Scaling Group"
type = string
}

variable "aws_autoscaling_group_web_tag_key" {
description = "Tag key for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_web_tag_value" {
description = "Tag value for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_web_tag_propagate_at_launch" {
description = "Tag propagation setting for the AWS Auto Scaling Group instances"
type = bool
}

variable "aws_launch_template_web_user_data" {
description = "Userdata file"
type = string
}

variable "aws_autoscaling_group_web_vpc_zone_identifier" {
description = "subnet id"
type = list(string)
}

variable "key_pair_name" {
description = "Name of the AWS Key Pair to associate with EC2 instances"
type = string
# Set a default value if needed
}


variable "aws_launch_template_web_network_interfaces_security_groups" {
description = "List of security group IDs to associate with network interfaces in the launch template"
type = list(string)
# You can set default security groups here if needed
}

# ansible

variable "aws_launch_template_ansible_vpc_security_group_ids" {
description = "List of security group IDs for the AWS Launch Template used in Ansible EKS setup"
type = list(string)
# You can provide a default value if needed:
# default = ["sg-xxxxxxxxxxxxxxx", "sg-yyyyyyyyyyyyyyy"]
}

variable "aws_launch_template_ansible_name_prefix" {
description = "Name prefix for the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_image_id" {
description = "AMI ID for the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_instance_type" {
description = "Instance type for the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_block_device_mappings_device_name" {
description = "Device name for block device mappings in the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_block_device_mappings_volume_size" {
description = "Volume size for block device mappings in the AWS launch template"
type = number
}

variable "aws_launch_template_ansible_create_before_destroy" {
description = "Lifecycle setting for create_before_destroy in the AWS launch template"
type = bool
}

variable "aws_autoscaling_group_ansible_desired_capacity" {
description = "Desired capacity for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_ansible_max_size" {
description = "Maximum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_ansible_min_size" {
description = "Minimum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_ansible_launch_template_version" {
description = "Launch template version for the AWS Auto Scaling Group"
type = string
}

variable "aws_autoscaling_group_ansible_tag_key" {
description = "Tag key for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_ansible_tag_value" {
description = "Tag value for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_ansible_tag_propagate_at_launch" {
description = "Tag propagation setting for the AWS Auto Scaling Group instances"
type = bool
}

variable "aws_launch_template_ansible_user_data" {
description = "Userdata file"
type = string
}

variable "aws_autoscaling_group_ansible_vpc_zone_identifier" {
description = "subnet id"
type = list(string)
}

variable "aws_launch_template_ansible_network_interfaces_security_groups" {
description = "List of security group IDs to associate with network interfaces in the launch template"
type = list(string)
# You can set default security groups here if needed
}

variable "eks_cluster_ansible_name" {
description = "Name of the Ansible EKS cluster"
type = string
}

variable "aws_eks_node_group_instance_types" {
description = "Instance types for the EKS node group"
type = string
}

variable "aws_eks_cluster_ansible_version" {
description = "The version of Ansible to use with AWS EKS cluster"
type = string
# You can set your desired default value here
}

Variables needed in asg.tf file

eks — subfolder

bastion.tf

resource "aws_instance" "eks_cluster_ansible_bastion_host" {
ami = var.aws_instance_eks_cluster_ansible_bastion_host_ami # "ami-12345678" # Specify an appropriate AMI for your region
instance_type = var.aws_instance_eks_cluster_ansible_bastion_host_instance_type # "t2.micro"
key_name = var.key_pair_name
subnet_id = var.aws_instance_eks_cluster_ansible_bastion_host_subnet_id # "subnet-12345678" # Specify the ID of your public subnet

security_groups = var.aws_instance_eks_cluster_ansible_bastion_host_security_groups # aws_security_group.bastion_host_sg.id

tags = {
Name = var.aws_instance_eks_cluster_ansible_bastion_host_tags # "bastion-host"
}

provisioner "file" {
source = var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_source # "/path/to/your/key.pem"
destination = var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_destination # "/home/ec2-user/key.pem" # Adjust the destination path as needed

connection {
type = "ssh"
user = "ec2-user"
private_key = file(var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_source)
host = self.public_ip
}

}
}

resource "null_resource" "force_provisioner" {
triggers = {
always_run = timestamp()
}

depends_on = [aws_instance.eks_cluster_ansible_bastion_host]
}


resource "null_resource" "trigger_remote_exec" {
depends_on = [aws_instance.eks_cluster_ansible_bastion_host]

triggers = {
always_run = timestamp()
}

provisioner "remote-exec" {
inline = var.aws_instance_eks_cluster_ansible_bastion_host_remote_exec_inline # "chmod 400 /home/ec2-user/web-ec2.pem"


connection {
type = "ssh"
user = "ec2-user"
private_key = file(var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_source) # Specify the path to your private key
host = aws_instance.eks_cluster_ansible_bastion_host.public_ip
}
}
}

resource "aws_eip" "eks_cluster_ansible_bastion_eip" {
instance = aws_instance.eks_cluster_ansible_bastion_host.id
}

# Define other resources such as route tables, security groups for EKS worker nodes, etc.

The script above creates a bastion host for the EKS worker nodes, with the private key added to the server. To force the provisioners to re-run on every apply, we use:

resource "null_resource" "force_provisioner" {
triggers = {
always_run = timestamp()
}

depends_on = [aws_instance.eks_cluster_ansible_bastion_host]
}

FYI: The script above was tweaked to make sure the provisioners are re-run as expected when the Terraform pipeline is updated

resource "null_resource" "trigger_remote_exec" {
depends_on = [aws_instance.eks_cluster_ansible_bastion_host]

triggers = {
always_run = timestamp()
}

provisioner "remote-exec" {
inline = var.aws_instance_eks_cluster_ansible_bastion_host_remote_exec_inline # "chmod 400 /home/ec2-user/web-ec2.pem"


connection {
type = "ssh"
user = "ec2-user"
private_key = file(var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_source) # Specify the path to your private key
host = aws_instance.eks_cluster_ansible_bastion_host.public_ip
}
}
}

Also, the "file" provisioner in the instance block connects to the server using the local private key:

 provisioner "file" {
source = var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_source # "/path/to/your/key.pem"
destination = var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_destination # "/home/ec2-user/key.pem" # Adjust the destination path as needed

connection {
type = "ssh"
user = "ec2-user"
private_key = file(var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_source)
host = self.public_ip
}

}

Lastly, we create an EIP to attach to this bastion for public access

resource "aws_eip" "eks_cluster_ansible_bastion_eip" {
instance = aws_instance.eks_cluster_ansible_bastion_host.id
}
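
Once applied, reaching a private worker node goes through the bastion, roughly like this (a sketch; the addresses are placeholders and the key path mirrors the provisioner destination above):

# from your workstation to the bastion, using the EIP created above
ssh -i web-ec2.pem ec2-user@<bastion-eip>

# from the bastion to a worker node, using the key the file provisioner copied up
ssh -i /home/ec2-user/web-ec2.pem ec2-user@<worker-private-ip>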

eks.tf — eks

# eks ansible
resource "aws_eks_cluster" "ansible" {
name = var.eks_cluster_ansible_name # "ansible-cluster"
role_arn = var.aws_eks_cluster_ansible_role_arn

vpc_config {
subnet_ids = var.subnets # Replace with your subnet IDs
security_group_ids = [var.aws_eks_cluster_ansible_security_group_ids]
}

version = var.aws_eks_cluster_ansible_version
}

resource "aws_eks_node_group" "ansible" {
cluster_name = aws_eks_cluster.ansible.name
node_group_name = var.aws_eks_node_group_ansible_name # "ansible-node-group"
node_role_arn = var.aws_eks_node_group_ansible_role_arn
subnet_ids = var.subnets # Replace with your subnet IDs
# instance_types = [var.aws_eks_node_group_instance_types] # ["t2.micro"]
scaling_config {
desired_size = var.aws_eks_node_group_desired_capacity # 2
min_size = var.aws_eks_node_group_min_size # 1
max_size = var.aws_eks_node_group_max_size # 3
}
launch_template {
id = var.aws_eks_node_group_launch_template_name_prefix_ansible # "id"
version = var.aws_eks_node_group_launch_template_version # "$Latest"
}

# depends_on is a resource-level meta-argument, so it sits outside the launch_template block
depends_on = [
var.eks_worker_node_policy_attachment_ansible,
var.eks_cni_policy_attachment_ansible,
var.eks_ec2_container_registry_readonly_attachment_ansible,
]
}

resource "aws_eks_addon" "ansible" {
cluster_name = aws_eks_cluster.ansible.name
addon_name = var.aws_eks_addon_ansible_addon_name # "vpc-cni"
addon_version = var.aws_eks_addon_ansible_addon_version # "v1.16.2-eksbuild.1" # e.g., previous version v1.9.3-eksbuild.3, new version v1.10.1-eksbuild.1
}

To create the EKS cluster for Ansible, we use the script below:

resource "aws_eks_cluster" "ansible" {
name = var.eks_cluster_ansible_name # "ansible-cluster"
role_arn = var.aws_eks_cluster_ansible_role_arn

vpc_config {
subnet_ids = var.subnets # Replace with your subnet IDs
security_group_ids = [var.aws_eks_cluster_ansible_security_group_ids]
}

version = var.aws_eks_cluster_ansible_version
}

FYI: Please keep the following in mind; otherwise, you will experience networking issues when trying to join worker nodes to the EKS cluster

security_group_ids = [var.aws_eks_cluster_ansible_security_group_ids]

The security group above must include an ingress rule that allows access from the VPC on port 443.

Also, make sure the version of the Kubernetes server matches the Kubernetes client we installed when building the Golden AMI. If you recall, that version was 1.29.

If the versions don’t match, expect errors when using kubectl.

version = var.aws_eks_cluster_ansible_version
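
Once the cluster is up, you can point kubectl at it and confirm the server version matches the 1.29 client baked into the AMI (update-kubeconfig is a standard AWS CLI subcommand; the cluster name placeholder is whatever you set eks_cluster_ansible_name to):

aws eks update-kubeconfig --region us-east-1 --name <your-ansible-cluster-name>
kubectl version    # client and server should both report 1.29
kubectl get nodes  # worker nodes should show Ready once joined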

outputs.tf — eks

output "eks_cluster_ansible_endpoint" {
value = aws_eks_cluster.ansible.endpoint
}

output "eks_cluster_ansible_certificate_authority" {
value = aws_eks_cluster.ansible.certificate_authority
}

output "eks_cluster_ansible_name" {
value = aws_eks_cluster.ansible.name
}


output "eks_nodegroup_ansible_name" {
value = aws_eks_node_group.ansible.id
}

data "aws_eks_cluster" "eks_cluster_ansible" {
name = aws_eks_cluster.ansible.name # Replace with your EKS cluster name

depends_on = [aws_eks_cluster.ansible]
}

output "eks_cluster_security_group_ids" {
value = data.aws_eks_cluster.eks_cluster_ansible.vpc_config[0].security_group_ids
}

Here we output EKS cluster and node group related values for use in our main.tf file

variables.tf — eks

variable "eks_cluster_ansible_name" {
description = "Name of the Ansible EKS cluster"
type = string
}

variable "aws_eks_node_group_ansible_name" {
description = "Name of the Ansible EKS node group"
type = string
}

variable "aws_eks_node_group_instance_types" {
description = "Instance types for the EKS node group"
type = string
}

variable "aws_eks_node_group_desired_capacity" {
description = "Desired capacity for the EKS node group"
type = number
}

variable "aws_eks_node_group_min_size" {
description = "Minimum size for the EKS node group"
type = number
}

variable "aws_eks_node_group_max_size" {
description = "Maximum size for the EKS node group"
type = number
}

variable "aws_eks_node_group_launch_template_name_prefix" {
description = "Name prefix for the EKS node group launch template"
type = string
}

variable "aws_eks_node_group_launch_template_version" {
description = "Version for the EKS node group launch template"
type = string
}

variable "aws_eks_node_group_device_name" {
description = "Device name for the EKS node group block device mappings"
type = string
}

variable "aws_eks_node_group_volume_size" {
description = "Volume size for the EKS node group block device mappings"
type = number
}

variable "subnets" {
description = "subnets"
type = list(string)
}

variable "aws_eks_cluster_ansible_role_arn" {
description = "EKS Cluster for Ansible's role arn"
type = string
}

variable "aws_eks_node_group_ansible_role_arn" {
description = "EKS node group for Ansible's role arn"
type = string
}

variable "aws_eks_cluster_ansible_version" {
description = "The version of Ansible to use with AWS EKS cluster"
type = string
# You can set your desired default value here
}

variable "ec2_ssh_key" {
description = "Name of the EC2 SSH key pair"
type = string
# You can set a default value if needed
# default = "example-key-pair-name"
}

variable "eks_worker_node_policy_attachment_ansible" {
description = "IAM policy attachment name for worker nodes in Ansible EKS setup"
type = string
}

variable "eks_cni_policy_attachment_ansible" {
description = "IAM policy attachment name for CNI (Container Network Interface) in Ansible EKS setup"
type = string
}

variable "eks_ec2_container_registry_readonly_attachment_ansible" {
description = "IAM policy attachment name for read-only access to the EC2 container registry in Ansible EKS setup"
type = string
}

variable "aws_eks_node_group_launch_template_name_prefix_ansible" {
description = "Prefix for the name of the AWS EKS Node Group launch template in Ansible setup"
type = string
# You can provide a default prefix if needed
}

variable "aws_eks_addon_ansible_addon_name" {
description = "Name of the AWS EKS addon for Ansible"
type = string
}

variable "aws_eks_addon_ansible_addon_version" {
description = "Version of the AWS EKS addon for Ansible"
type = string
}

variable "aws_eks_cluster_ansible_security_group_ids" {
description = "Security group IDs for the EKS cluster used by Ansible"
type = string
}

# bastion
variable "aws_instance_eks_cluster_ansible_bastion_host_ami" {
description = "The AMI ID for the bastion host"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_instance_type" {
description = "The instance type for the bastion host"
type = string
}

variable "key_pair_name" {
description = "The name of the AWS key pair used to access the bastion host"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_subnet_id" {
description = "The ID of the subnet where the bastion host will be launched"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_security_groups" {
description = "The ID of the security group(s) for the bastion host"
type = list(string)
}

variable "aws_instance_eks_cluster_ansible_bastion_host_tags" {
description = "Tags for the bastion host instance"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_provisioner_source" {
description = "Source path of the file to be provisioned to the bastion host"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_provisioner_destination" {
description = "Destination path on the bastion host where the file will be copied"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_remote_exec_inline" {
description = "Inline script to be executed on the bastion host using remote-exec provisioner"
type = list(string)
}

variables for eks.tf and bastion.tf files

iam — subfolder

iam.tf

# IAM for cluster
resource "aws_iam_role" "eks_cluster_ansible" {
name = var.aws_iam_role_eks_cluster_ansible_name # "ansible-cluster-role"

assume_role_policy = var.aws_iam_role_eks_cluster_assume_role_policy_ansible # file("${path.module}/assume_role_policy.json")
}


# Associate IAM Policy to IAM Role for cluster
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_ansible.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.eks_cluster_ansible.name
}


# IAM for node group

resource "aws_iam_role" "eks_nodegroup_role_ansible" {
name = var.aws_iam_role_eks_nodegroup_role_ansible_name

assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = ["ec2.amazonaws.com", "eks.amazonaws.com"]
}
}]
Version = "2012-10-17"
})
}

resource "aws_iam_role_policy" "eks_nodegroup_role_ansible_policy" {
name = "eks-nodegroup-role-ansible-describe"
role = aws_iam_role.eks_nodegroup_role_ansible.name

policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Action = [
"eks:DescribeCluster",
"eks:AccessKubernetesApi"
],
Effect = "Allow",
Resource = "*" # You can specify the ARN of your EKS cluster if needed.
}
]
})
}

resource "aws_iam_policy_attachment" "eks_worker_node_policy" {
name = "eks-worker-node-policy-attachment" # Unique name
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
roles = [aws_iam_role.eks_nodegroup_role_ansible.id]
}


resource "aws_iam_policy_attachment" "eks_cni_policy_attachment" {
name = "eamazoneks_cni-policy"
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
roles = [aws_iam_role.eks_nodegroup_role_ansible.id]
}

resource "aws_iam_policy_attachment" "eks_ec2_container_registry_readonly" {
name = "eks_worker_nodes_policy"
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
roles = [aws_iam_role.eks_nodegroup_role_ansible.id]
}

The script below creates the IAM role for the EKS cluster along with its assume-role (trust) policy.

FYI: Keep in mind, when we talk about policy here, it is the trust relationship you see on the role rather than the policies under the Permissions tab

# IAM for cluster
resource "aws_iam_role" "eks_cluster_ansible" {
name = var.aws_iam_role_eks_cluster_ansible_name # "ansible-cluster-role"

assume_role_policy = var.aws_iam_role_eks_cluster_assume_role_policy_ansible # file("${path.module}/assume_role_policy.json")
}

Here is the JSON file we use for this trust policy

{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": ["eks.amazonaws.com","ec2.amazonaws.com" ]
}
}
]
}

EKS and EC2 are allowed to assume the role for the cluster.

Here we use a file to supply the trust policy.
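
In the root module, that looks roughly like this (a sketch based on the commented hint in iam.tf; the module block name is an assumption):

module "iam" {
  source = "./modules/iam"
  aws_iam_role_eks_cluster_assume_role_policy_ansible = file("${path.module}/assume_role_policy.json")
  # ... other variables omitted ...
}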

The script below attaches the two policies needed by the EKS cluster.

FYI: These policies appear under the Permissions tab

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_ansible.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.eks_cluster_ansible.name
}

The script below creates the IAM role for the EKS node group and sets its trust policy inline:

resource "aws_iam_role" "eks_nodegroup_role_ansible" {
name = var.aws_iam_role_eks_nodegroup_role_ansible_name

assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = ["ec2.amazonaws.com", "eks.amazonaws.com"]
}
}]
Version = "2012-10-17"
})
}

As you can see, the trust policy here is defined inline rather than through a variable.

The choice is yours, but I’d prefer using files referenced from main.tf’s folder, as that allows us to better manage everything with variables.

The script below creates an inline policy and attaches it to the node group IAM role.

resource "aws_iam_role_policy" "eks_nodegroup_role_ansible_policy" {
name = "eks-nodegroup-role-ansible-describe"
role = aws_iam_role.eks_nodegroup_role_ansible.name

policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Action = [
"eks:DescribeCluster",
"eks:AccessKubernetesApi"
],
Effect = "Allow",
Resource = "*" # You can specify the ARN of your EKS cluster if needed.
}
]
})
}

The script below attaches the three required policies to the node group IAM role.

resource "aws_iam_policy_attachment" "eks_worker_node_policy" {
name = "eks-worker-node-policy-attachment" # Unique name
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
roles = [aws_iam_role.eks_nodegroup_role_ansible.id]
}


resource "aws_iam_policy_attachment" "eks_cni_policy_attachment" {
name = "eamazoneks_cni-policy"
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
roles = [aws_iam_role.eks_nodegroup_role_ansible.id]
}

resource "aws_iam_policy_attachment" "eks_ec2_container_registry_readonly" {
name = "eks_worker_nodes_policy"
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
roles = [aws_iam_role.eks_nodegroup_role_ansible.id]
}
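
One caveat worth knowing: aws_iam_policy_attachment manages the exclusive attachment of a policy across the whole account, so if any other role, user, or group uses the same managed policy, it can be detached unexpectedly. The per-role form is safer (same pattern as the cluster policies earlier):

resource "aws_iam_role_policy_attachment" "eks_worker_node_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodegroup_role_ansible.name
}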

outputs.tf — iam

output "eks_ansible_cluster_iam_role_arn" {
value = aws_iam_role.eks_cluster_ansible.arn
}

output "eks_ansible_nodegroup_iam_role_arn" {
value = aws_iam_role.eks_nodegroup_role_ansible.arn
}

output "eks_worker_node_policy_attachment_ansible" {
value = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

output "eks_cni_policy_attachment_ansible" {
value = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

output "eks_ec2_container_registry_readonly_attachment_ansible" {
value = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

Output needed from IAM to be used in main.tf file

variables.tf — iam

#iam
variable "aws_iam_role_eks_cluster_ansible_name" {
description = "Iam role name for esk cluster"
type = string
}

variable "aws_iam_role_eks_cluster_assume_role_policy_ansible" {
description = "file of the policy ansible"
type = string
}

variable "aws_iam_role_eks_nodegroup_role_ansible_name" {
description = "Name of the IAM role associated with EKS nodegroups for Ansible"
type = string
# You can set a default value if needed
# default = "example-role-name"
}

variables for iam.tf file

rds — subfolder

rds.tf

resource "aws_db_instance" "rds" {
allocated_storage = var.db_allocated_storage
storage_type = var.db_storage_type
engine = var.db_engine
engine_version = var.db_engine_version
instance_class = var.db_instance_class
name = var.db_name
username = var.db_username
password = var.db_password
parameter_group_name = aws_db_parameter_group.default.name
skip_final_snapshot = var.skip_final_snapshot
vpc_security_group_ids = [var.security_group]
db_subnet_group_name = aws_db_subnet_group.rds.name

}

resource "aws_db_subnet_group" "rds" {
name = var.db_subnet_group_name
subnet_ids = var.subnets
# family = var.db_parameter_group_family
}

resource "aws_db_parameter_group" "default" {
name = var.db_parameter_group_name
family = var.db_parameter_group_family

parameter {
name = var.db_parameter_server_name
value = var.character_set_server
}

parameter {
name = var.db_parameter_client_name
value = var.character_set_client
}
}

The script above creates an RDS MySQL instance, along with a subnet group and a parameter group.

FYI: Make sure you provide the security group IDs; otherwise, RDS will be created in the default VPC rather than your intended VPC
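
Since db_password flows in through terraform.tfvars, consider also marking the variable as sensitive so Terraform redacts it from plan and apply output (supported since Terraform 0.14):

variable "db_password" {
  description = "Password for the RDS database"
  type        = string
  sensitive   = true
}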

variables.tf — rds

# RDS Database variables
variable "db_allocated_storage" {
description = "Allocated storage for the RDS database"
type = number
}

variable "db_storage_type" {
description = "Storage type for the RDS database"
type = string
}

variable "db_engine" {
description = "Database engine for the RDS instance"
type = string
}

variable "db_engine_version" {
description = "Database engine version for the RDS instance"
type = string
}

variable "db_instance_class" {
description = "Instance class for the RDS database"
type = string
}

variable "db_name" {
description = "Name of the RDS database"
type = string
}

variable "db_username" {
description = "Username for the RDS database"
type = string
}

variable "db_password" {
description = "Password for the RDS database"
type = string
}

variable "db_parameter_group_name" {
description = "Parameter group name for the RDS instance"
type = string
}

variable "skip_final_snapshot" {
description = "Skip final snapshot when deleting the RDS instance"
type = bool
}

variable "db_subnet_group_name" {
description = "Name of the DB subnet group"
type = string
}

variable "subnets" {
description = "subnets"
type = list(string)
}

variable "security_group" {
description = "security_group"
type = string
}

variable "db_parameter_group_family" {
description = "The family of the DB parameter group."
type = string
# Set your desired default family here
}

variable "db_parameter_server_name" {
description = "Name for the 'character_set_server' parameter"
type = string
}

variable "db_parameter_client_name" {
description = "Name for the 'character_set_client' parameter"
type = string
}

variable "character_set_server" {
description = "Value for the 'character_set_server' parameter"
type = string
}

variable "character_set_client" {
description = "Value for the 'character_set_client' parameter"
type = string
}

variables needed for rds.tf file

security group (sg) — subfolder

sg.tf

# Define a security group for HTTP/HTTPS access
resource "aws_security_group" "all" {
name = var.security_group_name
description = var.security_group_description
vpc_id = var.vpc_id

# Allow incoming HTTP (port 80) traffic
ingress {
from_port = var.port_80
to_port = var.port_80
protocol = var.security_group_protocol
cidr_blocks = [var.web_cidr]
}

# Allow incoming HTTPS (port 443) traffic
ingress {
from_port = var.port_443
to_port = var.port_443
protocol = var.security_group_protocol
cidr_blocks = [var.web_cidr]
}

# Allow SSH access for Ansible (port 22)
ingress {
from_port = var.port_22
to_port = var.port_22
protocol = var.security_group_protocol
cidr_blocks = [var.private_ip_address]
self = true
}

# Allow HTTP access for Jenkins (port 8080)
ingress {
from_port = var.port_8080
to_port = var.port_8080
protocol = var.security_group_protocol
cidr_blocks = [var.private_ip_address]
}

# Allow MySQL access for RDS (port 3306)
ingress {
from_port = var.port_3306
to_port = var.port_3306
protocol = var.security_group_protocol
cidr_blocks = [var.private_ip_address]
}

# Allow all outbound traffic
egress {
from_port = 0
to_port = 0
protocol = "-1" # All protocols
cidr_blocks = ["0.0.0.0/0"] # Allow traffic to all destinations
}
}


# Define a security group for eks clusters
# Allow incoming HTTPS (port 443) traffic
resource "aws_security_group" "eks_cluster" {
name = var.security_group_name_eks_cluster
description = var.security_group_description_eks_cluster
vpc_id = var.vpc_id

ingress {
from_port = var.port_443
to_port = var.port_443
protocol = var.security_group_protocol
cidr_blocks = [var.vpc_cidr_block]
}
# Allow all outbound traffic
egress {
from_port = 0
to_port = 0
protocol = "-1" # All protocols
cidr_blocks = ["0.0.0.0/0"] # Allow traffic to all destinations
}
}

# worker node to the bastion

resource "aws_security_group_rule" "allow_ssh_from_bastion" {
type = "ingress"
from_port = var.port_22
to_port = var.port_22
protocol = var.security_group_protocol
security_group_id = aws_security_group.all.id
source_security_group_id = aws_security_group.all.id
}

FYI: self = true guarantees port 22 is always open to the security group itself, which allows the bastion to access the worker nodes even when an update is run on the Terraform pipeline

The script below creates the security group needed for overall access

# Define a security group for HTTP/HTTPS access
resource "aws_security_group" "all" {
name = var.security_group_name
description = var.security_group_description
vpc_id = var.vpc_id

# Allow incoming HTTP (port 80) traffic
ingress {
from_port = var.port_80
to_port = var.port_80
protocol = var.security_group_protocol
cidr_blocks = [var.web_cidr]
}

# Allow incoming HTTPS (port 443) traffic
ingress {
from_port = var.port_443
to_port = var.port_443
protocol = var.security_group_protocol
cidr_blocks = [var.web_cidr]
}

# Allow SSH access for Ansible (port 22)
ingress {
from_port = var.port_22
to_port = var.port_22
protocol = var.security_group_protocol
cidr_blocks = [var.private_ip_address]
}

# Allow HTTP access for Jenkins (port 8080)
ingress {
from_port = var.port_8080
to_port = var.port_8080
protocol = var.security_group_protocol
cidr_blocks = [var.private_ip_address]
}

# Allow MySQL access for RDS (port 3306)
ingress {
from_port = var.port_3306
to_port = var.port_3306
protocol = var.security_group_protocol
cidr_blocks = [var.private_ip_address]
}

# Allow all outbound traffic
egress {
from_port = 0
to_port = 0
protocol = "-1" # All protocols
cidr_blocks = ["0.0.0.0/0"] # Allow traffic to all destinations
}
}

Basically, we open ports 80 and 443 to all, and ports 22, 3306, and 8080 only to my local IP address

port 80 for web service

port 443 for application service

port 22 for ssh service

port 3306 for mysql service

port 8080 for jenkins service

FYI: Don’t forget egress!!!

When using the AWS console, you may not be aware of this part, as a default egress rule is predefined.

But you may run into access issues if it is not set in Terraform

# Allow all outbound traffic
egress {
from_port = 0
to_port = 0
protocol = "-1" # All protocols
cidr_blocks = ["0.0.0.0/0"] # Allow traffic to all destinations
}

The script above allows outbound traffic to all destinations.

Here is the meat: the EKS cluster security group must allow access from the VPC on port 443. I state it one more time here because it took too much of my time to figure out, and it is not clearly documented on the AWS official site.

resource "aws_security_group" "eks_cluster" { 
name = var.security_group_name_eks_cluster
description = var.security_group_description_eks_cluster
vpc_id = var.vpc_id

ingress {
from_port = var.port_443
to_port = var.port_443
protocol = var.security_group_protocol
cidr_blocks = [var.vpc_cidr_block]
}
# Allow all outbound traffic
egress {
from_port = 0
to_port = 0
protocol = "-1" # All protocols
cidr_blocks = ["0.0.0.0/0"] # Allow traffic to all destinations
}
}

The script below creates the SSH ingress rule in the security group so the bastion server can reach the EKS worker nodes

resource "aws_security_group_rule" "allow_ssh_from_bastion" {
type = "ingress"
from_port = var.port_22
to_port = var.port_22
protocol = var.security_group_protocol
security_group_id = aws_security_group.all.id
source_security_group_id = aws_security_group.all.id
}

FYI: It allows port 22, but only from the bastion’s security group. The trick is here:

security_group_id and source_security_group_id both need to refer to the security group created previously

security_group_id = aws_security_group.all.id
source_security_group_id = aws_security_group.all.id

outputs.tf — sg

output "security_group_id" {
value = aws_security_group.all.id
}

output "security_group_ids" {
value = [aws_security_group.all.id]
}

output "security_group_id_eks_cluster" {
value = aws_security_group.eks_cluster.id
}

We output the values above so they can be referenced in the main.tf file

variables.tf — sg

variable "security_group_name" {
description = "Name of the AWS security group"
type = string
}

variable "security_group_description" {
description = "Description of the AWS security group"
type = string
}

variable "security_group_name_eks_cluster" {
description = "Name of the AWS security group for eks cluster"
type = string
}

variable "security_group_description_eks_cluster" {
description = "Description of the AWS security group for eks cluster"
type = string
}

variable "vpc_id" {
description = "ID of the VPC where the security group will be created"
type = string
}

variable "port_80" {
description = "Port for HTTP traffic (e.g., 80)"
type = number
}

variable "port_443" {
description = "Port for HTTPS traffic (e.g., 443)"
type = number
}

variable "port_22" {
description = "Port for SSH access (e.g., 22)"
type = number
}

variable "port_8080" {
description = "Port for HTTP access for Jenkins (e.g., 8080)"
type = number
}

variable "port_3306" {
description = "Port for MySQL access for RDS (e.g., 3306)"
type = number
}

variable "security_group_protocol" {
description = "Protocol for the security group rules (e.g., 'tcp', 'udp', 'icmp', etc.)"
type = string
}

variable "web_cidr" {
description = "CIDR block for incoming HTTP and HTTPS traffic"
type = string
}

variable "private_ip_address" {
description = "CIDR block for private IP addresses (e.g., for SSH, Jenkins, MySQL)"
type = string
}

variable "vpc_cidr_block" {
description = "CIDR block for the VPC"
type = string
}

Variables needed by the sg.tf file. See the alternative sketch below.
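
FYI: if the one-variable-per-port pattern feels repetitive, a more compact sketch uses a single map plus a dynamic block. This is only an illustration, not code from the repo (the service_ports variable and the all_compact group are made up), and it assumes every rule shares the same protocol and CIDR:

variable "service_ports" {
  description = "Map of service name to ingress port (hypothetical)"
  type        = map(number)
  default = {
    http    = 80
    https   = 443
    ssh     = 22
    jenkins = 8080
    mysql   = 3306
  }
}

resource "aws_security_group" "all_compact" {
  name   = "all-compact"
  vpc_id = var.vpc_id

  # One ingress block is generated per entry in the map
  dynamic "ingress" {
    for_each = var.service_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = [var.web_cidr]
    }
  }
}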

vpc — subfolder

vpc.tf

resource "aws_vpc" "all" {
cidr_block = var.vpc_cidr_block
tags = {
Name = var.vpc_name
}
}

resource "aws_subnet" "public" {
count = length(var.public_subnet_cidr_blocks)

vpc_id = aws_vpc.all.id
cidr_block = var.public_subnet_cidr_blocks[count.index]
availability_zone = var.availability_zones[count.index]
map_public_ip_on_launch = true

}


resource "aws_subnet" "private" {
count = length(var.private_subnet_cidr_blocks)

vpc_id = aws_vpc.all.id
cidr_block = var.private_subnet_cidr_blocks[count.index]
availability_zone = var.availability_zones[count.index]
}


resource "aws_internet_gateway" "all" {
vpc_id = aws_vpc.all.id
tags = {
Name = var.igw_name
}
}

resource "aws_nat_gateway" "all" {
count = length(var.availability_zones)
allocation_id = aws_eip.nat[count.index].id
subnet_id = aws_subnet.public[count.index].id

}

resource "aws_eip" "nat" {
count = length(var.availability_zones)
}


resource "aws_route_table" "public" {
vpc_id = aws_vpc.all.id

route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.all.id
}

}

resource "aws_route_table" "private" {
count = length(var.availability_zones)
vpc_id = aws_vpc.all.id

route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.all[count.index].id
}

}

resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)

subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
count = length(aws_subnet.private)

subnet_id = aws_subnet.private[count.index].id
route_table_id = aws_route_table.private[count.index].id
}

The script above can be used as a template to create a VPC with all the necessary configs.

It includes the VPC, public/private subnets, Internet Gateway (IGW), NAT Gateways, Elastic IP addresses (EIPs), public/private route tables, and public/private route table associations.
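
One addition worth considering (not included in the template above) is tagging the subnets so that EKS and its load balancer integration can discover them. A hedged sketch, assuming the cluster keeps the eks-cluster name used in this project (the example subnets and CIDRs below are illustrative only):

resource "aws_subnet" "public_tagged_example" {
  vpc_id                  = aws_vpc.all.id
  cidr_block              = "10.0.4.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    "kubernetes.io/role/elb"            = "1"      # internet-facing load balancers
    "kubernetes.io/cluster/eks-cluster" = "shared"
  }
}

resource "aws_subnet" "private_tagged_example" {
  vpc_id            = aws_vpc.all.id
  cidr_block        = "10.0.13.0/24"
  availability_zone = "us-east-1a"

  tags = {
    "kubernetes.io/role/internal-elb"   = "1"      # internal load balancers
    "kubernetes.io/cluster/eks-cluster" = "shared"
  }
}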

outputs.tf — vpc

output "vpc_id" {
value = aws_vpc.all.id
}

# Output the subnet IDs created by the aws_subnet resource
output "subnet_ids" {
# alue = [for idx, subnet_id in aws_subnet.all[*].id : subnet_id if element(aws_subnet.all[*].availability_zone, idx) != element(aws_subnet.all[*].availability_zone, 0)]
value = aws_subnet.private[*].id
}


# Output the subnet IDs created by the aws_subnet resource
output "subnet_id" {
value = aws_subnet.private[0].id
}

# Output internet gateway
output "igw" {
value = aws_internet_gateway.all.id
}

# Output cidr block
output "vpc_cidr_block" {
value = aws_vpc.all.cidr_block
}

output "public_subnet" {
value = aws_subnet.public[0].id
}

Outputs that need to be referenced in the main.tf file.

variables.tf — vpc

# VPC variables
variable "vpc_cidr_block" {
description = "CIDR block for the VPC"
type = string
}

variable "vpc_name" {
description = "Name for the VPC"
type = string
}

# Subnet variables
variable "public_subnet_cidr_blocks" {
description = "List of CIDR blocks for public subnets"
type = list(string)
}

variable "private_subnet_cidr_blocks" {
description = "List of CIDR blocks for private subnets"
type = list(string)
}

variable "subnet" {
description = "Name of the subnet"
type = string
}


# Internet Gateway variables
variable "igw_name" {
description = "Name for the Internet Gateway"
type = string
}

# Route Table variables
variable "rt_name" {
description = "Name for the Route Table"
type = string
}

# Route Table Association variables
variable "rt_association" {
description = "Name prefix for Route Table Association"
type = string
}

variable "web_cidr" {
description = "Cidr block for web"
type = string
}

variable "availability_zones" {
type = list(string)
}

variable "aws_subnet_all_map_public_ip_on_launch" {
description = "Set to true to enable auto-assign public IP address for all subnets"
type = bool
}

Variables needed by the vpc.tf file.

Now we are done with the modules folder.

main.tf — root level

module "alb" {
source = "./modules/alb" # Replace with the actual path to your module directory

# Use the same variable for multiple input arguments
port_8080 = var.port_8080
protocol_web = var.protocol_web
web_alb_name = var.web_alb_name
web_alb_internal = var.web_alb_internal
port_80 = var.port_80
web_listener_type = var.web_listener_type
web_tg_name = var.web_tg_name
load_balancer_type_web = var.load_balancer_type_web
subnets = module.vpc.subnet_ids
security_group = module.sg.security_group_id
vpc_id = module.vpc.vpc_id
}

module "asg" {
source = "./modules/asg" # Replace with the actual path to your module directory

# Use the same variable for multiple input arguments
aws_launch_template_web_name_prefix = var.aws_launch_template_web_name_prefix
aws_launch_template_web_image_id = var.aws_launch_template_web_image_id
aws_launch_template_web_instance_type = var.aws_launch_template_web_instance_type
aws_launch_template_web_block_device_mappings_device_name = var.aws_launch_template_web_block_device_mappings_device_name
aws_launch_template_web_block_device_mappings_volume_size = var.aws_launch_template_web_block_device_mappings_volume_size
aws_launch_template_web_create_before_destroy = var.aws_launch_template_web_create_before_destroy
aws_autoscaling_group_web_desired_capacity = var.aws_autoscaling_group_web_desired_capacity
aws_autoscaling_group_web_max_size = var.aws_autoscaling_group_web_max_size
aws_autoscaling_group_web_min_size = var.aws_autoscaling_group_web_min_size
aws_autoscaling_group_web_launch_template_version = var.aws_autoscaling_group_web_launch_template_version
aws_autoscaling_group_web_tag_key = var.aws_autoscaling_group_web_tag_key
aws_autoscaling_group_web_tag_value = var.aws_autoscaling_group_web_tag_value
aws_autoscaling_group_web_tag_propagate_at_launch = var.aws_autoscaling_group_web_tag_propagate_at_launch
aws_launch_template_web_user_data = "${path.module}/web_userdata.sh"
aws_autoscaling_group_web_vpc_zone_identifier = module.vpc.subnet_ids
key_pair_name = var.key_pair_name
aws_launch_template_web_network_interfaces_security_groups = module.sg.security_group_ids
aws_launch_template_ansible_vpc_security_group_ids = module.sg.security_group_ids
aws_launch_template_ansible_name_prefix = var.aws_launch_template_ansible_name_prefix
aws_launch_template_ansible_image_id = var.aws_launch_template_ansible_image_id
aws_launch_template_ansible_instance_type = var.aws_launch_template_ansible_instance_type
aws_launch_template_ansible_block_device_mappings_device_name = var.aws_launch_template_ansible_block_device_mappings_device_name
aws_launch_template_ansible_block_device_mappings_volume_size = var.aws_launch_template_ansible_block_device_mappings_volume_size
aws_launch_template_ansible_create_before_destroy = var.aws_launch_template_ansible_create_before_destroy
aws_autoscaling_group_ansible_desired_capacity = var.aws_autoscaling_group_ansible_desired_capacity
aws_autoscaling_group_ansible_max_size = var.aws_autoscaling_group_ansible_max_size
aws_autoscaling_group_ansible_min_size = var.aws_autoscaling_group_ansible_min_size
aws_autoscaling_group_ansible_launch_template_version = var.aws_autoscaling_group_ansible_launch_template_version
aws_autoscaling_group_ansible_tag_key = var.aws_autoscaling_group_ansible_tag_key
aws_autoscaling_group_ansible_tag_value = var.aws_autoscaling_group_ansible_tag_value
aws_autoscaling_group_ansible_tag_propagate_at_launch = var.aws_autoscaling_group_ansible_tag_propagate_at_launch
aws_launch_template_ansible_user_data = var.aws_launch_template_ansible_user_data
aws_autoscaling_group_ansible_vpc_zone_identifier = module.vpc.subnet_ids
aws_launch_template_ansible_network_interfaces_security_groups = module.sg.security_group_ids
eks_cluster_ansible_name = var.eks_cluster_ansible_name
aws_eks_node_group_instance_types = var.aws_eks_node_group_instance_types
# kubernetes_network_policy_jenkins_network_policy_spec_ingress_app = var.kubernetes_network_policy_jenkins_network_policy_spec_ingress_app
aws_eks_cluster_ansible_version = var.aws_eks_cluster_ansible_version
}

module "eks" {
source = "./modules/eks" # Replace with the actual path to your module directory

# Use the same variable for multiple input arguments
eks_cluster_ansible_name = var.eks_cluster_ansible_name
aws_eks_node_group_ansible_name = var.aws_eks_node_group_ansible_name
aws_eks_node_group_instance_types = var.aws_eks_node_group_instance_types
aws_eks_node_group_desired_capacity = var.aws_eks_node_group_desired_capacity
aws_eks_node_group_min_size = var.aws_eks_node_group_min_size
aws_eks_node_group_max_size = var.aws_eks_node_group_max_size
aws_eks_node_group_launch_template_name_prefix = module.asg.launch_template_id
aws_eks_node_group_launch_template_version = var.aws_eks_node_group_launch_template_version
aws_eks_node_group_device_name = var.aws_eks_node_group_device_name
aws_eks_node_group_volume_size = var.aws_eks_node_group_volume_size
subnets = module.vpc.subnet_ids
aws_eks_cluster_ansible_role_arn = module.iam.eks_ansible_cluster_iam_role_arn
aws_eks_node_group_ansible_role_arn = module.iam.eks_ansible_nodegroup_iam_role_arn
ec2_ssh_key = "${path.module}/web-ec2.pem"
eks_worker_node_policy_attachment_ansible = module.iam.eks_worker_node_policy_attachment_ansible
eks_cni_policy_attachment_ansible = module.iam.eks_cni_policy_attachment_ansible
eks_ec2_container_registry_readonly_attachment_ansible = module.iam.eks_ec2_container_registry_readonly_attachment_ansible
aws_eks_node_group_launch_template_name_prefix_ansible = module.asg.launch_template_id_ansible
aws_eks_addon_ansible_addon_name = var.aws_eks_addon_ansible_addon_name
aws_eks_addon_ansible_addon_version = var.aws_eks_addon_ansible_addon_version
aws_eks_cluster_ansible_security_group_ids = module.sg.security_group_id_eks_cluster
aws_eks_cluster_ansible_version = var.aws_eks_cluster_ansible_version
aws_instance_eks_cluster_ansible_bastion_host_ami = var.aws_instance_eks_cluster_ansible_bastion_host_ami
aws_instance_eks_cluster_ansible_bastion_host_instance_type = var.aws_instance_eks_cluster_ansible_bastion_host_instance_type
key_pair_name = var.key_pair_name
aws_instance_eks_cluster_ansible_bastion_host_subnet_id = module.vpc.public_subnet
aws_instance_eks_cluster_ansible_bastion_host_security_groups = module.sg.security_group_ids
aws_instance_eks_cluster_ansible_bastion_host_tags = var.aws_instance_eks_cluster_ansible_bastion_host_tags
aws_instance_eks_cluster_ansible_bastion_host_provisioner_destination = var.aws_instance_eks_cluster_ansible_bastion_host_provisioner_destination
aws_instance_eks_cluster_ansible_bastion_host_provisioner_source = "${path.module}/web-ec2.pem"
aws_instance_eks_cluster_ansible_bastion_host_remote_exec_inline = var.aws_instance_eks_cluster_ansible_bastion_host_remote_exec_inline

}

module "iam" {
source = "./modules/iam" # Replace with the actual path to your module directory

# Use the same variable for multiple input arguments
aws_iam_role_eks_cluster_ansible_name = var.aws_iam_role_eks_cluster_ansible_name
aws_iam_role_eks_cluster_assume_role_policy_ansible = file("assume_role_policy.json")
aws_iam_role_eks_nodegroup_role_ansible_name = var.aws_iam_role_eks_nodegroup_role_ansible_name
}

module "rds" {
source = "./modules/rds" # Replace with the actual path to your module directory

# Use the same variable for multiple input arguments
db_allocated_storage = var.db_allocated_storage
db_storage_type = var.db_storage_type
db_engine = var.db_engine
db_engine_version = var.db_engine_version
db_instance_class = var.db_instance_class
db_name = var.db_name
db_username = var.db_username
db_password = var.db_password
db_parameter_group_name = var.db_parameter_group_name
skip_final_snapshot = var.skip_final_snapshot
db_subnet_group_name = var.db_subnet_group_name
subnets = module.vpc.subnet_ids
security_group = module.sg.security_group_id
db_parameter_group_family = var.db_parameter_group_family
db_parameter_server_name = var.db_parameter_server_name
db_parameter_client_name = var.db_parameter_client_name
character_set_server = var.character_set_server
character_set_client = var.character_set_client

}

module "sg" {
source = "./modules/sg" # Replace with the actual path to your module directory

# Use the same variable for multiple input arguments
security_group_name = var.security_group_name
security_group_description = var.security_group_description
security_group_name_eks_cluster = var.security_group_name_eks_cluster
security_group_description_eks_cluster = var.security_group_description_eks_cluster
vpc_id = module.vpc.vpc_id
port_80 = var.port_80
port_443 = var.port_443
port_22 = var.port_22
port_8080 = var.port_8080
port_3306 = var.port_3306
security_group_protocol = var.security_group_protocol
web_cidr = var.web_cidr
private_ip_address = var.private_ip_address
vpc_cidr_block = var.vpc_cidr_block
}

module "vpc" {
source = "./modules/vpc" # Replace with the actual path to your module directory

# Use the same variable for multiple input arguments
vpc_cidr_block = var.vpc_cidr_block
vpc_name = var.vpc_name
public_subnet_cidr_blocks = var.public_subnet_cidr_blocks
private_subnet_cidr_blocks = var.private_subnet_cidr_blocks
subnet = var.subnet
igw_name = var.igw_name
web_cidr = var.web_cidr
rt_name = var.rt_name
rt_association = var.rt_association
availability_zones = var.availability_zones
aws_subnet_all_map_public_ip_on_launch = var.aws_subnet_all_map_public_ip_on_launch
}

The script above calls the resources defined under the modules folder.

FYI: there are a few kinds of values being passed in:

var.variable_name — the value comes from the terraform.tfvars file

module.vpc.public_subnet — the value comes from outputs.tf under the vpc subfolder

file("assume_role_policy.json") — reads a file in the same folder

"${path.module}/web-ec2.pem" — the path to the private key in the same folder

outputs.tf — root level

output "security_group_name" {
value = var.security_group_name
}

output "security_group_description" {
value = var.security_group_description
}

output "security_group_name_eks_cluster" {
value = var.security_group_name_eks_cluster
}

output "security_group_description_eks_cluster" {
value = var.security_group_description_eks_cluster
}

output "web_cidr" {
value = var.web_cidr
}

output "private_ip_address" {
value = var.private_ip_address
}

output "protocol_web" {
value = var.protocol_web
}

output "web_alb_name" {
value = var.web_alb_name
}

output "web_alb_internal" {
value = var.web_alb_internal
}

output "load_balancer_type_web" {
value = var.load_balancer_type_web
}

output "web_tg_name" {
value = var.web_tg_name
}

output "db_allocated_storage" {
value = var.db_allocated_storage
}

output "db_storage_type" {
value = var.db_storage_type
}

output "db_engine" {
value = var.db_engine
}

output "db_engine_version" {
value = var.db_engine_version
}

output "db_instance_class" {
value = var.db_instance_class
}

output "db_name" {
value = var.db_name
}

output "db_username" {
value = var.db_username
}

output "db_password" {
value = var.db_password
sensitive = true
}

output "db_parameter_group_name" {
value = var.db_parameter_group_name
}

output "skip_final_snapshot" {
value = var.skip_final_snapshot
}

output "db_subnet_group_name" {
value = var.db_subnet_group_name
}

output "subnet" {
value = var.subnet
}

output "db_parameter_group_family" {
value = var.db_parameter_group_family
}

output "db_parameter_server_name" {
value = var.db_parameter_server_name
}

output "db_parameter_client_name" {
value = var.db_parameter_client_name
}

output "character_set_server" {
value = var.character_set_server
}

output "character_set_client" {
value = var.character_set_client
}

output "vpc_cidr_block" {
value = var.vpc_cidr_block
}

output "vpc_name" {
value = var.vpc_name
}

output "public_subnet_cidr_blocks" {
value = var.public_subnet_cidr_blocks
}

output "private_subnet_cidr_blocks" {
value = var.private_subnet_cidr_blocks
}

output "availability_zones" {
value = var.availability_zones
}

output "aws_subnet_all_map_public_ip_on_launch" {
value = var.aws_subnet_all_map_public_ip_on_launch
}

output "igw_name" {
value = var.igw_name
}

output "rt_name" {
value = var.rt_name
}

output "rt_association" {
value = var.rt_association
}

output "eks_cluster_ansible_name" {
value = var.eks_cluster_ansible_name
}

output "aws_eks_node_group_ansible_name" {
value = var.aws_eks_node_group_ansible_name
}

output "aws_eks_node_group_instance_types" {
value = var.aws_eks_node_group_instance_types
}

output "aws_eks_node_group_desired_capacity" {
value = var.aws_eks_node_group_desired_capacity
}

output "aws_eks_node_group_min_size" {
value = var.aws_eks_node_group_min_size
}

output "aws_eks_node_group_max_size" {
value = var.aws_eks_node_group_max_size
}

output "aws_eks_node_group_launch_template_name_prefix" {
value = var.aws_eks_node_group_launch_template_name_prefix
}

output "aws_eks_node_group_launch_template_version" {
value = var.aws_eks_node_group_launch_template_version
}

output "aws_eks_node_group_device_name" {
value = var.aws_eks_node_group_device_name
}

output "aws_eks_node_group_volume_size" {
value = var.aws_eks_node_group_volume_size
}

output "aws_eks_cluster_ansible_version" {
value = var.aws_eks_cluster_ansible_version
}

output "aws_eks_addon_ansible_addon_name" {
value = var.aws_eks_addon_ansible_addon_name
}

output "aws_eks_addon_ansible_addon_version" {
value = var.aws_eks_addon_ansible_addon_version
}

output "aws_launch_template_web_name_prefix" {
value = var.aws_launch_template_web_name_prefix
}

output "aws_launch_template_web_image_id" {
value = var.aws_launch_template_web_image_id
}

output "aws_launch_template_web_instance_type" {
value = var.aws_launch_template_web_instance_type
}

output "aws_launch_template_web_block_device_mappings_device_name" {
value = var.aws_launch_template_web_block_device_mappings_device_name
}

output "aws_launch_template_web_block_device_mappings_volume_size" {
value = var.aws_launch_template_web_block_device_mappings_volume_size
}

output "aws_launch_template_web_create_before_destroy" {
value = var.aws_launch_template_web_create_before_destroy
}

output "aws_autoscaling_group_web_desired_capacity" {
value = var.aws_autoscaling_group_web_desired_capacity
}

output "aws_autoscaling_group_web_max_size" {
value = var.aws_autoscaling_group_web_max_size
}

output "aws_autoscaling_group_web_min_size" {
value = var.aws_autoscaling_group_web_min_size
}

output "aws_autoscaling_group_web_launch_template_version" {
value = var.aws_autoscaling_group_web_launch_template_version
}

output "aws_autoscaling_group_web_tag_key" {
value = var.aws_autoscaling_group_web_tag_key
}

output "aws_autoscaling_group_web_tag_value" {
value = var.aws_autoscaling_group_web_tag_value
}

output "aws_autoscaling_group_web_tag_propagate_at_launch" {
value = var.aws_autoscaling_group_web_tag_propagate_at_launch
}

output "key_pair_name" {
value = var.key_pair_name
}

output "aws_launch_template_ansible_name_prefix" {
value = var.aws_launch_template_ansible_name_prefix
}

output "aws_launch_template_ansible_image_id" {
value = var.aws_launch_template_ansible_image_id
}

output "aws_launch_template_ansible_instance_type" {
value = var.aws_launch_template_ansible_instance_type
}

The outputs above echo the root-level variable values used by main.tf.

variables.tf — root level

# security group
variable "security_group_name" {
description = "Name of the AWS security group"
type = string
}

variable "security_group_description" {
description = "Description of the AWS security group"
type = string
}

variable "security_group_name_eks_cluster" {
description = "Name of the AWS security group for eks cluster"
type = string
}

variable "security_group_description_eks_cluster" {
description = "Description of the AWS security group for eks cluster"
type = string
}

variable "port_80" {
description = "Port for HTTP traffic (e.g., 80)"
type = number
}

variable "port_443" {
description = "Port for HTTPS traffic (e.g., 443)"
type = number
}

variable "port_22" {
description = "Port for SSH access (e.g., 22)"
type = number
}

variable "port_8080" {
description = "Port for HTTP access for Jenkins (e.g., 8080)"
type = number
}

variable "port_3306" {
description = "Port for MySQL access for RDS (e.g., 3306)"
type = number
}

variable "security_group_protocol" {
description = "Protocol for the security group rules (e.g., 'tcp', 'udp', 'icmp', etc.)"
type = string
}

variable "web_cidr" {
description = "CIDR block for incoming HTTP and HTTPS traffic"
type = string
}

variable "private_ip_address" {
description = "CIDR block for private IP addresses (e.g., for SSH, Jenkins, MySQL)"
type = string
}

# ALB Listener for Jenkins variables
variable "protocol_web" {
description = "Protocol for Jenkins ALB Listener"
type = string
}

# ALB for Web Servers variables
variable "web_alb_name" {
description = "Name of the example web ALB"
type = string
}

variable "web_alb_internal" {
description = "Whether the example web ALB is internal"
type = bool
}

variable "load_balancer_type_web" {
description = "Type of the load balancer for example web ALB"
type = string
}

# ALB Target Group for Web Servers variables
variable "web_tg_name" {
description = "Name of the example web Target Group"
type = string
}

# ALB Listener for Web Servers variables
variable "web_listener_type" {
description = "Type of action for example web ALB Listener"
type = string
}

# rds
variable "db_parameter_group_family" {
description = "The family of the DB parameter group."
type = string
# Set your desired default family here
}


# RDS Database variables
variable "db_allocated_storage" {
description = "Allocated storage for the RDS database"
type = number
}

variable "db_storage_type" {
description = "Storage type for the RDS database"
type = string
}

variable "db_engine" {
description = "Database engine for the RDS instance"
type = string
}

variable "db_engine_version" {
description = "Database engine version for the RDS instance"
type = string
}

variable "db_instance_class" {
description = "Instance class for the RDS database"
type = string
}

variable "db_name" {
description = "Name of the RDS database"
type = string
}

variable "db_username" {
description = "Username for the RDS database"
type = string
}

variable "db_password" {
description = "Password for the RDS database"
type = string
}


variable "skip_final_snapshot" {
description = "Skip final snapshot when deleting the RDS instance"
type = bool
}

variable "db_subnet_group_name" {
description = "Name of the DB subnet group"
type = string
}

variable "db_parameter_group_name" {
description = "Name for the custom DB parameter group"
type = string
}

variable "db_parameter_server_name" {
description = "Name for the 'character_set_server' parameter"
type = string
}

variable "db_parameter_client_name" {
description = "Name for the 'character_set_client' parameter"
type = string
}

variable "character_set_server" {
description = "Value for the 'character_set_server' parameter"
type = string
}

variable "character_set_client" {
description = "Value for the 'character_set_client' parameter"
type = string
}


# vpc
variable "availability_zones" {
type = list(string)
}

variable "aws_subnet_all_map_public_ip_on_launch" {
description = "Set to true to enable auto-assign public IP address for all subnets"
type = bool
}

# VPC variables
variable "vpc_cidr_block" {
description = "CIDR block for the VPC"
type = string
}

variable "vpc_name" {
description = "Name for the VPC"
type = string
}

# Subnet variables
variable "public_subnet_cidr_blocks" {
description = "List of CIDR blocks for public subnets"
type = list(string)
}

variable "private_subnet_cidr_blocks" {
description = "List of CIDR blocks for private subnets"
type = list(string)
}

variable "subnet" {
description = "Name of the subnet"
type = string
}

# Internet Gateway variables
variable "igw_name" {
description = "Name for the Internet Gateway"
type = string
}

variable "rt_name" {
description = "Name for the Route Table"
type = string
}

# Route Table Association variables
variable "rt_association" {
description = "Name prefix for Route Table Association"
type = string
}

# eks

variable "aws_eks_cluster_ansible_version" {
description = "The version of Ansible to use with AWS EKS cluster"
type = string
# You can set your desired default value here
}

variable "eks_cluster_ansible_name" {
description = "Name of the Ansible EKS cluster"
type = string
}

variable "aws_eks_node_group_ansible_name" {
description = "Name of the Ansible EKS node group"
type = string
}

variable "aws_eks_node_group_instance_types" {
description = "Instance types for the EKS node group"
type = string
}

variable "aws_eks_node_group_desired_capacity" {
description = "Desired capacity for the EKS node group"
type = number
}

variable "aws_eks_node_group_min_size" {
description = "Minimum size for the EKS node group"
type = number
}

variable "aws_eks_node_group_max_size" {
description = "Maximum size for the EKS node group"
type = number
}

variable "aws_eks_node_group_launch_template_name_prefix" {
description = "Name prefix for the EKS node group launch template"
type = string
}

variable "aws_eks_node_group_launch_template_version" {
description = "Version for the EKS node group launch template"
type = string
}

variable "aws_eks_node_group_device_name" {
description = "Device name for the EKS node group block device mappings"
type = string
}

variable "aws_eks_node_group_volume_size" {
description = "Volume size for the EKS node group block device mappings"
type = number
}

variable "aws_eks_addon_ansible_addon_name" {
description = "Name of the AWS EKS addon for Ansible"
type = string
}

variable "aws_eks_addon_ansible_addon_version" {
description = "Version of the AWS EKS addon for Ansible"
type = string
}


# asg
variable "key_pair_name" {
description = "Name of the AWS Key Pair to associate with EC2 instances"
type = string
# Set a default value if needed
}

variable "aws_launch_template_web_name_prefix" {
description = "Name prefix for the AWS launch template"
type = string
}

variable "aws_launch_template_web_image_id" {
description = "AMI ID for the AWS launch template"
type = string
}

variable "aws_launch_template_web_instance_type" {
description = "Instance type for the AWS launch template"
type = string
}

variable "aws_launch_template_web_block_device_mappings_device_name" {
description = "Device name for block device mappings in the AWS launch template"
type = string
}

variable "aws_launch_template_web_block_device_mappings_volume_size" {
description = "Volume size for block device mappings in the AWS launch template"
type = number
}

variable "aws_launch_template_web_create_before_destroy" {
description = "Lifecycle setting for create_before_destroy in the AWS launch template"
type = bool
}

variable "aws_autoscaling_group_web_desired_capacity" {
description = "Desired capacity for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_web_max_size" {
description = "Maximum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_web_min_size" {
description = "Minimum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_web_launch_template_version" {
description = "Launch template version for the AWS Auto Scaling Group"
type = string
}

variable "aws_autoscaling_group_web_tag_key" {
description = "Tag key for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_web_tag_value" {
description = "Tag value for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_web_tag_propagate_at_launch" {
description = "Tag propagation setting for the AWS Auto Scaling Group instances"
type = bool
}

variable "aws_launch_template_ansible_name_prefix" {
description = "Name prefix for the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_image_id" {
description = "AMI ID for the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_instance_type" {
description = "Instance type for the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_block_device_mappings_device_name" {
description = "Device name for block device mappings in the AWS launch template"
type = string
}

variable "aws_launch_template_ansible_block_device_mappings_volume_size" {
description = "Volume size for block device mappings in the AWS launch template"
type = number
}

variable "aws_launch_template_ansible_create_before_destroy" {
description = "Lifecycle setting for create_before_destroy in the AWS launch template"
type = bool
}

variable "aws_autoscaling_group_ansible_desired_capacity" {
description = "Desired capacity for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_ansible_max_size" {
description = "Maximum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_ansible_min_size" {
description = "Minimum size for the AWS Auto Scaling Group"
type = number
}

variable "aws_autoscaling_group_ansible_launch_template_version" {
description = "Launch template version for the AWS Auto Scaling Group"
type = string
}

variable "aws_autoscaling_group_ansible_tag_key" {
description = "Tag key for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_ansible_tag_value" {
description = "Tag value for the AWS Auto Scaling Group instances"
type = string
}

variable "aws_autoscaling_group_ansible_tag_propagate_at_launch" {
description = "Tag propagation setting for the AWS Auto Scaling Group instances"
type = bool
}

variable "aws_launch_template_ansible_user_data" {
description = "Userdata file"
type = string
}

# iam
variable "aws_iam_role_eks_cluster_ansible_name" {
description = "Iam role name for esk cluster ansible"
type = string
}

variable "aws_iam_role_eks_nodegroup_role_ansible_name" {
description = "Name of the IAM role associated with EKS nodegroups for Ansible"
type = string
# You can set a default value if needed
# default = "example-role-name"
}

# bastion
variable "aws_instance_eks_cluster_ansible_bastion_host_ami" {
description = "The AMI ID for the bastion host"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_instance_type" {
description = "The instance type for the bastion host"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_tags" {
description = "Tags for the bastion host instance"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_provisioner_destination" {
description = "Destination path on the bastion host where the file will be copied"
type = string
}

variable "aws_instance_eks_cluster_ansible_bastion_host_remote_exec_inline" {
description = "Inline script to be executed on the bastion host using remote-exec provisioner"
type = list(string)
}

Variables needed by the main.tf file.

web-ec2.pem — root level

-----BEGIN RSA PRIVATE KEY-----
************************
-----END RSA PRIVATE KEY-----

The block above is the private key used to SSH into all instances created in AWS. Keep it out of version control.
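
Rather than committing a .pem file alongside the code, one alternative sketch is to let Terraform generate the key pair itself. This uses the hashicorp/tls and hashicorp/local providers, which are not part of this project, so treat it as an assumption rather than the repo’s approach:

# The private key is stored in Terraform state, so the state itself must be protected
resource "tls_private_key" "web" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public half with AWS under the key name the launch templates expect
resource "aws_key_pair" "web" {
  key_name   = "web-ec2"
  public_key = tls_private_key.web.public_key_openssh
}

# Write the private half to disk for SSH use
resource "local_file" "web_pem" {
  content         = tls_private_key.web.private_key_pem
  filename        = "${path.module}/web-ec2.pem"
  file_permission = "0400"
}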

providers.tf — root level

provider "aws" {
region = "us-east-1"
profile = "default"
}

This block tells Terraform to communicate with AWS in the us-east-1 region using the default profile from your AWS CLI configuration file.

FYI: if no profile is specified, the default profile is used anyway.
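
If you keep multiple accounts in your AWS CLI config, you can point Terraform at a named profile instead; a minimal sketch (the dev profile name is hypothetical):

provider "aws" {
  region  = "us-east-1"
  profile = "dev" # must match a profile in ~/.aws/credentials or ~/.aws/config
}

Setting the AWS_PROFILE environment variable before running terraform achieves the same thing without editing the file.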

s3.tf — root level

terraform {
  backend "s3" {
    bucket  = "provisioning-aws-infrastructure-using-terraform"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
    profile = "default"
  }
}

For the backend above to work, an S3 bucket named provisioning-aws-infrastructure-using-terraform must already exist in the us-east-1 region; Terraform then stores its state there under the terraform.tfstate key.

The state is also encrypted at rest, and the backend uses the default profile from aws configure.
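
The backend bucket can’t be created by the configuration that uses it, so it’s usually bootstrapped once from a separate configuration with local state. A minimal sketch (the versioning block follows the ~> 3.x provider syntax pinned later in versions.tf; the lock table is optional and not used in this project):

resource "aws_s3_bucket" "tf_state" {
  bucket = "provisioning-aws-infrastructure-using-terraform"

  # Versioning lets you recover an earlier state file if one is corrupted
  versioning {
    enabled = true
  }
}

# Optional: a DynamoDB table for state locking, referenced from the backend
# block via a dynamodb_table = "terraform-lock" argument
resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}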

terraform.tfvars — root level

# security group
security_group_name = "all"
security_group_description = "security group for all"
security_group_name_eks_cluster = "eks_cluster"
security_group_description_eks_cluster = "security group for eks cluster"
port_80 = 80
port_443 = 443
port_22 = 22
port_8080 = 8080
port_3306 = 3306
security_group_protocol = "tcp"
web_cidr = "0.0.0.0/0"
private_ip_address = "888.888.888.888/32"

# alb
# ALB for Jenkins variables
# ALB Listener for Jenkins variables
protocol_web = "HTTP"

# ALB for Web Servers variables
web_alb_name = "web-alb"
web_alb_internal = false
load_balancer_type_web = "application"

# ALB Target Group for Web Servers variables
web_tg_name = "web-target-group"

# ALB Listener for Web Servers variables
web_listener_type = "forward"

# rds
# RDS Database variables
db_allocated_storage = 20
db_storage_type = "gp2"
db_engine = "mysql"
db_engine_version = "5.7"
db_instance_class = "db.t2.micro"
db_name = "terraformed_rds"
db_username = "rds"
db_password = "terraformed!"
db_parameter_group_name = "rds-parameter-group"
skip_final_snapshot = true
db_subnet_group_name = "rds-subnet-group"
subnet = "subnet"
db_parameter_group_family = "mysql5.7"
db_parameter_server_name = "character_set_server"
db_parameter_client_name = "character_set_client"
character_set_server = "swe7"
character_set_client = "latin1"


# vpc
# VPC variables
vpc_cidr_block = "10.0.0.0/16"
vpc_name = "vpc"
public_subnet_cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
private_subnet_cidr_blocks = ["10.0.10.0/24", "10.0.11.0/24", "10.0.12.0/24"]
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
aws_subnet_all_map_public_ip_on_launch = true


# Internet Gateway variables
igw_name = "igw"

# Route Table variables
rt_name = "route-table"

# Route Table Association variables
rt_association = "rt-association"

# eks
eks_cluster_ansible_name = "eks-cluster"
aws_eks_node_group_ansible_name = "eks-node-group"
aws_eks_node_group_instance_types = "t2.micro"
aws_eks_node_group_desired_capacity = 2
aws_eks_node_group_min_size = 1
aws_eks_node_group_max_size = 3
aws_eks_node_group_launch_template_name_prefix = "ansible"
aws_eks_node_group_launch_template_version = "$Latest"
aws_eks_node_group_device_name = "xvda"
aws_eks_node_group_volume_size = 20
aws_eks_addon_ansible_addon_name = "vpc-cni"
aws_eks_addon_ansible_addon_version = "v1.16.2-eksbuild.1"


# asg
aws_launch_template_web_name_prefix = "web-launch-template"
aws_launch_template_web_image_id = "ami-075e152940664b7b2"
aws_launch_template_web_instance_type = "t2.micro"
aws_launch_template_web_block_device_mappings_device_name = "xvdb"
aws_launch_template_web_block_device_mappings_volume_size = 20
aws_launch_template_web_create_before_destroy = true
aws_autoscaling_group_web_desired_capacity = 2
aws_autoscaling_group_web_max_size = 4
aws_autoscaling_group_web_min_size = 1
aws_autoscaling_group_web_launch_template_version = "$Latest"
aws_autoscaling_group_web_tag_key = "Environment"
aws_autoscaling_group_web_tag_value = "Dev"
aws_autoscaling_group_web_tag_propagate_at_launch = true
key_pair_name = "web-ec2"
aws_launch_template_ansible_name_prefix = "ansible-launch-template"
aws_launch_template_ansible_image_id = "ami-075e152940664b7b2"
aws_launch_template_ansible_instance_type = "t2.micro"
aws_launch_template_ansible_block_device_mappings_device_name = "xvdc"
aws_launch_template_ansible_block_device_mappings_volume_size = 20
aws_launch_template_ansible_create_before_destroy = true
aws_autoscaling_group_ansible_desired_capacity = 2
aws_autoscaling_group_ansible_max_size = 4
aws_autoscaling_group_ansible_min_size = 1
aws_autoscaling_group_ansible_launch_template_version = "$Latest"
aws_autoscaling_group_ansible_tag_key = "Environment"
aws_autoscaling_group_ansible_tag_value = "Dev"
aws_autoscaling_group_ansible_tag_propagate_at_launch = true
eks_bootstrap_script = <<-EOT
#!/bin/bash
set -ex
sudo yum update -y
sudo amazon-linux-extras install ansible2 -y

# Define the log file
LOG_FILE="/var/log/userdata.log"

# Redirect all output (stdout and stderr) to the log file
exec >> "$LOG_FILE" 2>&1

set -x
# Replace the placeholders below with your own values — never commit real keys
AWS_ACCESS_KEY_ID="<YOUR_AWS_ACCESS_KEY_ID>"
AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET_ACCESS_KEY>"
AWS_REGION="us-east-1"
CLUSTER_NAME="eks-cluster"
NODE_GROUP_INSTANCE_TYPE="t2.micro"
KUBERNETES_VERSION="1.29"

sudo -u ec2-user /usr/bin/aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
sudo -u ec2-user /usr/bin/aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
sudo -u ec2-user /usr/bin/aws configure set default.region $AWS_REGION
echo "AWS CLI configured with your credentials."

# Install kubectl
if ! [ -x "$(command -v kubectl)" ]; then
    echo "Installing kubectl..."
    set -x
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl
    sudo mv kubectl /usr/bin/
    echo "kubectl installed."
else
    echo "kubectl is already installed."
fi

if ! [ -x "$(command -v git)" ]; then
    echo "Installing git..."
    set -x
    sudo yum install git -y
    echo "git installed."
else
    echo "git is already installed."
fi

# Fetch Kubectl Info
set -x
sudo -u ec2-user /usr/bin/aws eks update-kubeconfig --region $AWS_REGION --name $CLUSTER_NAME

# Fetch max pods
set -x
curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/max-pods-calculator.sh
chmod +x max-pods-calculator.sh

# Fetch the cluster's certificate authority data
CERTIFICATE_AUTHORITY=$(sudo -u ec2-user /usr/bin/aws eks describe-cluster --query "cluster.certificateAuthority.data" --output text --name $CLUSTER_NAME --region $AWS_REGION)

# Define the cluster endpoint URL
API_SERVER_ENDPOINT=$(sudo -u ec2-user /usr/bin/aws eks describe-cluster --region $AWS_REGION --name $CLUSTER_NAME --query "cluster.endpoint" --output text)

# Fetch the cluster's CIDR
SERVICE_CIDR=$(sudo -u ec2-user /usr/bin/aws eks describe-cluster --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" --output text --name $CLUSTER_NAME --region $AWS_REGION | sed 's/\.0\/16//')

CNI_VERSION=$(sudo -u ec2-user /usr/bin/aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version $KUBERNETES_VERSION --region $AWS_REGION | jq -r '.addons[] | select(.addonName == "vpc-cni") | .addonVersions[].addonVersion' | head -n 1 | sed 's/^v//')

MAX_PODS=$(./max-pods-calculator.sh --instance-type $NODE_GROUP_INSTANCE_TYPE --cni-version $CNI_VERSION)

# Join worker nodes to the eks cluster (bootstrap.sh takes the cluster name once)
set -x
/etc/eks/bootstrap.sh $CLUSTER_NAME \
  --b64-cluster-ca $CERTIFICATE_AUTHORITY \
  --apiserver-endpoint $API_SERVER_ENDPOINT \
  --dns-cluster-ip $SERVICE_CIDR.10 \
  --kubelet-extra-args "--max-pods=$MAX_PODS" \
  --use-max-pods false
EOT

# iam
aws_iam_role_eks_cluster_ansible_name = "ansible-cluster-role"
aws_iam_role_eks_nodegroup_role_ansible_name = "ansible-nodegroup-role"


# bastion
aws_instance_eks_cluster_ansible_bastion_host_ami = "ami-0e731c8a588258d0d"
aws_instance_eks_cluster_ansible_bastion_host_instance_type = "t2.micro"
aws_instance_eks_cluster_ansible_bastion_host_tags = "bastion-host"
aws_instance_eks_cluster_ansible_bastion_host_provisioner_destination = "/home/ec2-user/web-ec2.pem"
aws_instance_eks_cluster_ansible_bastion_host_remote_exec_inline = ["sudo chmod 400 /home/ec2-user/web-ec2.pem"]

The file above supplies the values needed by main.tf. I provide it for your convenience; as a best practice, never make real values like these public (the AWS keys above are placeholders, and you should treat db_password the same way).

FYI: set -x enables shell tracing so that every subsequent command is echoed to the log; it stays in effect until set +x, so the repeated set -x lines before each section are just defensive.

versions.tf — root level

terraform {
  required_version = "~> 1.7"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
    # You can specify additional required providers here.
  }
}

The file above pins the versions of the Terraform CLI and the AWS provider.

required_version = "~> 1.7" is a pessimistic constraint: any Terraform release from 1.7 up to (but not including) 2.0 is allowed, e.g. 1.7.1, 1.8.0, 1.9.x.

required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 3.27"
  }
  # You can specify additional required providers here.
}

The block above means "hashicorp/aws" must be at least 3.27 but below 4.0, so v3.76.1 is allowed, while versions like 3.3 or 3.25 are not (and neither is 4.x).
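
If you prefer explicit bounds over the ~> shorthand, the same pins can be written out; this sketch is equivalent:

terraform {
  # Same as required_version = "~> 1.7"
  required_version = ">= 1.7.0, < 2.0.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Same as version = "~> 3.27"
      version = ">= 3.27.0, < 4.0.0"
    }
  }
}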

That’s all for the folders and files we need.

Next, we will run the Terraform scripts and test!

In the Provisioning-AWS-Infrastructure-using-Terraform-Packer-Kubernetes-Ansible folder:

terraform init
terraform validate
terraform plan

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
<= read (data resources)

Terraform will perform the following actions:

 # module.alb.aws_lb.web will be created
 + resource "aws_lb" "web" {
+ arn = (known after apply)
+ arn_suffix = (known after apply)
+ desync_mitigation_mode = "defensive"
+ dns_name = (known after apply)
+ drop_invalid_header_fields = false
+ enable_deletion_protection = false
+ enable_http2 = true
+ enable_waf_fail_open = false
+ id = (known after apply)
+ idle_timeout = 60
+ internal = false
+ ip_address_type = (known after apply)
+ load_balancer_type = "application"
+ name = "web-alb"
+ security_groups = (known after apply)
+ subnets = (known after apply)
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
+ zone_id = (known after apply)
}

 # module.alb.aws_lb_listener.web_listener will be created
 + resource "aws_lb_listener" "web_listener" {
+ arn = (known after apply)
+ id = (known after apply)
+ load_balancer_arn = (known after apply)
+ port = 80
+ protocol = "HTTP"
+ ssl_policy = (known after apply)
+ tags_all = (known after apply)

+ default_action {
+ order = (known after apply)
+ target_group_arn = (known after apply)
+ type = "forward"
}
}

 # module.alb.aws_lb_target_group.web_tg will be created
 + resource "aws_lb_target_group" "web_tg" {
+ arn = (known after apply)
+ arn_suffix = (known after apply)
+ connection_termination = false
+ deregistration_delay = "300"
+ id = (known after apply)
+ lambda_multi_value_headers_enabled = false
+ load_balancing_algorithm_type = (known after apply)
+ name = "web-target-group"
+ port = 80
+ preserve_client_ip = (known after apply)
+ protocol = "HTTP"
+ protocol_version = (known after apply)
+ proxy_protocol_v2 = false
+ slow_start = 0
+ tags_all = (known after apply)
+ target_type = "instance"
+ vpc_id = (known after apply)
}

 # module.asg.aws_autoscaling_group.ansible will be created
 + resource "aws_autoscaling_group" "ansible" {
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ default_cooldown = (known after apply)
+ desired_capacity = 2
+ force_delete = false
+ force_delete_warm_pool = false
+ health_check_grace_period = 300
+ health_check_type = (known after apply)
+ id = (known after apply)
+ max_size = 4
+ metrics_granularity = "1Minute"
+ min_size = 1
+ name = (known after apply)
+ name_prefix = (known after apply)
+ protect_from_scale_in = false
+ service_linked_role_arn = (known after apply)
+ vpc_zone_identifier = (known after apply)
+ wait_for_capacity_timeout = "10m"

+ launch_template {
+ id = (known after apply)
+ name = (known after apply)
+ version = "$Latest"
}

+ tag {
+ key = "Environment"
+ propagate_at_launch = true
+ value = "Dev"
}
}

 # module.asg.aws_autoscaling_group.web will be created
 + resource "aws_autoscaling_group" "web" {
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ default_cooldown = (known after apply)
+ desired_capacity = 2
+ force_delete = false
+ force_delete_warm_pool = false
+ health_check_grace_period = 300
+ health_check_type = (known after apply)
+ id = (known after apply)
+ max_size = 4
+ metrics_granularity = "1Minute"
+ min_size = 1
+ name = (known after apply)
+ name_prefix = (known after apply)
+ protect_from_scale_in = false
+ service_linked_role_arn = (known after apply)
+ vpc_zone_identifier = (known after apply)
+ wait_for_capacity_timeout = "10m"

+ launch_template {
+ id = (known after apply)
+ name = (known after apply)
+ version = "$Latest"
}

+ tag {
+ key = "Environment"
+ propagate_at_launch = true
+ value = "Dev"
}
}

 # module.asg.aws_launch_template.ansible will be created
 + resource "aws_launch_template" "ansible" {
+ arn = (known after apply)
+ default_version = (known after apply)
+ id = (known after apply)
+ image_id = "ami-075e152940664b7b2"
+ instance_type = "t2.micro"
+ key_name = "web-ec2"
+ latest_version = (known after apply)
+ name = (known after apply)
+ name_prefix = "ansible-launch-template"
+ tags_all = (known after apply)
+ user_data = "IyEvYmluL2Jhc2gKc2V0IC1leApzdWRvIHl1bSB1cGRhdGUgLXkKc3VkbyBhbWF6b24tbGludXgtZXh0cmFzIGluc3RhbGwgYW5zaWJsZTIgLXkKCiMgRGVmaW5lIHRoZSBsb2cgZmlsZQpMT0dfRklMRT0iL3Zhci9sb2cvdXNlcmRhdGEubG9nIgoKIyBSZWRpcmVjdCBhbGwgb3V0cHV0IChzdGRvdXQgYW5kIHN0ZGVycikgdG8gdGhlIGxvZyBmaWxlCmV4ZWMgPj4gIiRMT0dfRklMRSIgMj4mMQoKQVdTX0FDQ0VTU19LRVlfSUQ9IkFLSUE1M0NTQlk2WEpNNVIyQVZUIgpBV1NfU0VDUkVUX0FDQ0VTU19LRVk9Ik9VWDltd3pwZkg1N2Myd0F4L0pLa1pVL3h0cWlEbDNCTWxkdmtxYkMiCkFXU19SRUdJT049InVzLWVhc3QtMSIKQ0xVU1RFUl9OQU1FPSJla3NfY2x1c3RlciIKTk9ERV9HUk9VUF9JTlNUQU5DRV9UWVBFPSJ0Mi5taWNybyIKS1VCRVJORVRFU19WRVJTSU9OPSIxLjI5IgoKc3VkbyAtdSBlYzItdXNlciAvdXNyL2Jpbi9hd3MgY29uZmlndXJlIHNldCBhd3NfYWNjZXNzX2tleV9pZCAkQVdTX0FDQ0VTU19LRVlfSUQKc3VkbyAtdSBlYzItdXNlciAvdXNyL2Jpbi9hd3MgY29uZmlndXJlIHNldCBhd3Nfc2VjcmV0X2FjY2Vzc19rZXkgJEFXU19TRUNSRVRfQUNDRVNTX0tFWQpzdWRvIC11IGVjMi11c2VyIC91c3IvYmluL2F3cyBjb25maWd1cmUgc2V0ICBkZWZhdWx0LnJlZ2lvbiAkQVdTX1JFR0lPTgplY2hvICJBV1MgQ0xJIGNvbmZpZ3VyZWQgd2l0aCB5b3VyIGNyZWRlbnRpYWxzLiIKCiMgSW5zdGFsbCBrdWJlY3RsCmlmICEgWyAteCAiJChjb21tYW5kIC12IGt1YmVjdGwpIiBdOyB0aGVuCiAgICBlY2hvICJJbnN0YWxsaW5nIGt1YmVjdGwuLi4iCiAgICBjdXJsIC1MTyAiaHR0cHM6Ly9kbC5rOHMuaW8vcmVsZWFzZS8kKGN1cmwgLUwgLXMgaHR0cHM6Ly9kbC5rOHMuaW8vcmVsZWFzZS9zdGFibGUudHh0KS9iaW4vbGludXgvYW1kNjQva3ViZWN0bCIKICAgIGNobW9kICt4IGt1YmVjdGwKICAgIHN1ZG8gbXYga3ViZWN0bCAvdXNyL2Jpbi8KICAgIGVjaG8gImt1YmVjdGwgaW5zdGFsbGVkLiIKZWxzZQogICAgZWNobyAia3ViZWN0bCBpcyBhbHJlYWR5IGluc3RhbGxlZC4iCmZpCgojIEZldGNoIEt1YmVjdGwgSW5mbwpzdWRvIC11IGVjMi11c2VyIC91c3IvYmluL2F3cyBla3MgdXBkYXRlLWt1YmVjb25maWcgLS1yZWdpb24gJEFXU19SRUdJT04gLS1uYW1lICRDTFVTVEVSX05BTUUKCiMgRmV0Y2ggbWF4IHBvZHMKY3VybCAtTyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vYXdzbGFicy9hbWF6b24tZWtzLWFtaS9tYXN0ZXIvZmlsZXMvbWF4LXBvZHMtY2FsY3VsYXRvci5zaApjaG1vZCAreCBtYXgtcG9kcy1jYWxjdWxhdG9yLnNoCgojIEZldGNoIHRoZSBjbHVzdGVyJ3MgY2VydGlmaWNhdGUgYXV0aG9yaXR5IGRhdGEKQ0VSVElGSUNBVEVfQVVUSE9SSVRZPSQoc3VkbyAtdSBlYzItdXNlciAvdXNyL2Jpbi9hd3MgZWtzIGRlc2NyaWJlLWNsdXN0ZXIgLS1xdWVyeSAiY2x1c3Rlci5jZXJ0aWZpY2F0ZUF1dGhvcml0eS5kYXRhIiAtLW91dHB1dCB0ZXh0IC0tbmFtZSAkQ0xVU1RFUl9OQU1FIC0tcmVnaW9uICRBV1NfUkVHSU9OKQoKIyBEZWZpbmUgdGhlIGNsdXN0ZXIgZW5kcG9pbnQgVVJMCkFQSV9TRVJWRVJfRU5EUE9JTlQ9JChzdWRvIC11IGVjMi11c2VyIC91c3IvYmluL2F3cyBla3MgZGVzY3JpYmUtY2x1c3RlciAtLXJlZ2lvbiAkQVdTX1JFR0lPTiAtLW5hbWUgJENMVVNURVJfTkFNRSAtLXF1ZXJ5ICJjbHVzdGVyLmVuZHBvaW50IiAtLW91dHB1dCB0ZXh0KQoKIyBGZXRjaCB0aGUgY2x1c3RlcidzIENJRFIKU0VSVklDRV9DSURSPSQoc3VkbyAtdSBlYzItdXNlciAvdXNyL2Jpbi9hd3MgZWtzIGRlc2NyaWJlLWNsdXN0ZXIgLS1xdWVyeSAiY2x1c3Rlci5rdWJlcm5ldGVzTmV0d29ya0NvbmZpZy5zZXJ2aWNlSXB2NENpZHIiIC0tb3V0cHV0IHRleHQgLS1uYW1lICRDTFVTVEVSX05BTUUgLS1yZWdpb24gJEFXU19SRUdJT04gfCBzZWQgJ3MvXC4wXC8xNi8vJykKCkNOSV9WRVJTSU9OPSQoc3VkbyAtdSBlYzItdXNlciAvdXNyL2Jpbi9hd3MgZWtzIGRlc2NyaWJlLWFkZG9uLXZlcnNpb25zIC0tYWRkb24tbmFtZSB2cGMtY25pIC0ta3ViZXJuZXRlcy12ZXJzaW9uICRLVUJFUk5FVEVTX1ZFUlNJT04gLS1yZWdpb24gJEFXU19SRUdJT04gfCBqcSAtciAnLmFkZG9uc1tdIHwgc2VsZWN0KC5hZGRvbk5hbWUgPT0gInZwYy1jbmkiKSB8IC5hZGRvblZlcnNpb25zW10uYWRkb25WZXJzaW9uJyB8IGhlYWQgLW4gMSB8IHNlZCAncy9edi8vJykKCk1BWF9QT0RTPSQoLi9tYXgtcG9kcy1jYWxjdWxhdG9yLnNoIC0taW5zdGFuY2UtdHlwZSAkTk9ERV9HUk9VUF9JTlNUQU5DRV9UWVBFIC0tY25pLXZlcnNpb24gJENOSV9WRVJTSU9OKQoKIyBKb2luIHdvcmtlciBub2RlcyB0byB0aGUgZWtzIGNsdXN0ZXIKL2V0Yy9la3MvYm9vdHN0cmFwLnNoIGVrcy1jbHVzdGVyICRDTFVTVEVSX05BTUUgXAogIC0tYjY0LWNsdXN0ZXItY2EgJENFUlRJRklDQVRFX0FVVEhPUklUWSBcCiAgLS1hcGlzZXJ2ZXItZW5kcG9pbnQgJEFQSV9TRVJWRVJfRU5EUE9JTlQgXAogIC0tZG5zLWNsdXN0ZXItaXAgJFNFUlZJQ0VfQ0lEUi4xMCBcCiAgLS1rdWJlbGV0LWV4dHJhLWFyZ3MgIi0tbWF4LXBvZHM9JE1BWF9QT0RTIiBcCiAgLS11c2UtbWF4LXBvZHMgZmFsc2UK
"
+ vpc_security_group_ids = (known after apply)

+ block_device_mappings {
+ device_name = "xvdc"

+ ebs {
+ iops = (known after apply)
+ throughput = (known after apply)
+ volume_size = 20
+ volume_type = (known after apply)
}
}
}

 # module.asg.aws_launch_template.web will be created
 + resource "aws_launch_template" "web" {
+ arn = (known after apply)
+ default_version = (known after apply)
+ id = (known after apply)
+ image_id = "ami-075e152940664b7b2"
+ instance_type = "t2.micro"
+ key_name = "web-ec2"
+ latest_version = (known after apply)
+ name = (known after apply)
+ name_prefix = "web-launch-template"
+ tags_all = (known after apply)
+ user_data = "IyEvYmluL2Jhc2gKCiMgRGVmaW5lIHRoZSBsb2cgZmlsZQpMT0dfRklMRT0iL3Zhci9sb2cvdXNlcmRhdGEubG9nIgoKIyBSZWRpcmVjdCBhbGwgb3V0cHV0IChzdGRvdXQgYW5kIHN0ZGVycikgdG8gdGhlIGxvZyBmaWxlCmV4ZWMgPj4gIiRMT0dfRklMRSIgMj4mMQoKQVdTX0FDQ0VTU19LRVlfSUQ9IkFLSUE1M0NTQlk2WEpNNVIyQVZUIgpBV1NfU0VDUkVUX0FDQ0VTU19LRVk9Ik9VWDltd3pwZkg1N2Myd0F4L0pLa1pVL3h0cWlEbDNCTWxkdmtxYkMiCkFXU19SRUdJT049InVzLWVhc3QtMSIKQ0xVU1RFUl9OQU1FPSJla3MtY2x1c3RlciIKCiMgVXBkYXRlIHBhY2thZ2UgbGlzdHMgYW5kIHVwZ3JhZGUgZXhpc3RpbmcgcGFja2FnZXMKc3VkbyBhcHQgaW5zdGFsbCB1bnppcApzdWRvIGFwdC1nZXQgdXBkYXRlIC15CnN1ZG8gYXB0LWdldCB1cGdyYWRlIC15CgojIEluc3RhbGwgcmVxdWlyZWQgcGFja2FnZXMKc3VkbyBhcHQtZ2V0IGluc3RhbGwgLXkganEgY3VybAoKIyBDaGVjayBpZiBBV1MgQ0xJIGlzIGluc3RhbGxlZAppZiAhIFsgLXggIiQoY29tbWFuZCAtdiBhd3MpIiBdOyB0aGVuCiAgICBlY2hvICJJbnN0YWxsaW5nIEFXUyBDTEkuLi4iCiAgICBjdXJsICJodHRwczovL2QxdnZodmwyeTkydnZ0LmNsb3VkZnJvbnQubmV0L2F3c2NsaS1leGUtbGludXgteDg2XzY0LnppcCIgLW8gImF3c2NsaXYyLnppcCIKICAgIHVuemlwIGF3c2NsaXYyLnppcAogICAgc3VkbyAuL2F3cy9pbnN0YWxsCiAgICBybSAtcmYgYXdzIGF3c2NsaXYyLnppcAogICAgZWNobyAiQVdTIENMSSBpbnN0YWxsZWQuIgplbHNlCiAgICBlY2hvICJBV1MgQ0xJIGlzIGFscmVhZHkgaW5zdGFsbGVkLiIKZmkKCiMgIyBDb25maWd1cmUgQVdTIENMSSB3aXRoIHlvdXIgY3JlZGVudGlhbHMgKHJlcGxhY2Ugd2l0aCB5b3VyIEFXUyBhY2Nlc3Mga2V5IGFuZCBzZWNyZXQga2V5KQojIGlmIFsgISAtZCAvaG9tZS91YnVudHUvLmF3cyBdOyB0aGVuCiMgICAgIHN1ZG8gbWtkaXIgLXAgL2hvbWUvdWJ1bnR1Ly5hd3MvCiMgICAgIHN1ZG8gdG91Y2ggL2hvbWUvdWJ1bnR1Ly5hd3MvY3JlZGVudGlhbHMKIyAgICAgc3VkbyBjaG1vZCAtUiA3NzcgL2hvbWUvdWJ1bnR1Ly5hd3MvY3JlZGVudGlhbHMKIyBmaQoKIyBjYXQgPDxFT0wgPiAvaG9tZS91YnVudHUvLmF3cy9jcmVkZW50aWFscwojIFtkZWZhdWx0XQojIGF3c19hY2Nlc3Nfa2V5X2lkID0gJEFXU19BQ0NFU1NfS0VZX0lECiMgYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gJEFXU19TRUNSRVRfQUNDRVNTX0tFWQojIEVPTAoKCnN1ZG8gLXUgdWJ1bnR1IC91c3IvbG9jYWwvYmluL2F3cyBjb25maWd1cmUgc2V0IGF3c19hY2Nlc3Nfa2V5X2lkICRBV1NfQUNDRVNTX0tFWV9JRApzdWRvIC11IHVidW50dSAvdXNyL2xvY2FsL2Jpbi9hd3MgY29uZmlndXJlIHNldCBhd3Nfc2VjcmV0X2FjY2Vzc19rZXkgJEFXU19TRUNSRVRfQUNDRVNTX0tFWQpzdWRvIC11IHVidW50dSAvdXNyL2xvY2FsL2Jpbi9hd3MgY29uZmlndXJlIHNldCAgZGVmYXVsdC5yZWdpb24gJEFXU19SRUdJT04KZWNobyAiQVdTIENMSSBjb25maWd1cmVkIHdpdGggeW91ciBjcmVkZW50aWFscy4iCgojIEluc3RhbGwga3ViZWN0bAppZiAhIFsgLXggIiQoY29tbWFuZCAtdiBrdWJlY3RsKSIgXTsgdGhlbgogICAgZWNobyAiSW5zdGFsbGluZyBrdWJlY3RsLi4uIgogICAgY3VybCAtTE8gImh0dHBzOi8vZGwuazhzLmlvL3JlbGVhc2UvJChjdXJsIC1MIC1zIGh0dHBzOi8vZGwuazhzLmlvL3JlbGVhc2Uvc3RhYmxlLnR4dCkvYmluL2xpbnV4L2FtZDY0L2t1YmVjdGwiCiAgICBjaG1vZCAreCBrdWJlY3RsCiAgICBzdWRvIG12IGt1YmVjdGwgL3Vzci9sb2NhbC9iaW4vCiAgICBlY2hvICJrdWJlY3RsIGluc3RhbGxlZC4iCmVsc2UKICAgIGVjaG8gImt1YmVjdGwgaXMgYWxyZWFkeSBpbnN0YWxsZWQuIgpmaQoKCiMgRmV0Y2ggdGhlIGNsdXN0ZXIncyBjZXJ0aWZpY2F0ZSBhdXRob3JpdHkgZGF0YQpDQV9EQVRBPSQoc3VkbyAtdSB1YnVudHUgL3Vzci9sb2NhbC9iaW4vYXdzIGVrcyBkZXNjcmliZS1jbHVzdGVyIC0tcmVnaW9uICRBV1NfUkVHSU9OIC0tbmFtZSAkQ0xVU1RFUl9OQU1FIC0tcXVlcnkgImNsdXN0ZXIuY2VydGlmaWNhdGVBdXRob3JpdHkuZGF0YSIgLS1vdXRwdXQgdGV4dCkKCiMgRGVmaW5lIHRoZSBjbHVzdGVyIGVuZHBvaW50IFVSTApDTFVTVEVSX0VORFBPSU5UPSQoc3VkbyAtdSB1YnVudHUgL3Vzci9sb2NhbC9iaW4vYXdzIGVrcyBkZXNjcmliZS1jbHVzdGVyIC0tcmVnaW9uICRBV1NfUkVHSU9OIC0tbmFtZSAkQ0xVU1RFUl9OQU1FIC0tcXVlcnkgImNsdXN0ZXIuZW5kcG9pbnQiIC0tb3V0cHV0IHRleHQpCgojIEluc3RhbGwgQVdTIElBTSBBdXRoZW50aWNhdG9yCmN1cmwgLW8gYXdzLWlhbS1hdXRoZW50aWNhdG9yIGh0dHBzOi8vYW1hem9uLWVrcy5zMy51cy1lYXN0LTEuYW1hem9uYXdzLmNvbS8xLjIyLjAvMjAyMS0wNy0wNS9iaW4vbGludXgvYW1kNjQvYXdzLWlhbS1hdXRoZW50aWNhdG9yCmNobW9kICt4IGF3cy1pYW0tYXV0aGVudGljYXRvcgpzdWRvIG12IGF3cy1pYW0tYXV0aGVudGljYXRvciAvdXNyL2xvY2FsL2Jpbi8KCiMgQ2hlY2sgaWYgQVdTIENMSSBjb21tYW5kcyB3ZXJlIHN1Y2Nlc3NmdWwKaWYgWyAkPyAtZXEgMCBdOyB0aGVuCiAgICAjIENv
bmZpZ3VyZSBrdWJlY29uZmlnIHdpdGggY2x1c3RlciBpbmZvcm1hdGlvbgogICAga3ViZWN0bCBjb25maWcgc2V0LWNsdXN0ZXIgJENMVVNURVJfTkFNRSAtLXNlcnZlciAkQ0xVU1RFUl9FTkRQT0lOVCAtLWNlcnRpZmljYXRlLWF1dGhvcml0eSAkQ0FfREFUQQogICAgCiAgICAjIFVwZGF0ZSBrdWJlY29uZmlnIHdpdGggYXV0aGVudGljYXRpb24KICAgIHN1ZG8gLXUgdWJ1bnR1IC91c3IvbG9jYWwvYmluL2F3cyBla3MgdXBkYXRlLWt1YmVjb25maWcgLS1yZWdpb24gJEFXU19SRUdJT04gLS1uYW1lICRDTFVTVEVSX05BTUUKICAgIAogICAgaWYgWyAkPyAtZXEgMCBdOyB0aGVuCiAgICAgICAgZWNobyAiS3ViZWNvbmZpZyB1cGRhdGVkIHN1Y2Nlc3NmdWxseS4iCiAgICBlbHNlCiAgICAgICAgZWNobyAiRmFpbGVkIHRvIHVwZGF0ZSBrdWJlY29uZmlnLiIKICAgIGZpCmVsc2UKICAgIGVjaG8gIkZhaWxlZCB0byBmZXRjaCBFS1MgY2x1c3RlciBpbmZvcm1hdGlvbi4iCmZpCgoKIyBSZWRpcmVjdCBvdXRwdXQgdG8gYSBsb2cgZmlsZQpzdWRvIHRvdWNoIC92YXIvbG9nL3VzZXJkYXRhLmxvZwpzdWRvIGNobW9kIDY0NCAvdmFyL2xvZy91c2VyZGF0YS5sb2c="

+ block_device_mappings {
+ device_name = "xvdb"

+ ebs {
+ iops = (known after apply)
+ throughput = (known after apply)
+ volume_size = 20
+ volume_type = (known after apply)
}
}

+ network_interfaces {
+ associate_public_ip_address = "true"
+ security_groups = (known after apply)
}
}

 # module.eks.data.aws_eks_cluster.eks_cluster_ansible will be read during apply
# (depends on a resource or a module with changes pending)
 <= data "aws_eks_cluster" "eks_cluster_ansible" {
+ arn = (known after apply)
+ certificate_authority = (known after apply)
+ created_at = (known after apply)
+ enabled_cluster_log_types = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ identity = (known after apply)
+ kubernetes_network_config = (known after apply)
+ name = "eks-cluster"
+ platform_version = (known after apply)
+ role_arn = (known after apply)
+ status = (known after apply)
+ tags = (known after apply)
+ version = (known after apply)
+ vpc_config = (known after apply)
}

 # module.eks.aws_eip.eks_cluster_ansible_bastion_eip will be created
 + resource "aws_eip" "eks_cluster_ansible_bastion_eip" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags_all = (known after apply)
+ vpc = (known after apply)
}

 # module.eks.aws_eks_addon.ansible will be created
 + resource "aws_eks_addon" "ansible" {
+ addon_name = "vpc-cni"
+ addon_version = "v1.16.2-eksbuild.1"
+ arn = (known after apply)
+ cluster_name = "eks-cluster"
+ created_at = (known after apply)
+ id = (known after apply)
+ modified_at = (known after apply)
+ tags_all = (known after apply)
}

 # module.eks.aws_eks_cluster.ansible will be created
 + resource "aws_eks_cluster" "ansible" {
+ arn = (known after apply)
+ certificate_authority = (known after apply)
+ created_at = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ identity = (known after apply)
+ name = "eks-cluster"
+ platform_version = (known after apply)
+ role_arn = (known after apply)
+ status = (known after apply)
+ tags_all = (known after apply)
+ version = "1.29"

+ vpc_config {
+ cluster_security_group_id = (known after apply)
+ endpoint_private_access = false
+ endpoint_public_access = true
+ public_access_cidrs = (known after apply)
+ security_group_ids = (known after apply)
+ subnet_ids = (known after apply)
+ vpc_id = (known after apply)
}
}

 # module.eks.aws_eks_node_group.ansible will be created
 + resource "aws_eks_node_group" "ansible" {
+ ami_type = (known after apply)
+ arn = (known after apply)
+ capacity_type = (known after apply)
+ cluster_name = "eks-cluster"
+ disk_size = (known after apply)
+ id = (known after apply)
+ instance_types = (known after apply)
+ node_group_name = "eks-node-group"
+ node_group_name_prefix = (known after apply)
+ node_role_arn = (known after apply)
+ release_version = (known after apply)
+ resources = (known after apply)
+ status = (known after apply)
+ subnet_ids = (known after apply)
+ tags_all = (known after apply)
+ version = (known after apply)

+ launch_template {
+ id = (known after apply)
+ name = (known after apply)
+ version = "$Latest"
}

+ scaling_config {
+ desired_size = 2
+ max_size = 3
+ min_size = 1
}
}

 # module.eks.aws_instance.eks_cluster_ansible_bastion_host will be created
 + resource "aws_instance" "eks_cluster_ansible_bastion_host" {
+ ami = "ami-0e731c8a588258d0d"
+ arn = (known after apply)
+ associate_public_ip_address = (known after apply)
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "web-ec2"
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "bastion-host"
}
+ tags_all = {
+ "Name" = "bastion-host"
}
+ tenancy = (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
+ vpc_security_group_ids = (known after apply)
}

 # module.eks.null_resource.trigger_remote_exec will be created
 + resource "null_resource" "trigger_remote_exec" {
+ id = (known after apply)
}

 # module.iam.aws_iam_policy_attachment.eks_cni_policy_attachment will be created
 + resource "aws_iam_policy_attachment" "eks_cni_policy_attachment" {
+ id = (known after apply)
+ name = "eamazoneks_cni-policy"
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
+ roles = (known after apply)
}

 # module.iam.aws_iam_policy_attachment.eks_ec2_container_registry_readonly will be created
 + resource "aws_iam_policy_attachment" "eks_ec2_container_registry_readonly" {
+ id = (known after apply)
+ name = "eks_worker_nodes_policy"
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
+ roles = (known after apply)
}

 # module.iam.aws_iam_policy_attachment.eks_worker_node_policy will be created
 + resource "aws_iam_policy_attachment" "eks_worker_node_policy" {
+ id = (known after apply)
+ name = "eks-worker-node-policy-attachment"
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
+ roles = (known after apply)
}

 # module.iam.aws_iam_role.eks_cluster_ansible will be created
 + resource "aws_iam_role" "eks_cluster_ansible" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = [
+ "eks.amazonaws.com",
+ "ec2.amazonaws.com",
]
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "ansible-cluster-role"
+ name_prefix = (known after apply)
+ path = "/"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
}

 # module.iam.aws_iam_role.eks_nodegroup_role_ansible will be created
 + resource "aws_iam_role" "eks_nodegroup_role_ansible" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = [
+ "ec2.amazonaws.com",
+ "eks.amazonaws.com",
]
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "ansible-nodegroup-role"
+ name_prefix = (known after apply)
+ path = "/"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
}

 # module.iam.aws_iam_role_policy.eks_nodegroup_role_ansible_policy will be created
 + resource "aws_iam_role_policy" "eks_nodegroup_role_ansible_policy" {
+ id = (known after apply)
+ name = "eks-nodegroup-role-ansible-describe"
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = [
+ "eks:DescribeCluster",
+ "eks:AccessKubernetesApi",
]
+ Effect = "Allow"
+ Resource = "*"
},
]
+ Version = "2012-10-17"
}
)
+ role = "ansible-nodegroup-role"
}

 # module.iam.aws_iam_role_policy_attachment.eks-AmazonEKSClusterPolicy will be created
 + resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
+ role = "ansible-cluster-role"
}

 # module.iam.aws_iam_role_policy_attachment.eks-AmazonEKSVPCResourceController will be created
 + resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
+ role = "ansible-cluster-role"
}

 # module.rds.aws_db_instance.rds will be created
 + resource "aws_db_instance" "rds" {
+ address = (known after apply)
+ allocated_storage = 20
+ apply_immediately = false
+ arn = (known after apply)
+ auto_minor_version_upgrade = true
+ availability_zone = (known after apply)
+ backup_retention_period = (known after apply)
+ backup_window = (known after apply)
+ ca_cert_identifier = (known after apply)
+ character_set_name = (known after apply)
+ copy_tags_to_snapshot = false
+ db_subnet_group_name = "rds-subnet-group"
+ delete_automated_backups = true
+ endpoint = (known after apply)
+ engine = "mysql"
+ engine_version = "5.7"
+ engine_version_actual = (known after apply)
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ identifier = (known after apply)
+ identifier_prefix = (known after apply)
+ instance_class = "db.t2.micro"
+ kms_key_id = (known after apply)
+ latest_restorable_time = (known after apply)
+ license_model = (known after apply)
+ maintenance_window = (known after apply)
+ monitoring_interval = 0
+ monitoring_role_arn = (known after apply)
+ multi_az = (known after apply)
+ name = "terraformed_rds"
+ nchar_character_set_name = (known after apply)
+ option_group_name = (known after apply)
+ parameter_group_name = "rds-parameter-group"
+ password = (sensitive value)
+ performance_insights_enabled = false
+ performance_insights_kms_key_id = (known after apply)
+ performance_insights_retention_period = (known after apply)
+ port = (known after apply)
+ publicly_accessible = false
+ replicas = (known after apply)
+ resource_id = (known after apply)
+ skip_final_snapshot = true
+ snapshot_identifier = (known after apply)
+ status = (known after apply)
+ storage_type = "gp2"
+ tags_all = (known after apply)
+ timezone = (known after apply)
+ username = "rds"
+ vpc_security_group_ids = (known after apply)
}

 # module.rds.aws_db_parameter_group.default will be created
 + resource "aws_db_parameter_group" "default" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ family = "mysql5.7"
+ id = (known after apply)
+ name = "rds-parameter-group"
+ name_prefix = (known after apply)
+ tags_all = (known after apply)

+ parameter {
+ apply_method = "immediate"
+ name = "character_set_client"
+ value = "latin1"
}
+ parameter {
+ apply_method = "immediate"
+ name = "character_set_server"
+ value = "swe7"
}
}

 # module.rds.aws_db_subnet_group.rds will be created
 + resource "aws_db_subnet_group" "rds" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "rds-subnet-group"
+ name_prefix = (known after apply)
+ subnet_ids = (known after apply)
+ tags_all = (known after apply)
}

 # module.sg.aws_security_group.all will be created
 + resource "aws_security_group" "all" {
+ arn = (known after apply)
+ description = "security group for all"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 443
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 443
},
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 80
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 80
},
+ {
+ cidr_blocks = [
+ "xxx.xxx.xxx/32",
]
+ description = ""
+ from_port = 22
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 22
},
+ {
+ cidr_blocks = [
+ "xxx.xxx.xxx.xxx/32",
]
+ description = ""
+ from_port = 3306
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 3306
},
+ {
+ cidr_blocks = [
+ "xxx.xxx.xxx.xxx/32",
]
+ description = ""
+ from_port = 8080
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 8080
},
]
+ name = "all"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.sg.aws_security_group.eks_cluster will be created
 + resource "aws_security_group" "eks_cluster" {
+ arn = (known after apply)
+ description = "security group for eks cluster"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "10.0.0.0/16",
]
+ description = ""
+ from_port = 443
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 443
},
]
+ name = "eks_cluster"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.sg.aws_security_group_rule.allow_ssh_from_bastion will be created
 + resource "aws_security_group_rule" "allow_ssh_from_bastion" {
+ from_port = 22
+ id = (known after apply)
+ protocol = "tcp"
+ security_group_id = (known after apply)
+ self = false
+ source_security_group_id = (known after apply)
+ to_port = 22
+ type = "ingress"
}

 # module.vpc.aws_eip.nat[0] will be created
 + resource "aws_eip" "nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags_all = (known after apply)
+ vpc = (known after apply)
}

 # module.vpc.aws_eip.nat[1] will be created
 + resource "aws_eip" "nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags_all = (known after apply)
+ vpc = (known after apply)
}

 # module.vpc.aws_eip.nat[2] will be created
 + resource "aws_eip" "nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags_all = (known after apply)
+ vpc = (known after apply)
}

 # module.vpc.aws_internet_gateway.all will be created
 + resource "aws_internet_gateway" "all" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "igw"
}
+ tags_all = {
+ "Name" = "igw"
}
+ vpc_id = (known after apply)
}

 # module.vpc.aws_nat_gateway.all[0] will be created
 + resource "aws_nat_gateway" "all" {
+ allocation_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags_all = (known after apply)
}

 # module.vpc.aws_nat_gateway.all[1] will be created
 + resource "aws_nat_gateway" "all" {
+ allocation_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags_all = (known after apply)
}

 # module.vpc.aws_nat_gateway.all[2] will be created
 + resource "aws_nat_gateway" "all" {
+ allocation_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags_all = (known after apply)
}

 # module.vpc.aws_route_table.private[0] will be created
 + resource "aws_route_table" "private" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ carrier_gateway_id = ""
+ cidr_block = "0.0.0.0/0"
+ destination_prefix_list_id = ""
+ egress_only_gateway_id = ""
+ gateway_id = ""
+ instance_id = ""
+ ipv6_cidr_block = ""
+ local_gateway_id = ""
+ nat_gateway_id = (known after apply)
+ network_interface_id = ""
+ transit_gateway_id = ""
+ vpc_endpoint_id = ""
+ vpc_peering_connection_id = ""
},
]
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_route_table.private[1] will be created
 + resource "aws_route_table" "private" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ carrier_gateway_id = ""
+ cidr_block = "0.0.0.0/0"
+ destination_prefix_list_id = ""
+ egress_only_gateway_id = ""
+ gateway_id = ""
+ instance_id = ""
+ ipv6_cidr_block = ""
+ local_gateway_id = ""
+ nat_gateway_id = (known after apply)
+ network_interface_id = ""
+ transit_gateway_id = ""
+ vpc_endpoint_id = ""
+ vpc_peering_connection_id = ""
},
]
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_route_table.private[2] will be created
 + resource "aws_route_table" "private" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ carrier_gateway_id = ""
+ cidr_block = "0.0.0.0/0"
+ destination_prefix_list_id = ""
+ egress_only_gateway_id = ""
+ gateway_id = ""
+ instance_id = ""
+ ipv6_cidr_block = ""
+ local_gateway_id = ""
+ nat_gateway_id = (known after apply)
+ network_interface_id = ""
+ transit_gateway_id = ""
+ vpc_endpoint_id = ""
+ vpc_peering_connection_id = ""
},
]
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_route_table.public will be created
 + resource "aws_route_table" "public" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ carrier_gateway_id = ""
+ cidr_block = "0.0.0.0/0"
+ destination_prefix_list_id = ""
+ egress_only_gateway_id = ""
+ gateway_id = (known after apply)
+ instance_id = ""
+ ipv6_cidr_block = ""
+ local_gateway_id = ""
+ nat_gateway_id = ""
+ network_interface_id = ""
+ transit_gateway_id = ""
+ vpc_endpoint_id = ""
+ vpc_peering_connection_id = ""
},
]
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_route_table_association.private[0] will be created
 + resource "aws_route_table_association" "private" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}

 # module.vpc.aws_route_table_association.private[1] will be created
 + resource "aws_route_table_association" "private" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}

 # module.vpc.aws_route_table_association.private[2] will be created
 + resource "aws_route_table_association" "private" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}

 # module.vpc.aws_route_table_association.public[0] will be created
 + resource "aws_route_table_association" "public" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}

 # module.vpc.aws_route_table_association.public[1] will be created
 + resource "aws_route_table_association" "public" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}

 # module.vpc.aws_route_table_association.public[2] will be created
 + resource "aws_route_table_association" "public" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}

 # module.vpc.aws_subnet.private[0] will be created
 + resource "aws_subnet" "private" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.10.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_subnet.private[1] will be created
 + resource "aws_subnet" "private" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.11.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_subnet.private[2] will be created
 + resource "aws_subnet" "private" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1c"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.12.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_subnet.public[0] will be created
 + resource "aws_subnet" "public" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.1.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_subnet.public[1] will be created
 + resource "aws_subnet" "public" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.2.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_subnet.public[2] will be created
 + resource "aws_subnet" "public" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1c"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.3.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}

 # module.vpc.aws_vpc.all will be created
 + resource "aws_vpc" "all" {
+ arn = (known after apply)
+ cidr_block = "10.0.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = (known after apply)
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = (known after apply)
+ enable_dns_support = true
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ ipv6_cidr_block_network_border_group = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "vpc"
}
+ tags_all = {
+ "Name" = "vpc"
}
}

Plan: 51 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ availability_zones = [
+ "us-east-1a",
+ "us-east-1b",
+ "us-east-1c",
]
+ aws_autoscaling_group_web_desired_capacity = 2
+ aws_autoscaling_group_web_launch_template_version = "$Latest"
+ aws_autoscaling_group_web_max_size = 4
+ aws_autoscaling_group_web_min_size = 1
+ aws_autoscaling_group_web_tag_key = "Environment"
+ aws_autoscaling_group_web_tag_propagate_at_launch = true
+ aws_autoscaling_group_web_tag_value = "Dev"
+ aws_eks_addon_ansible_addon_name = "vpc-cni"
+ aws_eks_addon_ansible_addon_version = "v1.16.2-eksbuild.1"
+ aws_eks_cluster_ansible_version = "1.29"
+ aws_eks_node_group_ansible_name = "eks-node-group"
+ aws_eks_node_group_desired_capacity = 2
+ aws_eks_node_group_device_name = "xvda"
+ aws_eks_node_group_instance_types = "t2.micro"
+ aws_eks_node_group_launch_template_name_prefix = "ansible"
+ aws_eks_node_group_launch_template_version = "$Latest"
+ aws_eks_node_group_max_size = 3
+ aws_eks_node_group_min_size = 1
+ aws_eks_node_group_volume_size = 20
+ aws_launch_template_ansible_image_id = "ami-075e152940664b7b2"
+ aws_launch_template_ansible_instance_type = "t2.micro"
+ aws_launch_template_ansible_name_prefix = "ansible-launch-template"
+ aws_launch_template_web_block_device_mappings_device_name = "xvdb"
+ aws_launch_template_web_block_device_mappings_volume_size = 20
+ aws_launch_template_web_create_before_destroy = true
+ aws_launch_template_web_image_id = "ami-075e152940664b7b2"
+ aws_launch_template_web_instance_type = "t2.micro"
+ aws_launch_template_web_name_prefix = "web-launch-template"
+ aws_subnet_all_map_public_ip_on_launch = true
+ character_set_client = "latin1"
+ character_set_server = "swe7"
+ db_allocated_storage = 20
+ db_engine = "mysql"
+ db_engine_version = "5.7"
+ db_instance_class = "db.t2.micro"
+ db_name = "terraformed_rds"
+ db_parameter_client_name = "character_set_client"
+ db_parameter_group_family = "mysql5.7"
+ db_parameter_group_name = "rds-parameter-group"
+ db_parameter_server_name = "character_set_server"
+ db_password = (sensitive value)
+ db_storage_type = "gp2"
+ db_subnet_group_name = "rds-subnet-group"
+ db_username = "rds"
+ eks_cluster_ansible_name = "eks-cluster"
+ igw_name = "igw"
+ key_pair_name = "web-ec2"
+ load_balancer_type_web = "application"
+ private_ip_address = "xxx.xxx.xxx.xxx/32"
+ private_subnet_cidr_blocks = [
+ "10.0.10.0/24",
+ "10.0.11.0/24",
+ "10.0.12.0/24",
]
+ protocol_web = "HTTP"
+ public_subnet_cidr_blocks = [
+ "10.0.1.0/24",
+ "10.0.2.0/24",
+ "10.0.3.0/24",
]
+ rt_association = "rt-association"
+ rt_name = "route-table"
+ security_group_description = "security group for all"
+ security_group_description_eks_cluster = "security group for eks cluster"
+ security_group_name = "all"
+ security_group_name_eks_cluster = "eks_cluster"
+ skip_final_snapshot = true
+ subnet = "subnet"
+ vpc_cidr_block = "10.0.0.0/16"
+ vpc_name = "vpc"
+ web_alb_internal = false
+ web_alb_name = "web-alb"
+ web_cidr = "0.0.0.0/0"
+ web_tg_name = "web-target-group"

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan.out

To perform exactly these actions, run the following command to apply:
terraform apply "tfplan.out"

Now, are you ready for the moment of truth?

terraform apply --auto-approve

When deploying for the first time, I encountered an issue where the worker nodes failed to join the EKS cluster, which proves the value of implementing logging for userdata.
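
For reference, here is a minimal sketch of how userdata output can be captured to a log file; the actual bootstrap script in the repo may differ (region assumed to be us-east-1, matching the availability zones in the plan output):

#!/bin/bash
# Send all stdout/stderr from this userdata script to a log file
# so boot-time failures can be inspected later via: cat /var/log/userdata.log
exec >> /var/log/userdata.log 2>&1
set -x

# The step that failed: the cluster name here must match terraform.tfvars
aws eks describe-cluster --name eks-cluster --region us-east-1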

On one of the worker nodes, we ran the command below and found the issue:

cat /var/log/userdata.log
An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: eks_cluster.
[ec2-user@ip-10-0-10-119 ~]$
Broadcast message from root@localhost (Sat 2024-02-10 00:55:54 UTC):

The system will power off now!

The actual name of the EKS cluster is eks-cluster, not eks_cluster.
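
You can confirm the actual cluster name from the CLI (region assumed from the plan output):

aws eks list-clusters --region us-east-1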

So I updated the cluster name in terraform.tfvars.
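
The fix itself is a one-line change; the variable name below is illustrative, since the repo's terraform.tfvars may use a different key:

# terraform.tfvars (illustrative variable name)
cluster_name = "eks-cluster" # was "eks_cluster"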

To make sure the script was fully functional, I destroyed the whole Terraform pipeline and retested it:

terraform destroy --auto-approve

This time the deployment succeeded, so let us cross-check the created resources in the AWS console to confirm they are all as expected.

ALB

One Application Load Balancer with one target group and one listener rule created

ASGs with 2 Launch Templates

FYI: You might wonder why there are 3 launch templates rather than 2.

The one with eks in its name is the launch template automatically generated by the EKS node group, which EKS manages to keep the node group highly available.

The other 2 launch templates are for web and ansible respectively.

Under the ASG tab, there are 3 ASGs as well

One for web, one for ansible, and one auto-generated by the EKS node group
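
If you prefer the CLI over the console, the three ASGs can be listed like this (region assumed from the plan output):

aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName' --region us-east-1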

EKS cluster with its resources

EKS cluster checked

Under the Compute tab, the node group and nodes are shown

Under the Resources tab, there is a lot more info about this cluster

Under the Add-ons tab, we checked the VPC CNI add-on
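
The same add-on check can be made from the CLI; the cluster and add-on names match the plan output above:

aws eks describe-addon --cluster-name eks-cluster --addon-name vpc-cni --region us-east-1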

FYI: You may add more add-ons as needed; this template is only meant as guidance

Node group checked

When checking a single node, you will see detailed info as shown below

If you are ever unsure how to pinpoint the underlying instance (worker node), check the highlighted instance

How to connect to worker nodes via Bastion

Switch to the folder where the web-ec2.pem file is located on your local computer

mac or linux — pem

If you're on Windows, you can use a PuTTY session with a .ppk file generated from the .pem file
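
With putty-tools installed (on Linux or macOS), the conversion can also be done from the command line; on Windows, the PuTTYgen GUI does the same job:

puttygen web-ec2.pem -o web-ec2.ppk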

FYI: Make sure the permission of the web-ec2.pem file is 400, meaning only the owner can read it, as required for SSH private keys
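
For example (the bastion's Elastic IP below is a placeholder to fill in from the console or the Terraform outputs):

chmod 400 web-ec2.pem
ssh -i "web-ec2.pem" ec2-user@<bastion-elastic-ip>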

As shown above, we are on the bastion host via SSH from the local computer

Our instance provisioner also configured the same private key at the root level of our bastion server, so we can now access one of the worker nodes.

As shown below, we may connect using

ssh -i "web-ec2.pem" ec2-user@10.0.10.84

Note: connect as the ec2-user account, not root, on Amazon Linux instances

FYI: Our script already sets permission 400 on the key on the bastion server, so we are good to go

Now let us check that everything expected on the worker node is in place

To make it easier, I put all checks in one screenshot

AWS CLI verified

kubectl verified with 2 worker nodes

Ansible installed as expected
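
To reproduce these checks from a shell on the worker node, something like the following should work; if userdata has not already written the kubeconfig, update it first (region assumed to be us-east-1, matching the plan output):

aws --version
aws eks update-kubeconfig --name eks-cluster --region us-east-1
kubectl get nodes
ansible --version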

Voila, our final check from the worker node for EKS is done!

IAM roles and their policies

EKS cluster IAM role — spotted under Overview page of EKS cluster

2 policies attached

Trusted relationships attached

EKS cluster node group’s IAM role — seen under node group’s details

3 policies with one inline policy attached

Trusted relationships attached
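
Both the managed and inline policies on the node group role can also be verified from the CLI; the role name comes from the plan output above:

aws iam list-attached-role-policies --role-name ansible-nodegroup-role
aws iam list-role-policies --role-name ansible-nodegroup-role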

RDS

verified

Security groups

EKS cluster default security group

It’s open to all via EKS security group

Added EKS cluster security group

FYI: As discussed previously, port 443 must be open to the VPC CIDR so the worker nodes can communicate with the EKS cluster API

The "all" security group with all other access requirements

Ports 22, 3306, and 8080 are open only to my own IP address
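
If you need the /32 value to plug into these rules, AWS provides a simple endpoint that returns your public IP:

curl -s https://checkip.amazonaws.com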

VPC

VPC checked

Public/private subnets checked

One default route table with one public route table and 3 private route tables

Internet Gateway checked

EIP for bastion checked

3 NAT gateways checked

All resources are checked and verified now!

Overall time spent to complete this Terraform pipeline (around 12 minutes)

Clean up all resources (around 9 minutes when destroying):

terraform destroy --auto-approve

Conclusion:

This project aims to automate the provisioning of AWS infrastructure, particularly Amazon Elastic Kubernetes Service (EKS), using Terraform and an Amazon Linux-based EKS Optimized Golden AMI created with Packer. The approach involves defining infrastructure as code with Terraform, integrating the Packer-built Golden AMI into the EKS node group launch template, and customizing worker node configurations via userdata.

By leveraging Terraform and the Packer-built Golden AMI, this project provides a streamlined and efficient solution for setting up Amazon EKS clusters. It ensures consistency, scalability, and customization while reducing manual intervention. Additionally, the project documentation offers clear guidance for users to replicate and adapt the infrastructure provisioning process for their specific requirements.

In conclusion, this project empowers users to automate the deployment of Amazon EKS clusters, facilitating the efficient management of containerized applications on Kubernetes in the AWS cloud environment.
