6. AWS Cloud Architecture Building using Terraform (00-main.tf, 01-variables.tf, 02-VPC.tf)

Julia R
18 min read · Jul 29, 2023


This article is about building AWS cloud architecture with Terraform. It is part of a project where everything is automated: from the very beginning of data collection to publishing the data analytics on the website, there is no manual process.

More can be found on GitHub.

Architecture overview

Terraform is an open-source infrastructure-as-code tool that lets you create a well-architected, secure, and efficient cloud infrastructure.

Terraform helps achieve a fully automated process by building and uploading the website resources covered in the 5 previous articles:
1. Data Collection using Python
2. AWS Lambda & ECS (Elastic Container service) in Data ETL with Python (Part 1 — Parent / Child Function)
3. AWS Lambda & ECS (Elastic Container service) in Data Processing with Python (Part 2 — Docker Container)
4. Data Visualization on the website hosted in AWS S3 (HTML, JavaScript, CSS)
5. Stored Procedures for Database Init, Data ETL and Data Reporting with MySQL

Specifically in this article:
Let’s discuss all the AWS resources needed for the project (VPC, ALB, Route53, ASG, S3, RDS, ECS, Lambda, EC2, Cloudfront, ACM, WAF, SQS, SNS, CloudWatch, IAM…).

The article would be very long if I put all the .tf files together, so I divided them into several articles. They are separate, but the files are closely related to each other in one working folder.

============> 00-main.tf

#=============================================================================================
#Below is the Terraform configuration to build the website in the AWS Cloud
#=============================================================================================
#Terraform
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0.0"
}
}

required_version = ">= 1.2.5"
}
###########################################################
#the region will be obtained according to var.environment
#development, qa, staging, production
provider "aws" {
profile = "default"
region = "${lookup(var.aws_region, var.environment)}"
allowed_account_ids = local.AccountList
}
#common prefix for all resources' names and tags
locals {
prefix="here is the string"
#Below is to get timestamp for naming AWS resources
current_timestamp = timestamp()
current_day = formatdate("YYYY-MM-DD", local.current_timestamp)
current_time = formatdate("hh:mm:ss", local.current_timestamp)
current_day_name = formatdate("EEEE", local.current_timestamp)
}
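#Below is a hedged illustration (not part of the project's code) of how the date strings
#above could be stamped into a resource tag; note that timestamp() is re-evaluated on
#every plan, so such a tag will show a diff on each apply
/*
resource "aws_s3_bucket" "tag_example" {
  bucket = "${local.prefix}-tag-example"
  tags = {
    Name    = "${local.prefix}-tag-example"
    BuiltOn = "${local.current_day} ${local.current_time} (${local.current_day_name})"
  }
}
*/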

#===========================================================
# Below is to configure the backend for terraform state file
# The actual values (e.g., bucket, region) are supplied via backend.hcl
# Run 'terraform init -backend-config=backend.hcl' when setup
terraform {
backend "s3" {
bucket = "here is the bucket name"
key = "here is the key for tfstate file"
region = "here is your region"
dynamodb_table = "here is your dynamodb table name"
encrypt = true
}
}

Note:

  1. A .tfstate file keeps a record of the resources every time we create or modify the cloud infrastructure. Unfortunately it is stored in plain text, so we keep it encrypted in a remote S3 bucket, with a DynamoDB table for state locking. Meanwhile, in my local Terraform working folder, I deleted the .tfstate file and its backup file as well. (A placeholder backend.hcl sketch follows these notes.)
  2. There might be other methods to protect the state file. I found the combination of S3 and a DynamoDB table the easiest to understand and configure. By the way, the S3 bucket and DynamoDB table are encrypted by default when we build them.
  3. It is important to keep your data center secure, whether it is in the cloud or not. I may write an article to summarize how I enhanced cloud security in this project.
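For reference, a placeholder backend.hcl of the kind mentioned in 00-main.tf might look like the following (all values are placeholders; fill in your own and keep the file out of any public repository):

# backend.hcl (placeholder values)
bucket         = "your-tfstate-bucket-name"
key            = "path/to/terraform.tfstate"
region         = "your-aws-region"
dynamodb_table = "your-lock-table-name"
encrypt        = true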

======================> 01-variables.tf

# A variable block only accepts hard-coded default values,
# while a locals block can use functions and derived values
#============================================================
# for cloudfront, we need one provider other than default
# as AWS requires the certificate for Cloudfront to be issued in us-east-1
provider "aws" {
alias = "acm_provider"
region = "us-east-1"
}
#============================================================
#### AWS Region ####
# below is an example of changing regions
# in different phases of the project.

variable "aws_region" {
description = "AWS Region"
type = map(string)
default = {
"development" = "aws-region-1"
"qa" = "aws-region-2"
"staging" = "aws-region-3"
"production" = "aws-region-4"
}
}
locals {
aws_region="${lookup(var.aws_region, var.environment)}"
}
#============================================================
#### AWS Environment ####
#below is to decide which environment to use
#using the environment value we can get the desired aws region
variable "environment" {
type = string
description = "Options: development, qa, staging, production"
default = "development"
}
#### AWS Account No. ####
#below are the AWS accounts used in this environment
locals {
AccountIds={
"account0"={
"email"="your email address"
"id"="your AWS account No."
},
"account1"={
"email"="your 2nd email address"
"id"="your 2nd AWS account No."
}
}

}
#below is to convert the above MAP to a LIST
#this list is provided to the provider's allowed_account_ids:
locals {
AccountList=[
for r in local.AccountIds:"${r.id}"
]
}
#below is to get a specific account id
locals {
# to get the AccountID your data center is built in
AccountID=local.AccountList[0]
}
#============================================================
#System name can be optional
#it just helps to tell which project the resources were built by
variable "system_name" {
default="here is the string"
}
#============================================================
#### Website Domain, Subdomains ####
# below are domains for your resources (Cloudfront, ALB, S3)
locals {
domain_name="example.com"
}
locals {
domain_name_alb="alb.example.com"
}
locals {
domain_name_cf="cf.example.com"
domain_name_cf_s3="cfs3.example.com"
}
locals {
domain_name_subdomain_s3="www.example.com"
}
#============================================================
#### S3 Buckets ####
#below is the bucket that stores the uploaded data
locals {
bucket_name_for_db = "${local.prefix}-upload"
}
#below is the bucket for domain_name:
locals {
bucket_name_for_web = "www.${local.domain_name}"
}
#============================================================
#### S3 Buckets ####
#below is the bucket to backup or store report data
#it is pre-built and won't be destroyed with Terraform Commands
variable "bucket_for_backup_sourcedata" {
description="the bucket to backup uploaded files and to provide source data for BI tool"
default="here is your bucket name"
}
variable "bucket_arn_for_backup_sourcedata" {
description="the bucket to backup uploaded files and to provide source data for BI tool"
default = "arn:aws:s3:::your_bucket_name"
}
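#Below is a hedged, optional sketch: because that bucket is pre-built and not managed
#here, it can be referenced read-only through a data source when other resources
#(an IAM policy, for example) need its attributes
/*
data "aws_s3_bucket" "backup_sourcedata" {
  bucket = var.bucket_for_backup_sourcedata
}
# data.aws_s3_bucket.backup_sourcedata.arn could then be used instead of the hard-coded ARN above
*/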
#============================================================
#### SQS ####
#below is SQS for Data ETL
variable "sqs_name" {
type=string
default = "s3-sqs-lambda"
}
variable "dlq_name" {
type = string
default = "sqs-lambda-dlq"
}
#below is SQS for Cloudfront Cache Invalidation
variable "sqs_name_data_analysis" {
type=string
default = "data-analysis"
}
variable "dlq_name_data_analysis" {
type = string
default = "data-analysis-dlq"
}
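#Below is only an illustration (the project's real SQS resources live in their own .tf file)
#of how a queue and its dead-letter queue are typically wired together with a redrive policy
/*
resource "aws_sqs_queue" "etl_dlq_example" {
  name = "${local.prefix}-${var.dlq_name}"
}
resource "aws_sqs_queue" "etl_queue_example" {
  name = "${local.prefix}-${var.sqs_name}"
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.etl_dlq_example.arn
    maxReceiveCount     = 3
  })
}
*/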
#============================================================
#### SNS ####
#below are variables for SNS
#in this project, Lambda and CloudWatch publish messages through SNS topics
variable "sns_topic_name" {
type = string
description = "sns topic name"
default = "here is the word you define as success or any topic for yes"
}
locals {
sns_topic_name="${local.prefix}-topic-${var.sns_topic_name}"
}
variable "sns_topic_name2" {
type = string
description = "sns topic name2"
default = "here is the word you define as failure or any topic for no"
}
locals {
sns_topic_name2="${local.prefix}-topic-${var.sns_topic_name2}"
}
variable "sns_topic_sqs_alert" {
type = string
description = "sns topic sqs_alert"
default = "dead-letter-queue"
}
locals {
sns_topic_sqs_alert="${local.prefix}-topic-${var.sns_topic_sqs_alert}"
}
locals {
sns_topics_name=[local.sns_topic_name,local.sns_topic_name2,local.sns_topic_sqs_alert]
}

locals {
# to create ARN of SNS topics on your own
topic_arn_on_success=join(":",["arn:aws:sns","${local.aws_region}","${local.AccountID}","${local.sns_topic_name}"])
topic_arn_on_failure=join(":",["arn:aws:sns","${local.aws_region}","${local.AccountID}","${local.sns_topic_name2}"])
topic_arn_on_dlq=join(":",["arn:aws:sns","${local.aws_region}","${local.AccountID}","${local.sns_topic_sqs_alert}"])
}
locals {
sns_topics_arn=[local.topic_arn_on_success,local.topic_arn_on_failure,local.topic_arn_on_dlq]
}

variable "sns_subscription_email_address_list" {
type = string
description = "List of email addresses as string(space separated)"
default = "1234@example.com"
}
variable "sns_subscription_email_address_list2" {
type = list(string)
description = "List of email addresses as string(space separated)"
default = ["aaa@gmail.com", "bbb@gmail.com","ccc@outlook.com"]
}
variable "sns_subscription_protocol" {
type = string
default = "email"
description = "SNS subscription protocol"
}
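#Below is only an illustration (the project's real SNS resources live in their own .tf file)
#of how the names and email list above could be consumed: one topic per name,
#one email subscription per address
/*
resource "aws_sns_topic" "topics_example" {
  count = length(local.sns_topics_name)
  name  = local.sns_topics_name[count.index]
}
resource "aws_sns_topic_subscription" "emails_example" {
  count     = length(var.sns_subscription_email_address_list2)
  topic_arn = aws_sns_topic.topics_example[0].arn
  protocol  = var.sns_subscription_protocol
  endpoint  = var.sns_subscription_email_address_list2[count.index]
}
*/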
#============================================================
#### Lambda ####
#below is the layer for lambda function:
#search here for the correct ARN
#https://aws-sdk-pandas.readthedocs.io/en/stable/layers.html
variable "AWSSDKPandas" {
description = "part of the name of aws managed layer version"
default = ":336392948345:layer:AWSSDKPandas-Python39:8"
}
locals {
AWSSDKPandas=join("",["arn:aws:lambda:","${local.aws_region}","${var.AWSSDKPandas}"])
}
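#Below is only an illustration of how the layer ARN built above is attached to a Lambda
#function; the role, package and function name are placeholders, not the project's actual Lambda.tf
/*
resource "aws_lambda_function" "layer_example" {
  function_name = "${local.prefix}-etl-example"
  role          = "arn:aws:iam::${local.AccountID}:role/your-lambda-role" # placeholder
  runtime       = "python3.9"
  handler       = "lambda_function.lambda_handler"
  filename      = "lambda.zip" # placeholder package
  layers        = [local.AWSSDKPandas]
}
*/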
#============================================================
#### Availability Zone ####
#below is to get the availability zones for the region defined previously
data "aws_availability_zones" "available" {
state = "available"
}
#below gives a tuple with the names of the available zones
locals {
availability_zones = data.aws_availability_zones.available.names
# to get the number of AZs in one AWS region
no_az_all="${length(local.availability_zones)}"
}
#output all AZs (optional)
output "all_azs" {
value = data.aws_availability_zones.available.names
description = "All AZs"
}
# different regions have different numbers of AZs; us-east-1, for example, has 6
#Note:
# an EC2 instance type (or ECS with the Fargate launch type) is not supported in every AZ
# an error is thrown when creating the ASG (Auto Scaling group) if AWS finds an unsupported type

# We need to check whether the desired resources are supported in our target region.
# Terraform provides a data source to check this for EC2, but it has no data source listing the AZs that support Fargate.
# If you use EC2, you can write Terraform code to check automatically for you.
# For ECS, unfortunately, there is no way but to add the supported AZs manually in the locals below, according to the AWS docs.
# https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate-Regions.html
# Hopefully, Terraform will add a new resource and help us check for Fargate soon ^.^
locals {
az_for_fargate=["first supporting AZ","second supporting AZ"]
}

# for EC2:
# define the desired instances type:
locals {
instance_type="t2.micro"
instance_type_alternative="t3.micro"
}
# to select the AZs that support my primary instance type
data "aws_ec2_instance_type_offerings" "supports-my-instance" {
filter {
name = "instance-type"
values = [local.instance_type]
}
location_type = "availability-zone"
}
locals {
az_for_pri=data.aws_ec2_instance_type_offerings.supports-my-instance.locations
# get all AZs supporting t2.micro
no_az_for_pri="${length(local.az_for_pri)}"
# get how many AZs support t2.micro in the current AWS region
}
# to find all AZs that support my secondary instance type
data "aws_ec2_instance_type_offerings" "supports-my-instance-2" {
filter {
name = "instance-type"
values = [local.instance_type_alternative]
}
location_type = "availability-zone"
}
locals {
az_for_sec=data.aws_ec2_instance_type_offerings.supports-my-instance-2.locations
#get all AZs supporting t3.micro
}
# after we get AZs for primary and secondary types,
# the 2 sets of AZs can overlap
# below picks out the AZs that do not support the primary instance type (leaving them for the secondary type)
locals {
az_for_diff=[
for az in local.availability_zones:
!contains(local.az_for_pri,az) ? az : ""
]
}
locals {
az_that_support_t3_only=[
for az in local.az_for_diff:
"${az}"
if az != ""
]
# to drop the empty strings from the list,
# because 'az_for_diff' from the previous step contains empty strings as placeholders
no_az_for_sec="${length(local.az_that_support_t3_only)}"
# to get how many AZs support t3.micro only
}
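#equivalently, the two-step filter above can be collapsed with the built-in setsubtract()
#function; tolist() is needed because setsubtract() returns a set, so the element order
#is not guaranteed
locals {
  az_without_primary_alt = tolist(setsubtract(local.availability_zones, local.az_for_pri))
}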
# now that we have the right AZs for our primary and secondary instance types,
# next is to build subnets for the instances
#============================================================
#### Subnets ####
# next is to build subnets in supported AZs respectively
# the cidr should be in the format of
# x.x.x.x/24 or x.x.x.x/16
#============================================================
#below is to get the first half of cidr for vpc and subnet
# Note:
# the cidr_block should be planned carefully ahead
# you can let Terraform generate them randomly for you,
# as long as you control the IP ranges within which Terraform may generate subnets
variable "cidr_first_half" {
type = map
default = {
development = "xxx.xx"
qa = "xxx.xx"
staging = "xxx.xx"
production = "xxx.xx"
}
}
#below sets the third octet of the cidr
#(the '1' in 172.31.1.x/24)
#and the max number of subnets that should be created

locals {
#------------------------------------------
cidr_c_public_subnets = 10
#public subnets will be xxx.xx.10.0/24,
# xxx.xx.11.0/24...
#for the 2nd instance type:
#public subnets will start from xxx.xx.20.0/24
#------------------------------------------
cidr_c_public_subnets_2 = 20
#------------------------------------------
cidr_c_private_subnets = 30
#private subnet will start from 30
# which is xxx.xx.30.0/24
#------------------------------------------
cidr_c_private_subnets_2 = 40
#the second set of subnets is for the secondary instance type
#------------------------------------------
cidr_c_database_subnets = 50
# the private subnets for database will start from 50
#------------------------------------------
max_private_subnets = 3
max_database_subnets = 3
max_public_subnets = 3
}
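#to make the composition concrete, a subnet CIDR is later assembled from the two halves
#above; for example, the first public subnet resolves to something like "xxx.xx.10.0/24"
locals {
  example_public_cidr_0 = "${lookup(var.cidr_ab, var.environment)}.${local.cidr_c_public_subnets}.0/24"
}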

#====================================================
# below is your own IP for the testing environment:
variable "yourownIP" {
default = "xx.xx.xx.xx/32"
sensitive = true
}
#============================================================
#### Cloudfront -- Geo Restriction ####
locals {
geo_restriction=["US","CA","name_code of the desired countries"]
}
#### Cloudfront -- Custom Header ####
locals {
cf_custom_header ="some characters"
cf_custom_header_value="some characters"
}
# These values can be stored in Secrets Manager
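#Below is a hedged sketch of that option: pulling the header pair from Secrets Manager
#instead of hard-coding it (the secret name and its JSON keys are placeholders)
/*
data "aws_secretsmanager_secret_version" "cf_custom_header" {
  secret_id = "here is the name of your custom-header secret"
}
locals {
  cf_header_secret           = jsondecode(data.aws_secretsmanager_secret_version.cf_custom_header.secret_string)
  cf_custom_header_alt       = local.cf_header_secret["header_name"]  # key name assumed
  cf_custom_header_value_alt = local.cf_header_secret["header_value"] # key name assumed
}
*/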
#============================================================
#### ALB -- Target Group -- Health Check ####
locals {
tg_health_check_path="/"
}
#============================================================
#### RDS Secret ####
# first, manually create a key/value pair in Secrets Manager in the AWS Console
# I didn't use the RDS-specific secret type;
# I used the generic key/value format in Secrets Manager
data "aws_secretsmanager_secret_version" "mysql-creds" {
# Fill in the name you gave to your secret
secret_id = "here is the name you give to your key pair in AWS Console"
}
locals {
mysql-creds = jsondecode(
data.aws_secretsmanager_secret_version.mysql-creds.secret_string
)
}
# from the secret_string, Terraform can create the RDS instance using your username/password pair
locals {
mysql-creds-arn =data.aws_secretsmanager_secret_version.mysql-creds.arn
}
# later, ECS can use this username and password to log in to your RDS
# the user that creates the RDS instance is the master user, which is too powerful;
# I don't use it for daily tasks like the data ETL
# after ECS logs in to RDS, it creates an admin user
# that admin user is then used by ECS and Lambda for the detailed work
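#one common, illustrative use of the ARN local above is granting the ECS task role
#permission to read the secret at runtime; the policy name is a placeholder, and the
#attachment to the task role is omitted
/*
resource "aws_iam_policy" "read_mysql_secret_example" {
  name = "${local.prefix}-read-mysql-secret"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["secretsmanager:GetSecretValue"]
      Resource = [local.mysql-creds-arn]
    }]
  })
}
*/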

#### RDS Secret - for DB maintenance ####
# first, manually create a key/value pair in Secrets Manager
# find the secret using terraform
data "aws_secretsmanager_secret_version" "mysql-creds-db-maintanance" {
# Fill in the name you gave to your secret
secret_id = "here is the name you give to your key pair in AWS Console"
}
locals {
mysql-creds-db-maintanance = jsondecode(
data.aws_secretsmanager_secret_version.mysql-creds-db-maintanance.secret_string
)
}
locals {
mysql-creds-db-maintanance-arn =data.aws_secretsmanager_secret_version.mysql-creds-db-maintanance.arn
}
#============================================================
#### Lambda+Docker Image ####
# below is the repository name in ECR
locals {
lambda_repo_name="your repo name"
}
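#Below is a hedged sketch of wiring that repository into an image-based Lambda function;
#the image tag, role and function name are placeholders, not the project's actual code
/*
data "aws_ecr_repository" "lambda_repo_example" {
  name = local.lambda_repo_name
}
resource "aws_lambda_function" "image_example" {
  function_name = "${local.prefix}-docker-lambda"
  role          = "arn:aws:iam::${local.AccountID}:role/your-lambda-role" # placeholder
  package_type  = "Image"
  image_uri     = "${data.aws_ecr_repository.lambda_repo_example.repository_url}:latest" # tag is a placeholder
}
*/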

Note:

  1. Global services in AWS (like Cloudfront) need the provider to be in the ‘us-east-1’ region. If you request a certificate for Cloudfront, the ACM region must also be ‘us-east-1’. The rest of the data center can still be deployed in the desired regions; a sketch follows below.
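As a minimal sketch of how that looks in Terraform (validation records and outputs omitted, names taken from the locals in 01-variables.tf), the certificate resource simply receives the aliased provider:

# illustrative only: the Cloudfront certificate is requested through the us-east-1 provider alias
resource "aws_acm_certificate" "cf_cert" {
  provider          = aws.acm_provider
  domain_name       = local.domain_name_cf
  validation_method = "DNS"
  lifecycle {
    create_before_destroy = true
  }
}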

===================> 02-VPC.tf


#1 to create VPC

#2 to create private/public subnets under the VPC
# 2.1 public subnets --> ALB
# 2.2 private subnets --> ECS
# 2.3 private subnets --> RDS
# 2.4 public(or private) subnets --> EC2
# 2.3.1 to create private subnet in each AZ for RDS
# 2.3.2 to create one subnet group including all private subnets
# 2.3.3 later to attach this subnet group to RDS
# then the RDS can be connected from multiple AZs through multiple subnets

#3 to create internet gateway and attach it to VPC
#(if resources in the VPC need internet connection)
#3.1 to create internet gateway
#3.2 to create route table for internet gateway
#3.3 to associate public subnet(s) to route table
#3.4 ! to assign a public IP so that EC2/ECS can connect to the internet
# an Internet Gateway alone is not enough for an internet connection

#(if EC2/ECS in the VPC can't connect to the internet due to security reasons)
#==============================================================
# EC2:
# to create an AMI with the necessary updates and installs;
# otherwise, the target group health check fails on EC2,
# as the health check is based on the app running on EC2

# for patching, use a patched AMI from Systems Manager
# once the AMI changes, it triggers an instance refresh in the ASG
# using a NAT gateway is also an option, but it costs extra
#==============================================================
# ECS:
# to deploy ECS in private subnets
# If we use the Fargate launch type, there is no need to manage EC2 patching
#==============================================================
# ALB:
# ALB will be in public subnets
#==============================================================

#4 to create NAT gateway
#4.1 to create EIP(s), these EIP(s) are prepared for resources in private subnets
# later these EIPs will be associated to NAT gateways
#4.2 to create NAT gateway in public subnets for each AZ for high availability
#4.3 to create (private) route tables for private subnets
# the route tables will route traffic to NAT gateways
#4.4 to associate private subnets to private route table

#5 to create security group
#5.1 to add inbound rules
#5.2 to attach SG to target EC2/ECS(after EC2 is created)

#===========================================================
#1 below is to create VPC for the project:
resource "aws_vpc" "web-vpc" {
cidr_block = "${lookup(var.cidr_ab, var.environment)}.0.0/16"
instance_tenancy = "default"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "${local.prefix}-vpc"
}
}

#============================================================
#2 below is to create public/private subnets for each AZ
#the values of cidr and available zones can be
#obtained from local variables dynamically
#subnet 1: xxx.xx.10.0/24 in 1a for t2
#subnet 2: xxx.xx.11.0/24 in 1b for t2
#subnet 3: xxx.xx.20.0/24 in 1c for t3
#============================================================
# 2.1 public subnets --> ALB

resource "aws_subnet" "public_subnets" {
count = "${length(local.az_for_fargate)}"
# I choose AZs that support Fargate because
# not all AZs support Fargate, and
# an ALB in a non-supporting AZ won't be able to direct requests to ECS at all
vpc_id = "${aws_vpc.web_vpc.id}"
cidr_block = "${lookup(var.cidr_ab, var.environment)}.${local.cidr_c_public_subnets+count.index}.0/24"
availability_zone = "${local.az_for_fargate[count.index]}"
#map_public_ip_on_launch = true
tags ={
Name = "${local.prefix}-Public"
}
depends_on = [
aws_vpc.web_vpc
]
}
#============================================================
# 2.2 private subnets --> ECS
# to create private subnets for fargate
resource "aws_subnet" "private_subnets_for_fargate" {
count = "${length(local.az_for_fargate)}"

vpc_id = "${aws_vpc.web_vpc.id}"
cidr_block = "${lookup(var.cidr_ab, var.environment)}.${local.cidr_c_private_subnets+count.index}.0/24"
availability_zone = "${local.az_for_fargate[count.index]}"
#map_public_ip_on_launch = true
tags ={
Name = "${local.prefix}-Private-fargate"
}
depends_on = [
aws_vpc.web_vpc
]
}
#============================================================
# 2.3 private subnets --> RDS
# RDS in this project is enabled multi-az
# 2.3.1 private subnets will be created in each AZ for RDS
resource "aws_subnet" "private_subnets_for_rds" {
count = "${length(local.availability_zones)}"

vpc_id = "${aws_vpc.web_vpc.id}"
cidr_block = "${lookup(var.cidr_ab, var.environment)}.${local.cidr_c_database_subnets+count.index}.0/24"
availability_zone = "${local.availability_zones[count.index]}"
#map_public_ip_on_launch = true
tags ={
Name = "${local.prefix}-Private-rds"
}
depends_on = [
aws_vpc.web_vpc
]
}
# there is one more step: RDS (or the AWS data warehouse, Redshift) requires a subnet group
# 2.3.2 below is to include those newly created subnets into one group:
# this group will be attached to RDS later
resource "aws_db_subnet_group" "rds" {
name = "${local.prefix}-subnetgroup-rds"
subnet_ids = local.private_database_subnets
tags = {
environment = "dev"
Name = "${local.prefix}-subnetgroup-rds"
}
}
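#Below is only an illustration (the project's real RDS resource lives in its own .tf file)
#of how this subnet group and the Secrets Manager credentials from 01-variables.tf are
#attached to the database instance; the engine, class and secret key names are assumptions
/*
resource "aws_db_instance" "rds_example" {
  identifier           = "${local.prefix}-mysql"
  engine               = "mysql"
  instance_class       = "db.t3.micro"
  allocated_storage    = 20
  db_subnet_group_name = aws_db_subnet_group.rds.name
  multi_az             = true
  username             = local.mysql-creds["username"] # key name assumed
  password             = local.mysql-creds["password"] # key name assumed
  skip_final_snapshot  = true
}
*/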
#============================================================
# 2.4 public(or private) subnets --> EC2
#(if we choose t2.micro as primary, then t3.micro as secondary)
# to create subnets for primary first
# EC2 is deployed in public/private subnets according to requirements
# below is just an example, in this project, ECS is applied
resource "aws_subnet" "public_subnets_for_primary" {
count = "${length(data.aws_ec2_instance_type_offerings.primary-instance.locations)}"

vpc_id = "${aws_vpc.web-ec2.id}"
cidr_block = "${lookup(var.cidr_ab, var.environment)}.${local.cidr_c_public_subnets+count.index}.0/24"
availability_zone = "${data.aws_ec2_instance_type_offerings.primary-instance.locations[count.index]}"
#map_public_ip_on_launch = true
tags ={
Name = "${local.prefix}-PublicSubnets-forprimary"
}
depends_on = [
aws_vpc.web_vpc
]
}
# next is to create subnets for t3
resource "aws_subnet" "public_subnets_for_secondary" {
count = "${length(local.az_that_support_t3_only)}"

vpc_id = "${aws_vpc.web-ec2.id}"
cidr_block = "${lookup(var.cidr_ab, var.environment)}.${local.cidr_c_public_subnets_2+count.index}.0/24"
availability_zone = "${local.az_that_support_t3_only[count.index]}"
#map_public_ip_on_launch = true
tags ={
Name = "${local.prefix}-PublicSubnets-forsecondary"
}

depends_on = [
aws_vpc.web_vpc
]
}
locals {
# get all subnets for EC2
all_subnets = concat(aws_subnet.public_subnets_for_primary[*].id,aws_subnet.public_subnets_for_secondary[*].id)
}
output "all_subnets" {
value=local.all_subnets
}
#============================================================
# up until now, we have created
# 2 public subnets --> later for nat gateway/internet gateway/alb
# 2 private subnets --> later for ecs(fargate)
# 3 private subnets --> later for database(mySql)
locals {
# get all subnets
public_subnets=concat(aws_subnet.public_subnets[*].id)
private_subnets=concat(aws_subnet.private_subnets_for_fargate[*].id)
private_database_subnets=concat(aws_subnet.private_subnets_for_rds[*].id)
all_private_subnets = concat(aws_subnet.private_subnets_for_fargate[*].id,aws_subnet.private_subnets_for_rds[*].id)
}
/*
output "public_subnets" {
value=local.public_subnets
}

output "private_subnets_database" {
value=local.private_database_subnets
}

output "private_subnets_fargate" {
value=local.private_subnets
}
*/
#============================================================
#3.1 below is to create internet gateway for VPC
resource "aws_internet_gateway" "igw" {
vpc_id = "${aws_vpc.web-vpc.id}"

tags = {
Name = "${local.prefix}-igw"
}
}
#============================================================
#3.2 below is to create a route table for the VPC
#when we create a VPC, a default (main) route table is created
#the default route table connects and routes all subnets under this VPC automatically
#the default route table routes xxx.xx.0.0/16 --> local
#============================================================
# we need to create a 2nd route table so that 0.0.0.0/0 --> internet gateway
resource "aws_route_table" "router-igw" {
vpc_id = "${aws_vpc.web-ec2.id}"

route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
#route {
#ipv6_cidr_block = "::/0"
#egress_only_gateway_id = aws_egress_only_internet_gateway.example.id
#}
tags = {
Name = "${local.prefix}-router-igw"
}
}
#3.3 to associate the public subnets to the 2nd router for internet connection
resource "aws_route_table_association" "public-subnets-route-to-internet" {
count = "${length(local.public_subnets)}"

subnet_id = "${element(aws_subnet.public_subnets.*.id, count.index)}"
route_table_id = aws_route_table.router-igw.id
}
#===========================================================
#4 to create NAT gateway
# if we choose computation systems (lambda in the vpc, ec2, ecs) with private subnets
# we can use NAT for internet connection (or VPC Endpoints which will be discussed later with ECS)
# a NAT gateway is charged by the hour, so it's wise to use it only when needed
# or we can compare the usage of NAT Gateway with VPC Endpoints and decide which to use according to cost
# in this project, I applied VPC Endpoints, below is just for reference
#4.1 before creating NAT, eip(s) should be specified
# each AZ needs one NAT gateway to connect to the internet
# each NAT gateway needs to be created in one public subnet
/*
resource "aws_eip" "nat_gateway" {
count=length(local.public_subnets)
domain = "vpc"
depends_on = [
aws_internet_gateway.igw
]
}
output "eip_for_private_subnets" {
value = aws_eip.nat_gateway[*].public_ip
}
#4.2 below is to create one NAT gateway per AZ
# as ECS Fargate is not supported in every AZ,
# we create one NAT gateway per supporting public subnet here
resource "aws_nat_gateway" "main" {
count=length(local.public_subnets)
allocation_id = element(aws_eip.nat_gateway.*.id, count.index)
subnet_id = element(local.public_subnets[*],count.index)
#each public subnet has one NAT gateway

# To ensure proper ordering, it is recommended to add an explicit dependency
# on the Internet Gateway for the VPC.
depends_on = [aws_internet_gateway.igw]
tags = {
Name="${local.prefix}-nat"
}
}
*/
#============================================================
#4.3 below is to create a (private) route table for private subnets
#this private route table is to connect nat gateway
# the nat gateway is assigned with a public EIP

#it's different from the public route table above,
#which connects to the internet gateway directly

# each private subnet has one route table

# resource "aws_route_table" "nat_gateway" {
# count = length(local.private_subnets)
# vpc_id = "${aws_vpc.web_vpc.id}"
# tags = {
# Name="${local.prefix}-router-nat"
# }
# }

#each route table needs one nat gateway associated
# resource "aws_route" "nat_gateway" {
# count = length(local.private_subnets)
# route_table_id = element(aws_route_table.nat_gateway.*.id,count.index)
# destination_cidr_block = "0.0.0.0/0"
# nat_gateway_id = element(aws_nat_gateway.main.*.id, count.index)
# }

#============================================================
#4.4 below is to associate private subnets to the private route table
# For ECS:
resource "aws_route_table" "private_fargate" {
count = length(local.private_subnets)
vpc_id = "${aws_vpc.web_vpc.id}"
tags = {
Name="${local.prefix}-router-ecs"
}
}
resource "aws_route_table_association" "private_fargate" {
count = length(local.private_subnets)
subnet_id = element(local.private_subnets.*, count.index)
route_table_id = element(aws_route_table.private_fargate.*.id, count.index)
}

# For RDS:
# below is to create 3 route tables for the 3 private subnets for rds
resource "aws_route_table" "rds" {
count = length(local.private_database_subnets)
vpc_id = "${aws_vpc.web_vpc.id}"
tags = {
Name="${local.prefix}-router-rds"
}
}
# below is to associate private subnets to the private route table
resource "aws_route_table_association" "private_rds" {
count = length(local.private_database_subnets)
subnet_id = element(local.private_database_subnets.*, count.index)
route_table_id = element(aws_route_table.rds.*.id, count.index)
}
# Note:
# if we need the RDS publicly accessible in the development environment,
# please make sure:
# aaa) RDS has a subnet group with public subnets
# subnets become public when they are associated with route tables connected to the internet gateway
# we can't use a NAT gateway here for 2 reasons: RDS public access does not work that way,
# and a NAT gateway allows one-way (outbound-initiated) traffic only
# bbb) the RDS security group accepts connections from your IP address (check "what is my IP" online)
# ccc) if the security group is created by us, not by AWS, there is no default outbound rule in it,
# which means the traffic coming from RDS can't reach us; add or check the outbound rule in the SG
# ddd) modify RDS and set it as 'publicly accessible'
# eee) download/install a database client and build the connection using the parameters in the RDS console.
# There are many database tools; MySQL Workbench is the one AWS recommends for MySQL.
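#Below is only an illustration of note bbb): a dev-only ingress rule that lets your own
#IP reach MySQL on port 3306 (the referenced security group is a placeholder)
/*
resource "aws_security_group_rule" "rds_from_my_ip_example" {
  type              = "ingress"
  from_port         = 3306
  to_port           = 3306
  protocol          = "tcp"
  cidr_blocks       = [var.yourownIP]
  security_group_id = aws_security_group.rds.id # placeholder: the RDS security group
}
*/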
#===========================================================
#5 below is to create security group for EC2 (just as an example):
# before we create our own EC2/ASG, there is one more step to complete
# when we create VPC, a default ACL and a default SG are created
# ACL controls all inbound/outbound rules for the subnets
# SG controls all in/out rules for specific resource(s) within VPC
# the default ACL and SG allow 0.0.0.0/0 traffic from and to anywhere on the internet
# this is not suitable for a production environment
# below is to create 2 SGs for EC2
#======================================================================
/*
#the default SG will be kept for SSH access only
resource "aws_default_security_group" "ec2_security_group_ssh" {
vpc_id = "${aws_vpc.web-ec2.id}"

lifecycle {
create_before_destroy = true
}
tags = {
Name = "${local.prefix}-securitygroup-ec2-ssh"
}
depends_on = [
aws_vpc.web_vpc
]
}
# to add port 22 into SG
resource "aws_security_group_rule" "inbound-rules-22" {
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks =[var.yourownIP]

security_group_id = aws_default_security_group.ec2_security_group_ssh.id
}

#other inbound/outbound rules will be added in another SG
resource "aws_security_group" "ec2_security_group_public" {
vpc_id = "${aws_vpc.web-ec2.id}"
name = "${local.prefix}-securitygroup-ec2-public"
description = "Allow connection to public EC2s"
lifecycle {
create_before_destroy = true
}
tags = {
Name = "${local.prefix}-securitygroup-ec2-public"
}
depends_on = [
aws_vpc.web_vpc
]
}

resource "aws_security_group_rule" "outbound-rules" {
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks =["0.0.0.0/0"]

security_group_id = aws_security_group.ec2_security_group_public.id
}
# to open port 80 for HTTP for ALB only
# the inbound rule can't be added in ALB.tf file

# to open port 443 for HTTPs
# if we only allow ALB to connect to EC2, there is no need for Port 443
# ALB will securely connect to EC2 in the target group by port 80
# below is just for testing
resource "aws_security_group_rule" "production_web_server" {
type = "ingress"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = [var.yourownIP]

security_group_id =aws_security_group.ec2_security_group_public.id

}
*/
# Security Group Creation for ECS/ALB/Lambda will be shared in their respective .tf files.

Note:

  1. An internet gateway ensures that services in public subnets with a public IP can connect to the internet.
  2. A NAT gateway ensures that services in private subnets can connect to the internet with one-way traffic; resources outside the VPC can’t initiate the connections.
  3. The security group code for EC2/ECS/ALB is in their respective .tf files. Here, I just list the complete steps to build the VPC.
  4. Every time we run ‘terraform apply’ to modify the cloud data center, Terraform might delete the current subnets and recreate new ones. However, the attached resources might still reference the old subnets, and an error is thrown telling us ‘Subnets not found’. The same is true for inbound/outbound rules in a security group. We can add the ‘ignore_changes’ lifecycle to the subnet resources (see the sketch after this list).
  5. There are 3 compute services in AWS: Lambda, EC2, and ECS. (Other cloud providers have equivalents.) Which service(s) to apply depends on the requirements of the project. I include all 3 services in my Terraform code; it does not mean all 3 are needed at the same time.
  6. If an Auto Scaling group is applied for EC2/ECS, we need Terraform to find the supported AZs.
  7. In theory, one NAT gateway per subnet increases high availability; of course, it is more expensive. In practice, it all depends on project requirements.
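A minimal sketch of note 4, assuming a subnet resource like the ones above; which attributes to ignore depends on what keeps drifting between runs in your setup (the CIDR below is a placeholder):

# example lifecycle block to keep 'terraform apply' from churning existing subnets
resource "aws_subnet" "example_with_lifecycle" {
  vpc_id            = aws_vpc.web_vpc.id
  cidr_block        = "${lookup(var.cidr_ab, var.environment)}.99.0/24" # placeholder
  availability_zone = local.availability_zones[0]
  lifecycle {
    ignore_changes = [availability_zone, tags]
  }
}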

Requirements

* AWS CLI (with at least one profile)
* Terraform (website building automation)
* Python (Lambda/ECS automation)

Deploying

terraform init -backend-config=backend.hcl
terraform validate
terraform plan
terraform apply
terraform destroy
