Deploying a Traditional Three-Tier Architecture using Terraform

Narayan
9 min read · Mar 20, 2024


I recommend creating a fresh Google Cloud project for this task. If you are using a service account to run Terraform, it is essential to grant it the appropriate IAM roles, including the Project IAM Admin and Secret Manager Admin roles.

In this tutorial, we will deploy a traditional three-tier architecture using Terraform: a load balancer as the entry point, a managed instance group (MIG) of virtual machines for the application layer, and Cloud SQL for the database tier. Admittedly, some may not consider such a traditional architecture cloud native. The source code is available here.

Three-tier architecture diagram

Following Google Cloud best practices, we use a custom VPC with minimal subnets and firewall rules. As we are serving HTTP traffic, we use a global load balancer, which acts as the entry point and directs traffic to the virtual machines that host our application code.

We deploy the virtual machines (Compute Engine) in a Managed Instance Group (MIG) to facilitate scaling. For the database tier, we create a Cloud SQL instance and leverage Secret Manager to store the connection information. This enables the application code to access and connect to the database securely.

We will use a layered approach to provision resources. Each layer, or subdirectory, has its own state file, and we use the terraform_remote_state data source to expose the state information of one layer to another. In the first layer, we enable the required APIs and configure the custom VPC, along with the appropriate firewall rules. Next, we provision the database instance, the database, and the database user. Finally, we provision the MIG and the load balancer.
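
For example, a later layer can read the foundation layer's outputs with a data source along these lines. This is a minimal sketch that assumes each layer keeps its state in a local backend; the config block would change for a remote backend:

# Read the foundation layer's outputs (local backend assumed)
data "terraform_remote_state" "foundation" {
  backend = "local"

  config = {
    # Path is an assumption; adjust to where the foundation state file lives
    path = "../foundation/terraform.tfstate"
  }
}

The foundation layer's outputs are then available under data.terraform_remote_state.foundation.outputs, which is how the MIG later looks up the subnetwork and the service account email.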

Laying the foundation

In the first layer, we lay the groundwork for this architecture. We begin by enabling the necessary APIs using the google_project_service resource. For this project, the essential APIs include:

Cloud Resource Manager
Compute Engine
Identity and Access Management
Secret Manager API

The google_project_service resource has an optional argument, disable_on_destroy, which is set to true by default. It is usually better to set it to false, which mimics the behavior when we enable APIs using the web console.

As we are creating multiple google_project_service resources, we can use the for_each meta-argument. for_each accepts a map or a set of strings, so we use the toset() function to convert the list of services to a set. toset() also removes any duplicates and discards the ordering, which is exactly what we want here.

foundation/project-services.tf

resource "google_project_service" "this" {
for_each = toset(var.services)
service = "${each.key}.googleapis.com"
disable_on_destroy = false
}
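
The var.services variable is simply a list of short service names, to which the resource appends .googleapis.com. A sketch of the declaration, consistent with the APIs listed above (the description and defaults here are illustrative; the repository's variables file may differ), could look like this:

# Short service names; the resource suffixes each entry with ".googleapis.com"
variable "services" {
  description = "APIs to enable in the project"
  type        = list(string)
  default = [
    "cloudresourcemanager", # Cloud Resource Manager
    "compute",              # Compute Engine
    "iam",                  # Identity and Access Management
    "secretmanager",        # Secret Manager
  ]
}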

In this layer, we also configure the VPC, including its firewall rules. Since we operate within a single region, a basic VPC with just one subnet suffices. Adhering to security best practices, we implement restrictive firewall rules. The global load balancer handles all incoming traffic, so we do not need a firewall rule for inbound HTTP traffic. Nonetheless, the load balancer does require a firewall rule that permits ingress for health checks.
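
A minimal sketch of such a VPC is shown below. The resource names, var.region, and the CIDR range are assumptions; the repository keys its subnets by region, as the later subnetwork_self_links["iowa"] lookup suggests, so the real code may create them differently.

# Custom VPC with a single subnet (names and CIDR range are illustrative)
resource "google_compute_network" "this" {
  name                    = local.network_name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "this" {
  name          = "${local.network_name}-subnet"
  region        = var.region                      # region variable assumed
  ip_cidr_range = "10.0.0.0/24"                   # illustrative range
  network       = google_compute_network.this.id
}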

During development, we want to provide SSH access to the virtual machines, yet restrict that access to authorized users only, so we use Google Cloud Identity-Aware Proxy (IAP). IAP is part of the Google Cloud security framework and enables access to virtual machines without the need for a VPN or bastion host. To allow access to the virtual machines through IAP, we need a firewall rule that permits SSH from a specific set of source IP ranges. We can utilize the google_netblock_ip_ranges data source to retrieve those IP ranges for the firewall rule.

foundation/data.tf

data "google_netblock_ip_ranges" "iap_forwarders" {
range_type = "iap-forwarders"
}

data "google_netblock_ip_ranges" "health_checkers" {
range_type = "health-checkers"
}

Hence, we employ two google_netblock_ip_ranges data sources to fetch the respective IP ranges, which we then use to configure the firewall rules, as depicted below.

foundation/firewall.tf

resource "google_compute_firewall" "allow_iap" {
name = "${local.network_name}-allow-iap"
network = local.network_name

allow {
protocol = "tcp"
ports = ["22"]
}

source_ranges = data.google_netblock_ip_ranges.iap_forwarders.cidr_blocks_ipv4
target_tags = ["allow-iap"]
}

resource "google_compute_firewall" "allow_health_check" {
name = "${local.network_name}-allow-health-check"
network = local.network_name
allow {
protocol = "tcp"
ports = ["80"]
}

source_ranges = data.google_netblock_ip_ranges.health_checkers.cidr_blocks_ipv4
target_tags = ["allow-health-check"]
}

Following the security principle of least privilege, the virtual machines in the instance group use a dedicated service account that possesses only the permissions necessary to execute their tasks. In this scenario, we grant the cloudsql.client and secretmanager.secretAccessor roles. Consequently, we must create a service account and assign it the appropriate roles. Terraform provides two resources for assigning project-level IAM roles: google_project_iam_binding and google_project_iam_member.

The main difference between the two resources is that the former is authoritative, whereas the latter is non-authoritative. That is, google_project_iam_binding replaces any existing members of the given role, whereas google_project_iam_member is additive and preserves existing IAM bindings. In general, it is better to use google_project_iam_member or the IAM module from Google (read more about the best practices here).

foundation/sa.tf

resource "google_service_account" "this" {
depends_on = [google_project_service.this["iam"]]
account_id = var.sa_name
display_name = "${var.sa_name} Service Account"
}

resource "google_project_iam_member" "this" {
project = var.project_id
count = length(var.roles)
role = "roles/${var.roles[count.index]}"
member = "serviceAccount:${google_service_account.this.email}"
}

Now that we have enabled all the required APIs, set up the VPC including the firewall rules, and provisioned the service account, we can move on to the second layer, which involves creating the database and database users.

Provisioning the database

We will now provision a complete database along with a user and password. Following good security practices, we use Google Cloud Secret Manager to securely store the passwords for retrieval in the application layer.

First, we generate the root and user passwords using the random_password Terraform resource. Next, we store the generated passwords in Secret Manager. We need two resources for each secret — google_secret_manager_secret to provision the secret and google_secret_manager_secret_version to store the actual password.

database/secrets.tf

resource "random_password" "root" {
length = 12
special = false
}

resource "google_secret_manager_secret" "root_pw" {
secret_id = "db-root-pw"
replication {
automatic = true
}
}

resource "google_secret_manager_secret_version" "root_pw" {
secret = google_secret_manager_secret.root_pw.id
secret_data = random_password.root.result
}

resource "random_password" "user" {
length = 8
special = false
}

resource "google_secret_manager_secret" "user_pw" {
secret_id = "db-user-pw"
replication {
automatic = true
}
}

resource "google_secret_manager_secret_version" "user_pw" {
secret = google_secret_manager_secret.user_pw.id
secret_data = random_password.user.result
}

Once we have created and stored the passwords, we can create the database instance, database, and database user.

database/main.tf

resource "random_string" "this" {
length = 4
upper = false
special = false
}

resource "google_sql_database_instance" "this" {
name = "${var.db_settings.instance_name}-${random_string.this.result}"
database_version = var.db_settings.database_version
region = var.region
root_password = random_password.root.result

settings {
tier = var.db_settings.database_tier
}
deletion_protection = false
}

resource "google_sql_database" "this" {
name = var.db_settings.db_name
instance = google_sql_database_instance.this.name
}

resource "google_sql_user" "sql" {
name = var.db_settings.user_name
instance = google_sql_database_instance.this.name
password = random_password.user.result
}

resource "google_secret_manager_secret" "connection_name" {
secret_id = "connection-name"
replication {
automatic = true
}
}

resource "google_secret_manager_secret_version" "connection_name" {
secret = google_secret_manager_secret.connection_name.id
secret_data = google_sql_database_instance.this.connection_name
}

Now that we have completed the database provisioning, we can proceed to the last layer, which provisions the load balancer and the MIG.

Provisioning a MIG and global load balancer

To create a MIG, we first create an instance template and then the MIG itself, which uses the instance template. Defining the instance template is analogous to configuring a single virtual machine. Please note that we specify the service account we created and set the scopes to cloud-platform, which follows Google Cloud best practices.

In this scenario, we can effectively utilize the create_before_destroy lifecycle rule. Suppose we intend to modify the instance template, for example by altering the startup script. Typically, Terraform would destroy the google_compute_instance_template resource before creating a new one. However, Google Cloud prohibits us from destroying an instance template while the MIG still relies on it. Therefore, we use the create_before_destroy lifecycle meta-argument to instruct Terraform to create a new template, apply it to the MIG, and only then destroy the old one.

There is one more step we need to take. If we specify a fixed name for the instance template, the create_before_destroy lifecycle would encounter a naming conflict, because Terraform would attempt to create a second instance template with the same name. Therefore, instead of assigning a name directly to google_compute_instance_template, we use the name_prefix attribute, as demonstrated below:

main/mig.tf

resource "google_compute_instance_template" "this" {
name_prefix = var.mig.instance_template_name_prefix
region = var.region
machine_type = var.mig.machine_type

disk {
source_image = var.mig.source_image
}

network_interface {
subnetwork = data.terraform_remote_state.foundation.outputs.subnetwork_self_links["iowa"]
access_config {
// Ephemeral public IP
}
}

metadata_startup_script = file("startup.sh")

tags = [
"allow-iap",
"allow-health-check"
]
service_account {
email = data.terraform_remote_state.foundation.outputs.service_account_email
scopes = ["cloud-platform"]
}

lifecycle {
create_before_destroy = true
}
}

resource "google_compute_region_instance_group_manager" "this" {
name = var.mig.mig_name
region = var.region
base_instance_name = var.mig.mig_base_instance_name
target_size = var.mig.target_size

version {
instance_template = google_compute_instance_template.this.id
}

named_port {
name = "http"
port = 80
}

update_policy {
type = "PROACTIVE"
minimal_action = "REPLACE"
max_surge_fixed = 3
}
}

As we want our instance group to be highly available (HA), we create a regional MIG. In this example, we want to apply updates automatically, so we choose proactive updates. Thus, as soon as a change is detected in the instance template, Google Cloud updates the virtual machines.

Defining the global load balancer follows a four-step process, similar to creating it via the web console. Initially, we create a frontend configuration using the google_compute_global_forwarding_rule resource. Subsequently, we define a backend service (google_compute_backend_service) and a health check (google_compute_health_check). Finally, we establish the path rules using the google_compute_url_map resource, and a google_compute_target_http_proxy links the forwarding rule to the URL map.

main/lb.tf

resource "google_compute_global_forwarding_rule" "this" {
name = var.load_balancer.forward_rule_name
ip_protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
port_range = "80"
target = google_compute_target_http_proxy.this.self_link
}

resource "google_compute_health_check" "this" {
name = "http-health-check"
http_health_check {
port = 80
}
}

resource "google_compute_backend_service" "this" {
name = var.load_balancer.backend_service_name
health_checks = [google_compute_health_check.this.self_link]
load_balancing_scheme = "EXTERNAL"

backend {
balancing_mode = "UTILIZATION"
group = google_compute_region_instance_group_manager.this.instance_group
}
}

resource "google_compute_url_map" "this" {
name = var.load_balancer.url_map_name
default_service = google_compute_backend_service.this.self_link
}

resource "google_compute_target_http_proxy" "this" {
name = var.load_balancer.target_proxy_name
url_map = google_compute_url_map.this.self_link
}

Now, we can run Terraform and then access the resulting website using the IP address shown in the output.
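
That IP address is an attribute of the global forwarding rule. An output along the following lines exposes it after terraform apply (the output name and description are assumptions, not necessarily what the repository uses):

# Expose the load balancer's public IP (output name is illustrative)
output "load_balancer_ip" {
  description = "Public IP address of the global HTTP load balancer"
  value       = google_compute_global_forwarding_rule.this.ip_address
}

Browsing to that address over HTTP should return the page served by the instances in the MIG.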

We included some sample code to demonstrate how to access Secret Manager at startup time and how to use an SQL proxy to access the database. Thus, we can SSH into an instance from the web console and paste the code shown into the command line of the SSH session to connect to the database. We can retrieve the password using the web console or the following command:

gcloud secrets versions access latest --secret="db-user-pw"

We see that we can successfully connect to the database without any hard-coded connection information.
