GCP — How to deploy Cloud NAT with Terraform.

Sumit K
Google Cloud - Community
Jan 29, 2023

Network Address Translation:

Imagine you have a multi-tiered application set up in the cloud, with an update server in a private subnet. You want to allow your instances outbound access to the internet for updates, patching, config management, and more, without giving them external IP addresses. As a result, you can keep your instances entirely internal-facing in a controlled and efficient manner. A Network Address Translation (NAT) gateway can be the answer for this network: it routes traffic and lets multiple VMs in a subnet reach the internet using a single public IP address.

So, what exactly is Cloud NAT? Cloud NAT (network address translation) lets certain resources without external IP addresses create outbound connections to the internet. In simple words, Cloud NAT will be deployed for internet egress traffic. Simple as that :)

Cloud NAT provides outgoing connectivity for the following resources:

  1. Compute Engine VMs
  2. GKE
  3. Cloud Run
  4. Cloud Functions
  5. App Engine

High Level — Cloud NAT with 3-tier Architecture

In the preceding diagram, Subnet1 and Subnet2 are private subnets, so any VMs in these subnets have only a private IP and can’t reach the internet directly. The only way for them to talk to the internet is via Cloud NAT.

Architecture

Cloud NAT is a distributed, software-defined managed service. It’s not based on proxy VMs or appliances. Cloud NAT configures the Andromeda software that powers your Virtual Private Cloud (VPC) network so that it provides source network address translation (source NAT or SNAT) for VMs without external IP addresses. Cloud NAT also provides destination network address translation (destination NAT or DNAT) for established inbound response packets.

Now, you must be wondering: what are SNAT and DNAT?

SNAT: a technique that translates the source IP address, generally when connecting from a private IP address to a public one. It is the most common form of NAT, used when an internal host needs to initiate a session with an external or public host.

SNAT Translation
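To make SNAT concrete: on a plain Linux gateway, the same idea is a single iptables rule that rewrites the source address of outbound packets. This is purely illustrative and not part of this demo; the interface and public IP below are made-up examples, and Cloud NAT does the managed equivalent for you.

# Illustrative Linux SNAT rule (not part of this demo).
# Assumes eth0 is the internet-facing interface and 203.0.113.10 is
# the gateway's public IP: outbound packets leave with that source IP.
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10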

DNAT: a technique that translates the destination IP address, generally when connecting from a public IP address to a private one.

DNAT Translation
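And the DNAT counterpart, again only as an illustration with made-up addresses: inbound packets arriving at the public side get redirected to an internal host.

# Illustrative Linux DNAT rule (not part of this demo).
# Traffic arriving on TCP port 80 is forwarded to internal host 10.10.0.5.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.10.0.5:80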

Cloud NAT allows outbound connections and inbound responses to those connections. Each Cloud NAT gateway performs source NAT on egress, and destination NAT for established response packets.

As part of this demo, the following resources will be deployed with Terraform:

  1. A VPC with a single subnet.
  2. A compute VM with a private IP.
  3. A firewall rule to allow SSH connections from IAP.
  4. IAP permissions.
  5. A Cloud Router. (It acts as a BGP speaker and responder, and also serves as the control plane for Cloud NAT, essentially advertising IP ranges.)
  6. Finally, the Cloud NAT gateway.

IAP Desktop uses Identity-Aware Proxy (IAP) to connect to VM instances so that you can:

  • Connect to VM instances that don’t have a public IP address
  • Connect from anywhere over the internet
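If you prefer the command line to IAP Desktop, the gcloud CLI can open the same IAP tunnel. A quick sketch using the names from this demo:

$ gcloud compute ssh nat-demo-vm --zone=us-east4-c --tunnel-through-iap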

To follow this tutorial, you will need:

  • A GCP account.
  • The gcloud CLI.
  • A service account. (Give it Owner permission and download the JSON key file; a gcloud sketch follows this list.)
  • IAP Desktop, downloaded and installed on your local machine.
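If you haven’t created the service account yet, a minimal gcloud sketch looks like the following. The account name terraform-demo is a hypothetical example, and Owner is convenient for a demo but far broader than you should grant in production.

$ gcloud iam service-accounts create terraform-demo --display-name="Terraform demo"
$ gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:terraform-demo@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/owner"
$ gcloud iam service-accounts keys create key.json \
    --iam-account=terraform-demo@YOUR_PROJECT_ID.iam.gserviceaccount.com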

Let’s get started:

Step 1. First, create a directory for the Terraform configuration.

mkdir cloud-nat-demo

Step 2. Change into the directory.

cd cloud-nat-demo

Step 3. Create new files for the configuration blocks.

$ touch provider.tf
$ touch main.tf
$ touch variable.tf

Step 4. Here is what my directory structure looks like.

$ tree

|-- main.tf
|-- provider.tf
|-- tcb-project-371706-4c5de465c0d5.json
|-- variable.tf

Step 5. Paste the configuration below into provider.tf and save it.

provider "google" {
region = var.region
project = var.project_name
credentials = file("tcb-project-371706-4c5de465c0d5.json")
zone = "us-east4-c"
}
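The provider block works as-is, but it is good practice to also pin the Google provider version. A small optional sketch (the version constraint is an example, not something the original setup requires):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 4.0"
    }
  }
}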

Step 6. Paste the configuration below into variable.tf and save it.

variable "region" {
default = "us-east4"
}

variable "project_name" {
default = "tcb-project-371706"
}
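These defaults point at my project; instead of editing the file, you can also override them at apply time:

$ terraform apply -var="project_name=YOUR_PROJECT_ID" -var="region=us-east4"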

Step 7. Paste the configuration below into main.tf and save it.

# Create a VPC
resource "google_compute_network" "vpc" {
  name                    = "my-custom-network-2"
  auto_create_subnetworks = false
}

# Create a subnet
resource "google_compute_subnetwork" "my-custom-subnet" {
  name          = "my-custom-subnet-2"
  ip_cidr_range = "10.10.0.0/24"
  network       = google_compute_network.vpc.name
  region        = var.region
}

# Create a VM in the above subnet
resource "google_compute_instance" "my_vm" {
  project      = var.project_name
  zone         = "us-east4-c"
  name         = "nat-demo-vm"
  machine_type = "e2-medium"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    # Reference the resources above so Terraform orders creation correctly
    network    = google_compute_network.vpc.name
    subnetwork = google_compute_subnetwork.my-custom-subnet.name
  }
}

# Create a firewall rule to allow SSH from the IAP source range
resource "google_compute_firewall" "rules" {
  project = var.project_name
  name    = "allow-ssh"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["35.235.240.0/20"]
}

# Grant IAP SSH permissions for the test instance
resource "google_project_iam_member" "project1" {
  project = var.project_name
  role    = "roles/iap.tunnelResourceAccessor"
  member  = "serviceAccount:terraform-demo-aft@tcb-project-371706.iam.gserviceaccount.com"
}

# Create a Cloud Router
resource "google_compute_router" "router" {
  project = var.project_name
  name    = "nat-router"
  network = google_compute_network.vpc.name
  region  = var.region
}

# Create the Cloud NAT gateway
resource "google_compute_router_nat" "nat" {
  name                               = "my-router-nat"
  router                             = google_compute_router.router.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

  log_config {
    enable = true
    filter = "ERRORS_ONLY"
  }
}
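Optionally, you can add an outputs.tf to surface useful values after the apply. This is a small sketch that is not part of the original demo:

# outputs.tf (optional)
output "vm_internal_ip" {
  description = "Internal IP of the NAT demo VM"
  value       = google_compute_instance.my_vm.network_interface[0].network_ip
}

output "nat_gateway_name" {
  description = "Name of the Cloud NAT gateway"
  value       = google_compute_router_nat.nat.name
}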

Step 8. Initialize the working directory, then format and validate your configuration.

$ terraform init
$ terraform fmt
main.tf
provider.tf
$ terraform validate
Success! The configuration is valid.

Step 9. Create your resources.

$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.rules will be created
  + resource "google_compute_firewall" "rules" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "allow-ssh"
      + network            = "my-custom-network-2"
      + priority           = 1000
      + project            = "tcb-project-371706"
      + self_link          = (known after apply)
      + source_ranges      = [
          + "35.235.240.0/20",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.my_vm will be created
  + resource "google_compute_instance" "my_vm" {
      + can_ip_forward       = false
      + cpu_platform         = (known after apply)
      + current_status       = (known after apply)
      + deletion_protection  = false
      + guest_accelerator    = (known after apply)
      + id                   = (known after apply)
      + instance_id          = (known after apply)
      + label_fingerprint    = (known after apply)
      + machine_type         = "e2-medium"
      + metadata_fingerprint = (known after apply)
      + min_cpu_platform     = (known after apply)
      + name                 = "nat-demo-vm"
      + project              = "tcb-project-371706"
      + self_link            = (known after apply)
      + tags_fingerprint     = (known after apply)
      + zone                 = "us-east4-c"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "debian-cloud/debian-11"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + confidential_instance_config {
          + enable_confidential_compute = (known after apply)
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = "my-custom-network-2"
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = "my-custom-subnet-2"
          + subnetwork_project = (known after apply)
        }

      + reservation_affinity {
          + type = (known after apply)

          + specific_reservation {
              + key    = (known after apply)
              + values = (known after apply)
            }
        }

      + scheduling {
          + automatic_restart           = (known after apply)
          + instance_termination_action = (known after apply)
          + min_node_cpus               = (known after apply)
          + on_host_maintenance         = (known after apply)
          + preemptible                 = (known after apply)
          + provisioning_model          = (known after apply)

          + node_affinities {
              + key      = (known after apply)
              + operator = (known after apply)
              + values   = (known after apply)
            }
        }
    }

  # google_compute_network.vpc will be created
  + resource "google_compute_network" "vpc" {
      + auto_create_subnetworks         = false
      + delete_default_routes_on_create = false
      + gateway_ipv4                    = (known after apply)
      + id                              = (known after apply)
      + internal_ipv6_range             = (known after apply)
      + mtu                             = (known after apply)
      + name                            = "my-custom-network-2"
      + project                         = (known after apply)
      + routing_mode                    = (known after apply)
      + self_link                       = (known after apply)
    }

  # google_compute_router.router will be created
  + resource "google_compute_router" "router" {
      + creation_timestamp = (known after apply)
      + id                 = (known after apply)
      + name               = "nat-router"
      + network            = "my-custom-network-2"
      + project            = "tcb-project-371706"
      + region             = "us-east4"
      + self_link          = (known after apply)
    }

  # google_compute_router_nat.nat will be created
  + resource "google_compute_router_nat" "nat" {
      + enable_dynamic_port_allocation      = (known after apply)
      + enable_endpoint_independent_mapping = true
      + icmp_idle_timeout_sec               = 30
      + id                                  = (known after apply)
      + name                                = "my-router-nat"
      + nat_ip_allocate_option              = "AUTO_ONLY"
      + project                             = (known after apply)
      + region                              = "us-east4"
      + router                              = "nat-router"
      + source_subnetwork_ip_ranges_to_nat  = "ALL_SUBNETWORKS_ALL_IP_RANGES"
      + tcp_established_idle_timeout_sec    = 1200
      + tcp_transitory_idle_timeout_sec     = 30
      + udp_idle_timeout_sec                = 30

      + log_config {
          + enable = true
          + filter = "ERRORS_ONLY"
        }
    }

  # google_compute_subnetwork.my-custom-subnet will be created
  + resource "google_compute_subnetwork" "my-custom-subnet" {
      + creation_timestamp         = (known after apply)
      + external_ipv6_prefix       = (known after apply)
      + fingerprint                = (known after apply)
      + gateway_address            = (known after apply)
      + id                         = (known after apply)
      + ip_cidr_range              = "10.10.0.0/24"
      + ipv6_cidr_range            = (known after apply)
      + name                       = "my-custom-subnet-2"
      + network                    = "my-custom-network-2"
      + private_ip_google_access   = (known after apply)
      + private_ipv6_google_access = (known after apply)
      + project                    = (known after apply)
      + purpose                    = (known after apply)
      + region                     = "us-east4"
      + secondary_ip_range         = (known after apply)
      + self_link                  = (known after apply)
      + stack_type                 = (known after apply)
    }

  # google_project_iam_member.project1 will be created
  + resource "google_project_iam_member" "project1" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = "serviceAccount:terraform-demo-aft@tcb-project-371706.iam.gserviceaccount.com"
      + project = "tcb-project-371706"
      + role    = "roles/iap.tunnelResourceAccessor"
    }

Plan: 7 to add, 0 to change, 0 to destroy.

Step 10. Type yes at the confirmation prompt to proceed.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

google_project_iam_member.project1: Creating...
google_compute_network.vpc: Creating...
google_compute_network.vpc: Still creating... [10s elapsed]
google_project_iam_member.project1: Still creating... [10s elapsed]
google_project_iam_member.project1: Creation complete after 11s [id=tcb-project-371706/roles/iap.tunnelResourceAccessor/serviceAccount:terraform-demo-aft@tcb-project-371706.iam.gserviceaccount.com]
google_compute_network.vpc: Creation complete after 13s [id=projects/tcb-project-371706/global/networks/my-custom-network-2]
google_compute_subnetwork.my-custom-subnet: Creating...
google_compute_subnetwork.my-custom-subnet: Still creating... [10s elapsed]
google_compute_subnetwork.my-custom-subnet: Creation complete after 16s [id=projects/tcb-project-371706/regions/us-east4/subnetworks/my-custom-subnet-2]
google_compute_instance.my_vm: Creating...
google_compute_instance.my_vm: Still creating... [10s elapsed]
google_compute_instance.my_vm: Creation complete after 18s [id=projects/tcb-project-371706/zones/us-east4-c/instances/nat-demo-vm]
google_compute_router.router: Creating...
google_compute_firewall.rules: Creating...
google_compute_firewall.rules: Still creating... [10s elapsed]
google_compute_router.router: Still creating... [10s elapsed]
google_compute_firewall.rules: Creation complete after 13s [id=projects/tcb-project-371706/global/firewalls/allow-ssh]
google_compute_router.router: Creation complete after 13s [id=projects/tcb-project-371706/regions/us-east4/routers/nat-router]
google_compute_router_nat.nat: Creating...
google_compute_router_nat.nat: Still creating... [10s elapsed]
google_compute_router_nat.nat: Creation complete after 16s [id=tcb-project-371706/us-east4/nat-router/my-router-nat]

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.

Step 11. To see the list of resources created, run terraform state list.

$ terraform state list
google_compute_firewall.rules
google_compute_instance.my_vm
google_compute_network.vpc
google_compute_router.router
google_compute_router_nat.nat
google_compute_subnetwork.my-custom-subnet
google_project_iam_member.project1

As shown above, 7 resources have been created. Now, let’s move on to the console.

Verify these resources by logging in to the GCP Console. You will find that the following resources have been created: VPC/subnet, virtual machine, firewall rule, IAP SSH permission, Cloud Router, and NAT gateway.

VPC/Subnet
Compute Virtual machine
Firewall rule to allow SSH from source IP range: 35.235.240.0/20
IAP SSH connection
cloud router
NAT Gateway
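If you prefer the CLI to the console, the same resources can be checked with gcloud (names match this demo):

$ gcloud compute networks subnets list --network=my-custom-network-2
$ gcloud compute instances describe nat-demo-vm --zone=us-east4-c \
    --format="get(networkInterfaces[0].networkIP)"
$ gcloud compute routers nats describe my-router-nat --router=nat-router --region=us-east4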

Let’s test internet connectivity from the virtual machine. To access the VM over SSH, open IAP Desktop and sign in with your GCP account. Once logged in, you will be able to see your projects and compute instances. Simply connect to your VM and try to ping google.com; you will see responses from the internet. This concludes our demo. Go ahead and clean up your environment by running terraform destroy in the terminal, as sketched after the screenshot below.

VM can reach out to the internet via cloud NAT without having external IP
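For reference, a quick connectivity check from inside the VM might look like this (run over the IAP SSH session; exact output will vary):

$ curl -I https://www.google.com   # should return HTTP response headers
$ ping -c 3 google.com             # should show replies via Cloud NAT

# When you are done, clean up from your local machine:
$ terraform destroy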

Conclusion: With Cloud NAT, your VM instances can send outbound packets to the internet and receive inbound responses to those packets without needing a public IP address. And with Terraform, you can deploy this infrastructure in a matter of minutes, and easily repeat, manage, and scale it with just a few lines of code.

I hope you like this article. Please share it if helpful and don’t forget to follow me :)

Please subscribe for upcoming blogs.

Thanks for Reading….
