Automating Network Deployment on Google Cloud Platform Using Deployment Manager

Krishan Sharma
Jun 15
Using Deployment Manager to create a VPC, firewall rules, and a Compute Engine instance

Infrastructure as Code (IaC) is the practice of managing and provisioning the infrastructure an application needs through software or scripts, rather than configuring it manually. IaC gives us easily reusable scripts for creating infrastructure and helps keep environments idempotent and reproducible.

Many tools are available to achieve this. Some are vendor-specific, for example AWS CloudFormation and Deployment Manager (GCP), while others, such as Terraform, are open source.

In this article, we will focus on the Google Cloud Platform and leverage Deployment Manager to automate the creation of networks, firewalls, and VMs. Deployment Manager is an infrastructure deployment service that automates the creation and management of Google Cloud Platform resources. It lets us write flexible template and configuration files and use them to create deployments that combine a variety of Cloud Platform services, such as Google Cloud Storage, virtual machines, and VPC networks, configured to work together. It allows us to write templates in Python or Jinja2.

In this article, we will deploy two networks with firewall rules and Compute Engine instances, as shown in the diagram below:

To follow this article, we need a GCP account and a GCP project with billing enabled. Also make sure that the Deployment Manager API is enabled in the project.

Activate Cloud Shell and clone the repository below.

Go to the cloned folder and let’s explore its contents. Creating a base template is a great starting point for provisioning resources in GCP. The cloned repository contains templates for creating an auto-mode network, a custom-mode network, a subnetwork, a firewall, and a compute instance. The auto-mode network template looks like the following:

resources:
- name: {{ env["name"] }}
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: true

The name field lets us name the resource, and the type field specifies the GCP resource that we want to create. We can also define properties, although these are optional for some resources. By definition, an auto-mode network automatically creates a subnetwork in each region, so we set autoCreateSubnetworks to true. The custom-mode network template is the same except that autoCreateSubnetworks is set to false; we create the subnetwork manually in the region of our choice. Therefore, we need a template for the subnetwork, which is as follows:

resources:
- name: {{ env["name"] }}
  type: compute.v1.subnetwork
  properties:
    ipCidrRange: {{ properties["ipCidrRange"] }}
    network: {{ properties["network"] }}
    region: {{ properties["region"] }}

The ipCidrRange property takes the IP address range of the subnet, the network property takes the name of the subnet’s network, and the region property takes the name of the region in which the subnet is created. In this template, we declare arbitrary template properties instead of hard-coding specific IP ranges, networks, and regions; when we use the template, we provide values for these properties in the top-level configuration. Next, we need to create the template for the firewall rules, given below:

resources:
- name: {{ env["name"] }}
  type: compute.v1.firewall
  properties:
    network: {{ properties["network"] }}
    sourceRanges: ["0.0.0.0/0"]
    allowed:
    - IPProtocol: {{ properties["IPProtocol"] }}
      ports: {{ properties["Port"] }}

The network property defines the network that the firewall rule applies to, the sourceRanges property defines the source IP ranges that traffic is allowed from, the IPProtocol property specifies the protocol that the rule applies to, and the ports property specifies the ports for that protocol. We set sourceRanges to 0.0.0.0/0 to allow ingress from any source IP address. Next, we need to create the template for the Compute Engine instance.

resources:
- name: {{ env["name"] }}
  type: compute.v1.instance
  properties:
    zone: {{ properties["zone"] }}
    machineType: zones/{{ properties["zone"] }}/machineTypes/{{ properties["machineType"] }}
    networkInterfaces:
    - network: {{ properties["network"] }}
      subnetwork: {{ properties["subnetwork"] }}
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
    disks:
    - deviceName: {{ env["name"] }}
      boot: true
      autoDelete: true

The machineType property defines the machine type, and the zone property specifies the instance’s zone. The networkInterfaces property takes the network and subnetwork that the VM is attached to, the accessConfigs property is needed to give the instance a public IP address, and the disks property defines the boot disk, its name, and its image. Because the firewall rules depend on their network, we use the $(ref.mynetwork.selfLink) reference to instruct Deployment Manager to resolve these resources in a dependent order: the network is created before the firewall rules. By default, Deployment Manager creates all resources in parallel, so there is no guarantee that dependent resources are created in the correct order unless we use references.
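As a sketch of how such a reference looks in practice (the resource names and port value here are illustrative, not taken from the repository; the property names match the firewall template above), a configuration might wire a firewall rule to its network like this:

```yaml
resources:
- name: mynetwork
  type: autonetwork-template.jinja
- name: mynetwork-allow-ssh
  type: firewall-template.jinja
  properties:
    # $(ref...) forces Deployment Manager to create mynetwork first
    network: $(ref.mynetwork.selfLink)
    IPProtocol: TCP
    Port: [22]
```

Because the network property is resolved from mynetwork’s selfLink rather than hard-coded, Deployment Manager serializes the two creations instead of running them in parallel.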

We need a configuration file to deploy the resources using Deployment Manager. A huge configuration file can be difficult to manage; templates are a way to break a configuration into composable units that can be updated separately and reused. Therefore, we first import all the templates we created and then define all the resources we want to create from them. A snippet from the configuration file is given below. We can view the complete configuration file in Cloud Shell.

imports:
- path: autonetwork-template.jinja
- path: customnetwork-template.jinja
- path: subnetwork-template.jinja
- path: firewall-template.jinja
- path: instance-template.jinja

resources:
- name: network1
  type: autonetwork-template.jinja
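The remaining resources follow the same pattern. As an illustrative sketch (the names and CIDR range below are hypothetical, not copied from the repository), the custom-mode network and its subnetwork could be declared as:

```yaml
resources:
- name: network2
  type: customnetwork-template.jinja
- name: subnetwork2
  type: subnetwork-template.jinja
  properties:
    # reference, so the network exists before the subnetwork
    network: $(ref.network2.selfLink)
    ipCidrRange: 10.130.0.0/20
    region: us-central1
```

Here the top-level configuration fills in the arbitrary properties (ipCidrRange, network, region) that the subnetwork template declared instead of hard-coding.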

Now, it’s time to deploy the configuration. In Cloud Shell, run the following command from the cloned folder.

gcloud deployment-manager deployments create gcpnetwork --config=config.yaml

Here, gcpnetwork is the name of the deployment. Wait 4–5 minutes for the resources to be created and listed in Cloud Shell. We can then go and see all the resources defined in the configuration file in the GCP Console. Don’t forget to delete the deployment after we finish experimenting. To delete the deployment and its resources, run the following in Cloud Shell.

gcloud deployment-manager deployments delete gcpnetwork

Happy Coding!!!!
