GCP Infrastructure as Code with Deployment Manager

David Schweikert
Google Cloud - Community
May 2, 2018 · 9 min read


Infrastructure as code is the practice of making the configuration of your infrastructure reproducible, scalable, and easy to review, by describing it using code. It comes from the realization that infrastructure is also “software”, which is particularly true for the public cloud. Because it is software, it should also be version-controlled, tested, and reviewed. Even better, you can keep the description of the required infrastructure together with the code of the application, and thus have a complete definition of everything that is required in one place. You can start testing and deploying everything together.

There are many tools available that help you achieve this, for example Ansible and Terraform, but they don’t always provide a perfect match to what is available on public cloud hosting platforms such as AWS, Azure, or Google Cloud Platform. That should be no surprise, considering the incredible pace at which new features are being introduced. That’s why some cloud providers also give you their own infrastructure-as-code tools as part of their offering: AWS has “CloudFormation”, Azure has “Azure Resource Manager”, and Google Cloud has “Deployment Manager”. They all provide a way to define your cloud infrastructure resources using a declarative language, which you can then put in a git repository, for example. They also support templates, which allow you to avoid code duplication and make it possible to test the same infrastructure code in various stages of deployment.

In this article, we will focus on Google Cloud Platform and how you can use Google Cloud Deployment Manager to automate the configuration of all your GCP resources. As a practical demonstration, we will deploy a Kubernetes cluster and a simple application running on it, all created automatically. It includes the following resources:

  • a Kubernetes cluster (Google Kubernetes Engine)
  • a Kubernetes deployment
  • a Kubernetes service
  • a Kubernetes ingress definition

Deployment Manager Basics

Before we create any GCP resources using Deployment Manager, let’s begin with a very quick summary of how it works.

Deployment Manager has the concept of a deployment, which is a collection of GCP resources that form a logical unit and that are deployed together. The resources of a deployment can be anything available on GCP: VMs, IP addresses, database servers, Kubernetes clusters, etc.

In order to create a deployment, you need a deployment configuration, which is a YAML file containing the definition of the resources. To give you an idea, it could look as follows:

resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: zones/europe-west1-b/machineTypes/n1-standard-1
    disks:
    ...

Each listed resource always has a type, which is the kind of resource that will be created (a VM, an IP address, etc.), a name, and properties describing the parameters with which the resource should be created.

An important concept, compared to, say, Ansible, is that when you create a deployment, it exists as a resource itself inside GCP. If you later change the configuration of the deployment (by modifying the YAML file, for example) and run the update command, Deployment Manager will compare the new configuration to what it deployed before and only make the required changes. Not only does this mean less work, but more importantly, it ensures that any resources removed from the configuration are also removed from the GCP infrastructure.
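
Because a deployment is itself an object in GCP, you can inspect it at any time with gcloud. For example (the deployment name here is just a placeholder):

$ gcloud deployment-manager deployments list
$ gcloud deployment-manager deployments describe example-deployment
$ gcloud deployment-manager resources list --deployment example-deployment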

You can find many deployment configuration examples in the Google Deployment Manager samples project on GitHub. It is often more useful than the official documentation for finding out exactly which parameters you should define.

Preparations (if you want to execute the code)

If you want to learn how Deployment Manager works, I suggest that you also try to deploy this example application while reading along. You will see how incredibly quick and easy it is to deploy all of these components. What would usually take days to set up will only need minutes. If you want to do that, prepare as follows:

First of all, make sure that you have a GCP account and a GCP project with billing enabled. Also, you will need the Google Cloud SDK installed, which includes the “gcloud” command-line tool that we will use.

$ gcloud auth login
$ gcloud config set project $MYPROJECT_ID
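
Depending on how your project is set up, you may also need to enable the Deployment Manager and Kubernetes Engine APIs before the commands in this article will work:

$ gcloud services enable deploymentmanager.googleapis.com
$ gcloud services enable container.googleapis.com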

Next, clone the GitHub repo that I prepared with all the code examples in this article:

$ git clone https://github.com/schweikert/gcp-infra-as-code
$ cd gcp-infra-as-code

Creating a Kubernetes cluster

The first thing we will do is create a Kubernetes cluster. In the list of supported resources you can find that the resource type in Deployment Manager is “container.v1.cluster”. A very simple definition looks as follows:

resources:
- name: cluster
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      description: "My example cluster"
      initialNodeCount: 2

This would create a Kubernetes cluster in a single zone with 2 nodes.

The cluster.yaml file in the GitHub repository specifies a few more parameters, such as distributing the nodes across two zones and enabling “auto-upgrade” for the cluster nodes.
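
For illustration, the relevant part of such a definition might look roughly like this (the zone names and node pool settings are placeholders; refer to cluster.yaml in the repository for the actual values):

resources:
- name: cluster
  type: container.v1.cluster
  properties:
    zone: europe-west3-b
    cluster:
      description: "My example cluster"
      # zones in which the nodes should run
      locations:
      - europe-west3-b
      - europe-west3-c
      nodePools:
      - name: default-pool
        initialNodeCount: 1
        management:
          autoUpgrade: true
          autoRepair: true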

You can instantiate this deployment by using the “create” command:

$ gcloud deployment-manager deployments create example-cluster --config cluster-1/cluster.yaml

This creates a deployment called “example-cluster”, using the definition found in cluster-1/cluster.yaml. You can later modify the YAML file and update the deployed deployment with the “update” command:

$ gcloud deployment-manager deployments update example-cluster --config cluster-1/cluster.yaml

This command compares the deployed deployment with the new definition and executes only the needed changes (for example, creating a new resource that you added).
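
If you want to review the planned changes before they are applied, you can add the “--preview” flag, and then either apply or cancel the previewed changes:

$ gcloud deployment-manager deployments update example-cluster --config cluster-1/cluster.yaml --preview
$ gcloud deployment-manager deployments update example-cluster   # apply the previewed changes
$ gcloud deployment-manager deployments cancel-preview example-cluster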

For now, delete the cluster. We will re-create it later.

$ gcloud deployment-manager deployments delete example-cluster

Templating

These YAML files are actually your infrastructure “code”, and you probably want to test that code before deploying it to production. For this to be effective, you should deploy the same code both for testing and for production, even though these environments might need a few parameters to be set differently.

That’s where templating becomes very useful. You can have a single template used for all your environments and just parametrize what needs to be different. Deployment Manager supports Python and Jinja 2 as templating languages. I will demonstrate templating using Jinja 2 because, even though Python is recommended by Google, I find Jinja 2 much more suitable and easier to read for this use case.

The “cluster-2” directory contains the same example but split into two files.

The resource definition:

imports:
- path: templates/cluster.jinja
  name: cluster.jinja

resources:
- name: example-cluster
  type: cluster.jinja
  properties:
    description: "Example Cluster"
    zones:
    - europe-west3-b
    - europe-west3-c
    initialNodeCount: 1

This file has a structure very similar to the non-templated version, but instead of instantiating GCP resources directly, it instantiates a template. That template instance also has a name and properties, which can then be used in the template definition.

The template:

resources:
- name: {{ env['name'] }}
  type: container.v1.cluster
  properties:
    zone: {{ properties['zones'][0] }}
    ...

With the Jinja 2 syntax, the template instance name and properties are used to parametrize it.
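
In addition to properties, Deployment Manager exposes a few environment variables to templates, such as env['name'], env['project'], and env['deployment']. You can also use standard Jinja 2 filters, for example to provide defaults; a small sketch (not part of the repository):

- name: {{ env['name'] }}
  type: container.v1.cluster
  properties:
    zone: {{ properties['zones'][0] }}
    cluster:
      # fall back to a generated description if none is given
      description: {{ properties['description'] | default('created by ' ~ env['deployment']) }}
      initialNodeCount: {{ properties['initialNodeCount'] | default(1) }}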

The command to create the deployment is the same as before:

$ gcloud deployment-manager deployments create example-cluster --config cluster-2/example-cluster.yaml

Deployment Manager and 3rd-Party Resources

An amazing ability of Deployment Manager is that you can teach it to manage any kind of resource you want, beyond what it already knows about. Provided that the API to manage these resources fulfills some criteria, you can add that API and define third-party resources as part of your Deployment Manager deployments.

One very interesting use case for this is the management of Kubernetes resources. Deployment Manager doesn’t know out of the box how to create, modify, or delete Kubernetes resources in a Kubernetes cluster, but you can add code to make it possible.

Kubernetes resources are managed by interacting with Kubernetes via a well-defined and versioned API. To keep it both extensible and backward compatible, there are multiple API endpoints which reflect the scope and maturity of the managed resources. For example, “services” are part of the core functionality and are considered mature, so the methods to manage Service resources are part of the API endpoint “/api/v1”. Other, more recently added resource types are found at other endpoints: “deployments” are considered beta and are managed through the API endpoint “/apis/extensions/v1beta1”.

For each of these API endpoints, we need to add instructions for Deployment Manager, so that it knows how to use them. The code needed for that is usually deployed as part of the cluster deployment, so that as soon as you have created the cluster, you can also start managing Kubernetes resources in it.

The “cluster-3” example contains an additional part that will make it possible to deploy Kubernetes resources. If you look at the “cluster-3/templates/cluster.jinja” file, you will find this:

{% set K8S_ENDPOINTS = {
    '': 'api/v1',
    '-apps': 'apis/apps/v1',
    '-rbac': 'apis/rbac.authorization.k8s.io/v1',
    '-v1beta1-extensions': 'apis/extensions/v1beta1'
} %}

...

{% for typeSuffix, endpoint in K8S_ENDPOINTS.iteritems() %}
- name: {{ env['name'] }}-type{{ typeSuffix }}
  type: deploymentmanager.v2beta.typeProvider
  properties:
    ...
    descriptorUrl: https://$(ref.{{ env['name'] }}.endpoint)/swaggerapi/{{ endpoint }}
{% endfor %}

We won’t go into the details, but you can see that it uses a Jinja loop to create definitions for each of the API endpoints that we want to use. It uses a Swagger or OpenAPI URL to discover what methods are available at each endpoint, and how to call them. The first iteration of the loop in the template will produce this:

- name: example-cluster-type
  type: deploymentmanager.v2beta.typeProvider
  properties:
    descriptorUrl: https://$(ref.example-cluster.endpoint)/swaggerapi/api/v1
    ...

The $(ref.example-cluster.endpoint) is an inline reference that resolves at deploy time to a property (“endpoint”) of a created resource (“example-cluster”). In this case, it resolves to the IP address of the Kubernetes master. It also establishes a dependency between the two resources (example-cluster-type and example-cluster), so Deployment Manager will make sure that they are created in the right order.
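
The same reference mechanism works for any resource property. To illustrate it outside of the Kubernetes use case, here is a minimal sketch (not part of the repository) that reserves a static IP address and attaches it to a VM:

resources:
- name: example-address
  type: compute.v1.address
  properties:
    region: europe-west1
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: zones/europe-west1-b/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
        # resolves to the reserved address and creates a dependency
        natIP: $(ref.example-address.address)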

The descriptorUrl becomes something like https://35.198.131.165/swaggerapi/api/v1 and is used to resolve the “api/v1” methods of Kubernetes. The example contains three further API endpoints, and you can extend it as you need (i.e. according to the APIs that you need to access for your deployment).

You can start using these new resource types in the same deployment, or also in new deployments that you create in the same project.

After the cluster is deployed, you can see the new types either in the GCP Console (there is a section dedicated to Deployment Manager) or using the gcloud tool:

$ gcloud deployment-manager types list | grep example
example-cluster-type
example-cluster-type-apps
example-cluster-type-rbac
example-cluster-type-v1beta1-extensions

A Kubernetes Application

Now we are finally ready to create a Kubernetes application using Deployment Manager. We will define three resources: a Kubernetes “deployment” with a pod specification, a service, and an ingress to make the application accessible from outside the cluster.

As you can see in the template file, each Kubernetes resource is created by referencing one of the above API-based types that we defined as part of the cluster deployment. For example:

- name: example-hello-world-svc
  type: {{ env['project'] }}/example-cluster-type:/api/v1/namespaces/{namespace}/services
  properties:
    apiVersion: v1
    kind: Service

Some macros are used to make it a bit easier to write the resources (NAME_PREFIX, CLUSTER_TYPE, etc.).
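
For illustration, a more complete Service definition (with its Kubernetes manifest in the properties) might look roughly like this; the namespace, labels, selector, and ports shown here are assumptions, so check the template in the repository for the exact values:

- name: example-hello-world-svc
  type: {{ env['project'] }}/example-cluster-type:/api/v1/namespaces/{namespace}/services
  properties:
    apiVersion: v1
    kind: Service
    # path parameter for the collection URL above
    namespace: default
    metadata:
      name: hello-world-example
    spec:
      type: NodePort
      selector:
        app: hello-world-example
      ports:
      - port: 80
        targetPort: 8080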

You can deploy the template with the 3 Kubernetes resources as follows:

$ gcloud deployment-manager deployments create example-hello-world --config hello-world-1/example.yaml

GCP will automatically assign an ephemeral external IP address to access your application. You can find out what it is using the kubectl command, but first you need to get the credentials for the cluster:

$ gcloud container clusters get-credentials example-cluster --zone europe-west3-b

$ kubectl get pod
NAME                                   READY     STATUS    RESTARTS   AGE
hello-world-example-75d79ccdd5-pmjcg   1/1       Running   0          1m
hello-world-example-75d79ccdd5-vtb6g   1/1       Running   0          1m

Now you can look at the ingress resource to find out the external IP address:

$ kubectl get ingress -o wide
NAME                  HOSTS     ADDRESS         PORTS     AGE
hello-world-example   *         35.201.65.191   80        10m

… and access your application here: http://35.201.65.191.

Voilà, a fully reproducible Kubernetes cluster and application described as code :)

To continue experimenting with Deployment Manager, you could have a look at Google KMS, which allows you to encrypt secrets using a key that you store in GCP (so that you can put encrypted values in your configuration files) and then generate, for example, Kubernetes secrets from them. Something else that I have already tested and that works well is creating a PostgreSQL database and deploying the Cloud SQL Proxy with a service account generated via Deployment Manager. Let me know if you would be interested in reading an article about that.

Don’t forget to clean up everything after you have finished experimenting so that you don’t waste GCP credits.
