Cloud Deployment Manager & Kubernetes

Daz Wilkin
Google Cloud - Community
9 min read · Jun 12, 2018

Yes, you can :-)

I’ve been familiarizing myself with Helm. Helm is decent and more straightforward than I expected. However, my major gripe with Helm is that it’s just for Kubernetes. I understand that Helm has no pretensions to be anything else. But it means that before I can use Helm, I must provision a Kubernetes cluster and possibly related services (e.g. Cloud Spanner, some Service Accounts).

I want the one-tool-to-rule-them-all. There’s much interest in HashiCorp’s Terraform these days as this uber-configurer, but even Terraform is limited in its applicability to deploying Kubernetes applications.

It turns out that Google’s own Cloud Deployment Manager (DM) has a powerful (relatively new, relatively obscure) feature called “Type Providers” that enables DM to be configured to deploy pretty much anything including our beloved Kubernetes. Howzat!?

Credit due to David Schweikert and his post “GCP Infrastructure as Code with Deployment Manager”, which you may want to read first (or instead of my post). Credit due also to the Deployment Manager team that provided a sample for Kubernetes, even if it is hidden in the bowels of GitHub (link).

In full disclosure, I submitted a PR for the DM team to update its Kubernetes (called “GKE”) samples, but I don’t (yet) fully understand how it works. It does work though, and that’s 99% of the struggle: using it, I can apply a Deployment and a Service to a Kubernetes cluster.

Setup

Have a Google Cloud Platform project lying around. If not already, enable Deployment Manager (DM):

PROJECT=[[YOUR-PROJECT]]
gcloud services enable deploymentmanager.googleapis.com \
--project=$PROJECT

I’m going to assume (!) that DM will enable services (e.g. Kubernetes Engine) for us. [It does.]

I’m going to assume you’re somewhat familiar with DM. Very simply (!) it’s a service that accepts sets of Google Cloud Platform API calls. These API calls are Google’s published (definitive) APIs for each of its services.

In essence (!) if there’s a Google Cloud Platform API, then you can automate it using Deployment Manager. Kubernetes Engine is — of course — part of this set of GCP APIs, but there’s a difference between the GCP API that provisions Kubernetes Engine and the Kubernetes API that Kubernetes (Engine) uses to provision resources within itself.

Extending Deployment Manager to support the Kubernetes API is what’s enabled by Deployment Manager’s Type Providers.

Kubernetes Engine

Here’s a simple script that will deploy a Kubernetes cluster to your favorite zone. I’ll set myself a stretch goal of adding a variant of this deployment script that provisions a Regional Cluster [see ‘Regional Cluster’ at end of post]:
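In outline, such a template might look like the following. This is a much-abbreviated sketch, not the full script: the property names match the command below, but the full script enumerates many more cluster properties.

```python
# kubernetes_engine.py (abbreviated sketch; the full script sets many
# more cluster properties).

def GenerateConfig(context):
    """Builds a container.v1.cluster resource from the template properties."""
    name = context.properties['CLUSTER_NAME']
    resources = [{
        'name': name,
        # A GCP API call listed under Deployment Manager's supported
        # resource types.
        'type': 'container.v1.cluster',
        'properties': {
            'zone': context.properties['CLUSTER_ZONE'],
            'cluster': {
                'name': name,
                'initialNodeCount': context.properties['NUM_NODES'],
            },
        },
    }]
    outputs = [{
        # The master's IP address; consumed later when we build Type
        # Providers against this cluster.
        'name': 'endpoint',
        'value': '$(ref.{}.endpoint)'.format(name),
    }]
    return {'resources': resources, 'outputs': outputs}
```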

You may run this and provide the parameters (CLUSTER_NAME, CLUSTER_ZONE, NUM_NODES) that it expects with:

NAME=a
ZONE=us-west1-a
gcloud deployment-manager deployments create ${NAME} \
--template=kubernetes_engine.py \
--properties=CLUSTER_NAME:${NAME},CLUSTER_ZONE:${ZONE},NUM_NODES:1 \
--project=$PROJECT

And you should receive:

The fingerprint of the deployment is 8I1cswAm1ptrCkgKmwvVKA==
Waiting for update [operation-1528751969941-...]...done.
Update operation operation-1528751969941-... completed successfully.
NAME TYPE STATE ERRORS INTENT
a container.v1.cluster COMPLETED []

And, for some picture prettiness:

Kubernetes Engine cluster “a”

And, what Deployment Manager reports:

Deployment Manager created “container.v1.cluster”

Magic? Not quite. In line #11, type: container.v1.cluster refers to a GCP API call listed under Deployment Manager’s supported resource types:

https://cloud.google.com/deployment-manager/docs/configuration/supported-resource-types

Supported Resource Types “container.v1.cluster”

Which links to Kubernetes Engine’s v1 (!) REST API documentation:

https://cloud.google.com/kubernetes-engine/reference/rest/v1/projects.zones.clusters

projects.zones.clusters

And this object provides us with the properties that we enumerate in our deployment script between lines 12–29.

NB In lines 31–34 we define outputs. Specifically, we create an output called endpoint that refers to the IP address of this cluster’s master. When we come to create Kubernetes API Types, we’ll use this cluster to source those APIs for us, and we’ll need this value then.

Enough with the digression already.

Suffice to say that Deployment Manager has a default Type Provider in Google Cloud Platform. We don’t need to define Google Cloud Platform APIs. We do however need to define others.

Please delete the Deployment (which deletes the cluster) as we’ll recreate the cluster when we create Types against it in the next section:

gcloud deployment-manager deployments delete ${NAME} \
--project=${PROJECT}

Type Providers

OK, I’m going to let Google’s documentation do its job explaining how Type Providers work and how they’re defined. It isn’t entirely clear to me.

As with our Deployment Script above, we need to generate Type Provider resources for Deployment Manager to consume. So, let’s create another script that should be structurally similar to the Kubernetes Engine cluster script (and this is almost exactly a copy of Google’s script):
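Condensed, the shape of that script is roughly this. It’s a sketch of Google’s template: the line numbers cited in the surrounding text refer to the full version, and the authentication/input-mapping boilerplate is elided.

```python
# kubernetes_engine_apis.py (condensed sketch): one Type Provider per
# Kubernetes API collection, derived from the cluster's Swagger docs.

def GenerateConfig(context):
    # Suffix on the root type name -> Kubernetes API path to introspect.
    apis = {
        '': 'api/v1',
        '-apps': 'apis/apps/v1beta1',
        '-v1beta1-extensions': 'apis/extensions/v1beta1',
    }
    endpoint = context.properties['endpoint']  # the cluster master's IP
    resources = []
    for suffix, path in apis.items():
        resources.append({
            'name': 'kubernetes' + suffix,
            'type': 'deploymentmanager.v2beta.typeProvider',
            'properties': {
                # Deployment Manager infers the supported methods from
                # the Swagger document served at this URL.
                'descriptorUrl': 'https://{}/swaggerapi/{}'.format(endpoint, path),
                # Authentication and input-mapping options elided; see
                # Google's sample for the full boilerplate.
                'options': {},
            },
        })
    return {'resources': resources}
```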

The heart of the script is lines 4–8. The script iterates over each of the items listed in this section and creates Type Providers (lines 13–49) for them. I’ll leave that boilerplate to Google to explain. Suffice to say that it infers the methods that each API supports from the API’s Swagger documentation. But what are these APIs?

First, let’s deploy our scripts. We’re going to need to combine these because we want to access the Kubernetes cluster created in the first script in order to enumerate its types (for its APIs) in the second. Create a configuration file:
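A minimal sketch of such a configuration follows; the resource names cluster and types, and the exact property names, are my assumptions, chosen to be consistent with the description below.

```yaml
imports:
- path: kubernetes_engine.py
- path: kubernetes_engine_apis.py

resources:
- name: cluster
  type: kubernetes_engine.py
  properties:
    CLUSTER_NAME: [[YOUR-CLUSTER-NAME]]
    CLUSTER_ZONE: [[YOUR-CLUSTER-ZONE]]
    NUM_NODES: 1
- name: types
  type: kubernetes_engine_apis.py
  properties:
    endpoint: $(ref.cluster.endpoint)
```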

The configuration file imports both our scripts (lines 1–3) and applies them both (lines 6–11, 12–16):

gcloud deployment-manager deployments create ${NAME} \
--config=generate_apis.yaml \
--project=$PROJECT

For the cluster (lines 6–11), we’re effectively converting our command-line into a configuration. Please replace [[YOUR-CLUSTER-NAME]] with the name you’d like to give your cluster, and replace [[YOUR-CLUSTER-ZONE]] with the zone in which you’d like it created.

For the types (and I’ve called these types because I’m unoriginal; lines 12–16), the property endpoint is the output from our cluster creation. It is assigned the value of the cluster’s master’s IP address. The script uses this to introspect the Kubernetes cluster’s list of APIs and build types for them.

After this completes, we’ll have both a cluster and a set of types:

types: kubernetes, kubernetes-apps, kubernetes-v1-beta1-extensions

Let’s gain access to our cluster so that we may review these:

gcloud container clusters get-credentials [[YOUR-CLUSTER-NAME]] \
--project=${PROJECT}

And you may then:

kubectl cluster-info
Kubernetes master is running at https://[[YOUR-MASTER-IP]]

NB Grab that Kubernetes master IP address

Line #10: api/v1/

This is Kubernetes’ foundational API. All the types listed in the Kubernetes (v1.10) documentation are available from this API:

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/

Take [[YOUR-MASTER-IP]] address and browse or curl the endpoint with /api/v1 postfixed and you will receive an enumeration of all Kubernetes v1 types:

curl \
--insecure \
--silent \
https://[[YOUR-MASTER-IP]]/api/v1/ \
| jq --raw-output .resources[].name

NB Needs that final v1/.

Currently this list includes some family favorites (a subset shown here):

configmaps
namespaces
nodes
pods
secrets
serviceaccounts
services

Line #11: apis/apps/v1beta1

Take [[YOUR-MASTER-IP]] address and postfix /apis/apps/v1beta1:

curl \
--insecure \
--silent \
https://[[YOUR-MASTER-IP]]/apis/apps/v1beta1 \
| jq --raw-output .resources[].name

NB It’s /apis this time, not /api.

Currently this returns:

controllerrevisions
deployments
deployments/rollback
deployments/scale
deployments/status
statefulsets
statefulsets/scale
statefulsets/status

Others

The set of API groups supported by your cluster may be enumerated like so:

curl \
--insecure \
--silent \
https://[[YOUR-MASTER-IP]]/apis/ \
| jq --raw-output .groups[].name

apiregistration.k8s.io
extensions
apps
authentication.k8s.io
authorization.k8s.io
autoscaling
batch
certificates.k8s.io
networking.k8s.io
policy
rbac.authorization.k8s.io
storage.k8s.io
apiextensions.k8s.io

You may add any|all of these to the Deployment Manager script so that you may access them through Deployment Manager.

From the Deployment Manager script, you’ll recall we used /swaggerapi as our endpoint, so have a look at that too:

curl \
--insecure \
--silent \
https://[[YOUR-MASTER-IP]]/swaggerapi

NB It used to be possible to browse Kubernetes’ Swagger docs through a Kubernetes-hosted Swagger UI browser but this appears to not be working for me :-(

OK, we can deploy Kubernetes clusters and use Type Provider to define Kubernetes types, so… !?

Deploy Kubernetes Resources

[See ‘Adding an Ingress’ at end of post]

Dun Dun Dun… Let’s cut to the chase:
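Here is a sketch of the shape of deployment.py. It’s abbreviated: the line numbers cited later refer to the full script, and details such as the default namespace and the replica count are my assumptions.

```python
# deployment.py (abbreviated sketch): a Kubernetes Deployment and a
# NodePort Service, expressed as resources against the Type Providers
# created above, in the form [[PROJECT]]/[[TYPE]]:[[TYPE-API]].

def GenerateConfig(context):
    project = context.env['project']
    name = context.properties['name']
    port = context.properties['port']
    image = context.properties['image']
    labels = {'app': name}
    resources = [{
        'name': name + '-deployment',
        'type': '{}/kubernetes-apps:/apis/apps/v1beta1/namespaces/{{namespace}}/deployments'.format(project),
        'properties': {
            # The body mirrors the Kubernetes Deployment API object.
            'namespace': 'default',
            'apiVersion': 'apps/v1beta1',
            'kind': 'Deployment',
            'metadata': {'name': name + '-deployment'},
            'spec': {
                'replicas': 1,
                'template': {
                    'metadata': {'labels': labels},
                    'spec': {'containers': [{
                        'name': name,
                        'image': image,
                        'ports': [{'containerPort': port}],
                    }]},
                },
            },
        },
    }, {
        'name': name + '-service',
        'type': '{}/kubernetes:/api/v1/namespaces/{{namespace}}/services'.format(project),
        'properties': {
            'namespace': 'default',
            'apiVersion': 'v1',
            'kind': 'Service',
            'metadata': {'name': name + '-service'},
            'spec': {
                # NodePort exposes the Deployment on each node.
                'type': 'NodePort',
                'selector': labels,
                'ports': [{'port': port}],
            },
        },
    }]
    return {'resources': resources}
```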

Deploy this using:

gcloud deployment-manager deployments update k \
--template=deployment.py \
--project=$PROJECT \
--properties=name:henry,port:80,image:nginx

The fingerprint of the deployment is Q_zFbcIkVc3UTACp-5Gw1g==
Waiting for update [operation-1528761005721-...]...done.
Update operation operation-1528761005721-... completed successfully.
NAME TYPE STATE ERRORS
henry-deployment kubernetes-apps:/apis/.../deployments COMPLETED
henry-service kubernetes:/api/.../services COMPLETED

And — most excitingly —

Kubernetes Engine console: Deployments

Showing the Deployment of Nginx on port 80 and exposed through a Service via a NodePort on Kubernetes.

All achieved using only Deployment Manager !!

There are two things to know about this Kubernetes Deployment script.

First, we must back-reference the Types that we created using Type Provider. These are uniquely referenced in the form:

[[PROJECT]]/[[TYPE]]:[[TYPE-API]]

So, let’s explain this more completely for Deployment. As you’ll recall, Kubernetes Deployments are defined here:

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deployment-v1-apps

They require apps/v1beta1, which we defined in line #6 of kubernetes_engine_apis.py:

'-apps': 'apis/apps/v1beta1',

We used kubernetes as the root name of all the types in this project. So, when we call these in Deployment Manager, we must reference the type [[PROJECT]]/kubernetes-apps. Here it is highlighted in the Deployment Manager console:

Deployment Manager types: “kubernetes-apps”

Then we must tell Deployment Manager what API call to make on that Type. In this case, if you scour that endpoint for deployments, you’ll find that it’s /apis/apps/v1beta1/namespaces/{namespace}/deployments, and so that’s what we call in line 41 of deployment.py.

Second, we need to determine what properties (body) to provide to the method. This is defined in the Kubernetes API. If you review the link provided above, you’ll see that deployment.py lines 42–68 reflect this structure.

Caution

When you delete Deployment Manager deployments that create Kubernetes resources, not all the Kubernetes resources will be deleted. In the above example, the Kubernetes Service will (!) be deleted but the Deployment will not (!) be deleted. That seems curious to me.

If you delete the Deployment Manager deployment that created the Kubernetes cluster, of course you will whack everything (Service and Deployment too).

Conclusion

Deployment Manager can provision GCP resources and Kubernetes resources. And, if you want to try it for yourself, you can likely get it to provision resources for your API too!

Yes, you may want to stick with Helm or Jsonnet or Terraform. But, if you’d like to use Deployment Manager, hopefully this story showed you that it can be used for more than you perhaps thought.

Feedback always welcome.
That’s all!

Update 180611: Regional Cluster

With thanks to my colleague Adam for helping me get this working, here’s a simple Deployment Manager template to create a Kubernetes Engine Regional Cluster:
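A sketch of what kubernetes_engine_regional_cluster.py plausibly contains; the gcp-types/container-v1beta1 collection name and the parent path format are my assumptions for the v1beta1 type.

```python
# kubernetes_engine_regional_cluster.py (sketch): regional clusters need
# the v1beta1 API, addressed here via an assumed gcp-types collection.

def GenerateConfig(context):
    name = context.properties['CLUSTER_NAME']
    region = context.properties['CLUSTER_REGION']
    resources = [{
        'name': name,
        # The v1beta1 type is the trick: regional clusters are not
        # available through container.v1.cluster.
        'type': 'gcp-types/container-v1beta1:projects.locations.clusters',
        'properties': {
            # Regional clusters are addressed by location, not zone.
            'parent': 'projects/{}/locations/{}'.format(
                context.env['project'], region),
            'cluster': {
                'name': name,
                'initialNodeCount': context.properties['NUM_NODES'],
            },
        },
    }]
    outputs = [{
        # Same endpoint output as the zonal template, so it can drop in
        # as a replacement in generate_apis.yaml.
        'name': 'endpoint',
        'value': '$(ref.{}.endpoint)'.format(name),
    }]
    return {'resources': resources, 'outputs': outputs}
```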

NB The trick is to use the v1beta1 type as shown in line 12.

If you would prefer to use a Regional Cluster in the previous example, simply replace kubernetes_engine.py with kubernetes_engine_regional_cluster.py in generate_apis.yaml (there are 2 occurrences) and update your deployment.

Update 180611: Adding an Ingress

In the kubernetes_engine_apis.py script lies a previously unused reference to apis/extensions/v1beta1. This API includes the Ingress resource that permits us to expose the Service through a GCP HTTP/S Load-Balancer. Let’s add that configuration to our Kubernetes Deployment:
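As a sketch, the added Ingress resource might look like this; the Type Provider name kubernetes-v1beta1-extensions and the helper function are my assumptions for illustration.

```python
# Sketch of the Ingress resource appended to deployment.py's resources
# list; the type-provider name is an assumption.

def IngressResource(project, name, port):
    return {
        'name': name + '-ingress',
        'type': '{}/kubernetes-v1beta1-extensions:/apis/extensions/v1beta1/namespaces/{{namespace}}/ingresses'.format(project),
        'properties': {
            'namespace': 'default',
            'apiVersion': 'extensions/v1beta1',
            'kind': 'Ingress',
            'metadata': {'name': name + '-ingress'},
            'spec': {
                # The previously created Service provides the backend;
                # GKE then provisions an HTTP/S Load-Balancer for it.
                'backend': {
                    'serviceName': name + '-service',
                    'servicePort': port,
                },
            },
        },
    }
```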

In line #8, we add the reference to apis/extensions/v1beta1 for Ingress. The Ingress is then defined in lines 71–88. In lines 82–85, we reference the previously created Service, as this provides the backend for the Ingress.

When we deploy this script, we’ll see an Ingress is created:

Kubernetes Engine: Ingress Details

NB It’s unclear why it reports a warning on the backend service because it appears all good. Perhaps some latency?

Here’s the Ingress resource deployed by Deployment Manager:

Deployment Manager: Kubernetes Deployment w/ Ingress

Here’s the Cloud Console Network Services Load Balancer showing the HTTP/S Load-Balancer that Kubernetes (Ingress) provisioned for us:

HTTP/S Load-Balancer provisioned by Ingress

Grabbing the IP address, we can browse to its endpoint and see our Nginx deployment:
