Developing With Containers Done Right

Build and ship. Build and ship. Rinse and repeat.

The goal of this post is to establish a seamless, non-intrusive workflow for developers and dev teams working with Kubernetes, enabled through GKE. The workflow integrates, without modification, into both trusted and untrusted environments in a secure, least-privilege fashion. The one caveat is that developers may not be able to provision the cluster themselves; that depends on your organization and how trust is established. Here is what we will build:

Note that it is entirely possible, if preferred, to use an external Git repository (GitHub, for example) in place of Cloud Source Repositories. Google Cloud Build can pull from GitHub directly, from Cloud Source Repositories (which is Git-based), or from Bitbucket.

This post assumes you already have gcloud installed and a GCP project to work in. Since this is all one-time setup anyway, I highly recommend Google Cloud Shell, which has everything pre-installed and configured; once you've run through this guide you can go back to working locally. If you have neither gcloud nor a GCP project, install the Google Cloud SDK and create a project first.

Let’s get started. Provision a GKE cluster (this will take 3 minutes or less!):

gcloud container clusters create dev-cluster --zone=us-central1-a

Retrieve cluster credentials:

gcloud container clusters get-credentials dev-cluster --zone=us-central1-a

Since Google Cloud is strict about least privilege and security, we need to grant Cloud Build permission to deploy to our cluster:

PROJECT="$(gcloud projects describe \
    "$(gcloud config get-value core/project -q)" --format='get(projectNumber)')"
gcloud projects add-iam-policy-binding "$PROJECT" \
    --member="serviceAccount:${PROJECT}@cloudbuild.gserviceaccount.com" \
    --role=roles/container.developer

Now let’s bring down the sample repo that has a basic web application, some Kubernetes deployment manifests, and a cloudbuild.yaml file that will allow us to utilize Google Cloud Build for automated build and deploys into a Kubernetes cluster from a simple Git push:

git clone -b devflow <SAMPLE REPO URL>
cd hello-world-nginx

Before we can start iterative dev here, we need to create our initial deployment in the cluster. That way future deploys have something to reference, we have a consistent endpoint to hit, and we don't have to worry about any of this later on:

kubectl apply -f .
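
For reference, the manifests you just applied amount to a standard nginx Deployment plus a Service that exposes it. Here is a minimal sketch; the names, labels, and image path are assumptions for illustration, so check the actual files in the repo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-nginx
  template:
    metadata:
      labels:
        app: hello-world-nginx
    spec:
      containers:
      - name: hello-world-nginx
        image: gcr.io/<PROJECT NAME>/hello-world-nginx:latest
        ports:
        - containerPort: 80
---
# Expose the deployment on an external IP
apiVersion: v1
kind: Service
metadata:
  name: hello-world-nginx
spec:
  type: LoadBalancer
  selector:
    app: hello-world-nginx
  ports:
  - port: 80
    targetPort: 80
```

Once the Service reports an external IP (`kubectl get service hello-world-nginx`), that is the consistent endpoint mentioned above.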

Now let's get this code into Google Source Repository (GSR), which is essentially a Git repo hosted on GCP. You can use a GitHub repo instead if you'd rather work from a code tree you already own, or copy this one into your own GitHub account. These instructions assume GSR. Start by running the following command:

gcloud source repos create hello-world-nginx

Now let’s authenticate to our newly created GSR repo and add our code in:

git config --global credential.https://source.developers.google.com.helper gcloud.sh
git remote add google https://source.developers.google.com/p/<YOUR PROJECT ID>/r/hello-world-nginx
git push --all google

If you'd like to use a different code tree, simply copy the cloudbuild.yaml and cloudbuild-deploy.yaml files into it; the instructions are the same from here on out, except that instead of creating a new repo in GSR, you will point your Google Cloud Build trigger directly at your GitHub repo. Let's create that trigger now.

Navigate to Google Cloud Build (and enable the API if prompted):

On the Cloud Build page, navigate to the Triggers page by clicking on the icon on the left-side navigation:

Select “Create a Trigger”

Select “Cloud Source Repository” on the next page:

Select the “hello-world-nginx” repo from the list on the next page.

At this point, provide the following information for your trigger:

  1. Name
  2. Branch (leave the default wildcard)
  3. Build configuration: select “cloudbuild.yaml” (a Cloud Build configuration file)
  4. Set the file location to “cloudbuild-deploy.yaml”, since we want to both build our image and deploy it to our GKE cluster
  5. Select “Create Trigger”

Here is the cloudbuild-deploy.yaml file that will provide your trigger with the information it needs to build your image from source and deploy it to your cluster:
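
As a sketch (not necessarily the exact file from the repo), a cloudbuild-deploy.yaml that builds the image and rolls it out to GKE might look like the following. The deployment and image names are assumptions based on the sample repo's name, and the angle-bracketed values are yours to fill in:

```yaml
steps:
# Build the image and push it to Container Registry
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/<PROJECT NAME>/hello-world-nginx:$COMMIT_SHA
# Point the existing Deployment at the freshly built image
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment/hello-world-nginx
  - hello-world-nginx=gcr.io/<PROJECT NAME>/hello-world-nginx:$COMMIT_SHA
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<ZONE>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=<CLUSTER NAME>'
```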

Note that there are several places with angle brackets where you will need to fill in your zone, project name, and cluster name. Once done, commit and push your change, and Cloud Build should build and deploy based solely on your push. That's your flow: push, and a few seconds later your deployment is live. This post only covers the one-time setup; from here you can set it, forget it, and focus on your code while enjoying the benefits of container-centric development.

There are a dozen permutations of this flow depending upon how much you want to manage locally and customize, or have Google Cloud manage for you. This just happens to be the most automated and straightforward workflow that utilizes Google Cloud’s tooling primarily. There are optimizations that can be made here, like build caching for faster build times, but this is a great starting point.

Oh, and in case you hadn't noticed from the cloudbuild-deploy.yaml definition, we're not using Docker to build our containers within Cloud Build.

We’re using Kaniko :).
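
Kaniko also gets you the build caching mentioned above nearly for free: per its documentation, the executor can cache layers in your registry between builds. Adding flags along these lines to the build step enables it (the TTL value here is illustrative):

```yaml
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/<PROJECT NAME>/hello-world-nginx:$COMMIT_SHA
  - --cache=true      # cache layers in the registry between builds
  - --cache-ttl=6h    # treat cached layers as stale after six hours
```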