Deploy a Go application to Kubernetes in 30 seconds

I’ve been working on a simple web application written in Go, hosted on Google Cloud using Google Kubernetes Engine. It turned out to be such a great experience that I couldn’t miss the opportunity to share it. It took me less than an hour to set things up for the first time, reading the documentation and figuring out how things work. But once I finished, I could deploy a new version of my application in just 30 seconds!

The application I’m going to deploy is a simple RESTful content server, part of a bigger microservice setup. It exposes a small API to publish and fetch news, with data stored in a MySQL database. For the moment, though, I will leave the database part aside and focus on delivering the application to the server.

The application configuration lives in a YAML file which holds the database DSN and some other parameters.

The source code is available on GitHub.


Our deployment process will run tests, build a binary, pack it into a Docker image, upload it to a Docker registry, and then use kubectl to deploy the image to a Kubernetes cluster.

Let’s start by creating a Makefile to keep the instructions for each deployment step. I like Makefiles: they are simple and yet very powerful. Makefiles are executed by make, a lightweight tool that is pre-installed almost everywhere.

We need some identifier for our builds, artifacts and deployments, some sort of version. We could use semantic versioning or sequential build numbers, but I find it simpler to use the hash of the git commit. It’s unique enough, it gives a direct relation between the commit log and the artifacts, and best of all it requires almost no effort to generate. Let’s define a variable in the Makefile to keep our version name.
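A minimal sketch of what this could look like (git rev-parse --short HEAD produces exactly the 7-character short hash):

```makefile
# Use TAG from the environment if set, otherwise fall back
# to the short (7-character) hash of the last git commit
TAG ?= $(shell git rev-parse --short HEAD)
export TAG
```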

With a little shell magic we’ve defined the TAG variable, which can be set manually when running make, or, if not set, defaults to the short hash (first 7 characters) of the last git commit. Then we export this variable, so it’s available in commands run by make.


It seems like a good idea to start the deployment by running tests; this prevents us from producing broken artifacts. Let’s create a test target, which will simply run go test ./....
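A sketch of that target:

```makefile
.PHONY: test
test:
	go test ./...
```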


Now we need to build a binary. We will define a build target, which will simply run go build with a few parameters.
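A sketch of the build target; the output path and the cross-compilation flags (CGO_ENABLED=0 and GOOS=linux, so the binary runs inside a Linux container) are my assumptions:

```makefile
.PHONY: build
build:
	# Build a statically linked Linux binary suitable for a container
	CGO_ENABLED=0 GOOS=linux go build -o bin/news-service .
```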

Go will build a binary, statically linking everything required to run it. But we still need some deployable unit, something we can easily distribute. This is where Docker comes into play: Docker images and a Docker registry are perfect for our goal.

Let’s create a Dockerfile so we can pack our application into a Docker image.
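A sketch of such a Dockerfile, matching the steps described below; file names like bin/news-service and config.yml are placeholders of my own:

```dockerfile
FROM alpine

# TLS root certificates, so the application can call HTTPS endpoints
RUN apk add --no-cache ca-certificates

EXPOSE 8080

# The binary built in the previous step, plus a default configuration
ADD bin/news-service /app/news-service
ADD config.yml /app/config.yml

CMD ["/app/news-service"]
```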

We could create an image from scratch, but I prefer alpine as a base image: for just a few extra MB we get a package manager and busybox.

Our image build installs ca-certificates, exposes port 8080, and adds the binary we built in the previous step along with some default configuration. Finally, we define the command required to run our application.

We need another target in our Makefile to build the Docker image.
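A sketch of the pack target; the image name gcr.io/my-project/news-service is a placeholder:

```makefile
.PHONY: pack
pack: build
	# Depends on build, so the image always contains a fresh binary
	docker build -t gcr.io/my-project/news-service:$(TAG) .
```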

Before making an image, it’s important to build the binary; otherwise docker build will fail, or will use the wrong version of the application. This is why we define the build step as a dependency of pack. The step itself just runs docker build and tags the image using our TAG variable.

I’m using Google Container Registry because it’s well integrated with other Google Cloud services, but any Docker registry will do.

The last step is to push our image to the registry; let’s make a target for it.
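A sketch, assuming the target is called upload and using the same placeholder image name:

```makefile
.PHONY: upload
upload:
	docker push gcr.io/my-project/news-service:$(TAG)
```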


We are going to deploy our application to a Kubernetes cluster, and there are a number of ways to set one up. You can use minikube to set up a local environment or kops to set up a cluster in AWS. I will be using Google Kubernetes Engine: it’s easy to set up, and fully managed by Google Cloud.

It’s not important how you set up your Kubernetes cluster, but before continuing, make sure kubectl is connected to the cluster you want to deploy the application to.
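For example, you can check which cluster kubectl currently points at:

```shell
# Show the context (cluster/user pair) kubectl will talk to
kubectl config current-context

# Verify the connection and see the control plane endpoint
kubectl cluster-info
```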

The application in the cloud needs a configuration file, so it knows how to connect to the database, along with the rest of its parameters. We could bake the proper configuration into the Docker image, but that’s not very practical: the image would become environment-aware, changing the configuration would require rebuilding the image, and deploying the application to different Kubernetes clusters (for example, staging and production environments) would require different images.

Instead, we will use Kubernetes ConfigMap, an object which will keep our configuration. We will mount it in the application container as a file.

Our deployment process will only deploy application container. The configuration will be deployed separately as part of cluster provisioning.

I prefer to keep configuration separate, not in the same repository as the application code. It should be part of the cluster provisioning process rather than of the application itself.

Here is my ConfigMap object:
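A sketch of what it could look like; the object name, keys and DSN are placeholders of my own:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: news-config
data:
  # Mounted into the application container as a config.yml file
  config.yml: |
    dsn: "user:password@tcp(mysql:3306)/news"
    listen: ":8080"
```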

You can deploy it using kubectl apply -f configmap.yml, or create the ConfigMap using the kubectl create command; it won’t make much difference.

In my experience, describing your Kubernetes resources in YAML is the better choice. It allows you to track changes, and you can store them in a version control system. Overall, YAML is more consistent and explicit than a batch of kubectl create commands somewhere in a shell script. But use whatever fits you better.

Now, let’s get back to our application and create definitions for our pods and services. I’m going to create a k8s folder in the application repository, to keep track of all the Kubernetes resources we need to run the application.

First, we create a Kubernetes deployment, in the file k8s/deployment.yml.
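A sketch of such a deployment; the names and labels are placeholders of my own, but the ${TAG} placeholder, port 8080 and the ConfigMap mount follow the text:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: news-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: news-service
  template:
    metadata:
      labels:
        app: news-service
    spec:
      containers:
        - name: news-service
          # ${TAG} is substituted by envsubst during deployment
          image: gcr.io/my-project/news-service:${TAG}
          ports:
            - containerPort: 8080
          volumeMounts:
            # Mount the ConfigMap over the default config file
            - name: config
              mountPath: /app/config.yml
              subPath: config.yml
      volumes:
        - name: config
          configMap:
            name: news-config
```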

It defines a specification for all containers required by the application. At the moment we only need one container, which runs our application image. It exposes port 8080, and we mount the configuration we deployed before.

Note that I’m using a ${TAG} placeholder for the container image, because we are going to use a different tag for every deployment. We could put something like latest, but then Kubernetes wouldn’t be able to see the difference between deployments, and we would introduce ambiguity about what exactly we are deploying. Instead, I prefer to use a placeholder and envsubst to substitute it with the actual value during deployment.

Another thing we need is a Kubernetes service, a load balancer that makes our API accessible from outside. So, let’s append the following to k8s/deployment.yml.
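A sketch of the service definition, appended after a --- document separator; the names mirror the deployment placeholders:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: news-service
spec:
  type: LoadBalancer
  selector:
    app: news-service
  ports:
    - port: 80
      targetPort: 8080
```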

It defines how the load balancer should discover target pods and which ports to use.

Well, that’s it, we are ready to deploy our application to the cluster. Let’s create the last target in our Makefile.
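A sketch of the deploy target:

```makefile
.PHONY: deploy
deploy:
	# Substitute ${TAG} in the manifest, then apply it to the cluster
	envsubst < k8s/deployment.yml | kubectl apply -f -
```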

In this step, we use envsubst to replace the placeholders in the YAML with actual values, and then apply the changes to the cluster using kubectl apply.

And, finally, let’s deploy the application by running all the steps.
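Assuming the targets are named after the steps above, that would be something like:

```shell
make test build pack upload deploy
```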

Or, we can define another target to run all the steps, so you don’t need to type too much
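For example (the target name release is my own choice):

```makefile
.PHONY: release
release: test pack upload deploy
```

There is no need to list build here, since pack already depends on the build step.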

With this, we can deploy by simply running
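For example, assuming the combined target is named release:

```shell
make release
```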

After the first deployment, you will need to wait for Google Cloud (or another Kubernetes provider) to create your load balancer. Run kubectl get service to get the load balancer’s external IP. Then open this IP in a browser and you should see JSON with the version of the application.

See the complete Makefile on GitHub.

Well, and now, let’s time it :)

```
real	0m23.103s
user	0m3.622s
sys	0m2.087s
```

Profit! We actually deployed a new version of the application in under 30 seconds. Of course, as the application grows, you will add more tests, things will get more complicated, and the time to deploy will increase. But it’s still a pretty good start if you ask me.

Google Cloud - Community

A collection of technical articles and blogs published or curated by Google Cloud Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.

Sergey Kolodyazhnyy

Written by

Software Engineer at Adobe, Golang and Kubernetes enthusiast and evangelist.
