Multi-Region Load Balancing With Go and Google Cloud Run — Part 1

Rob Charlwood
Published in The Startup
8 min read · Jul 14, 2020

Learn how to deploy a simple Go application to four regions on Google’s Cloud Run platform and improve your service’s availability and latency.

Introduction

In this three-part tutorial we will deploy a simple stateless API written in Go to Cloud Run in multiple regions. We’ll then use Google’s new serverless network endpoint groups (NEGs) to set up a global HTTPS load balancer that balances traffic between them. This will give our service higher availability and failover whilst also improving user experience and latency by routing users to their nearest running instance.

In this part, we’ll write our simple Go application, create a Dockerfile that builds our final binary, and upload the image to Container Registry.

In part two, we’ll provision the relevant infrastructure via Terraform to get our service running. We’ll also handle the provisioning of a Google-managed SSL certificate.

In part three, we’ll run some tests against our new multi-region service and ensure that the load balancing is working correctly.

Code and Documentation

You can find all the code and documentation used in this tutorial on GitHub.

https://github.com/robcharlwood/multi-region-cloud-run

Assumptions and Prerequisites

This tutorial will make a few assumptions in order to keep this article to a sensible length. Links will be provided to relevant sites and documentation should you need to find out more.

  • You have a basic understanding of Go.
  • You have a basic understanding of Terraform.
  • You have a basic understanding of Docker.
  • You have a domain name available and free for use.
  • You have a Google Cloud Platform account set up with billing enabled.

Overview

The image below shows a top-down view of what we are going to build and how everything will hang together and communicate. We will have four serverless NEGs rather than just the two in the image, but the concept is the same for the remaining regions.

Diagram courtesy of Google

Let’s get GO-ing!

We will start with our simple Go application. This will be an API endpoint that returns a message and a region. The message will contain “Hello world” and the region will show the Cloud Run instance that served the request. This will be useful later on when we want to easily check our multi-region load balancing.

Let’s start by creating a new go module for our API.

go mod init example.com/hello

This will generate a go.mod file, which defines our module path and the version of Go that should be used to build the project. A go.sum file, which locks the checksums of our dependencies, will be generated once we add our first dependency.

Next we will create our main app file. Create a file called hello.go in the same directory with the following content:
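The gist embed is missing here, so below is a minimal sketch of what hello.go looks like, assuming the Echo v4 framework and Facebook’s gracehttp package discussed later in this section — the exact file in the repo may differ slightly:

```go
package main

import (
	"net/http"
	"os"

	"github.com/facebookgo/grace/gracehttp"
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

// getEnvDefault returns the value of the environment variable key,
// or fallback if it is unset.
func getEnvDefault(key, fallback string) string {
	if value, ok := os.LookupEnv(key); ok {
		return value
	}
	return fallback
}

func main() {
	// K_SERVICE is only set when running on Cloud Run
	var host string = getEnvDefault("K_SERVICE", "localhost")

	e := echo.New()
	if host != "localhost" {
		// Running on Cloud Run - force HTTPS redirects
		e.Pre(middleware.HTTPSRedirect())
	}

	e.GET("/", func(c echo.Context) error {
		return c.JSON(http.StatusOK, map[string]string{
			"message": "Hello world",
			"region":  host,
		})
	})
	e.GET("/health/", func(c echo.Context) error {
		return c.JSON(http.StatusOK, map[string]string{"status": "healthy"})
	})

	// Cloud Run expects the service on port 8080 by default
	e.Server.Addr = ":8080"

	// Wrap the server in Facebook's graceful handler so a SIGTERM
	// shuts us down cleanly
	e.Logger.Fatal(gracehttp.Serve(e.Server))
}
```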

The code above is fairly self-explanatory: it creates an API service using the Echo framework with two endpoints:

  • /
  • /health/

The health endpoint is not strictly necessary, since Cloud Run does not require a health check endpoint. However, I decided to include one anyway so that we could replicate an API with multiple endpoints.

The root endpoint is where the meat of our API lives. This endpoint returns a JSON response with a message and region that will be used for the bulk of our testing.

Of particular note is line 22:

var host string = getEnvDefault("K_SERVICE", "localhost")

The K_SERVICE environment variable is reserved by Cloud Run and will only be present when the service is running on the Cloud Run platform. It contains the name of the Cloud Run service handling the request. If this variable is present, then we turn on the HTTPS redirect middleware to improve security. (Cloud Run serves traffic over HTTPS by default — but hey, it can’t hurt, right!) If it’s not present, then the code defaults the value to localhost — this is useful when running the code locally. Pretty cool, right!?

It’s also worth noting that we override Echo’s default port of 1323 with port 8080. This is because Cloud Run will look for your service on 8080 by default. This can be overridden and configured if you want, but for simplicity I’ve just mapped the service to the default port.

We also wrap our Echo server in Facebook’s graceful handler. This ensures that our process exits safely and gracefully when it receives a SIGTERM signal telling the instance to shut down.

Docker it to me baby

Now, let’s get our app into a nice Docker image that is easily deployable to Cloud Run via Google’s container registry. Create a Dockerfile in the same directory as your code with the following contents:
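The Dockerfile embed is missing here, so the following is a sketch of a multi-stage build matching the description below — the non-root user name and directory layout are my assumptions, not necessarily the repo’s exact choices:

```dockerfile
# Stage 1: compile a static binary using the official Go image
FROM golang:1.14.4 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o hello .

# Stage 2: slim Alpine runtime image with a non-root user
FROM alpine:3.12.0
RUN apk add --no-cache ca-certificates && \
    addgroup -S app && adduser -S app -G app
COPY --from=build /app/hello /usr/local/bin/hello
USER app
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/hello"]
```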

This is a multi-stage Docker build where we first extend the golang:1.14.4 image in order to compile and build our final binary. The second stage sees us create a slim Alpine Linux (alpine:3.12.0) image and copy the binary from the first stage into its final production location.

Particular things of note here:

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o hello .

This line builds our final binary for Linux operating systems running the AMD64 (x86-64) architecture. The -s and -w flags omit the symbol table and DWARF debug information during compilation. These are only used by debuggers, and since this is a production image there is no need to include the extra bloat. It’s also worth noting that this does not remove the data required for stack traces, so our panics will still be readable. Awesome.

The Dockerfile also ensures that the final go application is run by a non-root user to tighten security and prevent potential privilege escalation vulnerabilities.

Create a service account

Now that we have our final image defined, we need to upload it to Google’s Container Registry. To do that, we’ll need a service account with the correct permissions to perform the upload.

Log in to Google Cloud Platform and add a service account for Terraform. This service account will be used now to upload our image to the registry, but it will also be used in part 2 to provision our infrastructure with Terraform.

Create a service account for terraform

Now that you have a service account, go back into the account and generate a JSON key that we can use to authenticate to our project from our local machines or from a CI/CD process (which I highly recommend).
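If you prefer the command line, a key can also be generated with gcloud — the service account email and key path below are placeholders, so substitute your own:

```shell
# Placeholder service account email and key path - substitute your own
gcloud iam service-accounts keys create .keys/terraform.json \
  --iam-account=terraform@my-project.iam.gserviceaccount.com
```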

Make sure you download the key and store it somewhere safe. Do not store this key in your Git repository! I suggest that you either add it to a .keys directory and then add .keys to your .gitignore file, or store the key outside the root of your repository. If this key is ever leaked, you’ll need to revoke it immediately and generate a new key. Side note — you should be revoking and rotating your access keys on a regular basis anyway.
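For example, assuming you go with the .keys directory approach (the directory name is just my suggestion):

```shell
# Create a directory for keys and make sure Git never tracks it
mkdir -p .keys
echo ".keys/" >> .gitignore
```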

Generate and download a JSON key for the terraform service account

Next we need to add our new service account as an IAM user and furnish it with the permissions shown in the image below.

Adding IAM permissions to terraform service account

These permissions will give our Terraform service account the privileges to access all the required resources on our project.

The only permission required for this part of the tutorial is Storage Admin; however, you might want to add them all now to avoid forgetting to do it later.

  • Compute Instance Admin (beta)
  • Compute Load Balancer Admin
  • Compute Network Admin
  • DNS Administrator
  • Security Admin
  • Create Service Accounts
  • Delete Service Accounts
  • Service Account Key Admin
  • Service Account User
  • Cloud Run Admin
  • Service Usage Admin
  • Storage Admin
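If you’d rather script the role grants than click through the console, each one can be bound with gcloud — the project ID and service account email below are placeholders, and roles/storage.admin is the one role needed for this part:

```shell
# Placeholders - substitute your own project ID and service account email
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:terraform@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
```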

Important Note

The permissions granted above have not been scrutinised sufficiently to be used in production. It is strongly advised that you lock down all service accounts to only the bare essential permissions required for the use case of that service account. Terraform by its very nature needs quite sensitive permissions. However, I still suggest that you double check and review the permissions specified above should you decide that you’d like to do something more serious with the project.

Setup container registry and upload image

First up, you’ll need to turn on the Container Registry API. To do this, log in to the Google Cloud Platform console and go to APIs and Services, then Library. Once in the library, search for and enable the Google Container Registry API. This will enable the registry for your project and will allow you to create, push and pull images.
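The same can be done from the command line with gcloud, assuming you have authenticated against your project:

```shell
# Enable the Container Registry API for the current project
gcloud services enable containerregistry.googleapis.com
```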

Turn on the Google Container Registry API

It’s recommended that you turn on vulnerability scanning in Google Container Registry. It’s built into GCR and will scan your Debian, Alpine, CentOS, RedHat and Ubuntu based images for vulnerabilities any time you push to your repository. At the time of writing, vulnerability scanning is free or partially free until January 2021.

Enable vulnerability scanning

Once this is done, we can write a little Makefile to handle the build and upload. Obviously you’ll need to replace the project and image names with your own values. You’ll also need to update the script to point to the key file that you downloaded in the previous step. Depending on your region, you might also want to change the container registry host to one nearer your location — eu.gcr.io is Europe.
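The Makefile embed is missing here, so below is a minimal sketch of such a Makefile — the project ID, image name and key path are placeholders for you to replace, and authentication uses the `_json_key` username that GCR supports for service account keys:

```makefile
# Placeholders - substitute your own project ID, image name and key location
PROJECT  ?= my-project
IMAGE    ?= eu.gcr.io/$(PROJECT)/hello
version  ?= latest
KEY_FILE ?= .keys/terraform.json

.PHONY: build
build:
	docker build -t $(IMAGE):$(version) .
	cat $(KEY_FILE) | docker login -u _json_key --password-stdin https://eu.gcr.io
	docker push $(IMAGE):$(version)
```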

You can now build and upload your image to Container Registry by running the command below:

make build

Or if you’d like to build a specific version, you can pass the version argument in to the command:

make build version=0.1.0

Make a note of the version of your final pushed image, as you’ll need it in part 2 to provision and run our new Cloud Run instances.

Let’s wrap it up!

In this part we wrote a simple API service in Go and created a Dockerfile to compile the final binary and run it under a slim, secure Alpine image. We then created a service account for provisioning all of our infrastructure and used that account to upload the final image to Google’s Container Registry, ready for running.

Join me again in part 2, where we’ll start provisioning our required infrastructure with Terraform. See you there! :)
