A Cloud-Native API Part 1: Google Cloud and Kubernetes

Over the last couple of years, cloud-native technology has evolved considerably and should be at the forefront of every engineering team’s mind. Tools such as Docker, Kubernetes, and plenty of others have made managing services in the cloud much easier, and they have enabled our team at High Alpha to deploy and manage our services at scale both effectively and efficiently. However, because these tools are still relatively young, documentation and tutorials covering more specific use cases can be sparse, something we have encountered at High Alpha on a number of occasions. One use case I think will be particularly beneficial to cover is setting up, deploying, and managing a public-facing API.

In a lot of ways this is a pretty basic use case, but it allows me to cover a number of topics on which I found documentation to be somewhat light at the time of our development. The topics I will cover include:

  • Setting up a Kubernetes cluster on Google’s Kubernetes Engine (GKE)
  • Deploying an API to a Kubernetes cluster
  • Allowing external traffic to an API on Kubernetes
  • Monitoring traffic and enforcing rules for an API using Google Endpoints
  • Setting up SSL for your API traffic using Let’s Encrypt

This reflects the setup we use here at High Alpha and have found to be both easy to manage and fairly robust.

For the purpose of this guide, I have created a GitHub repo with all of the code and configs I will go over throughout the blog series. Here is the link to the repo.


The Setup

Throughout this tutorial, we’ll be using the Google Cloud SDK, Docker, and Kubernetes, so make sure you have these tools installed on your machine. Here is a link to the Google Cloud SDK download page, and here is a link to download Docker. The Google Cloud SDK (or ‘gcloud’) should also install the Kubernetes command line tool ‘kubectl’.

After installing the gcloud sdk, if it doesn’t prompt you to do so, run the following command to initialize your default settings:

gcloud init

This will ask you to set up an account, link a Google account, and allow you to create an initial project. If you create a project during the init step, you may skip the next section about creating a project and setting it as the default.

Next, we will create our Google Cloud project. This can be done from Google’s web console or using the gcloud SDK we already installed. For this as well as the rest of the tutorial, we will use the gcloud SDK. To create the project with gcloud, we will run the following command:

gcloud projects create sample-gke-api --name="Sample GKE API"

This tells Google to create a project with the id sample-gke-api and the name Sample GKE API. There are some restrictions on what you can use as the id; however, an id like the one above will be fine.

After creating our project, we need to set the project we just created as the default project for our gcloud tool. To do this, run the following command:

gcloud config set project sample-gke-api

This command sets the project property in the gcloud config to the provided project id sample-gke-api.

Finally, we will need to enable billing on our account to allow us to use various Google APIs such as Google Kubernetes Engine. Conveniently, Google gives $300 in free credits to new accounts, which will be more than enough for this tutorial. To enable billing, go to the Google Cloud Console web app and click on the billing section in the side menu. It should be pretty straightforward from there to set up billing for your project.

If you already have an account, feel free to skip this step. If your account’s free credits are used up or expired, then you can still follow along, but you will not be able to use the Google services required to host this API.

The API

Now that we have a Google Cloud project ready and waiting to be used, let’s put together a simple API to use throughout this tutorial. We will use Go to create the API, as it is simple and builds to a single small binary.

For this API, we’ll make just two routes: a health check route (used later by Kubernetes) and a /hello route which simply returns a greeting. Feel free to create your own API in your own language of choice; just be aware that some aspects of this tutorial will change accordingly. Here is the API:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func health(w http.ResponseWriter, r *http.Request) {
    io.WriteString(w, "ok")
}

func hello(w http.ResponseWriter, r *http.Request) {
    // Default response
    res := "Hello World!"

    // Check for a name parameter in the querystring
    name := r.URL.Query().Get("name")
    if name != "" {
        res = fmt.Sprintf("Hello %s!", name)
    }

    // Write response
    io.WriteString(w, res)
}

func main() {
    // Create http handler mux
    mux := http.NewServeMux()

    // Set health and hello routes
    mux.HandleFunc("/", health)
    mux.HandleFunc("/hello", hello)

    // Start listening/serving
    fmt.Println("Sample API server listening on 0.0.0.0:8080")
    log.Fatal(http.ListenAndServe("0.0.0.0:8080", mux))
}

Now, we’ll go ahead and get our API ready to deploy by building an appropriate binary for the containers this will run on:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o ./bin/api

This builds a statically linked Linux binary (with cgo disabled), which allows it to run on the very minimal Alpine Docker image.

Deploying to GKE

Now that we have our API, let’s deploy it to our Kubernetes cluster in GKE. However, we don’t actually have a Kubernetes cluster yet, so let’s create one. First we’ll need to enable the Kubernetes Engine API for our project, which, like our earlier setup, can be done easily with a gcloud command:

gcloud services enable container.googleapis.com

This will allow you to use Google’s managed Kubernetes service.

Next, we’ll create a cluster on Google’s Kubernetes Engine with the following command:

gcloud container clusters create sample-gke-api-cluster \
--zone us-central1-a --machine-type g1-small --num-nodes 1

This creates a Kubernetes cluster with the given name sample-gke-api-cluster, in the given zone us-central1-a, with the given machine type g1-small, and with a single node (--num-nodes 1). Since we’re just doing a small project, a small cluster works well for us.

Now that we have our Kubernetes cluster created, we’re ready to start the process of deploying our API. To be able to run our API in our cluster, we’ll first need to create a Docker image. If you’re unfamiliar with Docker, that’s fine; we will cover the basics needed to get through this tutorial. Assuming you installed Docker earlier in this tutorial (or already had it), let’s now build our Docker image:

docker build -f Dockerfile -t gcr.io/sample-gke-api/api:latest .

This tells Docker to build an image based on the provided file (-f Dockerfile) with the given name:tag pair (-t gcr.io/sample-gke-api/api:latest). Our Dockerfile below says we’ll build off the alpine:3.5 base image, add our API binary to the image, and run the API:

FROM alpine:3.5
ADD ./bin/api /
CMD ["/api"]

Now that we have built our API’s Docker image, we need to push our image to a container registry. In our case this will be Google’s Container Registry (GCR).

gcloud docker -- push gcr.io/sample-gke-api/api:latest

This tells gcloud to push our local Docker image gcr.io/sample-gke-api/api:latest to our project’s container registry.

Finally, we will deploy the Docker image we just created to our Kubernetes cluster. To do this, we need to create a Kubernetes resource, in this case a Kubernetes deployment. The deployment is defined in a YAML config file, which in our case looks like this:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  namespace: default
  name: api
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      name: api
      labels:
        name: api
    spec:
      containers:
      - name: api
        image: gcr.io/sample-gke-api/api:latest
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /
            port: 8080
        resources:
          requests:
            cpu: 10m
            memory: 32Mi
          limits:
            cpu: 40m
            memory: 128Mi
        ports:
        - containerPort: 8080
          name: http

There is a lot going on here, but I’ll focus on a few important pieces. If you want to see what everything does, or what else you can do in a deployment, here is a link to the reference.

The first thing I want to point out is the .spec.replicas property. This tells Kubernetes how many pods (instances of your service) with this template should be deployed. The count can also be adjusted after creation by manually scaling the pods up and down. The next important piece is the container definition. You can define any number of containers per pod by listing them in the .spec.template.spec.containers property. Here, we set the name of the container (api) and the Docker image to pull (gcr.io/sample-gke-api/api:latest), tell Kubernetes to always pull the image when initializing the container, define a liveness probe for determining whether the container is healthy, set resource requests and limits, and list the ports we want to expose from the container. Here is a complete list of container configurations.

Now that we have our deployment configuration, let’s use the Kubernetes command line tool (kubectl) to actually create the deployment in our Kubernetes cluster:

kubectl create -f ./api-deployment.yaml

This command creates any given Kubernetes resource; in our case, we’re creating a resource from the file (-f ./api-deployment.yaml).

Make It Externally Accessible

After creating the deployment, we should now have two pods running in our cluster. We can check this by running the following command:

kubectl get pods

This should display two items, each with a name and some other high-level information about the pods.

Awesome! We have an API running in our Kubernetes cluster! Unfortunately, we still can’t really do anything with it in its current state. Before we can receive greetings from our API, we will need to make it externally accessible.

First off, let’s create a Kubernetes service to allow internal cluster traffic to reach our pods. To do this, we will once again create a Kubernetes resource, in this case a Service resource type. Our service resource configuration looks like this:

apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: api
  labels:
    name: api
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    name: api
  type: NodePort

The important piece for now is the .spec.ports property. This is where you list port forwarding rules, allowing traffic from within the cluster (as well as outside it) through the specified ports. In this case, we’re forwarding port 80 on the service to port 8080 on our pods. We’re also telling this service to select all pods carrying the label name: api via the .spec.selector property.

Next we will actually create the service in our Kubernetes cluster:

kubectl create -f ./api-svc.yaml

Now we should have a service exposing our pod internally to the cluster, but let’s check to make sure:

kubectl get services

You should see your API service listed along with its internal IP address and any ports it’s exposing.

We are now one step closer to being able to hit our API from our computer. Next we need to create what is called an ingress. An ingress is a Kubernetes resource which allows you to define what traffic is allowed into your cluster and where it goes. To create an ingress, we’ll once again define a Kubernetes resource; it will look like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: sample-gke-api
  labels:
    apiVersion: v1
    app: sample-gke-api
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: api
          servicePort: 80

This tells Kubernetes to create a resource of type Ingress with the given metadata. On Google Cloud, Kubernetes will by default create a Google L7 load balancer with an ephemeral IP and route traffic based on the rules defined in the spec portion of the config. You can find documentation on how this load balancer is actually created on GCE here. So in our case, all traffic (/*) goes to the Kubernetes service named api on port 80.

Just like the deployment and service, we’ll create this resource in our cluster:

kubectl create -f ./ingress.yaml

It can take a while for the load balancer to be provisioned, let alone for the routes to be listed as healthy and available. If it doesn’t seem to be working right away, give it some time, anywhere from 5 to 10 minutes (maybe even more), before trying to debug further.

After you’ve created the ingress resource in Kubernetes, let’s check whether it actually set up a load balancer, and see what IP address was assigned to our ingress. We can find this out using the following command:

gcloud compute forwarding-rules list

This should return a list of forwarding rules in your Google Cloud project. If the load balancer was set up properly, you should see an entry with a name like k8s-fw-default-sample-gke-api--<some-hash>. To get the IP address, look at the IP_ADDRESS column for that entry.

Awesome! Now you have an API hosted on Google’s GKE service that you can hit externally! Let’s check to see if we can get a greeting from our API:

curl <ingress-ip>/hello

Hopefully you received a response that looked like ‘Hello World!’. You can also pass a name in the querystring: curl '<ingress-ip>/hello?name=Jane' should return ‘Hello Jane!’. If you did not get a response, something went wrong (obviously), and you should go back through this guide and make sure everything was set up properly.

If you’re still having issues, Google provides a tool out of the box for all of your hosted services called Stackdriver. Stackdriver automatically collects and stores console output for you, allowing you to more easily debug your applications. This is a good go-to tool for figuring out what’s wrong with your service (in this case, our API). If this doesn’t help, feel free to reach out via comments, GitHub issues, or Twitter and I’ll try to help!

Recap and What’s Next

If you made it this far, congratulations and thanks for sticking with me! At this point, you should now have a Google Cloud project set up, the tools you need installed locally, a Kubernetes cluster created in your Google Cloud project, an API for greeting people, configurations for managing this API in your Kubernetes cluster, and proper services and ingress rules for allowing external traffic to be properly routed to your API. That’s a lot! However, we’re going to go beyond the basics and set up some API monitoring, as well as SSL for traffic to our cluster. In the next part of this guide, I will walk you through setting up Google’s Endpoints service for your API, which adds API monitoring and a bunch of other goodies out of the box.

If you have any questions, issues, comments, or whatever else, feel free to drop a comment on this post or reach out to me via Twitter!


High Alpha is a venture studio pioneering a new model for entrepreneurship that unites company building and venture capital. To learn more, visit highalpha.com or subscribe to our newsletter.