A Cloud-Native API Part 2: Google Endpoints

Alex Hester
High Alpha
Dec 4, 2018 · 7 min read
Kubernetes x Endpoints x Let’s Encrypt

In the first part of this guide (which you can find here), we created our Hello World API on Google’s Kubernetes Engine (GKE) and got to the point where we could hit the API externally. As I mentioned in the previous post, this is a great start and is already enough for you to begin experimenting with adding other services and capabilities. However, there are still a number of features worth adding before we consider this production-ready.

The next piece I want to cover is adding traffic monitoring to our API, as well as setting up a good way to enforce rules about how it’s used. Fortunately, Google provides an awesome service called Google Endpoints that does pretty much all of this out of the box.

If you’re following along with the GitHub project for this tutorial, you can find the updated files for this part of the series here. Note that there are a few changes to some of the files created in the previous part of this tutorial, though none of them should be critical.

What Is It?

Google Endpoints is a service that lets you monitor your API and easily configure and enforce rules for it. The offering itself is actually pretty simple: you put Google’s Extensible Service Proxy (ESP for short) in front of your actual API and configure it to forward traffic to your API. This lets you monitor traffic to your API, see error rates, and even set up alerting. Along with monitoring, Google Endpoints can automatically enforce a number of rules you set up in an OpenAPI 2.0 spec compliant config. So all in all, it’s a pretty nice set of capabilities right out of the box.

The Setup

Before we get started creating the OpenAPI config or making any other changes, let’s make sure the Google Endpoints service is enabled for our project. Assuming you followed the first part of this guide and have the Google Cloud SDK (gcloud) installed on your computer, enabling this service is easily done with the following command:

gcloud services enable endpoints.googleapis.com

This tells Google to enable the Endpoints service for our project.
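If you want to double-check that it worked, you can list the project’s enabled services and look for Endpoints:

gcloud services list --enabled | grep endpoints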

The OpenAPI Config

Now that we have enabled Google Endpoints for our project, we can create our OpenAPI 2.0 spec compliant config. The OpenAPI config essentially tells the Endpoints proxy how to handle traffic to our API: which routes are valid and which aren’t, what security to apply (even per route), and how to handle rate-limiting and quotas. Swagger has a nice online editor for creating these configs here. Google’s Endpoints service has a few limitations, as well as some extensions, for defining your OpenAPI config that are worth looking over when making your own API. For our purposes, however, we’ll use the following config:

swagger: '2.0'
info:
  title: Sample GKE API
  description: Sample API project on GKE
  version: v1.0.0
host: api.endpoints.sample-gke-api.cloud.goog
schemes:
  - http
  - https
paths:
  /:
    get:
      summary: Health Check
      operationId: healthCheck
      description: Used to determine if API is healthy
      responses:
        '200':
          description: ok
  /hello:
    get:
      summary: Hello World
      operationId: sayHello
      description: Returns a greeting to given name, or world
      parameters:
        - name: name
          in: query
          description: The name to greet
          required: false
          type: string
      responses:
        '200':
          description: Returns a 200 with greeting.

In this config, we’re setting the spec version to 2.0 and providing some required basic info like the title and API version. We then define our host (this is mainly for Google’s purposes and is not the actual host you will use to hit the API), as well as the supported schemes for the API. Finally, we list our supported routes. Each route can define behavior for each HTTP method, and each method needs to contain at least an operationId. Note that we are listing our health check route, which is necessary for our ingress’s health checks to pass.
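As a concrete example of the rule enforcement mentioned earlier (we won’t use it in this tutorial), requiring an API key on every route takes only a few extra lines in this same file. Here’s a sketch following Google’s documented pattern of passing the key as a key query parameter:

securityDefinitions:
  api_key:
    type: apiKey
    name: key
    in: query
security:
  - api_key: []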

After we have defined our OpenAPI config, we need to deploy the config to our project’s Endpoints service. To do this, we’ll use the following command:

gcloud endpoints services deploy ./openapi.yaml

This will create a versioned deployment of our config in our project, which our ESP container can then pull down. However, to sync up the ESP with the deployed config, we’ll need the id that the above command generates.
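If you ever lose track of that id, you can list the deployed configs for the service at any time:

gcloud endpoints configs list --service=api.endpoints.sample-gke-api.cloud.goog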

Update Kubernetes Configs and API

Once the service has been successfully deployed, grab the id that it logs; we’ll need it shortly. Now we’ll need to update our API’s deployment config ./api-deployment.yaml to add a container for Google’s ESP. The ESP ships as a Docker image, so it’s fairly easy to set up. All we need to do is add a container definition to the containers list, like we did for the API. The updated containers list should look like:

containers:
  - image: ... # our API image
  - image: gcr.io/endpoints-release/endpoints-runtime:1.5
    name: esp
    args: [
      "-p", "8080",
      "-a", "0.0.0.0:8081",
      "-s", "api.endpoints.sample-gke-api.cloud.goog",
      "-v", "<openapi-deploy-id>",
    ]
    resources:
      requests:
        cpu: 20m
        memory: 32Mi
      limits:
        cpu: 40m
        memory: 64Mi
    ports:
      - containerPort: 8080

This new container definition should look very similar to our API’s. The important parts here are the args. The first arg, -p, sets the port the ESP listens on, in this case 8080. Next is the -a flag, which tells the ESP where to forward traffic; since containers in a pod share a network namespace, 0.0.0.0:8081 will reach our API container once we move it to port 8081 below. The -s flag tells the ESP the name of the API host, which should match the host you set in your OpenAPI config. The last flag we’re setting is -v, which tells the ESP the id of the OpenAPI config deployment to pull and use; this is where we’ll put the deployment id we grabbed from the deploy command earlier.
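As an aside, if you’d rather not pin a specific config id, the ESP also supports a managed rollout strategy that pulls the latest deployed config automatically. A hedged sketch of the alternative args (replacing the -v flag; check Google’s ESP docs for your ESP version before relying on it):

args: [
  "-p", "8080",
  "-a", "0.0.0.0:8081",
  "-s", "api.endpoints.sample-gke-api.cloud.goog",
  "--rollout_strategy", "managed",
]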

After we have added this container definition to our API’s deployment config, we need to update the deployment in our Kubernetes cluster for the change to take effect. Before we do that, though, there’s one more important change: the port our API listens on. Currently our API is listening on :8080, but we now want the ESP to handle traffic on :8080, so let’s change the port our API listens on to :8081:

api.go (ln 36–38)

// Start listening
fmt.Println("Sample API server listening on 0.0.0.0:8081")
http.ListenAndServe("0.0.0.0:8081", mux)
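For context, here’s a minimal sketch of what the full main function might look like after this change; the handler bodies and the exact greeting format are my assumptions, not necessarily the code from part one:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Health check route ("/" in the OpenAPI config). ServeMux treats "/"
	// as a catch-all, but the ESP only forwards routes defined in the
	// OpenAPI config anyway.
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Greeting route ("/hello" in the OpenAPI config).
	mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		name := r.URL.Query().Get("name")
		if name == "" {
			name = "world"
		}
		fmt.Fprintf(w, "Hello, %s!", name)
	})

	// Start listening
	fmt.Println("Sample API server listening on 0.0.0.0:8081")
	log.Fatal(http.ListenAndServe("0.0.0.0:8081", mux))
}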

api-deployment.yaml (ln 19–39)

- image: gcr.io/sample-gke-api/api:latest
  name: api
  imagePullPolicy: Always
  livenessProbe:
    httpGet:
      path: /
      port: 8081
  readinessProbe:
    httpGet:
      path: /
      port: 8081
  resources:
    requests:
      cpu: 10m
      memory: 32Mi
    limits:
      cpu: 40m
      memory: 128Mi
  ports:
    - containerPort: 8081
      name: http

Once you have updated both the API and its container definition to listen on and expose port 8081, we’ll need to rebuild the API’s Docker image and push it to our Docker repository:

First, we’ll build the API binary:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o ./bin/api

Next, we’ll build our Docker image:

docker build -f Dockerfile -t gcr.io/sample-gke-api/api:latest .

Finally, we’ll push the image to GCR:

gcloud docker -- push gcr.io/sample-gke-api/api:latest
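If you want to confirm the new image landed in GCR before redeploying, you can list the image’s tags:

gcloud container images list-tags gcr.io/sample-gke-api/api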

Re-Deploy API with New Configs

OK, now that we’ve deployed our OpenAPI config, updated our API’s Kubernetes deployment config, changed the API’s port, and rebuilt its Docker image, we’re finally ready to redeploy the API with the new ESP container.

To update our API, we can redeploy using the following command:

kubectl apply -f ./api-deployment.yaml

Kubectl’s apply command will update the deployment config in the cluster. If anything changed, it will begin restarting the corresponding pods, making sure they are up to date with the latest deployment.
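If you’d like to watch the rollout complete rather than polling pods by hand, kubectl can do that too; the deployment name below is a placeholder for whatever you named your deployment in part one:

kubectl rollout status deployment/<api-deployment-name>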

Now let’s check to make sure our pods restarted:

kubectl get pods

You should see the API pod with a status of Running, and its age should be pretty recent (roughly however long ago you ran the apply).

Now you can try hitting your API again. If you go to the /hello route, you should get the same result as before, but if you hit any other route, the ESP will automatically return an error saying the route doesn’t exist. You can then check your Endpoints dashboard in the Google Cloud Console, where you’ll see the calls you just made.
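A quick way to exercise both cases from the command line (where <external-ip> is a placeholder for your ingress’s external IP from part one):

# Defined route: the ESP forwards the request to our API
curl "http://<external-ip>/hello?name=Alex"

# Undefined route: the ESP rejects it before it ever reaches the API
curl "http://<external-ip>/goodbye"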

Now that you have Endpoints set up and working, you can experiment with all of its features, such as automatically handling authentication schemes. I highly recommend looking into Endpoints’ full set of capabilities and trying to make good use of them; at High Alpha, we have found it to be a really nice tool with which to build an API.

Recap and What’s Next

Once again, let’s take a second to step back and appreciate what we have accomplished so far: a Google Cloud project, a Kubernetes cluster, a simple API running on that cluster, ingress rules set up to allow traffic into our cluster and route it to our API pods, and now real-time traffic monitoring as well as rule enforcement. That’s pretty good, eh? The last piece, which we’ll cover in the next part of this guide, will be setting up self-managed SSL for all traffic into our cluster using Let’s Encrypt.

Once again, if you have any questions, issues, or comments, feel free to drop a comment on this post or reach out to me via Twitter!

High Alpha is a venture studio pioneering a new model for entrepreneurship that unites company building and venture capital. To learn more, visit highalpha.com or subscribe to our newsletter.
