How I containerised my resume API

mario menti
Google Cloud - Community
15 min read · Aug 10, 2017

…or (This) Idiot’s Guide to running Docker and Kubernetes on Google Cloud Platform

As I am currently looking for interesting new opportunities and am particularly interested in APIs, I created an API for my own CV/resume a few weeks ago, and wrote about it here: https://medium.com/@mariomenti/everything-is-an-api-including-me-and-my-cv-674ea433f283

Since then, I’ve spent some time looking into microservices and containers (namely Kubernetes and Docker), because I felt I wanted to get more familiar with some of these concepts. Hands-on as ever, even if I barely know what I’m doing, I thought it would be fun to describe how I moved my monolithic API into separate microservices, all running on Kubernetes on the Google Cloud Platform.

I was totally new to the concept of containers when I started writing this, so this will hopefully serve as a useful idiot’s guide / starting point for people in a similar situation — but at the same time, I’ve almost certainly done things in a less than optimal (if not outright ugly or even wrong) way, so if any experts read this and feel sick, please comment and let me know where I’m barking up the wrong tree!

Why would I do this? Good question — in my particular case there probably isn’t that much reason, since my CV API is hardly groaning under the load (if it was, I wouldn’t have time to write articles like this!). But imagine this was a production-grade API: at the moment, everything, each endpoint of the API as well as the code that loads the resume data and API tokens, is all contained within one Go program.
If there are changes needed to one endpoint, it means recompiling and redeploying the entire thing. And say one API endpoint (for example the “/contact” endpoint that sends me an email or SMS message) suddenly got hugely popular and the API started to struggle: to scale in the existing setup, I would have to increase the capacity of the entire Compute Engine instance I’m running the API on.
By contrast, in a containerised world, because each API endpoint runs as its own microservice, I will be able to scale up the “/contact” endpoint independently, while leaving every other aspect of the API unchanged.
So you suddenly have much more fine-grained control over the different parts of your application.
Similarly, if an update is required to one API endpoint, it can be deployed independently of the rest of the API, without any downtime. In addition, Kubernetes is self-healing, so if the so-called pods that run an application go down, Kubernetes automatically replaces them.
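(Once the cluster is up, you can see this self-healing for yourself: delete a pod by hand and Kubernetes immediately schedules a replacement. The pod name below is just an example taken from later in this post.)

$ kubectl delete pod resumeserver-3871548837-8tf9w
$ kubectl get pods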

So, without further ado, the pre-requisites (I’m working on an Ubuntu Linux laptop, and details may vary depending on the OS you are on):

  • a Google Cloud Platform account and project
  • the Google Cloud SDK (the gcloud command-line tool)
  • Docker
  • kubectl (you can install it with gcloud components install kubectl)

Good? Good. The first thing to do is to create a Kubernetes cluster to run our applications. I’m going with the default here, which creates a 3-machine cluster:

$ gcloud container clusters create marioapi-cluster

This will take a couple of minutes — once the cluster has been created, authenticate the Google Cloud SDK using your Google account by typing:

$ gcloud auth application-default login
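If kubectl isn’t picking up the new cluster automatically, you can also fetch the cluster’s credentials explicitly, which configures kubectl to talk to it:

$ gcloud container clusters get-credentials marioapi-cluster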

Let’s take a step back now and look at my existing API setup, and how we want to transform it into a collection of microservices. It’s very simple, and contains these parts:

  • a function that reads my resume/CV in JSON format
  • a function that reads a JSON file containing the valid API token data, and checks the tokens passed to the API endpoints for validity
  • the gorilla/mux handlers for the different API endpoints:
r.HandleFunc("/full", api.FullCVHandler).Methods("GET")
r.HandleFunc("/summary", api.SummaryHandler).Methods("GET")
r.HandleFunc("/contact", api.ContactHandler).Methods("GET")
r.HandleFunc("/contact", api.ContactPostHandler).Methods("POST")
r.HandleFunc("/experience", api.ExperienceHandler).Methods("GET")
r.HandleFunc("/experience/{id}", api.ExperienceIdHandler).Methods("GET")
r.HandleFunc("/projects", api.ProjectsHandler).Methods("GET")
r.HandleFunc("/projects/{id}", api.ProjectsIdHandler).Methods("GET")
r.HandleFunc("/tags", api.TagsHandler).Methods("GET")
r.HandleFunc("/tags/{tag}", api.TagsTagHandler).Methods("GET")

All this is contained in the one Go program file. So to pick this apart, what I’m trying to do is to create these separate microservices:

  • a resume-server, which loads my CV and serves it in JSON format,
  • a token-server, which loads the valid API token definitions and checks them against what is sent by a client in an API call,
  • a separate microservice for each API endpoint (e.g. full-endpoint, summary-endpoint, contact-endpoint etc.),
  • and finally an nginx (web server) service that handles load balancing and passes requests to the relevant internal service.

Let’s look at the resume-server first. This is about as simple (and unrealistic) as it gets — a “real” API would of course have a datastore/database of some kind in place, but for the purposes of this demo, I’m just reading my resume/CV data from a JSON file.

package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
)

// LoadResume reads the resume JSON from disk, falling back to an
// empty JSON object if the file can't be read.
func LoadResume() string {
    file, e := ioutil.ReadFile("/resume.json")
    if e != nil {
        return "{}"
    }
    return string(file)
}

func serve(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, LoadResume())
}

func main() {
    http.HandleFunc("/", serve)
    err := http.ListenAndServe(":80", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}

We next need to create a docker container for this, so we can then run it on Kubernetes. Again, the Dockerfile is simplicity itself:

FROM golang:1.6-onbuild
COPY resume.json /

… the COPY line makes sure we copy the resume.json file to the container, so the pod application can read it. Build the docker container like so:

$ docker build -t gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/resume-server:1.0 .

You can get the value you should use in place of <YOUR-GOOGLE-CLOUD-PROJECT> by typing…

$ gcloud config get-value project

Whenever you see <YOUR-GOOGLE-CLOUD-PROJECT> in any examples here, replace it with your actual Google Cloud project ID. Once the docker container has been built, we need to push it to Google Cloud:

$ gcloud docker -- push gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/resume-server:1.0
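At this point you can also sanity-check the image locally before deploying it, by mapping an arbitrary local port (8080 here) to the container’s port 80:

$ docker run --rm -p 8080:80 gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/resume-server:1.0
$ curl http://localhost:8080/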

Next, we want to deploy the application to Kubernetes. To do this, we create a deployment.yaml file for the app:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: resumeserver
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: resumeserver-pods
    spec:
      containers:
      - image: gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/resume-server:1.0
        name: resumeserver-container
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: http-server

In this example, we always want to run 2 instances of the resumeserver, but if we wanted to be more fault-tolerant, we could specify a higher number of replicas. As I mentioned above, Kubernetes will make sure the desired number of pods is always running, and restart them as required.

Deploy the application:

$ kubectl apply -f deployment.yaml

This creates 2 running pods in Kubernetes, and you can see details via these commands — the first lists the deployments on your cluster, the second the pods (as we specified, there are 2 pods running for our resumeserver deployment).

$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
resumeserver   2         2         2            2           2m
$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
resumeserver-3871548837-8tf9w   1/1       Running   0          1m
resumeserver-3871548837-ks0lk   1/1       Running   0          2m

So far so good, but in order to be able to communicate with these running pods, we need to create a named service. To do this, we use this service.yaml definition:

apiVersion: v1
kind: Service
metadata:
  name: resumeserver
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: resumeserver-pods

This defines a service called “resumeserver”, and it includes all pods that are running with the name “resumeserver-pods” (we specified this in the deployment.yaml file above). To create the service, run…

$ kubectl create -f service.yaml

We can now check and see if our service is running:

$ kubectl get svc
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes     10.23.240.1     <none>        443/TCP   13m
resumeserver   10.23.243.143   <none>        80/TCP    33s

You can see resumeserver is there, and has been allocated an internal IP address. The nice thing is that Kubernetes has a built-in DNS service (more below), so from now on, when we want to connect to it from a different pod within our cluster, we’ll be able to refer to our resumeserver service by name.
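Within the same namespace the short service name is enough, and the fully qualified form (which appears again in the nginx config later) works too:

http://resumeserver
http://resumeserver.default.svc.cluster.local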

OK, we now have the resume-server part up and running, and setting up the token-server is very similar, so I won’t go into details here, but all the files are on GitHub. Essentially, for each service we want to create, we follow these steps:

  • create the docker container
  • push the container to Google Cloud
  • run/apply the deployment
  • create the service

So after we created the tokenserver, following the above steps, we see the resumeserver and tokenserver both running on our cluster:

$ kubectl get svc
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes     10.23.240.1     <none>        443/TCP   30m
resumeserver   10.23.243.143   <none>        80/TCP    17m
tokenserver    10.23.244.115   <none>        80/TCP    7s

Next, let’s create one of the API endpoints. The simplest is the “/full” endpoint, which essentially just returns the entire resume/CV in JSON format. Some of the other endpoints are slightly more complex (but not much, this is a very simple demo :))

If you look at full.go, you can see that we refer to the tokenserver and resumeserver:

func LoadResume() (resumeData Resume, loadedOk bool) {

    var res Resume

    // fetch the full resume JSON from the internal resumeserver service
    rsp, err := http.Get("http://resumeserver")
    if err != nil {
        return res, false
    }
    defer rsp.Body.Close()
    bodyByte, err := ioutil.ReadAll(rsp.Body)
    if err != nil {
        return res, false
    }

    err = json.Unmarshal(bodyByte, &resumeData)
    if err != nil {
        return res, false
    }
    return resumeData, true
}

func serve(w http.ResponseWriter, r *http.Request) {

    // check token, load resume, return relevant part(s)
    token := r.FormValue("token")
    rsp, err := http.Get("http://tokenserver?token=" + token)
    if err != nil {
        WriteApiError(w, 110, "Error checking token from token server")
        return
    }
    defer rsp.Body.Close()
    bodyBytes, err := ioutil.ReadAll(rsp.Body)
    [...]

Because we created the services called “resumeserver” and “tokenserver”, we can now access them using their names (e.g. “http://resumeserver”) — Kubernetes’ internal DNS service takes care of that. Note this is internal only: these pods are not accessible from the outside world (and neither do we want them to be), only from other pods within our cluster. In this demo we access them via HTTP, but if they were (say) Redis or MySQL servers instead of HTTP servers, it would work in pretty much the same way.
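One detail glossed over above: the serve handler calls a WriteApiError helper that isn’t shown here. A minimal sketch of what such a helper could look like (the JSON field names and status code are my assumption; see the GitHub repo for the real thing):

func WriteApiError(w http.ResponseWriter, errCode int, errText string) {
    // sketch only: emit a JSON error object (field names assumed)
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusBadRequest)
    json.NewEncoder(w).Encode(map[string]interface{}{
        "error_code": errCode,
        "error_text": errText,
    })
}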

To create the full-endpoint service, we go through the same steps again:

  • create the docker container:
$ docker build -t gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/full-endpoint:1.0 .
  • push the container to Google Cloud:
$ gcloud docker -- push gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/full-endpoint:1.0
  • create/apply the deployment:
$ kubectl apply -f deployment.yaml
  • create the service:
$ kubectl create -f service.yaml

Once we’ve done this, we can see the new endpoint (full-endpoint) in the list of services:

$ kubectl get svc
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
full-endpoint   10.23.241.128   <none>        80/TCP    37s
kubernetes      10.23.240.1     <none>        443/TCP   45m
resumeserver    10.23.243.143   <none>        80/TCP    33m
tokenserver     10.23.244.115   <none>        80/TCP    15m

The detailed code of each API endpoint is slightly different (you can see all the files on GitHub), but we go through the exact same process for each endpoint, until we end up with our cluster containing a service for each API endpoint:

$ kubectl get svc
NAME                  CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
contact-endpoint      10.23.244.122   <none>        80/TCP    4m
experience-endpoint   10.23.252.142   <none>        80/TCP    2m
full-endpoint         10.23.241.128   <none>        80/TCP    14m
kubernetes            10.23.240.1     <none>        443/TCP   59m
projects-endpoint     10.23.244.164   <none>        80/TCP    1m
resumeserver          10.23.243.143   <none>        80/TCP    46m
summary-endpoint      10.23.249.134   <none>        80/TCP    8m
tags-endpoint         10.23.246.242   <none>        80/TCP    7s
tokenserver           10.23.244.115   <none>        80/TCP    29m

We can also check on our deployments and pods (unlike the resumeserver, the other services run with just 1 replica each, but this could of course be scaled as appropriate):

$ kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
contact-endpoint      1         1         1            1           5m
experience-endpoint   1         1         1            1           3m
full-endpoint         1         1         1            1           15m
projects-endpoint     1         1         1            1           2m
resumeserver          2         2         2            2           56m
summary-endpoint      1         1         1            1           9m
tags-endpoint         1         1         1            1           1m
tokenserver           1         1         1            1           30m
$ kubectl get pods
NAME                                   READY     STATUS    RESTARTS   AGE
contact-endpoint-3271379398-6614g      1/1       Running   0          5m
experience-endpoint-4000525602-3qsw6   1/1       Running   0          3m
full-endpoint-960439747-wrch5          1/1       Running   0          15m
projects-endpoint-1973496552-q1p38     1/1       Running   0          2m
resumeserver-3871548837-8tf9w          1/1       Running   0          54m
resumeserver-3871548837-ks0lk          1/1       Running   0          56m
summary-endpoint-4173081044-2lf30      1/1       Running   0          8m
tags-endpoint-3958056375-vtpdv         1/1       Running   0          1m
tokenserver-4152829013-12w2j           1/1       Running   0          30m

Psssst… wanna know a secret? Before we go to the final part (actually exposing all these services to the outside world), let’s take a look at the “/contact” endpoint. This one is more interesting than the others (which TBH doesn’t say much) because it contains some code that sends SMS messages via Twilio, and emails via SendGrid. Not *that* interesting you say — but the interesting part here is figuring out how we best pass “secret” information like API keys to a container on Kubernetes, with the least risk of exposing this confidential data. Luckily Kubernetes has a concept of a Secret, which makes this very easy and secure.

To pass secret information to a Kubernetes pod/deployment, we first create a Secret that contains our confidential API information. Here’s my secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: contact-secrets
type: Opaque
data:
  twiliosid: <base64-encoded version of your string>
  twiliotoken: <base64-encoded version of your string>
  twiliourl: <base64-encoded version of your string>
  twilionumber: <base64-encoded version of your string>
  alertnumber: <base64-encoded version of your string>
  sendgridkey: <base64-encoded version of your string>
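
To produce the base64-encoded values, you can pipe each string through base64 (the -n flag matters: it stops echo adding a trailing newline that would end up inside the encoded value):

$ echo -n 'your-twilio-sid' | base64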

Then we create the Secret by running:

$ kubectl create -f ./secrets.yaml

(and obviously don’t make the secrets.yaml file public in any way, e.g. by checking into source control or similar…)

Now, when we create the deployment for our contact-endpoint service, we can refer to the Secret we named “contact-secrets”, and pass on these values as environment variables:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: contact-endpoint
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: contact-endpoint-pods
    spec:
      containers:
      - image: gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/contact-endpoint:1.0
        name: contact-endpoint-container
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: http-server
        env:
        - name: TWILIO_SID
          valueFrom:
            secretKeyRef:
              name: contact-secrets
              key: twiliosid
        - name: TWILIO_TOKEN
          valueFrom:
            secretKeyRef:
              name: contact-secrets
              key: twiliotoken
        - name: TWILIO_URL
          valueFrom:
            secretKeyRef:
              name: contact-secrets
              key: twiliourl
        - name: TWILIO_NUMBER
          valueFrom:
            secretKeyRef:
              name: contact-secrets
              key: twilionumber
        - name: ALERT_NUMBER
          valueFrom:
            secretKeyRef:
              name: contact-secrets
              key: alertnumber
        - name: SENDGRID_KEY
          valueFrom:
            secretKeyRef:
              name: contact-secrets
              key: sendgridkey

And in the contacts.go program in the pod we just deployed, we can easily pick up these environment variables:

var (
    // Twilio (for SMS) and SendGrid (for email) config
    twilioSid    string = os.Getenv("TWILIO_SID")
    twilioToken  string = os.Getenv("TWILIO_TOKEN")
    twilioUrl    string = os.Getenv("TWILIO_URL")
    twilioNumber string = os.Getenv("TWILIO_NUMBER")
    alertNumber  string = os.Getenv("ALERT_NUMBER")
    sendGridKey  string = os.Getenv("SENDGRID_KEY")
)

This is pretty neat, right? ;)

OK, so now to the final part — we have all these services running, but nothing can access them. So what we need is something that is exposed to the outside world, and that can handle our API requests and pass them on to the relevant service to be processed. Enter nginx (of course).

Again, this is simple in the extreme — we just create a custom nginx.conf, and then create a docker container that uses it. The contents of nginx.conf look like this:

resolver 10.23.240.10 valid=5s;

upstream summary-endpoint {
    server summary-endpoint.default.svc.cluster.local;
}
upstream full-endpoint {
    server full-endpoint.default.svc.cluster.local;
}
upstream projects-endpoint {
    server projects-endpoint.default.svc.cluster.local;
}
upstream experience-endpoint {
    server experience-endpoint.default.svc.cluster.local;
}
upstream tags-endpoint {
    server tags-endpoint.default.svc.cluster.local;
}
upstream contact-endpoint {
    server contact-endpoint.default.svc.cluster.local;
}

server {
    listen 80;

    root /usr/share/nginx/html;

    location / {
        index index.html index.htm;
    }

    location /summary/ {
        proxy_pass http://summary-endpoint/;
    }
    location /full/ {
        proxy_pass http://full-endpoint/;
    }
    location /projects/ {
        proxy_pass http://projects-endpoint/;
    }
    location /experience/ {
        proxy_pass http://experience-endpoint/;
    }
    location /tags/ {
        proxy_pass http://tags-endpoint/;
    }
    location /contact/ {
        proxy_pass http://contact-endpoint/;
    }
    location = /summary {
        proxy_pass http://summary-endpoint/;
    }
    location = /full {
        proxy_pass http://full-endpoint/;
    }
    location = /projects {
        proxy_pass http://projects-endpoint/;
    }
    location = /experience {
        proxy_pass http://experience-endpoint/;
    }
    location = /tags {
        proxy_pass http://tags-endpoint/;
    }
    location = /contact {
        proxy_pass http://contact-endpoint/;
    }
}

On the first line, we specify the DNS resolver we want to use — this may be a different value in your case; to get the value you should use, run:

$ kubectl get services kube-dns --namespace=kube-system
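The value you want is in the CLUSTER-IP column; in this cluster the output looks something like this:

NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.23.240.10   <none>        53/UDP,53/TCP   1h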

The rest is pretty self-explanatory — each service is known by its service name, so we simply proxy requests to different locations to different services.

The Dockerfile to create the container:

FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf

Let’s build and push our container:

$ docker build -t gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/my-nginx:1.0 .
$ gcloud docker -- push gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/my-nginx:1.0

Next we create the deployment (this is again pretty much identical to the above services, I think you’re starting to see a pattern), and then create the service. This is where you see one difference to the services we previously created:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    name: nginx-pods

Unlike in previous service definitions, here we specify the type as “LoadBalancer”. This will give the service an external IP address. So after running “kubectl create -f service.yaml”, you should see something along these lines:

$ kubectl get svc
NAME                  CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
contact-endpoint      10.23.244.122   <none>           80/TCP         48m
experience-endpoint   10.23.252.142   <none>           80/TCP         46m
full-endpoint         10.23.241.128   <none>           80/TCP         58m
kubernetes            10.23.240.1     <none>           443/TCP        1h
nginx                 10.23.252.234   35.189.117.234   80:30543/TCP   1m
projects-endpoint     10.23.244.164   <none>           80/TCP         45m
resumeserver          10.23.243.143   <none>           80/TCP         1h
summary-endpoint      10.23.249.134   <none>           80/TCP         53m
tags-endpoint         10.23.246.242   <none>           80/TCP         44m
tokenserver           10.23.244.115   <none>           80/TCP         1h

(It may take some time for the external IP address to appear, if it says “<pending>”, just re-run the command until you can see the external IP address.)
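Alternatively, you can let kubectl watch the service and print updates as they happen:

$ kubectl get svc nginx --watch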

So now that our nginx service has an external IP address, we are ready to make some requests to the API… for example, call the /tags/golang endpoint:

$ curl http://35.189.117.234/tags/golang?token=public
{"Projects":[{"id":1,"name":"Gigfinder bot (Facebook Messenger bot)","summary":"Wondering when and where your favourite band is playing live next? Or just want to see what gigs are on tonight in your city? Message me and I'll tell you!","url":"https://www.facebook.com/gigfinderbot/","tags":["songkick","facebook","golang","supervisord","nginx","amazon","simpledb","redis"]},{"id":2,"name":"Songkick Alert bot (Facebook Messenger bot)","summary":"Say hello to the Songkick Alerts bot on Facebook Messenger to set up your Songkick alerts. Once set up and activated, the bot will send you instant notifications on Messenger for any newly announced shows by the Songkick artists you're tracking.","url":"https://www.facebook.com/songkickalerts/","tags":["songkick","last.fm","facebook","golang","supervisord","nginx","amazon","simpledb"]},{"id":4,"name":"Slack gig attendance bot (Slack integration)","summary":"A Slack bot that checks users' Songkick event attendance and automatically shares these to a Slack channel of your choice. As an example, we have a specific #gig Slack channel where notifications of the shows/gigs that users are planning to go to are being posted as soon as a user marks their attendance on Songkick. Includes artist images retrieved via the last.fm API.","url":"https://github.com/mmenti/songslack","tags":["songkick","slack","last.fm","golang","supervisord","nginx","simpledb"]},{"id":5,"name":"Slack Gigfinder bot (Slack bot)","summary":"Directly search for upcoming shows of an artist from within Slack, powered by the Songkick API.","url":"http://blog.songkick.com/2015/11/09/slack-magic/","tags":["songkick","slack","golang","supervisord","nginx"]}],"Experience":[{"id":1,"name":"Awne","dates":"2016 - 2017","location":"London, UK","job_title":"Technical co-founder","summary":"Technical co-founder (with two others) of Awne, a personal relationship assistant. Built a platform and API that lets us create content and deliver that content to users from one unified back-end to multiple potential delivery channels (messaging, bots, web, apps etc).\n Developed a Facebook Messenger bot to ask users daily questions, record user answers, and deliver tips and updates into their personal Awne home.\n Created prototypes/ proof-of-concepts for delivering Awne content (and collecting user data) via different channels (e.g. messaging, SMS, web, Amazon Alexa) via the same API-powered back-end/platform.\n Both API/platform and Messenger bots built using golang, running through supervisord and nginx.","tags":["golang","api","redis","wit.ai","awne","facebook","supervisord","nginx","mysql","amazonec2","amazonrds"]}]

POST to the /contact endpoint to send me an email:

$ curl -X POST -F 'channel=email' -F 'message=test email message dude!!' -F "from=test@test.net" -F "token=public" http://35.189.117.234/contact/ 
{"success_code":202,"success_text":"Email successfully sent, thanks so much!"}

And that’s it! (I think.) Our API is now fully containerised, yay!

Once everything is running, and you want to scale up or down a particular microservice, it’s as easy as running the kubectl scale command:

$ kubectl scale deployment contact-endpoint --replicas=5

This will create 5 replicas of the pod running the /contact API endpoint. Scaling down is the same, just replace 5 with a lower number (if you specify replicas=0, there will be no pods left running the app).
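You can confirm the new replica count on the deployment:

$ kubectl get deployment contact-endpoint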

You can even auto-scale a deployment depending on CPU demand:

$ kubectl autoscale deployment contact-endpoint --min=1 --max=5 --cpu-percent=80
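Behind the scenes this creates a HorizontalPodAutoscaler object, which you can inspect with:

$ kubectl get hpa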

To update a service, without any downtime at all, simply make changes to your program, content and/or Dockerfile, then go through the steps of building the docker container, pushing it to Google Cloud, and applying the deployment.yaml with the new container image (no need to do anything to the service). This will replace the running pods in the deployment with the new versions.
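For example, a sketch of rolling out a hypothetical version 1.1 of the full-endpoint service (remember to bump the image tag in deployment.yaml to match):

$ docker build -t gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/full-endpoint:1.1 .
$ gcloud docker -- push gcr.io/<YOUR-GOOGLE-CLOUD-PROJECT>/full-endpoint:1.1
$ kubectl apply -f deployment.yaml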

If you screw up when updating a deployment, you can quickly roll back to the previous version:

$ kubectl rollout undo deployment/nginx
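You can also list the revisions a deployment has gone through, and watch a rollout complete:

$ kubectl rollout history deployment/nginx
$ kubectl rollout status deployment/nginx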

There’s of course plenty more stuff you can do with Kubernetes deployments, and I barely scratched the surface — if you want to know more, see the docs here: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/.

Ouch, this has turned into a bit of a monster of a post — if you’re still with me, I hope it proved useful in giving a hands-on walkthrough of some of the concepts in Docker and Kubernetes. As I said before, I’m very new to this, so no doubt there’s some clangers in here — if you spot any, please let me know and I’ll update the post!
