Code Cooking: Kubernetes

Christiaan Hees
Google Cloud - Community
5 min read · Jan 25, 2017

Welcome to Code Cooking with chef Christiaan!

Today we’re going to prepare a nice web burger. It will have a base of HTML, served on nginx by Kubernetes. Burgers are often associated with fast food, so we are going to make sure it’s ready to eat in no-time.

Børk! Børk! Børk!

Let’s start with gathering our ingredients and tools. We need these tools installed in our local kitchen:

  • gcloud (the Google Cloud SDK)
  • kubectl
  • Docker
  • ab (ApacheBench, part of apache2-utils)
  • jq

I assume you have at least a bit of experience with these tools. If not: read some docs and familiarize yourself with the basics, or dive right in if you feel adventurous!

Now, my favourite way to serve code is the Google Cloud Platform. To prepare, I create a project on console.cloud.google.com called code-cooking. Today I feel like using Kubernetes. Kubernetes will manage our Docker containers for us. Let’s spin up a nice cluster to get started:

gcloud container clusters create code-cooking

This gives us 3 servers from Google on which we can run our burger app.

While those brand new servers are warming up, we prepare the code. For that we create static/index.html and press a few keys to make it look like this:

<!DOCTYPE html>
<html>
<body>
<h1>Best burger in town!</h1>
</body>
</html>

Doesn’t that look delicious already? No, it does not! You can’t serve raw food like that. Let’s put it in an nginx. Take a new file, call it nginx.conf and type away:

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html;
        expires 1h;
        add_header Cache-Control "public";
    }
}
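Those last two lines are the secret sauce for later: they tell browsers (and, eventually, a CDN) that the page may be cached for an hour. A response from this server should carry headers roughly like these (date and exact values are illustrative; nginx’s expires directive emits the max-age, and our add_header tacks on public):

```http
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html
Expires: Wed, 25 Jan 2017 13:00:00 GMT
Cache-Control: max-age=3600
Cache-Control: public
```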

While we’re in hyper typing mode anyway, let’s create a Dockerfile as well:

FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY static /usr/share/nginx/html

Wow, we already have a cluster and 3 files: Dockerfile, nginx.conf and static/index.html. This is going so fast!

On to the next part. It’s time to let Docker do its thing:

docker build -t eu.gcr.io/code-cooking/burger:0.1 .

This makes Docker build a container for us with our burger app inside. We tag the container with a name and version, so we’ll be able to find it later on.
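That tag is really an address: it encodes the registry host (eu.gcr.io, the European registry), the project (code-cooking), the image name and the version. Plain POSIX shell can slice it up like any other string, which is handy if a script ever needs to bump the version or switch registries:

```shell
# The image tag from above, picked apart with POSIX parameter expansion:
TAG="eu.gcr.io/code-cooking/burger:0.1"
HOST=${TAG%%/*}       # everything before the first slash: eu.gcr.io
VERSION=${TAG##*:}    # everything after the last colon: 0.1
NAME=${TAG%:*}        # drop the version...
NAME=${NAME##*/}      # ...then keep what's after the last slash: burger
echo "$HOST $NAME $VERSION"
```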

Let that cook for 3 seconds and then push it into the Google Container Registry:

gcloud docker -- push eu.gcr.io/code-cooking/burger:0.1

Yum, can you smell that? I’m sure your guests can’t wait to check out your latest creation. We can use our fancy tool kubectl to set up our servers:

kubectl run burger --image=eu.gcr.io/code-cooking/burger:0.1 --port=80 --replicas=3
kubectl expose deployment burger --port=80 --type=LoadBalancer

This creates 3 replicas of our burger app, runs them on our cluster and exposes them to the world through a load balancer on port 80.
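Under the hood those two commands are shorthand for a Deployment and a Service. A rough sketch of the YAML they generate (using the API versions of the day; the exact output of kubectl may differ in detail):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: burger
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: burger
    spec:
      containers:
      - name: burger
        image: eu.gcr.io/code-cooking/burger:0.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: burger
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: burger
```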

Starting up your burger apps will be super fast, but creating a load balancer the first time can take a minute or so. Keep watching our burger service to see when that lazy load balancer finally gets an external IP:

kubectl get service burger --watch

Once it does, you’re ready to serve! Remember to always taste your creations first:

ab -n 1000 -c 10 http://<the external ip of the burger service>/

In this case I get my burger in about 40 milliseconds.
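That 40 milliseconds is ab’s “Time per request (mean)” figure; the relevant lines of the summary look roughly like this (numbers invented for illustration):

```
Concurrency Level:      10
Time per request:       40.3 [ms] (mean)
Time per request:       4.03 [ms] (mean, across all concurrent requests)
```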

But wait, we can do better. I like to travel and when I do, I sometimes feel like eating a burger. This burger server I just made happens to be in Europe since that’s where I created the cluster. But when I’m in America I don’t want to go alllll the way back to Europe to get my burger. What if we could have these wonderful burgers over there as well? Let me tell you: we can make that happen!

First, let’s see what people on the other side of the world have to go through to get my burger. Create some machines in Oregon and Japan:

gcloud compute instances create oregon-test --zone us-west1-a
gcloud compute instances create japan-test --zone asia-northeast1-a

SSH into one:

gcloud compute ssh japan-test --zone asia-northeast1-a

and sudo apt-get install apache2-utils -y to install ab.

Let’s run our ab test again to see what it tastes like:

ab -n 1000 -c 10 http://<the external ip of the burger service>/

OMG, that is terrible: it takes 279ms for requests from Oregon and a whopping 459ms for requests from Japan. And this is just a burger! Imagine what it must feel like for those poor people to get an HTML burger, a starter of CSS, a salad of images on the side and a big chunk of JS for dessert. The horror!

As a good cook I know I can’t let my guests wait that long. There are a couple of options to fix this and since we’re talking about static content, the easiest way is to add a CDN. Here we go!

We start by deleting our normal load balancer and replacing it with a different service:

kubectl delete svc burger
kubectl expose deployment burger --target-port=80 --type=NodePort

This allows us to hook it up to an ingress. The nice thing about an ingress is that you can enable the Google CDN for it, which makes everything deliciously fast.
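For the curious, the NodePort service that expose just created looks roughly like this (a sketch, not the exact kubectl output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: burger
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: burger
```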

To create the ingress we create a file called… ingress.yaml! In it we put this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: burger-ingress
spec:
  backend:
    serviceName: burger
    servicePort: 80

Set it up on the cluster: kubectl create -f ingress.yaml and, like before, we can watch it until it has an address: kubectl get ing --watch

The ingress is a bit like the load balancer that we saw before, except an ingress uses Google’s HTTP load balancer instead of the network load balancer.

Now it’s time to add the magic ingredient:

BACKEND=$(kubectl get ing burger-ingress -o json | jq -j '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -j 'keys[0]')
gcloud compute backend-services update $BACKEND --enable-cdn

Tada! This is my big trick that you won’t find in any other cookbook. Basically, it looks at the ingress we just created and uses jq (great tool btw) to slice and dice it so we end up with only the backend. We tell gcloud to enable the CDN on that backend and we’re done.
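If you want to taste the jq part on its own, here’s a dry run against a made-up sample of the annotation (the backend name k8s-be-30910--deadbeef is invented for illustration; on a real cluster the value comes from kubectl). The annotation’s value is itself a JSON string, which is why it goes through jq twice — once to pull the string out, once to parse it:

```shell
# A sample of the ingress JSON, with the annotation value being a JSON
# string that maps backend names to their health state:
ING_JSON='{"metadata":{"annotations":{"ingress.kubernetes.io/backends":"{\"k8s-be-30910--deadbeef\":\"HEALTHY\"}"}}}'
BACKEND=$(echo "$ING_JSON" \
  | jq -j '.metadata.annotations."ingress.kubernetes.io/backends"' \
  | jq -j 'keys[0]')
echo "$BACKEND"
```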

Let’s check out the difference:

ab -n 1000 -c 10 http://<the address of our burger ingress>/

It now takes me just 31ms to get my burger in Europe. If we do the same test from our VM in Japan we get it in 34ms which is much better than the 459ms we saw before. Testing it from our VM in Oregon we even get served within a ridiculous 1ms!

Since the CDN servers from Google are now getting all the hits, our original 3 servers will hardly get any traffic anymore. That means we can scale them back to just 1:

kubectl scale deployment burger --replicas=1

If there are only 3 things you remember from this show, remember this:

  • You can make yummy things with Kubernetes
  • Burgers need to be served fast
  • You can enable the Google CDN if you use an ingress

And there you have it. Bon appétit!
