Deploying your application to Kubernetes

Posted by Jorrit Salverda, technical architect, on June 13, 2016

At Travix we run about half of our 100 in-house developed applications in Google Container Engine, Google’s hosted version of the Kubernetes container management cluster. We started using it in May 2015, when Kubernetes was still in alpha, and have since embraced it as our default hosting platform for any new application.

In this article I’ll describe how to deploy your application to Kubernetes and expose it as a public service.

Connecting with Kubernetes

Before we get started, make sure you’ve created a Google Container Engine cluster in the Google Cloud console and have the gcloud command line tool installed and configured on your computer.

The kubectl command line tool can be installed through the gcloud cli.

$ gcloud components install kubectl

And with the following command you configure kubectl to communicate with your cluster. Make sure to replace the name and zone with the correct values for your own cluster.

$ gcloud container clusters get-credentials <container engine cluster name> --zone <google cloud zone>
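
To check that kubectl now talks to the right cluster you can, for example, run:

$ kubectl config current-context
$ kubectl cluster-info

The first command prints the active context, the second the endpoints of the cluster’s master and services.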

Namespace

In Kubernetes you run your applications in a namespace; inside the same namespace applications can discover each other by service name. Because namespaces isolate these names, you can reuse the same service name in different namespaces, with each resolving to the application running in that namespace. This allows you to create your different “environments” in the same cluster if you wish to do so: for development, test, acceptance and production you would create four separate namespaces.
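
As a sketch, assuming you name your environments as above, you could create all four namespaces in one go:

$ for env in development test acceptance production; do kubectl create namespace $env; done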

A namespace is required for setting up the other components, so let’s get started by creating one. Save a file called namespace.yaml with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

And then run the following command to create the namespace in Kubernetes.

$ kubectl apply -f ./namespace.yaml
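
You can confirm it was created by listing all namespaces:

$ kubectl get namespaces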

Service

A service in Kubernetes is the entry point for traffic into your application. It can be used to access an application internally within the Kubernetes cluster only, or to expose the application to the public internet via an external load balancer. We’ll do the latter.

Internally you can access the service with the url http://<service name>/. This automatically resolves to the service as long as you’re in the same namespace, and it keeps working when the service is also configured as an externally load balanced service.
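
For example, assuming the service name my-app used below and the default cluster.local cluster domain, a container in the same namespace can simply call:

$ curl http://my-app/

From other namespaces the same service is reachable via its fully qualified DNS name, my-app.my-namespace.svc.cluster.local.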

Externally, a load balancer with a single ip address routes traffic to that same service.

Create a file called service.yaml with this content.

apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  selector:
    app: my-app

And then create it in Kubernetes by running the following command.

$ kubectl apply -f ./service.yaml

In the service the selector is used to direct traffic to all pods with matching labels. Those pods will need a port with the same name as used in targetPort; we named the port http in our case, as you’ll see later in the deployment manifest.

With the type set to LoadBalancer, a network load balancer listening on port 80 is created automatically on the Google Cloud Platform.
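
While the load balancer is being provisioned the external ip address of the service shows up as pending; you can check on its status with:

$ kubectl get service my-app --namespace my-namespace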

Wait for the ip address of the network load balancer to become available with the following bash script; it uses jq to parse kubectl’s json output.

loadBalancerIP=""while [ "$loadBalancerIP" == ""] || [ "$loadBalancerIP" == "null" ]
do
sleep 10s
svc_json=$(kubectl get svc -l app=my-app -o json) loadBalancerIP=$(echo $svc_json | jq -r ".items[].status.loadBalancer.ingress[0].ip")
done
echo "load balancer ip: $loadBalancerIP"

You can now use this ip address to connect to the service or set a DNS record for it. At Travix we do this automatically from our deployment script.
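
If the zone is hosted in Google Cloud DNS, a minimal sketch of such automation, assuming a managed zone called my-zone and the hypothetical record name my-app.example.com, could look like this:

$ gcloud dns record-sets transaction start --zone=my-zone
$ gcloud dns record-sets transaction add "$loadBalancerIP" --zone=my-zone --name=my-app.example.com. --type=A --ttl=300
$ gcloud dns record-sets transaction execute --zone=my-zone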

Pod

Kubernetes runs containers using the concept of a “pod”. A pod is a grouping of one or more closely related containers; they start and stop together and run on the same host.

Each pod gets a dedicated ip address in the Kubernetes cluster. Multiple pods can run on the same host while being fully isolated from each other, so applications in different pods can listen on the same ports without causing problems. Outside the pod those ports are NAT’ed, which is what allows multiple applications using the same port to run on the same host.

Inside the pod, localhost resolves to the pod itself, not to the host running it. This lets multiple applications running in the same pod communicate with each other over localhost, using the original ports they listen on. It makes communication inside a pod fast and more secure, since it doesn’t hit the network.

At Travix we use the pod to move some cross-cutting concerns out of our applications into dedicated containers: besides the main application of the deployment, each pod also runs HAProxy for TLS termination. In this blog post we’ll just run a single container inside the pod though.
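
To give an idea of what that looks like, below is a minimal sketch of a pod’s spec section with a second container next to the application; the haproxy image and its port are illustrative, not our actual configuration:

spec:
  containers:
  - name: my-app
    image: travix/my-app:247dc32
    ports:
    - name: http
      containerPort: 5000
  - name: tls-sidecar
    image: haproxy:1.6
    ports:
    - name: https
      containerPort: 443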

Below you’ll see how a pod is configured and used for deploying your application.

Deployment

Now that the service is set up, it has nowhere to send traffic yet, because there are no pods with labels that match the selector in the service. This decoupling between services and pods in Kubernetes allows you to create them in any order.

To create those pods it’s easiest to use the deployment object, even though it’s still in beta. Its manifest specifies what containers to run in the pod, what environment variables to inject into those containers, which ports are exposed and what labels are set on the pods.

The deployment is a high-level abstraction. When created it creates a ReplicaSet, which in turn is responsible for creating a number of pods equal to the configured number of replicas. Each new deployment creates a new ReplicaSet, but the old ones are kept around for quick rollbacks, up to a maximum number equal to the value of revisionHistoryLimit. The rolling update performed by the deployment scales all old ReplicaSets down to 0 replicas, so that only the latest one has pods running.
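
You can see this mechanism at work after a few deployments, for example with:

$ kubectl get replicasets --namespace my-namespace
$ kubectl rollout history deployment/my-app --namespace my-namespace

The first command lists the ReplicaSets the deployment has created, with only the newest one scaled up; the second shows the stored revisions you can roll back to.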

Create a file called deployment.yaml with the following content, replacing the container image with one of your own and the ports with the one your application listens on.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: travix/my-app:247dc32
        resources:
          limits:
            cpu: 100m
            memory: 250Mi
          requests:
            cpu: 10m
            memory: 125Mi
        ports:
        - name: http
          containerPort: 5000
        livenessProbe:
          httpGet:
            path: /liveness
            port: 5000
          initialDelaySeconds: 15
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /readiness
            port: 5000
          initialDelaySeconds: 0
          timeoutSeconds: 1
        env:
        - name: "CONFIGURATION_ITEM"
          value: "value"

To actually start the deployment you run the following command.

$ kubectl apply -f ./deployment.yaml

The selector in the deployment manifest is used to match the ReplicaSets created by the deployment. Make sure to keep it the same when redeploying; otherwise a new ReplicaSet is created without the replica counts of the previous ReplicaSets being set to 0, leaving the old pods running alongside the new ones.

Although the application listens on port 5000 and the service uses port 80, the named port http in targetPort lets Kubernetes translate between the two automatically, hiding this from the consumers of the service.

Deploy new version

When you want to deploy the next version, update the image tag and version label in the manifest and re-run

$ kubectl apply -f ./deployment.yaml

It’s as simple as that. The deployment then executes a rolling update, replacing the pods with new ones one at a time. This usually takes less than half a minute when only 3 replicas are used.
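
You can watch the pods being replaced one by one, for example with:

$ kubectl get pods -l app=my-app --namespace my-namespace --watch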

The kubectl apply command does not wait for the deployment to finish though; for that you have to jump through some hoops.

Currently we do that with something like the script below.

# wait until the rolling update of the deployment has finished
deployment_json=$(kubectl get deployment/my-app --namespace my-namespace -o json)
generation=$(echo $deployment_json | jq -r ".metadata.generation")
observedGeneration=0
desiredReplicas=$MIN_PODS
updatedReplicas=0
availableReplicas=0

while [ "$observedGeneration" -lt "$generation" ] ||
      [ "$updatedReplicas" -lt "$desiredReplicas" ] ||
      [ "$availableReplicas" -lt "$updatedReplicas" ]
do
  sleep 5s
  deployment_json=$(kubectl get deployment/my-app --namespace my-namespace -o json)

  # fall back to defaults when a status field is not numeric (yet)
  observedGeneration=$(echo $deployment_json | jq -r ".status.observedGeneration")
  if ! [[ "$observedGeneration" =~ ^[0-9]+$ ]]
  then
    observedGeneration=0
  fi

  desiredReplicas=$(echo $deployment_json | jq -r ".spec.replicas")
  if ! [[ "$desiredReplicas" =~ ^[0-9]+$ ]]
  then
    desiredReplicas=$MIN_PODS
  fi

  updatedReplicas=$(echo $deployment_json | jq -r ".status.updatedReplicas")
  if ! [[ "$updatedReplicas" =~ ^[0-9]+$ ]]
  then
    updatedReplicas=0
  fi

  availableReplicas=$(echo $deployment_json | jq -r ".status.availableReplicas")
  if ! [[ "$availableReplicas" =~ ^[0-9]+$ ]]
  then
    availableReplicas=0
  fi
done

In here you’ll probably want to set an upper time limit to wait for the deployment to finish, and roll back if it takes too long using the following command.

$ kubectl rollout undo deployment/my-app --namespace my-namespace
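
One simple way to combine both, assuming the wait loop above is saved as a script with the hypothetical name wait-for-deployment.sh, is to run it under the coreutils timeout command and roll back when it doesn’t finish in time:

$ timeout 300 bash ./wait-for-deployment.sh || kubectl rollout undo deployment/my-app --namespace my-namespace

Note that this also rolls back when the script fails for any other reason, which is usually what you want in a pipeline.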

Templating manifests

Bash doesn’t have a template engine, but a basic form of variable replacement in the manifest can be done in the following way. Use shell variable notation in your manifest and make sure to include the curly braces.

apiVersion: v1
kind: Namespace
metadata:
  name: ${NAMESPACE}

And run the following command to replace the variable in the manifest and create the namespace from the resulting manifest.

$ NAMESPACE="my-namespace" sed -r 's/^(.*)(\$\{[A-Z_]+\})/echo "\1\2"/e' ./namespace.yaml | kubectl apply -f -

Or, if you have gettext installed, use the shorter

$ cat ./namespace.yaml | NAMESPACE="my-namespace" envsubst | kubectl apply -f -

For all the objects we’ve used you can re-apply the same manifest: Kubernetes will create an object if it doesn’t exist yet and update it if it already does. So on each deploy you can re-apply the namespace, service and deployment without any unwanted side effects.
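
Putting it all together, the deploy step of a pipeline could look something like this sketch, assuming the manifests from this article and the envsubst approach shown above:

$ export NAMESPACE="my-namespace"
$ cat ./namespace.yaml | envsubst | kubectl apply -f -
$ kubectl apply -f ./service.yaml
$ kubectl apply -f ./deployment.yaml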

I hope this article has shown you Kubernetes’ core concepts and how simple it is to run your application in Container Engine, and that it gives you some ideas on how to automate the deployment in your own deployment pipelines.
