Image from Weaveworks

Kubernetes: Containers Made Easy

Noah Dietz
5 min read · May 17, 2016


Context

My introduction to enterprise software since I started interning at Apigee almost a year ago has been a fast-paced but fruitful one. My most recent efforts have been in learning to use Docker and Kubernetes.

My introduction to Docker culminated in a how-to on the company blog about running our lightweight API management system, Edge Microgateway, in a Docker container.

Since then, I’ve taken on learning the basics of Kubernetes, running it locally within Docker, and managing my own node. Kubernetes is a container cluster management system made by Google that enables streamlined deployment, maintenance, and organization of small- or large-scale container sets.

What I’ll cover is setting up local installations of Docker and Kubernetes, deploying a Docker container with a replication controller, hitting the deployed container, performing a rolling update, and scaling the pod.

Setup

I set this up on Linux, but it is also possible on OS X. Start by installing Docker locally, which can be done several ways. I installed it with the yum package manager:

$ sudo yum install docker

Immediately afterward, I started the docker service to get the ball rolling:

$ sudo service docker start

Verify that it is running with a quick docker command (the output should appear as follows):

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Finally, I installed the Kubernetes command-line tool, kubectl, like so:

$ curl -sSL "http://storage.googleapis.com/kubernetes-release/release/v1.2.4/bin/linux/amd64/kubectl" > kubectl
$ chmod +x kubectl
$ export PATH=$PATH:~/<directory containing kubectl>
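
A quick, client-only version check (it doesn’t need a running cluster) will confirm the binary is on your PATH and executable; it should report the version you just downloaded:

$ kubectl version --client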

What we’ve done so far is get a local Docker installation running and installed the Kubernetes CLI that we will use to manage our node.

Get things running

Docker is not the only way to run Kubernetes locally, but in my opinion, running on Docker is extremely easy and intuitive.

Starting our node

First, export two necessary environment variables, one for the most recent stable version of Kubernetes and one for the processor architecture you’re running on:

$ export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
$ export ARCH=amd64
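
If you’re curious which version you’ll be running, echo the variable; the exact value depends on what stable.txt points to at the time:

$ echo $K8S_VERSION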

Now, we are going to run a Docker image containing the kubelet for our configuration (version/architecture). The kubelet is like a service daemon whose job is to ensure that, within a node, a given set of containers is running properly. It will also initialize all of the other necessary node management containers. Kick this off with the following command:

$ docker run -d \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2

Wait 30 seconds for everything to start up, then verify that our node is running with the following command (output should be similar):

$ kubectl get nodes
NAME STATUS AGE
127.0.0.1 Ready 5d
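
If the node doesn’t show up, check that the kubelet and its helper containers actually started. Kubernetes-managed containers typically have names prefixed with k8s_, so another docker ps is a quick way to spot them:

$ docker ps | grep k8s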

Running a container

We will start our app container with a replication controller. A replication controller, based on its configuration, can start multiple instances of the same container app in individual pods. If one fails, it ensures that another is started in its place. It also enables scaling and rolling updates, both of which we will do.

Start by making a replication controller configuration named hello-repc.yaml containing the following:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    app: hello
  template:
    metadata:
      name: hello
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: ndietz/hello
        ports:
        - containerPort: 3000

This, upon creation, will launch 3 replicas of the same Docker container, running the image pulled from my Docker Hub at ndietz/hello. This is a simple Hello World Node.js app listening on port 3000. Start our replication controller with the following command:

$ kubectl create -f ./hello-repc.yaml
replicationcontroller "hello" created

Running the describe command will give us info on the replication controller’s state:

$ kubectl describe replicationcontrollers/hello
...
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
...

This output is truncated, but should include the above two lines, showing that 3 replicas of our app are running in their own pods.
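
You can also list the individual pods the controller created. The names below are placeholders; your pods will have randomly generated suffixes:

$ kubectl get pods -l app=hello
NAME          READY     STATUS    RESTARTS   AGE
hello-xxxxx   1/1       Running   0          1m
hello-yyyyy   1/1       Running   0          1m
hello-zzzzz   1/1       Running   0          1m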

Playing with pods

Showing that they are running is one test, but actually hitting the Node.js app being run is another. Expose the pods as a service, which will handle traffic in and out of the replication controller’s pods:

$ kubectl expose replicationcontrollers/hello --port=3000
service "hello" exposed

Then, export the IP address of the service to an environment variable for ease of testing:

$ export ip=$(kubectl get svc hello --template={{.spec.clusterIP}})

Finally, test it with a simple curl command:

$ curl $ip:3000
hello world

It’s alive! The service will load balance any requests across the available pods, and if you attempt to kill one of our containers with a docker command, the replication controller is going to start it back up.
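
One way to see this self-healing in action is to delete one of the pods directly and watch the replication controller replace it (substitute a real pod name from your own cluster):

## pick a pod name, then delete it
$ kubectl get pods -l app=hello
$ kubectl delete pod <one of the hello pod names>
## moments later, the controller has brought the count back to 3
$ kubectl get pods -l app=hello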

Now, what if your app is a hit and we need more instances running? How about when another app needs more resources and we want to turn down some of our replicas? This is as simple as one command:

## scaling up
$ kubectl scale rc hello --replicas=4
replicationcontroller "hello" scaled

## scaling down
$ kubectl scale rc hello --replicas=2
replicationcontroller "hello" scaled

Another describe call will verify that the scaling succeeded, or surface any failures.
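
For a quicker check, a get on the replication controller shows the desired and current replica counts side by side:

$ kubectl get rc hello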

Last, but not least, we will run a rolling update. This is crucial when you want to roll out an update to your production cluster without dropping any traffic. The replication controller will bring up a pod running the update and allow traffic to flow to it before turning down one of the old pods, repeating until all of the new pods are up. We will update our pods with another Docker image from my Docker Hub:

$ kubectl rolling-update hello --image ndietz/hellov2
...
Update succeeded. Deleting old controller: hello
...
replicationcontroller "hello" rolling updated

This output is truncated, but this command outputs every step of the rolling update. Verify that the update worked:

$ curl $ip:3000
Hello World
this is late data
this is the end of the data

This Node.js app does something slightly different, but it makes the power of a rolling update obvious.
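
If you want to double-check which image the pods are now running, a describe call on the controller will show it (the exact output format depends on your kubectl version):

$ kubectl describe rc hello | grep -i image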

Conclusion

These are really just a few of the things you can do with Kubernetes, but they are very powerful and helpful even to the small-scale developer.

Check out more of Kubernetes and give it a shot in your next project.

Some commands were drawn from the Kubernetes reference documentation.
