k8s — Keeping Up the Development Throughput

Ryan Day
Wireless Registry Engineering
4 min read · May 30, 2016

As we approached the 24-hour mark of celebrating our new Kubernetes cluster, a divergence between our production and development environments caused a crash in our Metrics microservice. In short, the local tests failed to account for every configuration that exists in production. That left us with a new challenge: how do we simulate our whole environment locally and still keep all tests running fast? Solving this would let us keep a high development throughput, which is a must for any startup.

The Problem Statement

As discussed previously, our containers run Alpine Linux, which in turn runs our Go binaries. Go binaries aren’t always statically linked, so we need a consistent build method for our services. Furthermore, since our services talk to each other both within and across clusters, we need a way to simulate service discovery in the development environment. Finally, none of this should slow down the development process. We are trying to make it easier and faster to release stable code, not add complexity. We need to make sure the complexity around k8s and containers does not handicap our developers.

The Virtual Environment

Vagrant still rocks. Today, new developers at Wireless Registry are given a Vagrant box. Then they simply vagrant up our entire architecture to build and test with. This process should remain intact. All we have to do now is add the Kubernetes cluster inside the Vagrant box to have a full environment ready for local tests.

There is a great CoreOS guide to running Kubernetes inside a Vagrant box. But that guide launches a new Vagrant box, and we have to consider our existing workflow. We already have one box running our entire architecture; adding another would eat more system resources and create networking problems. Instead, we chose to follow the Ubuntu guide, as if we were installing a cluster on a regular Ubuntu machine. This lets us use our existing Vagrant box and avoid additional dependencies.

Vagrant allows you to specify a static IP address for your Virtual Machine that can be accessed from the host computer.

```ruby
config.vm.network "private_network", ip: "172.16.45.2"
```

This lets us expose an IP to services both inside and outside the Kubernetes cluster. Since this IP will be the same across all environments, we can use it in scripts and configuration files.

Since our cluster differs from the one the Ubuntu docs anticipate, this is the environment we have to create inside the Vagrant box:
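With the deployment scripts of the time, that environment boils down to a handful of variables in cluster/ubuntu/config-default.sh. The sketch below is an assumption based on that guide; the variable names may differ between Kubernetes versions, and the CIDRs are placeholders you should pick so they do not collide with the Vagrant private network.

```sh
# cluster/ubuntu/config-default.sh (excerpt) - one machine acting as both master and node
export nodes="vagrant@172.16.45.2"   # ssh target the installer connects to
export role="ai"                     # "a" = master, "i" = minion; "ai" = both on one box
export NUM_NODES=1
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24   # placeholder service CIDR
export FLANNEL_NET=10.1.0.0/16                   # placeholder pod network CIDR
```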

You’ll notice we only have one node for running the Master and Minion. Since Kubelet is installed via ssh, you will have to generate a new key for the vagrant user and add the key to ~/.ssh/authorized_keys inside the box.
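Something along these lines does the trick inside the box:

```sh
# Inside the Vagrant box, as the vagrant user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```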

Running this script builds the Kubernetes cluster and installs the DNS add-ons. All the correct upstart scripts are created to make the cluster persist across reboots.
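Roughly, it comes down to the provider scripts from the guide (paths follow the Kubernetes release layout of the time and may have moved since):

```sh
# Run from the Kubernetes release inside the Vagrant box
cd kubernetes/cluster
KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

# Once the cluster is up, deploy the DNS add-on
cd ubuntu
KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
```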

During development we will want to push test Docker images to this cluster. To do this, we set up a Docker image registry inside the Vagrant box. See this guide to get a registry set up. Since this is all a local testing environment, we use the insecure-registry flag rather than dealing with TLS certificates. This keeps the setup simple and prevents obscure issues from popping up down the line.
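A minimal sketch, assuming the standard registry:2 image and the static IP from the Vagrantfile (the /etc/default/docker mechanism shown here is the Ubuntu/upstart one of that era; newer Docker versions read daemon.json instead):

```sh
# Inside the Vagrant box: run a local registry on port 5000
docker run -d --restart=always --name registry -p 5000:5000 registry:2

# Tell the box's Docker daemon to trust it without TLS, e.g. in /etc/default/docker:
# DOCKER_OPTS="--insecure-registry 172.16.45.2:5000"
```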

Makefile

We use a single Makefile to build our service inside a Docker container running Alpine Linux. To produce a fully static Go binary, we must disable CGO and set the cross-compilation variables. This yields a static Linux binary no matter which operating system we build on. You still need Docker, though, since the binary has to be packaged into a Docker image.
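Concretely, the build step comes down to something like this (the binary name metrics is a placeholder for your service):

```sh
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o metrics .
```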

The default build creates a development image. For safety’s sake, if you want to create the production image, you have to ask for it explicitly on the command line.

After the build, the test image is pushed to the Docker registry on the vagrant box. This allows our Kubernetes pod to pull the image in the same way we would normally deploy the service.
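A minimal sketch of such a Makefile, with the service name, tag, and production registry as placeholders; the real one will differ, but the shape is the same:

```make
REGISTRY      ?= 172.16.45.2:5000
PROD_REGISTRY ?= registry.example.com
IMAGE         ?= metrics
TAG           ?= dev

.PHONY: all build image push release

# Default: build the dev image and push it to the registry in the Vagrant box
all: push

build:
	CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o $(IMAGE) .

image: build
	docker build -t $(REGISTRY)/$(IMAGE):$(TAG) .

push: image
	docker push $(REGISTRY)/$(IMAGE):$(TAG)

# The production image must be requested explicitly
release:
	docker build -t $(PROD_REGISTRY)/$(IMAGE):$(TAG) .
	docker push $(PROD_REGISTRY)/$(IMAGE):$(TAG)
```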

Remember that our Docker registry is running in insecure mode, so the Docker daemon on the host machine needs the same insecure-registry flag as the one inside the Vagrant box.
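On a current Docker install that usually means adding the registry to daemon.json (the file’s location depends on your platform):

```json
{
  "insecure-registries": ["172.16.45.2:5000"]
}
```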

YAML Differences

There are hardly any differences between our production YAML configuration and the test YAML configuration. Of course, we point to the correct Docker registry for our test image. We also set imagePullPolicy to Always, so we don’t have to bump version numbers while testing just to make sure new images are pulled.
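As a sketch, the relevant part of a test pod spec looks something like this (the metrics names are placeholders; note the value is Always, capitalized):

```yaml
spec:
  containers:
  - name: metrics
    image: 172.16.45.2:5000/metrics:dev
    imagePullPolicy: Always
```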

Caution: If everyone shares keys to the production cluster, you may accidentally deploy something! Set your defaults to the test cluster, and make the production procedure require extra steps.

Conclusion

With the updated Vagrant box and new Makefile, the development workflow looks like:

  1. Write code
  2. Run `make`, which builds the test image and pushes it to the registry in the Vagrant box
  3. Restart the k8s pod
  4. Test

This workflow only differs from the previous workflow in step 3. This is pretty good for having added a Kubernetes cluster to manage microservices.
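For step 3, as long as the service runs under a replication controller or Deployment, deleting the pod is enough to trigger a fresh pull of the image (the app=metrics label is a hypothetical example):

```sh
kubectl delete pod -l app=metrics
kubectl get pods -w
```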
