From Docker Compose to Minikube

Some learnings from running Kubernetes locally

Devin Burnette
Skillshare Writings
Sep 7, 2017 · 5 min read


One of the burdens of any software development team is making the development environment look and feel as close to production as possible. This helps ensure there aren't too many surprises when releasing a feature. Last summer, the Skillshare Engineering Team moved both our production and QA environments to Kubernetes, and it has since been instrumental in helping us scale as we continue to grow. AWS and tools like Kops have made running Kubernetes in production a cinch. However, our local development environment was different enough to cause a bit of frustration at times.

We were using Docker Compose with Docker images built with local development in mind. When we heard about Minikube, a single-node implementation of Kubernetes, we weren't sure about its stability since it was still in "pre-release" mode. We decided to try it out anyway to see whether it would meet our needs, and we ran into a few snags along the way. I'll walk you through how we solved them.

Converting Docker Compose YAML to Kubernetes

The first obstacle was generating Kubernetes pod specs from the existing Docker Compose files. Not a big deal; Kubernetes wasn't new to us at this point, and if you already have Kubernetes experience you might find it straightforward too. At first we looked at another open source project, Kompose.io, which provides a simple CLI to convert existing Docker Compose YAML files into the necessary Kubernetes configuration. The project was still in its early stages and didn't quite give us what we were looking for. After all, our setup wasn't a simple one-to-one conversion; we had very specific requirements, so we tackled this part by hand instead.

We created a Kubernetes Deployment and a Kubernetes Service for every container that had an exposed port. Most used the default service type, ClusterIP, for internal cluster communication. A couple of services needed to expose a port we could reach externally, such as from a browser; for those, the NodePort service type was what we needed. You can see the before and after below:

Docker Compose Config Example
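
(The original embedded gist isn't reproduced in this copy, but a Compose service from that setup looked roughly like the following; the service names, image, and paths are placeholders rather than our actual config.)

# Hypothetical Compose v1-style config; service names, image, and paths are placeholders
web:
  image: example/web:dev
  ports:
    - "8080:8080"
  volumes:
    - .:/var/www/app
db:
  image: mysql:5.6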

VS

Kubernetes Pod Spec Example
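
(Again, in place of the original gist, here's a sketch of the equivalent Deployment plus ClusterIP Service; the names, labels, and image are illustrative, and the API version reflects the Kubernetes of that era.)

# Hypothetical Deployment + Service pair; names, labels, and image are placeholders
apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.7; newer clusters use apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:dev
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP          # the default type, shown explicitly for clarity
  selector:
    app: web
  ports:
  - port: 8080
    targetPort: 8080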

Exposing Port 80 as a NodePort

Our Nginx service needs to listen on port 80. We couldn't change that without also changing the rest of our setup to expect a different port or to use port forwarding, and we wanted to make things as seamless as possible for developers during the transition, so those options were out of the question. Minikube has an "--extra-config" flag that can tweak how certain aspects of the Kubernetes API server work. One of its settings, "apiserver.ServiceNodePortRange", takes the range of ports you'd like to make available for NodePort assignments. By default this range is 30000–32767, which doesn't allow a service to bind to port 80. Since we're on our local machines and know nothing else is listening on that port, we can override the range to include port 80 at the lower bound and tell our Kubernetes service to use that port explicitly for Nginx. Our Kubernetes Nginx service looks like this:

Kubernetes Nginx Service Example
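
(The actual service definition isn't embedded in this copy, but a NodePort service pinned to port 80 looks roughly like this; the metadata name and selector labels are placeholders.)

# Hypothetical NodePort service bound to port 80; name and labels are placeholders
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # cluster-internal port
    targetPort: 80    # container port
    nodePort: 80      # only valid once the NodePort range includes 80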

Our start command looks like this:
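
(The exact command isn't preserved in this copy of the post, but with the extended NodePort range it would be something along these lines; the VM driver and the precise range are assumptions.)

# Illustrative minikube start; the driver and exact port range are assumptions
minikube start \
  --vm-driver=virtualbox \
  --extra-config=apiserver.ServiceNodePortRange=80-32767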

Mounting the Project Directory as a Volume

One thing that should be different from production is how your project's source code gets into the container. For local development, a developer needs to see their changes in real time, so we had to mount the project directory from the host machine inside the container. We were already using VirtualBox with our Docker Compose configuration, so we looked into what mount options were already available. VirtualBox has a feature called "Shared Folders" (vboxsf), which mounts /Users by default on a Mac. This makes anything inside /Users available inside the VirtualBox VM running Minikube. From there, we just needed to mount this volume inside each container that needs access to the project source code, like this:

Kubernetes Host Path Volume Mount Example
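
(In place of the original example, here's a sketch of a hostPath mount in a pod spec; the container name, image, and paths are placeholders, with the host path living under the shared /Users folder.)

# Hypothetical pod spec fragment with a hostPath volume; names and paths are placeholders
spec:
  containers:
  - name: web
    image: example/web:dev
    volumeMounts:
    - name: src
      mountPath: /var/www/app              # where the code appears inside the container
  volumes:
  - name: src
    hostPath:
      path: /Users/example/projects/app    # placeholder path under the shared /Users folder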

This seemed to work, but the performance was unbearable: page loads took upward of 30 seconds in some cases, so that option was out. Before going down the road of trying other hypervisors, we came across this blog post by Mitchell Hashimoto of HashiCorp and ultimately landed on NFS. Though a bit trickier to configure, NFS ended up being a ton faster. Mitchell's post illustrates the difference in sequential file reads and writes.

Benchmark of sequential file reads and writes (via Mitchell Hashimoto, HashiCorp)

NFS was clearly a better option than vboxsf. In order to have the proper file permissions inside the VM, the project directory needed to be exported in /etc/exports like this:

/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=501:20

Then inside the Minikube VM, we ran this command to get it mounted:
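
(The command itself isn't preserved in this copy, but mounting an NFS export from inside the Minikube VM generally looks like the following; the host IP, mount options, and paths are placeholder guesses.)

# Run via minikube ssh; the host IP, options, and paths are placeholders
sudo umount /Users 2>/dev/null                                 # drop the default vboxsf share if present
sudo mount -t nfs -o nolock,vers=3 192.168.99.1:/Users /Users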

Now we’re cooking with gas!

Stern for Logging

Similar to the way Docker Compose showed us what our containers were outputting, we needed a way for developers to see their logs in real time with Minikube. We came across another tool, written in Go, called Stern. Stern is great! It can show all logs, in all containers, in all pods, within the same stream, while neatly grouping events from the same pod or container together by color. Its strengths lie in its ability to filter and exclude logs with regex queries against pod names and selector labels. One of the main reasons we chose Stern is that it cleanly stops and starts tailing logs as pods scale up and down, which lets a developer quickly debug an issue, resolve it, and move on without wasting much time.

Stern in action (via Antti Kupila, Wercker)
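
A couple of typical invocations, just as a sketch (the pod name regex, container name, and flag values here are placeholders; check the Stern README for the full flag list):

# Tail all containers in pods whose names match the regex "web" (placeholder name)
stern web

# Narrow to one container and keep only the most recent lines
stern web --container nginx --tail 20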

By switching from Docker Compose to Minikube, we've been able to reduce the differences between environments that made local development a pain. In the process, the team learned a great deal about Kubernetes. Now we can even poke around with the latest and greatest Kubernetes features in isolation, without the risk of accidentally bringing down production or QA.

Join Us

If you like solving cool problems like these, we’re hiring!
