From Docker Compose to Minikube

Some Learnings while running Kubernetes locally

Devin Burnette
Sep 7, 2017

One of the burdens for any software development team is making its development environment look and feel as close to production as possible. This helps ensure there aren't too many surprises when releasing a feature. Last summer, the Skillshare Engineering Team moved both our production and QA environments to Kubernetes, and it has since been instrumental in helping us scale as we continue to grow. AWS and tools like Kops have made running Kubernetes in production a cinch. However, our local development environment was different enough to cause a bit of frustration at times.

We were using Docker Compose with Docker images built with local development in mind. When we heard about Minikube, a single-node implementation of Kubernetes, we weren't really sure of its stability since it was still in "pre-release" mode. We decided to try it anyway, to see if it would meet our needs, and we ran into a few snags along the way. I'll walk you through how we solved them.


Converting Docker Compose yaml to Kubernetes

We created a Kubernetes deployment and a Kubernetes service for every container that had an exposed port. Most used the default service type, ClusterIP, for internal cluster communication. There were a couple of services that needed to expose a port that we could access externally, like from a browser. For those, the NodePort service type is what we needed. You can see the before and after below:

Docker Compose Config Example
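
The embedded gist isn't reproduced here, but a minimal Compose service of the kind we're talking about might look like this (the service name, image, and paths are illustrative, not our actual config):

```yaml
# docker-compose.yml — an nginx container exposing port 80,
# with the project directory mounted in for live development
version: '2'
services:
  nginx:
    image: nginx:1.13
    ports:
      - "80:80"
    volumes:
      - .:/var/www/app
```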

VS

Kubernetes Pod Spec Example
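
The corresponding Deployment for that same container could be sketched like this (again illustrative; on Kubernetes 1.7, Deployments lived in the apps/v1beta1 API group):

```yaml
# Deployment for the nginx container from the Compose example above
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.13
          ports:
            - containerPort: 80
```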

Exposing Port 80 as a NodePort

Kubernetes Nginx Service Example
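
A NodePort service for nginx along these lines would do the trick (illustrative sketch; pinning nodePort to 80 only works because our start command below widens the default NodePort range):

```yaml
# Service exposing nginx on port 80 of the Minikube VM itself
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      nodePort: 80
```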

Our start command looks like this — the extra-config flag widens the default NodePort range (30000–32767) so we can bind low ports like 80:

minikube start \
--kubernetes-version=v1.7.0 \
--cpus=2 \
--memory=4096 \
--extra-config=apiserver.ServiceNodePortRange=1-50000

Mounting the Project Directory as a Volume

Kubernetes Host Path Volume Mount Example
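
Our first attempt was a plain hostPath volume, which maps a directory inside the Minikube VM into the pod. A sketch of that pod spec (names and mount paths are illustrative; the host path depends on how your project directory is shared into the VM):

```yaml
# Pod spec fragment mounting the shared source tree via hostPath
spec:
  containers:
    - name: app
      image: nginx:1.13
      volumeMounts:
        - name: source-code
          mountPath: /var/www/app
  volumes:
    - name: source-code
      hostPath:
        path: /mnt/sda1/data/source-code
```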

This seemed to work, but the performance was unbearable: page loads took upward of 30 seconds in some cases, so that option was out. Before trying other hypervisors, we came across this blog post by Mitchell Hashimoto of HashiCorp, and it ultimately pointed us to NFS. Though a bit trickier to configure, NFS ended up being a ton faster; Mitchell's post illustrates the difference in sequential file reads and writes.

via: Mitchell Hashimoto, HashiCorp

NFS was clearly a better option than vboxsf. In order to have the proper file permissions inside the VM, the project directory needed to be exported in /etc/exports like this:

/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=501:20

Then inside the Minikube VM, we ran this command to get it mounted:

minikube ssh -- sudo busybox mount \
-t nfs 192.168.99.1:/Users /mnt/sda1/data/source-code \
-o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
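
As a quick sanity check (not from the original post), you can confirm the share actually mounted from inside the VM:

```
minikube ssh -- mount | grep nfs
```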

Now we’re cooking with gas!


Stern for Logging

via: Antti Kupila, Wercker
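
The illustration above showed Stern in action. Where kubectl logs tails a single pod at a time, Stern tails every pod matching a query at once, color-coding output per pod — which is exactly what you want when a Deployment keeps replacing pods underneath you. A couple of typical invocations (pod names and labels here are examples):

```
# Tail the last 20 lines from every pod whose name matches "web"
stern web --tail 20

# Or scope to a label selector in a given namespace
stern --namespace default --selector app=nginx
```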

By making this switch from Docker Compose to Minikube, we’ve been able to reduce certain variances across environments that made local development a pain. In the process, the team learned a great deal about Kubernetes. Now we can even poke around with the latest and greatest Kubernetes features in isolation without the risk of accidentally bringing down production or QA.

