Local Django on Kubernetes with Minikube

Bill Prin
Google Cloud - Community
6 min read · Dec 13, 2016

It’s been over half a year since I last wrote about running Django on Kubernetes along with Postgres and Redis containers, and I wanted to update that tutorial to talk about one of the most exciting projects to emerge in the Kubernetes ecosystem in the last year: Minikube.

Minikube makes it really, really easy to run a Kubernetes cluster locally. While there were previously lots of options for running Kubernetes locally, the community is largely coalescing around Minikube, which is an official part of the Kubernetes GitHub organization.

All the code for this tutorial can be found on this GitHub project.

Here are some other tutorials on getting started with Minikube:

Minikube Getting Started from the Kubernetes Docs

Getting Started With Kubernetes via Minikube

Configuring the Ultimate Development Environment for Kubernetes

The Minikube project itself usually has the most up-to-date docs in the README on different ways to install it, as well as an Issue Tracker that the development team actively responds to.

Besides Minikube, there are a few other changes made to the project:

  • I started using Jinja2 templates via the jinja2 CLI to populate environment variables, and to factor out the parts of the configuration that are Minikube- or Container Engine-specific.
  • I switched all the Replication Controllers to the new Deployments, which are pretty much the same thing with a more declarative update system (see the sketch after this list).
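To give a feel for both changes, here is a minimal sketch of a Jinja2-templated Deployment; the file name, the guestbook labels, and the {{ image }} variable are illustrative, not the repo’s exact contents:

# guestbook-deployment.yaml.jinja (illustrative sketch)
apiVersion: extensions/v1beta1  # Deployments lived under the extensions group at this point
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
        - name: guestbook
          image: {{ image }}  # filled in per environment by the template step

Rendering it and piping the result to kubectl would look something like jinja2 guestbook-deployment.yaml.jinja -D image=guestbook:v1 | kubectl apply -f -, assuming the jinja2-cli package.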

Our Django project used a few cloud features, such as load balancers and volumes (via GCE Persistent Disk), and you might wonder how those translate to Minikube. So this post will go over the following topics:

  • Minikube tips and gotchas I’ve run into
  • Persistent Volumes and Persistent Volume Claims
  • Minikube services vs External LoadBalancers
  • Port forwarding and why it’s useful
  • Hot reloading your code in development with host mounts

Minikube tips

Another big 2016 announcement was Docker for Mac, which is super great for those of us who didn’t like running VirtualBox and futzing with docker-machine. The default Minikube driver is still VirtualBox though, so if you’re using Docker for Mac, make sure you specify the xhyve driver (xhyve is the hypervisor that drives Docker for Mac):

$ minikube start --vm-driver=xhyve

Or set this permanently with:

$ minikube config set vm-driver xhyve

Another thing to consider with Minikube is that it won’t always have credentials to pull from private container registries. If you’re using public DockerHub images, this is no big deal, but if you’re using a private registry (Google Container Registry images are private by default), it’s a problem. There are two solutions: the first is to add imagePullSecrets to all your pod specs; the other is to avoid having Minikube pull images at all, by making sure imagePullPolicy is set to IfNotPresent.

Keep in mind that the default imagePullPolicy is IfNotPresent, unless the image is tagged as latest, in which case it’s Always. Images without tags are considered to have the tag latest. So it’s best just to tag your images and explicitly set your imagePullPolicy.
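In a pod spec, that combination looks something like this (the image name is illustrative):

containers:
  - name: guestbook
    image: guestbook:v1             # explicit tag instead of relying on :latest
    imagePullPolicy: IfNotPresent   # use the locally available image, never pull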

My sample repo has gone with the latter approach and avoids pulling images when working locally. In order for Minikube to get the images it needs, you can share your Docker daemon with Minikube:

$ eval $(minikube docker-env)

Now when you do Docker builds, the images you build will be available to Minikube.
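For example, after pointing your shell at Minikube’s Docker daemon, an ordinary build makes the image available in-cluster (the tag is illustrative):

$ docker build -t guestbook:v1 .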

When switching back and forth between Container Engine and Minikube, make sure to switch the contexts:

$ gcloud container clusters get-credentials mycluster  # Container Engine context
$ kubectl config use-context minikube                  # Minikube context
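If you lose track of which contexts are available, kubectl can list them:

$ kubectl config get-contexts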

Persistent Volumes and Persistent Volume Claims

In the original project, I attached a GCE Persistent Disk directly to the Postgres Pod as a Volume:

volumes:
  - name: postgresdata
    gcePersistentDisk:
      # your disk name here
      pdName: pg-data
      fsType: ext4
  - name: secrets

The problem is that Minikube will not be able to access a GCE disk. Of course, this is easily solved by our Jinja2 templates. However, Kubernetes has the concepts of PersistentVolumes and PersistentVolumeClaims, which generalize over the underlying storage, so I figured this would be a good place to adopt them anyway.

Instead of attaching a specific volume, we attach a PersistentVolumeClaim, which simply asks for some sort of storage. Of course, the claims can specify what read/write permissions they need, how much storage, etc.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
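After creating the claim, you can watch it go from Pending to Bound once a matching volume exists (the file name is illustrative):

$ kubectl create -f postgres-pvc.yaml
$ kubectl get pvc postgres-data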

Then we can attach the PVC to the Pod instead.

volumes:
  - name: postgresdata
    persistentVolumeClaim:
      claimName: postgres-data

Claims will need to be bound to PersistentVolumes that satisfy their constraints. For Container Engine, we will still create a GCE disk:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  gcePersistentDisk:
    pdName: pg-data
    fsType: ext4
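Note that the GCE disk itself still has to be created separately; for the pg-data name above, that’s something like:

$ gcloud compute disks create pg-data --size=5GB  # GCE may enforce a minimum disk size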

But for Minikube, we can just create a local directory and use a hostPath volume as the PersistentVolume instead.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0001/
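The directory has to exist inside the Minikube VM, not on your Mac. /data is one of the paths Minikube persists across reboots, and you can create the directory over minikube ssh:

$ minikube ssh "sudo mkdir -p /data/pv0001"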

Load Balancers

Our frontend that serves web traffic used a Service of type: LoadBalancer, which on Container Engine provisions a Google Compute Engine network load balancer. That provided us with an external IP when we ran:

$ kubectl get services

that we could then reach our service from. Obviously, it doesn’t make sense for Minikube to provision GCE load balancers or external IPs. Fortunately, Minikube simply ignores the LoadBalancer type, and we can still reach our service using the `minikube service` command:

$ minikube service guestbook

This will open up a browser window to our guestbook Service on a local port.
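If you just want the address rather than a browser window, the --url flag prints it instead:

$ minikube service guestbook --url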

Port forwarding

During development, it’s often useful to make a change to the code and immediately see it reflected in the browser. On the other hand, Kubernetes expects immutable images, so we need to do a Docker build to update our app. It would be nice to have our code changes reflected immediately while still being able to use our real Postgres database and Redis cache.

One option is to develop the Django code locally, but still use the Postgres and Redis running in Minikube. Kubernetes has a port-forward command, which is really useful whenever you want to access one of your Kubernetes services without exposing it externally. So if you do something like:

$ kubectl port-forward <postgres-pod> 5432:5432 &
$ kubectl port-forward <redis-pod> 6379:6379 &

then your localhost Postgres and Redis ports will map to the Minikube ones. You can use psql and redis-cli to talk to these services from your MacBook directly, and your local Django app can talk to them through localhost as well. Port forwarding is useful in general for accessing services that you don’t want exposed outside a Kubernetes cluster.
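For example, to look up the pod names and talk to both services from your machine (the postgres user is illustrative):

$ kubectl get pods                       # find the actual pod names
$ psql -h localhost -p 5432 -U postgres
$ redis-cli -h localhost -p 6379 ping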

Hot reloads via host mounts

With port forwarding, you’re still running the Django code on your local machine rather than on a Kubernetes cluster, which might not be exactly what you want, especially if your frontend’s Docker image has things installed that you don’t necessarily have on your workstation. Fortunately, it’s also pretty easy to have your code hot reloaded while still running in your Docker container.

All you have to do is create a host mount to mount your local directory into your container. You also need to make sure you add the --reload flag to the gunicorn command in your Dockerfile:

gunicorn --reload -b :$PORT mysite.wsgi

Now we need to mount the host directory. Keep in mind that when running on a Mac, there are usually two levels of hosts: the MacBook itself, and then either VirtualBox or xhyve. Since I’m using xhyve, my /Users directory is automatically mounted, and that’s where I do all my development anyway. So I just need to mount where I keep my Django code, which for me is `/Users/waprin/code/django_postgres_redis/guestbook`, to where the container expects to find the code, which is /app. So I end up adding something like this to my frontend.yaml:

# in the guestbook container spec
volumeMounts:
  - name: reload
    mountPath: /app
# in the pod spec
volumes:
  - name: reload
    hostPath:
      path: /Users/waprin/code/django_postgres_redis/guestbook
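A quick sanity check that the mount is working (the pod name is a placeholder):

$ kubectl exec <guestbook-pod> -- ls /app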

Now, whenever I make any changes to the code on my MacBook, they’re reflected in the container’s directory, and gunicorn automatically hot-swaps in the new code. So I can code on my MacBook, but all my code runs in a Linux container in Minikube, with all my Kubernetes services available.

Unfortunately, Minikube’s host folder sharing is not implemented on Linux, although Linux workstations tend to be closer to the Docker images we’re running anyway, so it might not be a big deal.

Getting In Touch

As always, file an issue on my GitHub repo or mention me on Twitter.

You can also join the #minikube channel on the Kubernetes Slack (get an invite here), as well as the #python channel on the Google Cloud Slack (get an invite here).
