Deploying Django, Postgres, and Redis Containers To Kubernetes (Part 2)

Bill Prin
Google Cloud - Community
10 min read · Mar 31, 2016

This is the second part of my series on deploying a Django app to Kubernetes. Click here to read the first part, where I walked through containerizing a Django app and running it on Kubernetes with just the in-memory cache and SQLite database. Part 2 assumes you have completed the Part 1 steps and that your Django app is available at an external IP, just without a proper database or cache hooked up.

By the end of this tutorial, you will have the Django app running with a PostgreSQL database (its password protected by a Kubernetes Secret), a Redis cache, and your static files served by a CDN.

As before, you can find all the code, README with full instructions, and issue tracker for this tutorial on Github. If you run into any problems following along, you can file an issue on the Github Issue Tracker and mention waprin@, or mention me on Twitter.

If you’re interested in other ways and tutorials to run Django on Google Cloud, I recently released a new landing page for Django on Google Cloud, which includes new quickstart guides for Django on Google App Engine standard environment, Google App Engine flexible environment (formerly Managed VMs), Google Compute Engine, and Google Container Engine (Kubernetes) using Google CloudSQL (fully managed MySQL). Check it out! And again, please bug me on Github or Twitter if you run into any issues.

Neither Redis nor Postgres will be highly-available setups — I’ll work on fixing that in the next post.

Step 0: Some Debugging Tips

There are a lot of steps to walk through in this tutorial, and I usually don’t get them 100% right myself. Here are some useful commands to remember if you run into problems:

$ kubectl logs <pod name>

kubectl logs will show the standard output/error of the container, which is usually the first place I look for problems. If you need to dig deeper, you can run commands inside the container, like `cat`-ing a file:

$ kubectl exec <frontend-pod> -- cat /etc/secrets/djangouserpw

If you want to see all of the Kubernetes meta-information about a resource, try `kubectl describe`:

$ kubectl describe pod <pod_name>
$ kubectl describe service <service_name>
$ kubectl describe rc <rc_name>

Often, when I run into problems, the easiest thing to do is delete and recreate the resource:

$ kubectl delete rc frontend
$ kubectl create -f kubernetes_configs/frontend.yaml

Step 1: Deploying Redis

Redis is a popular in-memory key-value store, typically used for caching, although it could also be used as a primary database or for queueing. In this example we’ll focus on just using it as a cache, so we won’t need any disk storage. For our use case, it tracks the number of visitors to our page so far. Let’s quickly peek at our Django configuration in mysite/settings.py:

CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': [
            '%s:%s' % (os.getenv('REDIS_MASTER_SERVICE_HOST', '127.0.0.1'),
                       os.getenv('REDIS_MASTER_SERVICE_PORT', 6379)),
            '%s:%s' % (os.getenv('REDIS_SLAVE_SERVICE_HOST', '127.0.0.1'),
                       os.getenv('REDIS_SLAVE_SERVICE_PORT', 6379))
        ],
        'OPTIONS': {
            'PARSER_CLASS': 'redis.connection.HiredisParser',
            'PICKLE_VERSION': 2,
            'MASTER_CACHE': '%s:%s' % (
                os.getenv('REDIS_MASTER_SERVICE_HOST', '127.0.0.1'),
                os.getenv('REDIS_MASTER_SERVICE_PORT', 6379))
        },
    },
}

As you can see, we’re using the django-redis-cache and hiredis Python libraries to get Django talking to Redis. We default the hosts to localhost and the ports to the default Redis port (6379), but we can set those environment variables to configure it differently.

Within the Kubernetes cluster, REDIS_MASTER_SERVICE_HOST will be automatically populated with the virtual IP of any service named ‘redis-master’. That means as long as we create ‘redis-master’ and ‘redis-slave’ Services and have them configured to listen on 6379, all of these environment variables will be automatically populated.

Note that if you only have a Redis master, pointing REDIS_SLAVE_SERVICE_HOST at the same host as the master will work fine.
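Once these Services exist, a quick way to sanity-check the variables is to print the environment from inside any pod that was started after the Services were created, for example:

$ kubectl exec <frontend-pod> -- env | grep REDIS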

You can create the Redis cluster in Kubernetes in one command:

$ kubectl create -f kubernetes_configs/redis_cluster.yaml

That’s it! Ok, but let’s add some more context about what’s going on. Once again, it’s instructive to run containers locally. Redis has an official image on DockerHub, so we can just run that locally.

$ brew install redis # install redis cli locally on OS X
$ eval $(docker-machine env dev) # initialize my Docker client to my ‘dev’ machine
$ docker run -p 6379:6379 redis # Pull and run the official Redis docker image
$ redis-cli -h $(docker-machine ip dev)

If you’re working on a Linux workstation and running Docker directly, you can then access your local Redis through localhost:6379, but if you’re using Docker-Machine, you’ll have to first get your virtual machine’s IP. You’ll also have to install the Redis CLI. On OS X I use Homebrew to accomplish this.

You can use that local image of Redis to play around or run your app locally.
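Back on the Django side, the visitor counter only needs the standard cache API. Here is a rough sketch of the kind of view the guestbook runs; it is illustrative, not the exact code from the sample app, and the cache key name is made up:

# Illustrative sketch of a visitor-counter view backed by the Redis cache.
# The key 'visitor_count' is an example name, not from the sample app.
from django.core.cache import cache
from django.http import HttpResponse


def index(request):
    # Read the current count (defaulting to 0), bump it, and store it back.
    visits = cache.get('visitor_count', 0) + 1
    cache.set('visitor_count', visits)
    return HttpResponse('You are visitor number %d.' % visits)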

Now let’s go back to look at what exactly we did in our Kubernetes Redis configuration. We’re creating two Services, one for the read-write master and one for the read-only slaves. We set the master replication controller to create only one replica:

replicas: 1

The slave replication controller, on the other hand, will start with two replicas and can be scaled to more should we desire:

$ kubectl scale rc redis-slave --replicas=5

One last thing you might note is that the image field for the slave is set to gcr.io/google_samples/gb-redisslave:v1 rather than just redis. The gcr image is a public image from the PHP guestbook example that extends the base image and starts Redis in a read-only slave mode that connects to the master. You can take a look at the image yourself.
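For context, the master half of kubernetes_configs/redis_cluster.yaml looks roughly like the sketch below; the slave half has the same shape with two replicas and the gb-redisslave image. This is a trimmed sketch, not the exact file, and label names may differ:

# Rough sketch of the redis-master Service and Replication Controller.
# Labels and other metadata may differ from the actual redis_cluster.yaml.
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: redis-master
        image: redis
        ports:
        - containerPort: 6379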

Step 2: Deploying PostgreSQL

Since the Django settings are configured to use both the Redis service and the PostgreSQL service, we should set up Postgres before we rebuild our application without the NODB flag. Postgres is going to need a persistent disk. These instructions use a Google Persistent Disk, but the Kubernetes docs explain how you would change it to an Amazon EBS volume.

Following the Container Engine instructions in the README.md, you can create a 500GB disk like so:

$ gcloud compute disks create pg-data --size 500GB

Next, we’ll need to create the database. First, we’re going to create a Kubernetes Secret to hold our database password. Creating a Secret lets us store things like passwords or SSL certs in the cluster without baking them into the image itself or keeping them in unsafe storage. You might wonder why we are using a Secret for Postgres but not for Redis. We don’t plan to expose either one as an external Service, so strictly speaking we don’t need secrets for either, but in both cases it’s still probably safer to use them to protect your data. Use your best judgement; in this case I thought it would be helpful to demonstrate how to use Kubernetes Secrets to store passwords.

The secret value is stored in base64, so you have to encode the password first. Note: base64 is an encoding, not encryption, and a base64-encoded password is as insecure as plaintext. On OS X or Linux, you can encode it on the command line:

$ echo mysecretpassword | base64
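You can double-check that the value decodes back to what you expect with base64 --decode (some OS X versions spell the flag -D). Note that echo also encodes a trailing newline; use echo -n if you want to avoid that.

$ echo bXlzZWNyZXRwYXNzd29yZAo= | base64 --decode
mysecretpassword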

Open up `kubernetes_configs/db_password.yaml` and replace <your-base64-encoded-pw-here> with the output of the above base64 command.

apiVersion: v1
kind: Secret
metadata:
  name: db-passwords
data:
  djangouserpw: bXlzZWNyZXRwYXNzd29yZAo=

Then, add the secret to the cluster:

$ kubectl create -f kubernetes_configs/db_password.yaml

Now if you run ‘kubectl get secrets’ you can see the secret you created, db-passwords.

Once the Secret is created, you can create the Postgres pod. Take a look at ‘kubernetes_configs/postgres.yaml’ to see the config:

spec:
  containers:
  - name: postgres
    image: gcr.io/$GCLOUD_PROJECT/postgres-pw
    # disable this in production
    imagePullPolicy: Always
    ports:
    - containerPort: 5432
    volumeMounts:
    - name: postgresdata
      mountPath: /usr/local/var/postgres
    - name: secrets
      mountPath: /etc/secrets
      readOnly: true
  # PostgreSQL Data
  # Replace this with the persistent disk of your choice
  # TODO: replace with Persistent Volume
  volumes:
  - name: postgresdata
    gcePersistentDisk:
      # your disk name here
      pdName: pg-data
      fsType: ext4
  - name: secrets
    secret:
      secretName: db-passwords
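As mentioned above, if you were running on AWS instead of Google Cloud, only the volume definition would need to change. A rough sketch, where the volume ID is a placeholder, not a real disk:

# Sketch: swapping the gcePersistentDisk volume for an AWS EBS volume.
# vol-0123456789abcdef0 is a placeholder volume ID.
volumes:
- name: postgresdata
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0
    fsType: ext4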

As usual, we’re creating a Service for our database so that other Pods can talk to it. We’re also creating a Replication Controller for our Postgres container, but as in the Redis master example, limiting it to one Pod. Only one Pod can read and write to a Persistent Disk at a time, so it doesn’t make sense to have more Pods until we do a highly-available setup. The advantage of creating a Replication Controller with one replica instead of creating a Pod directly is that the Replication Controller will restart our Pod if it fails, since its mission in life is to ensure one Pod with the label ‘name=postgres’ exists at all times.

Before we can create the Postgres Pod, we have to build its image. We can’t just deploy the vanilla Postgres image because we need to add a few lines to read the password from the secret file and use it in our database configuration. Look at postgres_image/Dockerfile to see how that’s done. The secret gets mounted at /etc/secrets/djangouserpw by the Pod config, and the Dockerfile then reads it into an environment variable. Unfortunately, secrets are currently only available as files rather than environment variables, so you have to add these lines yourself:

ENTRYPOINT []
CMD export POSTGRES_DB=guestbook; export POSTGRES_USER=django_user; export POSTGRES_PASSWORD=$(cat /etc/secrets/djangouserpw); ./docker-entrypoint.sh postgres
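For reference, the whole Dockerfile is only a few lines. Roughly, assuming it extends the official postgres image (the exact base tag may differ):

# Sketch of postgres_image/Dockerfile: extend the official image and read the
# Django user's password out of the mounted secret file before starting Postgres.
FROM postgres

ENTRYPOINT []
CMD export POSTGRES_DB=guestbook; \
    export POSTGRES_USER=django_user; \
    export POSTGRES_PASSWORD=$(cat /etc/secrets/djangouserpw); \
    ./docker-entrypoint.sh postgres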

To build and push the image, change into the ‘postgres_image’ directory and run the following:

$ cd postgres_image
$ gcloud config set project <your-project-id>
$ make build
$ make push

The Makefile just aliases the Docker commands to build and push the image to Google Container Registry. Alternatively, you could build and push an image to DockerHub.

Once the image has been uploaded, edit `kubernetes_configs/postgres.yaml` and replace ‘waprin/postgres_image’ with the name of the Docker image you just pushed, which should be ‘gcr.io/your-project-id/postgres-pw’. Or, of course, you can just use my image since it should be the same thing. When you’re done, create the Postgres Pod:

$ kubectl create -f kubernetes_configs/postgres.yaml

Step 3: Redeploy the Django App and Run Migrations

Now if we run `kubectl get services`, we should see our Postgres and Redis services (both redis-slave and redis-master). We can run `kubectl get rc` to see each of the replication controllers, and `kubectl get pods` to see each of the containers (replicas) the RC creates.

Since we had previously built our Django image with the `NODB` environment variable set, now we want to disable it. Change into the `guestbook` directory again and make sure the `ENV NODB 1` line is commented out:

# Comment it out and rebuild this image once you have Postgres and Redis services in your cluster
# ENV NODB 1

Once that’s done, rebuild the container and push it again (remember these make commands are aliases for docker build/docker push):

$ make build
$ make push

The next thing we need to do is edit the Replication Controller config. Since we need a database password, we need to mount the Kubernetes Secret onto the frontend as well. It’s already in `kubernetes_configs/frontend.yaml`, just commented out, so uncomment it:

# uncomment the following lines with NODB set to 0, so you can mount the DB secrets
volumeMounts:
- name: secrets
  mountPath: /etc/secrets
  readOnly: true
volumes:
- name: secrets
  secret:
    secretName: db-passwords
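With the secret mounted at /etc/secrets, the Django settings can read the password from that file once NODB is off. A plausible sketch of that part of mysite/settings.py follows; it is not necessarily the exact code in the repo, and the local-development fallback is an assumption:

# Sketch: read the database password from the mounted Kubernetes secret.
# The file path matches the volumeMounts above; the fallback is for local dev.
import os

PASSWORD_FILE = '/etc/secrets/djangouserpw'
if os.path.exists(PASSWORD_FILE):
    with open(PASSWORD_FILE) as f:
        DB_PASSWORD = f.read().strip()
else:
    DB_PASSWORD = os.getenv('DATABASE_PASSWORD', 'mysecretpassword')

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'guestbook',
        'USER': 'django_user',
        'PASSWORD': DB_PASSWORD,
        # The 'postgres' Service gets these env vars injected, like Redis above.
        'HOST': os.getenv('POSTGRES_SERVICE_HOST', '127.0.0.1'),
        'PORT': os.getenv('POSTGRES_SERVICE_PORT', 5432),
    }
}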

Since you changed the definition of the replication controller, at this point, the easiest thing to do is just delete the rc and recreate it:

$ kubectl delete rc frontend
$ kubectl create -f kubernetes_configs/frontend.yaml

If you wanted to avoid downtime, it would probably be better to `kubectl patch` the resource instead.

If you weren’t updating the Replication Controller definition and just wanted to change the image, the safest way is a rolling-update, which will spin up one new container at a time and only keep going if the new containers successfully start:

$ kubectl rolling-update frontend \
--image=gcr.io/${GCLOUD_PROJECT}/guestbook

If, like me, you like to live dangerously and don’t mind a little downtime, you can instead edit the frontend.yaml file to point to your new image, scale the replication controller down to 0 (killing all your pods), and then scale it back up:

$ kubectl scale rc frontend --replicas=0 # kill your pods
$ kubectl scale rc frontend --replicas=3 # new image

In any case, we needed to change more than just the image in order to mount the secrets, so deleting and recreating (or patching) the entire replication controller is necessary here.

The final step is to run our database migrations. One nice thing about Kubernetes is we can use the ‘kubectl exec’ command to execute any commands available within our containers. So to run migrations, we will just pick an arbitrary frontend pod and run them from there.

First, get a list of your pods so you can pick one of the front-end pods:

$ kubectl get pods

Then, pick one of the frontend pods and use it to run the migrations:

$ kubectl exec <frontend-pod-name> -- python /app/manage.py migrate
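If you’d rather not copy the pod name by hand, you can grab one frontend pod with a label selector, assuming the frontend pods are labeled name=frontend (analogous to the name=postgres label mentioned above):

$ FRONTEND_POD_NAME=$(kubectl get pods -l name=frontend \
    -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec ${FRONTEND_POD_NAME} -- python /app/manage.py migrate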

Or you can just use the Makefile alias I have set up to do this automatically:

$ make migrations

Once that’s done, we can again do ‘kubectl get services’ to get the external IP of the frontend. If everything went well, the app should now be serving and correctly maintaining state.

$ kubectl get services
NAME           CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
frontend       10.67.245.107   104.154.27.239   80/TCP     8h
kubernetes     10.67.240.1     <none>           443/TCP    9h
postgres       10.67.255.67    <none>           5432/TCP   9h
redis-master   10.67.244.152   <none>           6379/TCP   8h
redis-slave    10.67.240.128   <none>           6379/TCP   8h

Step 4: Serve Static Content

While the app should now correctly persist data, we still have the DEBUG flag on in the settings. If we turn it off, we’ll notice all our CSS stylesheets and JavaScript go away. That’s because the app is configured to use Django’s static file handler, which is appropriate for debugging but not for serving files in a production setting.

There are multiple options for serving static content, and any CDN (Content Delivery Network) will suffice. While I’m obviously a bit biased, Google Cloud Storage is a nice choice because it’s both a globally available file store and a CDN.

To use GCS as your static file handler option, first create a publicly accessible GCS bucket:

$ gsutil mb gs://<your-project-id>
$ gsutil defacl set public-read gs://<your-project-id>

and, of course, I have a Makefile alias that does the same thing:

$ make bucket

Next, gather your Django static content into the local folder specified by settings.STATIC_ROOT:

$ python manage.py collectstatic

Finally, upload your content to GCS (replace <your-gcs-bucket>):

$ gsutil rsync -R static/ gs://<your-gcs-bucket>/static

Now you can change settings.STATIC_URL to point at this bucket, and when settings.DEBUG is set to False, static files will be served from this URL instead:

STATIC_URL = 'https://storage.googleapis.com/your-project/static/'
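If you want to keep local development convenient, you can make this conditional on DEBUG. A small sketch, where the bucket URL is a placeholder for your own:

# Sketch: serve static files locally in DEBUG, from the GCS bucket otherwise.
if DEBUG:
    STATIC_URL = '/static/'
else:
    STATIC_URL = 'https://storage.googleapis.com/your-project/static/'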

At this point, you should have a Django app that runs on a Postgres database and Redis cache, all running inside Kubernetes.

There’s still some work to be done on high availability of our services. Until next time!
