A Beginner’s Guide To Deploying Production Web Apps to Docker

Justin McNally
3 min read · Dec 24, 2015

At kohactive I have been moving more and more of our apps to Docker. I thought it was worthwhile to share what I’ve learned from the perspective of a naive developer working with Docker for the first few times.

  1. We are only using Docker for our staging and production environments.
    While I know the craze is to use it everywhere, some of the complexities of keeping volumes in sync (via mounting) with containers are a bit of a hurdle, and we haven’t implemented it much with our team. I want to push this in the future, but right now the developers on our team are used to having a native development environment on the metal and we haven’t explored moving over. Our entire team also hasn’t adopted Docker or learned it in depth, so for now we are slowly phasing it in as a deployment tool rather than a development tool.
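    For context, the development setup we have been holding off on would bind-mount the local checkout into a container so host edits show up inside it immediately; a minimal sketch of that idea (image name, port, and paths are placeholders) looks like:
      # Hypothetical dev setup we have not adopted: bind-mount the local checkout
      # into the container so edits on the host appear inside it right away.
      docker run -d \
        -v "$(pwd)":/app \
        -p 3000:3000 \
        kohactive/myapp:dev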
  2. Configuring Docker machines in the cloud.
    All of our Docker machines currently run on Ubuntu droplets at DigitalOcean. Getting Docker running there is pretty simple: spin up an Ubuntu droplet, install Docker via https://docs.docker.com/engine/installation/ubuntulinux/, and start launching containers.
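    As a rough sketch, and assuming a fresh Ubuntu droplet, the install boils down to Docker’s convenience script plus a sanity check (the linked docs cover the full apt-based setup if you prefer pinned packages):
      # Run as root, or prefix with sudo.
      curl -fsSL https://get.docker.com | sh   # Docker's convenience install script
      docker run hello-world                   # confirms the daemon is up and can run containers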
  3. Use a Registry
    Deployment can be done in multiple ways. For the first few weeks we would build an image on OS X with Docker Machine, export it, tar/gzip it, scp it to the server, and import it. This was slow and inefficient because we didn’t take advantage of cached layers. Cached layers are great because if things don’t change, you don’t have to rebuild or push them. This really speeds up transfers because base layers such as the Ubuntu / Passenger core are only sent once. We’ve both set up our own registry (using Docker) and used the hub.docker.com service. You get one free private repository (image), and you can get 4 more for $7/mo. I highly recommend starting with Docker Hub and, if you feel it’s too expensive as you grow, migrating to your own registry backed by S3. Details can be found at https://docs.docker.com/registry/
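    In practice the registry workflow is just building and tagging on your machine, pushing, and pulling on the server; a minimal sketch (the repository name is a placeholder):
      # On the build machine: tag the image with your Docker Hub repository and push it.
      docker build -t kohactive/myapp:latest .
      docker login                       # authenticate against hub.docker.com
      docker push kohactive/myapp:latest

      # On the server: only the layers that changed are downloaded.
      docker pull kohactive/myapp:latest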
  4. Take the time to build a toolset for deploying your environment.
    Starting, and by extension restarting, containers can quickly become complex, error-prone, and time consuming. Unlike a typical code deployment, you must pull your new image, stop the existing containers, and start replacement containers in their place. To minimize downtime, some form of simple automation quickly becomes a must. Here is an example container upgrade script: https://gist.github.com/j-mcnally/1ce0a1455cb940e4c5e3. It pulls the new image, stops the old container, and creates a new container from the new image.
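    In spirit, an upgrade script like that boils down to the following sketch (image name, container name, and ports are placeholders, not the exact gist):
      #!/bin/bash
      # Sketch of an upgrade script: pull the new image, then replace the running container.
      IMAGE=kohactive/myapp:latest
      NAME=myapp

      docker pull "$IMAGE"          # fetch the new layers from the registry
      docker stop "$NAME" || true   # stop the old container if it is running
      docker rm "$NAME" || true     # remove it so the name can be reused
      docker run -d --name "$NAME" -p 80:3000 "$IMAGE"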
    From the development side we can create a deployment script to make the process runnable from your local machine: https://gist.github.com/j-mcnally/02b6a34bb9e8847a4c85. This script builds the image, pushes it to the registry, and then runs the upgrade script over ssh.
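    A local deployment script along those lines might look like this sketch (the SSH target and remote script path are placeholders):
      #!/bin/bash
      # Sketch of a deploy script: build locally, push to the registry, upgrade remotely.
      IMAGE=kohactive/myapp:latest
      HOST=deploy@app.example.com

      docker build -t "$IMAGE" .
      docker push "$IMAGE"
      ssh "$HOST" 'bash /opt/deploy/upgrade.sh'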
  5. Use Links
    Links are local connections between containers. Run separate containers for Postgres, Redis, and other services, then link them to your application. This simplifies your architecture, and most of these containers require very little configuration for basic use cases. All link information is made available to your application as environment variables, which makes wiring them into your app code fairly turnkey. https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
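    For example, linking a stock Postgres container into an app container injects the connection details as environment variables (the app image is a placeholder; the variable names follow Docker’s link conventions):
      # Start the database container, then link it into the app container as "db".
      docker run -d --name db postgres:9.4
      docker run -d --name myapp --link db:db kohactive/myapp:latest

      # Inside the app container, Docker sets variables such as:
      #   DB_PORT_5432_TCP_ADDR   (the database container's IP address)
      #   DB_PORT_5432_TCP_PORT   (the exposed port, 5432)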

Those are the best lessons and workflows I’ve learned so far, and they should cover the basics of getting started with Docker in production. I plan to write a follow-up with more advanced workflows, including setting up client SSL certificates for authenticating with Docker and using volumes to preserve data across container upgrades.

Follow me on Twitter: @j_mcnally
