Deploy a ready-to-use, consistent LEMP stack anywhere with Docker

If your servers already live in the cloud, if you label yourself as “devops”, or if you have an interest in server deployment automation, you have probably heard of Docker. It works on top of Linux Containers (LXC), providing a nice layer of abstraction for us non-LXC-gurus. In Docker-land, you are not running applications: you are running containers, each running a Linux distribution, each running an application, with minimal performance overhead. Think of it as a chroot on steroids.

How do you use it, though? And most importantly, how can it help you ease the pain of deploying new servers?

Examples of time-consuming things you do when deploying a new server:
  • Hesitate over the best distro for the job (CentOS for cPanel, or Ubuntu for the fresh packages?)
  • Add some PPAs/repos to get the latest nginx/nodejs/etc.
  • rsync your finely tuned conf files
  • Maybe break a few dependencies on the way
Examples of time-saving solutions when deploying with Docker:
  • Run the cPanel container with a CentOS image and the nodejs one with Ubuntu
  • Integrate your config files in the container build process, or have Docker fetch them from git/ftp/ssh/etc.
  • Keep your base system light and enjoy the added resilience of running your apps in closed, disposable containers

This tutorial covers the usage and deployment of Docker containers but does not detail the creation of Dockerfiles: pre-made ones will be used to set up nginx, php-fpm and mysql. I believe that, with a better understanding and a clearer view of Docker’s usage, it will then be easier for you to build custom images suiting your needs.

Step #1 - install Docker

Get yourself a fresh server/droplet/VM and install Docker. LXC is a rather young technology and Docker even more so. As such, you really should follow the official installation guides to get the latest version rather than doing a simple apt-get.

Step #2 - get yourself some Dockerfiles

You now have a working Docker installation; it’s time to build the nginx/php and mysql images, which will then be used to spawn containers. You can spawn as many containers as you want from an image, and they will all be in the same state upon launch, no matter the metal or the distro running on the host. This alone is already more scalable than your old way of doing things.

So, let’s get down to business: create a directory for us to work in…

$ mkdir /home/dockerfiles && cd $_

… and fetch the Dockerfiles:

$ git clone <url of the dockerfile-mysql repo> && git clone <url of the dockerfile-nginx-php repo>

The dockerfiles are nothing more than recipes for building images and can easily be tweaked with minimal knowledge and a bit of common sense. The images, however, are immutable once built.
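
To give you an idea of what such a recipe looks like, here is an illustrative sketch (not the actual content of the repositories above): a base image, a few build commands, and some runtime declarations that the `-v` and `-p` flags will hook into later.

```dockerfile
# Illustrative sketch only -- the real Dockerfiles live in the cloned repos.
FROM ubuntu:14.04

# Install the service and bake our tuned config into the image
RUN apt-get update && apt-get install -y nginx
COPY conf/nginx.conf /etc/nginx/nginx.conf

# Declare the mount point and the port we will bind at run time
VOLUME /etc/nginx/sites-enabled
EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
```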

If needed, this would be the right moment to make some changes to the config files (nginx.conf, my.cnf…) provided by the repositories (located in the <repo>/conf folders).

Step #3 - build the images

Building a Docker image is pretty straightforward; the only thing you should worry about right now is tagging your images. Tagging allows you to refer to an image by its name rather than its UID (e.g. when deploying a container).

## The `-t` argument allows us to tag the images
$ docker build -t bulgroz/mysql ./dockerfile-mysql && docker build -t bulgroz/nginxphp ./dockerfile-nginx-php

After the build finishes, you can list your now-available images by running

$ docker images

You will see the two images you just built and a third one labeled “ubuntu”. This is because the distro running in your future containers is Ubuntu: Docker pulled a base ubuntu image and used it to build bulgroz/nginxphp and bulgroz/mysql, as they are both based upon Ubuntu 14.04.

Step #4 - run the containers

This is the tricky part: when running Docker containers, you have to consider the multiple layers of your application stack and make them communicate, both with each other and with the outside world. Basically, you should be ready for software-inception.

First, we need to ask ourselves: what do we want to accomplish with these containers?

  • Allow clients to reach our application from a public ip by allowing the container to handle requests coming from the host interface.
  • Allow applications to reach our mysql server while denying arbitrary connections from the internet.
  • Facilitate backups by persisting data on the host OS instead of leaving it in the containers.

Luckily for us, Docker provides the means to do all of this via volumes, linking and network bindings.

Run the mysql container and persist its data

First, let’s create a folder for our data:

$ mkdir /home/mysql-data

And run the container:

$ docker run -d -v /home/mysql-data/:/var/lib/mysql --name mysql bulgroz/mysql && docker logs -f mysql

Now, what did we do here ?

  • -d : we run the container in the background, like a daemon.
  • -v /home/mysql-data/:/var/lib/mysql : we tell Docker to mount our newly created mysql-data folder on the container’s /var/lib/mysql. This means the database binary files (e.g. ibdata1) will be written to the host’s folder instead of the container’s; this is possible because the Dockerfile exposes /var/lib/mysql as a volume. We have effectively persisted mysql’s data on the host OS instead of in the container.
  • --name mysql : pretty self-explanatory, we give the running container a name; it will come in handy when we have to make nginx communicate with it.
  • docker logs -f mysql : print the container’s logs once it is up. This is because the container generates a random admin password and displays it in its logs. See the container’s readme for more info about this step.

We ran our first container, but it is not yet very useful since we cannot access the server from the outside world (and guess what: for now, there is no way you can even get a shell in it). The next steps will fix that.

Run the nginx/php container

Again, we will create a few folders to keep some data on the host:

$ mkdir -p /home/www && \
mkdir -p /home/nginx-sites/logs && \
mkdir -p /home/nginx-sites/sites-available && \
mkdir -p /home/nginx-sites/sites-enabled

Then, run the container with:

$ docker run -d -p <your public ip>:80:80 \
-v /home/nginx-sites/sites-enabled/:/etc/nginx/sites-enabled \
-v /home/nginx-sites/sites-available/:/etc/nginx/sites-available \
-v /home/nginx-sites/logs/:/var/log/nginx \
-v /home/www/:/home/www \
--name nginx \
--link mysql:mysql bulgroz/nginxphp

Yeah, this is a bit cryptic; let me detail the command options:

  • -p <your public ip>:80:80 : this forwards the traffic coming to your server’s public ip into the container (on its port 80). Obviously, nginx will have to respond to requests from the outside world; we just have to tell Docker to open the appropriate “door”.
  • -v : again, we mount some volumes from the host on the container. /sites-available/ and /sites-enabled/ will contain your nginx rules, /logs/ is there for debugging, and /www/ will host your websites.
  • --link mysql:mysql : probably the most important part of the command; with this, we let our nginx container communicate with the mysql one.
A word about linking: Docker containers run independently from the host system, with their own networking stack. That is what linking is all about: networking containers together on the docker0 interface without impacting the host’s stack.
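
Concretely, --link mysql:mysql makes the mysql container’s address available inside the nginx container through environment variables (and, in recent Docker versions, a mysql entry in /etc/hosts). The names follow Docker’s <alias>_PORT_<port>_<proto> convention; the addresses below are example values, not what you will necessarily get:

```
# Inside the nginx container (illustrative values):
MYSQL_PORT=tcp://172.17.0.2:3306
MYSQL_PORT_3306_TCP=tcp://172.17.0.2:3306
MYSQL_PORT_3306_TCP_ADDR=172.17.0.2
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP_PROTO=tcp
```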

Now, if you type …

$ docker inspect mysql

… to display information about our mysql container, you will be able to see (among other things) the container’s ip inside the Docker network. It will look like this:

"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "",
"IPAddress": "XX.XX.XX.XX",

Remember how we could not access our mysql server from the outside world? Well, the nginx container can access it through docker0.

Wrapping it up, installing phpmyadmin

Yup, we’re ready; let’s install phpmyadmin to see how our cool new bricks fit together. First, get the latest PMA and unpack it into the /home/www folder we mounted into the container.

Now, for the interesting part: we will modify phpmyadmin’s configuration file to suit our installation. For this, only one piece of information is needed: the mysql server’s ip address. And you guessed it: since the connection will happen from inside the nginx container, this ip will in fact be the mysql container’s internal ip we obtained by running docker inspect.

Get your mysql container’s internal ip and modify the line

$cfg['Servers'][$i]['host'] = 'localhost'; // replace 'localhost' with the ip

then save the resulting file as config.inc.php (phpmyadmin ships a config.sample.inc.php template for exactly this purpose).
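
This edit can also be scripted. A minimal sketch, with two assumptions: the phpmyadmin filenames (config.sample.inc.php / config.inc.php) and the example ip 172.17.0.42, which you should replace with the address reported by docker inspect:

```shell
# Minimal sketch: rewrite the phpmyadmin 'host' line to point at the
# mysql container. 172.17.0.42 is an example ip, not a real one.
mkdir -p /tmp/pma-demo && cd /tmp/pma-demo

# Stand-in for the phpmyadmin sample config (assumed filename)
cat > config.sample.inc.php <<'EOF'
<?php
$cfg['Servers'][$i]['host'] = 'localhost';
EOF

sed "s/'localhost'/'172.17.0.42'/" config.sample.inc.php > config.inc.php
grep "host" config.inc.php
# → $cfg['Servers'][$i]['host'] = '172.17.0.42';
```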

The next thing we need to do is set up an nginx rule to serve our phpmyadmin install. Create a new file named phpmyadmin in your /home/nginx-sites/sites-available folder and paste this (very basic) rule inside it:

# Don't forget to change the server_name directive to your
# hostname or public ip in case you don't have a domain name handy
server {
    listen 80;
    server_name <your server name>;
    root /home/www/phpmyadmin;
    index index.php index.html index.htm;

    if (!-e $request_filename) {
        rewrite ^/(.+)$ /index.php?url=$1 last;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

Now, if you are familiar with nginx, you know that we’ll have to symlink this rule into the /sites-enabled/ folder. However, don’t forget that this link will be followed from inside the container: since we mounted volumes on both sites-enabled and sites-available, nginx would have a hard time following the symlink if its target were /home/nginx-sites/sites-available instead of /etc/nginx/sites-available.

Let’s enable our new rule:

$ cd /home/nginx-sites/sites-enabled
$ ln -s /etc/nginx/sites-available/phpmyadmin phpmyadmin

This link is obviously broken on our host system: it points nowhere, since nginx is not installed on the host but in a container. It does make sense from within the container, however, since those folders are mounted as volumes.
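
This works because a symlink stores its target as a plain string and is resolved by whoever reads it, so the container-side path is the right one to record. A quick sketch you can run on any machine (all paths under /tmp are made up for the demonstration):

```shell
# Simulate the container-side layout, then link against it.
mkdir -p /tmp/symlink-demo/etc/nginx/sites-available
echo "server {}" > /tmp/symlink-demo/etc/nginx/sites-available/phpmyadmin

mkdir -p /tmp/symlink-demo/sites-enabled
ln -s /tmp/symlink-demo/etc/nginx/sites-available/phpmyadmin \
      /tmp/symlink-demo/sites-enabled/phpmyadmin

# The link records the literal target path...
readlink /tmp/symlink-demo/sites-enabled/phpmyadmin
# ...and resolution only succeeds where that path actually exists,
# which for our nginx rule is inside the container.
cat /tmp/symlink-demo/sites-enabled/phpmyadmin
```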

That’s it! Restart nginx:

$ docker restart nginx

Connect to your phpmyadmin using the credentials provided by

$ docker logs mysql