Laravel-Docker Dev Environment

Weebly Engineering Blog · Jan 26, 2017

A brief description of how we used Docker for a better development environment with Laravel.

Also published on http://engineering.weebly.com/

Overview

This post is an overview of how we use Docker as our development environment in combination with Laravel at Weebly. My goal was to write this in a way that people with little to no Docker experience could easily follow along, while people with Docker experience could get some insight into how we used Docker with Laravel. If you have lots of Docker experience, much of this post may reiterate concepts you are already very familiar with.

Preface

We've been using the popular PHP framework Laravel for a recent project at work. Laravel's "out of the box" approach to development is VMware/Vagrant (Homestead), which works perfectly fine, but we were curious about a containerized approach with Docker for a few different reasons:

1. There is a lot of buzz around containers and containerizing applications. We were curious about all the fuss, so we figured we should see what it was all about.

2. Our automation team has previously deployed apps using Docker, so we thought this might be an easy way to quickly spin up staging/integration environments for new services we build at Weebly. Our old integration approach is heavily tailored to our monolith setup, and no standard was in place for new services going forward.

3. The idea of every change to our dev environment (every change to a container) being tracked in git felt like a potentially much cleaner solution than what we were doing with Vagrant. Not that you can't do something similar with Vagrant, but the Docker container approach lends itself to this naturally.

For more insights into the differences between Vagrant and Docker, see this Quora post.

Getting Started

There is a neat project online called LaraDock. It was a cool way to get the app running with Docker in a matter of minutes, and also a great reference for what the configuration of all your different containers might look like. Unfortunately, it didn't really help us understand what was happening under the hood, so we ended up starting from scratch while borrowing heavily from the skeleton LaraDock provides. Docker itself has a pretty great tutorial, which was also an excellent resource (you can skip around to the relevant sections without doing the whole tutorial).

Once we made the decision to start from scratch, the first step was figuring out which containers to break our application down into. Initially this was:

  • nginx: our web server
  • HHVM: to run PHP
  • Postgres: our database
  • workspace: to run things like phpunit, artisan commands, etc.

Why separate all of these things? Why not have everything in one place? The beauty of the containerized approach is the modularity you get. With our setup broken down into these separate containers, we could easily swap in PHP-FPM for HHVM, or MySQL for Postgres, down the road without touching any of the other containers. In fact, we actually made that HHVM-to-PHP-FPM swap with the release of PHP 7 (a sketch of what it involved follows the list below). Also, to mimic our production environment even more closely, we had added the following containers by the end as well:

  • memcached: our caching layer
  • RabbitMQ: the queueing service we use in production
  • artisan-queue: a container acting as a consumer node for our queue
  • web-socket: a container running node.js and socket.io for some web socket functionality baked into our application
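
To make that modularity concrete, here is a rough sketch (anticipating the docker-compose file shown later; the hhvm service definition here is illustrative, not our exact config) of what the HHVM-to-PHP-FPM swap amounted to: replace one service definition and repoint nginx's upstream build argument, leaving every other container untouched.

## before: an hhvm service runs the PHP code
hhvm:
  build:
    context: ./hhvm

## after: a php-fpm service takes its place
php-fpm:
  build:
    context: ./php-fpm

## and nginx just points at the new upstream
nginx:
  build:
    context: ./nginx
    args:
      - PHP_UPSTREAM=php-fpm   ## was PHP_UPSTREAM=hhvm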

A Closer Look At Containers

So we've decided on our various containers; now, how do we actually build them? One of the greatest things about Docker is the thousands and thousands of images available on the Docker Registry. For many containers, your exact solution might already exist somewhere in the Docker universe. In our case, we often wanted to pull down an existing image and then build on top of it, which Docker lets us do.

FROM rabbitmq

# enable rabbitmq management plugin
RUN rabbitmq-plugins enable --offline rabbitmq_management
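
As a quick sanity check, you might build and run this image and then visit the management UI the plugin provides (the image and container names below are our own choosing):

docker build -t weebly/rabbitmq .
docker run -d --name rabbitmq -p 15672:15672 weebly/rabbitmq
# the management UI should now be reachable at http://localhost:15672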

In that Dockerfile we are simply pulling the RabbitMQ Docker image from the Docker Registry and adding our own command to enable the RabbitMQ management plugin, an admin tool. For many containers you might not even need a Dockerfile, since your exact solution may already be available, but often a fair amount of tinkering is needed on top of an existing image. Take a look at our PHP-FPM 7.0 Dockerfile for example:

FROM php:7.0-fpm

# Install the system libraries the PHP extensions below depend on
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        curl \
        libmemcached-dev \
        libz-dev \
        libpq-dev \
        libjpeg-dev \
        libpng12-dev \
        libfreetype6-dev \
        libssl-dev \
        libmcrypt-dev

# Install the PHP mcrypt extension
RUN docker-php-ext-install mcrypt

# Install the PHP pdo_pgsql extension
RUN docker-php-ext-install pdo_pgsql

# Install the PHP bcmath extension
RUN docker-php-ext-install bcmath

# Configure and install the PHP gd library
RUN docker-php-ext-configure gd \
        --enable-gd-native-ttf \
        --with-jpeg-dir=/usr/lib \
        --with-freetype-dir=/usr/include/freetype2 && \
    docker-php-ext-install gd

# Build the PHP memcached extension from its php7 branch and enable it
RUN curl -L -o /tmp/memcached.tar.gz "https://github.com/php-memcached-dev/php-memcached/archive/php7.tar.gz" \
    && mkdir -p memcached \
    && tar -C memcached -zxvf /tmp/memcached.tar.gz --strip 1 \
    && ( \
        cd memcached \
        && phpize \
        && ./configure \
        && make -j$(nproc) \
        && make install \
    ) \
    && rm -r memcached \
    && rm /tmp/memcached.tar.gz \
    && docker-php-ext-enable memcached

# Install and enable opcache
RUN docker-php-ext-install opcache && docker-php-ext-enable opcache

# Drop in our PHP settings and FPM pool configuration
ADD ./laravel.ini /usr/local/etc/php/conf.d
ADD ./laravel.pool.conf /usr/local/etc/php-fpm.d/

# Clean up apt lists to keep the image smaller
RUN rm -r /var/lib/apt/lists/*

# Align www-data's UID with the host user so mounted code stays writable
ARG PUID=5082
RUN usermod -u ${PUID} www-data

ADD start.sh /start.sh
RUN chmod +x /start.sh

USER www-data

WORKDIR /var/www/laravel

# Run start.sh whenever a container is created from this image
CMD ["/start.sh"]

There is a lot going on there, but if you've ever installed and gotten a PHP application running on a Linux machine, most of this is probably familiar. One thing to note: building from the Dockerfile only produces the image; you can specify a script that runs when a container is created from that image using the following instruction:

CMD ["/start.sh"]

And your start.sh script might run migrations or anything else that needs to happen when the container is spun up.
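
For illustration, a minimal start.sh for the PHP-FPM container above might look like the following (the migrate step is just an example; swap in whatever your app needs):

#!/bin/bash
# run any pending database migrations non-interactively
php artisan migrate --force
# then hand off to php-fpm so it runs as the container's main process
exec php-fpm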

Our docker-compose

After getting our various containers building individually, it was time to find a structured way to spin up all the relevant containers at once with the appropriate configuration options. One great way to do this is with a docker-compose file. docker-compose is an easy way to manage your Docker services, networks, and volumes all in one place without having to memorize long CLI commands. For example, when you spin up a container you may want to copy some log files to a location on your local file system, map Docker network ports to your local machine's ports, and more. Instead of running a terminal command that might look like:

docker run -i -t -p 22 -p 8000:80 -v /data:/data <foo/live> /bin/bash

We could have all this fun stuff defined in our docker-compose.yml. Below is an example of what our docker-compose.yml file might look like.

version: '2'

services:
  nginx:
    build:              ## define the root folder of the Dockerfile
      context: ./nginx
    ports:
      - "8000:80"       ## map container port 80 to local port 8000
      - "443:443"       ## map container port 443 to local port 443

  ## ...define more containers here

Now that we have all our Docker container configurations in one place, we can simply run docker-compose up and it will automatically spin up the containers defined in docker-compose.yml with the appropriate configurations. This is great because you now have a file you can commit to source control, so all developers share the same Docker network configuration. Ultimately, our docker-compose.yml had a lot more going on than the example above. Below is a slightly meatier version.

version: '2'

services:
  nginx:
    build:
      context: ./nginx
      args:
        - PHP_UPSTREAM=php-fpm
    volumes_from:
      - volumes_source
    volumes:
      - ./logs/nginx/:/var/log/nginx
    links:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"

  php-fpm:
    build:
      context: ./php-fpm
    volumes_from:
      - volumes_source

  postgres:
    build: ./postgres
    volumes_from:
      - volumes_data
    environment:
      POSTGRES_DB: our_groovy_db
      POSTGRES_USER: mogley
      POSTGRES_PASSWORD: not_telling

  ## …more containers

  ### Laravel Application Code Container ######################

  volumes_source:
    image: tianon/true
    volumes:
      - ../base/:/var/www/laravel

  ### Data Container ################################

  volumes_data:
    image: tianon/true
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
      - ./data/rabbitmq:/var/lib/rabbitmq/mnesia
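
With this in place, the day-to-day workflow mostly reduces to a handful of commands (the workspace service comes from our earlier container list; it is hidden behind the "…more containers" above):

docker-compose up -d                                # build and start every service in the background
docker-compose logs nginx                           # inspect a single service's logs
docker-compose exec workspace php artisan migrate   # run artisan inside the workspace container
docker-compose down                                 # tear the whole network down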

Back in this fuller compose file, you can see there are many more configuration options you can specify on each of your containers. Of particular interest are the volumes_source and volumes_data containers. These are special containers known as data volume containers, and in our case we used them to:

  1. Sync code from local machine to our docker network and share said code across containers
  2. Persist data beyond the lifespan of a container

Here is Docker’s longer/better explanation of data volume containers:

  • Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization
  • Data Volumes can be shared and reused among containers
  • Changes to a data volume are made directly
  • Changes to a data volume will not be included when you update an image
  • Data volumes persist even if the container itself is deleted
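
One quick way to see that last point in action, using the postgres service from our compose file (whose data lives under the host-mounted ./data/postgres directory):

docker-compose stop postgres
docker-compose rm -f postgres   # the container itself is gone...
docker-compose up -d postgres   # ...but the data under ./data/postgres survives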

In addition to providing structure to your Docker network, docker-compose is also a great way to separate configurations for your dev environment and whatever other environments you use with Docker (staging/integration/production). You can have separate docker-compose files, e.g. docker-compose.dev.yml and docker-compose.integration.yml, that inherit from a base docker-compose.yml. This is especially useful if your integration environment's ports differ from your local machine's, or if you want to pass certain environment variables to only one setting.
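
As a rough sketch (the contents here are illustrative, not our exact configuration), a dev override file might redefine only what differs:

## docker-compose.dev.yml
version: '2'

services:
  nginx:
    ports:
      - "8000:80"   ## dev serves on local port 8000 instead of 80

You then pass both files so the override is layered on top of the base:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d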

Final Notes

Docker has definitely had some pain points, and it's still a growing community; there is lots of new terminology to learn when entering the Docker/container world. Overall, though, the development team that adopted this approach has been extremely happy with the decision. Some reasons (a few reiterated from above):

  • Devs feel like they have a much better understanding of their infrastructure using Docker. We’re forced to have a better understanding of each service that is interacting with our application.
  • Devs feel a little more able to debug infrastructure related issues (without bugging those ops guys!).
  • Potential for dev environments that much more closely match production.
  • Container/service configuration in source control is amazing (I know this is not necessarily new, but it's certainly not common practice when using Vagrant/Homestead, as far as I'm aware).

Rohan Sahai
Full Stack Software Engineer at Weebly