Laravel + Docker Part 2 — preparing for production

What’s different in production?

This post will only cover a single-server setup. We’ll do it all the manual way so you can see all the bits and pieces that are needed.

Step 1 — Prepare the ‘app’ image

In the first post we created a PHP-FPM based image that was suitable for running a Laravel application. Then all we needed to do was mount our current directory into that container and the app worked. For production we need to think about it differently, though. The steps are:

  1. Use a Laravel-ready PHP image as a base
  2. Copy our source files into the image (not including the vendor directory)
  3. Download & install composer
  4. Use composer within the image to install our dependencies
  5. Change the owner of certain directories that the application needs to write to
  6. Run php artisan optimize to create the class map needed by the framework.
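The original post embedded app.dockerfile as a gist, which isn't reproduced here, so below is a minimal sketch of what a Dockerfile following those six steps could look like. The base image name is a stand-in, and the composer flags and exact ordering are my assumptions rather than the original file:

# A sketch only; the base image name below is a stand-in, not the real published image
FROM shakyshane/laravel-app-base

WORKDIR /var/www

# Copy the dependency manifests on their own, so the composer install
# layer below stays cached until composer.json / composer.lock change
COPY composer.json composer.lock /var/www/

# Download & install composer (assumes curl exists in the base image)
RUN curl -sS https://getcomposer.org/installer | php -- \
      --install-dir=/usr/local/bin --filename=composer

# Install dependencies without dev packages; defer scripts/autoload
# generation until the full source has been copied in
RUN composer install --no-dev --no-scripts --no-autoloader --prefer-dist

# Now copy the rest of the source (filtered by .dockerignore, see below)
COPY . /var/www

# Generate the optimized autoloader, hand the writable directories to the
# PHP-FPM user, and build the class map the framework needs
RUN composer dump-autoload --optimize \
 && chown -R www-data:www-data storage bootstrap/cache \
 && php artisan optimize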
  • The FROM line builds from an image I’ve published to the Docker hub that contains just enough to run a Laravel Application.
  • Because we COPY composer.json and composer.lock on their own before the rest of the source, Docker will check the contents of what’s copied each time we attempt a build. If nothing has changed, it can use a cached version of that layer — but if something does change, even a single byte in any file, the cache is discarded from that point on and all following commands will execute again. In our case that means every time we attempt to build this image, if our dependencies have not changed (because the composer.lock file is identical) then Docker will not re-execute the RUN command that contains composer install and our builds will be nice and fast!

Step 2 — create .dockerignore

In the previous snippet we saw the line COPY . /var/www. On its own, this would include every single file (including hidden directories like .git), resulting in a HUGE image. To combat this we can include a .dockerignore file in the project root:

.git
.idea
.env
node_modules
vendor
storage/framework/cache/**
storage/framework/sessions/**
storage/framework/views/**
  • The last three lines are there to ensure any files written to disk by Laravel in development are not included, but we do need the directory structures to remain.

Step 3 — Build the ‘app’ image

With the files app.dockerfile and .dockerignore created, we can go ahead and build our custom image.

docker build -t shakyshane/laravel-app .
  • This may take a few minutes whilst the dependencies are downloaded by composer.
  • -t here instructs Docker to ‘tag’ this image. You can use whatever naming convention you like, but you need to consider where this image is going to end up when deciding. In this example I’m publishing this image under my username on Docker hub (shakyshane) and want the repo to be named laravel-app. You can see what I mean here: https://hub.docker.com/r/shakyshane/laravel-app/
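Once the image is built and tagged this way, publishing it to Docker hub is a single command. A quick sketch, assuming you've already authenticated with docker login and substituted your own username:

docker push shakyshane/laravel-app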

Step 4 — Add the ‘app’ service to docker-compose.prod.yml

Now we can begin to build up the file docker-compose.prod.yml, starting with our app service. (As in the first post, these are all nested under the ‘services’ key, but don’t worry, there will be full snippets later.)

# The Application
app:
  image: shakyshane/laravel-app
  volumes:
    - /var/www/storage
  env_file: '.env.prod'
  environment:
    - "DB_HOST=database"
    - "REDIS_HOST=cache"
  • We use image: shakyshane/laravel-app here to point to the image we just built in the last step. Remember this has all of our source code inside it, so we do not need to mount any directories from our host.
  • We need a way for files written to disk by the application to persist in the environment in which it runs. Using a volume definition in this manner (just /var/www/storage, with no host path) will cause Docker to create a persistent volume on the host that will survive any container stops/starts.
  • We’re going to set up Redis as the session and cache driver, but I’ve yet to find a way to stop Laravel writing view caches to disk, which is why this single volume is required.
  • We use env_file: ‘.env.prod’ to load a Laravel environment file’s variables into the container. This part is something that could be improved by using a dedicated secret-handling solution, but I’m not dev-opsy enough to do that, and in cases where we’re just using a single-server setup, I think this approach is ok. (Please, any security experts out there: correct me/point me in the right direction!)

Step 5 — Prepare the ‘web’ image

So, now that we’re building a custom image that contains a PHP environment, all of our source code and all application dependencies, it’s time to do the same for the web server.

FROM nginx:1.10-alpine

ADD vhost.conf /etc/nginx/conf.d/default.conf

COPY public /var/www/public
  • This time we’re using nginx:1.10-alpine — the -alpine bit on the end means this base image was built from a teeny tiny Linux base image that will shave 100s of MB from our final image size.
  • The alpine build also supports HTTP2 out of the box, so, bonus.

Step 6 — Create the NGINX config

Save the following as vhost.conf — that’s the name we used above in web.dockerfile
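The config itself was embedded as a gist in the original post; here is a minimal sketch of what a vhost.conf along these lines could look like. The fastcgi_pass app:9000 upstream (the ‘app’ service running PHP-FPM on its default port), the server_name and the redirect style are my assumptions, not the exact original:

# Secure traffic: terminate TLS and speak HTTP2
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Hand PHP requests over to the FPM process in the 'app' container
    location ~ \.php$ {
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

# Insecure traffic: the second server block bounces port 80 over to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}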

  • Where you see paths like /etc/letsencrypt/live/example.com in that file above, you can change example.com for your own domain if you have one, but either way, go and download some self-signed certs from http://www.selfsignedcertificate.com/ (or generate your own if you know how) and stick them in the following directory: certs/live/example.com
  • The idea is that when testing locally, you can pass in something like LE_DIR=certs — referring to your local directory, but then on the server you can pass LE_DIR=/etc/letsencrypt — which is where the certbot will dump your certs. This will create security warnings locally due to the self-signed certificates, but you can click past the warning to fully test HTTP2 + all your links over an HTTPS connection.

Step 7 — Build the ‘web’ image

With the files web.dockerfile and vhost.conf created, we can go ahead and build our second custom image.

docker build -t shakyshane/laravel-web .
  • Just as with the previous image, we tag this one to match the repo name under which it will live later.

Step 8 — Add the ‘web’ service to docker-compose.prod.yml

# The Web Server
web:
  image: shakyshane/laravel-web
  volumes:
    - "${LE_DIR}:/etc/letsencrypt"
  ports:
    - 80:80
    - 443:443
  • We mount the directory containing our certificates using an environment variable. Docker-compose will replace ${LE_DIR} with the value we provide at run time, which allows us to swap between live/local certs.
  • We bind both port 80 & 443 from host->container. This is so that we can handle both insecure and secure traffic — you can see this in the second server block in the vhost.conf file above.
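Before booting anything you can check that the substitution works; docker-compose’s config command prints the fully-resolved file, so it’s a cheap way to verify ${LE_DIR} ends up where you expect:

LE_DIR=./certs docker-compose -f docker-compose.prod.yml config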

Step 9 — Add MySQL and Redis services to docker-compose.prod.yml

Finally, the configuration for MySQL and Redis needs to be placed in our docker-compose.prod.yml file — but for these we do not need to build any custom images.

version: '2'
services:

  # The Database
  database:
    image: mysql:5.6
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=secret"

  # redis
  cache:
    image: redis:3.0-alpine

volumes:
  dbdata:
  • We use a named volume for MySQL to ensure data will persist on the host, but for Redis we don’t need to do this as the image is configured to handle it for us.
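If you’re curious where that data actually lives, Docker can tell you. Note that compose prefixes volume names with the project name (the directory name by default; myapp below is just a stand-in):

docker volume ls
docker volume inspect myapp_dbdata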

Step 10 — Create .env.prod

As with any Laravel application, you’re going to need a file containing your app’s secrets, a file that is usually different for each environment. Because we want to run this application in ‘production’ mode on our local machine, we can just copy the default Laravel .env.example sample file and rename it to .env.prod. Then, when this application ends up on a server somewhere, we can create the correct environment file and use that instead.
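As a sketch, the values below line up with the MySQL credentials and the Redis drivers used elsewhere in this post; the APP_KEY is a placeholder (generate a real one with php artisan key:generate), and DB_HOST/REDIS_HOST don’t need to appear here since the compose file injects them:

APP_ENV=production
APP_DEBUG=false
APP_KEY=base64:REPLACE_ME

DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret

CACHE_DRIVER=redis
SESSION_DRIVER=redis

(Remember the redis driver also needs the predis/predis package in your composer.json.)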

Step 11 — Test in production mode, on your machine!

This is where things start to get seriously cool. We’ve built our source & dependencies directly into images in a way that allows them to run on any host that has Docker installed, but the best bit is that this includes your local development machine! Say goodbye to testing locally, pushing and hoping for the best!

So far, we have:

  1. Created an app.dockerfile, built an image from it & configured it in docker-compose.prod.yml
  2. ^ same again for web.dockerfile
  3. Created a .dockerignore file to exclude files and directories from COPY commands
  4. Created the vhost.conf file with NGINX configuration, created self-signed certs for local testing & added the docker-compose.prod.yml config for it
  5. Added the Redis & MySQL configurations
  6. Created a .env.prod configuration file

Putting it all together, the full docker-compose.prod.yml looks like this:
version: '2'
services:

  # The Application
  app:
    image: shakyshane/laravel-app
    working_dir: /var/www
    volumes:
      - /var/www/storage
    env_file: '.env.prod'
    environment:
      - "DB_HOST=database"
      - "REDIS_HOST=cache"

  # The Web Server
  web:
    image: shakyshane/laravel-web
    volumes:
      - "${LE_DIR}:/etc/letsencrypt"
    ports:
      - 80:80
      - 443:443

  # The Database
  database:
    image: mysql:5.6
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=secret"

  # redis
  cache:
    image: redis:3.0-alpine

volumes:
  dbdata:
Then boot the whole stack in production mode, pointing LE_DIR at the local self-signed certs:

LE_DIR=./certs docker-compose -f docker-compose.prod.yml up
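With the stack up, you’ll probably want to run your database migrations inside the running app container. A sketch using docker-compose exec (the --force flag skips the confirmation prompt Laravel shows in production):

docker-compose -f docker-compose.prod.yml exec app php artisan migrate --force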

Next Steps.

I can only cover so much here, and the main focus of this blog was to fill in the pieces I found to be lacking from other posts in terms of the differences between using Docker in development and then in production. To get this project running on a real server, I:

  1. Created a Digital Ocean Droplet with Docker pre-installed.
  2. Followed one of their guides for installing docker-compose (it doesn’t come pre-installed on Linux like it does on Docker for Mac/Windows)
  3. Setup auto-building in Docker Hub, so that when I push to Github the two images are automatically built.
  4. Used certbot to generate SSL certs on the server
  5. Ran the same docker-compose command seen above, but this time swapping the LE_DIR part for the path on the server to the certs. eg: LE_DIR=/etc/letsencrypt docker-compose -f docker-compose.prod.yml up
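For step 4, certbot’s standalone mode is probably the simplest fit for this setup. A sketch, assuming your own domain in place of example.com; run it before booting the stack, since --standalone needs to bind port 80:

sudo certbot certonly --standalone -d example.com

The resulting certs land in /etc/letsencrypt/live/example.com, which is exactly the path the vhost.conf expects once LE_DIR=/etc/letsencrypt is passed in.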

Next things to look at:

  • CI: Lots of the steps in this post can be automated; for example, you may use a CI service to build and publish your images to a registry, along with running tests etc. I would recommend that you get familiar with building/pushing images manually first, however, just so that you fully understand the bits and pieces of the workflow before you move into a completely automated flow.
  • Secret & certs management: Having an entire application along with its running environment neatly packaged into containers just feels like the correct thing to do, but due to my lack of dev ops/sysadmin skills, I honestly don’t know how to remove these 2 items from being mounted at run time. I hear that Docker is currently working on its own solution for secrets management, which will be so cool to have integrated. Container orchestrators such as Kubernetes already provide solutions for this out of the box, however, and then there are also dedicated tools such as Vault… So much to learn… :)
  • Swarm mode, scaling etc: Docker has native support for scaling applications across multiple hosts using simple CLI commands. Very much looking forward to using this in the future. I’ve tried using docker-machine to boot up cloud servers on Digital Ocean and I can tell you right now that it’s a mind-blowing experience — especially when you realise that all of your regular docker commands, including things such as docker-compose, all work over the network… Crazy cool.

Work: https://wearejh.com/ Teaching: https://egghead.io/instructors/shane-osbourne
