Dockerise Everything
We’ve been using Linux containers at Univers Labs since 2013, when we first started deploying applications to the ‘platform-as-a-service’ Heroku. At the time, Heroku offered us a number of advantages over deploying to virtual machines in the cloud or to bare metal. The premise is simple: create an application, make sure that it can run within the Heroku environment, and push it to a Heroku Git repository; Heroku takes care of managing the application, the server it runs on, and the networking it depends on, minimising our system administration and hosting costs. In 2017, Univers Labs migrated from Heroku to Kubernetes, managed with Deis and deployed on Google Container Engine. In doing so, we have strengthened our commitment to offering our customers affordable, dependable, scalable hosting, based on containerisation technology from the world’s top cloud vendors.
Our infrastructure is cutting edge, but that can make it a demanding environment to develop applications for; setting up a local development environment is a challenge in itself. Over the last six months, I’ve worked on replacing our previous development practice of orchestrated Linux virtual machines with an entirely Docker-based solution. In the process, I’ve saved a whole bunch of disk space on my Ubuntu laptop, sped up my development workflow, and tightened the security of my device.
Setting up an environment for developing PHP applications is the main pain point in our development workflow. Our PHP applications need a web server, a PHP runtime, a MySQL server, a Memcached server, and Grunt. Previously, we orchestrated all of these moving parts with Vagrant and Ansible, or simply installed and ran services on the host system. With Docker, we can do away with all of that. The core of this new workflow is a custom Docker image based on the Heroku Cedar-16 image: by installing the Heroku PHP Buildpack and a Procfile runner into it, we create an environment for running and serving PHP applications that mirrors our Deis environment. This image weighs in at about 500 MB: hefty, but a fraction of the size of our Ubuntu virtual machines.
FROM heroku/heroku:16
# To build this image, run `composer update --ignore-platform-reqs && docker build -t phpslug .`.
ADD ./composer.json /app/composer.json
ADD ./composer.lock /app/composer.lock
ADD ./vendor /vendor
# Shoreman is a Procfile runner written in Bash: https://github.com/chrismytton/shoreman.
ADD ./shoreman /heroku/shoreman
# The `run` script symlinks the buildpack’s compiled output into the working directory, so that the working directory functions like a compiled PHP Heroku slug.
ADD ./run /heroku/run
# Create non-root user and give this user ownership of all assets; this user will compile the buildpack and invoke its start script.
RUN useradd -ms /bin/bash heroku
RUN chown -R heroku:heroku /app
RUN chown -R heroku:heroku /heroku
RUN chown -R heroku:heroku /vendor
USER heroku
WORKDIR /app
RUN /vendor/heroku/heroku-buildpack-php/bin/compile /app /tmp/app-build
RUN chmod +x /heroku/run
RUN chmod +x /heroku/shoreman
# Copy everything out of /app into a backup directory.
RUN cp -ra /app/. /heroku
RUN rm -r /app/*
RUN rm -r /vendor/*
# Set PATH to traverse the right folders in /app first.
ENV PATH "/app/.heroku/php/sbin:/app/.heroku/php/bin:$PATH"
ENV PORT 5000
ENV HOME_URL http://localhost:5000
ENV SITE_URL http://localhost:5000
# To run this container against your working directory, run `docker run --rm -it -p 80:5000 -p 5000:5000 -v $(pwd):/app phpslug`. This is a useful command to wrap in a shell alias (see the sketch after this Dockerfile).
CMD /heroku/run
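That `docker run` invocation is a mouthful, so it is worth wrapping in a shell alias. A minimal sketch, assuming Bash or Zsh and the `phpslug` image tag built above (the alias name is just a suggestion):
# Add to ~/.bashrc or ~/.zshrc.
alias phpserve='docker run --rm -it -p 80:5000 -p 5000:5000 -v "$(pwd)":/app phpslug'
Running `phpserve` from a project directory then serves whatever is mounted at /app on ports 80 and 5000.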
With Docker Compose, this image can be run alongside MySQL and Memcached containers, which provide the database and object cache, and a Node.js container, which compiles our source code with Grunt (a sketch of that service follows the Compose file below).
version: "3"
services:
  wordpress:
    image: phpslug
    volumes:
      - .:/app
    networks:
      - wordpress
    depends_on:
      - mysql
      - memcached
  mysql:
    image: mysql
    networks:
      - wordpress
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: projectdb
      MYSQL_USER: projectuser
      MYSQL_PASSWORD: projectpassword
  memcached:
    image: memcached:alpine
    command: memcached -m 512
    networks:
      - wordpress
networks:
  wordpress:
    driver: bridge
And that’s all you need to run WordPress!
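The Node.js service mentioned above is not part of the Compose file, since its shape depends on the project. A sketch of what it might look like, added under `services:` alongside the others (the Node version and the `watch` npm script are assumptions about your project):
  grunt:
    image: node:8
    working_dir: /app
    volumes:
      - .:/app
    # Assumes package.json defines a "watch" script that runs the Grunt watch task.
    command: sh -c "npm install && npm run watch"
With the stack defined, `docker-compose up` brings everything up, and `docker-compose down` tears it back down.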
I no longer need PHP, Nginx, MySQL, PostgreSQL, MongoDB, Memcached, ElasticSearch, Neo4j, Grunt, Webpack, Vagrant, Virtualbox, or any of a thousand packages which I had installed on Ubuntu before. I’ve uninstalled all of them, and replaced them in my projects with Docker containers, orchestrated with Docker Compose. “Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away.”
This setup works for running one WordPress installation. Docker containers start and stop in a few seconds, run quickly, and need few resources. However, we also maintain WordPress Multisite installations, which can only be reached over specific domain names. How do we connect to the PHP container using a domain name?
Docker has an embedded DNS server, to facilitate service discovery inside Docker networks; containers started using Docker Compose, for instance, can find and connect to each other using their container names or network aliases. That’s how WordPress can connect to the MySQL database at `mysql:3306`; the Docker DNS server takes care of resolving the hostname `mysql` to the IP address of the MySQL container. If we want our Docker host to use this DNS server, we need to change Docker’s DNS configuration, and we need a DNS proxy running inside the Docker network. To configure the Docker daemon, edit `/etc/docker/daemon.json`.
{
  "dns": [
    "208.67.222.222",
    "208.67.220.220"
  ]
}
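After editing the file, the Docker daemon needs a restart before the new DNS settings take effect; on a systemd-based Ubuntu installation, that is:
sudo systemctl restart docker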
For our DNS proxy, we’ll use `pdnsd`.
FROM alpine
ADD ./pdnsd.conf /etc/pdnsd.conf
RUN apk add -U pdnsd && rm -rf /var/cache/apk/*
EXPOSE 53/tcp 53/udp
CMD ["pdnsd"]
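The image expects a `pdnsd.conf` next to the Dockerfile. Here is a minimal sketch of that file, on the assumption that pdnsd simply forwards every query to Docker’s embedded DNS, which listens on 127.0.0.11 inside user-defined networks; treat the exact option values as a starting point rather than a tuned configuration.
global {
    perm_cache = 1024;
    cache_dir = "/var/cache/pdnsd";
    server_ip = any;          # listen on all container interfaces so the published port reaches pdnsd
    query_method = udp_tcp;
}
server {
    label = "docker-embedded-dns";
    ip = 127.0.0.11;          # Docker's embedded DNS server, as seen from inside the network
    timeout = 4;
}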
And we’ll run it with Docker Compose.
version: "3"
services:
  pdnsd:
    build: ./pdnsd
    restart: always
    volumes:
      - pdnsd-cache:/var/cache/pdnsd
    ports:
      - "127.0.0.1:53:53/tcp"
      - "127.0.0.1:53:53/udp"
    networks:
      - web-apps
networks:
  web-apps:
    external: true
volumes:
  pdnsd-cache: {}
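Because the `web-apps` network is declared as external, it has to exist before Compose can attach the proxy to it:
docker network create web-apps
docker-compose up -d --build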
Now, we can configure Ubuntu to use `127.0.0.1` as a DNS server. Chrome will query the `pdnsd` daemon running in the `pdnsd` container; `pdnsd` will query the Docker DNS server; and the Docker DNS server will consult the upstream servers we specified in `/etc/docker/daemon.json`, as well as the Docker network itself. In this way, our Docker host is configured to resolve Docker container names to IP addresses. As a bonus, `pdnsd` caches my DNS requests, which speeds up my web browsing in general.
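One way to point Ubuntu at the proxy, assuming NetworkManager manages the connection, is with nmcli; the connection name below is a placeholder, and the final `dig` check assumes the container being resolved shares a network with the `pdnsd` container (for Multisite, that means attaching the WordPress service to `web-apps` as well):
# Use the local pdnsd proxy instead of the DHCP-supplied DNS servers.
nmcli connection modify "Wired connection 1" ipv4.dns 127.0.0.1 ipv4.ignore-auto-dns yes
nmcli connection up "Wired connection 1"
# Ask pdnsd (and, behind it, the Docker DNS server) for a container's address.
dig +short @127.0.0.1 wordpress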
Now, we can connect to our WordPress Multisite installation!
This only scratches the surface of what Docker and Linux containers offer. In time, I’d like to find a solid way to orchestrate our deployments to Deis and Kubernetes using a declarative method like Compose files. In the meantime, I hope to test this new workflow on macOS, so that the rest of the Univers Labs team can enjoy the benefits of developing with containers.