Setting up nginx and PHP-FPM in Docker with Unix Sockets

As many of my peers probably know, PHP-FPM comes out of the box listening on port 9000 on most distros. This makes setting up nginx and PHP extremely simple: you configure nginx to use FastCGI and pass requests for PHP files to localhost:9000, where PHP-FPM fields them with the relevant pool. Job done.
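On a classic TCP setup, the nginx side looks something like this (the docroot path here is illustrative, not part of any real config):

```nginx
# A typical TCP-based PHP handler; /www/app is a placeholder docroot
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /www/app$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}
```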

There is a gotcha however. Setting up a TCP connection is not overhead-free, and more importantly, there is a security concern.

Assume for a moment that your site allows user uploads, or that your application has some critical vulnerability yet to be found or patched. Whatever the circumstance, someone has managed to get some PHP to execute. Perhaps you have done some things to mitigate the damage this can cause, such as having your front-end pool use a chroot. Yet FastCGI is an open protocol, and several libraries demonstrate how to talk to a FastCGI daemon in pure PHP using sockets and streams. The Unix permissions model does not apply to a TCP connection: perhaps you were relying on the FastCGI endpoint being locked down to the local loopback, but the attacker's request originates from localhost, so that restriction lets it straight in.

The overall effect is that the attacker now has full access to all the pools and can do anything PHP-FPM can do, and since nginx itself has the power to pass directives via fastcgi_param, that can mean overriding an awful lot of the security directives in php.ini. Containerization and read-only permissions are not a panacea: if someone can write to the PHP-FPM endpoint, they can learn or do any number of horrid things.
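To make the risk concrete: PHP-FPM honours the PHP_VALUE and PHP_ADMIN_VALUE FastCGI parameters, so anything that can speak FastCGI to the pool can ship its own ini overrides along with a request. These are the same parameters nginx uses legitimately (the values below are only illustrative):

```nginx
# Legitimate use in an nginx config; an attacker with access to the
# FastCGI endpoint can send these same parameters with far less
# friendly values.
fastcgi_param PHP_VALUE       "memory_limit=256M";
fastcgi_param PHP_ADMIN_VALUE "open_basedir=/www/app";
```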

The standard way to mitigate this on bare metal has been to switch to Unix sockets, giving only the nginx user permission to write to the socket. This prevents any nefarious PHP code from doing something it shouldn't, at the cost of flexibility: no more load balancing your application server. In reality this isn't such a problem: assuming your application takes advantage of caching with something like Varnish, your PHP-FPM daemon should only be serving pages for either a cold cache or a non-anonymous user (be that a logged-in user or a user sharded by some identifying feature). In that case you will want to serve the page as fast and as securely as possible, which is where Unix sockets have the advantage over TCP.
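That filesystem check is the whole point, and it is easy to see in isolation. This standalone Python sketch (nothing to do with the stack above, just a demonstration) binds a Unix socket, strips its write permission, and tries to connect; run as an ordinary user, the kernel refuses the connection, which is exactly the check a TCP connection to localhost never faces:

```python
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

# connect() on a Unix socket requires write permission on the socket
# file; remove it and the kernel rejects us (root, of course, bypasses
# this check entirely).
os.chmod(path, 0o000)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
    outcome = "connected"
except PermissionError:
    outcome = "permission denied"
print(outcome)
```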

This is all fine and dandy on a dedicated server, but what about in Docker? On a dedicated server, users — and more importantly, UIDs and GIDs — are shared, so it is extremely easy to set up nginx as the sole user allowed to connect with PHP-FPM. But Docker by design does not share these users: we need to do some additional setup first.

For this, we’re going to use docker compose and two containers — the official PHP 7.1.7 FPM Alpine container, and the nginx-pagespeed container by funkygibbon. We’re going to use fairly safe UID and GIDs, which must match across both containers. In the end I elected to set the user to app with the UID and GID both set to 3000, but there’s no reason the UID and GID can’t differ, so long as both containers use the same names and IDs for the user and group. First, the nginx container:

# nginx-backend/Dockerfile
FROM funkygibbon/nginx-pagespeed:xenial
RUN addgroup --gid 3000 --system app
RUN adduser --uid 3000 --system --disabled-login --disabled-password --gid 3000 app

And now the FPM container:

# php-fpm/Dockerfile
FROM php:7.1.7-fpm-alpine
RUN addgroup -g 3000 -S app
RUN adduser -u 3000 -S -D -G app app

Note the differences in how the user and group are created: the nginx container is based on Ubuntu Xenial, while the FPM container is Alpine, whose BusyBox adduser and addgroup take different flags.

Now, in our docker-compose.yml, we need to bind these together:

---
# docker-compose.yml
version: '3'
services:
  php_fpm:
    build:
      context: php-fpm
    volumes:
      - sock:/sock
      # data volumes containing the actual PHP code
      - app:/www/app:ro
      - vendor:/www/vendor:ro
      - src:/www/src:ro
    networks:
      - backend-php
  nginx_backend:
    build:
      context: nginx-backend
    expose:
      - 80
    volumes:
      - app:/www/app:ro
      - sock:/sock
    networks:
      - backend-php
# named volumes and the network must also be declared at the top level
volumes:
  sock:
  app:
  vendor:
  src:
networks:
  backend-php:
...

Let’s run through some of the things we’re doing here. First, we define two services, php_fpm and nginx_backend, corresponding to our two containers. As PHP-FPM needs to be able to read (but not write) the PHP files within our application, we define three data volumes (app, src, and vendor) and mount them read-only. These correspond to the docroot in app, the source code, and the Composer vendor directory. nginx needs access to nothing but the docroot, so we bind only app. Finally, we share a special directory, sock, which is where we will put the PHP-FPM socket. While the data volumes contain something we naturally deploy with, sock exists purely to share the Unix socket between the two containers.

Now, when we add configuration to the PHP-FPM container, we specify where we want the socket in the pool definition (these files live in /usr/local/etc/php-fpm.d in the official image):

[www]
; listen = [::]:9000  ; the default TCP listener, no longer needed
listen = /sock/docker.sock
listen.owner = app
listen.group = app
listen.mode = 0660

And a corresponding entry goes in the nginx config. By preference, we use an nginx upstream block:

# site.conf
upstream _php {
    server unix:/sock/docker.sock;
}

And further down, in our handler for PHP:

location ~ ^/index\.php(/|$) {
    fastcgi_pass _php;
    include fastcgi_params;
    ...
}

We then add the appropriate entries in the Dockerfiles to copy this config into the containers during build.
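As a sketch, assuming the pool and site configs above are saved alongside each Dockerfile as www.conf and site.conf (the filenames are my own choice, and the nginx include path may differ in the pagespeed image), the additions look like:

```dockerfile
# php-fpm/Dockerfile
COPY www.conf /usr/local/etc/php-fpm.d/www.conf

# nginx-backend/Dockerfile
COPY site.conf /etc/nginx/conf.d/site.conf
```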

What does this gain us? Well, while we now have one container bound to another (PHP-FPM and nginx must live on the same host), in reality this isn’t a big issue set against the benefits in both performance and security.