Louay Alakkad
Apr 29, 2016 · 3 min read

You know that ‘damn’ moment when you see things like your API returning an incomplete JSON response, or your file uploads crashing your server for no obvious reason? We’ve had a lot of those recently, and this article is here to spare you a few of them.


TL;DR

Use this.

The Long Version

Docker is an amazing product. We’ve been relying on it for months now as an essential part of our CI/CD setup. The ability to move your applications around as immutable images and guarantee that your development environment is identical to your staging/production ones is invaluable. But it has its drawbacks.

The main issue with Docker stems from its main feature, immutability. A file system where you cannot add, change or remove files doesn’t play well with Nginx, arguably the best HTTP server out there, which relies heavily on writing temporary files to disk. That’s why you start seeing all sorts of problems when you mix the two together.

You’d expect the official nginx docker image to solve this for you, but no, it doesn’t. And neither this tutorial on nginx’s official website nor this one on DigitalOcean’s community blog warns you about any potential problems.

mkdir() "/var/cache/nginx/proxy_temp/1/70" failed (30: Read-only file system) while reading upstream, client: 10.10.10.10, server: urbanmassage.com, request: "GET / HTTP/1.1", upstream: "http://10.10.10.11:80/", host: "urbanmassage.com", referrer: ""

You see the message above and panic, trying to figure out why nginx wants to write to "/var/cache/nginx/proxy_temp", until you realise it’s the proxy_temp_path option. So you turn off proxy_buffering.

proxy_buffering off;
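In context, the directive belongs next to the proxy settings it affects. A minimal sketch (the upstream address here is the one from the log line above, purely for illustration):

```nginx
location / {
    # Stream the upstream response straight to the client instead of
    # buffering it, which avoids writes to proxy_temp_path entirely.
    proxy_pass http://10.10.10.11:80;
    proxy_buffering off;
}
```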

A few days later, you see another one and say to yourself “I’ve got this”.

open() "/var/cache/nginx/client_temp/0000000939" failed (30: Read-only file system), client: 10.10.10.10, server: urbanmassage.com, request: "POST /webhooks/braintree HTTP/1.1", host: "urbanmassage.com"

And then you realise that it’s not as simple as turning something off. You have to set client_body_buffer_size to the same value as client_max_body_size. And so you do, adding a comment to your config file warning the next person not to change one without the other.

# These two should be the same or nginx will start writing 
# large request bodies to temp files
client_body_buffer_size 10m;
client_max_body_size 10m;

Thinking that you might have missed something, you do another Google search on setting up nginx with Docker, and you go through all those tutorials, but no one seems to mention issues like this at all. And then you say to yourself, “I must write something about this” — or at least I did.

As a final note: there are a couple more options that are commonly recommended for nginx but do not work with Docker, e.g. sendfile. So just be careful whenever you enable anything. You can use this file to start.
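Putting the pieces together, the docker-friendly portion of the config looks roughly like this (the 10m figures are the example values from above, not recommendations):

```nginx
# Keep nginx from writing to disk on a read-only filesystem.
proxy_buffering off;            # no spill files under proxy_temp_path
client_body_buffer_size 10m;    # hold request bodies in memory...
client_max_body_size 10m;       # ...by matching the body size limit
sendfile off;                   # another option that misbehaves here
```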

And there you have it.

Update: As some people pointed out, this only applies to certain setups. The read-only filesystem can easily be turned off in Docker, but our setup doesn’t support that yet (we use Empire on top of AWS ECS). There are a couple more solutions, like --tmpfs and volumes, but neither of them is supported in our setup, so we’ll be sticking to the solution below for a while.
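For setups that do support it, the --tmpfs route can be sketched in a docker-compose file like the following (service name and image tag are illustrative; nginx needs writable temp and pid directories):

```yaml
services:
  nginx:
    image: nginx:stable
    read_only: true        # keep the root filesystem immutable
    tmpfs:
      - /var/cache/nginx   # nginx's temp paths live here by default
      - /var/run           # nginx writes its pid file here
```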

Urban Massage Product

Read about any interesting projects, cool tech we've been working with and any other tid-bits we do within the Urban Massage product team

Written by Louay Alakkad

Tech & business geek
