Local domains through nginx-proxy and Docker

Juan Cortés
5 min read · Nov 16, 2017


Continuing my quest to get everything I had in MAMP Pro (on the good days) into my new Docker setup, the next problem I faced was running multiple projects simultaneously on my machine and linking them to domain names, so I can type myawesomeproject.local instead of 0.0.0.0:8181 and, in the process, learn a little more about what Docker is doing when we set up our compose file.

Fair warning: This is a continuation of “A story of Laravel, Docker, XDebug and a Mac” where I explore a potential solution for running multiple projects at the same time locally. I have updated the compose file from the previous post to strip out some unnecessary things we were doing over there.

The objective

Let’s say I have two projects running Laravel, each with its own docker-compose file. Getting them to run at the same time is the first thing on our list. To do this we need to use different host ports for our containers: for example, if the first project’s httpd and mysql containers are mapped to ports 8081 and 3306 respectively, the second project will need different ones, say 8181 and 3307. But here’s the interesting part, and let me highlight it for you:

Ports only clash when they are published to the host. (And I wasn’t even sure of this at first!)

This means that unless we want to access services from outside the container, or should I say the network (more on this in a second), we can have multiple containers listening on port 80 without interfering with each other.

With the previous approach, where we accessed the apps through 0.0.0.0:PORT, we needed to expose those ports in order to make the apps reachable. We are now going to use nginx-proxy instead, which maps domains to containers, as long as they are running in the same network as the proxy.

Network what?

Containers in Docker can be grouped into networks, and containers on the same network can see each other. This is in fact fascinating and a lot more interesting than what I’m going to cover here; read more about it in the Docker networking documentation.
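As a quick illustration (assuming Docker is installed; the network and container names here are just examples): two containers attached to the same user-defined network can reach each other by name, with no published ports at all.

```
docker network create demo-net
docker run -d --name web --net demo-net nginx:alpine
# From a second container on the same network, the name "web" resolves:
docker run --rm --net demo-net busybox wget -qO- http://web
# Clean up
docker rm -f web && docker network rm demo-net
```

The wget call fetches the nginx welcome page from the other container, purely over the shared network.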

Create a network to share

You can get a list of the available networks for docker by running docker network ls. We need to create our own, running:

docker network create nginx-proxy

Where nginx-proxy can be whatever we want, as long as we keep using the same name in the following steps. Verify that the network was created successfully by running the ls command once again.

Run a nginx-proxy on that network

This is the longest command we need to run in the whole process, and it can be read as follows:

Create a container from the jwilder/nginx-proxy image and run it in detached mode (releasing the process on the terminal), map its port 80 to the host’s port 80, and mount the host’s Docker socket into the container.

That last mount is what lets nginx-proxy listen for Docker events, so it can regenerate its configuration automatically whenever a container with the right environment variables starts or stops. Since it’s listed in the docs, we might as well just do it.

docker run -d -p 80:80 --net nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
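If you prefer keeping everything declarative, the same proxy can be started from its own small compose file. This is a sketch of my own, not from the nginx-proxy docs; the file layout is an assumption, but the image, ports, volume, and network match the command above:

```yaml
# docker-compose.yml for the proxy itself (hypothetical layout)
version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
networks:
  default:
    external:
      name: nginx-proxy
```

Then `docker-compose up -d` in that folder does the same job as the long command.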

Tell our docker compose files to use this network

With that nginx-proxy running, all that’s left is to tell our projects to use that network, and set some environment variables so that the proxy server knows where to map what. Following the name of the network created above, add the following to your compose file:

# Specify the network these containers will run on
networks:
  default:
    external:
      name: nginx-proxy

And define which domain we’ll be using and which port nginx-proxy should forward to. This is the port the service listens on inside the container, not a port mapped to the host. Add this code inside the service that will handle the web requests, not your database server! In my case it’s inside the httpd service:

environment:
  - VIRTUAL_PORT=80
  - VIRTUAL_HOST=project.local

That’s it! Go visit project.local in your favourite browser and be done with it! That didn’t take long; I bet you’re super happy.

Oh no! It doesn’t work!

Ok, ok. I know what happened: your system has no idea how to resolve the URL http://project.local, so we need to define it in our hosts file. In case you’ve never done that, it simply involves editing a text file in your system, /etc/hosts, and adding the line 127.0.0.1 project.local
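If you’d rather do it from the terminal, something like this works on macOS and Linux (the line that actually writes to /etc/hosts is commented out, since it needs admin rights; uncomment it when you’re ready):

```shell
# The entry we need: map the local dev domain to the loopback address
HOST_ENTRY="127.0.0.1 project.local"
echo "$HOST_ENTRY"
# To actually append it to your hosts file (requires sudo):
# echo "$HOST_ENTRY" | sudo tee -a /etc/hosts
```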

How does that solve the problem?

It tells your system that when you ask for http://project.local it should resolve that name to 127.0.0.1, which is where we have our nginx-proxy running.

That server takes the request and matches it against the config file it generated from our environment variables; it will see that a container in its network has declared itself capable of handling requests for that domain, on that port (80). After that it’s just a matter of what httpd does with the request, so if the app was working when you had port 8080 exposed before, it will work now.
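Incidentally, you can check that the proxy is routing correctly before (or without) touching your hosts file by faking the Host header from the command line. Assuming the proxy is up on port 80, this should return your app’s response:

```
curl -H "Host: project.local" http://127.0.0.1/
```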

Connecting to the mysql server from within the network

If you followed the previous post, your httpd service will have depends_on and links properties declared that point to the db service. This means that in your .env file, or wherever you are specifying the host of the mysql server, you can simply put db and it will resolve to the correct service.

As for the port, use the internal port, 3306. Even if you have two mysql servers and two http servers, they can all use the same port internally (remember the image on top). So your host and port are db and 3306 respectively.
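In a Laravel .env this would look something like the following (the variable names are Laravel’s defaults, and the credentials match the sample compose file at the bottom; adjust to your project):

```
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=myproject
DB_USERNAME=test
DB_PASSWORD=test
```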

Connecting to the mysql server from outside the network

From the host operating system, things are a bit different. Here we do need to map a port to the outside world, in order to connect to the database from your favourite database client.

We can do this by adding something like the following to our compose file. Note that 3307 is the one port that can’t be the same between compose files, because both projects would then be trying to map your local machine’s port 3307 to their internal 3306:

ports:
  - "3307:3306"

We are connected!
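From the host you would then point your client at 127.0.0.1 port 3307. With the command-line client, using the test credentials from the sample compose file below, that’s:

```
mysql -h 127.0.0.1 -P 3307 -u test -ptest myproject
```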

Full sample compose file

This file lives inside a folder named myproject-dock and has a sibling folder myproject-dock-mysql. This is not mandatory, but I will continue to use this approach until I find something wrong with it. And so far it works like a charm.

version: "3"
services:
  httpd:
    build: .
    links:
      - db:db
    ports:
      - "1080:1080"
    depends_on:
      - db
    volumes:
      - ../:/var/www
      - ../public/:/var/www/html
    environment:
      - VIRTUAL_PORT=80
      - VIRTUAL_HOST=myproject.local
  db:
    image: mariadb
    restart: always
    ports:
      - "3307:3306"
    volumes:
      - ../myproject-dock-mysql/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_DATABASE: myproject
volumes:
  db_data:
networks:
  default:
    external:
      name: nginx-proxy
