Deploy a Scalable Open-Source Architecture
Ubuntu, Let’s Encrypt, Docker, Docker Compose, NGINX
This guide describes the tools and techniques I’ve used to host my personal website, Servesa, as well as a number of side projects. The architecture uses free, open-source tools; it can run on any cloud provider (or be spread across multiple providers); and it can handle thousands of requests per minute.
- Host several applications on a single virtual machine and/or spread a single application across multiple virtual machines
- Manage multiple environments, e.g. “production”, “staging”, “test”, etc.
- Host a private p2p network (e.g., Ethereum)
- Configure the Server: create a virtual machine instance, point DNS records at the instance, and install SSL certificates.
- Define the Network: define a network of Docker containers and configure a load balancer to direct incoming traffic to the correct application.
- Deploy: Copy the network configuration to the server and launch the network.
- Tweak settings for local development: modify the production configuration to run on a local machine.
1. Configure the Server
We’ll keep the server as simple as possible: we’ll install Docker, then install SSL certificates, and that’s it. To begin, create a new virtual machine running Ubuntu and SSH into it.
Install Docker and Docker Compose
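One convenient way to install both (a sketch assuming a fresh Ubuntu machine; Docker’s convenience script and the apt package are two of several options):

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo apt-get install -y docker-compose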
Configure the DNS records for your domain and subdomains
Log into your domain registrar and add DNS records to point at your virtual machine’s public IP address:
- add an A record for your base domain
- add a CNAME record for each of your subdomains
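For example (the IP address here is a placeholder):

A      servesa.io         ->  203.0.113.10
CNAME  midi.servesa.io    ->  servesa.io
CNAME  garden.servesa.io  ->  servesa.io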
Install certbot on the server. Use certbot to request certificates for your domain and subdomains:
$ certbot --server https://acme-v02.api.letsencrypt.org/directory -d servesa.io --manual --preferred-challenges dns-01 certonly
Certbot will generate a validation key. Add a TXT record to your DNS settings named “_acme-challenge” and use the validation key as the value:
Wait a few minutes to be sure that the DNS record has had time to propagate (you can check your DNS records using a site like What’s My DNS?) and then press enter to continue. Certbot will query your DNS to prove that you control the domain and then install the certificates under /etc/letsencrypt/.
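You can also check propagation from the command line (assuming dig is installed):

$ dig TXT _acme-challenge.servesa.io +short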
We’ll repeat the process to get a “wildcard” certificate that will cover any subdomain (notice that we have added “*.” to the beginning of our domain):
$ certbot --server https://acme-v02.api.letsencrypt.org/directory -d *.servesa.io --manual --preferred-challenges dns-01 certonly
Repeat the steps above to confirm.
That’s it! Our server is ready : )
2. Define the Network
Rather than run our applications directly on the VM, we will build our applications into Docker images, and then connect the applications into a private network using Docker Compose. This layer of abstraction makes it easy to add new applications to the network, add additional instances of an existing app, spread our network across several VMs, etc.
A Docker Compose “service” is responsible for managing a docker container. We’ll use a Docker Compose configuration file (docker-compose.yml) to define each of the services that will run in our network: a service for each website, a service that will run a database, and a load balancer (“nginx”) to direct incoming traffic to the appropriate service.
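A minimal sketch of what such a file might look like (the service names “midi” and “garden” come from later in this guide; the image names and the choice of MongoDB for the database are illustrative assumptions, not the actual servesa.io configuration):

```yaml
version: "3"

services:
  midi:                          # one service per website
    image: magrelo/midi          # illustrative image name
  garden:
    image: magrelo/garden        # illustrative image name
  db:
    image: mongo                 # assumed database; use whatever your apps need
  nginx:                         # the load balancer is the only service that
    image: nginx                 # publishes ports to the outside world
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt:/etc/letsencrypt
```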
Docker can resolve the IP address of each application within our private network using the name of its service. The load balancer, NGINX (pronounced “engine-X”), uses this feature to direct traffic to the appropriate application. Notable features of the servesa.io configuration:
- direct traffic for each subdomain to the correct service (lines 83–100)
- redirect any non-https traffic to https (lines 61-67)
- redirect “www” to base domain (lines 69-81)
- format logs to include more useful info (lines 39-44)
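These rules boil down to a handful of server blocks; a sketch of the general shape (server names, ports, and certificate paths follow certbot’s usual layout but are illustrative here):

```nginx
# redirect any non-https traffic to https
server {
  listen 80;
  server_name servesa.io *.servesa.io;
  return 301 https://$host$request_uri;
}

# direct traffic for one subdomain to the matching service
server {
  listen 443 ssl;
  server_name midi.servesa.io;
  ssl_certificate     /etc/letsencrypt/live/servesa.io/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/servesa.io/privkey.pem;
  location / {
    proxy_pass http://midi:3000;  # Docker resolves "midi" to the service's IP
  }
}
```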
Another thing to note is that NGINX (which runs inside a Docker container) needs access to our certificate files (which are installed on the virtual machine). Here is how we make the files available to the container:
- When we set up our server, certbot installed our certificates in /etc/letsencrypt/
- nginx.conf instructs NGINX to look for the certs in /etc/letsencrypt/
- docker-compose.yml uses the “volumes” directive to map the /etc/letsencrypt/ directory on the virtual machine to the /etc/letsencrypt/ directory inside the container (lines 48–51)
You can place your certificates in any directory on the virtual machine but you’ll need to keep these settings in sync.
3. Deploy
Push configuration to GitHub; pull configuration from GitHub
Push your config to a GitHub repo. Then SSH into your VM and create a new directory. Initialize a git repo and pull in your configuration from GitHub:
$ mkdir /etc/servesa && cd /etc/servesa
$ git init
$ git remote add origin http://github.com/magrelo/servesa-compose
$ git pull origin master
With all of the configuration in place, we just need to download our Docker images and start all of the services in our network. This command does both:
$ docker-compose up -d
Use the ps command to see the status of the services:
$ docker-compose ps
See the Docker Compose reference for other useful commands.
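A few other commands I find useful (all run from the directory containing docker-compose.yml):

$ docker-compose logs -f nginx                 # tail the logs of one service
$ docker-compose pull && docker-compose up -d  # deploy updated images
$ docker-compose down                          # stop and remove the network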
4. Configure Local Development
Most of our services run exactly the same whether we’re on our local machine or on the server (great!). The exception is our load balancer, NGINX, because URLs work differently on a local machine.
In production we use the subdomain to direct incoming traffic to the correct service: e.g., https://midi.servesa.io is directed to the “midi” service, and https://garden.servesa.io is directed to the “garden” service. Your local machine, however, doesn’t support subdomains (it serves webpages at http://localhost), and it doesn’t have SSL certificates (no https). So we’ll create a separate NGINX configuration for use on a local machine, and change the configuration to use the port to direct traffic to each service:
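A sketch of the general idea (the ports and upstream addresses are illustrative):

```nginx
# local development: no SSL, one port per service
server {
  listen 8081;
  location / {
    proxy_pass http://midi:3000;    # http://localhost:8081 -> "midi"
  }
}

server {
  listen 8082;
  location / {
    proxy_pass http://garden:3000;  # http://localhost:8082 -> "garden"
  }
}
```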
Finally, we’ll need a way to instruct docker-compose to use the local configuration when we are developing locally. We can create another docker-compose file named docker-compose.override.yml, which will override our production configuration with our development configuration. (I rename the file so that docker-compose won’t see it in production, and then rename it back on my local machine when I want to use it.)
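A sketch of what the override file might contain (the filename nginx.local.conf and the ports are assumptions for illustration):

```yaml
# docker-compose.override.yml -- docker-compose merges this on top of
# docker-compose.yml automatically when both files are present
services:
  nginx:
    ports:
      - "8081:8081"
      - "8082:8082"
    volumes:
      - ./nginx.local.conf:/etc/nginx/nginx.conf
```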