Dockerizing your personal infrastructure

Adrian Gheorghe
6 min read · Sep 5, 2018


This setup has been significantly improved and is detailed in this new post

Like many developers out there, I own multiple domains that I use for various projects or my CV/portfolio. That being said, I had hosting accounts in various places:

  • A hosting account with GoDaddy (mainly for cPanel)
  • A hosting account provided by my good friend Bogdan on his server
  • A VPS, used on and off to test different technologies

I had long been thinking about moving everything to a cloud-based solution like AWS so that I could have everything in one place. I wanted to move all my projects into containers and set up an easy deployment process, but could never find the time. Until a couple of months ago, when I actually did it. Or at least part of it.

This tutorial covers getting a VPS, installing Docker, and bringing your projects up in containers with SSL from Let’s Encrypt and port forwarding via an nginx proxy container. It does not cover the project build process, which I have not yet finished.

Cloud hosting — Vultr.com

While this tutorial is intended for a VPS or cloud server instance, it can also be used to set up projects locally on your machine, so skip this step if that is what you are after.

First off, I needed to get a VPS. I did some research and unfortunately could not go with DigitalOcean as I initially wanted, because I did not have a credit card. As they do not accept debit or prepaid cards, I had to find a different solution. I found a thread on Hacker News which pointed me to Linode and Vultr, and I finally decided on Vultr.

I got a 1-core, 2GB RAM VPS, since you can scale up whenever you need. As for the OS, I installed Ubuntu 16.04.

Docker

Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster and more securely, for Linux, Windows Server, and Linux-on-mainframe apps.

In order to set up your projects in containers you will, of course, need to install Docker and Docker Compose. You can use the following instructions to get started. It shouldn’t be very hard to get Docker running on your Ubuntu 16.04 server.

https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-using-the-convenience-script

or

https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04
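As a rough sketch, the convenience-script route from the first link boils down to a few commands. The Compose release number below is an assumption pinned to roughly the time of writing; check the releases page for the current one.

```shell
# Install Docker CE via the convenience script (inspect the script first if you prefer)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Install Docker Compose as a standalone binary (version pinned; newer releases exist)
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Optional: allow your user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker "$USER"
```

The linked guides cover the same steps in more detail, including the apt-repository install if you prefer that over the convenience script.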

Container administration — enter Portainer

Managing containers through the command line is the most flexible solution of all, but sometimes you just want to have a nice user interface to help you with this. Enter Portainer.

Portainer is a Docker container management tool that is easy to get running, since it ships as a container itself. It needs to connect to the Docker socket, so a volume will need to be set up for this.

docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

Also, what is it with services and port 9000? You might want to make Portainer listen on a different port, if you are already using port 9000 or plan on using it for nginx.
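For example, you can keep Portainer’s internal port but expose it on a different host port. The 9090 below is an arbitrary choice, not anything Portainer requires:

```shell
# Host port 9090 maps to Portainer's internal port 9000
docker run -d -p 9090:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name portainer \
  portainer/portainer
```

Only the left side of the `-p host:container` mapping changes; the container still listens on 9000 internally.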

Uninstall apache2

The Ubuntu image I used came with the apache2 web server preinstalled. In order to make the port forwarding work, we need nginx on port 80, so either stop the apache2 service or uninstall it altogether. I decided to uninstall it, as I wasn’t going to use it anyway.

sudo apt-get remove apache2*

sudo apt-get autoremove

You can then check that no apache2 binary is left behind:

dpkg -S `which apache2`

Or, if you are working locally and just want to stop Apache:

sudo apachectl stop

Nginx Proxy and Let’s Encrypt proxy companion

This is where things start to get interesting. To run projects with Docker we will set up one or more containers for each of them, but to make them accessible from outside the server, we need nginx listening on port 80 and forwarding requests to their specific containers. This can be done in multiple ways, including installing nginx on the server and writing the proxy rules manually, but luckily for us, there is a container setup that can help.

https://github.com/jwilder/nginx-proxy is a very useful project that will do just that. It will listen on the docker socket and whenever a container is created with a set of environment variables, it will set up port forwarding for that virtual hostname to that container. We will need to create a docker network to use for all the containers we want it to forward to.

In addition to this, we will use Let’s Encrypt to automatically generate and renew free SSL certificates for our projects. We could do that manually, but there is a container project that does just that: https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion. It listens on the docker socket as well and scans containers, so it can renew certificates automatically.

So let’s get started

Create a docker network for all your containers to run on

docker network create nginx-proxy

Define an nginx template file

Save the following nginx template file to your server. We will need it when creating the nginx container https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl

Create a docker-compose.yml file for your nginx containers

version: '2'
services:
  nginx:
    image: nginx
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf.d/:/etc/nginx/conf.d
      - ./vhost.d/:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro

  dockergen:
    image: jwilder/docker-gen
    container_name: dockergen
    command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes_from:
      - nginx
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    environment:
      # ACME_CA_URI: https://acme-staging.api.letsencrypt.org/directory
      - NGINX_DOCKER_GEN_CONTAINER=dockergen
    volumes_from:
      - nginx
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy

Now start the containers to get the setup running

docker-compose up -d

Move your projects to containers

Now, depending on what your project runs on, you can create the container setup you need. For the purpose of this tutorial we can create a sample WordPress container setup. The following docker-compose.yml contains a MySQL database container and a container running the latest WordPress image.

version: '3'

services:
  project_db:
    container_name: project_db
    image: mysql:5.7
    restart: always
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: randompass
      MYSQL_DATABASE: wordpress
  project_web:
    hostname: example.org
    container_name: project
    depends_on:
      - project_db
    image: wordpress:latest
    restart: always
    environment:
      WORDPRESS_DB_HOST: project_db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: randompass
      VIRTUAL_HOST: example.org
      LETSENCRYPT_HOST: example.org
      LETSENCRYPT_EMAIL: noreply@example.org
    volumes:
      - ./code:/var/www/html/wp-content

networks:
  default:
    external:
      name: nginx-proxy

Breaking down the configuration: the containers are attached to the nginx-proxy network so the nginx proxy container can detect them and route traffic to them based on the VIRTUAL_HOST environment variable.

The LETSENCRYPT_HOST and LETSENCRYPT_EMAIL variables are used by the Let’s Encrypt companion container to generate your certificates.

All you need to do now is bring the containers up using

docker-compose up -d
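A few sanity checks after bringing the stack up. The container names are the ones from the compose files above, and example.org stands in for your actual domain:

```shell
# Confirm the containers are running
docker ps --format '{{.Names}}: {{.Status}}'

# Watch the companion container request and renew certificates
docker logs -f nginx-letsencrypt

# Once a certificate is issued, HTTPS should answer for your domain
curl -I https://example.org
```

If certificate issuance fails, the companion container’s logs are usually the first place the reason shows up.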

To get started with your personal projects, you need to create a directory for each project. I’m not going to go into detail on how to do that, but you should create separate users for each project and be careful with permissions.

After cloning your project repo, or copying your files over to the server, all you have to do is create a docker-compose.yml file that suits the technologies your project depends on. Whether you run a very generic setup, like the WordPress one I just presented, or something more specific, you can probably set it up like this.
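For reference, a per-project directory on the server might end up looking something like this. The names are illustrative, matching the sample WordPress compose file:

```
project/
├── docker-compose.yml   # the per-project services
├── code/                # your cloned repo or copied files
└── mysql/               # MySQL data volume, if the project needs a database
```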

Future improvements

Currently, there is no build process for deploying or updating my projects. To make things easier in the future, I plan on integrating Jenkins (at least a Jenkins container) to build projects from their git repos, instead of cloning each project and scp-ing the files directly onto the server.

Originally published on my personal blog.



Adrian Gheorghe

Backend Web Developer currently living in Bristol, UK. I mainly code in PHP and JS, but am currently learning Go. I'm also very passionate about DevOps.