How to set up your server with a freenom .tk domain and an nginx reverse proxy to host multiple docker containers

FedericoGianni
10 min read · Oct 16, 2020

Alright, this is my first guide. I will try to explain in detail how to set up your own VPS to host multiple web apps behind an nginx reverse proxy, using a free domain from freenom.

0. What you’ll need

  • a dedicated server or a VPS, reachable with a static IP
  • a Cloudflare account (the free plan is enough for our purpose)
  • familiarity with the Linux terminal

First of all, you’ll need a server, VPS or dedicated, reachable at a static IP. Many providers offer this online; Linode is one option.

1. Register on Cloudflare

We’ll need this later: Cloudflare lets us easily obtain our SSL certificates with certbot, a tool that automates requesting and renewing certificates. Since we can’t do that directly with freenom, we’ll manage our domain through Cloudflare.

2. Get your free domain from freenom

Freenom is an online service that lets you get a free domain.

Visit the site https://freenom.com/

Enter the domain name you want, add it to the cart and select it for 12 months (it’s the same as the 1 year option, but free :P)

3. Link freenom domain to our Cloudflare service

Our next step is to add our site to the Cloudflare service, so that we can have access to multiple features that will help us with the certificate.

Enter the Cloudflare dashboard and click on “add site”, and add your newly registered domain.

To let Cloudflare manage our domain, we need to replace the default nameservers in our freenom control panel with the new ones provided by Cloudflare.

From the freenom client area, go to Services -> My Domains and click on Manage Domain.
Then go to Management Tools -> Nameservers.
Here, switch to the custom nameservers option and enter the ones provided by Cloudflare.

4. Add record on Cloudflare to make our domain point to the IP address of the server

Now that our nameservers point to Cloudflare, we can finally add our IP address so the server is reachable through the domain.
First add an A record: set the name to @ (the domain root), enter the server’s IP address as the content, and switch the proxy status to DNS only by clicking on it (we won’t need Cloudflare’s proxy for now).

Then add a CNAME record with the name * pointing to the root domain, so that all subdomain names resolve to the same IP.
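In zone-file notation, the two records look like this (illustration only — you create them through the Cloudflare web UI, and 203.0.113.10 is a placeholder for your server’s real IP):

```
myfirstmediumguide.tk.    300  IN  A      203.0.113.10
*.myfirstmediumguide.tk.  300  IN  CNAME  myfirstmediumguide.tk.
```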

5. Initial setup of the server

Now we can finally start doing some work on our server.

Given a fresh Debian installation, let’s start by updating our software.
From a root terminal, run

apt update && apt upgrade -y

Now we create a new user and grant them the ability to run commands as root by adding them to the sudoers

sudo adduser yourusername
sudo usermod -aG sudo yourusername

Now let’s install docker, and add our user to the docker group so it can run docker commands without sudo

sudo apt install docker.io -y
sudo groupadd docker
sudo usermod -aG docker yourusername

Congrats! You’ve reached a checkpoint! Let’s reboot to check that everything is fine.

sudo reboot

6. Setup SSH and security measures

Now that our machine is reachable from the outside world, we need to add a layer of security: allow SSH only with an RSA key, and remove the possibility of password access.

Let’s make sure ssh is installed (this is a no-op if it already is)

sudo apt install ssh

We need to generate a public/private key pair on the computer from which we will connect to the server.
Use this command on your computer to generate the pair, then get back to the server with the public key, contained in the file id_rsa.pub

ssh-keygen -t rsa
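As a side note, you can also generate the pair non-interactively by passing a filename and an empty passphrase on the command line — a quick sketch using a throwaway filename (demo_key is just an example name, not what the rest of the guide assumes):

```shell
# Remove any leftovers from a previous run, then generate a 4096-bit
# RSA pair into ./demo_key and ./demo_key.pub without prompting.
rm -f ./demo_key ./demo_key.pub
ssh-keygen -t rsa -b 4096 -N "" -f ./demo_key

# Print the fingerprint of the public half -- this is the file whose
# contents will be pasted into authorized_keys on the server.
ssh-keygen -lf ./demo_key.pub
```

In practice you would keep the default ~/.ssh/id_rsa path, so the ssh client picks the key up automatically.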

Now let’s add this key to the authorized keys in our server.

cd /home/yourusername
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
nano ~/.ssh/authorized_keys

And copy the public key previously generated inside that authorized_keys file.
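One thing that bites a lot of people here: sshd silently ignores authorized_keys if the directory or file permissions are too open. It’s worth locking them down explicitly:

```shell
# ~/.ssh must be accessible only by the owner, and authorized_keys
# must not be writable by group/others, or sshd will ignore it.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```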
Now we need to tweak the ssh configuration a little:

cd /etc/ssh
sudo nano sshd_config

Let’s modify this configuration file accordingly:

# $OpenBSD: sshd_config,v 1.103 2018/04/09 20:41:22 tj Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
Include /etc/ssh/sshd_config.d/*.conf
Port 33261
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
PermitRootLogin no
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
PubkeyAuthentication yes
# Expect .ssh/authorized_keys2 to be disregarded by default in future.
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don’t trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don’t read the user’s ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
#PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
# Set this to ‘yes’ to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of “PermitRootLogin without-password”.
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to ‘no’.
UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
PrintMotd no
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server

This allows access only for non-root users, and only with the public/private RSA key pair generated earlier.
I also changed the default ssh port from 22 to 33261. It’s not mandatory, but it’s good practice to run ssh on a non-standard port, since many automated attack tools only try the default one with repeated password logins (which, by the way, we have disabled :P).

After saving the file, apply the changes with sudo systemctl restart sshd, but keep your current session open and test a new connection first, so you don’t lock yourself out.

Just remember to use the -p 33261 flag every time you want to connect to the server through ssh.
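Before restarting sshd, it’s worth validating the file: sudo sshd -t checks the syntax of the whole config, and a quick grep confirms the three directives this guide changes. A small sketch (the helper name and the sample file are made up for illustration; on the server you’d point it at /etc/ssh/sshd_config):

```shell
# check_sshd_config: grep a config file for the three hardening settings
# this guide applies (custom port, no root login, no password auth).
check_sshd_config() {
    conf="$1"
    grep -q '^Port 33261' "$conf" &&
    grep -q '^PermitRootLogin no' "$conf" &&
    grep -q '^PasswordAuthentication no' "$conf" &&
    echo "OK: custom port set, root login and password auth disabled"
}

# Demo against a throwaway sample; on the server use /etc/ssh/sshd_config.
printf 'Port 33261\nPermitRootLogin no\nPasswordAuthentication no\n' > /tmp/sshd_config.sample
check_sshd_config /tmp/sshd_config.sample
```

If everything matches, it prints the OK line; keep a second terminal logged in while you restart sshd, just in case.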

7. Nginx server and reverse proxy setup

We can now move on to setting up the nginx server. I prefer to install nginx directly on the machine instead of in a docker container, and use containers only for the different web apps.

sudo apt install nginx -y
sudo systemctl enable nginx
sudo unlink /etc/nginx/sites-enabled/default
sudo touch /etc/nginx/sites-available/proxy_config.conf
sudo ln -s /etc/nginx/sites-available/proxy_config.conf /etc/nginx/sites-enabled/proxy_config.conf

With these commands we install nginx, enable automatic start at boot, remove the default site, and set up our proxy configuration file.
The ln -s command creates a symbolic link, so that sites-available and sites-enabled see the same file: the real file lives in sites-available, and editing either path modifies both.
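If the symlink mechanics are unfamiliar, here is a self-contained demo in a scratch directory (the paths are throwaway; on the server they live under /etc/nginx):

```shell
# Recreate the sites-available / sites-enabled pattern in a temp dir.
cd "$(mktemp -d)"
mkdir sites-available sites-enabled
echo "server {}" > sites-available/proxy_config.conf

# The link in sites-enabled points back at the real file.
ln -s ../sites-available/proxy_config.conf sites-enabled/proxy_config.conf

# Reading through the link shows the same content -- editing either
# "copy" edits the single underlying file.
cat sites-enabled/proxy_config.conf
```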

cd /etc/nginx/sites-enabled/
sudo nano proxy_config.conf

Edit it as follows:

server {
    listen 443;
    server_name test.myfirstmediumguide.tk;

    location / {
        proxy_pass http://localhost:8001;
    }
}

server {
    listen 443;
    server_name myfirstmediumguide.tk;

    location / {
        proxy_pass http://localhost:80/;
    }
}

Now let’s check that our configuration is correct and restart nginx to apply changes:

sudo service nginx configtest
sudo service nginx restart

8. Use certbot to get SSL certificate

In this section we will use certbot with the nginx and cloudflare plugins to get an SSL certificate valid for both our base name and our wildcard name *.myfirstmediumguide.tk

The hard part here is that Cloudflare no longer supports automatic issuance or renewal of certificates for .tk domains.
We’ll have to complete the challenge manually, by inserting TXT records in our DNS zone from the cloudflare dashboard.

First of all, let’s install certbot with the required plugins.

sudo snap install core; sudo snap refresh core
sudo snap install --beta --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install --beta certbot-dns-cloudflare
sudo snap connect certbot:plugin certbot-dns-cloudflare

Now get your cloudflare dashboard open and run the following command in your terminal:

sudo certbot certonly \
--manual \
--preferred-challenges dns \
-d myfirstmediumguide.tk \
-d "*.myfirstmediumguide.tk" \
-i nginx

Certbot will now ask you to create DNS TXT records as a challenge, to prove that you really control the domain.

Navigate to your cloudflare dashboard and add them one by one.

You’ll need to do this twice. Be careful NOT to delete the first challenge record when adding the second, even though they have the same name, otherwise the challenge won’t succeed!
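For clarity, this is what the zone should contain while the challenge runs: two TXT records under the same _acme-challenge name, one per token (zone-file notation for illustration only; the token values below are placeholders — certbot prints the real ones):

```
_acme-challenge.myfirstmediumguide.tk.  300  IN  TXT  "token-for-the-base-domain"
_acme-challenge.myfirstmediumguide.tk.  300  IN  TXT  "token-for-the-wildcard"
```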

Now we should automatically redirect all non-https requests to https.

Create this file if not already present, otherwise just use the second command to edit it.

sudo touch /etc/nginx/conf.d/domain-name.conf
sudo nano /etc/nginx/conf.d/domain-name.conf

And add this configuration replacing with your domain names.

server {
    listen localhost;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    server_name myfirstmediumguide.tk www.myfirstmediumguide.tk;

    listen 443 ssl; # managed by Certbot

    # RSA certificate
    ssl_certificate /etc/letsencrypt/live/myfirstmediumguide.tk/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myfirstmediumguide.tk/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

    # Redirect non-https traffic to https
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}

Restart the nginx server to apply the changes:

sudo service nginx restart

Check that everything is fine by navigating through https both to the main domain, https://myfirstmediumguide.tk, and to a wildcard subdomain like https://test.myfirstmediumguide.tk
From both you should see the nginx default page.

Congratulations! You successfully obtained your Let’s Encrypt certificate!

9. Final Docker configurations

We can now proceed to the final docker configurations needed to get all our services up and running.

We will create two separate directories, one for our docker-compose files and one for our docker volumes, to keep everything clean.

sudo apt install docker.io
sudo mkdir -p /srv/docker/volumes
sudo mkdir -p /srv/docker/compose
sudo curl -L https://github.com/docker/compose/releases/download/1.25.4/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Now we can use docker-compose commands.

10. How to add a new service: example

In this section I will walk through an example of getting a new service up and running: binding local ports to docker ports, managing volumes, and serving the new service on a subdomain like blog.myfirstmediumguide.tk

We will set up ghost, an open source blogging platform.

First of all we need to add a new entry to the nginx reverse-proxy configuration, redirecting requests for blog.myfirstmediumguide.tk to a local port different from the standard ones, so that we can host multiple web services on the same machine.

Every time we want to add a new service, we edit the proxy_config.conf we created earlier.

sudo nano /etc/nginx/sites-available/proxy_config.conf

Add a new entry without deleting the others:

server {
    listen 443;
    server_name blog.myfirstmediumguide.tk;

    location / {
        proxy_pass http://localhost:8001/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
    }
}

Since we are redirecting all requests to https, adding the configuration for port 443 is enough. The procedure is the same if you also want to listen on port 80: just copy the configuration above and change the port.

Now we are redirecting all requests coming to our subdomain blog to port 8001 on localhost.
The next step is to set up a docker container and map local port 8001 to port 2368 of the container, which is ghost’s default port (for most services it will be 80 or 443).

Let’s create our docker-compose recipe

sudo mkdir -p /srv/docker/compose/ghost
sudo touch /srv/docker/compose/ghost/docker-compose.yml
sudo nano /srv/docker/compose/ghost/docker-compose.yml

And edit our docker-compose.yml as follows:

version: '3'
services:
  ghost:
    image: ghost:latest
    ports:
      - "8001:2368"
    restart: always
    depends_on:
      - db
    environment:
      url: http://blog.YOUR_DOMAIN.tk
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: YOUR_PASS
      database__connection__database: ghost
    volumes:
      - /srv/docker/volumes/ghost/ghost_content:/var/lib/ghost/content
      - /srv/docker/volumes/ghost/var/www/ghost/:/var/www/ghost/
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: YOUR_PASS
    volumes:
      - /srv/docker/volumes/ghost/mysql:/var/lib/mysql

Now let’s just get our container up and running!

cd /srv/docker/compose/ghost
docker-compose up -d

All done! We now have our web app running on our subdomain!

To add new services just repeat section 10 using different localhost ports and subdomains.
