Deploy a Ghost blog using Docker and Caddy

Raza Gill
Apr 17, 2017 · 6 min read

Update 19.08.2017:

Ghost has released version 1.0, so we no longer have to do the whole run-first-in-development-mode dance to get the config file. We can pass the config through environment variables instead. The new command is something like the following:

docker run -e NODE_ENV=production -e url=http://YOURWEBSITE.com -d --name ghost -p 8080:2368 -v /home/USERNAME/data/ghost:/var/lib/ghost/content ghost:1-alpine

This is a complete step-by-step guide on how to deploy Ghost on a VPS using Docker, with Caddy as the web server.

Setting up the VPS

First you'll need a VPS to host the Ghost platform. The most common options are DigitalOcean and Vultr. I chose Vultr since it's a little cheaper and faster, although it comes with less storage. Since Ghost is a lightweight Node application, it doesn't require a lot of resources to run. Anything with 512 MB of RAM should be enough for average traffic.

After creating your VPS, it’s better to create a new user with root level privileges rather than using the root account. SSH into your newly created server and add a new user.

local$ ssh root@SERVER_IP_ADDRESS
$ adduser USERNAME

Set the password for the new account then add it to the sudo group.

$ usermod -aG sudo USERNAME

Log out and log back in to the server using the new account and update your OS. I'm using Debian 8, so it's as simple as

$ sudo apt-get update && sudo apt-get upgrade -y

Setup Docker

Docker is a great containerization platform for running isolated applications. A container is a running instance of an image, which is an application or service bundled with all of its dependencies. A good example is the Ghost image: it contains the Ghost application and its dependencies. Instead of installing npm, a web server and so on directly on the server, we'll use Docker to install all of these software packages in isolation.

Follow the official Docker installation instructions to install Docker on your server.
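
If you just want a quick route, Docker's official convenience script also works on Debian. Roughly (assuming curl is already installed on the server):

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh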

Once Docker is installed you need to add your USERNAME to the docker group.

Add the docker group if it doesn't exist already.

$ sudo groupadd docker

Add the connected user “${USER}” to the Docker group.

$ sudo gpasswd -a ${USER} docker

Restart the Docker daemon.

$ sudo service docker restart
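
The group change only takes effect after you log out and back in (or run newgrp docker). A quick sanity check that Docker runs without sudo:

$ docker run --rm hello-world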

Setup Ghost

We'll use Docker to install Ghost on our server. But since a Docker container doesn't preserve state across restarts or reinstalls, we need a place on the server to save persistent data like posts and configuration. We'll simply create a data directory for Ghost.

$ mkdir -p data/ghost

Next up we'll pull and run the Ghost image on our server.

$ docker run --env NODE_ENV=development --name ghost -p 8080:2368 -v /home/USERNAME/data/ghost:/var/lib/ghost ghost:latest

You can now visit SERVER_IP_ADDRESS:8080 and you’ll be presented with the main page for Ghost. Press Ctrl + C to stop running Ghost.

You might be wondering why we're starting Ghost in development mode. The reason is this issue: https://github.com/docker-library/ghost/issues/15. Suffice it to say, we need Ghost to generate its configuration file first so that we can edit it.

  • docker run - This tells docker you want to run a container
  • --env NODE_ENV=development - This sets an environment variable in the container. NODE_ENV is commonly used to tell applications written in Node if they're in development or production mode.
  • --name ghost - Specifies a name for your container. If you don't specify a name, Docker will make one up for you.
  • -p 8080:2368 - Maps port 2368 (the port the Ghost image uses by default) in the container to host port 8080.
  • -v /home/USERNAME/data/ghost:/var/lib/ghost - Mounts the folder /home/USERNAME/data/ghost from the host machine to /var/lib/ghost where the Ghost image looks for its configuration inside the container.

The next step is to configure Ghost. We'll go to the data/ghost directory, where Ghost will have already created some files.

$ cd /home/USERNAME/data/ghost
$ ls
apps config.js data images themes

We need to edit config.js in order to configure Ghost. Head over to the production section inside the file and follow the sample below.

production: {
    url: 'http://YOUR_DOMAIN_NAME',
    mail: {
        transport: 'SMTP',
        options: {
            service: 'Mailgun',
            auth: {
                user: 'YOUR_MAILGUN_USERNAME',
                pass: 'YOUR_MAILGUN_PASSWORD'
            }
        }
    },
    database: {
        client: 'sqlite3',
        connection: {
            filename: path.join(process.env.GHOST_CONTENT, '/data/ghost.db')
        },
        debug: false
    },
    server: {
        host: '0.0.0.0',
        port: '2368'
    },
    paths: {
        contentPath: path.join(process.env.GHOST_CONTENT, '/')
    }
},
  • Mailgun - I'm using Mailgun as the email provider as it's the simplest to set up. For information on how to set it up, or how to use a custom mail server with Ghost, follow this link: http://support.ghost.org/mail.
  • sqlite3 - I'll be using an SQLite database for storing all my content because it's the easiest to set up and good enough for an average blog. Unless you're getting 10k unique visitors every day, I don't think you need a heavyweight RDBMS like MySQL or PostgreSQL.
  • paths - This is the section we've added to set the contentPath.
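
If you're on Ghost 1.0 or later (see the update at the top), you don't edit config.js at all; the same settings can be passed as environment variables using Ghost's double-underscore convention for nested config keys. Something along these lines (the domain and Mailgun values are placeholders):

docker run -d --name ghost -p 8080:2368 \
  -e NODE_ENV=production \
  -e url=http://YOUR_DOMAIN_NAME \
  -e mail__transport=SMTP \
  -e mail__options__service=Mailgun \
  -e mail__options__auth__user=YOUR_MAILGUN_USERNAME \
  -e mail__options__auth__pass=YOUR_MAILGUN_PASSWORD \
  -v /home/USERNAME/data/ghost:/var/lib/ghost/content \
  ghost:1-alpine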

Setup Caddy

Caddy is a new lightweight web server. It's pretty easy to set up, and one of its best features is out-of-the-box support for SSL using Let's Encrypt.

Let’s create a new folder for storing Caddy’s configuration files.

$ mkdir -p data/caddy/.caddy

Create a file called Caddyfile inside the data/caddy directory and enter the following configuration.

YOUR_DOMAIN {
    proxy / ghost:2368 {
        header_upstream Host {host}
        header_upstream X-Real-IP {remote}
        proxy_header X-Forwarded-Proto {scheme}
    }
    tls YOUR_EMAIL
}
  • YOUR_DOMAIN - This is a server block; a Caddyfile can hold several of them, so you can configure multiple domains (see the sketch after this list).
  • proxy / ghost:2368 - Caddy is used as a reverse proxy.
  • header_upstream Host {host} - This forwards on the name of the host to Ghost.
  • header_upstream X-Real-IP {remote} - This passes the IP of the visitor to your website to Ghost. It's not really needed currently, but I've included it for completeness.
  • proxy_header X-Forwarded-Proto {scheme} - This is important. It lets Ghost know if it's being accessed over SSL or not. Without it, you will get an endless redirect loop.
  • tls YOUR_EMAIL - This tells Caddy to get an SSL certificate for your site. The email address you use will be given to Let's Encrypt and will receive a notification if the certificate expires or is revoked.
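
Since a Caddyfile can hold several server blocks, you could add something like the following to redirect www traffic to the bare domain (this is just a sketch and assumes the www record also points at your server):

www.YOUR_DOMAIN {
    redir https://YOUR_DOMAIN{uri}
    tls YOUR_EMAIL
}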

Update 01.06.2018:

Thanks to Elias Ojala for mentioning the shorthand for the above configuration. Instead of writing the whole thing out, you can simply write

YOUR_DOMAIN {
    proxy / ghost:2368 {
        transparent
    }
    tls YOUR_EMAIL
}

Now we’ll run Caddy.

$ docker run --name caddy -v /home/USERNAME/data/caddy/Caddyfile:/etc/Caddyfile -v /home/USERNAME/data/caddy/.caddy:/root/.caddy -p 80:80 -p 443:443 abiosoft/caddy:latest
  • -v /home/USERNAME/data/caddy/Caddyfile:/etc/Caddyfile - This passes the config file you created to the container.
  • -v /home/USERNAME/data/caddy/.caddy:/root/.caddy - This passes a directory to Caddy so it can store your certificates, which speeds things up on subsequent starts.
  • -p 80:80 -p 443:443 - These two flags map port 80 (HTTP) and port 443 (HTTPS), allowing the server to listen for connections.
  • abiosoft/caddy:latest - This is the Docker image we're using for Caddy.

If all goes well you will see the following output:

Activating privacy features… done.
YOUR_DOMAIN: 443
YOUR_DOMAIN: 80
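
Before stopping it, you can do a quick sanity check from your local machine (assuming your domain's DNS already points at the server):

local$ curl -I https://YOUR_DOMAIN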

Stop Caddy with Ctrl + C.

Run containers in production mode

Up till now we have run our containers in the foreground and in development mode. Now we'll remove the containers and add them back as daemonized containers so they can run in the background.

$ docker rm ghost
ghost
$ docker rm caddy
caddy

Now we're going to run the containers as daemonized services by adding an extra -d flag to the command.

$ docker run -d --restart=always -p 8080:2368 --name ghost --env NODE_ENV=production -v /home/USERNAME/data/ghost:/var/lib/ghost ghost:latest

The --restart=always option will restart the container if it crashes or goes down.

And for Caddy, the following command:

$ docker run -d --restart=always -p 80:80 -p 443:443 --name caddy --link ghost:ghost -v /home/USERNAME/data/caddy/Caddyfile:/etc/Caddyfile -v /home/USERNAME/data/caddy/.caddy:/root/.caddy abiosoft/caddy:latest

The --link option links the two containers so that Caddy can reach Ghost by name. This time we're returned to the terminal and both containers run in the background.
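
If you'd rather not manage the two containers by hand, the same setup can be described in a docker-compose.yml, roughly like this (assuming docker-compose is installed; I haven't covered it in this guide):

version: '2'
services:
  ghost:
    image: ghost:latest
    restart: always
    environment:
      - NODE_ENV=production
    ports:
      - "8080:2368"
    volumes:
      - /home/USERNAME/data/ghost:/var/lib/ghost
  caddy:
    image: abiosoft/caddy:latest
    restart: always
    depends_on:
      - ghost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/USERNAME/data/caddy/Caddyfile:/etc/Caddyfile
      - /home/USERNAME/data/caddy/.caddy:/root/.caddy

With Compose the containers share a network and can resolve each other by service name, so the ghost:2368 upstream in the Caddyfile still works, and docker-compose up -d brings everything up in the background.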

To see your containers, run the following command.

$ docker ps

This will give you a list of all running containers. If you'd like to see the logs of a container, run the following command.

$ docker logs -f caddy
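
One thing I haven't covered is upgrading Ghost later. Since all persistent data lives in /home/USERNAME/data/ghost, a rough upgrade sketch (back up that directory first) is to pull a newer image and recreate the container with the same options:

$ docker pull ghost:latest
$ docker stop ghost && docker rm ghost
$ docker run -d --restart=always -p 8080:2368 --name ghost --env NODE_ENV=production -v /home/USERNAME/data/ghost:/var/lib/ghost ghost:latest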

That's all! By now you should have a Ghost blog set up and ready for you to slack away at writing, just like I am. I haven't written shit on that blog yet!
