Migrating from Google Cloud Platform's App Engine to DigitalOcean's Docker Droplet

dbillinghamuk
Mar 12, 2018 · 8 min read

Introduction

I recently worked on rewriting an old website over the Christmas period. Once the MVP had been achieved I realised that I needed a new web provider, capable of hosting my Node.js app. Having previously deployed small personal applications to Heroku and Azure, I decided to give Google App Engine a try. After reviewing the cost of hosting over a period of weeks, I decided I needed to look for an alternative, cheaper solution. I eventually settled on DigitalOcean's Docker Droplet.

Google App Engine

As with everything Google, this is an amazing platform, with a great “getting started” tutorial. Within an hour or two I was up and running; it’s essentially as easy as:

  1. Creating an app.yaml file which contains resources and runtime environment information
  2. Cloning your git repo
  3. Installing dependencies with an npm install
  4. Starting the app with an npm start
  5. Deploying the application using the gcloud CLI: gcloud app deploy --project myProj
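For context, the app.yaml mentioned in step 1 can be very small for a site like this. The sketch below assumes the flexible environment with minimal resources; the runtime name and resource values here are illustrative, not the actual file from this project:

```yaml
# Hypothetical minimal app.yaml for a Node.js app on the flexible environment
runtime: nodejs
env: flex

# Keep cost down: a single instance with minimal resources
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 1
```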

The user experience is excellent and you can easily spin up additional instances and alter resources to suit your needs.

So what’s wrong with Google App Engine?

Essentially the cost…

For a single instance running with the minimum resources it costs around £30–£40 per month. I had previously paid that much for a whole year with my old hosting provider. In hindsight I should have looked into this before signing up, but I had wrongly assumed they would have some sort of cheap plan for low-volume websites like mine (we’re talking on average 15–20 individual hits a day). This isn’t so much a problem with Google’s pricing plan; it’s just that people like me, wanting to host personal projects in their spare time, are not their target market. I looked back into Azure, and it has a very similar, over-complicated pricing structure.

DigitalOcean

In my quest to find a cheaper alternative I remembered an advert on JavaScript Jabber for DigitalOcean. They have many different plans, but I noticed two straight away that would suit my needs: hosting for Node.js applications and Docker container hosting. These are referred to as “droplets” within DigitalOcean, and you pay for a single droplet per month based on the assigned resources. The cheapest droplet is $5 per month, has 1 GB memory / 25 GB disk, and can be hosted in London. Perfect for my needs for a low-volume site.

Docker setup

I’m going to assume some basic knowledge of Docker machines, images and containers. The first step was getting my Dockerfile set up so that I could build an image that can be run on the droplet.

App dockerfile

In the root of my application I created a Dockerfile as below:

FROM node:carbon
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ENV NODE_ENV=production
ENV PORT=3000
ADD package*.json ./
ADD /server ./server
ADD /src ./src
ADD /public ./public
RUN npm install
RUN npm run build
EXPOSE 3000
CMD [ "npm", "start" ]

The content of this file is pretty standard, but to summarise:

  1. Take a node base image from docker hub
  2. Create a directory, and set it as the working directory for the container
  3. Set some environment variables for the application to use
  4. Copy over package.json and some core directories needed for the application to run
  5. Install dependencies and build the application
  6. Expose the port the application will be running on and start the application.
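One small companion to the Dockerfile worth adding (not part of the original setup, so treat it as an optional extra): a .dockerignore file in the project root keeps node_modules and the git history out of the build context, which makes docker build noticeably faster:

```
node_modules
.git
npm-debug.log
```

Docker reads this file automatically when building; npm install inside the container then recreates node_modules from scratch, which is what you want anyway.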

Building and running the dockerfile

First attempt (see the second attempt for a better solution)

As a starting point I decided to build the image locally and copy the image over to the droplet before running it.

I built the image (giving it an optional tag if you like):

docker build -t my-app-1 .

I could now see the image on my docker machine by running:

docker images

Now I needed to copy the docker image over to my droplet so I could run the container. I did this by saving the docker image and archiving it, then connecting to the droplet, before extracting and loading the image into docker:

docker save d69eee733ac6 | bzip2 | ssh root@165.227.234.1 'bunzip2 | docker load'

It was then just a case of running the container and mapping to port 80:

docker run -p 80:3000 -d my-app-1

I was now able to hit the external IP address assigned to the droplet to access the site.

This worked well, but docker images are quite big, and with limited upload bandwidth this process took too long.

Second attempt

I then came to realise that if git was available on the droplet, I could just clone my repository (which I had stored in a remote private Bitbucket repo) and build the Docker image on the server itself. The version of Ubuntu installed on the droplet already includes git. So my new process for updating the app became: commit any changes to the remote Bitbucket repo, pull the changes down onto the droplet, build to create a new image, and run the container.

What about SSL (https)?

One of the features I did use on Google App Engine was the free 12-month SSL certificate for serving content over a secure connection. For this I needed to be able to get a certificate and install it on some intermediary between the droplet and the app. That intermediary was another container running Nginx.

Nginx

Nginx is a reverse proxy. My understanding, after working in the Microsoft space for a number of years, is that it’s very similar to IIS (and Apache in the LAMP stack). At its core, it deals with routing requests coming into the server and handling the responses.

Setting up Nginx to work without SSL

The first step was to get Nginx routing traffic on port 80. To start with I created an Nginx dockerfile in an nginx directory, which looked like this:

FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

I then also created a config file called nginx.conf and placed it in the same directory.

The nginx.conf file starts out looking something like this:

worker_processes 2;

events { worker_connections 1024; }

http {
  upstream node-app {
    least_conn;
    server app:3000 weight=10 max_fails=3 fail_timeout=30s;
  }
  server {
    listen 80 default_server;
    server_name *.welcome.co.uk;
    location / {
      proxy_pass http://node-app;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }
  }
}

Upstream and server are modules. We set our node app as an upstream module, and then within server we specify that traffic arriving on port 80 and matching the server_name should be routed to a specific location.

So this will create the Nginx container, but to link the two containers together we now need to use docker-compose.

docker-compose

Create a docker-compose file in the project root with a config similar to this:

version: '2.0'
services:
  nginx:
    container_name: nginx
    restart: always
    build:
      context: ./nginx
      dockerfile: Dockerfile
    links:
      - app
    ports:
      - '80:80'
  app:
    container_name: app
    stdin_open: true
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'

This names the two containers, specifies a build property which points to each of the dockerfiles, links the Nginx container to the app container, and exposes ports.

Note that the app:3000 setting in the nginx config matches the container name and exposed port in the docker-compose file.

We now need to build and run the docker-compose file, (before we do, we will have to make sure we have killed and removed all old containers):

docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)
docker-compose build && docker-compose -f docker-compose.yml up -d --remove-orphans

Now when we hit the external IP address we will be proxying requests through Nginx.

Getting an SSL cert

Before we look at configuring Nginx to deal with SSL and routing traffic through port 443 we need to get an SSL certificate.

I used Let’s Encrypt via an application called certbot; this will give you a free SSL cert which expires every 90 days.

Install certbot on the droplet

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot

Now we can use the certbot CLI to get a certificate. We want the configuration options that will obtain a cert manually, without installing it. We will also need to prove that we own the domain name. There are a couple of ways to do this, but my preferred option is via DNS. This involves adding a new TXT DNS record, containing a value supplied by certbot, for the domain or subdomain you are getting the certificate for.

sudo certbot certonly --manual --preferred-challenges dns -d welcome.co.uk -d www.welcome.co.uk

I’m requesting a certificate here covering two domains, welcome.co.uk and www.welcome.co.uk.

You will then be guided through a wizard. Once you have the DNS keys and values, go to your DNS provider for the domain, (mine is with a company that uses cpanel), and add the TXT record.

Then go back to the certbot wizard and continue, where verification will continue and the certificates are issued.
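DNS changes can take a few minutes to propagate, so before continuing the wizard it can be worth confirming the TXT record is actually visible. A sketch, using this article's example domain:

```shell
# Let's Encrypt looks for the challenge under the _acme-challenge subdomain.
# Safe to continue once this prints the value certbot asked you to set.
dig +short TXT _acme-challenge.welcome.co.uk
```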

To see the certificates that have been created you can use the below command

certbot certificates

For a default installation the certificates will be located in this directory

/etc/letsencrypt/live/welcome.co.uk/
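Since Let's Encrypt certs expire after 90 days, it is also worth knowing how to check the expiry date; openssl can read it straight from the issued fullchain file (path as above):

```shell
# Print the expiry date of the issued certificate.
# Output looks like: notAfter=Jun 10 12:00:00 2018 GMT
openssl x509 -enddate -noout -in /etc/letsencrypt/live/welcome.co.uk/fullchain.pem
```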

Configuring Nginx to use the certificate

Now we have our certificates we need to:

  1. Copy them into the nginx directory within the project root
  2. Set a volume, and expose port 443, within the nginx service of the docker-compose file
  3. Update the nginx config to use the certificate and redirect any traffic arriving on port 80
  4. Re-build and run docker containers

Copy certificates to nginx directory in project root

The easiest way to do this is to run these commands:

rm -rf ~/my-app-1/nginx/certs/
mkdir ~/my-app-1/nginx/certs/
cp /etc/letsencrypt/live/welcome.co.uk/* ~/my-app-1/nginx/certs/

Set cert volume

We now need to update the docker-compose file to include this config, which will expose port 443 and create a volume pointing to the certs located in the project’s nginx directory.

ports:
  - '80:80'
  - '443:443'
volumes:
  - ./nginx/certs/:/etc/nginx/certs/

So the final docker-compose file should look like this:

version: '2.0'
services:
  nginx:
    container_name: nginx
    restart: always
    build:
      context: ./nginx
      dockerfile: Dockerfile
    links:
      - app
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./nginx/certs/:/etc/nginx/certs/
  app:
    container_name: app
    stdin_open: true
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'

Updating Nginx config

The nginx config file is only configured to route traffic on port 80. We need to tell it to redirect any traffic on port 80 to port 443.

To do this we need to update the server module’s listen property

#listen 80 default_server;
listen 443 default_server;

Then we need to redirect the traffic coming from port 80

server {
  listen 80;
  server_name *.welcome.co.uk;
  return 301 https://$host$request_uri;
}

The order of additional server modules can be important, as they are evaluated in order. In our case it does not matter, but I would still place the redirect before the server module listening on port 443.

We then need to tell Nginx to use SSL and add the location of the certificates to the server module listening on port 443

ssl on;
ssl_certificate /etc/nginx/certs/fullchain.pem;
ssl_certificate_key /etc/nginx/certs/privkey.pem;

The final nginx config file should look like this:

worker_processes 2;

events { worker_connections 1024; }

http {
  upstream node-app {
    least_conn;
    server app:3000 weight=10 max_fails=3 fail_timeout=30s;
  }
  server {
    listen 80;
    server_name *.welcome.co.uk;
    return 301 https://$host$request_uri;
  }
  server {
    listen 443 default_server;
    server_name *.welcome.co.uk;
    ssl on;
    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    location / {
      proxy_pass http://node-app;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }
  }
}

Re-build and run docker containers

docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)
docker-compose build && docker-compose -f docker-compose.yml up -d --remove-orphans

Now when you hit https://welcome.co.uk the site will load. And when you hit http://welcome.co.uk, you will be redirected to the https version.
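You can verify the redirect from the command line rather than a browser. A sketch using this article's example domain:

```shell
# -sI fetches response headers only. The http:// URL should answer
# with a 301 whose Location header points at the https:// version.
curl -sI http://welcome.co.uk | head -n 1
curl -sI http://welcome.co.uk | grep -i '^location:'
```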

Scaling

This setup also allows you to scale up additional instances of your app very easily, and have Nginx control which instance a request is best served by. With a single-core droplet configuration there is little point in having more than one instance, but if your site grows you can easily pay for additional cores and add instances to cope with higher traffic.
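With this compose setup, extra app instances are one flag away. Note this is a sketch with two caveats not covered above: the fixed container_name: app and the host-side '3000:3000' port mapping would both have to be removed from the app service first, since each scaled instance needs a unique name and only one container can bind host port 3000. Nginx's upstream then balances across the instances via Docker's internal DNS:

```shell
# Hypothetical: run three instances of the app service behind Nginx.
docker-compose up -d --scale app=3
```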

Conclusion

Overall I have been very happy with the migration from Google App Engine to DigitalOcean’s Docker droplet, and feel this is a far more cost-effective way to host a small website.

