Building a Node.js Application and Deploying Through Docker: Meet Docker

Nirat Attri
11 min read · Apr 14, 2017


So you have an Express server on which your APIs are hosted. Awesome! But localhost:3000 is not where I want my application to run. I want the world to see it. I mean, not everyone can make a server and then perform CRUD operations on it, can they? Okay, maybe they can. But hey, newbies like me need something to feel good about.

But, butt(hurt). Deploying to a remote hosting platform has forever been a pain, since environment standardisation and testing often lead to the statement: “It works on my local but not on the remote”. As developers, that remote might be your production or staging server, as the case may be. VirtualBox and Vagrant have been the go-to workaround for this problem for a while.

And, we all absolutely need these bulky, difficult to set up virtual environments as a prerequisite to churn out our creations, because they are really fun to use and we don’t need space and RAM on our systems anyway 😁. (If you missed the sarcasm there, you need to question some life choices).

In comes our saviour, your friendly neighbourhood Docker.

P.S. This has a basic explanation at the start. You can skip to the heading, Back to Node if you know what you’re doing.

Hello, Docker!

The Blue Whale is so cute

There exist better explanations for what Docker is than whatever I’m capable of offering. The link to the official website is here, the link to Wikipedia is here, and the official documentation is found here.

An oversimplified version is this: Imagine you have a box. Inside that box, you set up your application. Now, wouldn’t it be amazing if, instead of setting up this box everywhere, every time, I could just use an exact copy of my box? The box running on my local system will be exactly the same as the box running on your system, same as the one running on my hosting server, because it is the exact same box that I gave you. This is essentially what Docker containers and images offer.

Again, this is a super watered-down explanation. The beauty lies in how it handles resource isolation and, for that matter, container isolation. How it’s super lightweight. And how easy it is to set up.

I now make an assumption that you know why you’re here, with a fair idea of what docker does. Let’s get cracking on the “how”s of it. But first,

Let npm handle your scripts

To set up our container, we’ll have to first describe what our Docker image has to do. (Images produce your containers when run.) A fair share of this happens through CLI commands that form our image layers. Now, it makes sense to keep your application-specific CLI stuff inside your package.json. My argument for it is that if you had to use it at some point, let it be available for someone else’s benefit as well. So, add

    "migrate": "knex migrate:latest",
"rollback": "knex migrate:rollback"

to your "scripts".
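With those in place, the CLI commands live with the project instead of in someone’s shell history. A quick sanity check from your application root might look something like this (assuming your knexfile already points at a reachable database):

npm run migrate   # runs knex migrate:latest against whatever your knexfile points at
npm run rollback  # undoes the last migration batch if you made a mess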

Signed, DOCKERFILEd, Delivered. I’m Yours

If you missed the reference there, the song is “Signed, Sealed, Delivered” by Stevie Wonder. You might want to listen to it if you haven’t. If you have heard it and don’t like it, Shame,*Ta ding*, Shame,*Ta ding*, Shame,*Ta ding*, Shame,*Ta ding*, Shame.

First the basics. Install docker from the official website. (Such obvious, so amaze, much wow).

To test things out, and see how docker works in action, do a

docker run -it ubuntu

Some magic just happened there. A few layers were pulled and extracted, and now you seem to be in another terminal. Interesting. So I played with this terminal as shown below.

Wow, it’s actually Ubuntu

As you can see, the basic shell commands seem to be working. Play around a bit to validate my claim. You can see your running containers by doing a docker ps in another terminal. Pay attention to the NAMES. Why? They’re generally damn funny, that’s why.
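If you want a few concrete pokes to try inside the container (nothing fancy, just enough to convince yourself it really is a fresh Ubuntu), something like:

cat /etc/os-release   # should report Ubuntu
ls /                  # the familiar filesystem layout
whoami                # you're root inside the container
exit                  # leaves (and stops) the container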

But oh wait, what is this? cat seems to be working as expected but vi?

I thought vim was my birthright :(

Let’s get to understanding what happened here. When I ran the docker run -it ubuntu command, I told Docker to run a container from the ubuntu base image and attach an interactive terminal to it (-it). The Docker daemon first checked whether the ubuntu image exists locally; when it found that it didn’t, it pulled the image from Docker Hub (the public registry). It downloaded the image in layers and saved those layers, building the local cache for future use.
But, oh my buttercups, where’s my vim? That’s the point. This is your base image. No bells and whistles attached whatsoever. The distros you might be used to on your systems have all these additional packages installed out of the box, but they’re not part of the base Linux OS by default.

Inside your container, you can always do an apt-get install blah blah-blah to get your blah and blah-blah packages installed. But that’s a pain. I want to customize my container to do much more and have all these things I’m fond of (and of course my awesome application) right out of the box.
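For the record, that one-off route looks something like this from inside the running container (and it only lives as long as this container does):

apt-get update && apt-get install -y vim   # gets you vim, but only in this container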

Let me introduce the magic image creation file, the DOCKERFILE. (*Insert thunder and lightning effects here*)

As a continuation of what we did above, let’s add more flavour to our ubuntu image through our Dockerfile.

touch Dockerfile
echo "This is amazing" > wootwoot.txt

Open it and paste the following content within it:

FROM ubuntu
# This label is totally unnecessary. I just want my name everywhere :P
LABEL maintainer="Nirat Attri <nirat.attri07@gmail.com>"
RUN apt-get update && apt-get install -y \
    vim \
    nano
WORKDIR /home/amazeballs
COPY wootwoot.txt /home/amazeballs

Time to build your own image:

docker build -t myubuntu .
Layers. When it’s hot, the fewer you have, the better you’ll feel

Talking points: Notice Step 1/5. That right there is the cached layer from the previously pulled ubuntu base image. The subsequent steps are layers (run in intermediate containers) that get baked into the final image as it’s built. Pretty neat if you ask me. After everything is done, do a docker images in your terminal and you’ll see your image listed. You might also notice that the SIZE of your myubuntu image is more than that of the ubuntu image just below it. As expected, if you ask me. My (your, our ❤️) myubuntu image has things in addition to whatever ubuntu had to offer.
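If you want to see those layers for yourself, docker history lists them along with the instruction that created each one:

docker images            # myubuntu sits above ubuntu, and is a bit heavier
docker history myubuntu  # one row per layer, newest first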

Back to the file. I basically asked my image to RUN the apt-get commands at build time. So, when I’m inside the container, I reap the benefits of having had this run for me beforehand. This is a long process the first time I do it. But when I push this image to Docker Hub (say) and someone just does a pull on it, it will be faster, since it’s just downloading layers.
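For reference, the push itself is just a tag and a push. Swap in your own Docker Hub username; nattri07 happens to be mine:

docker login                                   # authenticate against Docker Hub
docker tag myubuntu <your-username>/myubuntu   # e.g. nattri07/myubuntu
docker push <your-username>/myubuntu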

You can test this out by first running docker rmi myubuntu, then running docker build -t myubuntu . again, and following this up with docker run -it myubuntu. In another terminal, do docker run -it nattri07/myubuntu (this is basically the same image that I pushed to my hub account). Compare the difference and bask in the glory of reproducing. (Yes, that was cringeworthy. Yes, I still kept it because being inappropriate is appropriate.)

Either way, you’ll land inside the container and see a difference straight away. You’re now inside a different folder than the root, which you can check by running a simple pwd command. What’s more? My wootwoot.txt file is already there AND vim wootwoot.txt WORKS!!!!!!!

Back to Node

My node Dockerfile is going to look something like this:

Dockerfile

FROM node:6.9.4
LABEL maintainer="Nirat Attri <nirat.attri07@gmail.com>"
# Set the work directory
WORKDIR /www/myAwesomeApp
# Good to have stuff
RUN npm install pm2 -g
RUN npm install babel-cli -g
RUN apt-get update && apt-get install -y \
    vim
# Use Cache please
ADD package.json /www/myAwesomeApp
RUN npm install
# Add application files
ADD . /www/myAwesomeApp
# Entrypoint script
RUN cp docker-entrypoint.sh /usr/local/bin/ && \
chmod +x /usr/local/bin/docker-entrypoint.sh
# Expose the port
EXPOSE 3000
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]

My base image in this case is going to be straight up node:6.9.4. This is basically a Debian Jessie OS with Node installed on top of it.
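Don’t take my word for it; you can poke at this base image exactly the way we poked at ubuntu earlier (the --rm just cleans the container up when you exit):

docker run -it --rm node:6.9.4 bash
cat /etc/os-release   # Debian GNU/Linux 8 (jessie)
node --version        # v6.9.4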

I add my package.json before copying my entire application folder, to exploit the caching behaviour in the RUN npm install step. Reason being, if I copy all the folder content and then run npm install, my cache will be busted each time I rebuild this image. The content inside the folder would have changed, hence the Docker daemon would dismiss the existing layer. And thus would start the worst thing possible: running npm install from scratch each time. Anyone familiar with Node will agree that doing this step repeatedly will make you want to pluck your hair out, one strand at a time, and then poke your eyes with a needle while walking on a bed of coals, all the while professing your love for One Direction.

Okay maybe I took that too far. My point is, use caching whenever possible please.

EXPOSE 3000 is the step where I declare the container port that I’ll be binding to a port on the system that’s hosting my container. But what is this weird-ass entrypoint script here? Hmmm… Let’s create it first, I guess 😅

docker-entrypoint.sh

#!/bin/bash
npm run migrate
npm run start-docker

Please add the following script to your package.json :

"start-docker": "pm2-docker start pm2_configs/config.json",

Pretty concise and self-explanatory. I run my migrations and I start my application. pm2-docker is a Docker-specific pm2 command that doesn’t exit the container once the application starts. But why a separate file for it? Shouldn’t my migrations be a part of my build? But I read somewhere we’re supposed to start the server like

CMD ["pm2", "start", "processes.json", "--no-daemon"]

Why aren’t we doing this?

The answer to the migrations question I’ll defer to later (like, 4–5 paragraphs later). The answer to why not the CMD is because I had to run the migrations before I ran the application (duh?).

I also want to mention the .dockerignore file here, so I don’t needlessly add my local git files and node_modules to the image. Reason for the former: it’s redundant to have version control on something you’ll be able to ship with the click of a button. Reason for the latter: let the node_modules be created in the container environment and not be overwritten while copying the application content over. So, create it. And add the stuff you don’t need copied onto your image.

.dockerignore

node_modules
*.log
.git
.gitignore
.DS_Store

Seems like we’re all set.

docker build -t myawesomeapp .

Firstly, don’t be scared of all the red on the screen. It’s just your npm logs. You can always edit your RUN npm install line to include the -q flag to suppress it.

A quick docker images reveals that the image has been created. So what are we waiting for? Let’s bind the port and get to business.

docker run -p 3000:3000 --name myAwesomeApp myawesomeapp

-p 3000:3000 binds your system’s port 3000 to the container’s port 3000. --name myAwesomeApp specifies the name of the container (although quirky, default names are inconvenient). myawesomeapp is the image we’re starting the container from.

BUT DISASTER!!!

Why did this error out? I have mysql running and the credentials are right then why can’t it establish the connection?

Two reasons. First, your container can only talk to your localhost through port 3000. Second, the container is a virtualised machine in itself (essentially speaking; please, I accept my sins of oversimplification). So when you say you want to make a connection on localhost, the container is referring to itself, not your system.

So, how do we fix this? Do I bind my localhost:3306 (the default mysql port) to some container port and then hit that for a connection? I could actually do that. But where’s the fun in that? On a more philosophical level, my container isn’t isolated anymore if I do this. It will now have a dependency on a service hosted within an environment that’s machine specific. I don’t want that.
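(For the curious, the workaround I’m rejecting would look roughly like this on Linux: run the container with host networking so that localhost inside the container is your machine, mysql port and all. Sketch only; it throws away the isolation I just praised.)

docker run --network host --name myAwesomeApp myawesomeapp   # container shares the host's network stack (Linux only)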

If only there existed a way in which I could start a mysql container and connect it with my application container. Hmmm…… If only… 😏

Linking Containers through docker-compose

The docker-compose file offers you the ability to start multiple containers simultaneously and define the relationship between them.

Let’s first supply our db container the things that it might need. Create a new folder and call it db_stuff. Inside this folder, create your Dockerfile and a file called init_db.sql:

Dockerfile

FROM mysql
COPY init_db.sql /docker-entrypoint-initdb.d/

init_db.sql

CREATE DATABASE IF NOT EXISTS mybooks;
GRANT ALL PRIVILEGES ON mybooks.*
  TO 'awesomeness'@'%' IDENTIFIED BY 'lamepassword'
  WITH GRANT OPTION;

In the Dockerfile, we’re copying the .sql file into the db entrypoint init directory, which holds the scripts that the mysql container will run before it becomes available. In our case, the container will come with the pre-existing database mybooks and a user called awesomeness which has all privileges on it. A quick reference to the previous article, where I mention isolating root privileges from application privileges: in production, I’d probably change ALL PRIVILEGES to INSERT, CREATE, ALTER, UPDATE for our application.

Awesome. Looks good. Butttttttt, we need to tweak our application configs to be in sync with this dockerized architecture. So, return to your application root folder.

Add db_stuff to your .dockerignore. (You don’t need it in your application image.)

Change your bookshelf.js and knexfile.js as follows:

bookshelf.js

var knex = require('knex')({
  client: 'mysql',
  connection: {
    host     : 'mysql',
    user     : 'awesomeness',
    password : 'lamepassword',
    database : 'mybooks',
    charset  : 'utf8'
  }
});
var bookshelf = require('bookshelf')(knex);
bookshelf.plugin('registry');
export default bookshelf;

knexfile.js

module.exports = {
  development: {
    client: 'mysql',
    connection: {
      database: 'mybooks',
      user: 'root',
      password: '',
      host: 'mysql',
      port: ''
    }
  }
};

Time to return to the docker-compose business. First, create the docker-compose.yml file and fill it up with the following content

docker-compose.yml

version: '2.1'

services:
  mysql:
    build: ./db_stuff
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    healthcheck:
      test: "exit 0"

  myawesomeapp:
    build: .
    depends_on:
      mysql:
        condition: service_healthy
    entrypoint:
      - /usr/local/bin/docker-entrypoint.sh
    ports:
      - "3000:3000"

Perhaps the one thing that needs explanation here is the healthcheck functionality (the others are, hopefully, self-explanatory). The depends_on field ensures that the mysql container will start before my app container. But, as it turns out, my mysql container only becomes ready after my app container has started. To ensure that my migrations don’t try to run themselves against a database that isn’t up yet, I tell the daemon to start my app only when the healthcheck condition has passed.
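If you’re curious whether the check actually passed, you can peek from another terminal once things are up (the container name depends on your project folder, something like yourfolder_mysql_1):

docker ps                                                        # the mysql container shows (healthy) next to its status
docker inspect --format '{{.State.Health.Status}}' <container>   # healthy, unhealthy or starting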

Guys, this is it. You just have to do

docker-compose build
docker-compose up

Your localhost:3000 now hosts your containerized application.
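A few compose commands I find myself reaching for at this point, just to be sure everything is alive and to clean up afterwards:

curl http://localhost:3000   # or whatever route your API actually serves
docker-compose logs -f       # tail the logs from both containers
docker-compose down          # tear the whole thing down when you're done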

With a few tweaks, we can host our boxes on any server with docker support. But I guess, that’s a lesson for next time. 😁

Find this project on GitHub. Feel free to message or comment anything relevant. Cheers 🍻
