How to use Docker for Frontend Developers

Published in the Delivery Hero Tech Blog · 6 min read · May 23, 2019

by Akanksha Sharma

You are all probably familiar with the most common problems of having one big platform: it’s too big to handle, hard to understand, tricky to deploy new features to, and a pain to onboard new engineers onto. In my team at Delivery Hero, we decided to tackle this issue by using Docker. In this article I will explore how we use Docker for frontend development and how it makes all our lives easier.

Why should you use Docker?

In the past, when a business needed new applications, their DevOps team would go out and buy a server without knowing the performance requirements of the new apps. This involved a lot of guesswork, as well as a waste of capital and resources that could have been used for other apps.

Enter virtual machines (VMs). They allowed us to run multiple apps on the same server. However, there was also a drawback: every VM needed an entire OS to run, and every OS needed its own CPU, RAM and so on. Every extra OS also required patching and licensing, which led to increased costs and reduced resiliency.

A container model, on the other hand, means that multiple containers on the same host share that host’s operating system, freeing up CPU and RAM that can be used elsewhere.

But how does it help us developers?

It ensures that the working environment is the same for all developers and all servers, i.e. production, staging and testing.

Anyone can set up the project in seconds: no need to mess with config, install libraries or set up dependencies.

In simple terms, Docker is a platform that enables us to develop, deploy, and run applications with containers.
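As a quick illustration of that last point, you can run a tool through Docker without installing it on your host at all. A minimal sketch, assuming only that Docker itself is installed (the node:8 image and the inline script are just examples):

# Run a one-off Node.js command in a disposable container
# (--rm removes the container when it exits; no Node.js needed on the host)
docker run --rm node:8 node -e "console.log('Hello from a container!')"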

Let’s take a step back: what does a container system actually look like, and how is it different from a VM?

Difference between VM and Docker

As you can see, a host and its resources are shared between containers, but not between virtual machines. With that out of the way, let’s dive in!

How to use Docker?

First off, we need to familiarise ourselves with certain terminology.

Visualisation of Docker images and Docker containers

Docker image: An executable package which contains a cut-down operating system plus all the libraries and configuration needed to run the application. It is made up of multiple layers stacked on top of each other and presented as a single object. A Docker image is created using a Dockerfile.

Docker Container: A running instance of a Docker image. There can be many containers running from the same Docker image.
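To make the distinction concrete, here is a small sketch. It assumes Docker is installed and uses the public nginx image purely as an example; the same image can back several containers at once, each mapped to a different host port:

# Start two containers from the same image (nginx is just an example image)
docker run -d -p 8080:80 --name web-1 nginx
docker run -d -p 8090:80 --name web-2 nginx

# Both containers appear here, backed by the same image
docker ps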

Containerize a simple Node.js App

We will containerize a very simple Node.js app and create an image for it.
Let’s start by creating a folder called my-node-app:

mkdir my-node-app 
cd my-node-app

The next step is to create a simple Node server in index.js and add the following code:

// Load the express module with the `require` directive
var express = require('express')
var app = express()

// Define the request/response for the root URL (/)
app.get('/', function (req, res) {
  res.send('Hello World!')
})

// Launch the listening server on port 8081
app.listen(8081, function () {
  console.log('app listening on port 8081!')
})

and save this file inside your my-node-app folder. Now create a package.json file and add the following code:

{
  "name": "helloworld",
  "version": "1.0.0",
  "description": "Dockerized node.js app",
  "main": "index.js",
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.16.4"
  }
}

At this point, you don’t need Express or npm installed on your host, because, remember, the Dockerfile handles setting up all the dependencies, libraries and configuration.

The Dockerfile

Let’s create the Dockerfile and save it inside our my-node-app folder and then add the following code:

# Dockerfile
FROM node:8
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 8081
CMD node index.js

Here is an overview of what is happening:

FROM node:8 - It pulls the official Node.js 8 image from Docker Hub.

WORKDIR /app - It sets the working directory for our code in the image and is used by all the subsequent commands such as COPY, RUN and CMD.

COPY package.json /app - It copies our package.json from the host my-node-app folder into the /app folder of the image. Copying it before the rest of the code lets Docker cache the dependency layer, so npm install is skipped on rebuilds where package.json has not changed.

RUN npm install - We run this command inside our image to install dependencies (node_modules) for our app.

COPY . /app - We tell Docker to copy the rest of our files from the my-node-app folder into /app in the Docker image.

EXPOSE 8081 - We document the port the container listens on, since our server in index.js listens on 8081. Note that EXPOSE does not publish the port by itself; we still map it to a host port with the -p flag when we run the container.
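One optional aside, not part of the setup above: because COPY . /app copies everything in the folder, a .dockerignore file keeps a locally installed node_modules folder (if you happen to have one) out of the image and leaves dependency installation to the RUN npm install step. A minimal sketch:

# Optional: keep a local node_modules folder out of the build context
echo "node_modules" > .dockerignore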

Build Docker Image

Now it’s show-time! Open the terminal, go to your my-node-app folder and type the following command:

# Build an image: docker build -t <image-name> <relative-path-to-your-dockerfile>
docker build -t hello-world .

This command creates a hello-world image on our host.

-t is used to give the name hello-world to our image.

. is the relative path to the Dockerfile. Since we are in the my-node-app folder, we use a dot to represent the path to the Dockerfile.

You will see an output on your command line like this:

Sending build context to Docker daemon  4.096kB
Step 1/7 : FROM node:8
---> 4f01e5319662
Step 2/7 : WORKDIR /app
---> Using cache
---> 5c173b2c7b76
Step 3/7 : COPY package.json /app
---> Using cache
---> ceb27a57f18e
Step 4/7 : RUN npm install
---> Using cache
---> c1baaf16812a
Step 5/7 : COPY . /app
---> 4a770927e8e8
Step 6/7 : EXPOSE 8081
---> Running in 2b3f11daff5e
Removing intermediate container 2b3f11daff5e
---> 81a7ce14340a
Step 7/7 : CMD node index.js
---> Running in 3791dd7f5149
Removing intermediate container 3791dd7f5149
---> c80301fa07b2
Successfully built c80301fa07b2
Successfully tagged hello-world:latest

As you can see, it ran the steps in our Dockerfile and the output is a Docker image. The first build might take a few minutes; on subsequent builds Docker uses the cache and builds much faster, producing output like the one shown above. Now, try the following command in your terminal to see if your image appears:

# Get a list of images on your host 
docker images

It should have a list of the images in your host similar to the one below:

REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
hello-world   latest   c80301fa07b2   22 minutes ago   896MB

Run Docker Container

With our image created, we can spin up a container from it.

# Default command for this is docker container run <image-name>
docker container run -p 4000:8081 hello-world

This command is used to create and run a Docker container.

-p 4000:8081 - This is the publish flag. It maps host port 4000 to container port 8081, which we exposed through the EXPOSE instruction in the Dockerfile. Now all requests to host port 4000 will be handled by the container listening on port 8081.

hello-world - This is the name we gave our image earlier when we ran the docker build command.

You will receive an output similar to this:

app listening on port 8081!
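The command above keeps the container attached to your terminal. As a small aside (the --name value is just an example), you can also run it in the background and give it a name, which makes the next steps easier; stop the attached container first (Ctrl+C), since both map host port 4000:

# Run the container detached (-d) and give it a predictable name
docker container run -d -p 4000:8081 --name my-node-app hello-world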

If you want to enter your container and attach a bash shell to it, you can run the following, replacing <container id> with the ID or name of your running container:

# Enter the container
docker exec -ti <container id> /bin/bash

In order to check if the container is running, open another terminal and type:

docker ps

You should see your container running like this:

CONTAINER ID     IMAGE         COMMAND                  CREATED          STATUS          PORTS                    NAMES
<container id>   hello-world   "/bin/sh -c 'node in…"   11 seconds ago   Up 11 seconds   0.0.0.0:4000->8081/tcp   some-random-name

It means our container with id <container id>, created from the hello-world image, is up and running and listening on port 8081.

Now our small Node.js app is completely containerized. Open http://localhost:4000/ in your browser and you should see this:

Containerized Node.js App

Voilà, you have containerized your first app.
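When you are done experimenting, you can clean up; a minimal sketch, using the container ID or name shown by docker ps:

# Stop and remove the container, then remove the image if you no longer need it
docker stop <container id>
docker rm <container id>
docker rmi hello-world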

If you are interested in joining our team, have a look at our open positions:

Originally published at https://tech.deliveryhero.com on May 23, 2019.
