Developing and deploying a Node.js app from Docker to Kubernetes

Learn how to develop and deploy a Node.js app using containers and an orchestration engine

Paul Zhao
Jun 8, 2020 · 16 min read

As demands for scaling and automation grow, conventional deployment methods no longer meet the requirements of businesses in this information age. DevOps engineers therefore dive deep and look for ways to streamline and automate the continuous deployment of code.

Docker has been widely adopted as a containerization tool that makes applications easy to deploy and guarantees predictable, consistent packaging. You can simply expect the software to behave the same whether it runs on a laptop or in the cloud.

However, as scaling demands and complexity grow, Docker containers alone may no longer serve every need. That's when orchestration engines like Kubernetes come into play. Teams use Kubernetes as a higher-level abstraction to manage Docker container technology and further simplify the pipeline, enabling them to move faster.

We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure.

Teams who were previously limited to 1–2 releases per academic year can now ship code multiple times per day!

Chris Jackson, Director for Cloud Platforms & SRE at Pearson

Though users may not face the traffic that online giants such as Google or Facebook handle, they may still need to predict their infrastructure costs accurately, or simply want to manage their systems more efficiently.

Why use containers?

  • Less overhead. Containers require less system resources than traditional or hardware virtual machine environments because they don’t include operating system images.
  • Increased portability. Applications running in containers can be deployed easily to multiple different operating systems and hardware platforms.
  • More consistent operation. DevOps teams know applications in containers will run the same, regardless of where they are deployed.
  • Greater efficiency. Containers allow applications to be more rapidly deployed, patched, or scaled.
  • Better application development. Containers support agile and DevOps efforts to accelerate development, test, and production cycles.
  • Improved security. Your container is isolated from other containers, so that someone shipping fish tanks won’t slosh fish water 🐟 onto your bundle of firewood

How Containers Work

The term container is truly an abstract concept, but three features can help you visualize exactly what a container does.

  • Namespaces. A namespace provides a container with a window to its underlying operating system. Each container has multiple namespaces that offer different information about the OS. An MNT namespace limits the mounted filesystems that a container can use; a USER namespace modifies a container’s view of user and group IDs.
  • Control groups. This Linux kernel feature manages resource usage, ensuring that each container only uses the CPU, memory, disk I/O, and network that it needs. Control groups can also implement hard limits for usage.
  • Union file systems. The file systems used in containers are stackable, meaning that files and directories in different branches can be overlaid to form a single file system. This system helps avoid duplicating data each time you deploy a new container.

There are two main components to container solutions: an application container engine to run images and a repository/registry to transfer images. These components are supported by the following:

  • Repositories. Repositories provide the reusability feature of private and public container images. For example, there are platform component images available for MongoDB and Node.js.
  • Container API. The API supports creating, distributing, running, and managing containers.
  • Container creation. Applications can be packaged into a container by combining multiple individual images, often images extracted from repositories.

After the Postman engineering team reorganized into a microservice architecture, every service now uses Docker to configure their own environments. Every service owner defines their own Dockerfile from which an image is generated when new code is deployed as part of the CI/CD pipeline. The resulting image is pushed to the team’s container registry, and their Beanstalk environments are configured to pull the image from the registry to run the containers.

Each service gets the flexibility of configuring how it runs, so service engineers can focus on building the application while platform engineers focus on how to build and deploy automatically.

Docker takes over the responsibility of configuring the environment and standardising the deployment pipeline. This gives us faster deployment and scaling time because the build happens only once during CI.

— Saswat Das, Platform engineer at Postman

Why Kubernetes?

  • Service discovery and load balancing
    Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
  • Storage orchestration
    Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
  • Automated rollouts and rollbacks
    You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
  • Automatic bin packing
    You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
  • Self-healing
    Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Secret and configuration management
    Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
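As a small illustration of the last point, a Secret can be declared in YAML and consumed by pods without baking credentials into images. This is a generic sketch; the name and value here are hypothetical, not part of our app:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret        # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: s3cr3t     # example value only
```

A container can then reference the secret through an environment variable (valueFrom: secretKeyRef) or a mounted volume, and the value can be rotated without rebuilding the container image.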

Kubernetes simplifies the deployment process for your application, and provides tools to make your application super robust.

With Kubernetes, you get rolling deployments with no downtime, service discovery, and the flexibility to change cloud providers easily.

— Dan Pastusek, Founder of Kubesail

Enough theoretical knowledge — now let’s dive in and see how it works in practice.

We’ll develop a starter Node.js server and deploy it to a Kubernetes cluster: starting from a very basic server, then building a Docker image from it, and finally deploying it to the Kubernetes cluster.


In order to start off this project, we need to have the following tools installed.

NodeJS Installation

Installation on a Mac or Linux

On a Mac, we’ll run these commands in the terminal; the steps for Linux distributions vary slightly.

Install Node.js and npm

We’re going to use Node Version Manager (nvm) to install Node.js and npm.

$ curl -o- | bash

Open the ~/.bash_profile file, and make sure source ~/.bashrc is written in there somewhere. Restart the terminal.

Run the install command.

$ nvm install node

Run the use command.

$ nvm use node
Now using node v8.2.0 (npm v5.3.0)

Now that Node.js and npm are installed, test them by typing node -v and npm -v.

$ node -v
$ npm -v

Installation on Windows

Installing everything on Windows is a breeze.

Install Node.js and npm

Node.js and npm can be installed from a download link. Go to the Node installation page, and download the Node installer. I have a 64-bit Windows 10 OS, so I chose that one.


Once it’s done, you can test to see both node and npm functioning by opening PowerShell (or any shell) and typing node -v and npm -v, which will check the version number.


All set.

Docker Installation

Installation on Mac

Installation on Windows

To verify installation

$ docker --version
Docker version 19.03.8, build afacb8b
$ docker ps ## Docker works fine if no error returns

Kubernetes Installation

Kubernetes must be running; if you’re working on your laptop or PC, Minikube must be installed and running.

Minikube installation

Verify Minikube

$ minikube version
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd

Kubectl Installation

kubectl version (If it shows both the client and server version you’re good to go)

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Notes: Keep in mind, if you’re using Docker Desktop you must enable its Kubernetes service as shown below

Enable Kubernetes

Step 1: Make A Separate Directory And Initialize The Node Application

First, we’ll initialize the project with npm (Node Package Manager)

$ mkdir nodejs
$ cd nodejs/
$ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items and tries to guess sensible defaults.
## Below is what you need to type in
Press ^C at any time to quit.
package name: (nodongo)
version: (1.0.0)
description: Basic NodeJS with Docker and Kubernetes
entry point: (index.js)
test command:
git repository:
author: Muhammad zarak
license: (ISC)
About to write to E:\Magalix\nodongo\package.json:

{
  "name": "nodongo",
  "version": "1.0.0",
  "description": "Basic NodeJS with docker and kubernetes",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Muhammad zarak",
  "license": "ISC"
}

Is this OK? (yes) yes

After running npm init, npm asks for some basic configuration info: your project name (ours is nodongo), then the version and the entry point, which is index.js (note: whenever the server starts, it looks for index.js to execute).

From here, you’ll have a file named package.json, which holds the relevant information about the project and its dependencies.

Step 2: Installing Express

Next, we’ll install Express through npm (Node Package Manager). The Express framework is used to build web applications and APIs:

$ npm install express --save

The above command installs the Express dependency in your project. The --save flag saves this dependency to package.json.
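After the install, package.json picks up a dependencies entry along these lines (the exact version will vary depending on when you run the command):

```json
"dependencies": {
  "express": "^4.17.1"
}
```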

Step 3: Make index.js File And Write Some Code

First, create a file named index.js in the root folder. Then we can write some code to test the application on the Kubernetes cluster:

vim index.js

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Users Shown");
});

app.get("/delete", (req, res) => {
  res.send("Delete User");
});

app.get("/update", (req, res) => {
  res.send("Update User");
});

app.get("/insert", (req, res) => {
  res.send("Insert User");
});

app.listen(3000, function () {
  console.log("listening on 3000");
});

On the first line, we import the Express module with a require call; calling express() returns an object that’s used to configure our application.

Then we register the routes and call app.listen with a callback to start listening on a specific port, port 3000 in our case. The insert, update, and delete routes don’t perform actual database CRUD operations; they’re implemented simply to have routes to check. The res.send() function returns the response from the server.

You can now start the server with the following command and check it by browsing localhost:3000/

$ node index.js
In browser, test localhost:3000

Step 4: Dockerizing The Node Server

Here comes the fun part — we have the code and the server is ready to deploy. But first, we have to build the image, and for that, we’ll have to write the Dockerfile.

vim Dockerfile

FROM node:13
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node index.js
EXPOSE 3000

Images are built in layers, and each step in the Dockerfile constructs one of these layers. Here’s what each step does:

  • We must start with the FROM keyword, which tells Docker which image to use as the base image. Here, we’re using Node version 13
  • WORKDIR tells Docker the working directory of our image (in our case /app). CMD and RUN commands execute in this folder
  • COPY copies files into the image; here, we’re copying the package.json file to /app
  • RUN executes a command in the working directory defined above. The npm install command installs the dependencies defined in the package.json we’ve just copied to the /app directory
  • Next, we copy the files in the root directory to the /app directory. Although COPY index.js /app would be enough to copy just index.js, we’re purposely doing it in a generic way because we want all of our files copied from the root into the app folder
  • CMD stands for command; here we’re running node index.js, as we saw at the beginning of this article, to start the Node.js server. We have index.js in the /app directory from the previous step, and we’re starting our server from it
  • EXPOSE 3000 informs users of this image that the container listens on port 3000

Next, from Dockerfile we’ll start building our image.

$ docker build -t node-server .

The docker build command creates an image from the instructions in the Dockerfile. The -t flag tags the image with our node-server name. Note the full stop at the very end, preceded by a space: it defines the build context, meaning we’re building from the current directory and its Dockerfile.
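One practical note (a common convention, not something the original setup requires): since COPY . /app copies the whole build context, a .dockerignore file keeps node_modules and logs out of the image; npm install inside the build recreates node_modules anyway:

```
node_modules
npm-debug.log
```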

Step 5: Create And Run The Container

Now we’ll run the container to ensure it works as intended.

$ docker run -d --name nodongo -p 3000:3000 node-server

Here we run a container using our Node.js image. The -d flag means the container runs in detached mode. The --name flag is optional; you can give your container any name. The -p flag maps ports: the first is the host port and the second is the container port. Finally, we specify which image to use to run the container, our node-server image. You can curl or browse localhost:3000 to test that it’s running.

Step 6: Upload The Image To The Docker Hub Registry

The image registry we’re using is Docker Hub. First, create an account, then create a repository with any name; we’ve named ours nodejs-starter. Now, let’s see the steps:

To create the repo:

Docker hub interface to create a repo

Provide a repo name as you wish
$ docker tag node-server lightninglife/nodejs-starter:1.1

Notes: Here node-server is the image we created previously, lightninglife is your Docker Hub account name, and nodejs-starter is the repository name you created

We’ve tagged our existing Docker image node-server as lightninglife/nodejs-starter so we can push it to Docker Hub.

$ docker push lightninglife/nodejs-starter:1.1

Now we’ve pushed our Docker image to the registry with docker push, tagging it with version 1.1. Versioning isn’t mandatory but is highly recommended, so you can roll back to a previous version rather than overwriting the latest build with a new one.

Notes: Below is an example of this best practice: providing a version tag like 1.1 versus providing no version tag

Version control vs No version control

Step 7: Start The Kubernetes Cluster

Whether you’re using Amazon EKS, Google Cloud GKE, or a standalone machine, just make sure your cluster is running.

We’re doing this lab on Minikube (used to run Kubernetes locally):

$ minikube start

This command spins up a single-node cluster, with the node serving as both master and worker.

Step 8: Define YAML File To Create A Deployment In Kubernetes Cluster

YAML is a human-readable data serialization language. In Kubernetes, it’s used to create objects in a declarative way.

vim deploy.yaml

apiVersion: apps/v1 #1
kind: Deployment #2
metadata: #3
  name: nodejs-deployment #4
spec: #5
  replicas: 2 #6
  selector: #7
    matchLabels: #7
      app: nodejs #7
  template: #8
    metadata: #9
      labels: #10
        app: nodejs #11
    spec: #12
      containers: #13
        - name: nodongo #14
          image: lightninglife/nodejs-starter:1.1 #15
          ports: #16
            - containerPort: 3000 #17

Notes: Keep in mind that in Kubernetes manifests, indentation must be written exactly as shown above. Otherwise, kubectl can’t parse the YAML file properly

Breakdown Of Our YAML file in order:
1 Describes which API version is used to create this object; for a Deployment, we’re using apps/v1

2 What kind of object you’re creating. In our case, it’s Deployment.

3 Metadata is used to organize the object.

4 The name of our Deployment is nodejs-deployment

5 Spec is used to define the specification of the object.

6 How many pods you want to deploy in the cluster under this Deployment. In our case, we want to deploy two pods running containers from our image.

7 The selector tells the Deployment which pods it manages: matchLabels must match the labels in the pod template below

8 The template is used to define how to spin up the new pod and the specification of the pod.

9 Metadata of the newly created pod with this Deployment

10 We have one label — key is app and value is nodejs

11 The labels of the freshly created pods

12 Spec defines the specification of how the containers will be created

13 Containers spec

14 Name of the container

15 The image that can be used by the container

16 Which port option to use

17 We’re using containerPort 3000

Step 9: Create Deployment In Kubernetes Cluster

As we’ve created the YAML file, we can go ahead and create a deployment from this YAML file.

$ kubectl create -f deploy.yaml

kubectl is the Kubernetes client used to create objects. With kubectl create you can create any object; -f indicates we’re using a file, and deploy.yaml is the file that will be used to create the object. You can check the Deployment with the following command:

$ kubectl get deploy,po
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nodejs-deployment   2/2     2            2           122m

NAME                                     READY   STATUS    RESTARTS   AGE
pod/nodejs-deployment-7cdc7b5cbb-kkb8r   1/1     Running   0          122m
pod/nodejs-deployment-7cdc7b5cbb-w7ptj   1/1     Running   0          122m

Given the output, we see that our Deployment and both pods are working fine.

Step 10: Expose The Deployment To The Internet

Next, we’ll go live through a Kubernetes Service object:

$ kubectl expose deployment nodejs-deployment --type="LoadBalancer"

This command creates a LoadBalancer service that exposes the Deployment to the internet.

kubectl expose is used to expose the Deployment named nodejs-deployment as a service of type LoadBalancer.
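For the record, kubectl expose is shorthand for creating a Service object; the equivalent could be declared in YAML with a sketch like this (the selector matches the app: nodejs label from our Deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs-deployment
spec:
  type: LoadBalancer
  selector:
    app: nodejs
  ports:
    - port: 3000
      targetPort: 3000
```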

$ kubectl get svc
kubernetes ClusterIP <none> 443/TCP 4h8m
nodejs-deployment LoadBalancer 3000:30804/TCP 117m

Note: At this point you won’t yet have an EXTERNAL-IP; in the next step we’ll see how to get an external IP on Minikube. Cloud platforms do provide load balancers, so there you should get an external IP automatically.

Here we have two services; the second one is the service we created. Once it has an external IP and port, visit <External_IP>:<PORT> to access your service. You can visit the different routes (/insert, /delete, /update) to see each one working.

Step 11: Using MetalLB In Your Minikube Environment

You can skip this step if you’re using a cloud provider for your cluster. On Minikube, you’ll notice that you don’t get an external IP, because the LoadBalancer type doesn’t work out of the box on Minikube. Here’s the workaround; follow these commands and you’ll start getting an external IP:

$ kubectl apply -f <metallb namespace manifest>
$ kubectl apply -f <metallb manifest>
## On the first install only
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

After that, run minikube ip:

$ minikube ip

This prints your Minikube IP. After this, we’ll create a ConfigMap for the address pool.

vim configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <start-ip>-<end-ip>   # a range in your Minikube subnet

Notes: Make sure you keep the indentation as shown above so kubectl can create the object from this YAML file properly

In this configuration, MetalLB is instructed to hand out addresses from the defined address pool. After that, we’ll create the ConfigMap in the metallb-system namespace.

$ kubectl create -f configmap.yaml

Next, we have to delete the svc and create the service again:

$ kubectl delete svc nodejs-deployment
$ kubectl expose deployment nodejs-deployment --type="LoadBalancer"

Once that’s done, you’ll get an external IP.

$ kubectl get svc
kubernetes ClusterIP <none> 443/TCP 4h12m
nodejs-deployment LoadBalancer 3000:30804/TCP 122m

Note: this workaround is only needed on Minikube; on cloud providers, the LoadBalancer service type is available natively in the Kubernetes cluster.


  • Node.js is a JavaScript runtime used to develop APIs and web applications
  • Docker delivers software in packages called containers; we leveraged this by building an image for our Node.js app with Docker
  • We used Kubernetes as our container orchestration tool to deploy and run these containers in a Minikube environment
  • Then we exposed the service to the internet
  • If using Minikube, you can get an external IP through MetalLB and Minikube itself


First and foremost, this whole project was done in around 20 minutes, which validates that Docker and Kubernetes are indeed great companions for deploying applications and orchestrating clusters.

Secondly, I’d like to stress the importance of installation. Without the right tools in place, we can’t complete the project in a timely and effective manner. Depending on the OS, each tool needs to be installed in the appropriate way. Also worth mentioning: verifying each tool after installation is pivotal, because you don’t want to go back to the installation process after spending tons of time stuck in a scenario caused by an improper installation.

As for deploying with Docker and Kubernetes, here are some tips I’d like to bring to the table. Docker allows every component to be deployed in containers, which provides flexibility and modularity. Kubernetes, on the other hand, offers orchestration and further customization on top of Docker.

All in all, this project showcases the power of Docker and Kubernetes when deploying and orchestrating in terms of DevOps operation.

