Developing and deploying a Node.js app from Docker to Kubernetes

Learn how to develop and deploy a Node.js app using containers and an orchestration engine

Paul Zhao
Paul Zhao Projects
Jun 8, 2020 · 16 min read


As demands for scaling and automation grow, the conventional method of deployment no longer meets the requirements of businesses in this information age. With that said, DevOps engineers dive deep and look for ways to streamline and automate the continuous deployment of code.

Docker has been widely adopted as a containerization tool that deploys applications with ease and guarantees the predictability and consistency of packaging. You can simply expect the software to behave similarly whether you're on a laptop or in the cloud.

However, as demands for scale and complexity emerge, Docker containers alone may not serve every need. That's when orchestration engines like Kubernetes come into play. Teams use Kubernetes as a higher-level abstraction to manage Docker container technology and further simplify the pipeline, enabling them to move faster.

We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure.

Teams who were previously limited to 1–2 releases per academic year can now ship code multiple times per day!

Chris Jackson, Director for Cloud Platforms & SRE at Pearson

Though most users may never see the traffic that online giants such as Google or Facebook handle, they may still need to accurately predict their infrastructure costs, or simply want to manage their systems more efficiently.

Why use containers?

  • Less overhead. Containers require less system resources than traditional or hardware virtual machine environments because they don’t include operating system images.
  • Increased portability. Applications running in containers can be deployed easily to multiple different operating systems and hardware platforms.
  • More consistent operation. DevOps teams know applications in containers will run the same, regardless of where they are deployed.
  • Greater efficiency. Containers allow applications to be more rapidly deployed, patched, or scaled.
  • Better application development. Containers support agile and DevOps efforts to accelerate development, test, and production cycles.
  • Improved security. Your container is isolated from other containers, so that someone shipping fish tanks won't slosh fish water 🐟 onto your bundle of firewood.

How Containers Work

The term container is truly an abstract concept, but three features can help you visualize exactly what a container does.

  • Namespaces. A namespace provides a container with a window to its underlying operating system. Each container has multiple namespaces that offer different information about the OS. An MNT namespace limits the mounted filesystems that a container can use; a USER namespace modifies a container’s view of user and group IDs.
  • Control groups. This Linux kernel feature manages resource usage, ensuring that each container only uses the CPU, memory, disk I/O, and network that it needs. Control groups can also implement hard limits for usage.
  • Union file systems. The file systems used in containers are stackable, meaning that files and directories in different branches can be overlaid to form a single file system. This system helps avoid duplicating data each time you deploy a new container (see the short sketch after this list).
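
As a rough illustration of the control group and union file system ideas, here is a minimal sketch, assuming Docker is installed (the nginx image is just an example):

    # Control groups: cap this container at 256 MB of RAM and half a CPU core
    docker run -d --name limited-demo --memory=256m --cpus=0.5 nginx

    # Union file system: list the stacked layers that make up the image
    docker history nginx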

There are two main components to container solutions: an application container engine to run images and a repository/registry to transfer images. These components are supported by the following:

  • Repositories. Repositories provide the reusability feature of private and public container images. For example, there are platform component images available for MongoDB and Node.js.
  • Container API. The API supports creating, distributing, running, and managing containers.
  • Container creation. Applications can be packaged into a container by combining multiple individual images, often images extracted from repositories.
To visualize the process, consider the short command-line sketch below.
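
A minimal sketch of that flow, assuming Docker is installed (image names are only examples):

    # Repositories: pull reusable platform component images from a public registry
    docker pull node:13
    docker pull mongo

    # Container creation: package an application on top of a base image
    docker build -t my-app .

    # Container engine/API: run the packaged application as a container
    docker run -d -p 3000:3000 my-app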

After the Postman engineering team reorganized into a microservice architecture, every service now uses Docker to configure their own environments. Every service owner defines their own Dockerfile from which an image is generated when new code is deployed as part of the CI/CD pipeline. The resulting image is pushed to the team’s container registry, and their Beanstalk environments are configured to pull the image from the registry to run the containers.

Every service gets the flexibility of configuring how to run its services. So service engineers can focus on building the application while platform engineers can focus on how to build and deploy automatically.

Docker takes over the responsibility of configuring the environment and standardising the deployment pipeline. This gives us faster deployment and scaling time because the build happens only once during CI.

— Saswat Das, Platform engineer at Postman

Why Kubernetes?

  • Service discovery and load balancing
    Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
  • Storage orchestration
    Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
  • Automated rollouts and rollbacks
    You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container (see the brief sketch after this list).
  • Automatic bin packing
    You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
  • Self-healing
    Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Secret and configuration management
    Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
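
As a brief, hedged sketch of the rollout and secret features above (the deployment, container, image, and secret names here are hypothetical):

    # Automated rollouts and rollbacks: move a Deployment to a new image, then undo if needed
    kubectl set image deployment/my-deployment my-container=my-image:2.0
    kubectl rollout status deployment/my-deployment
    kubectl rollout undo deployment/my-deployment

    # Secret management: keep sensitive values out of the container image
    kubectl create secret generic my-db-credentials --from-literal=password=s3cr3t
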
How Kubernetes master works

Kubernetes simplifies the deployment process for your application, and provides tools to make your application super robust.

With Kubernetes, you get rolling deployments with no downtime, service discovery, and the flexibility to change cloud providers easily.

— Dan Pastusek, Founder of Kubesail

Enough theoretical knowledge; now let's dive in deep and see how it's done in practice.

We'll develop a starter Node.js server and deploy it to a Kubernetes cluster, starting from a very basic server, then building the image with Docker and deploying it to the Kubernetes cluster.

Prerequisites

In order to start off this project, we need to have the following tools installed.

NodeJS Installation

Installation on a Mac or Linux

In order to install everything on a Mac, we'll be running commands in Terminal.app; on Linux, the exact commands vary by distribution.

Install Node.js and npm

We’re going to use Node Version Manager (nvm) to install Node.js and npm.

Open the ~/.bash_profile file, and make sure source ~/.bashrc is written in there somewhere. Restart the terminal.

Run the install command.
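
For example, to install the latest Node.js release (you could instead pin a specific version, such as 13, to match the Docker base image used later):

    nvm install node     # installs the latest Node.js release
    # or: nvm install 13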

Run the use command.
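
Then tell nvm which installed version the current shell should use:

    nvm use node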

Now that Node.js and npm are installed, test them by typing node -v and npm -v.

Installation on Windows

Installing everything on Windows is a breeze.

Install Node.js and npm

Node.js and npm can be installed from a download link. Go to the Node installation page, and download the Node installer. I have a 64-bit Windows 10 OS, so I chose that one.

Once it’s done, you can test to see both node and npm functioning by opening PowerShell (or any shell) and typing node -v and npm -v, which will check the version number.

All set.

Docker Installation

Installation on Mac

Install Docker Desktop for Mac from the Docker website.

Installation on Windows

Install Docker Desktop for Windows from the Docker website.

To verify the installation:
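
A quick way to verify the installation from any shell (hello-world is Docker's standard test image):

    docker --version
    docker run hello-world   # pulls and runs Docker's test image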

Kubernetes Installation

Kubernetes must be running; if you're using your laptop or PC, then Minikube must be installed and running.

Minikube installation

Verify Minikube

Kubectl Installation

Run kubectl version (if it shows both the client and server versions, you're good to go).
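
On a Mac with Homebrew, installing and verifying both tools might look like the sketch below (these are the standard Homebrew formulas; on other platforms, follow the official install guides):

    # install and verify Minikube
    brew install minikube
    minikube version

    # install and verify kubectl
    brew install kubectl
    kubectl version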

Notes: Keep in mind, if you're using Docker Desktop, you must enable its Kubernetes service as shown below

Enable Kubernetes

Step 1: Make A Separate Directory And Initialize The Node Application

First, we’ll initialize the project with npm (Node Package Manager)

After doing npm init, npm will ask for some basic configuration info, i.e., your project name (our project name is nodongo), then the version and the entry point, which is index.js (note: whenever the server starts, it looks for index.js to execute).

From here, you’ll have a file name package.json, which holds the relevant information about the project and dependencies.

Step 2: Installing Express

Next, we’ll install Express through npm (Node Package Manager). The Express framework is used to build a web application and API’s:

The above command installs the Express dependency in your project. The --save flag is used to save this dependency in package.json.

Step 3: Make index.js File And Write Some Code

First, create a file named index.js in the root folder. Then we can write some code to test the application on the Kubernetes cluster:

vim index.js
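
The original listing isn't reproduced here, so below is a minimal reconstruction consistent with the explanation that follows (the exact route names and response strings are assumptions):

    // index.js - a minimal Express server with placeholder routes
    const express = require('express');
    const app = express();

    app.get('/', (req, res) => res.send('Node.js server is running'));

    // placeholder routes; no real database CRUD happens here
    app.get('/add', (req, res) => res.send('add route'));
    app.get('/insert', (req, res) => res.send('insert route'));
    app.get('/update', (req, res) => res.send('update route'));
    app.get('/delete', (req, res) => res.send('delete route'));

    app.listen(3000, () => console.log('Listening on port 3000'));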

On the first line, we import the Express module using the require function; calling express() returns an object that's used to configure our application.

Then we use a callback function that starts listening on a specific host and port, i.e., port 3000 in our case. After that, we configure a few routes (insert, update, delete) that don't perform actual database CRUD operations but are there so we have routes to check. The res.send() function returns the response from the server.

You can now check the server by using the following command, and browsing localhost:3000/
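
Assuming the entry point above:

    node index.js
    curl localhost:3000/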

In browser, test localhost:3000

Step 4: Dockerizing The Node Server

Here comes the fun part — we have the code and the server is ready to deploy. But first, we have to build the image, and for that, we’ll have to write the Dockerfile.

vim Dockerfile
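
The Dockerfile isn't reproduced here, so below is a reconstruction that matches the step-by-step explanation that follows:

    # use the official Node.js 13 image as the base
    FROM node:13

    # set the working directory inside the image
    WORKDIR /app

    # copy package.json and install the dependencies it declares
    COPY package.json /app
    RUN npm install

    # copy the rest of the project (including index.js) into /app
    COPY . /app

    # start the server, and document the port it listens on
    CMD ["node", "index.js"]
    EXPOSE 3000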

Images are built in layers, and each Dockerfile step constructs one of these layers for us. Here, we'll walk through each step:

  • We must start with the FROM keyword, which tells Docker which image to use as the base image. Here, we're using Node version 13.
  • WORKDIR tells Docker the working directory of our image (in our case it is /app). CMD and RUN commands execute in this folder.
  • COPY copies files into the image; here, we're copying the package.json file to /app.
  • RUN executes a command in the working directory defined above. The npm install command installs the required dependencies defined in the package.json, which we've just copied to the /app directory.
  • Next, we copy the files in the root of the build context to the /app directory, where we're running all the commands. We do this so that our index.js file ends up in /app. Although we could simply COPY index.js /app, we're purposely doing it in a generic way because we want all of our data copied from the root to the /app folder.
  • CMD stands for command; here we're running node index.js, as we saw at the beginning of this article, to start the Node.js server. We have index.js in the /app directory from the previous step, and we're starting our server from that file.
  • EXPOSE 3000 informs users of the container (and of this image) that the application listens on port 3000.

Next, from Dockerfile we’ll start building our image.
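
The build command described below is:

    docker build -t node-server .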

The docker build command is used to create an image from the instructions given by the Dockerfile. The -t flag is used to tag the image with our node-server name. Notice the full stop at the very end, preceded by a space: it defines the build context, meaning we're building this image from the current directory and its local Dockerfile.

Step 5: Create And Run The Container

Now, we’ll then run the container to ensure it works as intended.

Here we run a container using our Node.js image. The docker run command starts the container; the -d flag indicates the container will run in detached mode. --name is optional; you can give your container any name. The -p flag is used to map the port on which our server is running: the first port is the host port, and the second one is the container port. Next, we specify which image to use to run the container, which is our node-server image. You can curl 127.0.0.1:3000 or browse to this address to test that it's running.

Step 6: Upload The Image To The Docker Registry (Docker Hub)

The image registry that we're using is Docker Hub. First, create an account, then create a repository with any name; we've named ours nodejs-starter. Now, let's see the steps:

To create the repo:

Docker hub interface to create a repo
Provide a repo name as you wish

Notes: Here, node-server is the image we created previously, lightninglife is your Docker Hub account name, and nodejs-starter is the repository name you provided
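
A sketch of the tag and push commands, using the names from the note above (substitute your own Docker Hub username):

    # tag the local image for Docker Hub with an explicit 1.1 version
    docker tag node-server lightninglife/nodejs-starter:1.1

    # log in and push the tagged image to the registry
    docker login
    docker push lightninglife/nodejs-starter:1.1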

We’ve tagged our existing docker image node-server to zarakmughal/nodejs-starter so we can push it to the docker hub.

Now, we’ve pushed our docker image to the registry by using a docker push and tagged it with the 1.1 version, and it’s not mandatory but highly recommended so you will roll back to the previous version and not override the latest build from the previous build.

Notes: Below is an example of this best practice, providing version 1.1 versus no version number at all

Version control vs No version control

Step 7: Start The Kubernetes Cluster

Whether you’re using amazon EKS, Google Cloud GKE, or standalone machine, just make sure your cluster is running.

We’re are doing this lab on Minikube (used to run Kubernetes locally):

This command will spin up the cluster; with Minikube's defaults, this is a single node that serves as both the master and a worker.

Step 8: Define YAML File To Create A Deployment In Kubernetes Cluster

YAML is a human-readable data serialization language. It's used in Kubernetes to create objects in a declarative way.

vim deploy.yaml
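
The manifest isn't reproduced here, so below is a reconstruction consistent with the breakdown that follows (the container name and image tag are assumptions carried over from the earlier steps):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nodejs-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nodejs
      template:
        metadata:
          labels:
            app: nodejs
        spec:
          containers:
          - name: nodejs
            image: lightninglife/nodejs-starter:1.1
            ports:
            - containerPort: 3000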

Notes: Keep in mind that in Kubernetes manifests, the indentation must be written exactly as shown above. Otherwise, kubectl can't parse the YAML file properly

Breakdown of our YAML file, in order:

1. The API version you're using to create this object (a Deployment); we're using apps/v1.

2. What kind of object you're creating. In our case, it's a Deployment.

3. Metadata is used to organize the object.

4. The name of our Deployment is nodejs-deployment.

5. Spec is used to define the specification of the object.

6. How many pods you want to deploy in the cluster under this Deployment. In our case, we want two pods running containers from our image.

7. The selector tells the Deployment which pods it manages; matchLabels must match the labels given to the pods in the template (app: nodejs).

8. The template is used to define how to spin up a new pod and the specification of that pod.

9. Metadata of the pods created by this Deployment.

10. We have one label: the key is app and the value is nodejs.

11. These are the labels of the freshly created pods.

12. Spec defines the specification of how the containers will be created.

13. The containers spec.

14. The name of the container.

15. The image to be used by the container.

16. The ports section.

17. We're using containerPort 3000.

Step 9: Create Deployment In Kubernetes Cluster

As we’ve created the YAML file, we can go ahead and create a deployment from this YAML file.

Kubectl is Kubernetes’ client which is used to create objects. With kubectl create, you can create any object -f indicates we’re using a file and deploy.yaml is the file that will be used to create an object. You can check Deployment with the following command:

Given the output, we see that our Deployment and both pods are working fine.

Step 10: Expose The Deployment To The Internet

Next, we’re going live through Kubernetes service object:

This will create a LoadBalancer service that exposes the Deployment to the internet.

kubectl expose is used to expose the Deployment named nodejs-deployment as a service of type LoadBalancer.

Note: At this point you won't yet have an EXTERNAL-IP; we'll see how to get an external IP for Minikube in the next step. Cloud platforms do provide load balancers, so there you should get an external IP right away.
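
You can list the services with the command below; the output shows the default kubernetes service plus the one we just created:

    kubectl get svc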

Here, we have two services. The second one is the service we've created; it has an external IP and port. Visit <External_IP>:<PORT> to access your service. You can visit the different routes (/add, /delete) to see each one working.

Step 11: Using MetalLB In Your Minikube Environment

You can skip this step if you're using a cloud provider for your cluster. If you're using Minikube, you'll notice that you don't get an external IP, because the LoadBalancer type doesn't work out of the box on Minikube. Here's the workaround; just follow these commands and you'll start getting an external IP:
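
At the time this was written, a common way to install MetalLB was to apply its manifests directly, roughly as sketched below (the v0.9.3 version is an example; check the MetalLB docs for the current release, and note that newer Minikube versions also ship a metallb addon):

    # create the metallb-system namespace and deploy MetalLB
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

    # on first install only: create the secret used by MetalLB's speakers
    kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"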

After that, run minikube ip:
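
That is simply:

    minikube ip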

Here, you’ll get your minikube IP — ours is 192.168.64.2. After this, we’ll create a config map for the address pool.

vim configmap.yaml
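
A reconstruction of the ConfigMap, using MetalLB's layer 2 configuration format and the address range described below:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.79.61-192.168.79.71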

Notes: Make sure you keep the indentation as shown above so that kubectl can create this YAML object properly

In this configuration, MetalLB is instructed to hand out addresses from 192.168.79.61 to 192.168.79.71 (choose a range in the same subnet as your Minikube IP). After that, we'll create the config map in the metallb-system namespace.
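
Applying it:

    kubectl create -f configmap.yaml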

Next, we have to delete the svc and create the service again:
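
For example, reusing the service and Deployment names from the earlier steps:

    kubectl delete svc nodejs-deployment
    kubectl expose deployment nodejs-deployment --type=LoadBalancer --port=3000
    kubectl get svc   # the EXTERNAL-IP column should now show an address from the pool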

Now that’s done, you’ll be getting External IP.

Note: this workaround is only needed on Minikube; on cloud providers, the LoadBalancer service type is available on the Kubernetes cluster out of the box.

TL;DR

  • Node.js is a JavaScript runtime used to develop APIs and web applications
  • Docker delivers software in packages called containers; we leverage this by developing the Node.js app and building its image with Docker
  • We use Kubernetes as our container orchestration tool to deploy and run these containers in a Minikube environment
  • Then we expose the service to the internet
  • If you're using Minikube, you can get an external IP through Minikube itself (with MetalLB)

Conclusion:

First and foremost, this whole project was done in around 20 minutes, which validates the claim that Docker and Kubernetes are "great companions" for deploying applications and orchestrating clusters.

Secondly, I’d like to stress out the importance of installation. Without having right tools in place, we’ll not be able to complete our project in a timely and effective manner. Based on different OS, we need to install each and every tool in an appropriate fashion. What is also worth of mentioning is that verification of tools after installation is pivotal because you don’t want to go back to installation process after spending tons of time getting stuck in a scenario due to improper installation.

In terms of deploying with Docker and Kubernetes, here are some tips I'd like to bring to the table. Docker allows every element to be deployed in containers, which provides more flexibility and modularity. Kubernetes, on the other hand, offers more customization and orchestration on top of Docker.

All in all, this project showcases the power of Docker and Kubernetes for deploying and orchestrating applications in DevOps operations.
