Part-1: A Beginner's Practical Guide to Containerisation and Chaos Engineering with LitmusChaos 2.0

Neelanjan Manna
May 20 · 15 min read

Part-1: Containers 101, Deploy a Node.js App Using Docker and Kubernetes

This blog is part one of a two-part series that details how to get started with containerization using Docker and Kubernetes for deployment, and later how to perform chaos engineering using LitmusChaos 2.0. Find Part-2 of the blog here.

So you’ve just come across this term called containerization, and as an aspiring software product engineer or a DevOps engineer, you’re wondering what role it will play in your day-to-day work. After all, applications can be deployed without containers; in fact, that was the norm for a long time until containerization technologies like Docker and Kubernetes came into the big picture. So what’s all the fuss about?

In this blog, I’ll try to answer all of these questions: what containers are, how they came into the limelight, why one should use them, what container orchestration is, and what its advantages are. Finally, we will deploy a Node.js application using Docker and Minikube Kubernetes. Please note that this blog puts more emphasis on the practical aspects of using containers, and won’t cover the basic theoretical concepts of Docker and Kubernetes at large, but only those concepts which are necessary to understand the demo.

What are Containers?

As Docker defines it:

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Simply said, containers allow applications to run in an isolated environment of their own, along with all its dependencies. This decoupling makes it simple and consistent to bundle and deploy container-based applications along with all their dependencies, regardless of whether the target environment is a private data center, the public cloud, or even a developer’s personal laptop.

Difference between Containers and Virtual Machines

In conventional virtualization, a hypervisor virtualizes the physical hardware. As a result, each virtual machine contains a guest OS, a virtual copy of the hardware that the OS needs to run, and an application with all of its related libraries and dependencies. Multiple virtual machines running different operating systems can coexist on the same physical server. A Windows VM, for example, can sit alongside a Linux VM, and so on.

Containers virtualize the operating system (typically Linux or Windows) rather than the underlying hardware, so each container contains only the application and its libraries and dependencies. Containers are small, fast, and portable since, unlike virtual machines, they do not need a guest OS in every instance and can instead rely on the host OS’s features and resources.

Containers, like virtual machines, allow developers to make better use of a physical machine’s CPU and memory. Containers go even further, though, because they enable microservice architectures, which allow for more granular deployment and scaling of application components. This is a more appealing option than scaling up an entire monolithic application just because a single component is experiencing load.

Why we should Use Containers

  1. Faster time to market: Staying competitive requires shipping new software and services quickly. Containerization gives organizations the agility to accelerate the delivery of new services.
  2. Deployment velocity: Containerization enables a quicker move from development to production. It allows DevOps teams to reduce deployment times and increase deployment frequency by breaking down barriers between teams.
  3. Reduction of IT infrastructure: Containerization increases workload density, improves the utilization of your servers’ compute capacity, and cuts software licensing costs, saving money.
  4. Performance in IT operations: Containerization allows developers to streamline and automate the management of multiple applications and resources into a single operating model to improve operational performance.
  5. Obtain greater freedom of choice: Any public or private cloud can be used to package, ship, and run applications.

What is Container Orchestration

As VMware defines it:

Container orchestration is the automation of much of the operational effort required to run containerized workloads and services. This includes a wide range of things software teams need to manage a container’s lifecycle, including provisioning, deployment, scaling (up and down), networking, load balancing and more.

Running containers in production can easily become a huge effort due to their lightweight and ephemeral nature. When used in conjunction with microservices, which usually run in their own containers, a containerized application will result in hundreds or thousands of containers being used to construct and run any large-scale system.

If handled manually, this adds a lot of difficulty. Container orchestration, which offers a declarative way of automating much of the work, is what makes this organizational complexity manageable for development and operations, or DevOps. This makes it a natural match for DevOps teams and cultures, which aim for much greater speed and agility than conventional software development teams.

Advantages of Container Orchestration

  1. Improved Resilience: Container orchestration software can improve stability by automatically restarting or scaling a container or cluster.
  2. Simplification of Operations: The most significant advantage of container orchestration, and the primary explanation for its popularity, is simplified operations. Containers add a lot of complexity, which can easily spiral out of control if you don’t use container orchestration to keep track of it.
  3. Enhanced Security: Container orchestration’s automated approach contributes to the protection of containerized applications by reducing or removing the risk of human error.

Demo: Deploy a Node.js App Using Docker and Kubernetes

Let’s get our hands dirty by deploying a simple Node.js application using a Docker container, followed by deploying the container image in a Minikube Kubernetes cluster in our own development machine.

Before we move on to the actual demo, let's check a few pre-requisites off the list so that we will all be on the same page:

  1. Node.js version 10.19.0
  2. Docker version 20.10.6
  3. Minikube version 1.20.0
  4. Virtualbox 6.1.6
  5. Kubectl

It's worth mentioning that I will be using a machine running on Ubuntu 20.04 for this demo, though you should be fine with a Windows machine too. For this demo, we’ll not cover the installation part of Docker and Minikube since they are pretty straightforward and require no special instruction, and focus on the deployment part only.

1. Node.js Application

Let’s start with a very basic Node.js “Hello World” application for this demo. The application has been developed like any other Node.js application, after initializing an empty repository using npm init. The generated package.json is as follows:
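A package.json generated by npm init with the default prompts looks something like this (the name, version, and other field values here are assumptions for this demo):

```json
{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```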

Once that’s done, we can set up our index.js as follows:

Here we have a pretty basic Node.js server, and all it does is serve the string ‘Hello World’ when a GET request is sent to the loopback address, at port 3000. Upon executing the above code using node index.js, we obtain the following output:

And we do get our Hello World at http://127.0.0.1:3000:

Simple, right? Once that’s done, let’s move on to the sweet part, creating a container image of our application using Docker.

2. Docker

Once we have our application up and running, we can proceed to dockerizing it. But before we do that, let us add a startup script to our package.json so that our application can be readily executed by Docker. Therefore, we’d modify our package.json as follows:
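A package.json with such a start script might look like this (the package name and version are assumed for illustration):

```json
{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```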

Now we are all set to dockerize our application! To do that we simply need to create a Dockerfile in the same directory as our package.json. A Dockerfile is simply a set of instructions required for building the container image of the application. Our Dockerfile looks something like this:
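A Dockerfile along these lines matches the walkthrough below (the node image tag and the /app working directory are assumptions for this sketch):

```dockerfile
# Base Image: the official Node.js image (tag assumed for illustration)
FROM node:14

# Working directory inside the container (path assumed)
WORKDIR /app

# Copy package.json first so the dependency-install layer can be cached
COPY package.json /app

# Install the app dependencies listed in package.json
RUN npm install

# Copy the rest of the application source
COPY . /app

# Run the app via the start script when the container starts
CMD ["npm", "start"]
```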

Let us walk through each of these commands to better understand their purpose.

The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions. A Base Image is simply a Docker image that has no parent image, which is created using the FROM scratch directive. As such, a valid Dockerfile must start with a FROM instruction. Here we are using the node Base Image.

The WORKDIR command is used to set the working directory for all the subsequent instructions. If the directory is not manually created, it gets created automatically during the processing of the instructions. It does not create new intermediate image layers. Here we set our working directory as /app.

The COPY instruction copies new files or directories from a source path and adds them to the filesystem of the container at a destination path. Here we are copying the package.json file to the /app directory. Interestingly, we don’t copy the rest of the files just yet. Can you guess why? This is because we’d like Docker to cache the first three instructions so that every time we rebuild the image we won’t need to execute them again, and thus improve our build speed.

The RUN instruction can be written in two forms: the shell form, as shown here, or the exec form. RUN will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile. To install the Node.js app dependencies listed in the package.json file, we use the RUN npm install command here.

Next, we use COPY again to move all the remaining files to the /app directory.

Finally, we use the CMD instruction to execute the Node.js application, here via npm start. The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case we must specify an ENTRYPOINT instruction as well. There can only be one CMD instruction in a Dockerfile. If we list more than one, only the last CMD will take effect.

And we’re done with the Dockerfile. Now we’re all set to build our container image, which can be done using the following command:

docker build -t hello-world:1.0 .

The only important takeaway here is that we MUST tag our image via the -t flag, as it’d be very beneficial for us to deploy and manage the container image later on. Here we have specified the name of our container image as hello-world and tagged it with its version, 1.0. We had seen a similar practice with our Base Image as well. Lastly, we specified the directory where the Dockerfile is located using the . path.

Upon building the image, Docker tries to fetch the Base Image if it’s not present in the local registry. Next, it executes the set of instructions given in the Dockerfile in sequential order to complete the build process. Here’s the build output:

Upon a successful build, we can view our image using the command docker images:

Notice the node image is also present here since we have used it as our Base Image. Now we’re all set to run our application. Let’s run our Docker container using the following command:

docker run -it -p 3000:3000 hello-world:1.0

Here we’re specifying that we want to run the container image in interactive mode using the -it flag. Further, we specify the port mapping of our container using the -p flag, where we direct that port 3000 of the host machine is mapped to port 3000 of the container. Finally, we specify the name and tag of the image to be run. Hence we obtain:

Thus, we have successfully deployed a container image of our application using Docker, which we can verify by visiting http://127.0.0.1:3000:

Further, we can run the container in detached mode by specifying the -d flag in place of -it in the previous command:

The long alphanumeric string output is nothing but the long UUID identifier of the running container. We can further inspect the properties of this container using the command docker inspect <container-id>:

Amazing! We have just deployed our very first Docker container and now we’re all set for our next destination: Kubernetes.

3. Kubernetes

As the container image of our application is now ready, all that’s left is to deploy our container image to a Kubernetes deployment. We’d use a Minikube cluster for the deployment of our container image locally.

It's important to take note that a Minikube cluster has only a single node, which will be created using a virtual machine on our own development machine.

To start Minikube, we can use the command minikube start. It's worth pointing out that a minimum of 2 CPUs, 2GB RAM, and 20GB disk space is required for starting a Minikube cluster using this command. One may check the number of processing units in their machine using the command nproc:

Once the Minikube cluster starts up, you’d get the following output in the terminal:

Now that our Minikube cluster is up and running, let’s devise a deployment strategy for our container image.

Currently, our container image is stored locally in our own Docker Local Registry which is present in our machine. The Registry is nothing but a stateless, highly scalable server-side application that stores and allows the distribution of Docker images. So, either we can use the image directly from the Local Registry to deploy it in the Minikube server, or we can first push our image to a Hosted Registry such as Docker Hub and later pull it for the image deployment. The latter is a more suitable approach when working in a team.

The former approach, however, has a subtle catch. Minikube comes with its own Docker ecosystem when we install it on our machine. If we build Docker images with our machine’s Docker daemon and try to use them in a Kubernetes deployment, the pod will fail to start with an ErrImageNeverPull error, since Minikube always tries to get the image from its own Local Registry (or from Docker Hub), where our image doesn’t exist.

To verify this, let’s do a small experiment. We can still view our Local Registry images using the command docker images:

Now, let’s try to access the Docker images of Minikube. To do that we need to first SSH into the Minikube VM, using the command minikube ssh:

Now that we are inside the Minikube VM, we can again use the docker images command, and this time the hello-world image and the node image are nowhere to be found:

The SSH session can be exited using the exit command.

To get around this issue, we have two options. Either we can push our image first to a Hosted Registry and then pull it into the Minikube VM’s Docker, or we can directly build our Docker image using Minikube’s Docker daemon. Let’s try to deploy our container image using the second approach.

Firstly, we need to set the environment variables using the eval command, as eval $(minikube docker-env). This will allow the Docker daemon of Minikube to be used for the subsequent commands. You can confirm that Minikube’s Docker is now being used by running the docker images command again; this time you’d find the images from Minikube’s Docker:

Next, build the Docker image as you’d normally do, using the command:

docker build -t hello-world:1.0 .

Now that our Docker image is in the right Registry, we can proceed towards the actual deployment. We can deploy our image either using a deployment manifest file or by using the kubectl create deployment command directly. The first approach is better since it gives us much more flexibility in specifying our exact Pod configuration.

Let’s define our deployment.yaml manifest:
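A deployment manifest along these lines matches the description that follows (the deployment name hello-world and the label names are assumptions; the image and tag must match the one built earlier):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  # Exactly one Pod for now; can be scaled later
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hello-world:1.0
          # Use the image from Minikube Docker's Local Registry; never pull
          imagePullPolicy: Never
          ports:
            # The Node.js app listens on port 3000
            - containerPort: 3000
```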

A few interesting observations are to be made here. Notice that replicas has been set to 1 for the time being. This means that there will always be exactly one Pod for our deployment under the present configuration. The imagePullPolicy is set to “Never” as the image is expected to be fetched from Minikube Docker’s Local Registry. Finally, under ports, the containerPort has been set to 3000 because our Node.js application listens on that port.

Now, let’s create the container deployment using the command kubectl apply -f deployment.yaml, assuming that the terminal is open in the directory where the deployment.yaml file is present. We get the following output:

Woohoo! We just deployed our container image in the Kubernetes cluster! To verify our deployment, we can see all the deployments in our cluster using the command kubectl get deployments:

As we can see, our deployment is successfully created. We can also inspect the pods associated with this deployment using the command kubectl get pods:

As per our manifest file, only one Pod is created. Still, we can increase or decrease the number of Pod replicas in our deployment using the command kubectl scale deployment hello-world --replicas=3. This command will create 2 more Pods which will be exact replicas of the Pod that we have created:

We can verify the newly created Pods by again using the command kubectl get pods:

Though we have deployed our container image, still we can’t access the application just yet, for the lack of a Service. As we know, Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. For our purpose, we’d create a NodePort Service to be able to access our deployment. A NodePort exposes the Service on the same port of each selected Node in the cluster using NAT.

Let’s define a service.yaml manifest:
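A NodePort Service manifest along these lines matches the description below (the Service name is an assumption; the selector labels must match the deployment’s Pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  # Select the Pods created by our deployment
  selector:
    app: hello-world
  ports:
      # port: the port exposed by the Service
      # targetPort: the port the Node.js app listens on
    - port: 3000
      targetPort: 3000
```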

A few points to notice: under ports, we have defined the port and targetPort in relation to the Service itself, i.e. port refers to the port exposed by the Service and targetPort refers to the port used by our deployment, i.e. the Node.js application.

To create this Service we’d use the command kubectl apply -f service.yaml, assuming that the terminal is open in the directory where the service.yaml file is present. We get the following output:

We just created our NodePort Service, which we can verify using the command kubectl get services:

As evident, we have successfully created a NodePort Service. Noticeably, it doesn’t have an external IP, so we can use port-forwarding to access our deployment at a specified port, using the command kubectl port-forward service/hello-world 3000:3000:

And now if we go to http://127.0.0.1:3000:

Congratulations! You have just deployed a containerized application using Docker and Kubernetes by adhering to the best practices. Though we have touched only the tip of the iceberg, I hope this demo has made you a little bit more familiar with the containers, and how Docker and Kubernetes can be used for containerizing and deploying applications.

In the next part of this series, we’d explore the world of Chaos Engineering using LitmusChaos! We’d understand the core principles of Chaos Engineering and witness how LitmusChaos performs Kubernetes-native chaos engineering for attaining unparalleled resiliency in our Kubernetes applications. Find Part-2 of the blog here.

With that, I’d like to welcome you to the world of containers and chaos engineering. Come join me at the Litmus community to contribute your bit in developing chaos engineering for everyone. Stay updated on the latest Litmus trends through the Kubernetes Slack channel (Look for #litmus channel).

Don’t forget to share these resources with someone who you think might benefit from them. Thank you. 🙏


Litmus is a toolset to do cloud-native chaos engineering. Litmus provides tools to orchestrate chaos on Kubernetes to help SREs find weaknesses in their deployments. Fixing the weaknesses leads to increased resilience of the system.

Neelanjan Manna

Written by

SE-1 @ ChaosNative | Full-Stack Development, Data Science, ML, Cloud | In permanent beta; learning, improving, and evolving through experience and passion