Docker! Setup once, Run anywhere.

Varun Bhaya
Mindful Engineering
6 min read · May 22, 2020

Alright!! Let’s talk Docker.
Docker for me, Docker for you, Docker for everyone.
🤓

Let’s start with an example: I built an application with all of the dependencies and packages the project required, and it ran successfully on my Machine-A. After a while, I decided to send the application’s code to another developer so that he could help me out as well.
But as soon as the other developer tried running the project on his Machine-B, the build failed and kept throwing package and dependency errors, because the versions on his machine were different from those on my Machine-A.

I am sure every developer has faced such issues at least once, and hours or even days get spent fixing those dependency conflicts.

How do we avoid such conflicts, so that every time we change our Machine-A/B/C or on-board a new developer, we don’t run into these issues? How do we make our application independent of the host platform while still being able to utilize all of its underlying resources, such as CPU, RAM, etc.?

This is where Docker comes in.
In simple terms, Docker is a packaging tool. It helps us set up an environment best suited to our application: we describe that environment in a Dockerfile, listing out all of the dependencies, environment variables, and packages with specific versions; Docker then builds an image from it and bundles everything into an isolated container that can run anywhere, without us worrying about which machine it is running on.

Dockerfile, Image, Container? Dude? What are you even talking about?

Okay, Okay! First, let’s understand what Docker does and how it helps us. Consider the following illustration:

Let’s break it down one by one, so that it doesn’t feel like rocket science ✅ :

Dockerfile

A Dockerfile is a human-readable file with a set of commands that lists out the dependencies, packages, environment variables, and commands needed to run our application, the ports our application will be exposed on, etc.

Images

A Docker image is a read-only template from which we create our containers. Now, what actually goes into an image? When Docker builds an image, it reads the Dockerfile commands, downloads all the dependencies we need from the Docker registry, sets up the filesystem we need, and installs all of those defined dependencies.

For example: Setting up Ubuntu and installing Apache and NodeJs to help us run our application internally.

Containers

An image becomes a container at runtime.

Docker containers contain everything from the source code to the environment we need to run and interact with our application. Each container gets its own filesystem, RAM, CPU, network resources, and security settings, independent of what is installed on the host.

Like a virtual machine, right? No. No. No. Containers are not virtual machines; they are actually lightweight wrappers around a single UNIX process. A virtual machine is a full-blown system, often gigabytes in size, running on the host machine through a hypervisor. Containers are small and lightweight because a container is just a reference to a layered filesystem image plus some metadata about its configuration.

It runs a discrete process, taking no more memory than any other executable, making it lightweight.

“Hey! Enough chit-chat, where’s the code? And how will Docker help me make my application independent of the platform?” 🔥🔥🔥

Alright! Let’s create a basic NodeJs server application and use Docker so that we are able to run it on any host machine.

Pre-requisites:
Make sure you have NodeJs and Docker Desktop installed on your local computer.

Our approach can be broken down in the following order:

  1. Create a NodeJs server application
  2. Create a Dockerfile for our NodeJs application
  3. Build a docker image from Dockerfile
  4. Create a container from the image and run that container
  5. Make requests to the running isolated container and test the application

Step 1: Create a NodeJs server application

Terminal commands
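(The original terminal screenshot isn’t reproduced here, so below is a sketch of the setup commands. The folder name `node-server` and the `package.json` fields are assumptions; only the `"start": "node server.js"` script is confirmed by the container output later in the article.)

```shell
# Create the project folder and a minimal package.json by hand
# (equivalent to running `npm init -y` and adding a start script).
mkdir -p node-server && cd node-server

cat > package.json <<'EOF'
{
  "name": "node-server",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  }
}
EOF
```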

Once we have the folder all set up, write the following code in the server.js file:

Run `npm start` in the terminal and you will see the output below.
You can even try making a request from your browser via the URL: http://localhost:8080. What did you see? 🤓

Server running successfully!

Step 2: Create a Dockerfile for our NodeJs application

The Dockerfile lists out the series of commands we usually run every day to start the application locally. You can refer to the official Dockerfile reference for more detail about each command.
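The article’s Dockerfile is shown as an image; a typical version for this setup might look like the following. The `node:12` base image is an assumption (use whatever version matches your project); the `/app` working directory matches the path seen in the container’s output later:

```dockerfile
# Base image with NodeJs pre-installed (version is an assumption)
FROM node:12

# Working directory inside the image
WORKDIR /app

# Copy the dependency manifest first so the npm install layer is cached
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# The port our server listens on inside the container
EXPOSE 8080

# Start the app the same way we do locally
CMD ["npm", "start"]
```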

Step 3: Build a docker image from Dockerfile

Building a Docker image means pulling all the required dependencies for the environment we need, by reading the Dockerfile commands.

An image can be built using the following command:

$ docker build -t node-server .

Notice that the last “.” at the end is important: it tells the build command where the Dockerfile is located. Since we are currently inside the server’s root folder, the same folder as the Dockerfile, the path is simply “.”.

You will see something like the following when you run the command:

Building image (Downloading resources)

Once all the resources are downloaded and the image is built successfully, the created images can be listed via:

$ docker images
------------------------
REPOSITORY    TAG     IMAGE ID       ...  SIZE
node-server   latest  d73c93a65880   ...

Alright, we have our image in place; the next step is to create our container from the image.

Step 4: Create a container from the image and run that container

Again to remind,

Images become containers at runtime.

The container will run the application in an isolated environment, independent of our host system’s dependencies, and we can make requests to the application using the port the container exposes.

To run our container we use the docker run command. But we also need to be able to make requests to our container and the application running inside of it from our browser/client-side applications.

Execute the following command in your terminal:

$ docker run -p 3000:8080 node-server

The command can be interpreted as:
docker run -p (port flag) [EXTERNAL_PORT]:[INTERNAL_PORT] [IMAGE_NAME]

> node-server@1.0.0 start /app
> node server.js
Server started!

Step 5: Make requests to the running isolated container and test the application

Try making the request to the application from the browser using the following URL: http://localhost:3000/
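Or, equivalently, from the terminal (assuming the container from the previous step is still running):

```shell
# Query the app through the published host port
curl http://localhost:3000/
```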


HEY, FINALLY!! 🥳 🎉

We have a running application, fully isolated: not running from our local folder, but running independently inside a container. 😃

And now that our application can run on its own, in isolation, we can share it with other developers and teams via two possible approaches:

  1. Send the source code, and they follow the same process and run containers inside their own local Machine-A/B/C. No Issues.
  2. Upload our source code and container images to Docker Hub and teams can pull container images directly from their terminal and run applications in their own machine, without worrying about the dependencies.
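The second approach can be sketched as follows (the `<username>` placeholder is an assumption for your Docker Hub account; pushing requires `docker login` first):

```shell
# Tag the local image with your Docker Hub namespace and push it
docker tag node-server <username>/node-server
docker push <username>/node-server

# On any other machine: pull the image and run it directly
docker pull <username>/node-server
docker run -p 3000:8080 <username>/node-server
```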

Note:

Of course, Docker is much more than what we utilized here for our needs. But this article is aimed at people who are curious about Docker, want to get their heads around what it is, and want to use it to ease the deployment process in particular.

Let me know your thoughts in the comments below!

Also, check out some of the amazing work that we do at MindInventory and how we help build awesome products for people around the world. 😀

Until later, Folks!


Varun Bhaya
Mindful Engineering

Full Stack Developer | JavaScript, Python, NodeJs, Docker.