# OneStop: Everything about Docker

Anand Prakash
7 min read · Sep 2, 2023


What is Docker?

Docker is a platform for developing, shipping, and running applications in containers.

Containers are lightweight, standalone, and executable packages that contain everything needed to run an application, including the code, runtime, system tools, and libraries.

Why Docker? Are there any alternative solutions?

Elimination of “it works on my machine” problem — application runs consistently across different environments, from development to production.

Efficiency — Containers are lightweight and start quickly, making them efficient for deploying and scaling applications, especially in cloud environments.

Portability — Docker containers can run on any platform that supports Docker.

Version Control — Docker allows us to version control container images.

Ecosystem — Docker has a rich ecosystem of tools and services for container orchestration (e.g., Kubernetes), continuous integration, and deployment (CI/CD), monitoring, and more.

→ Some alternatives to Docker are Podman, rkt, LXC, containerd, and Singularity. We can choose among them based on our requirements.

Let’s see how it works in detail:

Architecture of Docker

Docker follows a client-server architecture.

Docker Client → The Docker client is the command-line interface (CLI) or graphical user interface (GUI) that allows users to interact with Docker. It sends commands to the Docker daemon to perform various tasks like building images, running containers, and managing Docker resources.

Docker Daemon → Also known as the Docker Engine, the daemon is a background service running on the host system. It listens for Docker API requests from the Docker client and manages Docker containers, images, networks, and volumes. It handles container execution, storage management, and networking.

Docker Images → Docker images are templates or blueprints for creating containers. They consist of a read-only file system snapshot, executable code, and application dependencies. Images are used to instantiate containers. Images can be stored in Docker Hub or other container registries.

Docker Container → A container is a runnable instance created from a Docker image. Containers encapsulate an application and its environment, including the necessary libraries and settings, in an isolated space. They run in isolation from other containers and share the host machine’s kernel.

Docker Registry → A Docker registry is a repository for storing and distributing Docker images. Docker Hub is the default public registry, but we can use private registries for security and control. Images can be pulled from registries to run containers or pushed to share with others.
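As a sketch of the registry workflow described above (the image and account names below are illustrative, and the commands assume a running Docker daemon and a Docker Hub login):

```shell
# Pull an image from Docker Hub (the default registry)
docker pull openjdk:11-jre-slim

# Tag a local image with your registry account/repository name
docker tag your-app youraccount/your-app:1.0

# Push the tagged image so others can pull it
docker push youraccount/your-app:1.0
```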

Docker Network → Docker provides networking capabilities to connect containers with each other and with the external world. It offers different network modes, such as bridge, host, and overlay, to control how containers communicate.

Docker Volumes → Docker volumes are used to persist data generated by containers. They are separate from the container’s file system and can be shared among multiple containers. Volumes are essential for storing application data that needs to survive container restarts or removal.
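A sketch of how the network and volume features above look in practice (names like my-net and my-data are illustrative; a running Docker daemon is assumed):

```shell
# Create a user-defined bridge network and a named volume
docker network create my-net
docker volume create my-data

# Run a container attached to the network, with the volume mounted at /data;
# data written to /data survives container restarts and removal
docker run -d --network my-net -v my-data:/data --name app1 yourDesiredImageName

# Containers on the same user-defined network can reach each other by container name (e.g., app1)
docker network ls
docker volume ls
```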

Let’s set up Docker from scratch.

1. Install both the Docker Engine and Docker CLI → Once installed, verify the installation by checking the version from the command line: docker --version

2. Create a Dockerfile → In our project directory, create a Dockerfile. This file specifies how to build a Docker image for our application. It includes instructions to set up our application’s environment, dependencies, and code. For example, for a Java backend with Spring Boot we can create a Dockerfile like this:

# Use an official OpenJDK runtime as a parent image
FROM openjdk:11-jre-slim

# Set the working directory in the container
WORKDIR /app

# Copy the Spring Boot JAR file into the container
COPY target/your-app.jar app.jar

# Expose the port your Spring Boot app listens on (default is 8080)
EXPOSE 8080

# Define the command to run your Spring Boot application
CMD ["java", "-jar", "app.jar"]

Similarly, for the front end, suppose we are using React; then we can create a Dockerfile like this:

# Use an official Node.js runtime as a parent image
FROM node:14

# Set the working directory in the container
WORKDIR /app

# Copy package.json and package-lock.json to the container
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the React app code to the container
COPY . .

# Build the React app for production
RUN npm run build

# Install a lightweight static file server; `npm start` would launch
# the development server rather than serve the production build
RUN npm install -g serve

# Serve the production build on port 3000
EXPOSE 3000
CMD ["serve", "-s", "build", "-l", "3000"]

The above are just example Dockerfiles for a Spring Boot app and a React app.
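As a hedged refinement of the React example (not part of the original setup), a multi-stage build keeps the final image small by copying only the static build output into an nginx image; the stage name and paths here are illustrative:

```
# Stage 1: build the static assets
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the built assets with nginx (much smaller final image)
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```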

3. Build a Docker Image → Now go to the project directory containing the Dockerfile and run the following command to build a Docker image:

docker build -t yourDesiredImageName .

4. Run a Docker Container → Once the image is built, run a Docker container from it using the command below:

docker run -p 8080:8080 -d yourDesiredImageName

This runs the container in detached mode (-d) and maps host port 8080 to container port 8080. Various other modes can be used here, such as interactive mode (-it). Detached mode means Docker runs the container in the background, so it won’t block the terminal or command prompt from which we started it.

5. Access our Application → The application can now be accessed at http://localhost:8080

6. Git Integration → With the above steps done, it’s time to integrate Git with our application. We have to include the Dockerfile and a .dockerignore file in our Git repository to ensure that the Docker image can be built from our code.

→ Creating a .dockerignore file is similar to creating a .gitignore file in our Git repo. It allows us to specify which files and directories should be excluded when building a Docker image.

→ You can add these files locally and then commit them to the Git repo.
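As a sketch, a typical .dockerignore for a Node.js project might look like this (the entries are illustrative; adjust them to your project):

```
node_modules
build
.git
.gitignore
Dockerfile
*.log
.env
```

Excluding node_modules and build keeps the build context small and ensures dependencies are installed fresh inside the image.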

(Diagram: CI/CD and Git integration workflow)

7. Continuous Integration (CI) → We can set up a CI/CD pipeline using tools like Jenkins, Travis CI, CircleCI, or GitHub Actions to automate the building and deployment of Docker images whenever changes are pushed to our Git repository.

1. Here we will use Jenkins for CI/CD, which involves installing Jenkins on a server.

→ After installation, access Jenkins through your web browser by navigating to http://your-server-ip:8080 (or the custom port you specified during installation).

2. Create a New Jenkins Job → Click on “New Item” to create a new Jenkins job. Choose the type of project we want to create. Configure the job settings, including source code management, and define the steps needed to build, test, and package our application.

3. Integrate Git Repository → We have to connect our Git repository to Jenkins, typically by providing the repository URL and credentials in the job’s source code management settings.

4. Build Your Project → We have to define the build steps in our Jenkins job. This typically includes commands to compile, run tests, and produce artifacts. We can also set up SonarQube here for code-quality analysis.

5. Set Up Post-Build Actions → Configure post-build actions, such as archiving build artifacts, triggering downstream jobs, and notifying team members or stakeholders about the build status.

6. Create a Deployment Stage (CD) → Extend your Jenkins pipeline to include a deployment stage. This may involve deploying your application to a staging environment or a production server.

7. Implement Continuous Delivery → To enable continuous delivery, set up additional stages in our Jenkins pipeline, such as user acceptance testing (UAT) or automated end-to-end testing. Ensure that only successful builds progress through these stages.

8. Monitor and Maintain → Monitor our Jenkins pipelines and jobs for failures and performance issues. Set up notifications (email, Slack, etc.) to alert your team about build or deployment problems.
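The Jenkins steps above can be sketched as a declarative Jenkinsfile. The stage names, the Maven build command, the image name my-app, and the registry my-registry are illustrative assumptions, not part of the original article:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile, test, and package the application (Maven assumed)
                sh 'mvn -B clean package'
            }
        }
        stage('Docker Build') {
            steps {
                // Build the Docker image from the repository's Dockerfile
                sh 'docker build -t my-app:${BUILD_NUMBER} .'
            }
        }
        stage('Deploy') {
            steps {
                // Tag and push to a registry; replace with your registry/target
                sh 'docker tag my-app:${BUILD_NUMBER} my-registry/my-app:${BUILD_NUMBER}'
                sh 'docker push my-registry/my-app:${BUILD_NUMBER}'
            }
        }
    }
    post {
        failure {
            // Post-build action: notify the team about the failed build
            echo 'Build failed, notify the team'
        }
    }
}
```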

8. Container Orchestration → If we need to scale our application or manage multiple containers, consider using container orchestration tools like Kubernetes or Docker Swarm.

→ Kubernetes provides features like automatic scaling, load balancing, and service discovery.

Let’s see how to set up Kubernetes:

1. Choose a Kubernetes Installation Method → Kubernetes can be installed on various platforms, including on-premises hardware, cloud providers (e.g., AWS, GCP, Azure), or local development environments.

2. Minikube → For local development and testing, you can install Minikube, which creates a single-node Kubernetes cluster on your local machine.

3. Managed Kubernetes Services → Cloud providers offer managed Kubernetes services, such as Amazon EKS, Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS). These services simplify cluster management.

4. Kubernetes Distribution → Install Kubernetes manually using a distribution like kubeadm, kops, or Rancher. These tools help set up and manage Kubernetes clusters.
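Once a cluster is available, a minimal Deployment manifest is a sketch of how the image from step 3 could run on Kubernetes; the name my-app, the replica count, and the image tag are illustrative assumptions:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3          # Kubernetes keeps three copies running and restarts failed ones
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: yourDesiredImageName:latest
          ports:
            - containerPort: 8080   # the port EXPOSEd in the Dockerfile
```

Applying it with `kubectl apply -f deployment.yaml` would create the Deployment; a Service would then provide load balancing and service discovery across the replicas.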

9. Monitor and Maintain → We can then implement monitoring and logging solutions to keep track of our containerized application’s health and performance.

That’s all for now! I was planning to cover the Kubernetes setup in more depth here as well, but as the article is getting long, I will cover it separately in the next article.

Hi, I am Anand Prakash. I work as a Software Developer and really enjoy learning and building distributed systems. Feel free to reach out to me here or on LinkedIn, and follow me here for anything related to tech.

Happy learning 😃
