Day 19 #90DaysOfDevOps: Docker Volume & Docker Network

Samsor Rahman
4 min read · Jan 27, 2024


🐳 Day 19: Docker for DevOps Engineers — Exploring Docker Volume & Docker Network 🚀

As we continue our #90DaysOfDevOps journey, we’re delving deeper into Docker’s core concepts. Today, let’s sharpen our skills on Docker Volume and Docker Network.

🗂️ Docker Volume: Managing Data Like a Pro

Docker Volume is your secret weapon for managing data in containers. It’s like having a separate storage room accessible by multiple containers. This means your valuable data, like databases, remains safe and sound even when containers come and go. The best part? You can mount the same volume in multiple containers, creating a harmonious data-sharing ecosystem.
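If you want to try this out directly from the CLI, here is a minimal sketch of the basic workflow (the volume name app_data and the nginx:alpine image are just illustrative placeholders):

# Create a named volume (app_data is an example name)
docker volume create app_data
# Mount that volume into a container at /data
docker run -d --name demo --mount source=app_data,target=/data nginx:alpine
# Inspect the volume to see where Docker keeps it on the host
docker volume inspect app_data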

🌐 Docker Network: Connecting the Container Dots

Docker Network allows you to create virtual spaces, enabling your containers to communicate with each other and with the host machine. Think of it as building a superhighway for your containers to exchange data and work in harmony. It’s the key to unlocking seamless collaboration in your containerized applications.
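Here is a small sketch of the same idea on the command line (the network and container names are just examples): containers attached to the same user-defined network can reach each other by name.

# Create a user-defined bridge network
docker network create app_net
# Run a container on that network
docker run -d --name web --network app_net nginx:alpine
# A second container on the same network can resolve "web" by name
docker run --rm --network app_net busybox ping -c 1 web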

Task 1: Mastering Multi-Container Management

Creating a multi-container Docker Compose file for an application and database is a common use case in DevOps. Below is an example of a Docker Compose file that sets up an application and a database container.

version: '3.8'

services:
  # Application container
  webapp:
    image: your-webapp-image:latest
    ports:
      - "8080:8080"
    networks:
      - mynetwork
    depends_on:
      - database

  # Database container
  database:
    image: your-database-image:latest
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    networks:
      - mynetwork

networks:
  mynetwork:

In this example:

  • We define two services, webapp and database, each representing a separate container.
  • For the webapp service:
    • We specify the image for your web application. Replace your-webapp-image with the actual image name.
    • We map port 8080 on the host to port 8080 in the container, allowing access to the web application.
    • The depends_on option ensures that the database container is started before the webapp container.
  • For the database service:
    • We specify the image for your database. Replace your-database-image with the actual image name.
    • We set environment variables for the database, such as the database name, username, and password.
  • We define a custom network called mynetwork and assign both containers to it.

With this Docker Compose file, you can use the docker-compose up command to start both containers and the docker-compose down command to stop and remove them. Make sure to replace your-webapp-image and your-database-image with the actual image names for your application and database.
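As a concrete illustration, if your database happens to be PostgreSQL, the database service could point at the official postgres image (the tag below is only an example); the POSTGRES_* variables shown above are the ones that image expects:

  database:
    image: postgres:16
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    networks:
      - mynetwork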

To use this Docker Compose file:

  • Save it to a file, e.g., docker-compose.yaml.
  • Open a terminal and navigate to the directory containing the Docker Compose file.
  • Run the following commands:
  1. To start the containers: docker-compose up -d
  2. To stop and remove the containers: docker-compose down

This will create a multi-container environment with your application and database, allowing you to manage them together easily.
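To sanity-check the setup, you can list the services and confirm that both containers are attached to the custom network. Note that Compose prefixes the network name with the project name (usually the directory name), so the exact name on your machine may differ:

# Show the state of the services defined in the Compose file
docker-compose ps
# List networks and inspect the Compose-created one; both containers should appear under "Containers"
docker network ls
docker network inspect <project-name>_mynetwork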

Task 2: Docker Volumes & Named Volumes

Docker volumes are an essential feature for sharing data between containers and persisting data even when containers are removed. Named volumes provide an easy way to manage and reference volumes. Let’s create multiple containers that read and write data to the same named volume.

Step 1: Create a Named Volume

docker volume create my_shared_volume

Step 2: Create Containers

Now, create multiple containers that share the named volume.

# Container 1
docker run -d --name container1 --mount source=my_shared_volume,target=/shared_data busybox /bin/sh -c "echo 'Hello from Container 1' > /shared_data/data.txt && sleep 3600"
# Container 2
docker run -d --name container2 --mount source=my_shared_volume,target=/shared_data busybox /bin/sh -c "cat /shared_data/data.txt && sleep 3600"

Here, we’ve created two containers, container1 and container2. They both use the named volume my_shared_volume mounted at /shared_data. container1 writes “Hello from Container 1” to a file, and container2 reads and displays the content.
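Because container2 prints the file to its standard output, you can also see the result in its logs:

docker logs container2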

Step 3: Verify Data Sharing

To verify data sharing between the containers:

# Check content in container1
docker exec -it container1 cat /shared_data/data.txt
# Check content in container2
docker exec -it container2 cat /shared_data/data.txt

Both containers should display the same content: “Hello from Container 1.”
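As an extra check, you can start a throwaway third container against the same volume to confirm that the data lives in the volume rather than in either container:

docker run --rm --mount source=my_shared_volume,target=/shared_data busybox cat /shared_data/data.txt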

Step 4: Cleanup

After verification, you can remove the containers and the volume:

# Stop and remove the containers
docker stop container1 container2
docker rm container1 container2
# Remove the named volume
docker volume rm my_shared_volume

Using Docker volumes and named volumes, you can easily share data between containers and maintain data integrity. Named volumes are a convenient way to reference volumes, making data management a breeze in your containerized applications.

I hope you learned something from this blog. If you have, don’t forget to follow and click the clap 👏 button below to show your support 😄. Subscribe to my blogs so that you won’t miss any future posts.

If you have any questions or feedback, feel free to leave a comment below. Thanks for reading and have an amazing day ahead!

Stay connected on LinkedIn

Know about my Projects

Follow for more projects on -> DevOps | DevSecOps | System Design | Django | Microservices