Unveiling [42 The Network-Inception]: A Dive into Docker and Docker-Compose

Ahmed Fatir
32 min read · May 20, 2024


INTRODUCTION

Welcome to the world of containerization technology, where innovation meets efficiency.

Adaptability is key in the fast-paced digital landscape, and the groundbreaking Inception-42 project exemplifies this. Join me as we explore the transformative technology of containerization and its myriad applications. From streamlining development workflows to optimizing resource utilization, containerization promises to revolutionize how we build, deploy, and manage applications. Buckle up and unleash the full potential of the Inception-42 project with us.

The main parts of the project

  1. Containerization Technology Overview
  2. MariaDB Container
  3. NGINX Container
  4. WordPress Container
  5. Docker Compose

1-Containerization Technology Overview

Once upon a time, in the vast world of software development, there was a developer named Bob. Bob had a dream of making it easy to build and deploy applications without the usual headaches of traditional virtualization methods. However, Bob faced a common problem: sending applications between different environments. His friend Alice, a fellow developer, often struggled to run Bob’s applications because she used a different operating system and different dependencies. This compatibility issue made collaboration a nightmare.

But Bob’s journey was tough. Back in the early days of software development, deploying applications meant dealing with the complexities of virtual machines (VMs). VMs were powerful but had their limitations. Each VM required its own operating system, leading to bulky and inefficient deployments. Setting up a new VM was a slow process, often taking minutes or even hours.

Determined to overcome these challenges, Bob dug deeper into the world of operating systems and stumbled upon a game-changing concept: containerization. Unlike VMs, which relied on full OS virtualization, containerization offered a lightweight alternative, a way to package applications and their dependencies into isolated environments known as containers.

With containers, Bob could ensure that his application, along with all its dependencies, would run seamlessly on Alice’s machine, regardless of the underlying OS. This was because containers provided consistent environments across different systems, solving the compatibility issues that Bob and Alice faced.

But how did containerization work, and what’s the actual difference between virtualization and containerization, you ask? Well, let’s take a closer look.

Virtualization vs. Containerization

The image shows the key differences between traditional virtualization using virtual machines (left) and containerization (right). In the virtual machine model, each application runs on its own instance of a guest operating system, which sits atop a hypervisor. This hypervisor operates on the host operating system and underlying infrastructure. This setup leads to significant overhead, as each VM includes a full-fledged OS, consuming considerable system resources and leading to bulky and inefficient deployments.

On the other hand, the containerization model streamlines this process by using a container engine that runs directly on the host operating system. Each application and its dependencies are packaged into lightweight containers that share the host OS kernel but remain isolated from one another. This results in more efficient resource usage, faster startup times, and improved scalability, making containerization a superior choice for modern software development and deployment.

Well, if you are confused, that’s good because I think you might be asking yourself the most important question in this blog:

How does the operating system isolate a container, making it only see itself?

Let’s travel back in time to the early days of implementing the UNIX kernel to answer this question.

In the early days of computing, the UNIX operating system emerged as a powerful, multitasking environment. However, it wasn’t initially designed to isolate processes completely from one another. Back then, processes could see the entire system, including all running processes and the entire file system, leading to potential security and stability issues.

The Birth of chroot:

In 1979, as UNIX was evolving, a significant milestone was introduced with the 7th Edition Unix: the chroot system call. The chroot command allowed administrators to change the root directory for a process and its children to a new location in the filesystem.

This created a sort of “jail,” isolating the process within a specific subtree of the filesystem.

While chroot provided a basic level of isolation, preventing processes from seeing or modifying files outside their designated root directory, it had its limitations. It didn’t isolate other resources such as processes, networks, or user namespaces, nor did it manage resource allocation effectively.
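To get a feel for what chroot does, here is a minimal shell session, assuming a statically linked busybox binary is available to act as the jailed shell:

# Build a tiny root filesystem around a statically linked shell
mkdir -p /tmp/jail/bin
cp /bin/busybox /tmp/jail/bin/sh   # assumes busybox is installed
# Enter the jail: this process now sees /tmp/jail as its root "/"
sudo chroot /tmp/jail /bin/sh
# Inside, "ls /" lists only the jail's contents; the real filesystem is invisible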

Beyond chroot: The Need for More Isolation.

As systems became more complex and the demand for multi-user, multi-application environments increased, it became evident that more robust isolation mechanisms were necessary. Developers needed a way to isolate not only filesystems, but also to provide isolated environments for processes, networks, and other resources. This marked the era of namespaces and control groups (Cgroups) in the Linux kernel.

The Advent of Namespaces:

In the early 2000s, Linux developers started working on a series of patches to introduce the concept of Namespaces. Namespaces are a powerful feature that provides isolation at various levels:

  • Mount Namespace: Isolates the filesystem mount points.
  • UTS Namespace: Isolates hostname and domain name.
  • IPC Namespace: Isolates inter-process communication resources.
  • PID Namespace: Isolates process IDs, giving each namespace its process tree.
  • Network Namespace: Isolates network interfaces, routing tables, and network-related resources.
  • User Namespace: Isolates user and group IDs.

Each namespace type allows processes within it to have a separate instance of that particular resource, effectively creating isolated environments. This advancement was a significant improvement from the basic isolation provided by chroot.
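You can experiment with namespaces yourself using the unshare utility from util-linux; a quick sketch:

# Start a shell in new PID and mount namespaces (requires root)
sudo unshare --pid --fork --mount-proc /bin/bash
# Inside, "ps aux" shows only this shell and ps itself, with PIDs starting at 1

# Start a shell in a new UTS namespace and give it its own hostname
sudo unshare --uts /bin/bash
hostname isolated-demo   # changes the hostname inside the namespace only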

The Introduction of Cgroups:

At the same time, another important development was occurring: the introduction of control groups (Cgroups) in the Linux kernel. Cgroups provide a mechanism for aggregating and partitioning sets of tasks (processes) and managing their resource usage. With Cgroups, administrators can limit the amount of CPU, memory, disk I/O, and network bandwidth each group of processes can use. This ensured that no single process could monopolize system resources, providing better control and stability.
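On a machine running cgroup v2, you can try this from a root shell; a minimal sketch:

# Create a cgroup and cap its memory and CPU usage
mkdir /sys/fs/cgroup/demo
echo "100M" > /sys/fs/cgroup/demo/memory.max       # at most 100 MB of RAM
echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max  # at most 50% of one CPU
# Move the current shell into the cgroup; its children inherit the limits
echo $$ > /sys/fs/cgroup/demo/cgroup.procs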

Combining Namespaces and Cgroups:

The combination of namespaces and Cgroups laid the foundation for modern containerization. By using namespaces, the system could isolate various aspects of the environment, making each container appear as if it were running on its isolated instance of the operating system. Meanwhile, Cgroups ensured that containers could be limited in their resource consumption, ensuring they ran efficiently alongside other containers.

So, in other words, we can say that namespaces answer the question “What can the container see?”, and Cgroups answer the question “What can the container use?”

Enter Containers:

With these building blocks in place, the concept of containers started to take shape. Containers leverage namespaces to create isolated environments and use Cgroups to manage resource allocation. However, managing these low-level kernel features directly was complex and error-prone.

Docker: Simplifying Containers:

In 2013, Docker was introduced, revolutionizing the way developers worked with containers. Docker provided an easy-to-use platform that abstracted the complexities of namespaces and Cgroups. It allowed developers to package applications and their dependencies into a single, portable container image that could run anywhere. Docker made it simple to create, deploy, and manage containers, bringing containerization into the mainstream.

Docker had a significant impact by enabling consistent environments across development, testing, and production, effectively solving the classic “it works on my machine” problem. Developers could define their application environments using Dockerfiles, specifying the exact setup needed for their applications to run. Docker Compose further simplified multi-container applications, allowing developers to define complex architectures in a single YAML file. (I’m gonna explain that more later in the docker-compose part)

While namespaces and Cgroups form the core of container isolation and resource management, the evolution of containers has been supported by a wide array of technologies. Union file systems (UnionFS), security modules (LSM), advanced networking solutions, and orchestration tools have all contributed to making containers a robust, efficient, and secure solution for modern software development and deployment. These technologies, together with ongoing innovations, continue to enhance the container ecosystem.

The process from writing a Dockerfile to having a running Docker container:

==>Step 1: Writing a Dockerfile.

A Dockerfile is a text file that contains instructions for creating a Docker image, defining the environment, base image, dependencies, files to copy, and commands to run. This provides a reproducible and automated way to build Docker images. Think of it as a blueprint for creating a Docker image, or like a Makefile if you want to think of it like that.
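As an illustration, here is a minimal, hypothetical Dockerfile that packages a small static site with NGINX (the ./site directory is made up for the example):

# Base image
FROM debian:bullseye
# Install the dependencies the application needs
RUN apt-get update && apt-get install -y nginx
# Copy the application files into the image
COPY ./site /var/www/html
# Document the port the service listens on
EXPOSE 80
# Command to run when a container starts
CMD ["nginx", "-g", "daemon off;"]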

==>Step 2: Building the Docker Image.

After writing the Dockerfile, the next step is to build the Docker image using the docker build command. During this process, Docker reads the instructions in the Dockerfile and executes them in order: Pull the Base Image, Execute Instructions, Create Layers, and Finalize Image.

==>Step 3: Running the Docker Container.

After building the Docker image, you can run a container using the docker run command. This involves creating a new container, isolating its resources, managing resource allocation, and starting the container to run the command specified in the Dockerfile.

==>Step 4: Managing the Docker Container.

Once the container is running, Docker provides commands to manage it:

  • Use docker ps to monitor running containers.
  • Employ docker stop and docker start to control the container’s lifecycle.
  • Remove a container with docker rm.
  • Retrieve logs using docker logs for debugging and monitoring.
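Putting steps 2 through 4 together, a typical session looks like this (the image and container names are illustrative):

# Step 2: build an image from the Dockerfile in the current directory
docker build -t my-app .
# Step 3: run a container from that image, mapping host port 8080 to port 80
docker run -d --name my-app-container -p 8080:80 my-app
# Step 4: manage the running container
docker ps                      # list running containers
docker logs my-app-container   # inspect its output
docker stop my-app-container   # stop it
docker rm my-app-container     # remove it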

After exploring the evolution of containerization technology, from early UNIX days to the impact of Docker, it’s time to apply these concepts practically. Picture using this technology to build a robust, scalable infrastructure on your virtual machine. Welcome to the Inception-42 project, a hands-on experience that will guide you through building and managing Docker containers for essential web services.

In this project, you’ll learn how to set up a secure NGINX server, a dynamic WordPress site, and a resilient MariaDB database, all interconnected through Docker Compose. Get ready to turn your theoretical knowledge into practical skills as we embark on this exciting project together! The full requirements are laid out in the project’s subject PDF.

2-MariaDB Container

This is the MariaDB Dockerfile:
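Reconstructed from the line-by-line breakdown below, it looks like this:

FROM debian:bullseye
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y mariadb-server
COPY ./mdb-conf.sh /mdb-conf.sh
RUN chmod +x /mdb-conf.sh
ENTRYPOINT ["./mdb-conf.sh"]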

  1. FROM debian:bullseye: This line specifies the base image to use for this Docker image. In this case, it's using Debian Bullseye as the base image. The FROM instruction initializes a new build stage and sets the base image for subsequent instructions.
  2. RUN apt-get update && apt-get upgrade -y: This line runs commands in the container during the build process. apt-get update updates the package lists for available packages, and apt-get upgrade -y upgrades all installed packages to their latest versions. -y flag is used to automatically answer 'yes' to all prompts.
  3. RUN apt-get install -y mariadb-server: This line installs the MariaDB server package (mariadb-server) inside the container using the apt-get install command.
  4. COPY ./mdb-conf.sh /mdb-conf.sh: This line copies a file from the host machine into the Docker image. It copies the file mdb-conf.sh located in the ./ directory relative to the Dockerfile on the host machine to the root directory / inside the Docker image.
  5. RUN chmod +x /mdb-conf.sh: This line changes the permissions of the mdb-conf.sh script to make it executable (+x). This script was copied into the image in the previous step.
  6. ENTRYPOINT ["./mdb-conf.sh"]: This line specifies the default command to run when a container is started from the image. Here, it sets the entry point to the mdb-conf.sh script. This means that when a container is launched from this image, it will execute the mdb-conf.sh script.

Before we move on to the mdb-conf.sh script, let’s first introduce MariaDB.

MariaDB is an open-source relational database management system (RDBMS) and a drop-in replacement for MySQL. It was created by the original developers of MySQL out of concern over Oracle Corporation’s acquisition of MySQL. Since then, MariaDB has continued to develop independently as a community-driven project.

MariaDB Key Points:

  • RDBMS: MariaDB organizes data into tables with rows and columns and supports SQL for querying and managing the data.
  • Open Source: Released under the GNU General Public License (GPL), it is freely available for use, modification, and distribution.
  • Compatibility with MySQL: Designed to be compatible with MySQL, making it easy to transition from MySQL to MariaDB.
  • Features: Offers ACID compliance, support for multiple storage engines, replication, clustering, partitioning, and more.

Now, let’s move on to understand the mdb-conf.sh script.

I would like to note that the entire project uses an environment variable file .env containing all the passwords and sensitive data. Let’s demonstrate it first.
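A .env file for this project might look like the following. The variable names are the ones used throughout the scripts in this blog, all values are placeholders, and MYSQL_ROOT_PASSWORD is an extra variable I am assuming for the database setup:

# Domain
DOMAIN_NAME=example.42.fr
# MariaDB
MYSQL_DB=wordpress
MYSQL_USER=wp_user
MYSQL_PASSWORD=change_me
MYSQL_ROOT_PASSWORD=change_me_too
# WordPress
WP_TITLE=Inception
WP_ADMIN_N=admin_name
WP_ADMIN_P=change_me_admin
WP_ADMIN_E=admin@example.com
WP_U_NAME=editor
WP_U_EMAIL=editor@example.com
WP_U_PASS=change_me_editor
WP_U_ROLE=editor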

Now that we have all the variables set up in the .env file, we can safely use them. If you’re wondering how the command knows where to get the value of the variable, don’t worry, the docker-compose will handle it.
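Here is a sketch of what mdb-conf.sh does, assuming the usual create-database-and-user setup; the exact statements may differ from the original script:

#!/bin/bash
# Start the service temporarily so we can run the setup SQL against it
service mariadb start
sleep 5
# Create the WordPress database and user, using values from the .env file
mysql -u root -e "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DB\`;"
mysql -u root -e "CREATE USER IF NOT EXISTS '$MYSQL_USER'@'%' IDENTIFIED BY '$MYSQL_PASSWORD';"
mysql -u root -e "GRANT ALL PRIVILEGES ON \`$MYSQL_DB\`.* TO '$MYSQL_USER'@'%';"
mysql -u root -e "FLUSH PRIVILEGES;"
# Stop the temporary instance, then run the server properly in the foreground
mysqladmin -u root shutdown
mysqld_safe --port=3306 --bind-address=0.0.0.0 --datadir='/var/lib/mysql'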

If you familiarize yourself with MariaDB basics, you should be able to understand most of the lines in the script on your own. I’m going to focus on the most important line, which is the last one.

mysqld_safe --port=3306 --bind-address=0.0.0.0 --datadir='/var/lib/mysql'
  1. mysqld_safe: This is a script provided by MariaDB to start the MariaDB server in a safe mode. Safe mode means that if the server encounters an error during startup, it will attempt to restart automatically. This helps ensure that the database remains available even if there are issues during startup.
  2. --port=3306: The option specifies the port on which the MariaDB server will listen for incoming connections. By default, MariaDB listens on port 3306, the standard port for MySQL/MariaDB database connections. This port must be exposed so other services, such as WordPress and Nginx, can connect to the MariaDB server.
  3. --bind-address=0.0.0.0: This option specifies the network interface to which the MariaDB server will bind. The value 0.0.0.0 means that the server will listen for connections on all available network interfaces. This is useful in a Docker environment where the IP address of the container might change, or when the MariaDB server needs to accept connections from other containers.
  4. ==>--datadir='/var/lib/mysql': This option specifies the directory where MariaDB will store its data files. In this case, it's set to /var/lib/mysql, which is the default data directory for MariaDB installations. It's important to specify the data directory so that MariaDB knows where to store its databases, tables, and other data files.

We use mysqld_safe to manage the MariaDB process within the Docker container, ensuring continuous operation. The container needs to remain running because MariaDB is a long-lived process, unlike typical Docker containers that stop when their main process completes. Without (mysqld_safe) or similar management, the container might stop unexpectedly, leading to service downtime and potential data integrity issues.

After securely setting up MariaDB in our Docker environment, we’re diving into configuring NGINX. Get ready to optimize performance, enhance security, and ensure seamless communication between client requests and our application servers.

NGINX is the key to our web server infrastructure, orchestrating requests with efficiency and reliability.

3-NGINX Container

NGINX is a high-performance, open-source web server and reverse proxy server software. It is known for its efficiency, scalability, and versatility, making it a popular choice for serving web content, managing load balancing, and proxying requests to backend servers.

Here’s an overview of how NGINX works:

  • Handling Client Requests: NGINX acts as a web server, listening for incoming HTTP and HTTPS requests from clients (such as web browsers). When a request is received, NGINX processes it according to its configuration.
  • Static Content Delivery: NGINX excels at efficiently serving static content, such as HTML files, images, CSS, and JavaScript files, directly to clients. It can handle large numbers of concurrent connections efficiently, making it suitable for high-traffic websites and applications.
  • Reverse Proxying: NGINX can also act as a reverse proxy server, receiving requests on behalf of backend servers (such as application servers or other web servers). It then forwards these requests to the appropriate backend server based on configurable rules.
  • Load Balancing: NGINX’s reverse proxy capabilities extend to load balancing, distributing incoming requests across multiple backend servers to ensure optimal resource utilization and improved performance.
  • Caching: NGINX includes built-in caching mechanisms that can cache static content and even dynamically generated content from backend servers. Caching improves response times for subsequent requests for the same content and reduces the load on backend servers.
  • SSL/TLS Termination: NGINX can handle SSL/TLS encryption and decryption, acting as a termination point for secure connections.

(I will demonstrate the SSL/TLS part more thoroughly later.)

  • Security Features: NGINX offers various security features, such as access control, rate limiting, and request filtering, to protect web applications from common security threats like DDoS attacks, SQL injection, and cross-site scripting (XSS).

This is the NGINX Dockerfile:
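Reconstructed from the breakdown below, with the -subj string assembled from the fields explained afterward, it looks like this:

FROM debian:bullseye
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y nginx openssl
RUN mkdir -p /etc/nginx/ssl
RUN openssl req -x509 -nodes -out /etc/nginx/ssl/inception.crt \
    -keyout /etc/nginx/ssl/inception.key \
    -subj "/C=MO/ST=KH/L=KH/O=42/OU=42/CN=domain_name.fr/UID=admin_name"
COPY ./nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /var/www/wordpress
RUN chown -R www-data:www-data /var/www/wordpress
CMD ["nginx", "-g", "daemon off;"]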

  • FROM debian:bullseye: This line specifies the base image for the Docker container.
  • RUN apt-get update && apt-get upgrade -y: This command updates the package lists and upgrades all installed packages to their latest versions.
  • RUN apt-get install -y nginx openssl: This command installs NGINX (the web server) and OpenSSL (a toolkit for SSL/TLS).
  • RUN mkdir -p /etc/nginx/ssl: This command creates the directory where SSL certificates will be stored. The -p flag ensures that no error is raised if the directory already exists.
  • RUN openssl req -x509 -nodes -out /etc/nginx/ssl/inception.crt -keyout /etc/nginx/ssl/inception.key -subj ...: This command generates a self-signed SSL certificate using OpenSSL. Let’s break down its options:

==>-x509: Outputs a self-signed certificate instead of a certificate request.

==>-nodes: Skips the option to encrypt the private key.

==>-out: Specifies the output file for the certificate.

==>-keyout: Specifies the output file for the private key.

==>-subj: Provides subject information for the certificate in a single string.

==>C=MO: stands for “Country”, and MO is the country code (ISO 3166–1 alpha-2 code). Here, it represents Morocco.

==>ST=KH: stands for “State” or “Province”.

==>L=KH: stands for “Locality” or “City”.

==>O=42: stands for “Organization”. “42” refers to the 42 The Network.

==>OU=42: stands for “Organizational Unit”. Often used to specify departments.

==>CN=domain_name.fr: stands for “Common Name”. domain_name.fr is the fully qualified domain name (FQDN) for the certificate. This is the domain the certificate will secure.

==>UID=admin_name: stands for “User ID”. admin_name is an identifier for the individual or admin creating the certificate.

  • COPY ./nginx.conf /etc/nginx/nginx.conf: This command copies a custom NGINX configuration file from the local directory (./nginx.conf) to the container (/etc/nginx/nginx.conf).
  • RUN mkdir -p /var/www/wordpress: This command creates a directory for WordPress files at /var/www/wordpress.
  • RUN chown -R www-data:www-data /var/www/wordpress: This command changes the ownership of the /var/www/wordpress directory to (www-data), which is the default user and group for NGINX. The -R flag applies the ownership change recursively to all files and subdirectories.
  • CMD ["nginx", "-g", "daemon off;"]: This command specifies the command to run when the container starts.

==>nginx: Starts the NGINX server.

==>-g "daemon off;": Ensures that NGINX runs in the foreground, which is necessary for the Docker container to stay alive. If NGINX were to run in the background (daemon mode), the Docker container would exit immediately after NGINX starts.

Before moving on to the next part, let’s discuss the SSL/TLS cryptographic protocols.

SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are cryptographic protocols designed to provide secure communication over a computer network. TLS is the successor to SSL, and the two are often collectively referred to as SSL/TLS. In the OSI model, they are usually placed at the presentation layer, between the application above and the transport layer below.

Or you know what? Let me tell you a story.

Chapter 1: The Early Days of the Internet.

In the early days of the Internet, a web developer named Alex was excited about the potential of this new technology. Alex built a small e-commerce website to sell handmade crafts. Business was booming, but there was one big problem: security. Customers hesitated to enter their credit card information because there was no secure way to transmit data over the Internet.

Chapter 2: The Birth of SSL.

One day, Alex heard about a groundbreaking technology called SSL (Secure Sockets Layer). Developed by Netscape in 1995, SSL promised to create a secure channel between the client and server, ensuring that sensitive information like credit card numbers could be encrypted and safely transmitted over the internet. Alex decided to implement SSL on the website. With SSL, Alex’s customers could now see a small padlock icon in their browsers, signaling that their connection was secure. This gave them the confidence to enter their payment details, and Alex’s business flourished.

Chapter 3: The First Challenge.

However, SSL was not without its flaws. Early versions of SSL (1.0 and 2.0) had several security vulnerabilities. Hackers quickly found ways to exploit these weaknesses, leading to data breaches and other security issues. Alex started noticing strange activities on the website, and some customers reported unauthorized transactions.

Chapter 4: The Evolution to TLS.

Recognizing the need for a more robust security protocol, the Internet Engineering Task Force (IETF) introduced TLS (Transport Layer Security) in 1999 as an upgrade to SSL. TLS 1.0 improved upon SSL by fixing many of its vulnerabilities and providing stronger encryption methods. Alex quickly upgraded the website to use TLS 1.0. The improvement was noticeable, customer trust was restored, and the website became more resilient against cyber-attacks.

Chapter 5: The Continuous Battle for Security.

Over the years, Alex kept up with evolving TLS standards. TLS 1.1, introduced in 2006, brought additional protections against CBC attacks. In 2008, TLS 1.2 was released, offering even stronger security features. Alex’s online store greatly benefited from these advancements, ensuring customers’ data remained confidential and secure during transactions.

Chapter 6: The Era of (TLS 1.3).

In 2018, TLS 1.3 improved web security by simplifying the handshake process, removing outdated cryptographic algorithms, and providing forward secrecy. This enhanced website performance and provided faster, more secure browsing experiences for customers.


But the most important question here is: How does this SSL/TLS handshake work? Well, let’s discover the logic behind it.

Client Hello:

  1. ==>Initiation: The client starts the TLS handshake by sending a Client Hello message to the server.
  2. ==>Version Number: Reflects the highest TLS version supported by the client. Indicated in hex code (e.g., 0x0304 for TLS 1.3).
  3. ==>Random Number: 32 bytes generated by the client. The first four bytes traditionally encode a timestamp, helping ensure that no two Client Hello messages are ever identical.
  4. ==>Session ID: A variable-length value (up to 32 bytes) used to identify the specific session. Typically all zeros (or empty) for new sessions.
  5. ==>Cipher Suites: Lists supported encryption algorithms in order of preference.
  6. ==>Extensions: Optional additional parameters for the handshake (empty in this basic example).

Server Hello:

  1. ==>Response: The server responds to the Client Hello with a Server Hello message.
  2. ==>Version Number: Indicates the TLS version chosen by the server.
  3. ==>Random Number: 32 bytes, with the timestamp encoded in the first four bytes.
  4. ==>Session ID: A randomly generated value used to identify the session.
  5. ==>Cipher Suite: Selected by the server from the client’s list.
  6. ==>Extensions: Optional additional parameters (none in this example).

Exchange of Certificates

  1. ==>Server Certificate: The server sends its certificate chain to the client.
  2. ==>The client verifies the legitimacy and ownership of the certificate.
  3. ==>Acquires server’s certificate and public key for encryption.

Client Key Exchange (Client Side)

  1. ==>The client generates a premaster secret, a 48-byte random value.
  2. ==>Using the server’s public key obtained from its certificate, the client encrypts the premaster secret. (This is the RSA key-exchange variant; Diffie-Hellman-based handshakes such as ECDHE derive the shared secret differently.)
  3. ==>Encryption ensures that only the server, possessing the corresponding private key, can decrypt and obtain the shared secret.

Client Key Exchange (Server Side)

  1. ==>Using its private key (the counterpart to the public key in the certificate), the server decrypts the received premaster secret.
  2. ==>Decryption yields the original premaster secret generated by the client.
  3. ==>With the premaster secret decrypted, the server now has the same shared secret as the client.

Master Secret Derivation

  • The client and server then combine the premaster secret with the negotiated TLS version and the random values exchanged during the handshake. Using a specified cryptographic algorithm, typically a pseudo-random function (PRF), both parties calculate the master secret: the PRF takes the concatenated input and generates a fixed-length master secret that is unique to this TLS session. This master secret serves as the seed value for generating the session keys used in subsequent encryption and authentication processes. Because the client and server derive it independently from the same inputs, both ends stay synchronized, ensuring consistency and security in the TLS handshake.

Session Key Derivation

  1. ==>On the client side, session keys are calculated using a specified algorithm, often a pseudo-random function, which takes the master secret, the client random, the server random, and a constant string such as “key expansion”. At least four session keys are generated: two for symmetric encryption and two for HMAC (Hash-based Message Authentication Code). The symmetric encryption keys are used for encrypting and decrypting data sent from the client, while the HMAC keys are used for verifying message integrity to ensure that the transmitted data remains unchanged.
  2. ==>On the server side, the same algorithm as the client (e.g., pseudo-random function) is utilized to compute session keys, involving the master secret, client random, server random, and the constant string used for key expansion. Similar to the client, the server generates symmetric encryption keys for encrypting and decrypting data sent to the client. Additionally, the server generates HMAC keys for message integrity verification to ensure data integrity during transmission.
  3. ==>TLS uses two separate tunnels to protect data. One tunnel secures data from the client to the server, and the other protects data sent from the server back to the client.
  4. ==>Both tunnels use symmetric keys, which means the client’s data is encrypted with keys that the server can decrypt, and vice versa.
  5. ==>If someone manages to brute force one set of keys, they will only be able to capture and decrypt half of the conversation, as the other direction uses a completely new set of keys.

Change Cipher Spec and Finished Messages

  • ==>Client Side:
  1. Change Cipher Spec Message: Sent to signal readiness to switch to encrypted communication. Simple, typically a single byte (value 1).
  2. Finished Message: Compute Handshake Hash, Generate Verification Data, Encrypt Verification Data, and Send Finished Message containing encrypted verification data.
  • ==>Server Side:
  1. Change Cipher Spec Message: Sent after receiving the client’s Finished message. Indicates readiness to switch to encrypted communication. Simple, typically a single byte (value 1).
  2. Finished Message: Compute Handshake Hash, Generate Verification Data, Encrypt Verification Data, and Send Finished Message containing encrypted verification data.

This is the most basic form of a handshake, which involves a TLS handshake using an RSA key exchange. Of course, there are different variations of this basic handshake, but the most common one is the one I explained above.
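You can watch a real handshake with OpenSSL’s built-in test client:

# Connect to a server and print the negotiated protocol, cipher suite,
# and certificate chain
openssl s_client -connect example.com:443 -tls1_3
# The -msg flag additionally dumps the raw handshake messages
# (ClientHello, ServerHello, Certificate, Finished, ...)
openssl s_client -connect example.com:443 -msg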


I know that I have talked a lot about the SSL/TLS part, but it’s better to cover all the important stuff.

But wait, it seems that you already read the words “digital certificate” or “CA certificate,” but what exactly is it?

A digital certificate is an electronic document used to prove ownership of a public key. It is issued by a trusted third party known as a Certificate Authority (CA). The certificate contains the owner’s public key along with identifying information such as the owner’s name, the expiration date of the certificate, and the CA’s digital signature. When a client (such as a web browser) connects to a server (such as a website), the server presents its digital certificate to the client. The client verifies the certificate by checking the CA’s signature and confirming that the certificate has not expired or been revoked. This process ensures that the client is communicating with the genuine server and not an impostor, establishing a foundation of trust and enabling secure, encrypted communication between the client and server.

And now it’s time to move on to the nginx config file (nginx.conf), which is the main focus of this section.
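Reconstructed from the directives discussed below, the file looks like this:

events {}

http {
    include /etc/nginx/mime.types;

    server {
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/inception.crt;
        ssl_certificate_key /etc/nginx/ssl/inception.key;
        ssl_protocols TLSv1.3;

        root /var/www/wordpress;
        server_name $DOMAIN_NAME;
        index index.php;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass wordpress:9000;
        }
    }
}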

If you familiarize yourself with NGINX basics, you should be able to understand most of the lines in the config file on your own.

Well, if not, let’s break down each part of it.

Events Block:

The events block is used to configure settings that affect how Nginx handles connections. In this configuration, it is left empty, meaning Nginx will use its default settings. Typical settings you might find here include the maximum number of simultaneous connections or multi-threading settings, but these defaults are often sufficient.

HTTP Block:

The http block contains configurations for handling HTTP and HTTPS traffic.

  1. ==>include /etc/nginx/mime.types;: This line includes a file that maps file extensions to MIME types. MIME types tell the browser how to handle different types of files (e.g., HTML, CSS, etc.).

Server Block:

The server block defines settings for a specific virtual server.

  1. ==>listen 443 ssl;: This directive tells Nginx to listen on port 443 for HTTPS traffic. The ssl keyword indicates that SSL/TLS should be used for this server.

SSL/TLS Configuration:

  1. ==>ssl_certificate /etc/nginx/ssl/inception.crt;: Specifies the path to the SSL certificate file. This file contains the public key and identity information for the server.
  2. ==>ssl_certificate_key /etc/nginx/ssl/inception.key;: Specifies the path to the private key file. This key is kept secret on the server and is used to decrypt information encrypted with the public key.
  3. ==>ssl_protocols TLSv1.3;: Specifies which SSL/TLS protocols are allowed. Here, only TLS 1.3 is allowed, which is the most secure and recent version of the protocol.

Root and Index Configuration:

  1. ==>root /var/www/wordpress;: Sets the root directory for the server. All relative URLs will be served from this directory.
  2. ==>server_name $DOMAIN_NAME;: Specifies the domain name for this server block. The ($DOMAIN_NAME) variable should be replaced with the actual domain name in the .env file
  3. ==>index index.php;: Specifies the default file to serve when a directory is requested. Here, it is set to index.php.

PHP Location Block:

The location block defines how requests for PHP files should be handled.

  1. ==>location ~ \.php$ {: This location block matches any request ending in .php.
  2. ==>include snippets/fastcgi-php.conf;: Includes a configuration snippet for handling PHP files with FastCGI. This file typically contains necessary directives for processing PHP requests.
  3. ==>fastcgi_pass wordpress:9000;: Specifies the address and port of the FastCGI server (in this case, wordpress:9000). FastCGI is a protocol for interfacing interactive programs with a web server. Here, it directs PHP requests to a PHP-FPM (FastCGI Process Manager) server running on the wordpress host.

Once upon a time, it’s story time again! Hhhhh.

The Birth of Dynamic Web Pages

In the early days of the internet, websites were composed of static HTML files. Every page was a separate file on the server, and whenever a user requested a page, the server simply sent the corresponding HTML file to the user’s browser. This worked well for simple, unchanging content but became cumbersome as websites grew larger and more complex.

Introduction of CGI

To address the need for dynamic content, the Common Gateway Interface (CGI) was introduced in the early 1990s. CGI allowed web servers to execute external programs (scripts) and use their output as web content. This was a breakthrough, enabling the creation of interactive web applications.

How CGI Works

  1. Request Handling: When a user requests a CGI-enabled page, the web server launches a separate process to run a CGI script.
  2. Script Execution: The CGI script (written in languages like Perl, Python, or C) processes the request, often interacting with databases or other services.
  3. Response Generation: The script generates HTML dynamically and sends it back to the server.
  4. Response Delivery: The server then sends this HTML back to the user’s browser.

Limitations of CGI

While CGI was revolutionary, it had significant drawbacks:

  • Performance: Each request spawns a new process, which is resource-intensive and slow.
  • Scalability: Handling many concurrent requests could overwhelm the server, leading to poor performance.

Introduction of FastCGI

To overcome CGI’s limitations, FastCGI was developed in the mid-1990s. FastCGI aimed to improve performance by reusing processes to handle multiple requests.

How FastCGI Works

  1. Persistent Processes: Unlike CGI, where a new process is created for each request, FastCGI processes persist and handle multiple requests over their lifetime.
  2. Process Pooling: A pool of FastCGI processes can handle incoming requests, reducing the overhead of process creation.
  3. Inter-Process Communication: FastCGI uses more efficient inter-process communication methods, leading to faster request processing.

Benefits of FastCGI

  • Improved Performance: By reusing processes, FastCGI reduces the overhead associated with process creation.
  • Better Resource Management: A pool of processes can handle more requests with fewer resources.
  • Scalability: FastCGI can handle higher loads, making it suitable for busy websites.

The Rise of PHP

PHP, created in 1994, became one of the most popular languages for web development due to its simplicity and ease of integration with HTML. Initially, PHP scripts were executed using the traditional CGI method, which suffered from the same performance issues.

Introduction of PHP-FPM

PHP FastCGI Process Manager (PHP-FPM) was developed to address the inefficiencies of traditional CGI and to leverage the advantages of FastCGI. PHP-FPM provides advanced features specifically designed for PHP.

How PHP-FPM Works

  1. Persistent PHP Processes: PHP-FPM maintains a pool of PHP processes that handle multiple requests.
  2. Advanced Process Management: PHP-FPM offers features like adaptive process spawning, graceful stopping, and the ability to restart individual processes without affecting the entire pool.
  3. Configuration Flexibility: PHP-FPM allows fine-tuning of process management settings, such as the number of child processes, request handling limits, and more.
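To make this concrete, here is an illustrative excerpt of a PHP-FPM pool definition (the values are made up for the example; the directive names are the real ones from www.conf):

; /etc/php/7.4/fpm/pool.d/www.conf (excerpt, illustrative values)
[www]
user = www-data
group = www-data
listen = 9000                 ; accept FastCGI connections over TCP
pm = dynamic                  ; spawn worker processes adaptively
pm.max_children = 10          ; hard cap on the number of workers
pm.start_servers = 3          ; workers created at startup
pm.min_spare_servers = 2      ; keep at least this many idle workers
pm.max_spare_servers = 5      ; reap idle workers above this count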

Benefits of PHP-FPM

  • High Performance: PHP-FPM significantly reduces the overhead associated with PHP request handling, improving performance.
  • Scalability: PHP-FPM can handle high traffic loads efficiently, making it ideal for large websites and applications.
  • Reliability: Advanced process management features ensure better reliability and uptime.

The Modern Web

Today, technologies like PHP-FPM and FastCGI are essential components of web servers, enabling them to serve dynamic content efficiently. Nginx, a popular web server, often uses PHP-FPM to handle PHP requests, as seen in our config file.

And just like that, we have finished the NGINX section. I hope I covered all the details.

4-WordPress Container

The Birth and Evolution of WordPress: A Story of Web Empowerment

The Early Days of Blogging

In the early 2000s, the internet was undergoing rapid transformation. Blogging emerged as a popular way for individuals to share their thoughts and experiences online. However, creating a blog required technical knowledge, making it inaccessible to many.

The Genesis of WordPress

In 2003, two developers, Matt Mullenweg and Mike Little, saw an opportunity to create a more user-friendly blogging platform. They took an existing, abandoned project called b2/cafelog and transformed it into what would become WordPress. The goal was simple: to democratize publishing and make it easy for anyone to create and manage a blog or website without deep technical skills.

Open Source Foundation

From the beginning, WordPress was released as open-source software under the GNU General Public License (GPL). This decision was pivotal. It allowed a global community of developers to contribute to its growth and improvement, fostering innovation and collaboration.

WordPress 1.0: The First Release

In May 2003, the first version of WordPress was released, with version 1.0 following in January 2004. It featured a simple interface, basic functionality for creating and managing posts, and a template system for customizing the look and feel of blogs. Despite its modest beginnings, WordPress laid the groundwork for a powerful, extensible platform.

The Rise of Plugins and Themes

As WordPress grew, its extensibility became one of its defining features. The introduction of plugins and themes revolutionized the platform:

  • Plugins: These are add-ons that extend WordPress’s core functionality. From adding contact forms to optimizing SEO, plugins allowed users to customize their sites to meet specific needs without coding knowledge.
  • Themes: These control the visual appearance of WordPress sites. Users could choose from a vast library of free and premium themes, allowing them to change the design of their site with just a few clicks.

From Blogging to Full-Fledged CMS

Initially conceived as a blogging tool, WordPress quickly evolved into a full-fledged content management system (CMS). This transformation was driven by its flexibility and ease of use, which made it suitable for a wide range of websites beyond blogs, including business sites, e-commerce stores, portfolios, forums, and more.

WordPress Today

WordPress powers over 40% of all websites on the internet, a testament to its versatility and ease of use. It’s used by everyone from small bloggers to major corporations and government entities. Its active community continues to drive its evolution, ensuring it remains a cutting-edge platform.

Let’s proceed with our mission to install and configure WordPress along with php-fpm.
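Here is the WordPress Dockerfile, reconstructed from the breakdown that follows:

FROM debian:bullseye
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y curl php php7.4-fpm php-mysql mariadb-client iputils-ping
COPY ./wp_conf.sh /
RUN chmod +x wp_conf.sh
ENTRYPOINT ["./wp_conf.sh"]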

  1. FROM debian:bullseye: specifies the base image to use for building the Docker container.
  2. RUN apt-get update && apt-get upgrade -y: update the package lists for apt and then upgrade the installed packages to their latest versions.
  3. RUN apt-get install -y curl php php7.4-fpm php-mysql mariadb-client iputils-ping: installs the packages WordPress and its setup script need: curl for making HTTP requests, php and php7.4-fpm for running PHP scripts, php-mysql for MySQL support in PHP, mariadb-client for interacting with the MariaDB database, and iputils-ping so the script can check that the MariaDB container is reachable before WordPress is configured.
  4. COPY ./wp_conf.sh /: copies the WordPress configuration script (wp_conf.sh) from the host machine to the root directory (/) in the container.
  5. RUN chmod +x wp_conf.sh: changes the permissions of the wp_conf.sh script to make it executable.
  6. ENTRYPOINT ["./wp_conf.sh"]: specifies the command that should be executed when the container starts.

I think it’s all clear, and we can move to the main thing, which is the (wp_conf.sh) file.
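Here is a sketch of wp_conf.sh, assembled from the commands explained below; the wait loop is one reasonable way to implement the availability check, and the original may differ in its details:

#!/bin/bash
# Install wp-cli
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
mv wp-cli.phar /usr/local/bin/wp

# Go to the WordPress directory and set permissions and ownership
cd /var/www/wordpress
chmod -R 755 /var/www/wordpress/
chown -R www-data:www-data /var/www/wordpress

# Wait up to ~20 seconds for MariaDB to accept connections
i=0
until mariadb -h mariadb -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SELECT 1" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge 20 ] && echo "Error: MariaDB is not reachable" && break
    sleep 1
done

# Download, configure, and install WordPress with wp-cli
wp core download --allow-root
wp core config --dbhost=mariadb:3306 --dbname="$MYSQL_DB" --dbuser="$MYSQL_USER" --dbpass="$MYSQL_PASSWORD" --allow-root
wp core install --url="$DOMAIN_NAME" --title="$WP_TITLE" --admin_user="$WP_ADMIN_N" --admin_password="$WP_ADMIN_P" --admin_email="$WP_ADMIN_E" --allow-root
wp user create "$WP_U_NAME" "$WP_U_EMAIL" --user_pass="$WP_U_PASS" --role="$WP_U_ROLE" --allow-root

# Make PHP-FPM listen on TCP port 9000 instead of a Unix socket
sed -i '36 s@/run/php/php7.4-fpm.sock@9000@' /etc/php/7.4/fpm/pool.d/www.conf
mkdir -p /run/php
# Run PHP-FPM in the foreground to keep the container alive
/usr/sbin/php-fpm7.4 -F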

wp-cli installation:

  1. curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar: This command uses curl to download the wp-cli (WordPress Command Line Interface) phar (PHP Archive) file from its GitHub repository.
  2. ==>chmod +x wp-cli.phar: changes the permissions of the downloaded wp-cli.phar file to make it executable.
  3. ==>mv wp-cli.phar /usr/local/bin/wp: This moves the wp-cli.phar file to /usr/local/bin/, making it accessible globally as the wp command.

Go to the WordPress directory:

  • ==>cd /var/www/wordpress: Changes the current directory to (/var/www/wordpress), assuming this is where WordPress will be installed.

Permissions and ownership:

  1. ==>chmod -R 755 /var/www/wordpress/: Sets permissions to 755 on the WordPress directory and its contents: read, write, and execute for the owner; read and execute for the group and others.
  2. ==>chown -R www-data:www-data /var/www/wordpress: Changes the ownership of the WordPress directory and its contents to the www-data user and group. This is the user and group typically used by web servers like Nginx or Apache.

Check if the MariaDB container is up and running:

  • ==>This part of the script ensures that WordPress waits for the MariaDB container to be up and running before proceeding. This prevents errors and is important for automated deployment scenarios where services must start in a specific order. Docker Compose’s depends_on option only ensures that the specified service(s) start before the dependent service but doesn’t wait for the service to be fully ready and operational.
  • ==>The script checks the availability of the MariaDB service by trying to connect to its port and waits for up to 20 seconds for it to become available. If MariaDB is not up within this time frame, it prints an error message. This helps to prevent issues during the WordPress setup that might arise from an unavailable database service.

Download WordPress core files:

  • ==>wp core download --allow-root: Uses wp-cli to download the WordPress core files into the current directory (/var/www/wordpress).

Create wp-config.php file:

  • ==>wp core config --dbhost=mariadb:3306 --dbname="$MYSQL_DB" --dbuser="$MYSQL_USER" --dbpass="$MYSQL_PASSWORD" --allow-root: Generates a wp-config.php file with the provided database details. It sets the database host to mariadb:3306, the database name to $MYSQL_DB, the database user to $MYSQL_USER, and the database password to $MYSQL_PASSWORD. These values come from the .env file.

Install WordPress:

  • ==>wp core install --url="$DOMAIN_NAME" --title="$WP_TITLE" --admin_user="$WP_ADMIN_N" --admin_password="$WP_ADMIN_P" --admin_email="$WP_ADMIN_E" --allow-root: Installs WordPress with the provided settings, such as the site URL ($DOMAIN_NAME), site title ($WP_TITLE), admin username ($WP_ADMIN_N), admin password ($WP_ADMIN_P), and admin email ($WP_ADMIN_E). These values also come from the .env file.

Create a new user:

  • ==>wp user create "$WP_U_NAME" "$WP_U_EMAIL" --user_pass="$WP_U_PASS" --role="$WP_U_ROLE" --allow-root: Creates a new user with the provided username ($WP_U_NAME), email ($WP_U_EMAIL), password ($WP_U_PASS), and role ($WP_U_ROLE). These values also come from the .env file.

PHP configuration:

  1. ==>sed -i '36 s@/run/php/php7.4-fpm.sock@9000@' /etc/php/7.4/fpm/pool.d/www.conf: This edits the PHP-FPM configuration (we covered PHP-FPM in detail in the NGINX section) so that it listens on TCP port 9000 instead of a Unix socket; see the before/after snippet following this list. In a Dockerized environment managed by Docker Compose, listening on a TCP port enables network communication between PHP-FPM and the web server container, simplifies the networking configuration, and makes the setup easier to scale when multiple containers need to talk to each other over the network.
  2. ==>mkdir -p /run/php: It is still necessary to create the /run/php directory for PHP-FPM to function properly, even when it is configured to communicate over TCP instead of a Unix socket. This directory is where PHP-FPM stores its process ID (PID) file, php-fpm.pid.
  3. ==>/usr/sbin/php-fpm7.4 -F: Starts the PHP-FPM service in the foreground to keep the container running.
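The effect of that sed command on line 36 of www.conf:

; before: PHP-FPM accepts FastCGI connections on a local Unix socket
listen = /run/php/php7.4-fpm.sock
; after: PHP-FPM accepts FastCGI connections on TCP port 9000
listen = 9000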

* Now WordPress is fully configured and ready to go.

* we’ve established a strong foundation for a seamless web experience.

* The real magic happens next as we integrate everything with Docker-Compose.

* Get ready to see how easy it is to orchestrate MariaDB, Nginx, and WordPress containers into a cohesive and scalable stack.

5-Docker Compose

The Story of Docker Compose: Orchestrating Simplicity in Containerization

In the early days of containerization, developers were excited about the promise of Docker. However, as applications became more complex, involving multiple interconnected services, managing these containers became a challenging task. This is where Docker Compose comes into play.

The Birth of Docker Compose

Docker Compose was created to simplify and efficiently manage multi-container applications. It was initially developed in 2013 by a small London startup called Orchard and was originally named Fig. Fig provided a simple way to define and run multi-container Docker applications: it allowed developers to use a single YAML (YAML Ain't Markup Language) file to describe how their applications should be composed of various services, networks, and volumes.

How Docker Compose Works

At its core, Docker Compose relies on a YAML file to define the services that constitute an application. Each service corresponds to a container, and the YAML file docker-compose.yml specifies how these containers should be built, configured, and networked together. The simplicity of Docker Compose is its beauty: with just a few lines of configuration, developers can spin up a complex environment, ensuring seamless communication among all services.

Now we will create our docker-compose file, which should include the required components based on the project specifications.

- 3 services: MariaDB, NGINX, and WordPress

- A volume for the WordPress database

- A second volume for the WordPress website files

- A docker network to establish connections between the containers.

This is the whole docker-compose.yml file:

Now, let’s break down each part of it.

version: "3.8"
  • Purpose: Specify the version of the Docker Compose file format to use.
  • Details: Docker Compose has various file format versions, each adding new features and deprecating older ones. Version 3.8 is one of the latest, supporting advanced networking and other modern Docker features.

Services:

mariadb:
  image: mariadb:user
  container_name: mariadb
  build: ./mariadb
  volumes:
    - mariadb:/var/lib/mysql
  env_file:
    - .env
  networks:
    - inception
  restart: always
  • image: Specifies the Docker image to use for this service. Here, mariadb:user indicates a custom-built MariaDB image.
  • container_name: Names the container mariadb for easier management and identification.
  • build: It points to the directory ./mariadb containing the Dockerfile to build the image. This is used instead of pulling from Docker Hub.
  • volumes: Mounts the mariadb named volume to /var/lib/mysql inside the container, ensuring that MariaDB data persists across container restarts.
  • env_file: Specifies an environment file .env to load environment variables into the container. This file typically contains sensitive information like database passwords as we discussed earlier.
  • networks: Connects the container to the custom network(inception), allowing it to communicate with other containers on the same network.
  • restart: Ensures that the container always restarts if it stops, maintaining service availability.
nginx:
  image: nginx:user
  container_name: nginx
  build: ./nginx
  ports:
    - "443:443"
  depends_on:
    - wordpress
  volumes:
    - wordpress:/var/www/wordpress
  networks:
    - inception
  restart: always
  • image: Uses a custom nginx:user image.
  • container_name: Names the container nginx.
  • build: Specifies the ./nginx directory for building the Nginx image.
  • ports: Maps port 443 on the host to port 443 on the container for HTTPS traffic.
  • depends_on: Ensures that the wordpress service starts before Nginx, establishing necessary dependencies.
  • volumes: Mounts the wordpress named volume to (/var/www/wordpress) inside the Nginx container, ensuring that Nginx serves the WordPress files.
  • networks: Connects to the inception network for inter-container communication.
  • restart: Configures the container to always restart if it fails.
wordpress:
  image: wordpress:user
  container_name: wordpress
  build: ./wordpress
  depends_on:
    - mariadb
  volumes:
    - wordpress:/var/www/wordpress
  env_file:
    - .env
  networks:
    - inception
  restart: always
  • image: Uses a custom wordpress:user image.
  • container_name: Names the container wordpress.
  • build: It points to the ./wordpress directory for building the WordPress image.
  • depends_on: Ensures that the mariadb service starts before WordPress, as WordPress needs the database to be available.
  • volumes: Mounts the wordpress named volume to (/var/www/wordpress) inside the container, ensuring persistent storage for WordPress files.
  • env_file: Loads environment variables from the .env file, which likely contains database connection details.
  • networks: Connects to the inception network.
  • restart: Ensures that the container always restarts if it stops, keeping WordPress available.

Volumes:

volumes:
  mariadb:
    name: mariadb
    driver: local
    driver_opts:
      device: /home/data/mariadb
      o: bind
      type: none
  wordpress:
    name: wordpress
    driver: local
    driver_opts:
      device: /home/data/wordpress
      o: bind
      type: none

MariaDB:

  1. name: This names the volume (mariadb), which can be referenced in the services section.
  2. driver: The (local) driver is the default driver that manages volumes on the local machine.
  3. driver_opts:
  • ==>device: Specifies the host directory /home/data/mariadb where the volume's data will be stored. The device option indicates where the data should be kept on the host system.
  • ==>o: It sets the mount type to bind. Bind mounts allow you to map a directory on the host machine directly to a directory in the container.
  • ==>type: Set to (none), indicating that no special filesystem type is specified. This is standard for bind mounts.

WordPress:

  1. name: Names the volume wordpress for easy reference.
  2. driver: Uses the local volume driver.
  3. driver_opts:
  • ==>device: Maps the host directory /home/data/wordpress to the container's volume. This ensures that WordPress data is stored persistently on the host.
  • ==>o: Sets the mount type to (bind), similar to the MariaDB volume.
  • ==>type: No specific filesystem type is indicated.

Networks:

networks:
  inception:
    name: inception
  • name: Defines a custom network named (inception), which all the services connect to. This network facilitates communication between the MariaDB, Nginx, and WordPress containers.

Now that we’ve completed our docker-compose file, the next exciting step is to create a Makefile to effectively manage our containers using the docker-compose file.

The comments in the file explain all the things going on inside it.
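A minimal sketch of such a Makefile, with illustrative targets (recipe lines must be indented with tabs):

# Build and start the whole stack in the background
all: up

up:
	mkdir -p /home/data/mariadb /home/data/wordpress
	docker-compose -f ./docker-compose.yml up -d --build

# Stop and remove the containers
down:
	docker-compose -f ./docker-compose.yml down

# Remove containers, images, and volumes for a clean slate
fclean: down
	docker system prune -af
	docker volume rm mariadb wordpress

# Rebuild everything from scratch
re: fclean all

.PHONY: all up down fclean re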

This is the source code for the entire project that we built in this blog.

For more details on how to install Docker and Docker Compose, or on how to run the project on your machine, I suggest visiting my Inception GitHub repository and my LinkedIn account.

The blog took you through containerization, from setting up containers to orchestrating them with Docker Compose. It’s about adopting an efficient approach to application management. Docker Compose offers endless possibilities for scaling applications and integrating CI/CD pipelines. Thank you for joining the deep dive into Docker and Docker Compose. Happy containerizing! 🚀
