Streamlining Infrastructure Deployment with Docker: Exploring the Inception Project

Navoos
7 min read · Jun 1, 2023

Disclaimer:

Please note that while I have made every effort to follow best practices while building the infrastructure in the Inception project, it is possible that I may have missed some. If you are using this article as a guide to validate your Inception project, it should be sufficient. However, if you are using this article to build your own infrastructure using Docker, it is recommended that you research and verify the best practices to ensure the highest level of reliability and security. (LINUX CONTAINERS ONLY)

Docker:

Docker is a powerful tool for building and deploying applications within a containerized environment. It relies on three key Linux kernel features to make this possible: chroot, namespaces, and cgroups.

Chroot allows a process to change its apparent root directory, which is useful for creating a contained environment where applications can run without interfering with the host system.

Namespaces enable the isolation of processes from each other, so that they run in their own virtual environment without interfering with other processes on the system.

Cgroups provide the ability to limit the consumption of resources by containers, which helps prevent resource contention and ensures that containers run efficiently.
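These kernel features surface directly in the Docker CLI. As a rough illustration (the flags are real `docker run` options; the image and limit values are arbitrary examples):

```shell
# Namespaces: the container gets its own PID namespace, so `ps` inside it
# only sees the container's own processes, starting at PID 1.
docker run --rm debian:stable ps aux

# Cgroups: cap the container's memory and CPU share at run time.
docker run --rm --memory=256m --cpus=0.5 debian:stable \
    sh -c 'echo "running with at most 256MB of RAM and half a CPU"'
```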

… one pair of Potara Earrings later, we get Docker.

By leveraging these three kernel features, Docker creates a powerful environment where applications can be deployed and run in a consistent and reliable manner.

Dockerfile

The Dockerfile is a crucial component in creating your environment, as it specifies how the environment should be built and what should be run. The structure of a Dockerfile typically follows this format:

  1. Base image specification
  2. Environment setup commands
  3. Application installation commands
  4. Exposed ports specification
  5. Configuration setup commands
  6. Startup commands and arguments

Please note that using the EXPOSE instruction in step 4 of the Dockerfile does not actually expose the specified ports. Rather, it simply documents the ports that the container is expected to listen on at runtime. To actually publish the ports, you will need to use the -p option when running the container, or specify the ports section in a docker-compose.yml file.
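To make the distinction concrete, here is how a port actually gets published (the image and port numbers are just examples):

```shell
# EXPOSE 80 in the Dockerfile alone publishes nothing.
# Map host port 8080 to container port 80 explicitly:
docker run -d -p 8080:80 nginx

# the docker-compose.yml equivalent would be:
#   ports:
#     - "8080:80"
```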

Here is an example Dockerfile that demonstrates how to configure an Nginx server with SSL from scratch:

FROM debian:stable

RUN apt-get update && apt-get install -y nginx openssl
RUN mkdir -p /ssl
RUN openssl genpkey -algorithm RSA -out /ssl/private.key
RUN openssl req -new -x509 -days 365 -key /ssl/private.key -out /ssl/certificate.pem \
    -subj "/C=MA/ST=State/L=None/O=None/CN=None"

COPY conf /etc/nginx/nginx.conf
COPY tools/launch.sh /launch.sh
RUN chmod +x /launch.sh

CMD ["/launch.sh"]
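The launch.sh script itself is not shown in the article; a minimal sketch, assuming the copied nginx.conf does not already set `daemon off`, could be:

```shell
#!/bin/bash
# Run nginx in the foreground so it stays the container's main process;
# if nginx daemonized, PID 1 would exit and the container would stop.
exec nginx -g "daemon off;"
```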

Docker Compose

Docker Compose is a tool that helps you define and run multi-container Docker applications. It simplifies the process of managing your containers by allowing you to configure all your services in a YAML file.

With Docker Compose, you can quickly create and start all your services with a single command. This is useful in various environments such as production, staging, development, testing, and CI workflows.

Docker Compose also provides you with several commands that help you manage the lifecycle of your application. You can start, stop, and rebuild services, view the status of running services, stream the log output of running services, and run a one-off command on a service.
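The lifecycle commands mentioned above look like this in practice (service names taken from the compose file later in this article):

```shell
docker compose up -d --build    # build images and start all services in the background
docker compose ps               # view the status of running services
docker compose logs -f nginx    # stream the log output of one service
docker compose exec mariadb sh  # run a one-off command inside a running service
docker compose down             # stop and remove the containers and networks
```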

Networking in Docker

Container networking refers to the ability of containers to connect and communicate with each other, as well as with non-Docker workloads.

In a Docker environment, containers are typically isolated from each other, but sometimes they need to communicate with one another. With container networking, you can create a virtual network that connects your containers and allows them to communicate with each other using standard network protocols. This makes it easier to build and manage complex applications that consist of multiple containers.

Docker provides several networking options, including bridge networks, overlay networks, and macvlan networks. Each of these options has its own advantages and use cases, depending on your specific needs.
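As an illustration of the default option, a user-defined bridge network gives containers DNS resolution by container name (the network and container names here are arbitrary):

```shell
docker network create my-bridge
docker run -d --name web --network my-bridge nginx
# another container on the same network can reach "web" by name
docker run --rm --network my-bridge debian:stable ping -c 1 web
```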

The meat of it

From now on, all configuration will be done using Docker Compose. Here is the full docker-compose.yml file:

version: "3.8"
services:
  mariadb:
    # (optional) init runs a tiny process that forwards signals to the right program and reaps zombie processes
    init: true
    container_name: mariadb
    # this will be the name of the container in the network
    hostname: ${DB_HOST}
    image: "mariadb_image"
    # restart regardless of the exit status
    restart: always
    build: "./requirements/mariadb"
    # by default docker-compose.yml can read variables from the .env file in the same directory
    environment:
      - DB_NAME=${DB_NAME}
      - USER=${USER}
      - USER_PASSWORD=${USER_PASSWORD}
      - ADMIN=${ADMIN}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD}
    # remember, containers are isolated environments; one way to let containers communicate is adding them to the same network
    networks:
      - database-network
    # I used a healthcheck to solve a problem I encountered: the mariadb server must be up and running before the wordpress container starts
    # A healthcheck specifies the condition under which a container is considered healthy; other containers can then be made to start only once the container implementing the healthcheck is healthy
    # a container is considered healthy if the command in the test directive exits successfully
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      # how many seconds before running the command again
      interval: 5s
      # how many seconds to wait for the command in the test directive
      timeout: 0s
      # if the command fails, how many times the healthcheck will rerun it before giving up
      retries: 5
    # containers are made to be volatile: easy to launch, easy to destroy. One way to persist data is using volumes, so the data is stored on the host machine
    volumes:
      - database-volume:/var/lib/mysql/${DB_NAME}
  wordpress:
    init: true
    container_name: wordpress
    hostname: php_fpm_server
    image: "wordpress_image"
    restart: always
    build: "./requirements/wordpress"
    depends_on:
      mariadb:
        condition: service_healthy
    environment:
      - DB_NAME=${DB_NAME}
      - USER=${USER}
      - USER_PASSWORD=${USER_PASSWORD}
      - DB_HOST=${DB_HOST}
      - WORD_ADMIN=${WORD_ADMIN}
      - WORD_ADMIN_PASSWORD=${WORD_ADMIN_PASSWORD}
      - WORD_ADMIN_EMAIL=${WORD_ADMIN_EMAIL}
      - WORD_USER=${WORD_USER}
      - WORD_USER_PASS=${WORD_USER_PASS}
      - WORD_USER_EMAIL=${WORD_USER_EMAIL}
      - WORD_USER_ROLE=${WORD_USER_ROLE}
    networks:
      - database-network
      - webserver-network
      - redis-network
    volumes:
      - wordpress-volume:/var/www/Inception-website
    healthcheck:
      test: ["CMD", "pidof", "php-fpm7.4"]
      interval: 5s
      timeout: 0s
      retries: 30

  redis:
    init: true
    container_name: redis
    hostname: redis_server
    image: "redis_image"
    restart: always
    build: "./requirements/bonus/redis"
    depends_on:
      wordpress:
        condition: service_started
    networks:
      - redis-network
  nginx:
    init: true
    container_name: nginx
    hostname: nginx_server
    image: "nginx_image"
    restart: always
    build: "./requirements/nginx"
    depends_on:
      wordpress:
        condition: service_healthy
    networks:
      - webserver-network
    ports:
      - "443:443"
    volumes:
      - wordpress-volume:/var/www/Inception-website:ro
  ftp:
    init: true
    container_name: ftp
    image: "ftp_image"
    environment:
      - FTP_USER=${FTP_USER}
      - FTP_PASS=${FTP_PASS}
    restart: always
    build: "./requirements/bonus/ftp"
    depends_on:
      - wordpress
    ports:
      - "21:21"
    volumes:
      - wordpress-volume:/home/${FTP_USER}
  adminer:
    init: true
    hostname: adminer
    container_name: adminer
    image: "adminer_image"
    restart: always
    depends_on:
      - wordpress
    build: "./requirements/bonus/adminer"
    networks:
      - database-network
    ports:
      - "7000:7000"
  simple_website:
    init: true
    container_name: simple_website
    image: "simple_website"
    restart: always
    build: "./requirements/bonus/simple_website"
    ports:
      - "80:80"
  portainer:
    init: true
    container_name: portainer
    image: "portainer"
    restart: always
    build: "./requirements/bonus/portainer"
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

networks:
  database-network:
  webserver-network:
  redis-network:
volumes:
  wordpress-volume:
    driver: local
    driver_opts:
      type: none
      device: /home/yakhoudr/data/wordpress
      o: bind
  database-volume:
    driver: local
    driver_opts:
      type: none
      device: /home/yakhoudr/data/mariadb
      o: bind
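The compose file above reads its variables from a `.env` file in the same directory. A sketch of its shape, with placeholder values only (not the ones used in the real project):

```shell
# .env -- placeholder values, replace with your own
DB_HOST=mariadb_server
DB_NAME=wordpress_db
USER=wp_user
USER_PASSWORD=change_me
ADMIN=db_admin
ADMIN_PASSWORD=change_me_too
```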

mariadb:

We can split the configuration of this container into three parts: the Dockerfile, the tools, and the configuration file for mariadb.

  • Dockerfile: we will use Debian as the base image, install mariadb-server, copy the tools into the container, and then copy the configuration.
FROM debian:stable

RUN apt-get update && apt-get install -y mariadb-server systemctl
COPY ./tools/createdb.sh /createdb.sh
COPY ./tools/init.sql /mariadb-conf.d/init.sql
COPY ./conf /etc/mysql/my.cnf
RUN chmod +x /createdb.sh

CMD ["/createdb.sh"]
  • Tools:

init.sql:

This file contains the queries that will be run in order to create the database and the user that WordPress will use.

-- create the database that will contain all the tables needed by wordpress
create database if not exists :DB_NAME;
-- this will be the user that wordpress uses to connect to the database
-- note that in order to allow connections from outside the container we use the % wildcard as the host
create user if not exists ':USER'@'%' identified by ':UPASS';
-- by default the user cannot perform CRUD operations on the database, so we need to grant it privileges
grant all privileges on :DB_NAME.* to ':USER'@'%' identified by ':UPASS';
flush privileges;
create user if not exists ':ADMIN'@'%' identified by ':APASS';
grant all privileges on *.* to ':ADMIN'@'%';
flush privileges;
  • createdb.sh:
#!/bin/bash

# in-place modification of the queries file to replace the placeholders with the expanded values
sed -i "s/\:DB_NAME/${DB_NAME}/g" /mariadb-conf.d/init.sql
sed -i "s/\:USER/${USER}/g" /mariadb-conf.d/init.sql
sed -i "s/\:UPASS/${USER_PASSWORD}/g" /mariadb-conf.d/init.sql
sed -i "s/\:ADMIN/${ADMIN}/g" /mariadb-conf.d/init.sql
sed -i "s/\:APASS/${ADMIN_PASSWORD}/g" /mariadb-conf.d/init.sql

# create the directories that mariadb uses to store runtime files, like the pid file of the running instance
if [ ! -d /var/run/mysqld ]; then
    mkdir /var/run/mysqld
fi
if [ ! -d /run/mysqld ]; then
    mkdir /run/mysqld
fi

# initialise the data directory on the first run only
if [ ! -d /var/lib/mysql/mysql ]; then
    mysql_install_db --user=root
fi
# NOTE: the daemon must run in the foreground to keep the container up,
# so we start the mariadb service in the background, execute the queries, stop the background service, then start mariadb in the foreground
service mariadb start
# loop until the mariadb server is up
while ! mysqladmin ping -hlocalhost --silent 2>/dev/null; do
    sleep 1
done
mysql -u root < /mariadb-conf.d/init.sql
service mariadb stop
# loop until the background server has fully shut down
while mysqladmin ping -hlocalhost --silent 2>/dev/null; do
    sleep 1
done
# run the server in the foreground
exec mysqld -u root
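The placeholder-substitution step can be tried in isolation, outside the container. This sketch uses a throwaway file and hard-coded example values instead of the project's environment variables:

```shell
# build a miniature init.sql using the same :PLACEHOLDER convention
cat > /tmp/demo-init.sql <<'EOF'
create database if not exists :DB_NAME;
create user if not exists ':USER'@'%' identified by ':UPASS';
EOF

# expand the placeholders the same way createdb.sh does
sed -i "s/\:DB_NAME/wordpress_db/g" /tmp/demo-init.sql
sed -i "s/\:USER/wp_user/g" /tmp/demo-init.sql
sed -i "s/\:UPASS/secret/g" /tmp/demo-init.sql

cat /tmp/demo-init.sql
```

Note that `:USER` is replaced before `:UPASS`; that is safe here because neither placeholder is a prefix of the other.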
