Docker: What Frontend Engineers Should Know

Eric_见嘉
14 min read · Jul 12, 2023

1. Preface

Recently, while trying to deploy my own application, I found I couldn't get around Docker. As a frontend developer, this technology was still somewhat unfamiliar to me. However, if, like me, you once used the family desktop as a gaming machine, installed single-player games from CDs, and played until you were "lost in the game," then after reading this article you will definitely be able to use Docker.

2. What is Docker?

Before talking about Docker, we need to understand virtual machines. To put it simply, a virtual machine is like another computer built on the local physical machine:

[Figure: virtual machine architecture]

[Figure: Docker architecture]

Compared to virtual machines, Docker doesn’t need to run another operating system by virtualizing the hardware environment. It is a lightweight virtualization technique known as containerization. Docker uses core functionalities of the operating system, such as Linux namespaces and control groups (cgroups), to create independent and isolated runtime environments called containers.

Each container can run one or more applications and provides isolation and resource management capabilities similar to physical computers. Applications can be run on different machines or operating systems without worrying about environmental differences or dependency conflicts.

In summary, Docker containers share the host machine’s kernel and are completely independent from each other. This ensures that the applications to be deployed are always in a unified environment configuration, facilitating continuous integration and continuous delivery. Additionally, this feature can be used to create a unified development environment.
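You can see this kernel sharing for yourself. The following sketch assumes Docker is installed; note that on macOS or Windows the container's kernel belongs to Docker's Linux VM, so only on a Linux host will the two outputs match:

```shell
# Print the host's kernel release
uname -r
# Print the kernel release seen inside an Alpine container; on a Linux host
# the two match, because the container shares the host kernel
if command -v docker >/dev/null 2>&1; then
  docker run --rm alpine uname -r
else
  echo "docker not installed; skipping container check"
fi
```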

3. Components of Docker

3.1 Intuitive Understanding of Docker

As a tool, Docker consists of three main components:

  • Dockerfile
  • Image
  • Container

The Dockerfile is like a design blueprint for a particular environment. An image is a snapshot built according to the Dockerfile, and it can be used to start one or more independent environments, which are called containers.

If we were to make an analogy in real life, it would be something like this:
Many boys go through a phase of loving games. Back then, when my family got a new computer, I was so excited. I immediately gathered my friends and put together a bundle of game CDs: CS 1.5, CS 1.6, Warcraft, StarCraft, Swordsman III, Swordsman Inn, Romance of the Three Kingdoms 11, Grand Theft Auto… we had them all. Then my friends and I sat down together, I pressed the power button on the computer, and inserted the CS 1.5 game CD.

[Close → My Computer → F Drive → Double-click → Select installation directory → Install]

The CD spun in the drive for a while, and it was done. I double-clicked to run CS 1.5, and my brothers were waiting for me in the desert map.

Of course, when I had internet access at home, I started directly downloading game installers from GameCopyWorld to save time.

In this childhood story, each game CD is a different image, and the CS 1.5 that runs when you double-click it is a container. The game designs drawn up by different game companies are like Dockerfiles, and burning the files onto a CD according to such a design is the process of building an image. The differences between CS 1.5 and 1.6, or Swordsman I and III, can be understood as different versions (tags) of images. Downloading an installer directly from GameCopyWorld is like pulling an image, and GameCopyWorld as a game platform is like Docker Hub, the official image registry for Docker.

3.2 Docker Workflow

If we summarize the above process, it would be something like this:

Preparing the Environment: First, Docker needs to be installed on the computer, similar to the process of installing a game.

  • docker --version
  • docker info

Downloading/Building Images: Similar to downloading game installers from GameCopyWorld, Docker uses images to build containers. An image is a preconfigured file that contains a complete application and its dependencies. You can search and download suitable images from Docker Hub and other image repositories. Alternatively, you can prepare your own burner and create images yourself.

  • docker pull <image>
  • docker build -t <image> <path>

Creating Containers: Once the required image is downloaded, Docker commands can be used to create containers. Containers are running instances created based on images, similar to installing games. Various parameters such as port mapping and file mounting can be specified for the containers.

docker run -d --name <your_container_name> -p 8080:80 -v $(pwd):/app <image>

Running Applications: Once the container is created, Docker commands can be used to start the container and run the application. Just like double-clicking to run a game, Docker will start the container and run the desired application.

  • docker exec -it <container> <command>
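The four steps above can be strung together as a single sketch. This assumes Docker is installed and running; the image nginx:stable-alpine3.17 and the container name demo-web are just illustrative choices:

```shell
#!/usr/bin/env sh
set -e
# Skip gracefully on machines without a working Docker setup
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running; skipping"; exit 0; }

# 1. Check the environment
docker --version
# 2. Download an image
docker pull nginx:stable-alpine3.17
# 3. Create a container from the image, mapping host port 8080 to container port 80
docker run -d --name demo-web -p 8080:80 nginx:stable-alpine3.17
# 4. Run a command inside the running container
docker exec demo-web nginx -v
# Clean up the demo container
docker rm -f demo-web
```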

3.3 Common Docker Commands

Container Management:

  • docker run <image>: Run a new container
  • docker start <container>: Start a stopped container
  • docker stop <container>: Stop a running container
  • docker restart <container>: Restart a container
  • docker rm <container>: Delete a container
  • docker ps: List currently running containers
  • docker ps -a: List all containers, including stopped ones

Image Management:

  • docker images: List local images
  • docker pull <image>: Download an image
  • docker push <image>: Push an image to a remote repository
  • docker build -t <image> <path>: Build an image based on a Dockerfile
  • docker rmi <image>: Delete a local image

Logs and Output:

  • docker logs <container>: View container logs
  • docker exec -it <container> <command>: Execute a command in a running container
  • docker cp <container>:<path> <local_path>: Copy files from a container to the local machine

Networking and Ports:

  • docker network ls: List Docker networks
  • docker network create <network>: Create a new Docker network
  • docker network connect <network> <container>: Connect a container to a specified network
  • docker port <container>: Show the port mapping of a container

Data Management:

  • docker volume ls: List Docker volumes
  • docker volume create <volume>: Create a new Docker volume
  • docker volume inspect <volume>: Inspect detailed information about a volume
  • docker volume rm <volume>: Delete a Docker volume

Other Common Commands:

  • docker info: Show system information
  • docker inspect <container>: Display detailed configuration information of a container
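During local experimentation, a few destructive one-liners that combine these commands often come in handy. A sketch, with the caveat that these delete ALL containers and images on the machine, and that xargs -r (skip empty input) is a GNU extension:

```shell
# Skip gracefully when Docker isn't usable
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running; skipping"; exit 0; }

# Force-remove every container, running or stopped
docker ps -aq | xargs -r docker rm -f
# Force-remove every local image
docker images -q | xargs -r docker rmi -f
# Remove stopped containers, dangling images, and unused networks in one go
docker system prune -f
```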

4. Deploying a Frontend Project

Now, let’s give it a try.
Install Docker from the official Docker website and, if needed, configure a local registry mirror for faster pulls; I'll skip these steps here.
For example, if I want to deploy a frontend project locally, what should I do?

4.1 Prepare the Frontend Project Source Code

Create a React + TypeScript project quickly using Vite:

npm create vite@latest my-react-app-docker-1 -- --template react-ts

After creating the project, build the project code: npm run build. This will generate the bundled files in the dist directory.

4.2 Add nginx.conf

To deploy the project, we need to use an Nginx server. Nginx is a high-performance open-source web server and reverse proxy server known for its excellent performance in handling high concurrency and load balancing. Below is the nginx.conf file, which is the main configuration file for Nginx. After starting the Nginx service, it will determine how to handle incoming requests and responses based on this file:

# Global configuration
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

# Event module configuration
events {
    use epoll; # Event-driven I/O
    worker_connections 1024;
}

# HTTP module configuration
http {
    # MIME types configuration
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format configuration
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # Access log configuration
    access_log /var/log/nginx/access.log main;

    # Gzip compression configuration
    gzip on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # Server configuration
    server {
        listen 80;
        server_name localhost;

        # Root directory configuration
        root /usr/share/nginx/html;
        index index.html;

        # Other routing configuration
        location / {
            try_files $uri $uri/ /index.html;
        }

        # Static file caching configuration
        location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
            expires 1d;
        }
    }
}
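Before baking this file into an image, you can sanity-check it with the nginx binary inside the same base image used later in this article. A sketch, assuming Docker is running and nginx.conf sits in the current directory:

```shell
# Skip gracefully when Docker isn't usable
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running; skipping"; exit 0; }

# Mount the local nginx.conf read-only and run `nginx -t`, which checks the
# configuration syntax and exits without serving any traffic
docker run --rm \
  -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx:stable-alpine3.17 nginx -t
```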

4.3 Add Dockerfile

Create a new file named Dockerfile in the project root directory with the following content:

FROM nginx:stable-alpine3.17 
COPY dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf

In the Dockerfile:
FROM nginx:stable-alpine3.17: This directive specifies the base image, nginx:stable-alpine3.17, which is the stable Nginx release on Alpine Linux 3.17 (Alpine-based images are lightweight).
COPY dist /usr/share/nginx/html: This directive copies the contents of the dist directory in the current directory to the /usr/share/nginx/html directory in the Docker image (which is the default HTML directory for Nginx).
COPY nginx.conf /etc/nginx/nginx.conf: This directive copies the nginx.conf file from the current directory to the /etc/nginx/nginx.conf file in the Docker image. This file is the Nginx configuration file, and by copying it into the image, we can use a custom Nginx configuration when running the container.

It’s recommended to pull commonly used base images like ubuntu, node, nginx, postgres, etc., in advance, so that when building images locally later, the local images will be used instead of pulling them from remote sources. Now, let’s pull nginx:stable-alpine3.17:

docker pull nginx:stable-alpine3.17

4.4 Build the Image

Now that the preparations are complete, let’s build the image:

docker build -t vite-web:v1 .

The -t flag specifies the name of the image (vite-web). You can optionally append a tag after a colon (:), like v1; if omitted, Docker uses the latest tag.
The . means the directory where the Dockerfile is located. Docker will look for the Dockerfile in this directory and use it to build the image. Here, it represents the current directory.

After the build is complete, you can use the command docker images to see the vite-web:v1 image in the list of images.

4.5 Start the Container

After the image is ready, use it to start a container:

docker run -d --name my-web-1 -p 8080:80 vite-web:v1

The -d parameter runs the container in detached mode, so it runs in the background without blocking the terminal.
The --name my-web-1 parameter names the container "my-web-1". This name can be used to uniquely identify the container.
The -p 8080:80 parameter maps port 80 of the container to port 8080 of the host machine, so the application inside the container can be reached through port 8080 on the host. (As with the -v parameter, the left side is the host and the right side is the container.)
vite-web:v1 is the name and tag of the image to run.
Execute the command docker ps to see the list of running containers.

You can use the curl command to check the web page connection status:

curl http://localhost:8080 -v

Open the web page in a browser to see if it works.

Sometimes, you may want to view the logs of a container, such as to check why it didn’t start, why it reported errors, or who accessed it. Use the following command:

docker logs my-web-1 # Using the container name 
docker logs 00e39d9365df # Alternatively, using the container ID

At this point, the local frontend deployment is successful!

4.6 Automation Deployment Process

In the entire process above, both image building and container running require manual execution of commands. However, repetitive tasks can be optimized. Now, let’s automate the entire deployment process using a shell script.
Shell script files generally live in the bin directory. Create a new file named setup_host.sh in the bin directory with the following content:

#!/usr/bin/env sh

image_name=vite-web              # Image name
version=$(date +'%Y%m%d-%H%M%S') # Image version (current timestamp)
container_name=my-web            # Container name
host_port=8080                   # Host port
container_port=80                # Container port

echo 'docker build...'
docker build -t $image_name:$version .

# If a container with the same name already exists, remove it
if [ "$(docker ps -aq -f name=$container_name)" ]; then
    echo 'docker rm...'
    docker rm -f $container_name
fi

echo 'docker run...'
docker run -d --name $container_name -p $host_port:$container_port $image_name:$version
echo 'Done!'

After writing the script, remove the previously started my-web-1 container, because it occupies port 8080 on the host and would conflict with host_port in the script (-f forces removal even though the container is still running):

docker rm -f my-web-1

After deletion, run the script in the root directory:

chmod +x bin/setup_host.sh # Add executable permissions 
bin/setup_host.sh

The automated deployment script is successful!

4.7 Summary

Let’s summarize this section with an illustration:

At build time, the two files required by the Nginx server (the dist directory and nginx.conf) are copied from the host into the image. The image then starts a container, and port 80 of the container is mapped to port 8080 of the host machine, allowing access to the web page from the host.

5. Deploying a Node.js Application

Once you’ve mastered the local deployment of frontend projects, how do you deploy a Node.js application?

5.1 Prepare the Backend Service Source Code

We’ll skip the installation of Node.js since you already have it installed. Let’s get started with the command line:

mkdir my-express-app-docker-1
cd $_
npm init -y
npm i express
touch server.js
ls
nano server.js

The content of server.js file should be as follows:

'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST, () => {
  console.log(`Running on http://${HOST}:${PORT}`);
});

Add a start script to the "scripts" section of the package.json file:

"scripts": {
  "start": "node server.js"
}

After running npm run start, use curl http://localhost:8080 to check the response; it should return Hello World.

5.2 Add .dockerignore and Dockerfile

The express package uses Node.js, so we don’t need to copy the local node_modules into Docker. We can create a file similar to .gitignore called .dockerignore to ignore the corresponding files. The content of the .dockerignore file is as follows:

node_modules
npm-debug.log

In the project’s root directory, create a file named Dockerfile with the following content:

# Use a lightweight node18 image
FROM node:18-alpine
# Create a working directory /app
WORKDIR /app

# Copy package.json and package-lock.json needed for dependency installation to /app
COPY package*.json ./
# Install dependencies
RUN npm install
# Use npm ci --omit=dev for production environment
# RUN npm ci --omit=dev

# Package the source code into /app
COPY . .

# Expose port 8080
EXPOSE 8080
# After starting the container, execute node server.js
CMD ["node", "server.js"]

5.3 Build Image + Start Container

This step is similar to the frontend deployment section.

Build the image:

docker build -t express-app .

This command builds an image named express-app. If you don't append a colon and tag, the image is tagged latest by default.

Start the container:

docker run -d --name my-express-app -p 3002:8080 express-app

This command starts an express application container named my-express-app using the express-app image.

Use the curl command to check the webpage connection:

curl -i localhost:3002

The connection should be successful!

To stop the running service, execute docker stop my-express-app. To start it again, execute docker start my-express-app.

Node.js application deployment is now complete!

As for automated deployment, you can refer to the frontend automated deployment script and customize it accordingly. We won’t go into detail here.

5.4 Enter the Container

If you want to enter the container to take a look, execute the following command:

docker exec -it my-express-app ash

Entering whoami will show that the current user is root.

If you haven’t set permissions in the Dockerfile, it will default to using root. This is a potential issue, as production environments generally don’t directly use root for service deployment. We’ll leave this topic for future discussions.
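As a quick taste of what the fix looks like, the Dockerfile from section 5.2 can drop root by switching to the non-root node user that the official node images ship with. A minimal sketch, not a full hardening guide:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# chown so the non-root user can read the app files
COPY --chown=node:node package*.json ./
RUN npm install
COPY --chown=node:node . .

# Run as the image's built-in non-root "node" user from here on
USER node

EXPOSE 8080
CMD ["node", "server.js"]
```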

Type exit and press Enter to exit the container operation.

6. Image Pushing

Now that we have the images for both the frontend and backend (vite-web and express-app), let’s push them to a repository. This way, the testing engineer can pull the code from the testing or production environment for testing. Typically, companies set up their own Docker image repositories. In this example, we’ll use Docker Hub (registration omitted):

6.1 Docker Login

First, log in to Docker Hub:

docker login

Docker login successful.

6.2 Tagging Images

Next, tag the images:

# docker tag <image> <username>/<image>
docker tag vite-web:v1 ericknight/vite-web
docker tag express-app ericknight/express-app

Note: Replace ericknight with your actual Docker Hub username; do not enter random values.

6.3 Pushing to the Image Repository

Finally, push the images to the repository:

docker push ericknight/vite-web:latest
docker push ericknight/express-app:latest

Afterward, you will be able to see the newly pushed images on Docker Hub:

7. Conclusion

With this, you should have a basic understanding of how to use Docker and deploy projects locally. Once you have mastered the basic workflow, you can:

  • Use Dockerfile
  • Build or pull Docker images
  • Run Docker containers
  • Push Docker images

Of course, there are many other Docker commands and deployment aspects to explore. For instance, integrating frontend, backend, and databases through a network, achieving data persistence, implementing CI/CD, deploying to cloud servers, or even abandoning Docker and opting for serverless deployment. These topics will be discussed in future blog posts.

If there are any errors, please correct me. Thank you for reading!

If you don’t have me as a friend yet, feel free to add me on WeChat: enjoy_Mr_cat. Please mention “DEV” as a reference. You’ll get the chance to join a high-quality frontend development community and meet more like-minded friends. Also, feel free to follow my official WeChat account, “见嘉 Being Dev,” and star it to receive updates promptly.

References:
Virtual Machines: https://www.ionos.com/digitalguide/server/know-how/virtual-machines/
Docker — 从入门到实践 (Docker: From Beginner to Practice): https://docker-practice.github.io/zh-cn/
Dockerizing a Node.js web app: https://nodejs.org/en/docs/guides/nodejs-docker-webapp
Docker docs: https://docs.docker.com/get-started/overview/

Originally published at https://dev.to on July 12, 2023.

Chinese edition Originally published at juejin.cn on June 29, 2023.
