Creating and Deploying a Full Stack Web Application
Using Docker, PostgreSQL, Express, React, and NGINX
Getting Started
Welcome! This is a step-by-step guide on creating and deploying a full stack web application using Docker, PostgreSQL, Express, React, and NGINX. To demonstrate the underlying concepts we’ll build a project named Present, with the objective of creating a browser user interface that displays the current database time at the press of a button. To get a better sense of what we’ll build you can try the online demo. Also, you can refer to the GitHub repo if you get stuck, and please share any questions or thoughts in the comments!
This article assumes some familiarity with the core underlying technologies (Docker, PostgreSQL, Express, React, and NGINX). However, the instructions are explained step by step so anyone is encouraged to learn along the way. The project is best used as an educational tool and scaffold to build your own ideas on. In this respect, it is minimal and may require additional configuration for your specific needs.
Requirements
The requirements for the project are as follows:
- Run the project entirely within a `docker-compose` orchestration. For simplicity this will encapsulate all initialization, so only Docker is required to build/run the entire project with one command.
- Create a PostgreSQL container with specific admin credentials.
- Create an Express container that can query the PostgreSQL container through Docker internally, and create an `HTTP GET` endpoint named `/now` to encapsulate this functionality.
- Create a React container with a single button and paragraph. Pressing the button will send a request to the `/now` API endpoint, returning the current database time. Then the result will be displayed in the paragraph.
- Enable hot-reloading in the development environment for the `api` and `ui`, so we can see code changes live without restarting the containers.
- Create a separate production environment and deploy it to a cloud server so the project is publicly accessible online.
For reference, here’s a screenshot of the final UI:
Structure
This project is organized into a mono-repo directory pattern, so the entire project will contain multiple distinct apps (named `db`, `api`, and `ui`) within a single repository. To distinguish which we’re talking about at any time, we’ll use the terms project and app strictly in relation to the entire project vs a distinct app. Below is a crude example of the directory structure:
- present
- db
- data
- api
- src
- Dockerfile
- package.json
- ui
- src
- Dockerfile
- package.json
- .env
- docker-compose.yml
Below is a diagram of the container structure with exposed port numbers:
Prerequisites
Feel free to skip any step if the software is already installed on your computer.
Install Docker
First install Docker to power the project. This will also install `docker-compose`, which we’ll use for orchestration. Please follow the installation instructions.
Install psql
Next, install `psql` in order to test that the database is running correctly. There are many ways to install this tool, but I recommend following these installation instructions for your specific operating system.
Install Node
Lastly, install Node in order to use the `npm` and `npx` command-line tools. Since we’re running the project entirely within Docker, we won’t run on the locally installed Node engine at all. For this reason it’s important we match the local version of Node (`14.14`) to the version used in the containers. There are two installation methods:
- The recommended method is the `nvm` tool, a command-line utility built to easily switch between local Node versions. Please follow these installation instructions.
- You may also manually install a specific version. However, this is not recommended because you would need to reinstall to change versions.
Database Setup
I like working up from the bottom of the stack, so let’s start with the database. The goal of this section is to run a PostgreSQL Docker container and query the `db` for the current time. Also, we’ll initialize the project configuration by creating the `.env` and `docker-compose.yml` files.
Environment Variables
First let’s set up the env vars for the `db`. Create a file named `.env` in the `present` directory with the following content (feel free to replace the values I use with your own more secure credentials):
DB_PORT=5432
DB_USER=myadmin
DB_PASSWORD=mypassword
Docker Compose
Next, let’s configure the container within the `docker-compose` orchestration. As the PostgreSQL Docker Hub documentation explains, the image configures the admin credentials with the `POSTGRES_USER` and `POSTGRES_PASSWORD` env vars, so we will set these to the env vars we defined above. As we continue, there will be other env vars with specific names that we will map similarly, although not all share this property. Create a file named `docker-compose.yml` with the following content:
version: "3.8"
services:
db:
image: postgres:13.1
environment:
- POSTGRES_USER=$DB_USER
- POSTGRES_PASSWORD=$DB_PASSWORD
ports:
- $DB_PORT:$DB_PORT
volumes:
- ./db/data:/var/lib/postgresql/data
restart: always
command: -p $DB_PORT
Here’s a quick breakdown of what this does:
- `image: postgres:13.1` pins the specific PostgreSQL version.
- `environment` passes and renames specific env vars (`POSTGRES_USER`, `POSTGRES_PASSWORD`) into the container, so it isn’t polluted with unneeded extras.
- `ports` maps the container port (`$DB_PORT:$DB_PORT`) to your local machine, using the env var we set earlier.
- `volumes` mounts the container’s internal PostgreSQL data directory (`/var/lib/postgresql/data`) to `./db/data` on the local machine, so the data is persisted between container restarts. Without this step the database would reset to fresh-install defaults with each restart. Also, if you are following along using `git`, make sure to add the `data` directory to your `.gitignore` file.
- `restart: always` restarts the container each time it’s spun up, as specified by the PostgreSQL Docker Hub documentation.
- `command: -p $DB_PORT` specifies the running port for PostgreSQL, using the env var we set earlier.
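To make mappings like `$DB_PORT:$DB_PORT` concrete, here’s a minimal JavaScript sketch of the env var substitution `docker-compose` performs against the `.env` file. This is illustrative only, not how compose is actually implemented:

```javascript
// Compose replaces $VAR tokens with values from the .env file before
// starting containers, so `$DB_PORT:$DB_PORT` becomes `5432:5432`.
const envFile = { DB_PORT: '5432' }
const interpolate = (str) =>
  str.replace(/\$([A-Z_]+)/g, (_, name) => envFile[name] ?? '')

console.log(interpolate('$DB_PORT:$DB_PORT')) // 5432:5432
```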
Run and Confirm
Now that the `db` container is configured and ready to run, let’s start the project with the command `docker-compose up`. After a few minutes the resulting screen should look like this:
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2020-10-21 21:04:30.110 UTC [1] LOG: starting PostgreSQL 13.0 (Debian 13.0-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-10-21 21:04:30.110 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-10-21 21:04:30.110 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-10-21 21:04:30.115 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-10-21 21:04:30.146 UTC [66] LOG: database system was shut down at 2020-10-21 21:04:30 UTC
db_1 | 2020-10-21 21:04:30.182 UTC [1] LOG: database system is ready to accept connections
Let’s confirm everything is working correctly by querying the `db` directly. To do this we’ll use the `psql` tool (installed as a prerequisite) in conjunction with the `NOW()` PostgreSQL function. To query the database for the current time, open a new terminal window and run the command `psql -h 0.0.0.0 -p 5432 -d postgres -U myadmin -c "SELECT NOW()"`.
Note: Remember to replace `-U myadmin` with whatever username you set in your `.env` file earlier. Similarly, use your own password when prompted.
The results should look like this (with the correct time):
now
-------------------------------
2020-11-26 16:19:44.285649+00
(1 row)
🎉 Great job setting up the `db` container! To stop the project press `ctrl + C` (or run `docker-compose down` if you ran in `--detach` mode). We can double-check it is no longer running with the command `docker container ls`, or by running the `psql` command again to make sure the connection fails.
Note: Other useful commands for debugging are `docker-compose down -v`, which clears the container volumes, and `docker-compose up --build`, which rebuilds the image before starting.
API Setup
The goal of this section is to create an Express app with a single `HTTP GET` endpoint named `/now`. This endpoint will query the `db` for the present time and return the result.
Express
First let’s create an Express app by following these steps:
- Initialize Node App: Create a new directory named `api` within the `present` directory, then initialize Node by running `npm init -y` within it.
- Install Dependencies: We’ll need a few packages to run Express, connect to a PostgreSQL database, and allow cross-origin resource sharing. Install these by running `npm i express pg cors` in the `api` directory.
- Install Dev Dependencies: We’ll also need a dev dependency to enable hot-reloading. Similarly, install it by running `npm i -D nodemon`.
- Create Run Script: Add `"dev": "nodemon"` to the `scripts` section of the `package.json` file. Then change the `main` line to `"main": "src/index.js"`.
- Write Source Code: Now we’re ready to write the Express app. The JavaScript below is based on the Getting Started documentation for both Express and node-postgres. Create a directory named `src` in the `api` directory, then create a file named `index.js` in `src` with the following:
const express = require('express')
const cors = require('cors')
const { Pool } = require('pg')

const port = process.env.PORT
const app = express()
app.use(cors())
const pool = new Pool()

app.get('/now', async (req, res) => {
  try {
    const { rows } = await pool.query('SELECT NOW()')
    res.status(200).send(rows[0].now)
  } catch (err) {
    console.log(err)
    res.status(500).send() // respond so the request doesn't hang on errors
  }
})

app.listen(port, () => {
  console.log(`api listening at http://0.0.0.0:${port}`)
})
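As a quick illustration of why `rows[0].now` works above, here is a simplified sketch of the shape node-postgres returns for `SELECT NOW()`: `rows` is an array of objects keyed by column name, and timestamp columns are parsed into JavaScript `Date` objects. The real result object carries more fields; only the part the endpoint uses is mocked here:

```javascript
// Simplified mock of a node-postgres query result for `SELECT NOW()`.
// Only the `rows` field used by the /now endpoint is shown.
const result = { rows: [{ now: new Date('2020-10-22T02:15:00Z') }] }

const { rows } = result
console.log(rows[0].now instanceof Date) // true
```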
Environment Variables
Next let’s update the `.env` file by appending the following:
DB_NAME=postgres
API_NODE_ENV=development
API_PORT=5000
Dockerfile
Let’s create a file named `Dockerfile` in the `api` directory to configure the Docker image. Copy the following into the file (based on the Node Docker Hub documentation):
FROM node:14.14
WORKDIR /app
COPY package*.json ./
RUN npm i
CMD npm run dev
Note: We only `COPY` the `package*.json` dependency files (specifically `package.json` and `package-lock.json`) because we’ll set up a named volume in the next section to enable hot-reloading.
Dockerignore
Now let’s create a `.dockerignore` file in the `api` directory to make our container more lightweight. Copy the following into the file:
node_modules
Docker Compose
As we discussed earlier, several of these env vars have specific names. `NODE_ENV` is defined by Node and will be used by both the `api` and `ui`, and the `PG*` env vars are used by `node-postgres` as connection parameters. Also, you may notice the `PGHOST` value isn’t a traditional host (eg. `0.0.0.0`) because we are leveraging the `docker-compose` orchestration to communicate internally. Let’s configure the container orchestration by adding the following to the `docker-compose.yml` file:
api:
depends_on:
- db
build: ./api
environment:
- NODE_ENV=$API_NODE_ENV
- PORT=$API_PORT
- PGHOST=db
- PGPORT=$DB_PORT
- PGUSER=$DB_USER
- PGPASSWORD=$DB_PASSWORD
- PGDATABASE=$DB_NAME
ports:
- $API_PORT:$API_PORT
volumes:
- ./api:/app
- /app/node_modules
Note: Be sure to use the correct spacing because `.yml` files are picky. The `api` service should be at the same level as `db` from before.
Here’s a quick breakdown of what this configuration does:
- `depends_on: db` allows the `db` and `api` to communicate through Docker internally. It also waits to start the `api` container until after the `db` container has started.
- `build: ./api` specifies the build context path (including the `Dockerfile` and `.dockerignore`).
- `environment` passes specific env vars (`NODE_ENV`, `PORT`, and the `PG*` connection vars) into the container, so it isn’t polluted with unneeded extras.
- `ports` maps the container port (`$API_PORT:$API_PORT`) to the local machine.
- The `./api:/app` volume mounts the container’s internal `/app` directory to the source files. This allows the container to detect changes to files, enabling hot-reloading.
- The `/app/node_modules` volume mounts a data volume over the `node_modules` directory, overriding that path in the mount from the previous step. This ensures we have the correct binaries for the Linux-based container as opposed to the local OS. It also preserves the contents of the container’s `node_modules` directory, so the dependencies don’t have to be re-installed with each restart.
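A quick sketch of why `new Pool()` in our Express app needs no arguments: node-postgres falls back to the standard `PG*` env vars for its connection parameters. The explicit config below is roughly what the defaulted one resolves to inside the container (illustrative values, not the library’s internals):

```javascript
// Illustrative only: node-postgres reads PGHOST, PGPORT, PGUSER,
// PGPASSWORD, and PGDATABASE when `new Pool()` gets no config object.
process.env.PGHOST = 'db'
process.env.PGPORT = '5432'
process.env.PGDATABASE = 'postgres'

// Roughly what the defaulted connection config resolves to:
const config = {
  host: process.env.PGHOST,
  port: Number(process.env.PGPORT),
  database: process.env.PGDATABASE,
}
console.log(config.host) // db
```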
Run and Confirm
Finally the Express app is configured and ready to run. Again, let’s start the project with the command `docker-compose up`. This time more text will appear, but the end result should look like this:
db_1 | 2020-10-22 02:13:23.522 UTC [1] LOG: database system is ready to accept connections
api_1 | [nodemon] 2.0.6
api_1 | [nodemon] to restart at any time, enter `rs`
api_1 | [nodemon] watching path(s): *.*
api_1 | [nodemon] watching extensions: js,mjs,json
api_1 | [nodemon] starting `node index.js`
api_1 | api listening at http://0.0.0.0:5000
Note: Your logs may be out of order depending on initialization time.
Let’s confirm everything is working correctly by sending an `HTTP GET` request to the `/now` endpoint. We do this by opening a browser and copying `http://localhost:5000/now` into the address bar. The result should appear in the browser and look like `"YYYY-MM-DDTHH:MM:SS..."`.
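Since the endpoint returns an ISO 8601 timestamp string, you can also sanity-check it from any JavaScript console. For example, with a hypothetical response body:

```javascript
// Hypothetical body returned by /now; the real value will be the
// current database time, but the shape is the same.
const body = '2020-10-22T02:15:00.000Z'
const when = new Date(body)

console.log(when.getUTCFullYear()) // 2020
console.log(Number.isNaN(when.getTime())) // false, i.e. a valid date
```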
🎉🎉 Hurray! We’ve set up the project’s back-end, with both the `db` and `api` running and communicating through Docker.
UI Setup
The goal of this section is to create a React app, consisting of a button and paragraph. Pressing the button will send an `HTTP GET` request to the `/now` `api` endpoint, returning the current database time. Then the result will be displayed in the paragraph.
React App
First let’s create a React app that is capable of connecting to the `api` by following these steps:
- Create React App: Create the React app by running `npx create-react-app ui` in the `present` directory. This will create the React app within a new directory named `ui` (it may take a few minutes to complete).
- Write App Component: Write the component by replacing the contents of the `ui/src/App.js` file with the following:
import React, { useState } from 'react'

const apiHost = process.env.REACT_APP_API_HOST

export default function App() {
  const [now, setNow] = useState()

  async function onClick() {
    const res = await fetch(`${apiHost}/now`)
    const json = await res.json()
    setNow(json)
  }

  return <div>
    <button onClick={onClick}>Present</button>
    {now && <p>{now}</p>}
  </div>
}
Environment Variables
Next let’s update the `.env` file with the new env vars we’ll need, where `REACT_APP_API_HOST` follows a specific naming convention:
UI_NODE_ENV=development
UI_PORT=3000
REACT_APP_API_HOST=http://localhost:5000
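The naming convention here is that create-react-app only exposes env vars whose names start with `REACT_APP_` to the browser bundle, which keeps server-side values like `UI_PORT` private. A rough sketch of that filtering:

```javascript
// Illustrative filter mirroring create-react-app's behaviour: only
// REACT_APP_-prefixed vars reach the client code.
const env = {
  REACT_APP_API_HOST: 'http://localhost:5000',
  UI_PORT: '3000',
}
const exposed = Object.keys(env).filter((key) => key.startsWith('REACT_APP_'))

console.log(exposed) // ['REACT_APP_API_HOST']
```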
Dockerfile
Now let’s create a file named `Dockerfile` in the `ui` directory with the following content:
FROM node:14.14
WORKDIR /app
COPY package*.json ./
RUN npm i
CMD npm run start
Note: This file should be an exact copy of the one we created in the `api` directory, with the exception of the `CMD` value.
Dockerignore
Now let’s create a `.dockerignore` file in the `ui` directory to make our container more lightweight. Unlike the `api`, we’ll also add `build` for when we need it in the deployment section. Copy the following into the file:
node_modules
build
Docker Compose
Next let’s configure the `ui` within the orchestration by adding the following to the `docker-compose.yml` file:
ui:
build: ./ui
environment:
- NODE_ENV=$UI_NODE_ENV
- PORT=$UI_PORT
- REACT_APP_API_HOST
ports:
- $UI_PORT:$UI_PORT
volumes:
- ./ui:/app
- /app/node_modules
stdin_open: true
This configuration follows the patterns in the `api`, so we won’t review what everything means. However, there is one additional parameter, `stdin_open: true`, which allows the dev server to keep running once the container is started (instead of exiting with code `0`).
Run and Confirm
Finally the React app is configured and ready to run. Again, let’s start the project with the command `docker-compose up`. This time there will be more text, but the end result should look like this:
ui_1 | Starting the development server...
ui_1 | Compiled successfully!
ui_1 | You can now view ui in the browser.
ui_1 | Local: http://localhost:3000
ui_1 | On Your Network: http://X.X.X.X:3000
Confirm everything is working correctly by loading the UI and pressing the button. We do this by opening a browser and copying `http://localhost:3000` into the address bar. The result should appear in the browser and look like the screenshot in the Getting Started section (after pressing the button).
🎉🎉🎉 Congratulations! We’ve now successfully set up the project’s entire development environment within Docker!
Production Deploy
The goal of this section is to create an Ubuntu virtual machine to host the project online. We’ll also need to add production configuration for the container builds, orchestration, and environment variables.
In this orchestration the `db` and `api` only have minor changes, while the `ui` is completely refactored. Specifically, we’ll replace the single `ui` container (running the dev server) with a `ui_build` container that creates a static build and mounts it to the VM, and a `ui_server` container that runs `NGINX` to host the static build. Below is a diagram of the orchestration:
API Script
First let’s add a new script to the `api` `package.json` file: `"prod": "node ."`. This runs the app with `node` rather than `nodemon`, since hot-reloading should not be enabled in a production environment.
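A short sketch of why `node .` works: when given a directory, Node looks at that directory’s `package.json` and runs its `main` entry, which we pointed at `src/index.js` earlier. Simplified (the real resolver handles more cases):

```javascript
// Illustrative only: a stripped-down version of how `node .` picks its
// entry point from package.json.
const pkg = { main: 'src/index.js', scripts: { prod: 'node .' } }
const entry = pkg.main || 'index.js' // Node falls back to index.js

console.log(entry) // src/index.js
```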
Dockerfiles
Next let’s configure the production orchestration by creating a new file named `Dockerfile.prod` within the `api` directory with the following content:
FROM node:14.14-alpine
WORKDIR /app
COPY . .
RUN npm i --production
CMD npm run prod
This file is based on our development `Dockerfile` with the following differences:
- Tag the image as `-alpine`, making it more lightweight. We’ll do this with all our production images.
- `COPY` all files (not just `package*.json`) because we don’t need hot-reloading, so there’s no need to create a named volume.
- `RUN npm i` with the `--production` flag so unneeded dependencies aren’t installed (in our case this excludes `nodemon`).
- Change the `CMD` to the `prod` script we created above.
Similarly, let’s create another file named `Dockerfile.prod` in the `ui` directory with the following content:
FROM node:14.14-alpine
WORKDIR /app
COPY . .
RUN npm i --production
CMD npm run build
Docker Compose
Let’s create a file named `docker-compose.prod.yml` in the `present` directory with the following:
version: "3.8"
services:
db:
image: postgres:13.1-alpine
environment:
- POSTGRES_USER=$DB_USER
- POSTGRES_PASSWORD=$DB_PASSWORD
ports:
- $DB_PORT:$DB_PORT
volumes:
- ./db/data:/var/lib/postgresql/data
restart: always
command: -p $DB_PORT
api:
depends_on:
- db
build:
context: ./api
dockerfile: Dockerfile.prod
environment:
- NODE_ENV=$API_NODE_ENV
- PORT=$API_PORT
- PGHOST=db
- PGPORT=$DB_PORT
- PGUSER=$DB_USER
- PGPASSWORD=$DB_PASSWORD
- PGDATABASE=$DB_NAME
ports:
- $API_PORT:$API_PORT
volumes:
- /app/node_modules
ui_build:
build:
context: ./ui
dockerfile: Dockerfile.prod
environment:
- NODE_ENV
- REACT_APP_API_HOST
volumes:
- ./ui/build:/app/build
- /app/node_modules
ui_server:
image: nginx:1.19.4-alpine
ports:
- 80:80
volumes:
- ./ui/build:/usr/share/nginx/html
Take the time to read through and understand this file. We have covered each of these parameters in previous sections, with the following exceptions:
- `build: context:` and `build: dockerfile: Dockerfile.prod` allow us to specify a custom `Dockerfile` name (`Dockerfile.prod`).
- The `ui_server`’s `ports: 80:80` mapping is intentionally non-dynamic because `80` is the default port for `HTTP`. It’s possible to change the `NGINX` listening port by overwriting the `nginx.conf`, but that’s outside the scope of this project (learn more in the NGINX Docker Hub documentation).
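Related to the `80:80` note: browsers imply port `80` for `http://` URLs, which is why the production site needs no explicit port in the address bar. The WHATWG URL API (built into Node and browsers) makes this default visible:

```javascript
// URL normalizes away a port that matches the scheme's default (80 for
// http), while a non-default port like 8080 is preserved.
const withDefault = new URL('http://example.com:80/')
const withCustom = new URL('http://example.com:8080/')

console.log(withDefault.port) // '' (default port removed)
console.log(withCustom.port) // '8080'
```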
Run and Confirm Locally
Now that we’ve set up the production orchestration, let’s run it locally with the command `docker-compose -f docker-compose.prod.yml up`. Once the build is complete, confirm everything is working correctly by opening a browser and copying `http://localhost` into the address bar (remember, the default port in our production environment is `80`). The result should look and behave identically to the development environment.
Setup VM
We’re ready to set up the online VM. Specifically, we will run an Ubuntu VM with Docker installed. To host the VM I’ll be using the Vultr platform, so I recommend beginners follow along with Vultr. Alternatively, there are many ways to do this (Digital Ocean, AWS, Google Cloud Platform, etc.), so feel free to use whatever platform you like. If you don’t have an account yet, please use one of the affiliate links below to gain some free credit:
- Vultr: https://www.vultr.com/?ref=8720533
- Digital Ocean: https://m.do.co/c/7573f5b43ba8
After logging in to Vultr, click the Deploy a New Server button to get started. On the create page, use the following configuration:
- Server: Cloud Compute.
- Location: Optional, I selected New York.
- Type: 64 bit OS, Ubuntu 20.10.
- Size: 25 GB SSD, $5/month 1 CPU, 1024MB memory.
- Hostname & Label: Optional, I wrote Present.
Click Deploy Now and wait for the server to start.
Once the VM has started, `ssh` into it by running `ssh root@<YOUR_IP>` (you can find your IP and password in the Vultr Server Details page). After connecting to the VM, install Docker by following the Ubuntu Docker installation instructions, and install Docker Compose by following the Docker Compose installation instructions.
Next, we’ll copy the source code into the VM. The recommended tool for this is `git` (if you have not been following along with `git`, feel free to use my repo). However, if you are opposed to using `git`, you can `scp` the entire project from your local machine.
Environment Variables
Now we have the entire project source code in the `~/present` directory on the VM. The last touch is to write the production env vars in a new file (this isn’t committed to `git` since it shouldn’t be publicly available). Let’s create a new file named `.env.prod` with the following structure:
DB_PORT=5432
DB_USER=myadmin_prod
DB_PASSWORD=mypassword_prod
DB_NAME=postgres

API_NODE_ENV=production
API_PORT=8080

UI_NODE_ENV=production
REACT_APP_API_HOST=http://<YOUR_IP>:8080
Remember to replace the credentials with more secure ones, and replace `<YOUR_IP>` with the IP of your VM. The only excluded env var is `UI_PORT`, since `HTTP` defaults to port `80` as discussed earlier.
Run and Confirm
Let’s run the project with our new `.env` and `docker-compose` files using the following command:
docker-compose -f docker-compose.prod.yml --env-file .env.prod up
Blast off! 🚀 This will take a few minutes to start, but once the build is complete you will be able to view the project in your browser at `http://<YOUR_IP>`. Check it out on multiple computers, even your phone!
Conclusion
Please remember that this project is designed to be a minimal scaffold to build your ideas on, and learn about more complex orchestrations. I strongly encourage diving into Docker Hub and adding other containers into the orchestration (some good starting places are MongoDB or Redis). I hope you found this article useful and please leave a comment on how you were able to expand on the project!