FastAPI, Docker and Postgres

Krish Na
5 min read · Jun 12, 2022


For the backend, we will need these components to fulfill the use case:
1. An API service.
2. A database.
3. A deployment server.

But the issue we may face here is setting up the environment on different operating systems. It can be resolved by building the service in an OS-independent way, so that it runs on any device.

With Docker we can resolve exactly this. Using Docker, we can package this backend service into an image and run that image as a container on any device. And the good part is, Docker Hub provides official images for everything from operating systems to databases. So, instead of installing these technologies on our system, we can simply pull the required images (packaged technology) and run them as they are. When a backend service needs several such images running together, we can use docker-compose.

Let’s jump into a small API service built with FastAPI and Postgres as the database, run with docker and docker-compose. Explaining FastAPI and Postgres themselves is beyond the scope of this article; there are many resources covering them.
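For context, here is a minimal sketch of the server.py module this article assumes (the Dockerfile later runs server:app); the endpoint is a placeholder, and a real application would define its own routes and database models.

# server.py: a minimal FastAPI app; "app" is the object uvicorn serves
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    # placeholder endpoint to verify the service is up
    return {"status": "ok"}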

Once the API service is set up, we will write a Dockerfile, build it to create an image, and run that image as a container.

A point to note: by default, docker build looks for a file named “Dockerfile”. Let’s have a look at the contents of the Dockerfile.
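If a different filename is ever needed, docker build accepts it explicitly via the -f flag; for example:

docker build -f base.Dockerfile -t image-name .

We will stick with the default name “Dockerfile” here.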

First we need to prepare a base environment for the setup. I am using the python:3.8 image. Within that environment, I set the working directory to /app.

FROM python:3.8
WORKDIR /app

Now we copy requirements.txt to the working directory and install all the dependencies into the environment so that the API service runs smoothly.

COPY requirements.txt /app/requirements.txt
RUN pip3 install --no-cache-dir -r /app/requirements.txt

It is good practice to add --no-cache-dir as an argument so pip does not keep its download cache; this avoids unnecessary temp files and reduces the size of the Docker image.
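For reference, a minimal requirements.txt for this stack might contain something like the following; the exact packages (and unpinned versions) are assumptions, so adjust for your project:

fastapi
uvicorn
sqlalchemy
psycopg2-binary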

Now, copy the entire project directory from the local machine into the image’s working directory. Once the whole project is in place, the only step left is the command that runs the service using uvicorn, an ASGI server.

COPY . /app/
CMD ["uvicorn", "server:app", "--host=0.0.0.0", "--reload"]

The consolidated Dockerfile looks like this:

FROM python:3.8
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip3 install --no-cache-dir -r /app/requirements.txt
COPY . /app/
CMD ["uvicorn", "server:app", "--host=0.0.0.0", "--reload"]

Once the Dockerfile is finished, we can build the image and run a container using the commands below.

docker build -t image-name .
docker run -it -d -p host-port:container-port image-name
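For example, assuming we tag the image fastapi-app and the service listens on uvicorn’s default port 8000, the commands could look like this:

docker build -t fastapi-app .
docker run -it -d -p 8000:8000 fastapi-app

The API would then be reachable at http://localhost:8000 on the host.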

The Dockerfile side is done. For the database, we will use the official Postgres image from Docker Hub, the container registry that maintains all the official images. Next we will create a docker-compose file to run both the image we just built and the database. A docker-compose file is useful whenever we have to run more than one service together.

First we will create a virtual network so that these two containers run under the same network.

docker network create practice

After creating the network “practice”, we can list the existing networks with the command below:

docker network ls

Let’s use this network in the docker-compose file we are going to create.
We will be creating two services: one for the backend and another for the database. Let’s have a look at this docker-compose file.

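The original file was shown as an image; a minimal sketch consistent with the description below might look like this (the Postgres image tag, credentials, volume path, and port numbers are assumptions for illustration):

version: "3.8"

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"                # host-port:container-port (assumed)
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/postgres  # hypothetical; note the hostname is the service name "db"
    networks:
      - practice

  db:
    image: postgres:14             # assumed version tag
    environment:
      - POSTGRES_USER=postgres     # placeholder credentials
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    volumes:
      - ./pgdata:/var/lib/postgresql/data   # persist database files on the host
    networks:
      - practice

networks:
  practice:
    external: true                 # the network we created earlier with docker network create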

Let’s discuss each directive in the above docker-compose file.
services → Under the services key, you can add as many services as the use case needs.
In the file, I named my services web and db. We can name these services whatever we like, but all of them come under the services section.
Inside each service, we can use several directives. Let’s see a few of those.
build → used to build the service’s image from a Dockerfile (a prebuilt image from a registry is pulled with image instead, as in the db service).
It takes two sub-keys to define the location and name of the required Dockerfile:
context → the path of the directory containing the Dockerfile.
dockerfile → optional if the file is named Dockerfile; if the filename is anything different, like base.Dockerfile, we mention it here.
volumes → used to map a path on the host to a path in the container so that data persists. To keep it simple: if we have better configs/useful data on the host than in the container, we can mount the host configs/data into the container at the given path. For the db service, mounting the Postgres data directory keeps the database contents across container restarts.
depends_on → makes sure the mentioned service is started before the current one. In our case, the web service will be built and run only after the db service has started. (Note that depends_on waits only for the db container to start, not for Postgres to be ready to accept connections.)
environment → sets the environment variables that are passed to the container. The environment section in the web service is optional if the connection details are configured within the application itself. For the db service we pulled the official image from Docker Hub, and it expects variables such as POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB.
ports → publishes container ports on the host. The left side is the host port, exposed to requests from outside; the right side is the container port that receives those requests inside the container. Publishing is only needed to reach a service from the host; within the compose network, the services can reach each other directly.

networks → a network establishes IP routing between the services placed under it. In this case, the network is named “practice”. Just think of it as a virtual network under which the web and db services are running.
A point to note here: docker-compose itself sets up a default network for all the containers, to make sure each container in the app can reach the others, even if we don’t define one explicitly.

Once we are done with the docker-compose file, we can build and run the containers and check their status.

docker-compose up --build → to build and run the containers.
docker ps -a → to check the status of all the containers.

A few other commands to manage the containers:
1. docker kill container-id → to kill the containers that are running.
2. docker rm container-id → to remove the containers that are already stopped.

Once the containers are running and we want to check whether data is being inserted into the database properly, we can open a shell inside the database container:

docker exec -it container-id bash
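From the shell inside the container, we can connect to Postgres with psql and inspect the data; the user postgres and the table name below are assumptions:

psql -U postgres
\dt
SELECT * FROM some_table;

\dt lists the tables in the current database, and the SELECT confirms that rows are actually landing in the table.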

Issues faced while building with docker-compose:
Issue: the application cannot connect to the database, with a “hostname/ip not found. Is the server running?” sort of error.
Solution: the hostname here is nothing but the compose service name. Once you change the hostname in the web service’s database connection logic/credentials from localhost to the db service name, the application will connect to the database properly.
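As an illustration, assuming the web service connects via SQLAlchemy (your connection code and credentials will differ), the fix is just the hostname in the database URL:

from sqlalchemy import create_engine

# Before: "localhost" resolves to the web container itself, so the connection fails
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")

# After: "db" is the compose service name, which Docker's network resolves to the database container
engine = create_engine("postgresql://postgres:postgres@db:5432/postgres")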

Points to take care of:
1. Host port and container port configuration → sometimes we may map the wrong port.
2. For further accessibility/modifications of database client authentication, we can modify the pg_hba.conf file, as shown below.
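For example, a pg_hba.conf entry that allows password-authenticated connections from any address looks like this (fine for local experiments, too permissive for production):

host    all    all    0.0.0.0/0    md5

The columns are: connection type, database, user, client address, and authentication method.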

That’s it.

Thanks for reading this article. Please clap for the article and follow me for other articles.

Thanks, Happy Reading!
