Deploying a Django API & Vue.js App (with NGINX) Using Docker

Nikunj Mishra
5 min read · May 2, 2021


Introduction

As we all know, developing apps and deploying them on a server are two very different jobs that demand expertise in multiple technologies. Nowadays, Docker is one of the best and most suitable platforms for deploying web apps. The reason is its concept of containerization and platform independence, which it provides very seamlessly.

Today, I am going to present one of the best use cases where you should prefer Docker as your deployment environment: used rightly, it will help you maintain a single-click deployment pipeline.

I have developed a simple web app that fetches equity data from a zip file uploaded daily to the BSE website and stores it in Redis. The app uses Python (Django) as the backend, Vue.js as the frontend, NGINX as the (frontend) deployment server, and Redis as the data store.

Architecture

System Architecture

As the architecture above shows, there are 4 main components in this app: the frontend, the backend, a scheduler (to fetch and store the daily file data), and Redis. Since Docker works on the concept of containerization, we will need 4 different containers to run our app, and for 4 containers we will need 4 different Docker images.

As an analogy to the OOP concept, you can think of a Docker image as a class and Docker containers as its objects.

Note: after getting the architecture overview, check out the code architecture here

Docker Files/ Image Design

Redis Docker Image

The Redis image is freely available on Docker Hub, while for the other 3 we will need to create Docker images using Dockerfiles. I will come back to Redis later.

Frontend Docker File

Now let's have a look at the frontend Dockerfile and the NGINX configuration, which build the Docker image and then the Docker container. I have created a multi-stage build for the Vue.js app. In the first stage, I used node as my base image, copied package.json and my app (source code) from the host machine into the container, installed the node modules, and then created an optimized production build.

In the second stage, I used the NGINX base image and copied my build into the html folder inside the container. The benefit of using such base images is that they take very little space while providing all the features of that snapshot with little or no explicit installation.
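The multi-stage build described above might look roughly like this sketch (the base image tags and the dist output path are assumptions, since the original file is not reproduced here):

```dockerfile
# Stage 1: build the Vue.js production bundle
FROM node:lts-alpine AS build-stage
WORKDIR /app
# Copy package.json first so the npm install layer is cached
COPY package*.json ./
RUN npm install
# Copy the rest of the source and create an optimized production build
COPY . .
RUN npm run build

# Stage 2: serve the static build with NGINX
FROM nginx:stable-alpine
# Copy only the build output into NGINX's html folder
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```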

NGINX

Now coming to the NGINX configuration file: I have routed all backend API requests (with the prefix "/cache") to local port 8000 by pointing @proxy_api to backend:8000 (where backend is the name of the service defined in docker-compose).
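A minimal sketch of that configuration might look like this (the static-files root and header directives are assumptions; the /cache prefix and the @proxy_api name follow the description above):

```nginx
server {
    listen 80;

    # Serve the Vue.js production build
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # Route backend API requests (prefix /cache) to the named proxy location
    location /cache {
        try_files $uri @proxy_api;
    }

    # "backend" is the service name from docker-compose, resolved by Docker's DNS
    location @proxy_api {
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```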

Backend & Scheduler Docker File

Now let's have a look at the backend Dockerfile and the scheduler Dockerfile, which build the respective Docker images and, further, the Docker containers.

Both files are almost the same: each uses the Python base image, copies and installs our requirements, and copies our source code into the container.

In our first Dockerfile (for the server) we have not specified a CMD to run the container, while in our second Dockerfile (SchedulerDockerfile) the last line is the CMD that runs our container (the scheduler). To run the first container, we will use the command in docker-compose (explained in a later part).
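Both Dockerfiles might look roughly like this sketch (the Python version and the scheduler entry-point file name are assumptions); the server Dockerfile is identical except that it omits the final CMD:

```dockerfile
# SchedulerDockerfile -- the server Dockerfile is the same minus the CMD line
FROM python:3.9-slim
WORKDIR /app
# Copy and install requirements first to take advantage of layer caching
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the project source into the container
COPY . .
EXPOSE 8000
# Run the scheduler; the server container gets its command from docker-compose instead
CMD ["python", "scheduler.py"]
```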

Both containers expose port 8000, but only the server container will listen on the external port 8000.

Now coming to Redis and its connections. Redis (and therefore its container) runs on port 6379 by default. To connect it to Django, we will use the redis Python library.

To connect Django to the Redis container, you can use the lines below and then work with the redis object (the hostname 'redis' resolves because it is the service name defined in docker-compose):

import redis as rd

redis = rd.Redis(host='redis', port=6379, db=0)

Docker Compose Architecture

Now coming to the most important part of this blog: how all incoming requests are mapped to containers and ports. To do so, we use the docker-compose.yml file, which combines all the Docker images and runs the Docker containers, given the correct Dockerfile path or Docker image name for each service.

We have 4 services defined in the docker-compose file. Let's go through them one by one.

The first is the 'redis' service, whose image is fetched from Docker Hub. All requests coming from outside on port 6379 will be mapped to the Redis container's port 6379. We have used the image name redis:alpine, which pulls one of the lightest-weight Redis images from the hub.

The second is the 'backend' service. As mentioned above, we have used the command argument in our docker-compose.yml file. We could have run our backend through the "python manage.py runserver" command, but running it through gunicorn by spinning up workers is always considered good practice. We have mapped any external request coming to port 8000 to this container's port 8000. We also set up a volume mapping, which keeps the code in sync between the host and the container; for any code change, we just need to restart the container. We also made this service depend on redis, so that the backend service will not start until redis does. The image parameter we defined here is simply the name with which we want to build our image; it is not related to Docker Hub.

The third is the 'scheduler' service. It exposes container port 8000 (in SchedulerDockerfile) but is not mapped to any external port, since we do not need any external request to reach the scheduler service. It runs the scheduler by itself using the CMD mentioned in SchedulerDockerfile.

We could have run both the backend server and the scheduler inside a single container, but this is not good architectural practice, for example because you need to maintain the scheduler and server logs separately.

The fourth and final part is building the frontend, for which we produced the multi-stage build. We gave the build context and Dockerfile path inside the build argument, mapped the nginx.conf file (using a volume) to our container's NGINX path, and mapped external port 80 to container port 80, so that any incoming request is routed to port 80. In nginx.conf, we listen on port 80.
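Putting the four services together, the docker-compose.yml might look roughly like this (the service names, ports, dependencies and the command argument follow the description above; the directory layout, image tags and the gunicorn module path are assumptions):

```yaml
version: "3"

services:
  redis:
    image: redis:alpine          # lightweight image pulled from Docker Hub
    ports:
      - "6379:6379"

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    image: equity-backend        # local tag for the built image, not a Docker Hub name
    command: gunicorn app.wsgi:application --bind 0.0.0.0:8000 --workers 3
    volumes:
      - ./backend:/app           # keep host and container code in sync
    ports:
      - "8000:8000"
    depends_on:
      - redis                    # backend waits for redis to start

  scheduler:
    build:
      context: ./backend
      dockerfile: SchedulerDockerfile
    image: equity-scheduler
    depends_on:
      - redis                    # no external port: nothing needs to reach it

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - ./frontend/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"                  # nginx.conf listens on port 80
```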

Deployment

When calling any backend API in your frontend code, just use "window.location.origin" as the base URL instead of "http://localhost:8000".
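As a small sketch (the helper name and the non-browser fallback are my own, not from the project), building API URLs under the "/cache" prefix routed by NGINX could look like:

```javascript
// In development you might hard-code "http://localhost:8000", but behind
// NGINX the frontend and the /cache API share an origin, so
// window.location.origin works in every environment.
const baseUrl = typeof window !== "undefined"
  ? window.location.origin
  : "http://localhost"; // fallback for non-browser contexts (assumption)

// Build a request URL under the "/cache" prefix that NGINX proxies to Django
function cacheUrl(path) {
  return `${baseUrl}/cache${path}`;
}
```

Because the browser fills in the current origin, the same build works on localhost and on any server you deploy to.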

In order to run this project, all you need to do is clone it from my git repo, check that docker and docker-compose are up and running, go to the cloned folder, and run the command "docker-compose up -d". Once all 4 containers are built, you will see confirmation in your terminal.

To check that all 4 are up and running, you can run "docker ps" and inspect the output in your terminal.

You can open localhost:80 in your browser to see the project up and running.

If you still have any doubts/queries left, feel free to reach out to me @ nikunjmishra8170@gmail.com


Nikunj Mishra

Fullstack Python Developer with experience in designing, developing & delivering microservices, SOA & multi-tenancy based architectures.