RESTful API, HOW TO | Part 4 — Deployment

Daniele Dapuzzo
Published in Analytics Vidhya
4 min read · Mar 22, 2020

Designing and implementing services is part of my daily bread, and I want to share some best practices and tips that can help you in your work.

In this series on RESTful API, I will discuss several topics:

  • Design
  • Implementation
  • Testing
  • Deployment

Some information

We will use Swagger Editor to design our APIs, the Python language to create the microservice, and finally Docker to deliver the final solution. All the code is available in this repo.

We have now arrived at the last article of this series. Here we will discuss the deployment of our application. For this purpose we will use Docker; if you don't know it, go to the official website, where you will find a lot of information about it. The documentation in particular is a precious resource.

I will not cover the installation of Docker, since it depends on your platform; I will skip directly to the application deployment.

Dockerfile

First of all we must fill in the Dockerfile. The generated code already contains one, but we should change its content a little. Below is the new version of the file.

FROM pypy:3-7.3.0-slim

# add a non-privileged user
RUN useradd base_user

# create the application folder and set it as the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# upgrade pip and install requirements
RUN pip install --upgrade pip
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt

# copy the application to the wd
COPY . /usr/src/app

# switch to the non-privileged user
USER base_user

EXPOSE 8080

ENTRYPOINT ["gunicorn"]

CMD ["wsgi:app"]
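The ENTRYPOINT/CMD pair assumes that the project root contains a wsgi.py module exposing an app callable. In this series that callable comes from the Swagger-generated code; purely as an illustration of the shape gunicorn expects, a minimal stdlib-only sketch could look like this:

```python
# wsgi.py -- illustrative only: the real project exposes its generated
# application object here. This sketch shows the WSGI callable that
# gunicorn loads from CMD ["wsgi:app"].
def app(environ, start_response):
    # Answer every request with a plain-text 200 OK.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

gunicorn imports the module named before the colon (`wsgi`) and serves the attribute named after it (`app`).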

As you can see the file has been deeply changed. I want to focus on a few parts of it:

Security

On the web you will find a very large number of Dockerfile examples, but very often they introduce security holes. Let's talk about the privileges that the user inside a Docker container has:

Base images almost always run with root privileges, because they need to install packages to build the base. The final application, however, very often no longer needs the root user; keeping it could expose your system to dangerous attacks.

To mitigate this risk we create a user without root privileges and then, after configuring the image, we switch to this user. This is a safer way to run our containers.

Cache

During the build phase, cached layers save a lot of waiting time, so when we write a Dockerfile it is important to order every instruction wisely. For example, we first copy the requirements file and install the dependencies, and only then copy the application folder. This way, if we change anything in the code we do not invalidate the requirements layer: the build will reuse the cached layer and skip the installation of the requirements.
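To make the difference concrete, here is the anti-pattern next to the ordering used above (illustrative fragments, not a replacement for the full Dockerfile):

```dockerfile
# Anti-pattern: any code change invalidates the COPY layer,
# so the pip install below runs again on every build
COPY . /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt

# Cache-friendly: the install layer is rebuilt only when
# requirements.txt itself changes
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
```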

Image size

Talking about layers, we should also consider the size of the image: both the base image and the layers that we add with our Dockerfile.

Always choose a base image that fits your requirements! In both directions, obviously: it doesn't make sense to lose hours of work building your image from a scratch base just to shave maybe 4 MB off the final image size.

Also think about all the layers that you add while editing the Dockerfile.

Last but not least, it is very important to add a .dockerignore file to your project. It works exactly like the well-known .gitignore file and avoids copying useless files and directories into your image.
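As an example, a .dockerignore for a project like this one might look as follows (the entries are illustrative; adapt them to your repository):

```
.git
.dockerignore
Dockerfile
docker-compose.yml
__pycache__/
*.pyc
.env
tests/
```

Excluding .env here is also a small security win: the file never ends up baked into the image, and docker-compose injects it at run time instead.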

Build

To build the image we run the following command:

docker build -t test_app:1.0 .

With this command we build the image and tag it test_app:1.0.

Here we are building the Docker image of our application in a local environment; in a real production environment it is strongly suggested to push the image to a Docker registry.

To know more see the docs.
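As a sketch, pushing to a registry typically looks like the commands below (registry.example.com/myteam is a placeholder; with Docker Hub the prefix is simply your username):

```shell
# Tag the local image with the registry's name (placeholder registry/namespace)
docker tag test_app:1.0 registry.example.com/myteam/test_app:1.0

# Authenticate against the registry, then upload the image
docker login registry.example.com
docker push registry.example.com/myteam/test_app:1.0
```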

Production like

So, we have now built our Docker image. To simulate a production-like environment we will make use of Docker Compose; see the docs to install it on your system.

Below is the docker-compose file.

version: '3'
services:
  api:
    image: test_app:1.0
    ports:
      - 80:8080
    env_file:
      - .env
    restart: always

In the docker-compose file we describe the deployment configuration of our application: we expose the application on port 80, and alongside the docker-compose file we provide a .env file containing all the environment variables needed to configure the application. The restart directive tells the engine to always restart the container whenever it stops.
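The .env file is a plain list of KEY=value pairs. The variable names below are purely hypothetical, since they depend on how your application reads its configuration:

```
# .env -- hypothetical example values
LOG_LEVEL=info
DB_HOST=db.internal
DB_PORT=5432
```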

Let’s run it with:

docker-compose up --scale api=3

With the --scale flag we are telling docker-compose to create 3 instances of the service, so we can handle a larger amount of traffic. Note that with a fixed host-port mapping like 80:8080, the replicas would all try to bind host port 80 and conflict; to scale you need either a host port range or a load balancer in front of the service.
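One way to let three replicas bind without conflict is to map a host port range, so each replica grabs one free port from it (the range 8080-8082 is an arbitrary choice for this sketch):

```yaml
version: '3'
services:
  api:
    image: test_app:1.0
    ports:
      - "8080-8082:8080"   # each replica binds one host port from the range
    env_file:
      - .env
    restart: always
```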

In this article we saw how to configure a Dockerfile and run the container with Docker Compose. We didn't cover orchestration tools like Docker Swarm or Kubernetes: because of their complexity, they would be out of the scope of these articles.

To learn more about Docker, the official documentation is the best place to start.

That's all! We have finally arrived at the end of the series. To everyone that got here: thank you for following me on this adventure, and always keep learning!

REMINDER: you can find all the updated code at this GitHub repository!

Link to the previous Article: https://medium.com/analytics-vidhya/restful-api-how-to-part-3-testing-8fd3fac4e1cd
