Build a Production Ready Face Detection API

Part 2: Refining the API and integrating the final workflow

Urandu Bildad Namawa
DevCNairobi
10 min read · Sep 24, 2019


This article is a continuation of Part 1 of our series: https://medium.com/analytics-vidhya/build-a-production-ready-face-detection-api-part-1-c56cbe9592bf

In Part 1 of this series, we created a simple dockerized face detection API on top of Django and the MTCNN face detection model. In this post, we are going to make our API asynchronous by adopting a micro-service architecture and processing the images in the background. I will try as much as possible to abide by the 12-factor app methodology in the development of our service.

In order to make our API asynchronous, we shall use Celery as our task processor, Redis as our result backend, RabbitMQ as our message broker, and PostgreSQL as our persistence layer. We shall also use Minio as our file storage service (Minio is an S3 alternative). We shall begin by updating our architectural diagram from Part 1 so as to visualize the new design.

Asynchronous Face Detect API architecture

We shall begin by updating our docker-compose file (docker-compose.yaml) as shown below:
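Since the embedded gist does not render here, below is a minimal sketch of the general shape such a compose file could take. The service names, image tags, and commands are illustrative; see the repo for the exact file:

```yaml
version: "3"

services:
  api:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    env_file: env.env
    ports:
      - "8000:8000"
    depends_on: [rabbitmq, redis, postgres, minio]

  worker:
    build: .
    command: celery worker -A api.celery_app --loglevel=info
    env_file: env.env
    depends_on: [rabbitmq, redis, postgres, minio]

  rabbitmq:
    image: rabbitmq:3-management

  redis:
    image: redis:5

  postgres:
    image: postgres:11
    env_file: env.env

  minio:
    image: minio/minio
    command: server /data
    env_file: env.env
    ports:
      - "9000:9000"
```

Note that both the `api` and `worker` services build from the same image and share the same environment; only their start commands differ.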

We also add an env file (env.env) that contains all our application environment variables. Copy the following into env.env in the project root directory:
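The exact variables come from the repo; a representative env.env might look like this (all values here are placeholders, not real credentials):

```shell
# env.env — example values only; replace with your own
POSTGRES_DB=face_detect
POSTGRES_USER=face_detect
POSTGRES_PASSWORD=supersecret
POSTGRES_HOST=postgres

CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
CELERY_RESULT_BACKEND=redis://redis:6379/0

MINIO_ACCESS_KEY=minio_access_key
MINIO_SECRET_KEY=minio_secret_key
MINIO_STORAGE_ENDPOINT=minio:9000
```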

We next integrate celery into our application and convert the detect_faces function into a background task.

Asynchronous Processing

In order to process our tasks in the background (asynchronously), we shall follow these steps:

1. Add celery to the requirements.txt file. This ensures that Celery is installed inside our application container.

2. Add Celery configs to settings.py: copy the settings below to the bottom of settings.py.
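The gist is missing here, but following the 12-factor approach of reading configuration from the environment, the Celery settings could look roughly like this (variable names match the env.env sketch above and are assumptions, not the repo's exact values):

```python
# settings.py — Celery configuration, read from the environment (12-factor)
import os

CELERY_BROKER_URL = os.environ.get(
    "CELERY_BROKER_URL", "amqp://guest:guest@rabbitmq:5672//"
)
CELERY_RESULT_BACKEND = os.environ.get(
    "CELERY_RESULT_BACKEND", "redis://redis:6379/0"
)
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERY_ACCEPT_CONTENT = ["json"]
```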

3. Also copy the database settings below into the settings file.
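A sketch of the PostgreSQL settings, again driven by environment variables (the defaults shown are assumptions matching the env.env sketch above):

```python
# settings.py — PostgreSQL connection, configured via environment variables
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "face_detect"),
        "USER": os.environ.get("POSTGRES_USER", "face_detect"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "postgres"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```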

4. Create the file api/celery_app.py and copy the following code into it.

5. Add the Image model (api/models/image.py).
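The exact fields live in the repo; a minimal sketch of such a model (field names here are assumptions) could be:

```python
# api/models/image.py — a minimal sketch of the Image model
import uuid

from django.db import models


class Image(models.Model):
    # UUID primary key so the id can double as a public identifier
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    image_file = models.ImageField(upload_to="uploads/")
    created_at = models.DateTimeField(auto_now_add=True)
```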

6. For each face identified, we want to save it in the db so we add a Face model to our models api/models/face.py
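A sketch of a Face model storing one row per detection, linked back to its source image. MTCNN reports each face as a bounding box (x, y, width, height) plus a confidence score, so the model mirrors that; the field names are assumptions:

```python
# api/models/face.py — one row per detected face, linked to its source image
from django.db import models

from api.models.image import Image


class Face(models.Model):
    image = models.ForeignKey(Image, related_name="faces", on_delete=models.CASCADE)
    # Bounding box as returned by MTCNN: x, y, width, height
    box_x = models.IntegerField()
    box_y = models.IntegerField()
    box_width = models.IntegerField()
    box_height = models.IntegerField()
    confidence = models.FloatField()
```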

8. Now modify our view to only save the uploaded image to disk and to the database before returning a 202 (Accepted) response.
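A sketch of what the slimmed-down view could look like, using Django REST Framework (the view and field names are assumptions):

```python
# api/views.py — persist the upload, then return 202 Accepted immediately
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from api.models.image import Image


class ImageView(APIView):
    def post(self, request):
        uploaded = request.FILES["image"]
        # Saving the model instance writes the file via the configured storage
        image = Image.objects.create(image_file=uploaded)
        return Response(
            {"image_id": str(image.id)}, status=status.HTTP_202_ACCEPTED
        )
```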

9. Create the file api/tasks/image.py and copy the code below into it. We have moved the detect faces function out of the view and converted it into a Celery task.
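A sketch of such a task, assuming the hypothetical model fields from the earlier steps and local file access:

```python
# api/tasks/image.py — the detect faces logic, now a Celery task
import cv2
from celery import shared_task
from mtcnn.mtcnn import MTCNN

from api.models.face import Face
from api.models.image import Image


@shared_task
def detect_faces(image_id):
    image = Image.objects.get(pk=image_id)
    pixels = cv2.imread(image.image_file.path)
    detector = MTCNN()
    # MTCNN returns one dict per face with a "box" and a "confidence"
    for result in detector.detect_faces(pixels):
        x, y, width, height = result["box"]
        Face.objects.create(
            image=image,
            box_x=x, box_y=y, box_width=width, box_height=height,
            confidence=result["confidence"],
        )
    # Return the id so a downstream chained task can pick it up
    return image_id
```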

The task fetches an image from the database, detects faces in the image, and then saves the detected faces to the database.

10. We are almost there. Now let's modify our view to call our detect faces task.
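The key change is queuing the task with `.delay()` instead of running detection inline; a sketch of the updated view (same assumed names as before):

```python
# api/views.py — queue the detection task instead of running it inline
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from api.models.image import Image
from api.tasks.image import detect_faces


class ImageView(APIView):
    def post(self, request):
        image = Image.objects.create(image_file=request.FILES["image"])
        # .delay() publishes the task to RabbitMQ and returns immediately
        detect_faces.delay(str(image.id))
        return Response(
            {"image_id": str(image.id)}, status=status.HTTP_202_ACCEPTED
        )
```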

Now we are ready to test our API! First, ensure that your requirements.txt file looks like the one below.
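The repo pins exact versions; unpinned, the dependency list would be along these lines:

```
django
djangorestframework
psycopg2-binary
celery
redis
mtcnn
opencv-python
```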

Now run:
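```shell
docker-compose up --build
```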

The command might take a while pulling and building the necessary images.

After the command has finished running, open a new terminal and run the following set of commands:

  • Make database migrations
  • Migrate the database
  • Create a Django admin superuser by running the corresponding command, then type in the requested details
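These steps map onto the standard Django management commands, run inside the API container (the service name `api` is an assumption; use the name from your docker-compose file):

```shell
docker-compose exec api python manage.py makemigrations
docker-compose exec api python manage.py migrate
docker-compose exec api python manage.py createsuperuser
```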

After this, sending a POST request to our endpoint gives us the following response:
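The original screenshot is not reproduced here; assuming the view returns the queued image's id as sketched above, the 202 response body would have roughly this shape (the id is illustrative):

```json
{
  "image_id": "c1a2b3d4-5e6f-4a7b-8c9d-0e1f2a3b4c5d"
}
```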

Looking at the Docker logs, we see that the task was indeed processed in the background, as shown below:

face detection task processed in the background

Let's now take stock of where we are with our API. At the moment, we are able to process our face detection tasks with the workers, leaving the API container to only receive tasks and queue them for later processing. This micro-service architecture gives us leverage when we want to scale our application to handle a huge number of requests concurrently. There is one drawback though: we are currently saving our images to disk, and as we know, containers are transient, so when they die we also lose our uploaded image files. This is where Minio (an Amazon S3 alternative) comes in; we shall use Minio as our image storage service. To learn more about Minio and its capabilities, please refer here.

To integrate Minio into our Django application, we shall add the settings below to our settings file (face_detect_api/settings.py):
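A sketch of the django-minio-storage configuration, reading credentials from the environment (the env variable names match the env.env sketch earlier and are assumptions; the bucket name is illustrative):

```python
# settings.py — django-minio-storage configuration (values from env.env)
import os

DEFAULT_FILE_STORAGE = "minio_storage.storage.MinioMediaStorage"
MINIO_STORAGE_ENDPOINT = os.environ.get("MINIO_STORAGE_ENDPOINT", "minio:9000")
MINIO_STORAGE_ACCESS_KEY = os.environ.get("MINIO_ACCESS_KEY", "")
MINIO_STORAGE_SECRET_KEY = os.environ.get("MINIO_SECRET_KEY", "")
MINIO_STORAGE_USE_HTTPS = False
MINIO_STORAGE_MEDIA_BUCKET_NAME = "images"
MINIO_STORAGE_AUTO_CREATE_MEDIA_BUCKET = True
```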

Also in the settings file, add minio_storage to INSTALLED_APPS.

Then we shall add the packages minio==4.0.21 and django-minio-storage==0.2.2 to our requirements.txt file.

We now do a final docker-compose up and send a test image to our API. If we then navigate in our browser to http://localhost:9000 and log in using the Minio credentials in our env file, we can see that our Django application now uses Minio for storage.

minio object storage

Now that we have laid the rails for our API to become scalable, we need to add a callback mechanism to requests.

Callback

We shall make a few modifications to our API so as to enable us to provide a callback after we are done processing an image.

  • Modify our image model to include status and callback_url fields. Our new image model should look like the one below.
  • In our view, we shall now accept a request_id in the payload alongside the callback_url, so our endpoint will accept three parameters: an image file, request_id, and callback_url. We shall also add a serializer to validate our inputs. Below is our new view.
  • Next, we create a callback task that will be invoked once the face detect task is completed. For this, paste the code below into the file api/tasks/image.py.
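The gists for these steps are not embedded here; the most substantive piece is the callback task, which could be sketched as below. It assumes the hypothetical box_* fields from the Face model sketch earlier, the status/callback_url fields just described, and local file access (the repo's version reads the image back from the storage service):

```python
# api/tasks/image.py — callback task: draw boxes, then POST to callback_url
import cv2
import requests
from celery import shared_task

from api.models.image import Image


@shared_task
def detect_faces_callback(image_id):
    image = Image.objects.get(pk=image_id)
    pixels = cv2.imread(image.image_file.path)
    faces = []
    for face in image.faces.all():
        # Draw the bounding box for each detected face
        cv2.rectangle(
            pixels,
            (face.box_x, face.box_y),
            (face.box_x + face.box_width, face.box_y + face.box_height),
            (0, 255, 0), 2,
        )
        faces.append({
            "box": [face.box_x, face.box_y, face.box_width, face.box_height],
            "confidence": face.confidence,
        })
    cv2.imwrite(image.image_file.path, pixels)
    image.status = "PROCESSED"
    image.save()
    # Post the results, plus a link to the processed image, to the callback URL
    requests.post(image.callback_url, json={
        "request_id": str(image.id),
        "faces": faces,
        "output_image_url": image.image_file.url,
    })
```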

To explain what the new task does: the callback task retrieves the processed image from our storage service, draws bounding boxes around the detected faces on that image, and then posts the results, plus a link to the same image, to the provided callback URL for the specified request.

To finalize our API code, we shall add a GET method that allows us to download the processed image. To accomplish this, add the method below to our Image view.
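A sketch of such a method, added inside the Image view class (URL routing and the image_id parameter name are assumptions; FileResponse and Http404 come from django.http):

```python
def get(self, request, image_id):
    """Stream the processed image file back to the caller."""
    try:
        image = Image.objects.get(pk=image_id)
    except Image.DoesNotExist:
        raise Http404("image not found")
    return FileResponse(image.image_file.open("rb"), content_type="image/jpeg")
```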

We shall then chain the detect_faces task with the detect_faces_callback task on the view. The final version of our view looks like this:
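The chaining itself can be as small as this (assuming the task names sketched earlier, where detect_faces returns the image id and `image` is the saved model instance):

```python
from celery import chain

from api.tasks.image import detect_faces, detect_faces_callback

# detect_faces returns the image id, which chain feeds to the callback task
chain(detect_faces.s(str(image.id)), detect_faces_callback.s()).apply_async()
```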

Now we are ready to test our asynchronous API.

Testing the async API

In order to test our API, we shall send a request using Postman with the image below.

input image

For the callback, we shall create a requestbin.com endpoint so as to inspect the callback we get from the API.

Below is the Postman request.
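For readers without Postman, an equivalent request can be sent with curl (the endpoint path is an assumption; check the repo's URL configuration):

```shell
curl -X POST http://localhost:8000/api/images/ \
  -F "image=@input.jpg" \
  -F "request_id=test-request-1" \
  -F "callback_url=https://your-bin.requestbin.com/endpoint"
```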

Looking at our requestbin endpoint, we confirm that the callback was posted; below is the screenshot.

The posted payload contains the output_image_url; when we fetch that image, we see that all the faces were detected and bounded.

result image

At this point, our API is both asynchronous and composed of micro-services. With this architecture, we could morph it into something more complex, but for the purposes of this post, we shall keep it simple. We are also able to scale our application with ease, and since we are using Docker, we have flexibility when it comes to deployment strategies. In Part 3 of this series, we shall deploy our API to Azure using Kubernetes and test it out.

The code used in this post is available in this GitHub repo: https://github.com/urandu/face_detect_api. If you want to test the API locally, head to the repo and read the description of how to run it.

In case of any questions or comments, you can reach me on Twitter: https://twitter.com/bnamawa


Urandu Bildad Namawa
DevCNairobi

A passionate software engineer with a deep interest in machine learning and scalable software architectures.