How to implement emotion and gender recognition?

Nishchal Gaba · Dockship · Oct 31, 2019 · 4 min read
Emotion and Gender recognition

In this post, we are going to learn how to run a pre-trained emotion and gender recognition model on images; the same method can easily be extended to videos.

We are going to use one of the models provided on Dockship.io and launch it on our own machine in a few simple steps. This lets us skip the complicated coding work and use the provided model out of the box.

Remember to create a profile on Dockship before proceeding. Then go to the Emotion Recognition model on Dockship and you will be directed to its page.

Emotion Recognition — Dockship
The Emotion Recognition model page — Dockship

This page provides information about the purpose of the model along with standard computational statistics for tested CPU and GPU environments. This lets the user get an idea about the speed of execution on various devices. Screenshots are also attached to display the results after model execution.

Click on the Free Download button as shown below to get the model zip along with requirements.

Free Download Model

The model requires Python to be installed on our system. Using a fresh Anaconda/virtual environment with Python 3.6 helps keep the different models organized.

You can download Anaconda for free from here (optional): Anaconda

Now back to the model!
The model comes with two requirements files:

  1. gpu_requirements.txt
  2. cpu_requirements.txt

Depending on the system, we can run the following commands as mentioned on the model page. (NOTE: if you are using a virtual environment, make sure it is activated first.)

Here is how you can create and activate a Conda environment: Creating and activating a Conda environment.
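For example, assuming Anaconda is installed, the environment could be created and activated like this (the environment name `emotion-demo` is just an illustrative choice):

```shell
# Create a new environment with Python 3.6 for this model
conda create -n emotion-demo python=3.6

# Activate it before installing the model's requirements
conda activate emotion-demo
```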

  1. For GPU: pip install -r gpu_requirements.txt
  2. For CPU: pip install -r cpu_requirements.txt

Once the installation finishes, all we need to do is run the model. That’s it; no additional steps!

In case you want to customize the model, the source code is also included in the model zip, which in most cases is ‘run.py’.

run.py - source code, useful for customization

For the Emotion Recognition model we can run the model using:

  1. python3 image_emotion_gender_demo.py
  2. python3 image_emotion_gender_demo.py ../images/test_image.jpg
Results after the execution of the model
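If you do open the included source for customization, the core of such a demo typically follows a simple pattern: preprocess a face crop, run the classifier, and map the output probabilities to a label. The sketch below illustrates that pattern only; the label lists, model file name, and helper names are assumptions, not the actual Dockship source.

```python
import numpy as np

# Label sets commonly used by emotion/gender classifiers (assumed here)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
GENDERS = ["woman", "man"]

def preprocess_face(gray_face):
    """Scale pixel values from [0, 255] to [-1, 1], the input range
    many CNN classifiers expect, and add batch/channel axes."""
    face = gray_face.astype("float32") / 255.0
    face = (face - 0.5) * 2.0
    # Shape becomes (1, H, W, 1): one sample, one grayscale channel
    return face.reshape((1,) + face.shape + (1,))

def decode_prediction(probs, labels):
    """Map a probability vector to its most likely label."""
    return labels[int(np.argmax(probs))]

# With a real model you would then do something like (Keras assumed):
#   from keras.models import load_model
#   emotion_model = load_model("emotion_model.hdf5")   # file name is an assumption
#   probs = emotion_model.predict(preprocess_face(face_roi))[0]
#   print(decode_prediction(probs, EMOTIONS))
```

The two helper functions are model-agnostic, which is why the same pipeline works for both the emotion and the gender classifier; only the label list changes.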

The model zip also contains a script for video input. This is the easiest launch of an AI model I have seen in my machine learning career, thanks to Dockship.
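The video case differs from the image case only in where the frames come from: instead of loading one image, you read frames in a loop and classify each one. A minimal sketch of that idea follows; the OpenCV usage is shown in comments since the actual script name in the zip may differ.

```python
def classify_frames(frames, classify):
    """Apply a per-image classifier to every frame of a video stream."""
    return [classify(frame) for frame in frames]

# With OpenCV (installed via the requirements file), the frame source
# would be a capture loop like this:
#   import cv2
#   cap = cv2.VideoCapture("input.mp4")   # or 0 for a webcam
#   while True:
#       ok, frame = cap.read()
#       if not ok:
#           break
#       label = classify(frame)           # same pipeline as the image demo
#   cap.release()
```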

What application are you going to use this model for then?

Visit https://dockship.io for more amazing ready-to-deploy models!
