Facial Emotion Recognition Using ModelArts
Hello Everyone,
Today, we will develop a custom Facial Emotion Recognition CNN model using ModelArts and Huawei OBS. First, we will upload our data to a bucket using Huawei OBS. Then, we will train a CNN model on ModelArts. Finally, we will deploy the model as a real-time AI service with an API.
Introduction
What is Facial Emotion Recognition?
Facial Emotion Recognition is a technology that analyzes emotions from many sources, including images and videos. It is a member of the group of technologies known as "affective computing," a multidisciplinary area of study on the capacity of computers to recognize and understand affective states and human emotions that frequently relies on Artificial Intelligence.
In this study, we will train a CNN model, a VGG19, with custom hyperparameters to recognize facial emotions. We will use the FER2013 dataset to train our model. FER2013 contains approximately 30,000 facial RGB images of different expressions with seven labels. It is a well-studied dataset that has been used in ICML competitions and several research papers, and it is one of the more challenging ones, with human-level accuracy only at 65±5%. After training our model, we will build a real-time service. [1]
What Is ModelArts?
A one-stop shop for AI development, ModelArts is designed for programmers and data scientists of all levels. You can manage full-lifecycle AI workflows and quickly design, train, and deploy models from the cloud to the edge. With important capabilities, including data preparation and auto labeling, distributed training, automated model creation, and one-click workflow execution, ModelArts supports AI creativity and speeds up AI development.
All phases of AI development are covered by ModelArts, including data processing, model training, and model deployment. ModelArts' core technologies support a variety of heterogeneous computing resources, giving developers the freedom to choose and employ resources as needed. TensorFlow, MXNet, and PyTorch are just a few of the well-known open-source AI development frameworks supported by ModelArts. Additionally, ModelArts enables you to apply custom algorithm frameworks suited to your needs. For More Information:
A Step-by-Step Implementation
1. Upload Data To OBS: First, we need to upload our data to an OBS bucket in order to train our custom model. There are various ways to upload data, but I will use obsutil because I find it more manageable.
· In the first step, we must create an access key to use obsutil.
· Then, we will create a bucket to store our data.
· Finally, we will upload our FER data using obsutil.
2. Training a Custom Model Using ModelArts: After we upload our data, we can start to train our custom model using ModelArts.
· First, we need to write our training code.
· Then, we will create a requirements text file to install the necessary libraries.
· Next, we will upload our code and requirements file to the bucket we created.
· Finally, we will train our model using ModelArts training jobs.
3. Building Real-Time Service Using ModelArts: Now, we can build a real-time service with our trained model.
· We need to write our inference code to configure and develop the service.
· Finally, we will build the API service for Facial Emotion Recognition using ModelArts.
1. Upload Data To OBS
Creating an Access Key
First of all, we need an access key and secret key to access OBS using development tools, including APIs, the CLI, and SDKs. On the management console, hover over the username in the upper right corner and choose My Credentials from the drop-down list.
Then, choose Access Keys from the navigation pane. Now we can create an access key.
We will click Create Access Key and enter the verification code or password. It is essential that we download the access key file and store it securely. If the download page is closed, we will not be able to download the access key again. However, we can always create a new one.
We can get our access key ID and secret access key from the CSV file we downloaded.
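If you prefer, the downloaded file can also be read programmatically rather than copied by hand. Below is a minimal sketch; the column names are assumptions based on a typical credentials CSV, so check the header row of your own file before using it.

```python
# read_keys.py — pull the access key ID and secret access key out of the
# downloaded credentials file. The column names below are assumptions;
# open your credentials.csv and check its header row.
import csv


def load_credentials(path):
    """Return (access_key_id, secret_access_key) from a credentials CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        row = next(csv.DictReader(f))
    return row["Access Key Id"], row["Secret Access Key"]
```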
Creating a Bucket
Now, we will use Huawei OBS (Object Storage Service): we will create a bucket and load our data into it. OBS is a cloud storage service optimized for storing massive amounts of data. It provides unlimited, secure, and highly reliable storage capabilities at a relatively low cost. On the management console, click the search field in the upper right corner, type OBS, and choose it from the drop-down list.
After clicking the Create Bucket button, we will see the following panel. We should remember which region we choose here: once a bucket is created, its region cannot be changed, and we will also choose the same region in the ModelArts part to train our model. I will select the AP-Singapore region and name the bucket ferdatahw.
Now we have created a bucket. Let's click on our bucket and create the folders we will need for training and for building the service. On the left navigation panel, let's click Objects.
In this section, we will create three folders: one to hold our data, one for our model's and API's code, and one to save our trained model into. I created the three folders and named them data, code, and Out, in that order.
Uploading Data Using OBS Utils
As you may remember, in the first stage we created an access key. Now we will load our data into the bucket we created a few minutes ago. But first, we should download the data locally before uploading it to our bucket. As you know, we will develop a Facial Emotion Recognition AI model in this study. I chose the FER2013 dataset because it has many images and its size is not huge. You can download it with this link. After downloading the dataset, extract the RAR file to a local folder of your choice.
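Before uploading, it is worth checking that the extracted folders follow the expected layout (one subfolder per emotion, each containing the images of that class). Here is a small helper I would use for that sanity check; the extensions checked are an assumption about how the archive was packaged.

```python
# check_dataset.py — quick sanity check of the extracted FER2013 folders
# before uploading them to OBS. The image extensions are assumptions.
import os


def count_images(root):
    """Return {class_name: image_count} for a folder-per-class directory."""
    counts = {}
    for label in sorted(os.listdir(root)):
        label_dir = os.path.join(root, label)
        if os.path.isdir(label_dir):
            counts[label] = len([f for f in os.listdir(label_dir)
                                 if f.lower().endswith((".png", ".jpg", ".jpeg"))])
    return counts
```

Running it on the extracted train folder should list all seven emotion classes with non-zero counts.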
Huawei OBS offers many ways to manage your bucket. For example, you can upload your files manually, or use tools such as obsutil and OBS Browser. In this part, I will use obsutil; it is effortless to use. Let's download obsutil from the Huawei OBS console.
obsutil is a command-line tool for accessing and managing OBS on HUAWEI CLOUD. It can perform common operations on OBS, such as creating buckets, uploading and downloading files/folders, and deleting files/folders. If you are familiar with the command-line interface (CLI), obsutil is recommended for batch processing and automated tasks. More information and documentation:
After downloading obsutil, you will get an archive file. Extract it and open obsutil.exe; a command console will open. We must enter our access key ID, secret access key, and region endpoint to access OBS. Let's enter this command into obsutil:
obsutil config -i=your_access_id -k=your_access_key -e=obs.ap-southeast-3.myhuaweicloud.com
If your connection is successful, you will get the response "Update config file successfully!". We are now connected to OBS, so let's transfer our data to the bucket we created previously:
obsutil cp <path_to_your_extracted_fer2013> obs://ferdatahw/data/ -f -r
For example:
obsutil cp C:/Users/pc/Desktop/train obs://ferdatahw/data/ -f -r
obsutil cp C:/Users/pc/Desktop/val obs://ferdatahw/data/ -f -r
You can check whether the transfer was successful on the OBS console. In the training process, we will upload our code to the bucket manually, because it consists of only a few files that are easy to upload and change by hand.
2. Training a Custom Model Using ModelArts
Custom Training Code
Let's write our main training code. When a ModelArts model reads data stored in OBS or outputs data to a specified OBS path, we must configure the input and output data as follows: parse the input and output paths in the training code. We can also parse hyperparameters there. As you can see in this figure, we will select the OBS path or dataset path as the training input, and an OBS path as the output, on ModelArts.
Let's Code!
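As a sketch, the training boot file can look like the following. It parses the data_url/train_url paths and the hyperparameters that ModelArts passes in, then trains a VGG19 with a 7-way softmax head. The folder names, image size, and default values are assumptions, so adapt them to your bucket layout and TensorFlow version.

```python
# train.py — sketch of the training boot file (folder names, image size, and
# hyperparameter defaults are assumptions; adapt them to your setup).
import argparse
import os


def parse_args(argv=None):
    """Parse the OBS paths and hyperparameters that ModelArts passes in."""
    parser = argparse.ArgumentParser(description="FER2013 VGG19 training")
    parser.add_argument("--data_url", type=str, default="",
                        help="input path holding the train/ and val/ folders")
    parser.add_argument("--train_url", type=str, default="",
                        help="output path where the trained model is saved")
    parser.add_argument("--batch_size", type=int, default=64)
    parser.add_argument("--epochs", type=int, default=50)
    parser.add_argument("--learning_rate", type=float, default=1e-4)
    return parser.parse_args(argv)


def main():
    args = parse_args()
    import tensorflow as tf  # imported here so the file parses without TF

    # One generator per split; pixel values are rescaled to [0, 1].
    gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
    train_gen = gen.flow_from_directory(os.path.join(args.data_url, "train"),
                                        target_size=(48, 48),
                                        batch_size=args.batch_size)
    val_gen = gen.flow_from_directory(os.path.join(args.data_url, "val"),
                                      target_size=(48, 48),
                                      batch_size=args.batch_size)

    # VGG19 backbone trained from scratch, with a 7-way softmax head.
    base = tf.keras.applications.VGG19(include_top=False, weights=None,
                                       input_shape=(48, 48, 3), pooling="avg")
    outputs = tf.keras.layers.Dense(7, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(args.learning_rate),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_gen, validation_data=val_gen, epochs=args.epochs)

    # Save in SavedModel format so the real-time service can load it.
    model.save(os.path.join(args.train_url, "model"), save_format="tf")


# In the actual boot file this would simply be: if __name__ == "__main__": main()
# The extra check keeps this sketch importable/testable without TensorFlow.
if __name__ == "__main__" and os.environ.get("START_TRAINING") == "1":
    main()
```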
Creating Requirements Text File
We will create a file named pip-requirements.txt in the code directory and specify the name and version number of each dependency in it. Before the training boot file is executed, the system automatically installs the Python packages specified in this file:
imutils==0.5.4
numpy==1.21.6
keras>=2.1
argparse==1.1
Uploading Training Code And Requirement Text File
Finally, we will upload our training boot file and pip-requirements.txt into the code directory we created in the OBS stage. Let's upload the files manually; with only two files, we don't need a tool like obsutil.
Training Using ModelArts Training Jobs
Let's open the ModelArts console. In this study, we will not use the ExeML tool, the ModelArts SDK, or similar; we will do custom training using only TensorFlow. However, there are many other paths you can follow. For More Information:
On the left panel, let's go to Training Management and then choose Training Jobs from the drop-down list. Now we will create a training job, and a panel will appear. Let's examine it part by part. For More Information:
In the first area, we can give our training job a name and description.
In the second part, there are several fields. In the "Created By" field, we will choose Custom Algorithms because, as you know, we want to do custom training. In the next field, we will choose Preset Images, and then TensorFlow 2.1, because we will use the TensorFlow framework to train our model in this study. In the "Code Directory" field, we will choose the folder we created earlier in the OBS stage, and in the "Boot File" field, we will select our main training code file.
In this part, we must enter the parameters data_url and train_url: the path from which we will obtain our dataset and the path where we will save our model, respectively. We can also enter hyperparameters and environment variables to pass to our code:
In the final part, we can configure the compute resources we want. I chose a GPU for training.
Now it is ready to submit. If you go to your training job, you will see your code's logs.
We got a 0.65 accuracy score on the FER2013 data. We can check whether our model was saved in our OBS output path.
3. Building Real-Time Service using ModelArts
Inference Code
Now, we will develop an AI application as a web service. First, our model requires inference code. Make sure the code is stored in the model directory, which is Out; we saved our model there before. The file name is fixed: customize_service.py, and there must be exactly one such file. Our inference code must inherit from the BaseService class; the following table lists the import statements for the different types of model parent classes. For More Information:
Let's code!
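As a sketch, customize_service.py can look like the following. The TfServingBaseService import and the _preprocess/_postprocess hooks follow the ModelArts TensorFlow inference template, but treat the details (field names, label order, input shape) as assumptions to verify against your own training setup; the try/except exists only so the file can be run outside the ModelArts runtime.

```python
# customize_service.py — inference code sketch for the Out directory.
try:
    from model_service.tfserving_model_service import TfServingBaseService
except ImportError:  # lets you run/test this file outside the ModelArts runtime
    TfServingBaseService = object

# Assumed label order — it must match the class order used during training.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]


def pick_emotion(scores):
    """Return the emotion label with the highest softmax score."""
    best = max(range(len(EMOTIONS)), key=lambda i: scores[i])
    return EMOTIONS[best]


class FerService(TfServingBaseService):
    def _preprocess(self, data):
        # data maps request fields to uploaded files; decode each image to the
        # (1, 48, 48, 3) float array the model expects.
        import numpy as np
        from PIL import Image
        preprocessed = {}
        for _, files in data.items():
            for name, file_obj in files.items():
                img = Image.open(file_obj).convert("RGB").resize((48, 48))
                arr = np.asarray(img, dtype=np.float32) / 255.0
                preprocessed[name] = arr[np.newaxis, ...]
        return preprocessed

    def _postprocess(self, data):
        # data maps output tensor names to score lists from TF Serving.
        return {name: pick_emotion(scores[0]) for name, scores in data.items()}
```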
Finally, we will upload the inference code file into the directory where we saved our model. Let's upload the file manually. Then we will be ready.
Building AI Application and a Real-Time Service
Now, let's go back to the ModelArts console. On the left management console, click AI Application Management and choose AI Applications from the drop-down list. Then we will create an AI application.
We will choose OBS on the panel because we saved our model in a bucket and did custom training. In the previous stage, we also uploaded our model's inference code, as you remember. We will select TensorFlow as the AI Engine and choose the version we used for training.
In the final stage, we will build our real-time service; it is the easiest part. On the left management panel, click Service Deployment and choose Real-Time Services from the drop-down list. Then we will deploy a real-time service.
We should select the AI application which we built. That's it. Cheers!
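Once the service is running, you can call the API from Python. The sketch below builds a multipart request by hand with only the standard library; the endpoint URL, the images field name, and the X-Auth-Token header are assumptions — copy the real values from your service's usage guide on the ModelArts console.

```python
# call_service.py — sketch of calling the deployed real-time service.
# The URL, field name, and auth header below are assumptions; check the
# service's usage guide on the ModelArts console for the real values.
import urllib.request


def build_request(api_url, token, image_bytes, field="images"):
    """Build a multipart/form-data POST request for the prediction endpoint."""
    boundary = "----ferboundary"
    head = (f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{field}"; filename="face.jpg"\r\n'
            "Content-Type: image/jpeg\r\n\r\n").encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(api_url, data=head + image_bytes + tail,
                                 method="POST")
    req.add_header("X-Auth-Token", token)
    req.add_header("Content-Type", f"multipart/form-data; boundary={boundary}")
    return req


# Usage (with a real endpoint and token):
#   with open("face.jpg", "rb") as f:
#       req = build_request(API_URL, TOKEN, f.read())
#   print(urllib.request.urlopen(req).read())
```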
Summary
To summarize, this is how you can easily build your own real-time API service for Facial Emotion Recognition with ModelArts.
We trained a CNN model on the FER2013 dataset with custom hyperparameters, reaching a 65% accuracy score on the validation set. Then we built a real-time service using the model. In addition, if you want a higher score, you can try data augmentation techniques, adding new datasets, or transfer learning. This study showed that Huawei ModelArts makes it easy to train a model and build a real-time AI service.
References
[1] Goodfellow, Ian J., et al. "Challenges in representation learning: A report on three machine learning contests." International Conference on Neural Information Processing. Springer, Berlin, Heidelberg, 2013.