Real-Time Facial Emotion Recognition Using CNN

Vishwajeet Mohan More
May 24, 2024


Facial emotion is key to communication. Keeping this thought in mind, we demonstrate “Facial Emotion Recognition Using CNN”. Emotion recognition plays an important role in identifying human intentions in different situations, and it is widely used on social media platforms and in forensics. A Convolutional Neural Network, popularly known as a CNN, is the deep learning technique used to build our model. Using the FER2013 dataset, we preprocess the data and feed it to the CNN so that the resulting emotion predictions are accurate. Additionally, we showcase a mobile application that runs our FER model on-device in real time.

An individual’s emotional state can be accurately conveyed through facial expressions. Automatic emotion recognition is a new research field that has opened up with the goal of gaining information about human emotions. Facial emotion recognition can be divided into three major steps: preprocessing the dataset, training the model, and testing the model on real-world images. In previous research, methods such as Naïve Bayes and maximum entropy have been applied to detect human facial emotions. In our model, we use the categorical approach, also termed discrete.

The use of deep learning methods such as CNNs has helped us obtain more accurate results than the previous methods [2]. To determine the facial emotion with the highest probability, we use a CNN that analyzes the test images and classifies each one into one of four categories: happy, sad, anger, and fear.

METHODOLOGY

FER2013 DATASET

The FER2013 dataset covers seven emotions: happy, sad, anger, fear, disgust, neutral, and surprise. Considering the accuracy of the model, we removed three emotions, i.e., surprise, disgust, and neutral. The dataset has three columns (emotion, pixels, and usage) and a total of 35,887 rows. The emotion column holds one of the seven emotions, numbered 0 to 6. The pixels column holds the image, represented as space-separated pixel values. The usage column tells us whether a row is meant for training or testing.

0 - Angry, 1 - Sad, 2 - Fear, 3 - Happiness
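The filtering and renumbering above can be sketched as follows. The column names (`emotion`, `pixels`) follow the public FER2013 CSV, and the mapping from FER2013’s original seven codes to the four kept here is an assumption, since the post does not spell it out:

```python
# Sketch of remapping FER2013's seven labels to the four kept in the post.
import pandas as pd

# FER2013 codes: 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral
KEEP = {0: 0, 4: 1, 2: 2, 3: 3}  # new codes: 0-Angry, 1-Sad, 2-Fear, 3-Happiness

def filter_four_emotions(df: pd.DataFrame) -> pd.DataFrame:
    """Drop disgust/surprise/neutral rows and renumber the remaining labels."""
    df = df[df["emotion"].isin(KEEP)].copy()
    df["emotion"] = df["emotion"].map(KEEP)
    return df

# Tiny synthetic example in place of the real CSV
demo = pd.DataFrame({"emotion": [0, 1, 3, 4, 6], "pixels": ["0 0"] * 5})
print(filter_four_emotions(demo)["emotion"].tolist())  # [0, 3, 1]
```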

Data Pre-Processing

Data preprocessing refines the input dataset, a CSV file containing the pixel values of each image. This mainly includes resizing, reshaping, and normalizing the data for better classification of emotions. The images are then converted into a pandas DataFrame and a NumPy array.
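A minimal sketch of this step, turning one row’s space-separated pixel string into a normalized 48×48 grayscale array (48×48 is FER2013’s image size; the helper name is illustrative, not from the post):

```python
# Parse, reshape, and scale one FER2013 image row to [0, 1] floats.
import numpy as np

def preprocess_pixels(pixel_str: str, size: int = 48) -> np.ndarray:
    """Convert a space-separated pixel string into a (48, 48, 1) array in [0, 1]."""
    arr = np.array(pixel_str.split(), dtype=np.float32)
    return arr.reshape(size, size, 1) / 255.0

demo = " ".join(["128"] * (48 * 48))  # flat gray test image
img = preprocess_pixels(demo)
print(img.shape, round(float(img.max()), 3))  # (48, 48, 1) 0.502
```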

Splitting Dataset

We split the data into two categories, i.e., training and testing. The training dataset is used to train the model, whereas the testing dataset is used to check whether the model classifies the emotions accurately.
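FER2013’s usage column already marks each row as `Training`, `PublicTest`, or `PrivateTest`, so the split can be sketched as below; treating both test partitions as a single testing set is an assumption, since the post gives no specifics:

```python
# Split a FER2013-style DataFrame by its "Usage" column.
import pandas as pd

def split_dataset(df: pd.DataFrame):
    """Return (train, test) DataFrames based on the Usage column."""
    train = df[df["Usage"] == "Training"]
    test = df[df["Usage"] != "Training"]
    return train, test

demo = pd.DataFrame({
    "emotion": [0, 1, 2, 3],
    "Usage": ["Training", "Training", "PublicTest", "PrivateTest"],
})
train, test = split_dataset(demo)
print(len(train), len(test))  # 2 2
```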

Build the model using convolutional neural network (CNN)

A CNN is a deep learning architecture built from four layer types:

  • Convolution
  • ReLU
  • Pooling
  • Fully Connected
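One way to stack these four layer types in Keras is sketched below. The filter counts, kernel sizes, and layer depths are illustrative guesses, not the authors’ published configuration; only the 48×48 input and the four output classes come from the post:

```python
# Illustrative 4-class CNN using the four layer types listed above.
from tensorflow.keras import layers, models

def build_model(num_classes: int = 4) -> models.Sequential:
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),               # FER2013 image size
        layers.Conv2D(32, (3, 3), activation="relu"),  # convolution + ReLU
        layers.MaxPooling2D((2, 2)),                   # pooling
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),          # fully connected
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model

model = build_model()
print(model.output_shape)  # (None, 4)
```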

Training the Model

Training is a crucial step that involves defining important hyperparameters such as the number of epochs, the batch size, and the learning rate. As the model trains, the weights are updated after every batch.
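A hedged sketch of the training call: the epoch count, batch size, and learning rate below are placeholders, since the post does not state the values used, and a one-layer stand-in replaces the full CNN to keep the example short:

```python
# Compile and fit: where epochs, batch size, and learning rate are set.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(48, 48, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(4, activation="softmax"),  # stand-in for the full CNN
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# Random stand-in data in place of the preprocessed FER2013 arrays
x = np.random.rand(8, 48, 48, 1).astype("float32")
y = np.random.randint(0, 4, size=8)
history = model.fit(x, y, epochs=1, batch_size=4, verbose=0)  # weights update per batch
print(sorted(history.history))  # ['accuracy', 'loss']
```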

Conclusion

In this project, a CNN was built to recognize and classify emotions. The Haar cascade classifier was used to detect faces. To regularize the weights and improve training stability, batch normalization was used. The trained CNN model was tested on images from a real-time feed as well as the native dataset. We did our best to improve the model and increase the recognition rate, making it accurate even against complex backgrounds. The results showed that our analysis outperformed past experimental analyses.
