Building Real-Time Emotion Detection with Python

Coursesteach · Apr 27, 2024

Introduction

Detecting the real-time emotion of a person from camera input is one of the more advanced applications of machine learning, and it is useful for a variety of research and analytics purposes. The detection is done with a trained model: you either train a model on labeled examples of the different emotions yourself, or you use a dataset already available on the internet, and then use the trained model to recognize the emotion of a human being. In this article, we will build a Python program that detects the real-time emotion of a person using the camera [1].

Table of Contents

Installing Dependencies
Dataset
Import Libraries
Load Dataset
Data Description
Data Splitting
Data Normalization
Designing the CNN
Model Training
Saving the Model
Evaluating the Model
Confusion Matrix
Testing
Detecting Real-Time Emotion

📚1- Installing Dependencies

!pip install opencv-python
!pip install tensorflow
!pip install numpy
!pip install pandas
!pip install keras
!pip install matplotlib
!pip install seaborn
!pip install scikit-learn

📚2- Dataset

For training purposes, I use a pre-labeled dataset in CSV form as the main input for training the model. You can use the code given below to train the model on this dataset. Before that, you need to ensure that all required files are in the same directory as the program; otherwise it will throw an error. You can download the dataset by clicking here.
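As a quick guard against the missing-file error mentioned above, you can check that the CSV exists before loading it (a minimal sketch; the path is a placeholder for wherever you saved the file):

import os

dataset_path = 'fer2013.csv'  # placeholder: adjust to your own location
if not os.path.exists(dataset_path):
    raise FileNotFoundError(f"Dataset not found at {dataset_path}; download fer2013.csv first.")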

📚3- Import Libraries

import sys, os
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization,AveragePooling2D
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
from keras.regularizers import l2
#from keras.utils import np_utils
from keras.utils import to_categorical

📚4- Load Dataset

Both training and evaluation are handled with the FER2013 dataset. The compressed version of the dataset takes 92 MB of space, whereas the uncompressed version takes 295 MB. There are about 28K training and 3K test images in the dataset, each stored as 48×48 pixels. The raw dataset consists of the image pixels (48×48 = 2,304 values), the emotion label of each image, and the usage type (train or test instance) [2].

from google.colab import drive
drive.mount('/content/gdrive')
df=pd.read_csv('/content/gdrive/MyDrive/Datasets (1)/Emotion Detection/fer2013.csv')
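To see that structure concretely, a single row's pixel string can be decoded back into a 48×48 image (a minimal sketch, assuming the DataFrame loaded above):

row = df.iloc[0]
pixels = np.array(row['pixels'].split(' '), dtype='float32')  # 48*48 = 2,304 values
img = pixels.reshape(48, 48)
print(row['emotion'], row['Usage'], img.shape)  # emotion label, split name, (48, 48)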

📚5- Data Description

df.head()
print(df.info())
print(df["Usage"].value_counts())

📚6- Data Splitting

X_train, train_y, X_test, test_y = [], [], [], []

for index, row in df.iterrows():
    val = row['pixels'].split(" ")
    try:
        if 'Training' in row['Usage']:
            X_train.append(np.array(val, 'float32'))
            train_y.append(row['emotion'])
        elif 'PublicTest' in row['Usage']:
            X_test.append(np.array(val, 'float32'))
            test_y.append(row['emotion'])
    except Exception:
        print(f"error occurred at index: {index} and row: {row}")

num_features = 64
num_labels = 7
batch_size = 64
epochs = 30
width, height = 48, 48

X_train = np.array(X_train, 'float32')
train_y = np.array(train_y, 'float32')
X_test = np.array(X_test, 'float32')
test_y = np.array(test_y, 'float32')

# One-hot encode the 7 emotion labels
train_y = to_categorical(train_y, num_classes=num_labels)
test_y = to_categorical(test_y, num_classes=num_labels)

📚7- Data Normalization

Strictly speaking, the code below standardizes the pixels to zero mean and unit variance rather than scaling them to the 0–1 range, and then reshapes each flat pixel vector into a 48×48×1 image.

X_train -= np.mean(X_train, axis=0)
X_train /= np.std(X_train, axis=0)
X_test -= np.mean(X_test, axis=0)
X_test /= np.std(X_test, axis=0)
X_train = X_train.reshape(X_train.shape[0], 48, 48, 1)
X_test = X_test.reshape(X_test.shape[0], 48, 48, 1)
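A quick shape check can catch splitting mistakes early (the counts below assume the roughly 28K/3K split described earlier):

print(X_train.shape, train_y.shape)  # expected roughly (28709, 48, 48, 1) and (28709, 7)
print(X_test.shape, test_y.shape)    # expected roughly (3589, 48, 48, 1) and (3589, 7)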

📚8- Designing the CNN

8.1- 1st Convolution Layer

model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', input_shape=(X_train.shape[1:])))
model.add(Conv2D(64,kernel_size= (3, 3), activation='relu'))
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), strides=(2, 2)))
model.add(Dropout(0.5))

8.2- 2nd Convolution Layer

model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), strides=(2, 2)))
model.add(Dropout(0.5))

8.3- 3rd Convolution Layer

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Conv2D(128, (3, 3), activation='relu'))
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), strides=(2, 2)))
model.add(Flatten())

8.4- Fully Connected Neural Network

model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_labels, activation='softmax'))
# model.summary()

📚9- Model Training

9.1- Method 1: Training with a Generator [2]

Now we can train the network. To complete training in less time, I prefer to learn from randomly selected training-set instances; that is why an image generator and fit_generator are used. The loss function is categorical cross-entropy, because the task is multi-class classification [2].

import keras
from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator()
train_generator = gen.flow(X_train, train_y, batch_size=batch_size)

model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

# Note: fit_generator is deprecated in recent Keras releases;
# there, model.fit accepts generators directly.
model.fit_generator(train_generator, steps_per_epoch=batch_size, epochs=epochs)

9.2- Method 2: Compiling and Fitting the Model Directly

model.compile(loss=categorical_crossentropy,
              optimizer=Adam(),
              metrics=['accuracy'])

model.fit(X_train, train_y,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(X_test, test_y),
          shuffle=True)

📚10- Saving the Model

# Serialize the architecture to JSON and the weights to HDF5
fer_json = model.to_json()
with open("fer.json", "w") as json_file:
    json_file.write(fer_json)
model.save_weights("fer.h5")

📚11- Evaluating the Model [2]

train_score = model.evaluate(X_train, train_y, verbose=0)
print('Train loss:', train_score[0])
print('Train accuracy:', 100 * train_score[1])
test_score = model.evaluate(X_test, test_y, verbose=0)
print('Test loss:', test_score[0])
print('Test accuracy:', 100 * test_score[1])

# Output:
# Train loss: 0.4940066933631897
# Train accuracy: 83.85523557662964
# Test loss: 1.272325873374939
# Test accuracy: 57.6762318611145
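The gap between roughly 84% training accuracy and 58% test accuracy indicates overfitting. Plausible remedies, consistent with the code above, include enabling the commented-out BatchNormalization layers, increasing dropout, or passing augmentation parameters (rotations, shifts, flips) to the otherwise-empty ImageDataGenerator.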

📚12- Confusion Matrix [2]

from sklearn.metrics import confusion_matrix
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Assuming you have trained your model and obtained predictions on test data
# Replace these with your actual predictions and true labels
y_pred = np.argmax(model.predict(X_test), axis=1)

y_true = np.argmax(test_y, axis=1) # Convert one-hot encoded true labels back to categorical

# Calculate confusion matrix
cm = confusion_matrix(y_true, y_pred)

# Plot confusion matrix
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.title('Confusion Matrix')
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
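For readability, the heatmap axes can be labeled with the class names instead of bare indices (a small sketch; the label order assumes the 0 to 6 mapping used elsewhere in this article):

labels = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=labels, yticklabels=labels)
plt.title('Confusion Matrix')
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()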

📚13- Testing [2]

from keras.preprocessing import image
from matplotlib import pyplot as plt

# Note: emotion_analysis is defined in the next code block, so run that
# block first. In newer Keras versions, use color_mode='grayscale'
# instead of grayscale=True.
img = image.load_img("la.jfif", grayscale=True, target_size=(48, 48))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x /= 255
custom = model.predict(x)
emotion_analysis(custom[0])

x = np.array(x, 'float32')
x = x.reshape([48, 48])
plt.gray()
plt.imshow(x)
plt.show()

Emotions are stored as numerical labels from 0 to 6. Keras produces an output array containing these 7 emotion scores, and we can visualize each prediction as a bar chart with the helper function below.

def emotion_analysis(emotions):
    objects = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')
    y_pos = np.arange(len(objects))
    plt.bar(y_pos, emotions, align='center', alpha=0.5)
    plt.xticks(y_pos, objects)
    plt.ylabel('percentage')
    plt.title('emotion')
    plt.show()

📚14- Detecting Real-Time Emotion

import cv2
import numpy as np
from keras.models import model_from_json
from keras.preprocessing import image

# Load the saved model architecture and weights
model = model_from_json(open("fer.json", "r").read())
model.load_weights('fer.h5')

face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    ret, test_img = cap.read()  # captures frame and returns boolean value and captured image
    if not ret:
        print("Error: Unable to capture frame from the webcam.")
        break

    gray_img = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)

    faces_detected = face_haar_cascade.detectMultiScale(gray_img, 1.32, 5)

    for (x, y, w, h) in faces_detected:
        cv2.rectangle(test_img, (x, y), (x + w, y + h), (255, 0, 0), thickness=7)
        roi_gray = gray_img[y:y + h, x:x + w]  # crop the region of interest, i.e. the face area
        roi_gray = cv2.resize(roi_gray, (48, 48))
        img_pixels = image.img_to_array(roi_gray)
        img_pixels = np.expand_dims(img_pixels, axis=0)
        img_pixels /= 255

        predictions = model.predict(img_pixels)

        # Index of the highest-scoring emotion
        max_index = np.argmax(predictions[0])

        emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')
        predicted_emotion = emotions[max_index]

        cv2.putText(test_img, predicted_emotion, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

    resized_img = cv2.resize(test_img, (1000, 700))
    cv2.imshow('Facial emotion analysis ', resized_img)

    if cv2.waitKey(10) == ord('q'):  # quit when 'q' key is pressed
        break

cap.release()
cv2.destroyAllWindows()
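If haarcascade_frontalface_default.xml is not in your working directory, OpenCV ships a copy that you can reference directly (a small sketch using OpenCV's bundled data path):

import cv2

cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_haar_cascade = cv2.CascadeClassifier(cascade_path)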

🔑 Results

CNN Model
Train loss: 0.4940066933631897
Train accuracy: 83.85523557662964
Test loss: 1.272325873374939
Test accuracy: 57.6762318611145

Github

Here you can find the complete code of the project.

Github Link

Please follow and 👏 clap for Coursesteach to see the latest updates on this story.

🚀 Elevate Your Data Skills with Coursesteach! 🚀

Ready to dive into Python, Machine Learning, Data Science, Statistics, Linear Algebra, Computer Vision, and Research? Coursesteach has you covered!

🔍 Python, 🤖 ML, 📊 Stats, ➕ Linear Algebra, 👁️‍🗨️ Computer Vision, 🔬 Research — all in one place!

Don’t Miss Out on This Exclusive Opportunity to Enhance Your Skill Set! Enroll Today 🌟 at

Machine Learning projects course

🔍 Explore free projects from top universities in Computer Vision, NLP, Machine Learning, Deep Learning, Time Series, and Python; access insightful slides and source code; and tap into a wealth of free online resources and GitHub repositories related to machine learning projects. Connect with like-minded individuals on Reddit, Facebook, and beyond, and stay updated with our YouTube channel and GitHub repository. Don't wait; enroll now and unleash your machine learning potential!

Stay tuned for our upcoming articles, where we will explore specific topics related to deep learning in more detail!

Remember, learning is a continuous process. So keep learning, keep creating, and keep sharing with others! 💻✌️

📚GitHub Repository

Ready to dive into data science and AI but unsure how to start? I’m here to help! Offering personalized research supervision and long-term mentoring. Let’s chat on Skype: themushtaq48 or email me at mushtaqmsit@gmail.com. Let’s kickstart your journey together!

Contribution: We would love your help in making the Coursesteach community even better! If you want to contribute to a course, or if you have suggestions for improving any Coursesteach content, feel free to reach out and follow.

Together, let’s make this the best AI learning Community! 🚀

👉WhatsApp

👉 Facebook

👉Github

👉LinkedIn

👉Youtube

👉Twitter

📚References:

[1] Building a Real Time Emotion Detection with Python. https://morioh.com/p/801c509dda99?f=5c21f93bc16e2556b555ab2f

[2] Facial Expression Recognition with Keras. http://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/

[3] Machine Learning Project | Facial Emotion Detection | Part 2: Creating a Webapp Using Flask. https://www.youtube.com/watch?v=2mN7ygkc2XU&feature=youtu.be&fbclid=IwAR35sBAYwEakprFrmi12-4wW_54COtb8hcXdEdlCJjQ_en2JCi4zRA28bSs
