parth dholakiya
5 min read · May 12, 2023

Facial emotion detection using YOLOv8

Facial emotion detection has become an important tool in various fields like psychology, marketing, and law enforcement. It involves using computer algorithms to analyze facial expressions in images or videos and detect emotional states such as happiness, sadness, anger, and surprise. In this blog post, we will take you through the process of building a facial emotion detection model using YOLOv8.

Step 1: Downloading Images

The first step in building a facial emotion detection model is to gather a dataset of facial images. One way to do this is to use a Chrome extension such as “Download All Images”. This tool allows you to download multiple images at once from websites such as Google Images. To use it, simply search for the types of images you want to download (e.g., “happy face”, “sad face”, “angry face”, etc.) and then use the extension to download as many images as you need.
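Bulk downloads from image search usually include exact duplicates, which can skew training. As a small housekeeping step, you can drop files with identical byte content by hashing them. This is a stdlib-only sketch; the folder name passed in is whatever directory your extension saved to.

```python
import hashlib
from pathlib import Path

def dedupe_images(folder: str) -> int:
    """Delete files whose byte content duplicates an earlier file.

    Returns the number of files removed.
    """
    seen = set()
    removed = 0
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()  # exact byte-for-byte duplicate; safe to delete
            removed += 1
        else:
            seen.add(digest)
    return removed

# Example: dedupe_images("downloaded_images")
```

Hashing catches only exact copies; near-duplicates (resized or re-encoded versions of the same photo) would need a perceptual hash instead.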

Step 2: Labeling Images

Once you have downloaded your images, the next step is to label them using a tool such as Roboflow. For YOLOv8 object detection, this means drawing a bounding box around each face and labeling it with the appropriate emotional state (e.g., happy, sad, angry, etc.).

Roboflow makes this process much easier by providing a user-friendly interface and a variety of labeling tools. You can follow their in-depth tutorial to get started. After labeling, you can download your data as a zip file.
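For reference, the data.yaml file that Roboflow exports alongside a YOLOv8 dataset looks roughly like this. The paths and class list below are illustrative, not taken from this dataset; your export will list your own emotion classes:

```yaml
train: ../train/images
val: ../valid/images
test: ../test/images

nc: 4
names: ['angry', 'happy', 'sad', 'surprise']
```

The `nc` field must match the length of `names`, and the paths are relative to wherever the YAML file sits after you unzip the export.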

Step 3: Training YOLOv8 on Google Colab

After labeling the images, it’s time to train your facial emotion detection model using YOLOv8. YOLOv8 is a popular object detection algorithm that uses a deep neural network to identify objects in images and videos. To train your model, you will need to use a machine learning platform such as Google Colab.

Google Colab provides a free platform to train your model using GPUs. It’s easy to set up, and you can run your training code in a Jupyter notebook on Google Colab. You will need to install the ultralytics package and point it at your labeled dataset.

!pip install ultralytics

from google.colab import drive
drive.mount('/content/drive')

# unzip dataset
!unzip /content/drive/Face-emotion-detection-YOLO-v8/yolov8.zip

# YOLOv8 requires a YAML file; luckily, Roboflow generates one for us

!yolo task=detect mode=train model=yolov8x.pt data=/content/data.yaml epochs=100 imgsz=640 batch=8 project=/content/drive/report save=True

# project= is the destination where the training results and metrics are saved

Training now starts; this will take time depending on your hardware. Training metrics and results are written to the project destination as the run progresses.
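Among those results is a results.csv that grows one row per epoch. Here is a small stdlib-only sketch for pulling the final epoch's numbers out of it; the exact column names (e.g. `metrics/mAP50(B)`) follow the ultralytics CSV format, which pads headers with spaces in some versions, so treat them as an assumption and check your own file:

```python
import csv

def last_epoch_metrics(csv_path: str) -> dict:
    """Return the final row of a YOLO results.csv as {column: value}.

    Header names and values are stripped because some ultralytics
    versions pad them with spaces.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    last = rows[-1]  # the most recent epoch
    return {k.strip(): v.strip() for k, v in last.items()}

# Example: last_epoch_metrics("/content/drive/report/train/results.csv")
```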

Training results

Step 4: Engaging with Real-Time Video Predictions

Congratulations on training your model! Now let’s put it to work predicting emotions in real-time video streams.

To begin, you’ll need Python and OpenCV, which make it straightforward to run predictions on a live video feed frame by frame.

One essential technique is OpenCV’s Haar Cascade Classifier, which scans each frame to detect faces. Once a face is detected, you crop that region and pass it through your trained YOLOv8 model, which predicts the emotional state of the person in the frame.

To truly bring the model to life, you’ll want to integrate it into a real-world application. That means building a user interface and a user experience tailored to your specific use case.

To put everything into practice, I’ve downloaded a collection of videos so we can test the model’s predictions on them.
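The Haar-cascade-plus-YOLOv8 loop described above can be sketched as follows. Assumptions: `best.pt` is your exported weights, the webcam sits at index 0, and you have `opencv-python` and `ultralytics` installed; the drawing details are illustrative.

```python
def clamp_box(x1, y1, x2, y2, width, height):
    """Clamp a face box to the frame bounds before cropping."""
    return (max(0, x1), max(0, y1), min(width, x2), min(height, y2))

def run_webcam(model_path="best.pt"):
    # OpenCV and ultralytics are imported here so clamp_box stays
    # usable without them installed.
    import cv2
    from ultralytics import YOLO

    model = YOLO(model_path)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(0)  # 0 = default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            x1, y1, x2, y2 = clamp_box(x, y, x + w, y + h,
                                       frame.shape[1], frame.shape[0])
            face = frame[y1:y2, x1:x2]
            # Run the emotion model on the cropped face only.
            results = model.predict(face, conf=0.55, verbose=False)
            label = ""
            for box in results[0].boxes:
                label = results[0].names[int(box.cls)]
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, label, (x1, y1 - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("emotion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

To run it on a video file instead of the webcam, pass the file path to `cv2.VideoCapture` in place of `0`.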

!yolo task=detect mode=predict model=/content/drive/MyDrive//best.pt conf=0.55 source=/content/drive/video save=True

Video result:

Conclusion

In conclusion, building a facial emotion detection model using YOLOv8 is a complex but rewarding process that can have many practical applications. By following the steps outlined in this post, you can get started with building your own model and contributing to the exciting field of computer vision.

References:

“Download All Images” Chrome Extension

https://chrome.google.com/webstore/detail/download-all-images/ifipmfl

Roboflow

Hyperparameters for YOLOv8

Getting started with Roboflow

parth dholakiya

An AI enthusiast with a passion for deep learning, computer vision and natural language processing. https://www.linkedin.com/in/parthdholakiya/