Week 5 — Emotion Detection

Mucahitfindik · bbm406f19 · Dec 29, 2019

Hello everyone! We continue to share the progress of our Machine Learning project. In this post, we will talk about the libraries we used to build the most suitable model. When it comes to machine learning libraries, there are of course many options. So which ones did we use? What changes did we have to make? What have we done to improve our model? Keep reading for all of the answers!

First of all, we started to build our model using the Keras library. Keras is a high-level wrapper that uses Theano or TensorFlow as a backend, and it makes defining and training models very easy. That is exactly why we thought this library was suitable for us. However, we preferred to define the model architecture ourselves rather than use a ready-made, pre-trained network.
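As an illustration, defining such a model by hand in Keras looks roughly like the sketch below. The layer sizes and the 48×48 grayscale input shape are assumptions for illustration, not our exact configuration.

```python
# Minimal sketch of a custom CNN defined directly in Keras (assumed architecture,
# not our exact model). Assumes 48x48 grayscale face images and 7 emotion classes.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(7, activation='softmax'),  # one output per emotion class
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(x_train, y_train, epochs=64, validation_data=(x_val, y_val))
```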

Below you can see the confusion matrix and some of the metrics we obtained with Keras. As stated in the previous post, our data set contains very little data for the ‘disgust’ class compared to the other classes. Therefore, as can be observed in the confusion matrix, the ‘disgust’ column is all zeros, and the precision, recall and F1-score for disgust are 0. With 64 epochs, our accuracy was 44%. This did not satisfy us, so we decided to use the FastAi library to obtain a higher-accuracy model.

Performance Measurement for Keras
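For reference, a confusion matrix and per-class precision/recall/F1 like the ones above can be produced from model predictions with scikit-learn. The snippet below is a generic sketch; the class names and the `y_true`/`y_pred` arrays are placeholders, not our actual code or data.

```python
# Generic sketch: confusion matrix and per-class metrics with scikit-learn.
# y_true / y_pred stand in for the real ground-truth and predicted class indices.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

labels = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']

y_true = np.array([3, 0, 6, 3, 4])   # example ground-truth labels
y_pred = np.array([3, 0, 6, 4, 4])   # example model predictions

print(confusion_matrix(y_true, y_pred, labels=list(range(len(labels)))))
print(classification_report(y_true, y_pred,
                            labels=list(range(len(labels))),
                            target_names=labels))
```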

We then switched to the FastAi library, because it is easier to use than plain Keras or PyTorch and made implementing the model simpler. With this library, we used the ResNet-34 and DenseNet-201 architectures. Since these models are more complex than the one we had built ourselves, they performed better on our data set. Even though we still have very few disgust images and that class remains a weak point, both architectures gave better overall results: the accuracy we achieved with ResNet-34 is 59%, and with DenseNet-201 it is 55%.

Performance Measurement for ResNet-34
Performance Measurement for DenseNet-201
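For context, fine-tuning a pre-defined architecture such as ResNet-34 with the fastai v1 API (the version current at the time of writing) looks roughly like the sketch below. The folder path, image size and number of epochs are placeholders, not our exact settings.

```python
# Rough sketch of fine-tuning ResNet-34 with fastai v1; 'data/emotions' and the
# hyperparameters below are placeholders for illustration only.
from fastai.vision import (ImageDataBunch, cnn_learner, models,
                           accuracy, get_transforms)

data = ImageDataBunch.from_folder(
    'data/emotions',           # one sub-folder per emotion class
    valid_pct=0.2,             # hold out 20% of the images for validation
    ds_tfms=get_transforms(),  # standard augmentation (flips, rotations, ...)
    size=48,                   # resize; assumes small face crops
).normalize()

learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(10)        # train with the one-cycle policy

# Swapping in models.densenet201 gives the DenseNet-201 variant.
```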

In our opinion, the reason the accuracies are not higher is our data set. As mentioned earlier, the images in our data set are very small and we do not have enough data for the disgust class. That is why we are planning to find another data set next week and combine it with ours. See you in the next post!
