David and Goliath, or My Attempt to Beat Microsoft at Facial Emotion Detection

During my quest to build emotion detection from faces, I had always assumed there was no available system to compare my own with. Not because there are no companies or hobbyists already building such things, but because the companies making the headlines keep the details of their work obscure. It turns out I was wrong. Last November, Microsoft Research released a fantastic facial emotion detection demo as part of their ongoing Project Oxford, and I just found out. How could I miss it? I don’t know, but guess what? I now have something to compare my own system with (once I build it).

So just how good is this MS system? It is darn good! For instance, I tried to use OpenCV to detect and crop the face in the image below so I could add it to my own dataset, but OpenCV was unable to find a face in it (bad news for me, since I was planning to use OpenCV as part of my system). Have a look at what the MS system did:

MS’s Emotion Detection API correctly detecting this dude’s happy face.

It correctly detected the face and the expressed emotion. So it seems I will have a tough time trying to beat Microsoft. You may think I am crazy to even try. The thing is, this is extremely motivating and exciting. I want to see how far I can get. Whether I beat MS or not is not important; this is my training, my personal journey, and who knows, maybe once again David will defeat Goliath.


This article is the sixth in a series that I will be writing as part of the process of attempting to create an Artificial Emotional Intelligence.

Note: I am not a native English speaker nor a professional writer. I use English as a universal language to tell and document this journey I am about to embark on. I apologize if in any way I’m killing the beauty of this language with my grammar and spelling errors.