Face sentiment analysis using Azure Face API and Xamarin Forms

Aritra Das
inspiringbrilliance
Sep 10, 2019 · 3 min read

This is a fun little app, which I call “Emojifier”, that uses the power of the Azure Face API to detect faces along with various facial attributes, including sentiment.

This article shows how you can build a similar app, or add this as a feature to your own.

The app has the following features:

  • Detects faces
  • Detects emotion on those faces
  • Places an emoji matching that sentiment onto the face
  • Draws a rectangle around the face
  • Puts a label describing the predominant sentiment on the face
  • Switches between emoji mode and text mode

Building Emojifier

Step 1: Set up Azure Face API

Just visit this link; it has all the details and a walkthrough. Come back here after obtaining the API key and the API endpoint, as we need both to communicate with the Face API.

Please select the Azure location closest to you, as that gives the quickest response times.

So our backend is pretty much set, trained on tens of thousands of samples without us having to do anything. In other words, our ML model is ready and hosted, and we can start running queries against it.

Step 2: Let’s develop the Xamarin Forms app

Create a Xamarin Forms solution with a .NET Standard shared project.

Once the project is created, we can move on to the next step:

Capture the image to be sent to the Face API

We will use the Xam.Plugin.Media NuGet package to capture the image; if you want to know more about how to use the plugin, visit this page. It works on Android, iOS, and UWP.

We can capture the image like this
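
The following is a minimal sketch of that capture step, assuming Xam.Plugin.Media is installed in the shared and platform projects; the CapturePhotoAsync wrapper name is just illustrative:

    using System.Threading.Tasks;
    using Plugin.Media;
    using Plugin.Media.Abstractions;

    // Capture a photo with the device camera; returns null if the
    // camera is unavailable or the user cancels.
    private async Task<MediaFile> CapturePhotoAsync()
    {
        await CrossMedia.Current.Initialize();

        if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
            return null;

        return await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
        {
            PhotoSize = PhotoSize.Medium // keep the payload sent to the Face API small
        });
    }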

Once the image is captured, we can pass it to the Face API client to detect the face along with its attributes.

So first, create and initialize the face client.
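
A minimal sketch, assuming the Microsoft.Azure.CognitiveServices.Vision.Face NuGet package; the placeholders stand in for the key and endpoint you obtained in Step 1:

    using Microsoft.Azure.CognitiveServices.Vision.Face;

    // Placeholders for the key and endpoint from the Azure portal.
    const string faceApiKey = "<your-api-key>";
    const string faceApiEndpoint = "https://<your-region>.api.cognitive.microsoft.com";

    var faceClient = new FaceClient(new ApiKeyServiceClientCredentials(faceApiKey))
    {
        Endpoint = faceApiEndpoint
    };
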
To get the face emotions, we call the API like this:

    var faceApiResponseList = await faceClient.Face.DetectWithStreamAsync(
        image.GetStream(),
        returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.Emotion });

The emotion response comes back as JSON, shaped like this:
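
Here is an illustrative, trimmed example for a single detected face; the scores below are made-up values:

    [
      {
        "faceRectangle": { "top": 114, "left": 212, "width": 65, "height": 65 },
        "faceAttributes": {
          "emotion": {
            "anger": 0.0,
            "contempt": 0.001,
            "disgust": 0.0,
            "fear": 0.0,
            "happiness": 0.961,
            "neutral": 0.035,
            "sadness": 0.002,
            "surprise": 0.001
          }
        }
      }
    ]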

We have to find out which is the predominant emotion among all of these.

To do that, create a method along these lines.
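
A sketch of one way to do it, picking the highest-scoring property of the SDK’s Emotion model; the method name GetPredominantEmotion is illustrative:

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

    // Return the name of the strongest emotion on a detected face.
    private static string GetPredominantEmotion(Emotion emotion)
    {
        var scores = new Dictionary<string, double>
        {
            { "Anger", emotion.Anger },
            { "Contempt", emotion.Contempt },
            { "Disgust", emotion.Disgust },
            { "Fear", emotion.Fear },
            { "Happiness", emotion.Happiness },
            { "Neutral", emotion.Neutral },
            { "Sadness", emotion.Sadness },
            { "Surprise", emotion.Surprise }
        };

        return scores.OrderByDescending(pair => pair.Value).First().Key;
    }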

Also, since the image may contain multiple faces, we have to store all of them and display them accordingly, so let’s create a model class for that.
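
A minimal version; the FaceInfo name and its properties are illustrative:

    using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

    // One entry per detected face: where it sits on the image and
    // which emotion was predominant there.
    public class FaceInfo
    {
        public FaceRectangle Rectangle { get; set; }
        public string Emotion { get; set; }
    }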

Our completed method to call the API and parse the response would look something like this.
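
A sketch that ties the pieces together; it reuses the illustrative FaceInfo model and GetPredominantEmotion helper from above, and assumes faceClient is the client we created earlier:

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.Azure.CognitiveServices.Vision.Face;
    using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
    using Plugin.Media.Abstractions;

    // Send the captured image to the Face API and map every detected
    // face onto our FaceInfo model.
    private async Task<List<FaceInfo>> DetectFacesAsync(MediaFile image)
    {
        var faceApiResponseList = await faceClient.Face.DetectWithStreamAsync(
            image.GetStream(),
            returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.Emotion });

        return faceApiResponseList
            .Select(face => new FaceInfo
            {
                Rectangle = face.FaceRectangle,
                Emotion = GetPredominantEmotion(face.FaceAttributes.Emotion)
            })
            .ToList();
    }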

Now we have all the data we need: the rectangle data giving the position of each detected face in the image, along with the sentiment detected on it.

You can display this data however you need.

I have drawn emojis on the detected faces using SkiaSharp.
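
A simplified sketch of that drawing step (the full drawing service in the repo does more; the names here are illustrative):

    using System.Collections.Generic;
    using SkiaSharp;

    // Paint an emoji bitmap over each detected face rectangle and
    // return a decorated copy of the photo.
    private static SKBitmap DrawEmojis(SKBitmap photo, SKBitmap emoji, IEnumerable<FaceInfo> faces)
    {
        var result = photo.Copy();
        using (var canvas = new SKCanvas(result))
        {
            foreach (var face in faces)
            {
                var rect = SKRect.Create(
                    face.Rectangle.Left,
                    face.Rectangle.Top,
                    face.Rectangle.Width,
                    face.Rectangle.Height);

                // DrawBitmap scales the emoji to fill the face rectangle.
                canvas.DrawBitmap(emoji, rect);
            }
        }
        return result;
    }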

If you want to know how to draw these onto your images, or how to develop the complete application, check out my repo on GitHub, especially the SkiaSharpDrawingService.cs.

Please feel free to leave feedback and share this with your friends; I would highly appreciate it, and it would motivate me a lot.

Thanks for reading; I hope it helps you.
