Detect and analyze faces with the Face service (Azure)

Hello guys, I’m Senura Jayadeva. Today I’m going to explain how to use the Face cognitive service provided by Azure.

Face detection, analysis, and recognition are important capabilities for artificial intelligence (AI) solutions.

Microsoft Azure provides multiple cognitive services that you can use to detect and analyze faces, including:

  • Computer Vision: offers face detection and some basic face analysis, such as determining age.
  • Video Indexer: can be used to detect and identify faces in a video.
  • Face: offers pre-built algorithms that can detect, recognize, and analyze faces.

As I mentioned above, today we will focus on the Face cognitive service, which makes it easy to integrate these capabilities into your applications.

To use this service, you need an Azure account.

First, go to the following link

Then click on the Start free button.

Then it will navigate to a page like this. I already have a subscription, so I’m going to choose the Use an existing subscription in your account option. If you don’t have a subscription, you have to go with the Sign up for a Pay-As-You-Go subscription option.

Then it will redirect to the home page of the Azure portal.

Now select + Create a resource, search for Face, and create a new resource with the following settings:

  • Workspace Name: enter a unique name of your choice
  • Subscription: your Azure subscription
  • Resource group: create a new resource group with a unique name
  • Location: choose any available location

After filling in that information, click on the Review + create button. Azure will then validate the information.

Once validation passes, click on the Create button to complete the process.

It takes some time to complete the deployment process.

After the deployment is complete, click on Go to resource and it will redirect you to the following page.

Okay, now we are going to test the Face cognitive service with the Azure API console. Click on the second option, and it will navigate to the following page.

Here you can see that the Face cognitive service currently supports the following functionality:

  • Face Detection
  • Face Verification
  • Find Similar Faces
  • Group faces based on similarities
  • Identify people

The Face cognitive service can return the rectangle coordinates for any human faces found in an image, as well as a series of attributes related to those faces, such as:

  • the head pose — orientation in 3D space
  • an estimated age
  • what emotion is displayed
  • whether there is facial hair or the person is wearing glasses
  • whether the face in the image has makeup applied
  • whether the person in the image is smiling
  • blur — how blurred the face is (which can indicate how likely the face is to be the main focus of the image)
  • exposure — whether the face is underexposed or overexposed; this applies to the face in the image, not the overall image exposure
  • noise — visual noise in the image. If you take a photo with a high ISO setting in dark conditions, you will notice this noise: the image looks grainy or full of tiny dots that make it less clear
  • occlusion — whether objects may be blocking the face in the image

Let’s continue with our task.

On the same page, you will see a title called HTTP Method, and there you have to select the region where you created your resource.

Here you can see we are making a POST HTTP request. I assume you are familiar with API calls, so I’m not going to explain HTTP methods, headers, and parameters.

After selecting the region, we will be redirected to a new page like this.

By default, the returnFaceAttributes field is empty, in which case the service returns only the rectangle coordinates of any human faces found in the image. But I want to know the estimated age and gender of the face in the image I will provide later, so I set it to age,gender.

Make sure to set detectionModel to detection_01, because returnFaceAttributes is not supported by detection_02.

When calling the API, we have to pass Content-Type and Ocp-Apim-Subscription-Key as headers. For Ocp-Apim-Subscription-Key, you need a key. To get one, go to your Face resource in the Azure portal; in the left sidebar you will see an option called Keys and Endpoint. There you will find two keys, and you can use either one as the Ocp-Apim-Subscription-Key.
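If you prefer to build the same request in code instead of the API console, here is a minimal Python sketch. The region, key, and image URL are placeholders you must replace with your own values; the request is only constructed here, not sent.

```python
import json
import urllib.request
from urllib.parse import urlencode

# Placeholder values -- replace with your resource's region, one of its
# two keys from the Keys and Endpoint blade, and a real image URL.
REGION = "eastus"
SUBSCRIPTION_KEY = "<your-key-from-Keys-and-Endpoint>"
IMAGE_URL = "https://example.com/face.jpg"

# Query parameters matching the API console settings described above.
params = urlencode({
    "returnFaceAttributes": "age,gender",
    "detectionModel": "detection_01",  # detection_02 does not support returnFaceAttributes
})
url = f"https://{REGION}.api.cognitive.microsoft.com/face/v1.0/detect?{params}"

# The image is passed as a JSON body containing its URL.
request = urllib.request.Request(
    url,
    data=json.dumps({"url": IMAGE_URL}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    },
    method="POST",
)

# urllib.request.urlopen(request) would actually send the call.
print(request.full_url)
```

Calling `urllib.request.urlopen(request)` with real values would return the same JSON the API console shows.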

As the last step, you have to provide an image to analyze. For that, I’m going to grab an image link from Google.

Finally, click on the Send button to get the result.

After that, it will analyze the image and return a JSON output like the one below.

You can see it returned the gender and estimated age by analyzing the given image.
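In your own application, you would parse that JSON response rather than read it by eye. Here is a small Python sketch; the sample below is a made-up response in the shape the detect endpoint returns (illustrative values, not real API output).

```python
import json

# A sample response in the shape returned by the Face detect endpoint
# (faceId, rectangle, and attribute values here are illustrative only).
sample_response = """
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": {"top": 131, "left": 177, "width": 162, "height": 162},
    "faceAttributes": {"gender": "male", "age": 27.0}
  }
]
"""

faces = json.loads(sample_response)
for face in faces:
    rect = face["faceRectangle"]
    attrs = face["faceAttributes"]
    print(f"Face at ({rect['left']}, {rect['top']}): "
          f"{attrs['gender']}, about {attrs['age']:.0f} years old")
```

The response is a list because the service returns one entry per face found in the image.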

If you want more details about the image, you can add more attributes to the returnFaceAttributes query parameter like this:

returnFaceAttributes:blur,exposure,noise,age,gender,facialhair,glasses,hair,makeup,accessories,occlusion,headpose,emotion,smile
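If you are building that parameter in code, the same comma-separated value can be assembled from a list, which is easier to maintain. A small sketch (and remember: these attributes only work with detection_01):

```python
# All face attributes listed above (supported by the detection_01 model only).
attributes = [
    "blur", "exposure", "noise", "age", "gender", "facialhair", "glasses",
    "hair", "makeup", "accessories", "occlusion", "headpose", "emotion", "smile",
]

# Build the comma-separated value for the returnFaceAttributes parameter.
return_face_attributes = ",".join(attributes)
print(return_face_attributes)
```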

As you can see, we can get a lot of details about a given image by using this Azure service.

Not only that, you can also test this Face API with a tool like Postman.

So you can see that you can easily get the same result with Postman. You can likewise call this Azure API from your React, Angular, or other projects.

I hope you learned something about this Azure cognitive service. Thank you!

--

Senura Vihan Jayadeva
Microsoft Student Champs — Sri Lanka

Software Engineering undergraduate of Sri Lanka Institute of Information Technology | Physical Science undergraduate of University of Sri Jayewardenepura