Facial recognition on iOS using Microsoft Azure Face API

Alejandro Cotilla
Aug 31, 2018

Facial recognition has countless applications and is being applied more and more every day in a wide range of technologies, such as mobile platforms, security systems and robotics, just to name a few.

One of the most practical and well-implemented applications of facial recognition on smartphones was introduced by Google with their Photos app back in 2015, with the ability to search photos by people's faces. You can easily find photos of your family and friends, going from today all the way back to when they were born; it's crazy how well it works.

[Image: searching Google Photos by face. Source: The Verge]

That is exactly what we'll do in this tutorial: we'll filter a collection of images based on people's faces. And there is one little group of faces that needs more recognition in 2018: The Avengers.
This will be the end result of our demo app.

Why use a third-party solution?

Apple’s Vision Framework comes out of the box with face detection, but not facial recognition. With the help of Core ML, facial recognition is definitely possible using the Vision framework, although that requires integrating a previously trained model into your app.
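For reference, here is roughly what face detection (not recognition) looks like with Vision. This is a minimal sketch, and the detectFaces helper name is my own:

```swift
import UIKit
import Vision

// Detects face bounding boxes in a UIImage using Apple's Vision framework.
// Note: this only finds *where* faces are, not *whose* they are.
func detectFaces(in image: UIImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion(request.results as? [VNFaceObservation] ?? [])
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // In practice, perform Vision requests off the main thread.
    try? handler.perform([request])
}
```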

There are many services out there that offer facial recognition without too much hassle, including Amazon Rekognition, OpenCV and Microsoft Azure. My favorite so far is Microsoft's alternative: it's very easy to integrate and use, it's very flexible and it has outstanding performance. The Microsoft Face API has an iOS SDK on GitHub, although this tutorial will only focus on the REST API.

Before diving into the code

In order to use Microsoft Azure Face API, we need to obtain a Face API subscription key.

  1. Go to the Cognitive Services signup page.
  2. Select “Get API Key” for the Face service.
  3. Sign up for a “Free Azure account” (not the “Guest” option, as that one has very low limits). It’s really free, and you’ll get $200 in credits (which is a LOT).
  4. Follow all the steps to create a new account (or sign in with your existing Microsoft account).
  5. Go to the Azure portal.
  6. Search for “Cognitive Services” in the search bar at the top.
  7. Select “Create Cognitive Services”.
  8. Search for “Face” in the filtering bar that shows up.
  9. Select “Face” and hit “Create”.
  10. Set a name, location and pricing tier, and select “Create new” resource group.
  11. Go to All resources > FaceService > Keys and copy and save “KEY 1” (we’ll use it later).
  12. Go to All resources > FaceService > Pricing tier and make sure the “Standard” tier is selected.

Time to start CODING!!

You can download the starter project from here.

After you run it, you should see a collection view with two sections: the avatars and the photos.

As you can see, selecting the avatars has no effect at the moment.

To solve that, let’s start by importing the API consumption class. (Make sure to go through the code and the comments for a better understanding of how the Face API works.)
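That class isn’t embedded in this extract, so here is a minimal sketch of what such an API consumption class might look like. The FaceAPI name, the detectFace(in:completion:) and findSimilarFaces(to:among:completion:) helpers, and the westus region are my own assumptions, not necessarily what the starter project uses; the two endpoints themselves (Face - Detect and Face - Find Similar) are the real REST API:

```swift
import Foundation

class FaceAPI {

    // Replace with the "KEY 1" value from the Azure portal. The base URL's
    // region ("westus" here, as an assumption) must match the location you
    // picked when creating the Face resource.
    static let APIKey = "YOUR_API_KEY"
    static let baseURL = "https://westus.api.cognitive.microsoft.com/face/v1.0"

    /// Calls the Face - Detect endpoint with raw image data and returns the
    /// face Id of the first detected face (our images contain one face each).
    static func detectFace(in imageData: Data, completion: @escaping (String?) -> Void) {
        var request = URLRequest(url: URL(string: "\(baseURL)/detect")!)
        request.httpMethod = "POST"
        request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
        request.setValue(APIKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
        request.httpBody = imageData

        URLSession.shared.dataTask(with: request) { data, _, _ in
            guard let data = data,
                let json = try? JSONSerialization.jsonObject(with: data),
                let faces = json as? [[String: Any]] else {
                    completion(nil)
                    return
            }
            completion(faces.first?["faceId"] as? String)
        }.resume()
    }

    /// Calls the Face - Find Similar endpoint, comparing one face Id against
    /// a list of candidate face Ids, and returns the Ids that matched.
    static func findSimilarFaces(to faceId: String, among faceIds: [String], completion: @escaping ([String]) -> Void) {
        var request = URLRequest(url: URL(string: "\(baseURL)/findsimilars")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.setValue(APIKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
        let body: [String: Any] = ["faceId": faceId, "faceIds": faceIds, "mode": "matchPerson"]
        request.httpBody = try? JSONSerialization.data(withJSONObject: body)

        URLSession.shared.dataTask(with: request) { data, _, _ in
            guard let data = data,
                let json = try? JSONSerialization.jsonObject(with: data),
                let matches = json as? [[String: Any]] else {
                    completion([])
                    return
            }
            completion(matches.compactMap { $0["faceId"] as? String })
        }.resume()
    }
}
```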

Don’t forget to replace the APIKey constant value with the actual API key you obtained during the account setup steps above.

Now that everything is in place, it’s time to start making API requests. As you might have noticed, we have two warnings in ContentManager.

Those warnings are because avatarData and photoData are not being used; let’s take care of that.
We’ll use those data values to request face Ids, which will then be stored in the person and photo objects respectively.

Replace those lines above with these:
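The original snippet isn’t embedded in this extract; under the assumptions above (a FaceAPI.detectFace helper and faceId properties on the person and photo models), the replacement would look roughly like this:

```swift
// Request a face Id for the avatar image and store it on the person.
FaceAPI.detectFace(in: avatarData) { faceId in
    person.faceId = faceId
}

// Request a face Id for the photo image and store it on the photo.
FaceAPI.detectFace(in: photoData) { faceId in
    photo.faceId = faceId
}
```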

Run the app again.

If it takes a while to load (~10s), don’t panic; that’s because we’re making a Face-Detect API call for each of those images. In a production app you should only do this once and store the results either locally or on a cloud-based service, and most importantly, never block the UI. Also, keep in mind that the Face-Detect endpoint we are using only stores face Ids for 24 hours; if you want a more permanent solution, see FaceList.

Before dealing with the actual avatar selection logic, we’re going to need more help from the ContentManager class.

First, we need a way to obtain all the face Ids from the photos section so that we can compare them against the face Id of the selected avatar.

Add this lazy variable to the ContentManager class.
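The variable itself isn’t embedded here; a sketch of something along these lines, where the photos collection and faceId property names are assumptions based on the description above:

```swift
// All the face Ids gathered from the photos section. These are the
// candidates that the selected avatar's face Id is compared against.
lazy var allPhotoFaceIds: [String] = {
    return self.photos.compactMap { $0.faceId }
}()
```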

And lastly, after the face recognition service returns all the face Id matches, we need the ContentManager class to provide us with just the photos for those face Ids.

Add this function to the ContentManager class.
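A sketch of that function, again assuming a Photo model with an optional faceId property:

```swift
// Returns only the photos whose face Ids appear in the matched set
// returned by the Face - Find Similar call.
func photos(matching faceIds: [String]) -> [Photo] {
    return photos.filter { photo in
        guard let faceId = photo.faceId else { return false }
        return faceIds.contains(faceId)
    }
}
```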

Too tired of copying and pasting code? Good news … there’s only one more block of code left: the avatar selection logic, where we’ll make use of everything we’ve added.

Go to your ViewController class and replace the current collectionView(_:didSelectItemAt:) function with this new implementation.
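The replacement isn’t embedded in this extract either; wiring together the pieces sketched above, the selection handler would look roughly like this (contentManager, people and filteredPhotos are illustrative names, not necessarily the project’s):

```swift
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
    // Only react to taps in the avatars section.
    guard indexPath.section == 0 else { return }

    let person = contentManager.people[indexPath.item]
    guard let selectedFaceId = person.faceId else { return }

    // Ask the Face API which photo face Ids match the selected avatar,
    // then reload the collection view with just the matching photos.
    FaceAPI.findSimilarFaces(to: selectedFaceId, among: contentManager.allPhotoFaceIds) { [weak self] matchedIds in
        guard let self = self else { return }
        self.filteredPhotos = self.contentManager.photos(matching: matchedIds)
        DispatchQueue.main.async {
            collectionView.reloadData()
        }
    }
}
```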

Finally, we have all we need to search/filter photos based on a selected person.

Run the app one last time and go ahead and select your favorite Avenger; the collection view will reload its content and display only photos of that person.

Did it work? Awesome!
Now you can make use of this amazing service on your next facial recognition app 😉.

The complete project is available on GitHub.

That’s all for now, thanks for reading!

Written by Alejandro Cotilla

Sr. Software Engineer @ floatleft.tv. Love tinkering with new technologies and building enjoyable user experiences.
