Face Recognition Model using Azure Custom Vision Service
A facial recognition model built with Microsoft Azure Custom Vision Service. The model detects particular faces and then performs designated tasks according to our business logic.
In this article, I will explain how I have created the Face Recognition model using Azure Custom Vision Service.
A big shoutout to Microsoft for such great documentation and GitHub repositories for learning. I took references from Microsoft's AI-900 repository.
Let’s jump in!
Step 1
Capture photos of the team members. These photos will be used to train the model. This is supervised learning: while uploading the images, we provide each one with a label.
The code can be found at this link: Capture Photos.
Step 2
Now the major process begins: creating the model on Microsoft Azure.
Computer Vision is a technique that gives a webcam the capability to “understand” what it sees.
Similarly, Computer Vision in Azure is a service that lets you play around with pre-created models.
To build your own, Azure offers another service, Custom Vision, which makes it easy to create a model for your business logic.
Carefully follow the screenshots and you will be ready with your own model!
This article assumes that you already have an Azure account, a subscription, and 1 resource group ready.
- An Azure account can be created from this link.
- A subscription is the billing account through which you are charged for the resources you use. You can easily start with a free account that gives you access to a lot of services.
- A resource group is a logical container that holds all the resources you create in Azure.
Search for Custom Vision in the Azure Marketplace.
Click on Create.
Create Option — Both (for training & prediction)
Select appropriate Subscription and Resource Group.
Select the same Location & Pricing Tier for both the resources being created.
After 1–2 minutes the deployment of both resources should be complete.
Click on “Go to resource”.
Then click on Custom Vision Portal.
Sign in to the portal with the same Microsoft account you use for the Azure portal.
Click on create a New Project.
Then it will give these options.
Give a name and description that works for you.
Select the resource from the drop-down (you created already).
Project Type: Classification
Classification Type: Multiclass (single tag per image)
Domains: If your business logic matches one of the domains Azure supports, select it; otherwise go with a General model.
Click on Create.
This new window will open, which is your empty new project.
Now it asks you to add images, so add the team members' images one by one.
Provide the correct label and click on Upload.
Click Done after image upload.
Do the same for all the team members; the label is important here.
Then choose your desired training type and click on Train.
The training will take a few minutes, depending on the number of images you have provided.
Intermediate step:
Check the accuracy of your model by clicking on Quick Test and providing test images.
You can see that after a few rounds of trial and error, my model is accurate enough to be deployed.
I performed a quick test using images of all my teammates and the results are as follows:
You can see the varying results.
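The same quick test can be run from code against the prediction REST endpoint, with no SDK at all. A standard-library sketch, assuming the Custom Vision v3.0 URL shape; all argument values (endpoint, key, project ID, model name) are placeholders you collect in the next steps:

```python
import json
import urllib.request

def classify_url(endpoint, project_id, model_name):
    """Build the Custom Vision v3.0 image-classification URL."""
    return (f"{endpoint.rstrip('/')}/customvision/v3.0/Prediction/"
            f"{project_id}/classify/iterations/{model_name}/image")

def quick_test(endpoint, prediction_key, project_id, model_name, image_path):
    """POST raw image bytes and return the parsed prediction JSON."""
    with open(image_path, "rb") as f:
        request = urllib.request.Request(
            classify_url(endpoint, project_id, model_name),
            data=f.read(),
            headers={"Prediction-Key": prediction_key,
                     "Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```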
After successful testing, when you are happy with the model, publish it.
The model name you give here is very important, so remember it!
Keep a Note of the Model Name.
Click on Publish.
Click on the Gear icon ⚙ on the same page.
These are the details of the model.
Note the Project ID (left side) and Endpoint (right side).
Now click on the eye icon 👁 on the top left corner and now click the Gear icon ⚙ on this page. This will give the details of the predictor. From here we need the KEY.
Note the value of the key.
Required Details — Checklist
- Model name when you published it.
- Project ID and Endpoint from the Model Settings
- Key from the Portal settings
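This checklist maps naturally onto a `.env` file that the application reads at startup (which is what the dotenv install in Step 3 is for). The variable names below are my own choice, not from the article, and the tiny standard-library loader shown here avoids any extra dependency:

```python
import os

# Example .env contents (hypothetical variable names):
#   PredictionEndpoint=https://<region>.api.cognitive.microsoft.com/
#   PredictionKey=<key-from-portal-settings>
#   ProjectID=<project-id-from-model-settings>
#   ModelName=<name-you-published-under>

def load_env(path=".env"):
    """Minimal KEY=value loader; ignores blank lines and # comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Hypothetical usage:
# load_env()
# endpoint = os.environ["PredictionEndpoint"]
```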
Step 3
Now, finally, use this model in the application. Install the required packages first:
pip install azure-cognitiveservices-vision-customvision
pip install python-dotenv
pip install
Our application performs certain tasks based on who is in front of the camera.
The tasks to be done were:
- Send mail
- Send WhatsApp message
- Create an AWS instance and an EBS volume, and attach the volume to the instance
- Open LinkedIn profile
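The branching on who is in front of the camera boils down to picking the highest-probability tag from the prediction response and looking up a task for it. A sketch of that dispatch, with a confidence threshold so unknown faces trigger nothing; the threshold value and the task wiring are my assumptions, not the article's code:

```python
def pick_person(predictions, threshold=0.8):
    """Return the most probable tag name, or None below the threshold.
    `predictions` is a list of (tag_name, probability) pairs, e.g. the
    tag_name/probability fields of each Custom Vision prediction."""
    tag, probability = max(predictions, key=lambda p: p[1])
    return tag if probability >= threshold else None

def run_task_for(person, tasks):
    """Look up and run the task registered for the recognised person."""
    task = tasks.get(person)
    if task:
        task()

# Hypothetical wiring: one callable per teammate.
# tasks = {"alice": send_mail, "bob": create_aws_instance}
# run_task_for(pick_person(results), tasks)
```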
You can see in the videos below how it is working.
The complete code for the application and for capturing photos is available in this GitHub repo.