Find Missing People Using Facial Recognition

Let us save lives with Machine Learning. A practical Machine Learning Project.

Gagandeep Singh
DataSeries
6 min readNov 14, 2019


A deeply disturbing fact about India’s missing children is that while, on average, 174 children go missing every day, half of them remain untraceable. According to the National Crime Records Bureau (NCRB) report cited by the Ministry of Home Affairs (MHA) in Parliament (LS Q no. 3928, 20–03–2018), more than one lakh children (1,11,569 in actual numbers) were reported to have gone missing till 2016, and 55,625 of them remained untraceable till the end of the year.

The statistics are, however, indicative of the absence of a national missing children’s repository. “There are no budgets earmarked for tracking missing people,” said an official source.

Photo by Rux Centea on Unsplash

During my final year of under-graduation, I came across this article. It sent a chill down my spine, so I started looking into the initiatives that NGOs and the government are running to prevent such tragedies. Here is one that I came across.

TrackChild Portal, an initiative by the Indian Government

When I went through the website I immediately understood the issue. The process for submitting pictures of a child you find suspicious in your area is tricky and not anonymous. The people who employ these children are powerful, and nobody wants to mess with them, which is why users prefer anonymous submission. Also, the initiative wasn’t leveraging the power of machine learning. Since this is happening on a large scale, there should be an automated way to match submitted pictures against registered cases.

So, what did I do?

I wanted to make this whole process very simple. So, here is what I did:

  1. Designed a GUI application using Python that can be used by the police to register a new case. In the backend, the submitted image of the missing person is processed, the facial key points are captured, and they are stored in the database along with other information like name, father’s name, age, city, etc.
  2. Designed an Android application that can be used by common people (people like us). We can use this application to submit a picture of a person whom we might find suspicious. We can submit the picture anonymously or with a little information. The picture is stored in the database along with the location.
  3. The GUI application uses a machine learning algorithm to match the pictures uploaded by the police against the pictures submitted by users. If a match is found, we can show it along with the location where the missing person was last seen.

Interesting, right?

The possibility of tracking a missing person isn’t just limited to an Android application. We could build a central system where, as soon as a new picture is received, a machine learning model is trained on it, deployed instantly, and served as a REST API. Security cameras at places like airports and railway stations could hit this API to check, in real time, whether the person on CCTV is a missing person. All of that is complicated, though, and difficult to scale.

Let’s understand the implementation —

System architecture for the project

For simplicity, we are using the Firebase database as it is very easy to set up.

Let’s begin with the GUI application.

Required Packages

$ pip install dlib opencv-python scikit-learn scipy numpy pyqt5 firebase-admin

I had difficulty installing dlib in a normal Python environment. If you face a similar issue, I would suggest using a conda environment.

What is happening here?

We are finding the facial key points of a person. dlib’s facial landmark generator produces 68 unique points for a face, using a one-shot learning method. These points are float values with a precision of about 8 digits after the decimal. It looks something like this —
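To make this concrete, here is a minimal sketch of how such landmark points can be extracted with dlib and OpenCV. The helper name and model path are my own choices, not from the original project; the 68-point predictor file has to be downloaded separately from dlib’s model zoo.

import cv2
import dlib
import numpy as np

# Assumed setup: download shape_predictor_68_face_landmarks.dat from dlib's model zoo.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(image_path):
    """Return a flat array of 136 values (x, y for each of the 68 points)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = detector(gray, 1)            # upsample once to catch small faces
    if not faces:
        return None                      # no face found in the picture
    shape = predictor(gray, faces[0])    # landmarks of the first detected face
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    return np.array(points, dtype=np.float64).flatten()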

The final GUI application will look something like this —

GUI application
  • Upload new data — used whenever a new case needs to be registered. It opens a new GUI that takes the user’s data.
  • Update Classifier — downloads the facial landmark data from Firebase and trains a KNN classifier on those points.
  • Match — downloads all the points submitted by users and predicts labels using the KNN classifier we trained. If the confidence goes above 60%, we call it a match.
  • Confirmed requests — all the complaints that have been processed are stored here.

Algorithm for Face Matching

Imagine the desktop application has registered 3 cases, which means dlib will generate 136 × 3 facial landmark values: 136 because an x and a y coordinate are generated for each of the 68 points, and 3 is the number of cases. We’ll then train a KNN classifier on these points.
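As a sketch, assuming the facial_landmarks helper from above and placeholder image paths for the three cases, the training matrix could be assembled like this:

import numpy as np

# Stack one 136-value landmark vector per registered case: shape (3, 136).
# The file names are placeholders, not the project's actual data.
case_images = ["case_1.jpg", "case_2.jpg", "case_3.jpg"]
x = np.vstack([facial_landmarks(p) for p in case_images])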

Assume that KNN distributes the facial landmark points as in the plot below, where RED corresponds to Person 1, GREEN to Person 2 and BLUE to Person 3.

KNN classifier

Now, to match any person’s face we simply generate that person’s facial landmarks and predict the confidence using the KNN classifier we trained above. If the confidence is above 60%, we say it is the same person.

Let’s see this in code:

from sklearn import neighbors

# n_neighbors is set to the number of registered cases (3 here).
clf = neighbors.KNeighborsClassifier(
    n_neighbors=3,
    algorithm='ball_tree',
    weights='distance'
)
# x: 3 x 136 landmark matrix, uniqueKeys: one encoded label per case
clf.fit(x, uniqueKeys)
  1. n_neighbors — the number of cases that have been registered. We have registered 3 cases.
  2. x — a matrix with 136 columns (the x, y coordinates of the 68 facial landmark points) and 3 rows (the number of cases that have been registered).
  3. uniqueKeys — I’ve used LabelEncoder to encode a person’s information, like name and phone number, into the label for the corresponding row.

Now, if a person matches using the KNN model we trained, the output will be the encoded label, and this label contains their information.
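A minimal sketch of that matching step could look like the following; the match_person helper and its threshold argument are my own naming, the idea is simply predict_proba plus the 60% cut-off mentioned above:

import numpy as np

def match_person(clf, landmarks, threshold=0.6):
    """Return the encoded label if the KNN confidence clears the threshold."""
    probs = clf.predict_proba([landmarks])[0]   # one probability per registered case
    best = int(np.argmax(probs))
    if probs[best] >= threshold:                # the 60% confidence cut-off
        return clf.classes_[best]               # encoded label of the matched case
    return None                                 # no confident match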

Let’s see how it will be stored in the database

Database Schema

StationID_x will be different for every police station, so that a complaint can be traced back to the place where it was registered.

The reason I’ve chosen Firebase is that it is easy to integrate into an Android app.

This is how it was for me

encoded is the key that stores 68 facial key points generated by dlib.

Gagan@123@xyz@22 can be decoded as

  1. Name- Gagan
  2. Phone Number- 123
  3. Father’s Name- XYZ
  4. Age- 22

I didn’t want to create any extra fields so I stored it this way.
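A small sketch of packing and unpacking such a string could look like this (the helper names are mine, not from the project):

def encode_details(name, phone, father_name, age):
    # Pack the case details into one string so no extra DB fields are needed
    return "@".join([name, str(phone), father_name, str(age)])

def decode_details(label):
    name, phone, father_name, age = label.split("@")
    return {"name": name, "phone": phone, "father_name": father_name, "age": int(age)}

decode_details("Gagan@123@xyz@22")
# {'name': 'Gagan', 'phone': '123', 'father_name': 'xyz', 'age': 22}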

You can design your own database structure as per your convenience.
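To illustrate, here is a rough sketch of pushing a case to the Firebase Realtime Database with firebase_admin; the credential file, database URL and node names are placeholders, not the project’s actual structure:

import firebase_admin
from firebase_admin import credentials, db

# Placeholder credentials and URL - replace with your own Firebase project.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project.firebaseio.com/"
})

def register_case(station_id, label, landmarks):
    """Store one case under its police station node."""
    ref = db.reference(f"missing/{station_id}")
    ref.push({
        "label": label,                            # e.g. "Gagan@123@xyz@22"
        "encoded": [float(v) for v in landmarks],  # the 136 landmark values
    })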

Let’s talk about the Android application that is going to be used.

An Android app will be required that will be available to the public. The app can be very simple. You can even collaborate with one of your friends who works on Android to create it.

The Android app will capture the photo and encode it in base64 form (to avoid the workload of using Firebase storage).
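On the desktop side, a submission like that can be turned back into an image for dlib. A minimal sketch, with a function name of my own choosing:

import base64
import cv2
import numpy as np

def decode_submission(b64_string):
    """Convert the base64 string pushed by the Android app back into an image."""
    raw = base64.b64decode(b64_string)
    arr = np.frombuffer(raw, dtype=np.uint8)
    return cv2.imdecode(arr, cv2.IMREAD_COLOR)   # BGR image, ready for OpenCV/dlib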

The whole project can be seen here.

Conclusion

Image recognition with the use of one-shot learning has become very powerful. When put to good use, this technology can be extremely helpful. It can even be used in hotels, hospitals, etc. to find criminals in an instant. You can improve this project to a great extent by building APIs using Flask. The API would return the 68 facial key points. You could also build a complete web app version of it that uses TensorFlow.js for training.
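As an illustration of that idea, here is a rough Flask sketch that accepts a base64-encoded image and returns the 68 landmark points; the route, request format and model path are assumptions, not part of the original project:

import base64
import cv2
import dlib
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

@app.route("/landmarks", methods=["POST"])
def landmarks():
    # Expects a JSON body like {"image_b64": "..."} (hypothetical contract).
    raw = base64.b64decode(request.json["image_b64"])
    img = cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_GRAYSCALE)
    faces = detector(img, 1)
    if not faces:
        return jsonify({"error": "no face detected"}), 404
    shape = predictor(img, faces[0])
    points = [[shape.part(i).x, shape.part(i).y] for i in range(68)]
    return jsonify({"landmarks": points})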

I hope you got a good idea about how you can use simple machine learning techniques to create a world-class project.

References

  1. https://trackthemissingchild.gov.in/
  2. https://www.indiatoday.in/mail-today/story/delhi-s-shame-19-children-go-missing-daily-1550210-2019-06-17
