Call me, maybe: a new algorithm detects call activity using smartphone sensors

NYU graduate students work with CDS-affiliated professor Suzanne McIntosh to harness smartphone sensor data for call activity monitoring

To make an overseas phone call 50 years ago, you either had to buy a phone card or cough up wads of cash to foot an expensive phone bill.

But today, all you have to do is use one of the many internet-based phone apps like Skype, WhatsApp, or WeChat. Not only are these apps free, but they also have strong privacy protocols in place. Unfortunately, this is precisely why criminals and other ill-intentioned individuals have begun exploiting such apps to communicate with each other. When suspicious or forbidden calls are placed through these apps, security officials cannot effectively identify who is calling whom, or when such calls are taking place.

But the new real-time phone call detection algorithm invented by graduate students Huiyu Sun (lead author) and Bin Li, together with Suzanne McIntosh (professor of Computer Science and CDS-affiliated faculty member), is poised to become a powerful tool for assisting the security sector, and more.

Their system can detect when someone is making a call using an internet-based phone app with 91.25% accuracy, and it works by analyzing the data collected by built-in proximity and orientation sensors in our smartphones.

“The proximity sensor,” the researchers explain, “measures the closeness between an object and the phone’s screen in centimeters.” The orientation sensor measures “the phone’s rotation angles in degrees around the x-axis, y-axis, and z-axis.” The orientation sensor can be used to detect whether the caller is sitting/standing, walking, or lying down. A proximity or orientation sensor alone cannot detect a phone call taking place, but the researchers found that combining the sensors leads to successful call detection.

After collecting the range of orientation and proximity measurements both when a call is taking place and when it is not, the research team's algorithm can perform call detection by tracking subtle changes in the prevailing call state classification (e.g., whether the person is sitting/standing, lying down, or walking) and fusing that with sensed proximity.
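The fusion idea above can be sketched in a few lines of Python. The thresholds, function name, and feature choices below are illustrative assumptions, not values from the researchers' paper; the point is simply that neither sensor alone is decisive, but together they are.

```python
def detect_call(proximity_cm, pitch_deg, roll_deg):
    """Fuse proximity and orientation readings into a call / no-call decision.

    proximity_cm: distance between an object and the phone's screen
    pitch_deg, roll_deg: rotation angles around the x- and y-axes

    Thresholds here are made up for illustration.
    """
    # A phone held to the ear sits very close to the face...
    near_face = proximity_cm < 3.0
    # ...and is tilted well away from horizontal, unlike a phone flat on a desk.
    tilted = abs(pitch_deg) > 30 or abs(roll_deg) > 30
    # Proximity alone would also fire in a pocket; orientation alone would
    # fire while reading. Requiring both is the (toy) fusion step.
    return near_face and tilted

print(detect_call(1.0, 60, 10))   # phone at ear, upright -> True
print(detect_call(20.0, 5, 0))    # phone flat on a desk -> False
```

In the real system, the decision is made by trained classifiers over streams of these readings rather than by fixed thresholds, but the intuition is the same.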

Their system was trained using three classifiers — Naïve Bayes, SVM, and Logistic Regression — although the researchers point out that “there are many more classifiers that could be used such as decision tree, random tree, and neural networks. It is possible that other classifiers could produce better performance.”
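A minimal sketch of comparing those three classifiers, using scikit-learn on synthetic sensor features, might look like the following. The dataset, feature ranges, and splits are fabricated for illustration; the team's actual data and feature pipeline are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Synthetic features: [proximity_cm, pitch_deg, roll_deg].
# "Call" samples: phone close to the face and tilted upright.
calls = np.column_stack([rng.uniform(0, 3, n),
                         rng.uniform(30, 90, n),
                         rng.uniform(-45, 45, n)])
# "No call" samples: phone far from the face and roughly flat.
idle = np.column_stack([rng.uniform(5, 30, n),
                        rng.uniform(-10, 10, n),
                        rng.uniform(-10, 10, n)])
X = np.vstack([calls, idle])
y = np.array([1] * n + [0] * n)  # 1 = call in progress
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("Logistic Regression", LogisticRegression())]:
    results[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {results[name]:.2f}")
```

Because this toy data is cleanly separable, all three classifiers score near perfectly; on real sensor streams, accuracies differ, which is why the researchers compared several and note that decision trees, random trees, or neural networks might do better still.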

Potential applications for their system, the researchers added, include “assisting human activity recognition systems, monitoring health conditions, and many more areas.”

By Cherrie Kwok
