Steven Wessels
Jul 31 · 8 min read

The modern smartphone comes equipped with a plethora of sensors that help provide an immersive user experience, whether it be a gyroscope simulating a steering wheel in a racing game or a GPS sensor driving a navigation app. Beyond enriching UX, smartphone sensors can be used as behavioural biometric tools, providing an alternative means of authentication to traditional physiological biometrics such as fingerprint and facial recognition. One biometric that can be revealed by leveraging a smart device’s accelerometer data is the way a user walks, which is also known as gait.

Gait is the coordinated and cyclic combination of movements that result in human locomotion. The uniqueness of a person’s gait lies in the combination of its cyclicity and coordination. An advantage of using gait as a biometric is that it can be measured unobtrusively and without a person having to alter their natural behaviour. Using gait as an authentication mechanism is far less established in comparison to fingerprint or facial biometrics. Additionally, circumvention of the gait biometric is challenging because imitating a person’s gait is very difficult. Regarding permanence and reliability, gait is not the best biometric as a person’s gait may be affected by a variety of factors:

  • Biological factors: Weight, limb length, posture.
  • External factors: Footwear, load bearing, ground surface characteristics.
  • Transient factors: Emotional state, physical state.

Data Acquisition

To investigate the validity of using the gait biometric to identify and authenticate smartphone users, data must first be collected. A data set was created across multiple days in various places to simulate real-world conditions. The walking environments in which data collection took place included a flat corridor, a tiled corridor with stairs, and a carpeted room. The data set consisted of 20 subjects aged between 18 and 69 years. The device used to collect data was a Nokia 3, which runs the Android operating system (OS). Android-based Nokia phones are equipped with a Bosch BMI160, a small, low-power inertial measurement unit that houses a gyroscope and an accelerometer. Accelerometers are electromechanical devices that measure non-gravitational linear acceleration. The smartphone was placed in the subject’s pocket during data collection, and the subjects were asked to walk naturally around their environment. Each data collection period was timed so that a sample of approximately 30 seconds was collected, and two samples were acquired from each subject.

An example of the data generated by the sensor data collection app.
Graphical representation of an example of raw accelerometer data output

Data Preparation

Magnitude Calculation

The triaxial signals recorded from the device are highly sensitive to the orientation and positioning of its sensors. The orientation of the sensors may change during data collection. To eliminate the orientation sensitivity, the magnitude of the triaxial signal is computed using the following formula:
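The magnitude m of each sample is m = √(x² + y² + z²). A minimal NumPy sketch of the computation:

```python
import numpy as np

def magnitude(x, y, z):
    """Orientation-invariant magnitude of a triaxial accelerometer signal."""
    x, y, z = np.asarray(x), np.asarray(y), np.asarray(z)
    return np.sqrt(x**2 + y**2 + z**2)

# A sample with components (3, 4, 0) has magnitude 5.
print(magnitude([3.0], [4.0], [0.0]))  # [5.]
```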

Graphical representation of the signal’s calculated magnitude

Cubic Spline Interpolation

Because the sensor data collection app runs on top of the Android OS, data collection does not occur at a fixed sample rate. The irregular sample rate occurs because the OS must give other processes time to run on the phone’s CPU. The signal data must therefore be resampled and interpolated so that a fixed sample rate can be used. It is important to set a high enough resampling frequency so that the dynamics of a gait cycle are captured with sufficient detail to promote accurate feature extraction. The magnitude of the signal data was interpolated using a cubic spline and resampled at 50 Hz (50 samples per second).
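The resampling step can be sketched with SciPy as follows (the timestamps and values below are illustrative stand-ins for the real sensor output):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_signal(timestamps, values, rate_hz=50):
    """Fit a cubic spline to irregularly sampled data and re-evaluate it
    on a fixed 50 Hz grid, as described in the text."""
    spline = CubicSpline(timestamps, values)
    t_fixed = np.arange(timestamps[0], timestamps[-1], 1.0 / rate_hz)
    return t_fixed, spline(t_fixed)

# Irregularly spaced timestamps, as produced by the Android sensor API.
t = np.array([0.0, 0.018, 0.041, 0.059, 0.082, 0.10])
m = np.sin(2 * np.pi * t)  # stand-in for the magnitude signal
t_fixed, m_fixed = resample_signal(t, m)
```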

Cubic spline applied to the magnitude of raw triaxial signals

Butterworth Lowpass Filter

A common problem when collecting inertial sensor signal data is that the sensors are sensitive and collect “noisy” data. Noisy data manifests itself in the form of abnormally high and sharp peaks. To remove this noise, a Butterworth low-pass filter was applied with a cutoff frequency of 3 Hz.
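A sketch of the filtering step with SciPy (the filter order of 4 is an assumption; the text states only the 3 Hz cutoff):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, cutoff_hz=3.0, sample_rate_hz=50.0, order=4):
    """Zero-phase Butterworth low-pass filter; the 3 Hz cutoff removes
    high-frequency sensor noise while keeping the gait dynamics."""
    nyquist = sample_rate_hz / 2.0
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    # filtfilt runs the filter forwards and backwards to avoid phase lag.
    return filtfilt(b, a, signal)
```

Using `filtfilt` rather than a single forward pass keeps the filtered peaks aligned with the original signal, which matters for the peak-based segmentation in the next step.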

The Butterworth lowpass filter applied to previously splined data

Optimal Threshold Calculation and Gait Cycle Segmentation

To extract features for every gait cycle, the signal must be segmented into individual cycles. The first step is to locate either the minimum or maximum peaks; this model uses the minimum peaks. Most peak detection algorithms require a parameter that specifies the minimum distance between peaks and a threshold value below which each peak should fall (when minima are being detected). The threshold cannot be set to a static value, as gait features such as average peak height vary considerably between subjects. The algorithm used to calculate this threshold finds the smallest value that minimises the standard deviation between the detected peaks, and is presented in the code snippet below.
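The original snippet is not reproduced in this text, so the following is a reconstruction of the approach using SciPy’s `find_peaks`: candidate thresholds are scanned, and the smallest one whose detected minima have the lowest standard deviation is kept.

```python
import numpy as np
from scipy.signal import find_peaks

def optimal_threshold(signal, min_distance=25):
    """Scan candidate thresholds and keep the smallest one whose detected
    minima have the lowest standard deviation (a reconstruction of the
    algorithm described in the text)."""
    best_threshold, best_std = None, np.inf
    for threshold in np.linspace(signal.min(), signal.mean(), 50):
        # Minima of the signal are maxima of the negated signal; the
        # height condition keeps only minima at or below the threshold.
        peaks, _ = find_peaks(-signal, height=-threshold, distance=min_distance)
        if len(peaks) < 2:
            continue
        spread = np.std(signal[peaks])
        if spread < best_std:
            best_std, best_threshold = spread, threshold
    return best_threshold

def segment_cycles(signal, threshold, min_distance=25):
    """Split the signal into gait cycles at consecutive detected minima."""
    peaks, _ = find_peaks(-signal, height=-threshold, distance=min_distance)
    return [signal[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
```

At 50 Hz, `min_distance=25` corresponds to half a second between minima, a plausible lower bound on cycle spacing.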

Detected peaks

Gait Cycle Average Trace

After segmentation, the traces were calculated and stored in a list. Traces whose duration was too long or too short to represent a realistic gait cycle were discarded to further clean the data set. Features were then extracted from each remaining trace individually.
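The duration filter can be sketched as follows (the 0.5 s and 1.5 s bounds are illustrative assumptions; the text does not state the exact limits used):

```python
def clean_traces(traces, sample_rate_hz=50, min_s=0.5, max_s=1.5):
    """Discard segments too short or too long to be a plausible gait cycle.
    The duration bounds here are assumptions, not the article's values."""
    return [t for t in traces if min_s <= len(t) / sample_rate_hz <= max_s]

# A 10-sample (0.2 s) and a 200-sample (4 s) segment are discarded;
# a 50-sample (1 s) segment is kept.
traces = [list(range(10)), list(range(50)), list(range(200))]
kept = clean_traces(traces)
```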

The overlapped gait cycles traces of segmented data

Feature Extraction

Feature extraction provides a set of values with which classification tasks can be performed. Extracting a set of features that holistically describes a gait cycle is indispensable for recognising subjects efficiently and accurately. The list of features chosen to represent a gait cycle is shown in the following table:

Features extracted per gait cycle trace for user authentication
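As a sketch, the statistical descriptors below are commonly extracted from gait-cycle traces; the exact feature set in the table may differ:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def extract_features(trace):
    """Statistical descriptors often used for gait cycles. This is an
    illustrative set, not necessarily the article's exact feature table."""
    trace = np.asarray(trace, dtype=float)
    return {
        "mean": trace.mean(),
        "std": trace.std(),
        "min": trace.min(),
        "max": trace.max(),
        "rms": np.sqrt(np.mean(trace**2)),   # root mean square amplitude
        "skewness": skew(trace),
        "kurtosis": kurtosis(trace),
        "duration_samples": len(trace),
    }
```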


Classification

Support Vector Machines (SVMs) are supervised, binary machine learning models that can be used for classification. Initially, a feature space is formed: an n-dimensional vector space created from the n specified features of the data to be classified. An SVM performs its classification task by constructing a hyperplane, or a set of hyperplanes, that separates the classes by as much distance as possible. The generalisation error of the classifier is reduced when the margin between the hyperplane and the nearest training data point is as large as possible. SVMs utilise support vectors, the data points that lie closest to the hyperplane, to locate the optimal decision surface and specify the decision function. SVM classifiers generally perform well in high-dimensional spaces and tend to yield good results on smaller data sets, making them an ideal classifier for the model’s data set. The model’s SVM classifier used a radial basis function kernel and a C value of 1. SVMs are not scale invariant, so all data constituting the feature vectors were scaled to obtain meaningful results.
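A sketch of such a classifier with scikit-learn, combining the scaling and the RBF-kernel SVM with C=1 in one pipeline (the training data below is a toy stand-in, not the article’s feature vectors):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# StandardScaler handles the scaling SVMs require; SVC uses the RBF
# kernel and C=1 described in the text.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Toy stand-in for per-cycle feature vectors from two subjects.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(3, 1, (40, 5))])
y = np.array([0] * 40 + [1] * 40)
clf.fit(X, y)
```

Putting the scaler inside the pipeline ensures the same scaling parameters learned on the training data are applied at prediction time.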

Model Evaluation

Authentication can be characterised as a binary classification problem, and the typical way of visualising the performance of an authentication system is by plotting a receiver operating characteristic (ROC) curve for the classifier. A ROC curve is a graphical representation of the output of a binary classifier as its threshold value changes. Ideally, the ROC curve will pass through the point (0, 1) at the top left corner of the plot, as this is the point where the true positive rate is completely maximised and the false positive rate is completely minimised. The point closest to (0, 1) defines the system’s equal error rate (EER), an important metric for evaluating a binary classification system. The EER signifies the point where the false acceptance rate (FAR), the percentage of impostor data samples likely to be wrongly authenticated by the system, equals the false rejection rate (FRR), the percentage of valid data samples likely to be wrongly denied authentication. The smaller the EER, the better the performance of the system. Other metrics used to evaluate the performance of the model are accuracy, precision, and recall; the accuracy score of a system containing n samples is simply the proportion of those n samples that are correctly classified.

Given the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) we can calculate the scores for precision, recall and the f1-score:
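These scores follow directly from the four counts: accuracy = (TP + TN) / n, precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 = 2PR / (P + R). A minimal sketch:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from the four confusion counts."""
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```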

ROC curve for the accelerometer

An additional mechanism for visualising the performance of a classifier, particularly a supervised one, is a confusion matrix. A confusion matrix is a matrix C in which C_ij is the number of data points that belong to class i but have been predicted to be in class j. A confusion matrix allows mislabelled classes to be identified easily. The additional metrics presented here are the F1 score, the harmonic mean of precision and recall, and support, the number of occurrences of each class in the target values. All metrics were calculated using the one-vs-rest classification strategy, which fits one classifier per class. For validation, a 50–50 training and testing split was used. The accelerometer achieved a satisfactory accuracy of 78.49% and an EER of 7.04%. A precision score of 80.44% confirms that the model has good exactness and minimises the number of false positives. The accelerometer’s recall, however, did not fare as well with a score of 74.48%, which is also reflected in the F1 score achieved, 75.58%.
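A minimal illustration of the C_ij convention with scikit-learn (the labels here are toy values, not the article’s data):

```python
from sklearn.metrics import confusion_matrix

# C[i, j] counts samples whose true class is i but were predicted as j.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
C = confusion_matrix(y_true, y_pred)
# Off-diagonal entries (e.g. C[0, 1]) expose exactly which classes
# are being confused with which.
```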

Metrics attained by the model
Confusion matrix for the accelerometer


Conclusion

As you can see, we have created a model that provides a means to authenticate smartphone users according to their gait as measured by smartphone inertial sensors. Although this prototype implementation has shown gait-based inertial sensor authentication to be possible, the solution leaves room for many improvements. It may be possible to combine the accelerometer and gyroscope signals using a Kalman filter to create a more robust authentication system. Extracting additional features, such as stride length and joint angle, to increase the size of the feature vector may improve the classifier’s decision boundary. The accelerometer pre-processing could also be refined by segmenting each gait cycle’s trace into regions of interest and performing feature extraction at that level. Additionally, other classification algorithms such as Naive Bayes or Random Forest could be utilised to potentially improve classification accuracy.

If you have any questions, please feel free to reach out in the comments below.

DVT Software Engineering

Making an impact in Software Engineering
