Detecting the Walking Frequency — IoT Websensors

This is the fourth and last article of a series describing the implementation of IoT systems in various environments, written as part of my Internet of Things elective at Sapienza University of Rome.

All the code and the scripts can be found on GitHub, in the v4.0 branch.

Additionally, you can find a demo on YouTube.

The task for this article was quite precise: use browser web sensors to detect whether the holder of the phone is idle or moving.

In my opinion it can be divided into two main parts that are almost independent from each other:

  1. Detecting walking (and optionally distinguishing it from random movements)
  2. Connecting to the preexisting infrastructure based on AWS IoT

This time I will focus more on the sensor itself than on the architecture. The latter is quite simple: the phone communicates with a webserver which, depending on the endpoint, performs one of the following:

  1. Receive a batch of data, process it, and send the result over MQTT (Cloud Data Processing)
  2. Forward the received data over MQTT (Edge Data Processing)

The architecture is the same in both cases; the main difference is where the data processing described in the next part is actually executed: it can run either on the phone itself or on the server.

In the first case I’ll refer to the data sent by the phone as raw data, in the second as processed data.

As I’ve shown in the previous parts, everything that is sent over MQTT is automatically picked up and displayed by our dashboard.

There are two main APIs for accessing phone sensors from a browser:

  • DeviceMotionEvent: legacy interface, supported by most mobile browsers, with limited capabilities such as a fixed sample rate. Although Apple has released no official notice, it doesn’t work with the latest iOS update (13.4.1).
  • Generic Web Sensors: newer interface, with better capabilities and performance, but not supported by Apple devices.

Given that I don’t have easy access to an Android device, I decided to use the first set of APIs.
In any case, the two libraries are extremely similar: you write a handler function that is fired periodically and implements the sampling routine.

In my case I decided to also build a fancy interface using Rickshaw; I think it is an astonishing tool for creating time-series visuals in JavaScript and, in this case, it immensely helps me explain what I’m trying to accomplish:

On the left, you can see the values of the accelerometer on the 3 axes with different colors (first box), the sum of the components (second box), and the frequency spectrum (third box).

In this example I was trying to tilt the phone at around 120 bpm, i.e. 2 Hz.

You can see that the spectrum has only one peak at ~2 Hz: the more regular and precise the movement, the stronger the presence of that frequency in the final composition.

Since we know that the human walking frequency is ~1.8 Hz, with a small variance of around ~5 %, we can use this fact to filter out a large part of the noise from our samples.
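As a minimal sketch of that idea (the band limits below are my own assumed values, not taken from the article's code), a check on the dominant frequency of the spectrum could look like:

```python
# Walking-band check: the ~1.8 Hz figure comes from the text above;
# the tolerance is an assumption, chosen wider than the quoted ~5 %
# variance to absorb measurement noise.
WALK_FREQ_HZ = 1.8
TOLERANCE_HZ = 0.3

def is_walking(dominant_freq_hz):
    """Return True if the dominant frequency is compatible with walking."""
    return abs(dominant_freq_hz - WALK_FREQ_HZ) <= TOLERANCE_HZ
```

Any peak outside this band (e.g. fast hand jitter at 4–5 Hz) is then treated as noise rather than walking.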

If you want to try it yourself, you can do so at walk-sensor.netlify.app

The JavaScript code is based on a single handler for the motion event, which on most devices is triggered at a frequency of ~60 Hz. In my case it also performs some additional operations, such as updating local variables.

First, it recovers all the required data from the sensors and pushes it into the right buffers.

Then it performs the Fourier transform on the buffer to extract the frequency spectrum.

In this step we are simply treating our data_raw array as a wave: each point of the wave is denoted by x (the time) and y (the acceleration). If we take a series of points and apply the Fourier transform to them using dspjs, we get another array of (x, y) pairs with a different meaning: each x is now a frequency and each y is the intensity of the corresponding frequency component (as we saw in the picture above).
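Server-side (the Cloud Data Processing case), the same transform can be sketched with NumPy instead of dspjs; this is a hypothetical illustration, not the article's code. The key fact is the bin-to-frequency mapping: bin k of an N-sample FFT recorded at fs Hz corresponds to k·fs/N Hz.

```python
import numpy as np

FS = 60   # sampling rate in Hz (the devicemotion rate)
N = 128   # buffer length, a power of two

# Synthetic "acceleration" buffer: a 2 Hz oscillation,
# like the phone-tilting example above.
t = np.arange(N) / FS
data_raw = np.sin(2 * np.pi * 2.0 * t)

# Real-input FFT; np.fft.rfftfreq gives each bin's frequency k * FS / N.
spectrum = np.abs(np.fft.rfft(data_raw))
freqs = np.fft.rfftfreq(N, d=1 / FS)

# Skip the DC bin when looking for the dominant frequency.
peak_hz = freqs[1 + np.argmax(spectrum[1:])]
```

With these parameters the resolution is 60/128 ≈ 0.47 Hz, so the 2 Hz tone lands in the bin nearest to it (bin 4, 1.875 Hz).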

Finally, if the buffer is full, we need to push one element out, in a First-In-First-Out fashion.
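In Python (a server-side mirror of the browser logic; the names are illustrative), a bounded deque gives this FIFO window for free:

```python
from collections import deque

BUFFER_SIZE = 128  # power of two, as the FFT library requires

# A deque with maxlen is a FIFO window: appending to a full buffer
# silently drops the oldest element.
data_raw = deque(maxlen=BUFFER_SIZE)

def on_sample(acc_magnitude):
    """Store one accelerometer reading in the sliding window."""
    data_raw.append(acc_magnitude)
```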

Given that I had the possibility of visualizing different sets of parameters, I ran some experiments to choose the ones that appeared to produce the best results. In the end I used a time window of 128 samples (also because the FFT library requires a power-of-two array) and a sampling rate of 60 Hz (the maximum available frequency). This means I am analysing the spectrum of roughly the last two seconds of sampling (128/60 ≈ 2.1 s), which seems a good compromise.

The rest of the code in sample.js is just there to trigger button events, attach/detach the various handlers (including the motion event handler), and set up the environment.

For example, the motion_handler_req function is required to start the sampling on iOS: if it is not triggered, no DeviceMotion event will be generated.

Additionally, the event should always be fired from a visible and clickable element. Apple does this to prevent hidden malicious elements from requesting permissions without the user's consent.

Finally, we need some functions to transmit these measurements.
The data will ultimately be sent over MQTT in order to be displayed. Even though I could have used MQTT over WebSockets to send the data directly to the broker, I opted for an alternative: I used AJAX requests to send the data to the webserver, which then forwards it to the AWS backend over MQTT.

I used Flask (a lightweight Python HTTP framework) for the server part because I think it's extremely handy for quick webserver prototyping.

Below you can find an example of the JavaScript code used to send the data,

and the Python code that handles it and forwards it over MQTT (in this case it just checks for movement of the phone, without performing the Fourier transform).
For the MQTT part I used the paho Python module and a local MQTT bridge to handle the secure connections required by AWS.
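A minimal sketch of what such an endpoint might do, using only the standard library (the field names and the variance-threshold movement check are my assumptions, not the article's actual code; in the real server this logic would sit inside a Flask view and the result would be published with paho's client.publish):

```python
import json
import statistics

MOVEMENT_THRESHOLD = 0.5  # m/s^2 of jitter; an assumed value

def detect_movement(magnitudes):
    """Crude movement check: the phone is considered moving when the
    acceleration magnitude fluctuates more than a fixed threshold."""
    if len(magnitudes) < 2:
        return False
    return statistics.stdev(magnitudes) > MOVEMENT_THRESHOLD

def handle_batch(request_body):
    """Parse one JSON batch from the phone and build the message
    that would then be forwarded over MQTT."""
    samples = json.loads(request_body)["acc_magnitude"]
    return {"moving": detect_movement(samples)}
```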

Then, using the templates and the dashboard from the previous parts, we can finally see the results on the backend.

In order not to saturate the channel, only a subset of the measurements is actually sent to the gateway: in this case, one per second.
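That throttling can be sketched as a small gate (my own illustration, not the article's code) that lets at most one measurement through per second; the clock is injectable so the behaviour can be tested without waiting:

```python
import time

class Throttle:
    """Allow at most one message every `interval` seconds."""

    def __init__(self, interval=1.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self._last = float("-inf")

    def allow(self):
        """Return True if a message may be sent now, consuming the slot."""
        now = self.clock()
        if now - self._last >= self.interval:
            self._last = now
            return True
        return False
```

Each sampled measurement calls allow() before publishing, so the MQTT channel sees roughly 1 msg/s instead of ~60.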
