Is my Pi strong enough?

Sébastien Lambour · Published in InMoodForLife · Jan 17, 2017 · 4 min read

When hardware meets software

In the first two posts of InMoodForLife we showed how we reverse-engineered the Beddit Sleep Tracker to extract the raw data we need for our project.

During the process described in those posts we worked on our developer laptops, with all the computing power that a modern computer can offer. But if we want a working solution, we need to be able to run all the algorithms for collecting and analyzing the data on a far more modest setup: an autonomous, always-on, humble Raspberry Pi installed near the Beddit sensor (within Bluetooth LE range).

We need it to be a smart device, not a developer tool. Once the Bluetooth pairing is done, the user shouldn’t need to take any further action: no program to run, no button to press when going to sleep or waking up.

But one big question remained unanswered: is my Raspberry Pi strong enough?

Strong enough to ingest the 150 measurements per second coming from the Beddit sensor? Strong enough to extract and analyze 9,000 datapoints every minute and compute heart and respiratory rates?

Our approach was to develop all the algorithms directly on the Pi, to be sure that every piece of code we wrote was usable in our target setup, thus avoiding the nasty trap of “my algorithm is fantastic but resource-hungry”.

The first step was to install our chosen time-series stack on the Pi: Warp10, an open-source platform designed to collect, store and analyze sensor data.

Automate everything or die

Installing Warp10 on a Pi is quite simple: you install Java, then copy the Warp10 distribution; everything is done in a few commands, as sketched below. But how can we be sure the install is reproducible? Our answer was to use a classic IT automation tool: Ansible.
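A rough sketch of those few commands (the version number, paths and launcher script name are illustrative and may differ between Warp10 releases):

```bash
# Install a JVM, unpack the Warp10 standalone distribution, start it.
# warp10-X.Y.Z is a placeholder for the actual release.
sudo apt-get install -y openjdk-8-jre-headless
tar xzf warp10-X.Y.Z.tar.gz -C /opt
sudo /opt/warp10-X.Y.Z/bin/warp10-standalone.sh start
```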

Once the SSH setup is done, everything can be automated: updates, security hardening, software installs. With the right Ansible scripts, rebuilding a complete system from scratch is no longer a problem; it’s as easy as running a script.
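As an illustration, a minimal playbook excerpt could look like the following (the host group, package name and paths are hypothetical, not taken from our actual scripts):

```yaml
# Hypothetical Ansible playbook excerpt for provisioning the Pi
- hosts: raspberry
  become: yes
  tasks:
    - name: Install a Java runtime for Warp10
      apt:
        name: openjdk-8-jre-headless
        state: present
        update_cache: yes

    - name: Unpack the Warp10 distribution into /opt
      unarchive:
        src: files/warp10-standalone.tar.gz
        dest: /opt
```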

Data ingestion: Raspberry Pi under fire

For our feasibility test we deployed Warp10 on an old Raspberry Pi 2 to which we had given a second life by adding a Bluetooth LE dongle.

Our test dataset was a recording of a full night of sleep, decoded and converted into the Warp10 input format: 4,884,480 datapoints.
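That input format is plain text, one datapoint per line: a timestamp (in microseconds by default), optional location and elevation fields, a class name with labels, and a value. With an illustrative class name of our own, a 150 Hz signal looks like this:

```
1484600000000000// beddit.bcg.raw{source=beddit} 523
1484600000006667// beddit.bcg.raw{source=beddit} 519
1484600000013333// beddit.bcg.raw{source=beddit} 511
```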

Using curl, we pushed the dataset to the Warp10 ingress endpoint on the Pi. The ingestion took around 6 minutes using only one of the Pi’s CPU cores.
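The push itself is a single HTTP call to the standalone instance’s update endpoint (the hostname, file name and token below are placeholders):

```bash
curl -H 'X-Warp10-Token: <WRITE_TOKEN>' \
     --data-binary @full-night.gts \
     'http://raspberrypi.local:8080/api/v0/update'
```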

That corresponds to an ingestion rate of some 11,600 datapoints per second per core for a single time series, significantly higher than the sensor’s data rate (150 datapoints per second).

This simple test confirmed that the Warp10 and Raspberry Pi combination was a perfect fit for collecting and storing sensor data.

OK, but what about data analysis?

To validate our setup for data analysis, we took the same approach with our test dataset. The goal was to process the raw BCG signal minute by minute and decompose it into three time series: heart rate, respiratory rate and movement, all while still collecting new raw data.

The extraction is based on a Seasonal-Trend (STL) decomposition followed by a Fourier transform to derive a rate per minute. This first decomposition script is quite imperfect (it leaves many artifacts to clean up), but it is representative of the kind of workload the device needs to support.
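To give an idea of the computation involved, here is a minimal sketch in Python (our own scripts target Warp10, so this is only an illustration; the sampling rate, frequency bands and synthetic input are assumptions):

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

FS = 150  # assumed sensor sampling rate (Hz)

def dominant_rate(signal, fs, fmin, fmax):
    """Return the strongest frequency in [fmin, fmax], in cycles per minute."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Stand-in for one minute of raw BCG samples
minute = np.random.default_rng(0).standard_normal(60 * FS)

# STL with a ~1 s period: the quasi-periodic heartbeat ends up in the
# seasonal component, the slower respiration in the trend.
decomposed = STL(minute, period=FS).fit()

heart_bpm = dominant_rate(decomposed.seasonal, FS, 0.7, 3.0)  # 42-180 bpm
breaths_pm = dominant_rate(decomposed.trend, FS, 0.1, 0.5)    # 6-30 breaths/min
```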

We first executed the script on a server: analyzing 8 hours of sleep (4,104,192 datapoints) took 77 seconds on a single core. Of course we couldn’t expect that kind of performance on a Pi, but we were excited to try it on our Little Brave Pi.

And we weren’t disappointed: analyzing one hour of data took 209 seconds (22 times slower than the server), which works out to roughly 3.5 seconds for one minute of raw data (8,192 datapoints).

Once again, the Little Brave Raspberry Pi proved to be powerful enough for data processing.

Open challenge = Git

As we are starting the active development phase, we have opened our Git repository on GitHub. Please feel free to stop by and take a look at what we have done…
