Project ‘SoundFlux’ Aims to Save Lives with Sound-Based Fall Detection System

SoundFlux is the Spring 2019 winner of the Hal R. Varian MIDS Capstone Award

Berkeley I School
Jun 3, 2019 · 6 min read

Simple push-button alert systems for the elderly to use when they fall are ubiquitous. A team of I School students found this solution antiquated compared with the technical innovation happening in other spaces, and set out to build something better.

As a result, Master of Information and Data Science graduates Romulo (Rom) Manzano, Matt Thielen, and Mike Frazzini developed SoundFlux, which uses a sound-based system to detect falls.


Tell us what SoundFlux is all about.

Mike: SoundFlux is a sound-based fall detection system that uses a neural network model, trained on simulated human falls along with millions of open-source sounds, to provide effective, low-cost, unintrusive, and privacy-sensitive peace of mind for the elderly and their loved ones.
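The kind of pipeline Mike describes typically starts by turning raw audio into a spectrogram-like representation that a neural network can treat as an image. A rough illustrative sketch follows; the frame sizes, Hann windowing, and plain log-magnitude spectrum here are assumptions for demonstration, not the team's actual feature extraction:

```python
import numpy as np

def log_spectrogram(audio, frame_len=256, hop=128):
    """Slice a mono waveform into overlapping windowed frames and return
    log-magnitude spectra: the 2-D representation a sound classifier
    is typically trained on."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(audio) - frame_len + 1, hop):
        frame = audio[start:start + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        frames.append(np.log1p(mag))
    return np.array(frames)  # shape: (n_frames, frame_len // 2 + 1)

# One second of a synthetic 440 Hz tone at 8 kHz, standing in for a recording.
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

features = log_spectrogram(audio)
print(features.shape)  # (61, 129)
```

In practice, a mel-scaled spectrogram and a convolutional network on top of it are common choices for this kind of classification task.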

What inspired your project?

Rom: The idea came to me while visiting my great-aunt Lola in Spain last year. She’s in her 90s, and while she has a great support system, the family keeps a few medical alert gadgets at home in case of an emergency. During my visit, I couldn’t help but notice how much we rely on antiquated technology for fall detection. In my great-aunt’s case, she has a pendant with a big red button that she’s supposed to press if she falls! This was shocking to me, as it obviously would not work when it was needed most (i.e., if she were left unconscious as a consequence of a fall). It was then that I realized how little innovation has taken place in the space, and that there is no reason we can’t have better solutions in this day and age. After researching the space, I found that the only real alternatives were very intrusive: they either required wearables or costly hardware installations (high-definition video equipment, etc.). I thought there was probably a way to achieve comparable results non-intrusively with lower-cost hardware. Going back to basics, I imagined how I could tell that someone had fallen without looking at them, and it was clear that one of the most obvious cues would be the sound! That’s when it all started; everything else came from thinking about the potential solution more deeply.

Mike: While the original idea and vision for the project were Rom’s, I was inspired for three big reasons: the need for something like this, my personal experience with that need, and our unique challenge and ability to use data science to address it. To elaborate: the need is very compelling, as falls are the number-one cause of injury and death in the elderly, and fear of falling affects both this vulnerable population and the loved ones who care for and worry about them. Many of the existing solutions are intrusive, cumbersome, and sacrifice independence and privacy. I have direct personal experience with this: my 90-year-old grandmother suffered a fall and had to drag herself from her bedroom to her kitchen to reach her phone and summon help. If she had had SoundFlux, she could have received help very quickly, which would have spared her almost an hour of pain and severe anxiety, as well as serious complications that hindered her recovery. The challenge of providing a data science driven solution to fall detection — one that could be low-cost, unintrusive, and privacy sensitive — was the most exciting challenge of the MIDS program, among many others.

The challenge of providing a data science driven solution to fall detection — one that could be low-cost, unintrusive, and privacy sensitive — was the most exciting challenge of the MIDS program.

What was the timeline or process like from concept to final project?

Mike: While we had nearly the full 15 weeks of the semester, thanks to our instructors allowing us to pitch and select projects right out of the gate, there was still so much to do in this time period. Our biggest challenge was that we needed sound data on human falls and could not find anything useful in the public domain, so we had to generate our own and then augment it with transfer learning from models trained on publicly available sound files. We first had to do quite a bit of research on ways to solve the problem and on the state of the art, as well as on the hardware we needed. We then built and deployed hardware prototypes to capture sound data. The critical dependency was data, and we spent the first six weeks simulating and recording hundreds of human falls using a 165 lb human-like CPR manikin with articulated joints and human weight distribution (see picture below). We joked that “Rescue Randy” was the most important member of our team, but he was such a dummy!
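The transfer-learning step Mike mentions can be sketched in miniature: take a feature extractor pretrained on large public sound corpora, freeze it, and train only a small classifier head on the scarce recorded falls. Everything below is illustrative; the frozen random projection stands in for a real pretrained network, and the data and labels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on public sound corpora:
# a frozen nonlinear projection into a 16-d embedding space.
W_pretrained = rng.normal(size=(64, 16))

def embed(x):
    return np.tanh(x @ W_pretrained)  # frozen: never updated below

def loss(Z, y, w, b):
    """Mean logistic (cross-entropy) loss of the small head."""
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Tiny synthetic stand-in for the recorded fall / non-fall clips.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)  # toy labeling rule

Z = embed(X)                     # extract features once with the frozen model
w, b = np.zeros(16), 0.0
initial = loss(Z, y, w, b)
for _ in range(500):             # gradient descent on the head only
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    w -= 0.1 * (Z.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)
final = loss(Z, y, w, b)
print(f"head loss: {initial:.3f} -> {final:.3f}")
```

Because only the small head is trained, a few hundred labeled examples can go a long way, which is the point of transfer learning when fall recordings are scarce.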

“Rescue Randy” (left) and hardware prototype with Raspberry Pi, microphone array, and accelerometer (right)

Once we started acquiring the sound data on simulated human falls, we were able to spend the remaining weeks researching different approaches to sound classification and developing our model. The final weeks were spent fine-tuning and validating our model, both on held-out test data sets and in real-life simulations with the model deployed to our prototype hardware. We also developed, practiced, and finalized our presentation materials in the last six weeks of the capstone course.
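Validating a fall detector on a held-out test set typically comes down to precision and recall bookkeeping, with recall weighted heavily since a missed fall is far worse than a false alarm. A minimal sketch with made-up labels and predictions (not the team's actual results):

```python
# Toy held-out labels and model predictions (1 = fall, 0 = other sound).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # how many alerts were real falls
recall = tp / (tp + fn)     # how many real falls were caught

print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.75 recall=0.75
```

Tuning the model's decision threshold trades one metric against the other; for this application one would usually accept more false alarms to push recall up.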

How did you work as a team? How did you manage to work on your project as members of an online degree program?

Mike: Overall we worked very well as a team, even though we all live in different parts of the country, across three different time zones.

Rom: While we were all in different time zones, we adapted well to everyone’s schedules. Communication was key, so we adhered to what we thought were best practices. We posted updates in our internal Slack channel almost daily and met at least once a week, usually on weekends. We maintained an internal wiki where we hosted anything and everything relevant to the project, from how-tos on hardware setup to links to relevant research materials. Lastly, we shared all of our code via a private GitHub repo and constantly pushed updates. This allowed us to divide the effort and work on our own schedules while keeping everyone up to speed and able to raise issues in a timely manner.

How did your I School curriculum help prepare you for this project?

Mike: I would say almost all of the courses, the team projects in many of them, and the program-wide focus on communication were essential in preparing us to successfully take on such a big project.

Rom: The entire curriculum helped me build up the necessary skills for our Capstone project. I applied a lot of the concepts from W251 (Deep Learning), especially while deploying our AI model at the edge.
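Deploying "at the edge," as Rom describes, usually means a loop like the following running on the device itself (e.g. a Raspberry Pi): buffer incoming audio, score each window with the on-device model, and require several consecutive high scores before raising an alert so a single transient noise does not trigger a false alarm. The scoring function and thresholds below are placeholders, not the SoundFlux internals:

```python
from collections import deque

def fall_score(window):
    """Placeholder for the on-device model: here, just mean signal energy."""
    return sum(s * s for s in window) / len(window)

def stream_alerts(samples, window_len=4, threshold=0.6, consecutive=3):
    """Slide a window over the incoming sample stream and yield the index
    at which an alert fires, requiring `consecutive` high-scoring windows
    in a row as a simple debounce against one-off noises."""
    buf = deque(maxlen=window_len)
    streak = 0
    for i, s in enumerate(samples):
        buf.append(s)
        if len(buf) < window_len:
            continue
        streak = streak + 1 if fall_score(buf) > threshold else 0
        if streak == consecutive:
            yield i
            streak = 0

# A quiet stream with one sustained loud event (e.g. a thud) in the middle.
stream = [0.1] * 8 + [1.0] * 6 + [0.1] * 8
print(list(stream_alerts(stream)))  # the alert fires once, partway into the event
```

Running everything on-device like this is also what makes the privacy claim credible: raw audio never has to leave the home, only the alert does.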

Do you have any future plans for the project?

Mike: Yes! We believe there is a real need and we have the beginnings of a great solution. We are looking to refine the target customer and product usability in our “spare time” over the coming months and then will be looking at everything from funding and partnership options to open-sourcing our project.

Rom: We truly believe our technology has the potential to save lives and we would like to see it being deployed all over. The way I think about it is that if I’m not comfortable deploying it at my great-aunt’s home, then there is more work to be done!

How could this project make an impact, or, who will it serve?

Mike: Millions of elderly people and their loved ones all over the world who want to age in their own homes with less worry, comforted by knowing that in the event of a fall they can get an immediate response.

Rom: Literally millions of elderly people and their relatives, including the 9 million people who visit emergency rooms every year due to fall-related injuries!


Voices from the UC Berkeley School of Information