Let’s Compose a Baby Monitor

A baby monitor Kotlin app using Jetpack Compose and HMS ML Kit sound detection

Giovanni Laquidara
Huawei Developers
4 min read · Apr 13, 2021


Photo by Tim Bish on Unsplash

Adding complex features to a mobile app is becoming easier and easier. Over the years we have seen a lot of SDKs, libraries, and utilities
that help us as developers fulfil the trickiest needs of our users.

Years ago I could not have imagined how difficult it would be to develop something like a baby monitor app for our smartphones:
something that activates the microphone, automatically recognizes the sound of a baby crying, and reacts to it,
for example by sending a notification, playing a song, or triggering other useful features.

Today we have machine learning: yes, we could train a model to recognize a baby crying, assuming we have a good quantity and quality of recordings of crying newborns.
Then we would deploy this model in our app, choose a library to interface with it, design a good UI, and pick a good library to develop that UI in the most practical and reliable way.

Even without any ML knowledge, as Android developers today we can use two nice tools to do this job.

The tools

First of all, the UI. We are Android developers of the new roaring '20s; we don't want to use XML anymore and go back and forth between it and Kotlin. We deserve an easy way to define our UI and to test it:

We deserve

Jetpack Compose

To detect the baby crying, we can use the services offered by

Huawei Mobile Services (HMS) ML Kit

Specifically, it offers on-device recognition of up to 13 different sounds:

  • LAUGHTER
  • BABY CRY
  • SNORING
  • SNEEZE
  • SCREAMING
  • MEOW
  • BARK
  • WATER
  • CAR ALARM
  • DOORBELL
  • KNOCK
  • ALARM
  • STEAM WHISTLE

So now that we have it all, let's see how to build it.

Let’s Build it

The first step is to add the Huawei Maven repository to the project-level Gradle file

and the same line to the repositories section of the settings.gradle file.
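A minimal sketch of this setup (the surrounding blocks depend on how your project is laid out):

    // project-level build.gradle
    buildscript {
        repositories {
            google()
            mavenCentral()
            // Huawei Maven repository
            maven { url 'https://developer.huawei.com/repo/' }
        }
    }

    // settings.gradle
    dependencyResolutionManagement {
        repositories {
            google()
            mavenCentral()
            maven { url 'https://developer.huawei.com/repo/' }
        }
    }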

Then we can add the actual HMS ML Kit sound detection dependencies just by adding these lines to the app's build.gradle file:
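The versions below are ones known to work at the time of writing; check the official documentation for the latest:

    dependencies {
        // HMS ML Kit sound detection: the SDK plus the on-device model
        implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
        implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
    }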

The HMS ML Kit Sound Recognizer

The sound detection provided by HMS ML Kit can recognize up to 13 different kinds of sounds.
To use it, we first have to initialize the sound detector:
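A minimal sketch (note that the "Dector" spelling comes from the SDK itself):

    import com.huawei.hms.mlsdk.sounddect.MLSoundDector

    // Create the sound detector instance
    val soundDector = MLSoundDector.createSoundDector()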

and then assign a listener to the detector:
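Something like this, reacting only to the sound events we care about:

    import android.os.Bundle
    import com.huawei.hms.mlsdk.sounddect.MLSoundDectListener

    val listener = object : MLSoundDectListener {
        override fun onSoundSuccessResult(result: Bundle) {
            // The detected sound type is delivered as an Int in the result Bundle
            when (result.getInt(MLSoundDector.RESULTS_RECOGNIZED)) {
                MLSoundDector.SOUND_EVENT_TYPE_BABY_CRY -> { /* react to the crying */ }
                MLSoundDector.SOUND_EVENT_TYPE_KNOCK -> { /* someone is at the door */ }
            }
        }

        override fun onSoundFailResult(errCode: Int) {
            // An error was raised; errCode tells us why
        }
    }
    soundDector.setSoundDectListener(listener)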

This listener has two callbacks, onSoundSuccessResult and onSoundFailResult, which will be called respectively when a sound is recognized or when an error is raised.

After we set all this up, the recognizer is ready to start.

To start it, use this API:
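(the RECORD_AUDIO permission must already be granted, and the Context passed here is the only one the detector needs)

    // Returns true if the detector started successfully
    val started: Boolean = soundDector.start(context)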

to stop it:
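    soundDector.stop()

    // and, once the detector is no longer needed, release its resources:
    soundDector.destroy()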

Let’s use the SoundDetector

In our sample, I've created a SoundDetector class that initializes the recognizer and exposes the listener callbacks as lambdas.
This class also helps us avoid storing a Context in the ViewModel: the Context injected at start time is only used to call the start API,
so no Context is retained, and we can use the setCallbacks method to assign the desired actions of the listener.
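A condensed sketch of the idea (the structure is illustrative; see the repository for the actual class):

    import android.content.Context
    import android.os.Bundle
    import com.huawei.hms.mlsdk.sounddect.MLSoundDectListener
    import com.huawei.hms.mlsdk.sounddect.MLSoundDector

    class SoundDetector {

        private val soundDector = MLSoundDector.createSoundDector()
        private var onSoundDetected: (Int) -> Unit = {}
        private var onDetectionError: (Int) -> Unit = {}

        init {
            soundDector.setSoundDectListener(object : MLSoundDectListener {
                override fun onSoundSuccessResult(result: Bundle) {
                    onSoundDetected(result.getInt(MLSoundDector.RESULTS_RECOGNIZED))
                }

                override fun onSoundFailResult(errCode: Int) {
                    onDetectionError(errCode)
                }
            })
        }

        // Assign the desired actions of the listener as lambdas
        fun setCallbacks(onSoundDetected: (Int) -> Unit, onDetectionError: (Int) -> Unit) {
            this.onSoundDetected = onSoundDetected
            this.onDetectionError = onDetectionError
        }

        // The injected Context is only used for the start call, never stored
        fun start(context: Context): Boolean = soundDector.start(context)

        fun stop() = soundDector.stop()
    }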

A Straightforward UI — No XML was harmed during this process

The UI of this sample project is pretty simple: a centred Card showing an icon for the command we can launch by tapping it.
At first start we will see an ear icon; tap it and the detector will start listening. The card will expand to show a circular progress indicator, and the ear icon
will change into a struck-through ear that can be tapped to stop listening.

While the detector is running, if the crying of the baby is detected, the progress indicator will disappear and an image of a crying baby will be shown instead.

Thinking about this UI in Jetpack Compose, we can base it on a Card containing a Column with the DetectorStatus (the ear icon) and the DetectionStatus (the actual detected-sound icon or the progress indicator).
The Card can be expanded, and this behaviour is managed by an expanded state toggled by tapping the DetectorStatus composable.

Let’s see this in code:
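A sketch of this structure, assuming the last detected sound type and the start/stop action are hoisted from the ViewModel; the names MonitorCard, detectedSound, and onToggleDetection are illustrative:

    import androidx.compose.animation.animateContentSize
    import androidx.compose.foundation.layout.Column
    import androidx.compose.foundation.layout.padding
    import androidx.compose.material.Card
    import androidx.compose.runtime.*
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp

    @Composable
    fun MonitorCard(
        detectedSound: Int?,                 // last detected sound type, null if none yet
        onToggleDetection: (Boolean) -> Unit // starts/stops the SoundDetector
    ) {
        // The expanded state drives the Card size; tapping the DetectorStatus toggles it
        var expanded by remember { mutableStateOf(false) }

        Card(modifier = Modifier.animateContentSize()) {
            Column(
                modifier = Modifier.padding(16.dp),
                horizontalAlignment = Alignment.CenterHorizontally
            ) {
                DetectorStatus(listening = expanded) {
                    expanded = !expanded
                    onToggleDetection(expanded)
                }
                if (expanded) {
                    DetectionStatus(detectedSound = detectedSound)
                }
            }
        }
    }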

Now we can focus on the individual UI composables.

DetectorStatus

The DetectorStatus (the ear) is a composable that reacts to the state of the SoundDetector, showing the right image for the command the user can launch.
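A minimal sketch; the drawable names are placeholders for the ear and struck-through-ear icons:

    import androidx.compose.foundation.Image
    import androidx.compose.foundation.clickable
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.res.painterResource

    @Composable
    fun DetectorStatus(listening: Boolean, onClick: () -> Unit) {
        // Show the command the user can launch next: start or stop listening
        val icon = if (listening) R.drawable.ic_ear_off else R.drawable.ic_ear
        Image(
            painter = painterResource(id = icon),
            contentDescription = if (listening) "Stop listening" else "Start listening",
            modifier = Modifier.clickable(onClick = onClick)
        )
    }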

DetectionStatus

The DetectionStatus is a composable that reacts to the result of the detector. If there is no detection yet, a CircularProgressIndicator is shown; otherwise a DetectedSound is shown.
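In code, something like:

    import androidx.compose.material.CircularProgressIndicator
    import androidx.compose.runtime.Composable

    @Composable
    fun DetectionStatus(detectedSound: Int?) {
        if (detectedSound == null) {
            // Still listening, nothing detected yet
            CircularProgressIndicator()
        } else {
            DetectedSound(soundType = detectedSound)
        }
    }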

DetectedSound

The DetectedSound is a composable that reacts to the detected sound event, showing the corresponding image on the screen: a baby crying or (bonus point!) a knock, knock, knockin' on the door :)
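A sketch mapping the HMS sound-type constants to images (the drawable names are placeholders):

    import androidx.compose.foundation.Image
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.res.painterResource
    import com.huawei.hms.mlsdk.sounddect.MLSoundDector

    @Composable
    fun DetectedSound(soundType: Int) {
        // Map the detected HMS sound type to an image resource
        val (icon, description) = when (soundType) {
            MLSoundDector.SOUND_EVENT_TYPE_BABY_CRY -> R.drawable.baby_crying to "Baby crying"
            MLSoundDector.SOUND_EVENT_TYPE_KNOCK -> R.drawable.knock to "Knock on the door"
            else -> R.drawable.unknown_sound to "Unknown sound"
        }
        Image(painter = painterResource(id = icon), contentDescription = description)
    }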

The Result

And this is the final result. From the GIF we cannot hear the baby crying, but you get the idea!

If you want to give it a try, you can find the source code of this app on GitHub.

And if you want to test more ML Kit features, you can go to the official ML Kit GitHub repo to download the demos and check the sample code.

See you next time!
