bless u

ashley · Our Side Projects · Sep 15, 2020

bless u was the project I worked on during PennApps XXI, a virtual hackathon. The idea came from a conversation I had with a friend about how no one really says “bless you” after someone sneezes on a Zoom call. Since we are all taking classes online this semester, we thought a web-conference extension that could bring back a bit of that in-classroom experience would be fun.

We used machine learning with TensorFlow, OpenCV, and Python to create this project.

Inspiration

In a world where we pay absurd amounts of money to sit in front of a laptop, one of the things we miss most about the in-person classroom experience is the “bless you”s from around the room after a sneeze. So we set out to make our calls a little friendlier with bless u.

What it does

bless u is an extension for video-conferencing platforms such as Zoom and WebEx, where most meetings and classes currently take place. Using machine learning, bless u has been taught what students look like when they are on these calls. While you’re on a call, it watches your video and estimates the probability that you are sneezing, coughing, yawning, or scratching your head, based on the images it has learned from. If bless u decides that you are indeed doing one of those actions, it speaks accordingly: for example, saying “bless you” if it has decided that you sneezed.
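
To make that “decide” step concrete, here is a rough sketch of the kind of thresholding logic involved. The class labels, response phrases, and 0.8 cutoff below are illustrative assumptions, not our exact values.

```python
from typing import Optional
import numpy as np

# Assumed class labels, in the alphabetical order a folder-based loader would use.
ACTIONS = ["coughing", "neutral", "scratching", "sneezing", "yawning"]
RESPONSES = {
    "sneezing": "bless you!",
    "coughing": "are you okay over there?",
    "yawning": "wake up!",
    "scratching": "thinking hard?",
}
THRESHOLD = 0.8  # assumed confidence cutoff for "fairly sure"

def respond(probs: np.ndarray) -> Optional[str]:
    """Return a phrase if the model is fairly sure an action occurred."""
    best = int(np.argmax(probs))
    if probs[best] < THRESHOLD:
        return None  # not confident enough to say anything
    return RESPONSES.get(ACTIONS[best])  # None for the "neutral" class
```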

How we built it

We built bless u using Python, TensorFlow, OpenCV, brain cells, and selfies. We took selfies of what we look like when we are in online classes and what we look like when we sneeze, cough, yawn, or scratch our heads. We then trained bless u to recognize these actions and used OpenCV to capture frames from the webcam video, comparing the live feed against the images we had fed it to search for a match. If the algorithm is fairly sure there is a match, a recording plays a humorous phrase based on the action.
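
Concretely, the core loop could look something like the sketch below: grab a webcam frame with OpenCV, run it through the trained model, and play a clip when the top prediction is confident enough. The model filename, clip names, and the playsound library are assumptions for illustration, not our exact setup.

```python
import cv2
import numpy as np
import tensorflow as tf
from playsound import playsound  # assumed audio library

model = tf.keras.models.load_model("bless_u_model.h5")  # hypothetical file
ACTIONS = ["coughing", "neutral", "scratching", "sneezing", "yawning"]
CLIPS = {"sneezing": "bless_you.mp3", "yawning": "wake_up.mp3"}  # assumed clips

cap = cv2.VideoCapture(0)  # the webcam feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the preprocessing used at training time (MobileNetV2-style here).
    img = cv2.resize(frame, (224, 224)).astype("float32")
    img = tf.keras.applications.mobilenet_v2.preprocess_input(img)
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    action = ACTIONS[int(np.argmax(probs))]
    if probs.max() > 0.8 and action in CLIPS:
        playsound(CLIPS[action])  # e.g. say "bless you" after a sneeze
    cv2.imshow("bless u", frame)
    if cv2.waitKey(500) & 0xFF == ord("q"):  # sample ~2 fps; press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```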

Challenges we ran into

A challenge we ran into was training the model: building one that was accurate required a lot of our selfies.
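
With only a small, hand-collected dataset, transfer learning is one way to stretch the selfies further. Here is a sketch of how such a model could be trained from folders of labeled selfies; the MobileNetV2 backbone, directory layout, and hyperparameters are illustrative assumptions rather than our exact configuration.

```python
import tensorflow as tf

# Load the labeled selfies from folders named after each action,
# e.g. selfies/sneezing/, selfies/coughing/, selfies/neutral/, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "selfies", image_size=(224, 224), batch_size=16)
# Apply the same preprocessing MobileNetV2 was pretrained with.
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.mobilenet_v2.preprocess_input(x), y))

# Start from pretrained ImageNet features so a small selfie set goes further.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; train only the small head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 4 actions + neutral
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("bless_u_model.h5")
```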

Accomplishments that we’re proud of

We are proud of this project because of both the technical and the creative aspects behind the idea. We are proud of implementing TensorFlow and training our model to accurately identify the actions students take on Zoom. We are also proud because it’s something we’ve never seen before, and something that could be a popular feature on big platforms like Zoom.

What we learned

Throughout this process, we learned a lot about machine learning and deep learning by implementing TensorFlow and training our model. We also learned how to use OpenCV to capture frames from a live video feed. And we learned that even though it feels like everything has already been built, new problems, big and small, arise all the time, and we have the power to build things that solve or at least alleviate them.

What’s next for bless u

We want to keep adding potential actions for students on Zoom and extend the concept to workplace and other meetings. We would also love to continue improving our model with more data.
