We built an award-winning product to help the visually impaired in under 48 hours. Here’s how we did it.
We spent the weekend at TechCrunch Disrupt’s 2016 London hackathon. It was a pretty intense 48 hours of brainstorming, plotting, researching, eating, building, testing, and a bit more eating, topped off with a presentation to a few hundred attendees in the room. Gulp. But we won a prize: the special IBM Watson award!
This was the first time we had donned our branded hoodies and stormed a hackathon as a team, so we were pretty happy with the outcome. Here are some more details about what we built.
One of my family members has an eye disease called glaucoma, which has damaged the optic nerves in their eyes. The damage has resulted in severe vision loss, meaning they struggle to find everyday items around them. With recent advances in computer vision, it seemed reasonable that we could build them an artificial eye.
What It Actually Does
AEye is an artificial eye. Using a combination of the microphone and camera on the device, a user can ask where an object is and be guided to it. Moving around the room produces a “hot” or “cold” reading.
How We Built It
We split into two teams: three people concentrated on the mobile app and the UX design, and two on the backend and visual recognition training.
We trained multiple image classifiers on the Watson Visual Recognition service, which gives a reliable match when the image contains the object we’re searching for. The service is wrapped by a Python web service that maps the desired object class to its associated classifier.
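The class-to-classifier mapping in our Python service boils down to a simple lookup. This is a minimal sketch of the idea; the classifier IDs and function name here are hypothetical, and the real service would forward the camera frame to the matched Watson classifier over HTTP:

```python
# Hypothetical mapping from spoken object classes to the IDs of the
# Watson Visual Recognition classifiers we trained for each object.
CLASSIFIERS = {
    "keys": "keys_classifier_1",
    "wallet": "wallet_classifier_1",
    "mug": "mug_classifier_1",
}

def classifier_for(object_class):
    """Return the trained classifier ID for a requested object class.

    Input is normalised so that e.g. " Keys " and "keys" match the
    same classifier; unknown classes raise a ValueError so the app
    can tell the user we haven't trained for that object yet.
    """
    key = object_class.strip().lower()
    if key not in CLASSIFIERS:
        raise ValueError("no classifier trained for '%s'" % object_class)
    return CLASSIFIERS[key]
```

The web service then simply scores each incoming frame against that one classifier, rather than against everything we trained, which keeps responses fast enough for the hot/cold loop.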
The iOS app uses the Watson Speech to Text service to transcribe voice input from the user. Once the user selects a class to search for, we give aural feedback (“warmer”, “colder”, “found it”, etc.) as they move the phone’s camera around the room or surface.
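The warmer/colder logic is just a comparison of successive classifier confidence scores. A rough Python sketch of the idea (the real feedback runs in the iOS app; the function name and the 0.9 “found it” threshold are assumptions for illustration):

```python
def aural_feedback(previous_score, current_score, found_threshold=0.9):
    """Turn two successive classifier confidence scores into a cue.

    Scores are Watson-style confidences in [0, 1]. If the current
    frame scores above the threshold we assume the object is in view;
    otherwise we compare against the previous frame to say whether
    the user is getting warmer or colder.
    """
    if current_score >= found_threshold:
        return "found it"
    if current_score > previous_score:
        return "warmer"
    if current_score < previous_score:
        return "colder"
    return "keep moving"  # no change between frames
```

In practice the scores are noisy frame to frame, so some smoothing (e.g. averaging the last few scores) helps before comparing.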
Tech We Used
- IBM Watson Visual Recognition (image classifiers)
- IBM Watson Speech to Text (voice input)
- Python (backend web service)
- iOS (mobile app)
So yeah, without a doubt the next step would be to add celebrity voiceovers. Jeff Goldblum perhaps?
Looking forward to the next Hack! 😬