Google AI Apps Aim to Improve the Lives of People With Disabilities

Synced | Published in SyncedReview | May 17, 2019

About 15 percent of the world’s population, more than one billion people, have some type of disability, according to the World Health Organization. Some 200 million of them experience considerable difficulties in work and public environments.

Hearing aids and screen readers have already improved the lives of people with disabilities, but today’s tech giants believe they can contribute much more by leveraging advances in artificial intelligence to overcome barriers and open new vistas.

Yesterday was the eighth annual Global Accessibility Awareness Day, which aims to promote digital accessibility and inclusion for people with disabilities. Google is one of the Silicon Valley frontrunners in this space and has developed a number of AI-powered applications and features designed to make the physical world more accessible to users with disabilities.

Google’s corporate mission statement reads: “From the beginning, our mission has been to organize the world’s information and make it universally accessible and useful.” Patrick Clary, a Google Product Manager on Accessibility Engineering, says that statement “is a huge motivating factor for our teams when it comes to really innovating in the area of accessibility.”

A full-time wheelchair user who suffered a spinal cord injury from an accident eleven years ago, Clary spoke with Synced at last week’s Google I/O developer conference, where Google Accessibility Teams demonstrated how their AI tech can help people with visual disabilities, hearing impairments, ALS, and intellectual disabilities.

(left to right) Project Diva Developer Davide Ferraro, Google AI Product Manager Julie Cattiau, Project Diva Developer Lorenzo Caggioni, Google Product Manager on Accessibility Engineering Patrick Clary, Android Accessibility Product Manager Brian Kemler

Lookout is a mobile application designed to support people with visual impairments. The smartphone app narrates the user’s immediate environment in real time, audibly identifying, for example, the people, objects, scenes, and text it perceives. The app can switch between “Home”, “Work” and “Play” modes so its algorithms focus on contextually relevant elements.

One challenge in building an application that lets users “observe the world” is narrowing the thousands of objects in a user’s environment down to the contextually important ones. Google developers have experimented with many different scoring and analysis techniques to surface the most relevant results.
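To illustrate how such mode-aware filtering might work, here is a minimal, hypothetical Python sketch of relevance scoring over detected objects. The Detection fields, mode weights, and labels are invented for illustration; this is not Lookout’s actual implementation.

```python
# Hypothetical sketch: ranking detected objects by contextual relevance,
# in the spirit of Lookout's filtering. All names and weights are invented.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "door", "text"
    confidence: float  # detector confidence, 0-1
    proximity: float   # estimated closeness to the user, 0-1

# Per-mode priorities: "Work" might favor text, "Home" might favor people.
MODE_WEIGHTS = {
    "Home": {"person": 1.0, "door": 0.6, "text": 0.4},
    "Work": {"person": 0.5, "door": 0.4, "text": 1.0},
}

def relevance(det: Detection, mode: str) -> float:
    """Score a detection by confidence, proximity, and mode priority."""
    priority = MODE_WEIGHTS[mode].get(det.label, 0.2)
    return det.confidence * det.proximity * priority

def narrate(detections, mode="Home", top_k=3):
    """Announce only the top-k most relevant detections, not all of them."""
    ranked = sorted(detections, key=lambda d: relevance(d, mode), reverse=True)
    for det in ranked[:top_k]:
        print(f"{det.label} detected")  # stand-in for a text-to-speech call

narrate([
    Detection("person", 0.93, 0.8),
    Detection("door", 0.88, 0.3),
    Detection("text", 0.75, 0.6),
], mode="Home")
```

Capping narration at a few top-scoring items is one plausible way to keep the audio channel from being flooded with every object in view.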

Clary recounts the story of a visually impaired person who was testing Lookout’s narration feature: “She was in her garage and kept hearing ‘person detected’. She thought, ‘There’s no one else here’, so she spoke out, ‘Is anyone there?’ And actually her little child was there and said, ‘Oh, I’m here, mommy!’ That was an eye-opening experience for her, allowing her to be more proactive.”

Live Caption is a new feature announced at Google I/O that generates real-time subtitles for any video or audio playing on Android Q devices. This is game-changing functionality for the estimated 466 million people worldwide with hearing impairments. Powered by on-device machine learning and federated learning, Live Caption works offline and keeps users’ private data from ever leaving their devices. Captions appear in a black box whose position on screen can be adjusted.
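As a rough illustration of the on-device approach, the sketch below transcribes audio chunks locally and updates a caption box as text arrives. Both stub_recognizer and render_caption are hypothetical placeholders, not Live Caption’s actual code; a real implementation would run a local speech model so no audio ever leaves the device.

```python
# Minimal sketch of an on-device streaming captioner in the Live Caption
# mold. The recognizer is a stub standing in for a local speech model.

def stub_recognizer(audio_chunk: bytes) -> str:
    """Placeholder for an on-device speech-to-text model."""
    return "partial transcript"

def render_caption(text: str):
    """Placeholder for drawing the adjustable caption box."""
    print(f"[caption] {text}")

def caption_stream(audio_chunks):
    caption = ""
    for chunk in audio_chunks:
        # Each chunk is transcribed locally; nothing is sent to a server.
        caption = (caption + " " + stub_recognizer(chunk)).strip()
        render_caption(caption[-80:])  # show only the tail of the transcript

caption_stream([b"\x00" * 3200 for _ in range(3)])
```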

Google’s new Project Euphoria was developed to help the millions of people with speech impairments caused by neurological conditions such as ALS, strokes, and Parkinson Disease. Leveraging advanced speech recognition algorithms, Google Assistant can be personalized by training speech recognition models with these individuals’ voice samples. Google has partnered with the ALS Therapy Development Institute (ALS TDI) and ALS Residence Initiative (ALSRI) to collect voice data.
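The personalization idea can be sketched as fine-tuning a pretrained model on one speaker’s recordings. The toy PyTorch example below uses invented dimensions, random stand-in data, and a stand-in acoustic model; it mirrors the general fine-tuning recipe only, not Euphonia’s actual pipeline.

```python
# Hedged sketch of speaker personalization: start from a general acoustic
# model and fine-tune it on one person's samples. Everything here is a toy.
import torch
import torch.nn as nn

# Toy stand-in for a pretrained acoustic model (audio features -> phoneme logits).
base_model = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 50))

# Hypothetical personal dataset: 100 frames of 40-dim audio features,
# each paired with a phoneme label, recorded by one user.
features = torch.randn(100, 40)
labels = torch.randint(0, 50, (100,))

# A small learning rate adapts the model to this speaker without
# overwriting its general knowledge.
optimizer = torch.optim.Adam(base_model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(base_model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```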

While Project Euphonia is focusing on individual users at this early stage, Google AI Product Manager Julie Cattiau told Synced that it should also be possible to train AI models that work for a variety of people: “One thing that we learned from the speech therapists we’re working with is that there exist some profiles of speech impairment, so people who have the same condition may sound similar.”

Not all of Google’s accessibility products need cutting-edge tech; sometimes all a user wants is a smart, convenient interface. Project Diva (DIVersely Assisted) is a new accessibility program designed to enable users with Down syndrome and other intellectual disabilities to command Google Assistant. A large button plugged into the Diva interface connects to Google Assistant via Bluetooth. Users can press the button to send personalized commands, such as playing music or news, or prompting the Assistant to tell a joke.
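Conceptually, Diva maps a single physical button to spoken-equivalent Assistant commands. The sketch below is purely illustrative; the gesture names and the send_command call are hypothetical stand-ins for the Bluetooth event handling and Assistant integration.

```python
# Illustrative sketch of Project Diva's idea: one button, several
# personalized commands, selected by press gesture. All names invented.
COMMANDS = {
    "single_press": "play my music",
    "double_press": "tell me a joke",
    "long_press": "play the news",
}

def send_command(text: str):
    print(f"-> Assistant: {text}")  # stand-in for the Assistant API call

def on_button_event(event: str):
    """Translate a button gesture into its configured command."""
    command = COMMANDS.get(event)
    if command:
        send_command(command)

on_button_event("double_press")
```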

With Google and other tech giants coming under increasing public criticism over privacy, bias, and other concerns, initiatives such as these, which can greatly improve the lives of people with disabilities, are a welcome reminder of the positive role artificial intelligence can play when steered toward the social good.

Journalist: Tony Peng | Editor: Michael Sarazen

