4 Applications of Artificial Intelligence for Hearing Loss

Karl Utermohlen
3 min read · May 16, 2018


While many raise concerns about the ethics of artificial intelligence (AI), the technology is increasingly being developed to help those with disabilities. The goal some companies have with AI is to make the world a safer and more comfortable place, as evidenced by advanced IoT security cameras, the rise of smart cameras around major world cities, and robotic limbs.

One of the latest trends in artificial intelligence is the development of devices and software that help those with a hearing disability navigate the world with more ease. Advancements in hearing technology have also made it possible for those with limited hearing to isolate and enhance a single sound, rather than be overwhelmed by a wash of faint noises when walking down the street or attending a party.

Intelligent automation company WorkFusion has developed software designed to automate work processes and create AI platforms that offer a variety of solutions in both professional and personal spheres. The company’s RPA Express uses robotic process automation technology to improve software development and work efficiency.

Here are four applications of AI for the hearing impaired.

1) Closed Captioning Personalization

Medical device company Cochlear has developed closed captioning personalization technology that uses natural language processing to transcribe live conversations in real time and to quickly translate sign language to text. The company has also developed an AI assistant that helps adjust the sound processor of a hearing device to fit the exact parameters of a person’s ear shape and degree of hearing loss.

This process is called “fitting” or “programming,” and it’s designed to balance loud and soft sounds, giving users the most functional hearing system possible.
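The balancing act that fitting aims for can be illustrated with a toy calculation: sound processors map the acoustic input range onto each electrode’s threshold (T) and comfort (C) stimulation levels, so soft sounds land near T and loud sounds near C. The function below is a simplified linear sketch of that idea, not Cochlear’s actual algorithm; the dB window, T/C values, and linear compression are all assumptions for illustration.

```python
def map_to_stimulation(input_db, t_level, c_level, floor_db=25.0, ceiling_db=65.0):
    """Map an acoustic input level (dB SPL) onto an electrode's
    stimulation range between threshold (T) and comfort (C) levels.

    Simplified linear compression for illustration only; real fitting
    software uses clinically measured, per-electrode maps.
    """
    # Clamp the input to the acoustic window the processor encodes.
    clamped = max(floor_db, min(ceiling_db, input_db))
    # Normalize to 0..1 across that window.
    fraction = (clamped - floor_db) / (ceiling_db - floor_db)
    # Soft sounds map near T, loud sounds near C.
    return t_level + fraction * (c_level - t_level)

# Example: a 45 dB conversational sound on an electrode fitted with T=100, C=200
level = map_to_stimulation(45.0, t_level=100, c_level=200)  # midway: 150.0
```

Sounds below the floor stay at the threshold level and sounds above the ceiling are capped at the comfort level, which is the “balance of loudness and softness” the fitting process tunes per patient.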

2) Auditory AI Assistants

Auditory AI assistants are essentially more advanced hearing aids that help determine the best settings for an individual’s cochlear implant. The technology improves patient outcomes while also helping users hear better during daily tasks. San Francisco-based Ava, for example, has developed a mobile app that uses natural language processing to transcribe conversations in real time.

Every participant in a conversation needs to download the app, which uses the microphone on each person’s device. As participants speak into their microphones, the app’s natural language processing software captures the dialogue and transcribes it for everyone to read in real time.
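The core idea of combining each device’s transcript into one captioned conversation can be sketched as a simple merge by timestamp. This is a hypothetical data model for illustration, not Ava’s actual protocol; the `Segment` fields and `merge_feeds` helper are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds since the conversation began
    speaker: str   # name attached to the device that captured the speech
    text: str      # transcribed words for this segment

def merge_feeds(*feeds):
    """Merge per-device transcript segments into one conversation feed,
    ordered by start time, the way a captioning app might display it."""
    merged = sorted((seg for feed in feeds for seg in feed),
                    key=lambda seg: seg.start)
    return [f"{seg.speaker}: {seg.text}" for seg in merged]

# Two participants, each transcribed on their own phone:
alice = [Segment(0.0, "Alice", "Hi, can you hear me?")]
bob = [Segment(1.2, "Bob", "Loud and clear.")]
lines = merge_feeds(alice, bob)
```

Because each device only transcribes its own owner, tagging segments with the speaker’s name comes for free, which is why every participant needs the app installed.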

3) Sound Isolation

Google researchers have created an AI application capable of isolating a single person’s voice from a mixture of sounds, including other voices and background noise. The technology mimics the “cocktail party effect,” the ability of a person with good hearing to focus their attention on a single speaker in a noisy environment while filtering out other sounds.

The team took 2,000 hours of YouTube video clips of people giving talks and combined them to create synthetic simulations of the cocktail party environment. They then trained an AI program to analyze the speakers’ faces for visual cues while they spoke, such as mouth movements, allowing them to build a program that can cleanly isolate each voice.
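The data-preparation step, mixing clean clips into synthetic “cocktail party” audio, can be sketched in a few lines. This is a toy version of that mixing, not Google’s pipeline: the sine waves below stand in for real speech waveforms, and the clean tracks double as the targets a separation model would learn to recover.

```python
import numpy as np

def make_mixture(clean_tracks, noise):
    """Sum several clean speech tracks with background noise to create
    one synthetic 'cocktail party' clip."""
    mix = noise.astype(np.float64).copy()
    for track in clean_tracks:
        mix += track
    # Normalize only if the sum would clip when written back to audio.
    peak = np.max(np.abs(mix))
    if peak > 1.0:
        mix /= peak
    return mix

rng = np.random.default_rng(0)
sr = 16000  # 16 kHz sample rate, one-second clips
voice_a = 0.3 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in
voice_b = 0.3 * np.sin(2 * np.pi * 330 * np.arange(sr) / sr)  # for speech
noise = 0.05 * rng.standard_normal(sr)
mixture = make_mixture([voice_a, voice_b], noise)
```

Training pairs of (mixture, clean track), plus the corresponding video frames of each speaker’s face, are what let the model learn which parts of the mixed audio belong to which mouth.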

4) Prediction of Language Outcomes with Cochlear Implants

Scientists are also using machine learning algorithms to analyze brain scans and predict outcomes related to hearing loss. The Brain Mind Institute (BMI) at The Chinese University of Hong Kong (CUHK) and Ann & Robert H. Lurie Children’s Hospital of Chicago worked together to develop a machine learning algorithm that can predict language ability in deaf children who receive a cochlear implant.

Researchers use the algorithm to predict an individual child’s future language development from brain images taken before surgery, a capability that can go far in developing personalized therapy and improving children’s quality of life. The algorithm is trained on a high volume of magnetic resonance imaging (MRI) scans and identifies atypical brain patterns associated with hearing loss early in life.
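At its core, this is supervised learning on image-derived features: fit a model on scans with known outcomes, then classify new scans. The sketch below uses synthetic stand-in data and a simple nearest-centroid classifier purely for illustration; the feature vectors, labels, and classifier choice are assumptions, not the hospital team’s actual method.

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one mean feature vector (centroid) per outcome class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, X):
    """Assign each scan's feature vector to the nearest class centroid."""
    labels = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return np.array(labels)[dists.argmin(axis=0)]

# Synthetic stand-in data: each row plays the role of a feature vector
# extracted from a pre-surgery MRI; labels are a toy outcome category
# (1 = stronger predicted language development, 0 = weaker).
rng = np.random.default_rng(42)
good = rng.normal(loc=1.0, scale=0.5, size=(100, 10))
poor = rng.normal(loc=-1.0, scale=0.5, size=(100, 10))
X = np.vstack([good, poor])
y = np.array([1] * 100 + [0] * 100)

# Shuffle, then hold out the last 50 scans to check generalization.
perm = rng.permutation(200)
X, y = X[perm], y[perm]
centroids = fit_centroids(X[:150], y[:150])
accuracy = float((predict(centroids, X[150:]) == y[150:]).mean())
```

The real difficulty lies in turning an MRI into informative features and in collecting enough labeled pre-surgery scans, which is why the collaboration’s large imaging dataset matters more than the choice of classifier.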


Karl Utermohlen

Tech writer focusing on AI, ML, apps and cybersecurity. MFA in Creative Writing from the U of Idaho. Writes for PSafe, Upwork, First Page Sage, WeContent, IP.