Voice & Diversity

Behavioral Signals Team
Behavioral Signals - Emotion AI
4 min read · Jan 27, 2020

VoiceSignals #11 — Musings on Voice tech news

A Washington Post study found that voice assistants like Alexa understand speech from people with native accents about 30% better than speech from non-native speakers. The voices these assistants understand best belong to educated, upper-middle-class white Americans, mainly from the West Coast. This makes sense: the AI was trained on data from the same homogeneous set of people who designed the technology, and they all sound alike. Meanwhile, voice assistants not only struggle with accents and speech impairments; they have also been built in ways that reinforce gender stereotypes. As a UNESCO report points out, the assistants in our homes carry female names and female voices, often with an obedient or flirtatious style, perpetuating sexist attitudes towards women.
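
To see what a disparity like that looks like in practice, accent bias in speech recognition is usually quantified by comparing word error rate (WER) across speaker groups. Here is a minimal sketch in Python; the accent labels, sentences, and ASR outputs are invented for illustration, not drawn from the Washington Post study.

```python
from collections import defaultdict

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation set: (accent group, reference transcript, ASR output).
samples = [
    ("US West Coast", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("US West Coast", "play some jazz music", "play some jazz music"),
    ("Indian English", "turn on the kitchen lights", "turn on the kitchen nights"),
    ("Indian English", "play some jazz music", "play sam jars music"),
]

by_accent = defaultdict(list)
for accent, ref, hyp in samples:
    by_accent[accent].append(wer(ref, hyp))

for accent, scores in by_accent.items():
    print(f"{accent}: mean WER = {sum(scores) / len(scores):.2f}")
```

A gap in mean WER between groups on the same test sentences is exactly the kind of disparity the study reported.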

While some may argue that this whole conversation about ‘voice diversity’ is an insignificant problem for AI development, compared with model training, finding the right data, or the more media-friendly worry about whether AI will take our jobs or someday kill us, it’s worth looking at a recent Oracle study on how AI is changing human relations: 64% of people trust AI more than their manager. The increasing adoption of AI at work is having a significant impact on the way employees interact with their managers, and as a result, the traditional roles of HR teams and the manager are shifting. This is affecting workers all over the world, [..workers in India (89%) and China (88%) are more trusting of robots over their managers, followed by Singapore (83%), Brazil (78%), Japan (76%), UAE (74%), Australia/New Zealand (58%), U.S. (57%), UK (54%) and France (56%)..]. Unbiased, inclusive design and usability matter.

One solution could be to make voice assistants sound gender-neutral, something that is entirely possible, as demonstrated by the makers of Q, the world’s first gender-neutral voice assistant. Second, immensely popular voice assistants developed outside the US are designed around local languages such as Chinese or Russian (Baidu’s DuerOS, Yandex’s Alice) and, in some cases, local accents, as with the BBC’s ‘Beeb’, which understands British regional accents. Voice assistants could become more inclusive by diversifying the accents in their training data sets, and by diversifying the origins and genders of their machine learning engineers.
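
On the data side, one simple (if blunt) way to diversify a training set is to rebalance it so that every accent group is equally represented, for example by oversampling the underrepresented groups. A minimal sketch, assuming a hypothetical corpus of utterance records with an 'accent' field:

```python
import random
from collections import defaultdict

def balance_by_accent(utterances, seed=0):
    """Oversample each accent group up to the size of the largest one."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for utt in utterances:
        groups[utt["accent"]].append(utt)
    target = max(len(g) for g in groups.values())
    balanced = []
    for accent, utts in groups.items():
        balanced.extend(utts)
        # Draw extra samples with replacement until the group hits the target.
        balanced.extend(rng.choices(utts, k=target - len(utts)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical, heavily skewed corpus: 90 US, 7 Indian, 3 Scottish utterances.
corpus = (
    [{"accent": "US", "audio": f"us_{i}.wav"} for i in range(90)]
    + [{"accent": "Indian", "audio": f"in_{i}.wav"} for i in range(7)]
    + [{"accent": "Scottish", "audio": f"sc_{i}.wav"} for i in range(3)]
)
balanced = balance_by_accent(corpus)
print({a: sum(u["accent"] == a for u in balanced) for a in ("US", "Indian", "Scottish")})
# -> every accent group now contributes 90 samples
```

In practice, teams would collect new recordings rather than duplicate existing ones, but the rebalancing step illustrates the same idea: the model should not see one accent a hundred times more often than another.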

Google’s Project Euphonia, which aims to make voice tech accessible to people with disabilities, is a step in the right direction. The project is collaborating with nonprofits and volunteers to collect more voice data from people with impaired speech. For people with disabilities, this kind of technology can make everyday life easier and more inclusive.

It’s time we start thinking about inclusive design, user experience, non-English languages, and speech disorders, and find the right people to help design better conversations between humans and machines.

From our Blog

What Emotion AI Has in Store for 2020

As machine learning algorithms continue to improve, systems will develop increasingly sophisticated abilities to evaluate and measure human emotion, and AI will become better at showing empathy. Biases in programming will be confronted and regulations put in place. AI will accelerate other sciences by leveraging its speed and the huge amounts of data being collected, leading to breakthroughs from medicine to new materials. Meanwhile, according to a Gartner report, by 2023 AI and other technologies will lead to a tripling of employment for people with disabilities. Because AI tools make it possible to connect at a higher level with minimal user input, it will become easier to hire people who previously struggled with certain job tasks, and to improve their productivity and retention. Read more >

What we read online…

Emotion Recognition by Voice [Podcast]

Teri Fisher, MD, interviewed Rana Gujral, CEO of Behavioral Signals, on emotion recognition and healthcare. They discuss the AI technology and research behind emotion recognition, how it can improve human-to-human and human-to-machine interactions, how intent can be predicted, and what KPIs businesses can target with this technology. They also discuss ethics and how the technology can be misused by bad actors. Listen here >

Doctors can now say ‘Hey Epic’…

Clinicians are getting a new voice assistant, invoked with ‘Hey Epic’, to help them retrieve information about their patients. Epic is a private company and one of the largest electronic health record (EHR) providers in the US. According to the company, hospitals that use its software hold the medical records of 54% of patients in the United States, while Becker’s Hospital Review says 20 of the best US hospitals, including Mayo Clinic, Johns Hopkins, and UCLA Medical Center, use Epic’s EHR system. While one can expect to read about all the typical voice functions, like retrieving or storing data, the article points out a few interesting use cases within the hospital, such as surgeons using voice in the operating room, where they cannot type after scrubbing in, or patients messaging their nurses to request services. Read more >

How Are New Voice Technologies Impacting the Contact Centre

Mike Palmer of Spearline, a company that monitors call quality globally, discusses voice and the wide range of technologies that are changing how a contact center works. From AI-powered assistants, voice analytics, and voice search to biometric authentication, he looks at their potential for businesses. He also focuses on enterprise network issues like latency, echo, and background noise, which can make it difficult for voice technologies to work well, and on why network teams need to take a proactive approach to voice service management. Read more >
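
For a taste of how issues like latency and echo can actually be measured, one common technique is to play a known probe signal into the line and cross-correlate it with the audio that comes back; the lag of the correlation peak gives the delay estimate. Below is a minimal NumPy sketch with a simulated 120 ms echo. The probe, attenuation, and noise values are made up for illustration, and this is not a description of Spearline’s own method.

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz, typical narrowband telephony

# Probe signal: a short linear chirp, easy to localize via correlation.
t = np.arange(0, 0.25, 1 / SAMPLE_RATE)
probe = np.sin(2 * np.pi * (300 + 1200 * t) * t)

# Simulate what comes back on the line: the probe attenuated,
# delayed by 120 ms, and buried in background noise.
true_delay_ms = 120
delay_samples = int(SAMPLE_RATE * true_delay_ms / 1000)
recorded = np.zeros(delay_samples + len(probe) + SAMPLE_RATE)
recorded[delay_samples:delay_samples + len(probe)] += 0.3 * probe
recorded += 0.02 * np.random.default_rng(0).standard_normal(len(recorded))

# Cross-correlate and take the lag with the strongest match.
corr = np.correlate(recorded, probe, mode="valid")
estimated_delay_ms = 1000 * np.argmax(corr) / SAMPLE_RATE
print(f"estimated echo delay: {estimated_delay_ms:.1f} ms")  # ~120 ms
```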

Do you want our EmotionAI & Voice Bi-weekly newsletter in your mailbox?
Sign up here
https://behavioralsignals.com/sign-up-for-our-newsletter/
