F050 AI in healthcare 4/6: The power of voice (Bill Rogers)

Tjaša Zajc
Nov 13, 2019 · 6 min read

Voice is natural, fast, and accessible. And it is bringing a revolution to healthcare.

Visit www.facesofdigitalhealth.com.

Voice-powered technologies such as Siri, Alexa, Google Assistant, and Cortana are changing everyday life: how people search for information, shop, manage their homes, and more. ComScore studies suggest that by 2020, 50% of all searches will be conducted via voice and smart speakers. According to a 2017 comScore survey, the top reasons people don’t use voice technology are:

  • they don’t find it relevant,
  • they feel uncomfortable talking to a device,
  • they don’t see it as useful,
  • the voice assistant does not understand them.

But this is changing, not only with the rising sales of smart speakers but also with the normalization of voice messaging and voice searches. In healthcare, voice assistants are used for a range of purposes, explains Bill Rogers, CEO of Orbita — a leading provider of conversational AI for healthcare. Orbita helps healthcare organizations tap the power of voice assistants, chatbots, and other conversational AI technologies to engage patients, improve care, and reduce costs.

More specific voice applications are:

  1. In telemedicine, where patients interact with the system before talking to a doctor.
  2. For remote patient monitoring: engaging with patients who would typically require a call from the health system. Engagement can, for example, include check-ups based on data from wearable digital devices.
  3. In clinical trials, for logging symptoms and writing reports for daily diaries. “The advantage of assistants is that they can be on an individual’s phone, not only a smart speaker. The user is not limited to just one device,” says Rogers, who has over 25 years of software industry experience, with a rich history in the field of voice technology.

Voice assistants and the underlying technology have a long history dating back to the 1960s, but development accelerated after Amazon Alexa entered the market in 2014.

“A big change accelerating the development of voice AI is the advancement of deep AI. Before, transcription solutions weren’t accurate enough. Deep learning and the power of applying it in the cloud have been the game changer, and the technology will continue to improve,” says Rogers. Because voice assistants run in the cloud, AI models constantly receive new data for training. This also enables voice assistants to improve their understanding of various accents. “Alexa and Google Now are deployed to over 200 million devices which are constantly collecting data from people that have accents and ultimately, that data is analyzed and applied to deep learning for improvements.”

In healthcare, organizations are starting to use voice to improve how people engage with care.

One of the uses is the digital front door. For example, people are constantly interacting with their smartphones to search for content. “Mayo Clinic’s website has 10 million searches per month. With voice recognition, the speed of finding information is improved. You might say ‘I want to search for the symptoms of appendicitis,’ and you get an instant response. Now that the context of your search is understood, you might say ‘treatment’ and get back treatment information, including guidance on whether you should see a doctor,” Bill Rogers illustrates.
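The context carry-over Rogers describes — a bare follow-up like “treatment” being resolved against the previous topic — can be sketched as a minimal dialog session. All class and method names below are illustrative assumptions, not Orbita’s or any vendor’s actual API:

```python
# Minimal sketch of conversational context carry-over in voice search.
# A follow-up utterance such as "treatment" is resolved against the
# condition remembered from the previous turn. Illustrative only.

class VoiceSearchSession:
    def __init__(self):
        self.topic = None  # last condition the user asked about

    def handle(self, utterance: str) -> str:
        words = utterance.lower().split()
        if "symptoms" in words and "of" in words:
            # "symptoms of appendicitis" -> remember the condition
            self.topic = words[words.index("of") + 1]
            return f"Here are common symptoms of {self.topic}."
        if utterance.lower().strip() == "treatment" and self.topic:
            # bare follow-up reuses the remembered context
            return f"Here is treatment information for {self.topic}."
        return "Sorry, could you rephrase that?"

session = VoiceSearchSession()
print(session.handle("I want to search for the symptoms of appendicitis"))
print(session.handle("treatment"))  # answered using the stored topic
```

Real assistants track much richer session state than a single topic, but the principle is the same: the second utterance only makes sense because the first one is remembered.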

Orbita is also working with hospitals in Australia where bedside assistants have been deployed and are used as a replacement for the call buttons patients use to call a nurse. This offers various improvements in patient satisfaction: the assistant can respond by saying someone will come right away. A patient can get immediate information such as the visiting hours for family or contact information. The bedside voice assistant also saves time in care because the nurse knows immediately if the patient needs a glass of water or needs help going to the bathroom.

Privacy?

There is no doubt: voice assistants are improving convenience in healthcare. People are excited about Alexa, but what about privacy concerns and scandals? In May 2018, The Guardian reported that an Alexa device recorded a private conversation between a couple and sent it to a random number in their address book without their permission. Bill Rogers comments that Alexa made an unintended call, similar to what is commonly referred to as a butt dial on smartphones.

Perhaps the alarm is greater with voice assistants because the technology is new, less known, and less common. “What I can say is that software providers working with voice assistants take privacy and security very seriously. So far, the challenge of using voice assistants for medical data has been ensuring HIPAA compliance. Now that Amazon has achieved HIPAA eligibility, doors are open for new opportunities to create HIPAA-compliant applications across smart speakers,” comments Bill Rogers. Another issue in privacy and security is the awareness and responsibility of users. Many safety features are available in the digital age: recommendations about safe passwords, multi-factor authentication, and more. “A good start is to turn on the security options offered by programs,” says Bill Rogers.

New job: voice designers

Voice technology is developing rapidly, and so is the need for new specialists: voice and conversational designers. These will be important professions, requiring deep understanding of how to map possible responses to a specific question. As Bill Rogers illustrates: “What you visually see and what you hear are not the same things. If you hear an explanation, you can obtain more information than if you read a long list. If you’re designing for an Alexa device that has a display, and a user asks about first aid, you can offer much more than just a list.”

Because the conversational dynamic differs from the written word, the design is specific, as is the testing. Something might look good on paper but not sound natural as a conversation. Prototyping tools are in development to address this testing problem.
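One way conversational designers prototype and test such flows is to define the conversation as data, so each turn can be read aloud and replayed by a script before anything is deployed to a device. The flow, states, and prompts below are invented for illustration, not taken from any real product:

```python
# Hedged sketch: a conversation flow defined as data, so a designer can
# replay turns in a script and judge whether they sound natural.
# States, utterances, and prompts are all illustrative assumptions.

FLOW = {
    "start": {
        "prompt": "Hi, how can I help you today?",
        "next": {"first aid": "first_aid", "visiting hours": "hours"},
    },
    "first_aid": {
        "prompt": "Is this for a burn, a cut, or something else?",
        "next": {},
    },
    "hours": {
        "prompt": "Visiting hours are 9am to 8pm daily.",
        "next": {},
    },
}

def run_turn(state: str, utterance: str) -> tuple[str, str]:
    """Return (next_state, prompt) for one conversational turn."""
    # unrecognized utterances keep the current state and re-prompt
    next_state = FLOW[state]["next"].get(utterance.lower(), state)
    return next_state, FLOW[next_state]["prompt"]

# Replay a short scripted conversation, as a prototyping tool might.
state = "start"
for utterance in ["first aid"]:
    state, prompt = run_turn(state, utterance)
    print(prompt)
```

Keeping the flow as plain data is what makes the “read it on paper vs. hear it in conversation” test cheap: the same definition drives both the written review and the spoken prototype.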

“People are starting to realize the impact solutions could have on patient satisfaction,” says Bill Rogers. Voice assistants can reduce the pressure on call centers. Virtual assistants can deliver good news and reduce the need for medical staff to call patients. Reminders that used to be simple messages are now full-fledged rich experiences.

In elderly care facilities, users have been shown to face far less friction in learning to use voice assistants than other digital tools. All this is why developers and doctors hope voice will play an important role in improving patient behavior. “The biggest value of virtual assistants is that the engagement can be more meaningful: it is frictionless and provides answers, whereas wearables just record data,” comments Bill Rogers.

Listen to the full conversation on iTunes.


Some questions addressed:

  • Voice assistants have a long history dating back to the 1960s, but development accelerated after Amazon Alexa entered the market in 2014. What has changed in voice technology since then?
  • How is voice digested by AI?
  • Music, weather, timer, radio, recipes … these are pretty risk-free uses of voice-assistant-supported searches. The rate of misheard words was at 23% in 2012 and is at 5% today, but even that can be significant in healthcare, where the standards and needs for accuracy are much higher. What are the current use cases of voice in healthcare?
  • Orbita offers the leading healthcare-focused platform for designing and building HIPAA-compliant virtual assistants that engage and support patients. Leading digital health innovators rely on Orbita, including Amgen, American Red Cross, Brigham and Women’s, Deloitte, Mayo Clinic, and Pillo Health. Can you explain what these use cases are and how voice assistants are used?
  • What is the process of developing a voice-assisted model, and how transferable are the models from one institution to the next?
  • What are all the new jobs required to develop voice technology? Conversation designers are one of them. What are they by profession, who is most suitable to become one?
  • We forgive doctors; we don’t forgive technology. Technology shouldn’t make mistakes, because it doesn’t get tired, it doesn’t have feelings, and we expect it to be flawless.
  • What is the role of natural language processing to recognize meaning even in speakers with strong accents?
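The accuracy figures mentioned above (23% in 2012, 5% today) are typically reported as word error rate (WER): the word-level edit distance between the transcript and a reference, divided by the reference length. A minimal, standard dynamic-programming implementation, included here purely to make the metric concrete:

```python
# Word error rate: edit distance over words, normalized by reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("call the nurse please", "call a nurse please"))  # 0.25
```

A 5% WER means roughly one word in twenty is wrong, which is why the article notes that even today’s rates can matter in high-stakes clinical settings.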

Faces Of Digital Health

Faces of Digital Health (ex Medicine Today on Digital Health) is a podcast about digital health, exploring how different healthcare systems adopt technologies. Its mission is to share insights for global healthcare improvements.
