Paging Dr. Robot
Don’t worry — your next doctor isn’t a robot. But as AI enters both the operating room and your living room, we’re going to have to answer some difficult questions.
Right now, a lot of the media coverage on AI in health care falls into two categories: feel-good success stories or horrifying nightmares.
The successes might be a surgeon-controlled pair of robotic arms stitching the skin of a grape back together, or telepresence technology helping people in underserved areas talk to therapists. The nightmare scenarios might be a popular search engine amassing millions of people’s health records without their knowledge, or a widely adopted algorithm in the American health care system found to be biased against black patients.
Incorporating artificial intelligence into health care has the potential to streamline practices, fill staffing gaps, help doctors improve their diagnostic practices and save lives. At the same time, the most pressing issues in our current technological landscape — from algorithmic bias and industry disruption to worker displacement and the AI black box — are playing out in real time in the health care sector. The decisions we make about how these technologies are implemented in health care may well shape how they’re used and regulated in other sectors. And when it comes to medicine, the stakes couldn’t be higher.
It’s easy to worry that putting an assistive robot in a hospital or involving an algorithm in a serious treatment decision is going to make everything cold, detached, and dispassionate — the very same negative attributes we often ascribe to artificial intelligence. But don’t worry: your next doctor is probably not going to be a robot.
“Machine learning is good at one thing, which is prediction,” said Zachary Chase Lipton, professor of business technologies and machine learning at Carnegie Mellon, where he researches the use of machine learning in health care. “It gives us something like: is there a tumor or is there not a tumor, given the image? It doesn’t tell us why [we should] give a treatment that we historically wouldn’t have given. It doesn’t [help us] detect a tumor in the future. It doesn’t tell us how you make structural changes to the health care system. So, when people get carried away like AI is taking over everything, it’s more like we’re plugging it into these narrow places.”
To give one example of what Lipton is talking about: image recognition and other deep learning techniques have achieved high accuracy in tumor diagnosis — sometimes higher than human doctors. That success has fueled mounting anxiety that doctors will simply be replaced by these algorithms.
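To make Lipton’s point concrete, here is a minimal sketch of what such a narrow predictor looks like in practice. It is an illustration only: the architecture choice, the untrained weights, and the "scan.png" path are placeholders, not anything from the episode.

```python
# Minimal sketch: a binary "tumor / no tumor" image classifier.
# The architecture and image path are hypothetical placeholders;
# a real system would load weights trained on clinical data.
import torch
from torchvision import models, transforms
from PIL import Image

# A small, standard network repurposed to output a single logit.
model = models.resnet18(num_classes=1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

image = preprocess(Image.open("scan.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logit = model(image)
    prob = torch.sigmoid(logit).item()

# The model answers exactly one narrow question, "is there a tumor?",
# and says nothing about treatment, causes, or the wider system.
print(f"Estimated probability of tumor: {prob:.2f}")
```

Note how little the model returns: a single number. Everything Lipton lists as out of scope, why to treat, what happens next, how to restructure care, stays with the humans around it.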
According to Adam Perer, professor of human-computer interaction at Carnegie Mellon, artificial intelligence in health care will be implemented much like the human-in-the-loop systems we’ve previously discussed. He says that input from machine learning tools boosts the cognitive abilities of clinicians — but does not replicate human analysis.
“We’re nowhere near being able to replace [doctors]. Ultimately, they need to make the decisions themselves,” said Perer.
Professors Lipton and Perer are working to improve the way that clinicians interact with and learn from the AI that is supplementing their work. As these technologies continue to evolve, they are going to reinforce the importance of fundamentally human capacities like empathy, emotional intelligence, and collaboration. In that spirit, Lipton is more interested in focusing on algorithms that help human clinicians become better doctors, rather than just trying to outdo them.
“We can build a computer algorithm that sees something that a doctor might not, [but] how do we not just say, ‘okay, we did better in this class of images,’ but actually cycle that knowledge back?” said Lipton. Most doctors aren’t trained to read and write code. And remember: a lot of what goes on inside an algorithm happens in a black box.
Perer says that’s the most interesting challenge for someone like him, whose expertise is in creating visual interactive systems that help users make sense of big data.
“How do we explain what this algorithm is doing to the doctors so they can actually get better at detecting cancer by understanding what the algorithm found that they couldn’t find?” said Perer.
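One common way to surface what a model “found,” offered here purely as an illustration and not as Perer’s actual system, is occlusion sensitivity: hide one region of the image at a time, re-score it, and see which regions the prediction depended on.

```python
# Illustrative sketch of occlusion sensitivity, one generic
# explanation technique. It assumes the classifier above, which
# outputs a single logit for a batch of images.
import torch

def occlusion_map(model, image, patch=32, stride=32):
    """Return a grid of probability drops when each region is hidden."""
    model.eval()
    with torch.no_grad():
        base = torch.sigmoid(model(image)).item()
    _, _, h, w = image.shape
    heat = torch.zeros(h // stride, w // stride)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            occluded = image.clone()
            occluded[:, :, i:i + patch, j:j + patch] = 0.5  # gray patch
            with torch.no_grad():
                p = torch.sigmoid(model(occluded)).item()
            # A big drop in confidence means this region mattered.
            heat[i // stride, j // stride] = base - p
    return heat
```

The resulting grid can be rendered as a heatmap over the original scan, giving a clinician something visual to interrogate: the regions the model leaned on most, which may or may not match what a trained eye would look at.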
Even in the most general sense, it’s vital to be able to interpret the results of health care algorithms, and to understand the larger cultural context they may not capture, to ensure that they are helpful and not harmful.
In the ninth episode of Consequential, we take a deep dive into AI’s impact on health care. In addition to speaking more with Lipton and Perer about their work on the technical side, we hear from the Block Center’s chief ethicist, CMU professor of philosophy and psychology David Danks, about the need to develop these algorithms in ways that reflect human values, broaden access to care, and don’t leave certain populations of patients behind.
“It’s such an exciting and powerful area, because health care touches every one of us directly in the form of our own health and in the form of the health of our loved ones. But also indirectly because it is such a major sector of our economy and our lives,” said Danks.
We also check back in this week with CMU computer science professor Tom Mitchell, the Block Center’s chief technologist, who breaks down for us some of the fundamental differences between how humans learn, and how machines do — and why that matters in the context of health care and medical data. Mitchell also shares some important insights into how the U.S. health care system can improve its data practices and standards, and some of the exciting innovations that could follow.
Paging Dr. Robot is available now on Apple Podcasts, Spotify, Stitcher or wherever you listen to podcasts!