Take a seat: the AI will be with you shortly
Medical AI technologies may look promising, but their advocates need to keep healthcare goals in mind
By Laura Carter, DataKind UK Ethics Committee
This blog is a summary of the discussions that took place at the DataKind UK ethics book club on AI and medicine, on 22nd April 2020. Views represented here are those of attendees at the book club.
Data and Coronavirus
By the time DataKind UK's ethics book club rolled around, our topic of AI and medicine felt pretty timely. It wasn't surprising that many of the discussions revolved around technological and data-driven responses to COVID-19. In groups, we discussed the contact-tracing methods being used by public health authorities around the world, and concerns that privacy might be a casualty of the public health response. While there might be legitimate arguments for prying into people's personal lives in the midst of a pandemic, we also wondered what happens after the crisis is over. Once we give up privacy, even for valid reasons, can we get it back? And in the meantime, what happens to the data that's been collected?
Participants also noted the potential of AI to help tackle misinformation, including around COVID-19. WhatsApp has become a vector for health information: not all of it is accurate, and some of it is harmful. But a couple of participants shared their experiences of using tools designed not only to help people check whether information is accurate, but also to help friends and family members make sense of conflicting messages.
The computer ‘says’ you’re unwell
Several of the pieces we read centred on the potential for machine learning to help diagnose health conditions, including breast cancer from screening images and depression from Facebook posts. Advocates for these technologies argue that, using data from people already diagnosed with these conditions, we can train machine learning algorithms to recognise similar patterns in new cases. In the best-case scenario, this could support earlier identification, leading to earlier treatment and support and ensuring the patient gets the best possible care.
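To make that pattern concrete, here is a minimal sketch of the kind of supervised-learning setup the advocates describe. This is a generic illustration, not any product discussed at the book club: the data is synthetic, and in practice the features would come from something like screening images or text.

```python
# Illustrative sketch only: a toy supervised classifier trained on
# labelled data, mirroring the diagnostic pattern described above.
# All data here is synthetic and the setup is hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # stand-in for features extracted from scans or posts
y = (X[:, 0] + rng.normal(size=1000)) > 1  # stand-in labels: 1 = diagnosed, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The model scores new cases; high-risk cases could be flagged for
# early review by a human clinician rather than auto-diagnosed.
print(classification_report(y_test, model.predict(X_test)))
```

The key design point, echoed in the discussion below, is that output like this is best treated as a flag for human review, not a diagnosis in itself.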
Participants were broadly supportive of decision assistance, even if sceptical of claims that machines alone could eventually diagnose illness: they noted that human medical professionals need to understand why an algorithm has reached a decision in order to judge when it is wrong, and to overrule it when necessary.
Discussion also turned to the incentives behind the use of algorithms. Do we aim to provide people with the highest standard of care, or to minimise the number of expensive procedures a person undergoes? Participants worried that introducing algorithms into healthcare systems might skew these incentives even further.
Access and bias in digital — and analogue — systems
Previous book club sessions have looked at how AI can exacerbate race and gender bias. Medicine is a science developed by humans, and it therefore includes human biases. Participants pointed out that when it comes to pain relief, women are less likely to have their pain taken seriously, and black people may be less likely to receive appropriate treatment. Introducing AI technologies into medicine doesn't necessarily alleviate these biases, particularly if the data they are trained on is biased. An image-processing program for recognising harmful skin lesions that is trained predominantly on pictures of white skin will perform far less effectively on images of darker skin, putting people from BAME communities at a disadvantage.
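One concrete way to surface this kind of disparity is to evaluate a trained model separately for each group rather than in aggregate. A minimal sketch, assuming a fitted scikit-learn-style `model` and a hypothetical `group` label (for example, skin tone) attached to each test record:

```python
# Sketch: per-group evaluation to surface performance disparities.
# `model`, `X_test`, `y_test`, and `group` are hypothetical stand-ins.
import numpy as np
from sklearn.metrics import recall_score

def recall_by_group(model, X_test, y_test, group):
    """Print sensitivity (recall) separately for each subgroup."""
    preds = model.predict(X_test)
    for g in np.unique(group):
        mask = group == g
        print(g, recall_score(y_test[mask], preds[mask]))
```

An aggregate accuracy score can look healthy while recall on an under-represented group is far lower, which is exactly the failure mode participants described.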
Book club attendees also suggested ways that AI and other technologies could improve access to healthcare. In countries where mental health conditions are stigmatised, the possibility of accessing services remotely instead of visiting a doctor in person could mean that more people feel able to seek out treatment. Remote GP services, or even chatbots where there is no personal interaction, could make healthcare more accessible for people who are uncomfortable accessing services because they fear discrimination.
What should medical AI aim for?
Few participants were completely opposed to the use of AI in medicine. There was strong support for robust regulation, and for solid frameworks on which medical apps should be built: medicine is a field that trains its practitioners for years, and the burden of testing and assessing which algorithms, apps, or technologies are appropriate for use shouldn't fall on individual patients.
Participants felt more comfortable with the idea of using AI for straightforward tasks, such as issuing repeat prescriptions, and perhaps for early screening of some conditions. While few people wanted to be seen by an entirely virtual doctor, there was support for reliable, trustworthy systems that ease the administrative burden on medical professionals and smooth the path between a patient and the healthcare they need. But it's important that the aim is the same for both patients and the medical professionals who treat them: the best possible healthcare.
Virtual book club
The DataKind book club went completely virtual this time, which meant participants could join from wherever in the world they were, from London to New York to Chennai. For those of you who were there, we'd love to hear how you found it! Join us for the next DataKind UK Ethics Book Club on 17 June, when we'll be discussing AI in the workplace: sign up online here!