The new healthcare

Lewis Lloyd
5 min read · Jun 6, 2017


There’s a lot of buzz around the application of artificial intelligence (AI) to healthcare. At a time when health services around the world are under increasing strain, this is understandable. The prospect of lower costs, shorter waiting times, better treatment and happier doctors appeals to politicians and the public regardless of party affiliation.

AI could help at multiple levels. Mark Zuckerberg, Google DeepMind and Oxford University have all pointed to the potential for machine learning to improve treatment quality, capitalising on vast quantities of health data to identify the causes of and new cures for diseases. In hospitals, it is hoped that introducing an element of automation to the process of analysing test results might help over-worked doctors diagnose patients more quickly and accurately. Apps such as HealthTap’s Dr. A.I. (recently built into Amazon’s Alexa) and Babylon offer automated advice, taking some of the pressure off GPs and A&E departments.

As with any new technology, the above will bring a range of benefits and potential pitfalls, and none of them are done justice by the quick mention here. But I’m more curious about the use of AI to help pre-empt health problems, beyond aiding their treatment or diagnosis. A service that looks to prevent you from falling ill in the first place would go even further towards making universal free healthcare affordable, and have a much more revolutionary impact on how we think about what a health service is.

The idea is this. Millions of people already feed data on their physical condition (heart rate, weight, menstrual cycle) and lifestyle (how much they move, eat, sleep, and so on) into their phones and watches, whether intentionally or not. Even today, it would seem plausible that an algorithm could be trained to identify patterns in this data that correlate with the health problems reported to Dr. A.I., or an equivalent. People could then be alerted when their data indicate they might be susceptible to illness, with some sort of intervention recommended.

Although none of the major players are openly talking about this as a possibility, the foundations either have been or are being laid. Apple appear to be leading the silent charge in this direction. iPhones have shipped for years with Apple’s Health app, which cleverly brings together data from any third-party health-tracking apps you might use, and which also lets you store your healthcare records (more data to fuel an algorithm). The latest Apple Watch advert puts its fitness-tracking capabilities front and centre, and rumours that Apple are trying to develop a non-invasive way of monitoring blood glucose levels further reinforce the impression of a company thirsty for our health data.

An algorithm built by Apple could also exploit a wealth of other potentially informative data sources. With access to your current location, environmental conditions — temperature, air pollution, pollen count — could be taken into consideration. Your mood — a possible indicator of both physical and mental health — could be inferred from sentiment analysis of your iMessages and emails. Similarly, the busyness of your calendar, or the nature of the events in it, may correlate with changes in health. The EpiWatch app is trying to learn to predict epileptic seizures before they happen, but Apple’s predictions and accompanying recommendations could eventually be much more far-reaching.

This prompts a question: if your phone suggested you get an early night because you have a cold coming on, would you listen? If it recommended you cycle rather than drive to work, because your weight is creeping up dangerously, would you dig a bike out of the shed? Perhaps not. After all, when my Garmin tells me to move, I turn it off. I would posit that most people know when they’re being unhealthy, and a machine reminding them of it would just make them resentful.

However, one of the things this sort of “healthware” could learn is the type of prompt you respond to best. Perhaps you find cheery motivational messages with smiley faces and exclamation marks irritating, but listen when the health risks of your current trajectory are bluntly stated — or vice versa. Similarly, you might prefer daily suggestions to keep you looking and feeling “great”, or updates only when you appear to be at risk of a problem. Psychometric inferences akin to those apparently made by Cambridge Analytica could match you with an initial style, with further tweaks made to optimise your responsiveness over time. Besides, if the system were consistently shown to be effective, and adopters led healthier, more productive and longer lives, it would be hard to ignore.

It could also become difficult to opt out of such a service. If universal free healthcare seems unaffordable, governments might make using healthware a prerequisite for access. Much as we already give up our data in exchange for “free” services (ahem, Facebook), your health data would then be a form of tax: pay, and you can use the NHS; don’t pay, and you’ll have to go private. Even in the private sphere, though, health insurers may offer lower premiums for those using this sort of system, just as car insurers do for people willing to have tracking devices monitor their driving.

The bigger question then becomes whether or not this would actually be desirable. In terms of the functioning of health services themselves, would further reliance on technology not make them even more fragile, given the recent hacking debacle? Data privacy would also be a concern, particularly in light of this report from last year (building on a Symantec report from 2014), which found a range of fitness trackers to have astonishingly poor security and little transparency with regard to the data collected and how it’s used.

Even more disquieting is the question of control. CCTV cameras and the rise of GCHQ represent one form of tracking and surveillance, but health trackers (even in the less sophisticated form they take now) serve as another, arguably more insidious means of regulating people’s behaviours. The scenario developed above could see great improvements in public health, but individual autonomy would suffer, with all manner of decisions outsourced to algorithms designed to fit our lifestyles to a desirable norm established by someone else.

It’ll take years to develop systems that are sufficiently accurate and generalisable across people and health conditions to be commercially viable. But with “digital behaviour change” programmes already paving the way, increasing amounts of salient data, the technology to process that data, and potentially huge rewards for whoever can do so successfully, I suspect that it’s coming — and within the next decade. Rather than simply being swept along, we might want to think about how to balance the prospect of better health with the freedom to live as we wish, and who should decide where to draw the line.

For more detailed thoughts on the feasibility of the above, see here.


Lewis Lloyd

Researcher on tech and bits of Brexit at the Institute for Government