Cracking the Black Box of Healthcare

Martin Holm Jensen
LEO Innovation Lab
Aug 23, 2018 · 7 min read

Deep learning models, or “black boxes”, are increasingly becoming a focal point of technological development in healthcare. Research in deep learning often combines various health data streams, creating the potential to redefine healthcare delivery and improve patient care. Such solutions, however, can struggle with the issue of explainability, and thereby also adoption. The processes that go on inside the black box are so complex that we can’t always inherently understand or explain them. The question is: Do we need to?

From self-driving cars to online shopping, artificial intelligence is continuing to expand and take hold within a variety of aspects of our lives — including our health.

Imagine being able to receive a diagnosis or a new targeted treatment within seconds when you’ve hit a wall in your recovery — all drawn from the vast amounts of genomic, molecular, and behavioural data that are increasingly being amassed.

We’re talking about a predictive model in the shape of a black box. Not the one you find in airplanes (which, by the way, is orange), but the intricate software and complex algorithms that could take the healthcare system to the next level. It is not without its challenges, however, as this tech walks a thin line between paradox and possibility.

One of the major issues in this conversation is explainability — or the lack thereof. The technology has existed for decades but has reached a level of complexity where we can’t always trace a clean line between data and result — hence the name “black box”. We have a system with inputs and outputs, but we’re somewhat in the dark about what precisely goes on inside.

Today, we’re able to use this technology to capture complex statistical patterns, and it’s increasingly becoming an integrated part of health tech development through its ability to identify correlations between previously unassociated data points.

What’s the big deal about black boxes?

What these models offer is automation, or support for humans in their decision-making. Decisions currently made by humans are replaced or supplemented by a coded system, which in many instances can be more reliable than a healthcare professional acting alone.

How can this be possible? The vast amount of digital patient data now available is fuelling this innovation. Imagine going to your doctor after a treatment for a skin condition has failed. The doctor will probably look at your medical history, consider what your rash looked like during your last consultation and now, and ask whether you stayed true to your topical medication. That’s about four factors. Black box systems can make recommendations based on far more behavioural, genetic, and demographic factors — and do so within seconds.

This technology does not replace healthcare professionals but instead enables them to provide better care through increased accuracy in their clinical assessments. The implications are an improved approach to patient care and the possibility of new treatment opportunities, discoveries, and automated second opinions. Humans are complex, and with the rapid increase in the amount of health data being amassed, an algorithm can better analyse the unique relationships between many factors and determine the most likely diagnosis, the best treatment, or whether to refer someone to a specialist. This opens the door to more effective treatments and better use of the healthcare professional’s time — or even aids in the discovery of new drugs or new uses for existing ones.
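
To make this concrete, here is a minimal sketch of the idea in Python, using entirely synthetic data and a generic off-the-shelf model; nothing here reflects a real clinical system, which would need validated data and regulatory scrutiny.

```python
# Minimal sketch: a model that weighs dozens of patient factors at once.
# All data here is synthetic and all names hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical patient records: 40 behavioural, genetic, and demographic
# factors, far more than the handful a single consultation covers.
n_patients, n_factors = 1000, 40
X = rng.normal(size=(n_patients, n_factors))
# Hypothetical outcome: did the topical treatment succeed?
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Once trained, the model scores a new patient within milliseconds.
print("held-out accuracy:", model.score(X_test, y_test))
```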

One size doesn’t fit all

Standardised treatments are often the norm within the healthcare system, but what works on you may not necessarily work on me. Patients may share a diagnosis, such as depression or cancer, but that’s about where the similarities end. Humans and diseases are incredibly diverse, and many factors affect how people respond to different treatments. Your genes, for example, influence how you respond to a drug.

Enter personalised medicine — a way of tailoring treatments to each individual patient. The main ingredient in this ambitious endeavour is (you guessed it) data. Lots and lots of enriched data. Here the black boxes come into play: algorithms can explore and establish the unique correlations that, in the end, call for one treatment over another.
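
As an illustration only (made-up data, and a deliberately simple "T-learner" style setup rather than anything in production), here is how an algorithm might weigh those correlations to pick between two treatments for an individual patient:

```python
# Toy personalised-medicine sketch: fit one outcome model per treatment on
# historical records, then recommend whichever scores higher for a new
# patient. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 8))                # hypothetical patient factors
treatment = rng.integers(0, 2, size=n)     # 0 = drug A, 1 = drug B
# Synthetic ground truth: drug B works better when factor 0 is high.
p_success = 1 / (1 + np.exp(-(2 * treatment - 1) * X[:, 0]))
success = (rng.random(n) < p_success).astype(int)

model_a = LogisticRegression().fit(X[treatment == 0], success[treatment == 0])
model_b = LogisticRegression().fit(X[treatment == 1], success[treatment == 1])

new_patient = rng.normal(size=(1, 8))
p_a = model_a.predict_proba(new_patient)[0, 1]
p_b = model_b.predict_proba(new_patient)[0, 1]
print("recommend:", "drug B" if p_b > p_a else "drug A", f"({p_a:.2f} vs {p_b:.2f})")
```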

The Netflix of healthcare

So how do we understand black box algorithms? Take Netflix as an example. Through machine learning technology, namely collaborative filtering, Netflix can predict which movies and series you might be interested in, based on data such as your ratings, the movies you’ve watched, the series you’ve binged, and the same signals from like-minded users, whose highly rated titles become recommendations for you. The path to the predictions can’t be explicitly understood, since a variety of data and algorithms come into play when making these suggestions.
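
A toy version helps make this tangible. The ratings matrix below is invented and Netflix’s real system is vastly more sophisticated, but the core collaborative-filtering idea of weighting like-minded users’ ratings looks roughly like this:

```python
# Toy collaborative filtering: predict a user's rating of an unseen title
# from the ratings of similar users. The matrix is made up.
import numpy as np

# Rows = users, columns = titles; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 4, 0],
    [1, 0, 2, 5],
    [0, 1, 1, 4],
], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)          # compare only co-rated titles
    if not mask.any():
        return 0.0
    return u[mask] @ v[mask] / (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask]))

def predict(user, title):
    # Weight other users' ratings of this title by taste similarity.
    sims = np.array([cosine(ratings[user], ratings[v]) for v in range(len(ratings))])
    rated = ratings[:, title] > 0
    rated[user] = False
    if sims[rated].sum() == 0:
        return 0.0
    return sims[rated] @ ratings[rated, title] / sims[rated].sum()

print(predict(user=0, title=2))  # predicted rating for user 0's unseen title
```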

The same goes for healthcare. Through a variety of markers, data might favour one type of treatment over another. Why? We don’t know because the model won’t tell us — or more accurately, it can’t tell us. The intricate paths are not purposely hidden but are, in a sense, incomprehensible. At least for the human mind.

We find another example in good old Facebook. The social network employs artificial intelligence to help marketers take their message distribution to the next level through “lookalike audiences”: based on the data available, Facebook makes it possible to find and target users who share attributes, such as interests and demographic information, with a company’s existing customer base.
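
Sketched in code, the lookalike idea is essentially a similarity search over user attributes. The features and numbers below are invented; Facebook’s actual system is proprietary and far richer:

```python
# Toy lookalike-audience sketch: given seed customers, rank the wider user
# base by similarity to them. All features are synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Hypothetical user attributes (interests, demographics) as vectors.
all_users = rng.normal(size=(10_000, 16))
# Seed list: noisy copies of some existing users, standing in for customers.
seed_customers = all_users[:100] + rng.normal(scale=0.1, size=(100, 16))

# For each seed customer, pull the most similar users from the full base
# (a real system would exclude users who are already customers).
nn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(all_users)
_, idx = nn.kneighbors(seed_customers)
lookalikes = np.unique(idx)
print(f"{lookalikes.size} candidate lookalike users found")
```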

A crucial difference between Netflix recommendations, Facebook marketing, and healthcare is the dataset. Netflix may have millions of subscribers worldwide, but the variables are limited compared to those relevant for healthcare databases. There’s also far more at stake. Your next treatment, after all, has far more impact on your quality of life than whether your next TV show is Orange is the New Black or Black Mirror.

How do you treat what you can’t explain?

Remember math class back in school? Getting the correct answer wasn’t enough to earn you full marks. You had to show your workings too, because the road is just as important as the destination.

But when big data meets health, we run into a challenge. Black box models are concerned chiefly with the end result. We put in data and, through complex algorithms, receive an answer without actually knowing what went on inside the vortex of the black box. Great for accuracy and efficiency — not so great if we want to justify the predictions.

The issue of explainability is a particularly sensitive one within healthcare. We’re dealing with people and their lives, and patients want to know why we pick one treatment or referral over another. Physicians want to be confident in relying on the predictions, which is hard if they don’t understand the AI’s reasoning — particularly when someone’s life is in your hands, as with fatal conditions like cancer.
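
There are, however, ways to pry the lid open a little. One common, model-agnostic technique is permutation importance: shuffle one input factor at a time and measure how much the model’s accuracy drops. The sketch below uses synthetic data and a generic model purely to show the mechanics:

```python
# Permutation importance: a model-agnostic hint at which factors drive a
# black box's predictions. Data and model are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each factor in turn and record the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"factor {i}: importance {result.importances_mean[i]:.3f}")
```

Hints like these don’t amount to a full chain of reasoning, but they can give a physician some sense of which factors drove a prediction.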

Are black boxes and the human mind so different?

There are similarities between the black box and the human mind. Human cognitive functions are similarly difficult to understand or explain. Physicians make decisions based on a number of variables, but the process involves a great deal of tacit knowledge. Physicians are rarely able to break down every single step in their decision-making.

The same goes for the black box. We may not explicitly be able to follow the chain of reasoning but it’s there — hidden in the algorithms and training data.

The major difference between the physician and the black box comes down to human elements, such as trust and comfort, which no black box can provide. Even if the black box operates with far more accuracy and a wider dataset than a human being, we’re more prone to trust the authority of the human expert in front of us rather than cold, impersonal, computerised models. We trust and find comfort in the company of humans because of our emotions and ability to empathise. Our social intelligence is what makes us human, but how do we teach or code social intelligence into a machine? Is this even possible?

We can measure and compare the predictions made by humans and a coded system; however, the level of comfort is another story because its formalisation is far beyond the reach of any mathematical model we’ve developed.

Rethinking technology and ethics

Black box models have the potential to decrease costs and increase the quality of healthcare, but for them to truly gain traction, we need to strike the right balance between explainability and the potential gain from adopting new technology.

We have AI technology like self-driving cars that could reduce pollution in cities worldwide and cut the number of casualties from traffic accidents; yet much of the talk revolves around what happens if a person is hit by a self-driving car. A valid discussion, but a small one in the bigger, more utilitarian picture. The same goes for AI in healthcare.

It’s therefore our responsibility to show not only that this tech works, but that doctors and patients can trust it to make a significant impact on human lives — if we let it.
