A quick glance at the most important medical applications of deep learning
Introduction
There’s a lot of hype around Artificial Intelligence and its potential to revolutionise every industry and service. This is especially true for healthcare, where the use of AI will eventually affect all of us. Although much is made of the changes expected in the coming years, here we’ll look at how deep learning has already had a significant impact on healthcare by changing how we approach diagnosis, treatment planning, and drug discovery.
The goal here is to examine how deep learning is already improving our lives and evolving the way healthcare is delivered. We’ll explore the methods used and the impact they have, and look at the underlying technologies that put these methods into action.
How data powers transformation
Deep learning models generally need to be trained on large amounts of data, and the training and development of an algorithm depend on the type of dataset being used. So to understand the use of deep learning, we must first understand the data: the root of it all.
Computer vision
Computer vision is a field of AI focused on building models that interpret visual information. It is particularly useful in healthcare because so much medical data exists in this format. Algorithms can be built to analyse and interpret medical images such as X-rays, CT scans, MRIs, and histology slides. Deep learning models can perform tasks such as image segmentation, which isolates specific structures within an image for further analysis, and object detection, which identifies and marks structures of interest. In practice, this means that a task like disease diagnosis, which has always required focused effort from medical experts, can be done far more efficiently with the help of AI. The impact is significant not only because of the time saved but also because, in some cases, AI is more accurate than doctors. With an estimated 12 million patients misdiagnosed each year in the US alone, we can expect these tools to steadily improve patient outcomes as they mature.
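To make this a little more concrete, here is a minimal sketch of what a segmentation model can look like in code. It is not taken from any real product: the tiny encoder-decoder network, the random "scans", and the two-class (background vs. lesion) setup are all illustrative assumptions, standing in for the much larger architectures (such as U-Nets) trained on labelled studies in practice.

```python
# Minimal sketch of binary image segmentation in PyTorch.
# The tiny encoder-decoder and the random "scans" are placeholders;
# real systems use architectures like U-Net trained on labelled studies.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),           # 64 -> 128
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),                   # 2 classes: background / lesion
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch: 4 single-channel 128x128 "scans" with per-pixel labels.
scans = torch.randn(4, 1, 128, 128)
masks = torch.randint(0, 2, (4, 128, 128))

for step in range(5):
    logits = model(scans)                 # shape (4, 2, 128, 128)
    loss = loss_fn(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The output is a per-pixel class prediction, which is exactly what lets a downstream system highlight a suspected lesion on a scan for a radiologist to review.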
Natural Language Processing (NLP)
NLP models analyse text data and, in a medical context, are built primarily on electronic health records (EHRs). These algorithms are used to extract information from EHRs such as patient demographics, diagnosis codes, and treatment plans. This information can then be used for a variety of purposes, such as quality improvement or medical research studies. NLP has been particularly useful because algorithms can be far more effective than humans at identifying patterns and trends across huge volumes of patient data. Some NLP-based products even claim a 70% reduction in burnout, which has long been a major challenge for nurses and physicians.
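As a rough illustration of what "extracting information from EHRs" looks like in code, the sketch below runs a named-entity-recognition pipeline from the Hugging Face transformers library over an invented clinical note. The default general-purpose model used here is only a stand-in; a real system would use a model fine-tuned on clinical text.

```python
# Minimal sketch: pulling entities out of a free-text clinical note with a
# generic Hugging Face NER pipeline. The note is invented, and the default
# general-purpose model is only a stand-in for a clinically trained one.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")

note = (
    "Patient is a 67-year-old male admitted to St. Mary's Hospital with "
    "shortness of breath. History of type 2 diabetes, currently on metformin."
)

for entity in ner(note):
    print(f"{entity['entity_group']:<6} {entity['word']:<20} {entity['score']:.2f}")
```

The same pattern, with a clinically trained model and a mapping from extracted terms to diagnosis and billing codes, is what powers the coding and billing automation described next.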
NLP has also been integrated into healthcare systems to assist with coding and billing by automatically extracting the relevant information from EHRs, which reduces both manual data entry and the risk of errors. This lets hospitals make the most of their resources while making better-informed decisions about patient care. Improving efficiency in healthcare delivery is a simple but important example of how deep learning is changing healthcare.
Cancer prognosis and treatment
One of the biggest challenges doctors face after a cancer diagnosis is deciding the best course of treatment. The evolution of a tumour is difficult to predict, but deep learning models can be built to advise doctors on how a particular type of cancer is likely to progress over time. With this knowledge, doctors have the tools to create a more precise treatment plan, and the use of AI can go a step further. Deep learning can also be applied to genetic information to identify markers that indicate how a patient is likely to respond to a specific treatment. This increases the likelihood that patients receive the right treatment with fewer side effects.
This is a particularly strong use case for deep learning in healthcare, as there was previously no practical way to leverage genetic data to optimise cancer treatment.
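As a toy illustration of the idea, the sketch below fits a simple classifier over entirely synthetic gene-expression features to predict whether a patient responds to a given treatment. The data, the number of genes, and the choice of logistic regression are all assumptions for illustration; real models work with far larger panels, real cohorts, and careful validation.

```python
# Toy sketch: predicting treatment response from gene-expression features.
# The data is randomly generated purely for illustration; real pipelines use
# curated genomic panels, larger cohorts and rigorous cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_genes = 500, 40

X = rng.normal(size=(n_patients, n_genes))          # expression levels
# Pretend a handful of genes drive treatment response.
signal = X[:, :5].sum(axis=1)
y = (signal + rng.normal(scale=1.0, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")

# The learned coefficients hint at which genes act as response markers.
top_markers = np.argsort(np.abs(clf.coef_[0]))[::-1][:5]
print("Most influential gene indices:", top_markers)
```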
Drug discovery
Algorithms are not limited to patient data; they can also be trained on datasets of chemical compounds and their associated biological activities to identify new drug candidates. This allows pharmaceutical and biotechnology companies to search more efficiently for molecules that could lead to new drugs. In the next stage of the drug development cycle, deep learning can be used to optimise drug design by predicting the properties of newly formed compounds, such as pharmacokinetics and toxicity. These tests previously had to be done manually, which meant much longer development cycles. Because deep learning is used across different parts of the development pipeline, the entire drug discovery process becomes much quicker and more efficient. Considering that the average development life cycle in the US is 12 years and only 10% of drugs that reach the preclinical phase are approved by the FDA, saving a few years and improving the odds of success even slightly can be very important.
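To give a feel for what compound property prediction can look like, here is a small sketch that turns molecules (as SMILES strings) into fingerprints with RDKit and trains a classifier to predict a made-up "toxic" label. The molecules, labels, and the choice of a random forest are illustrative assumptions; production pipelines use large assay datasets and, increasingly, deep models over molecular graphs.

```python
# Toy sketch: predicting a property (here, an invented "toxic" label) from
# molecular fingerprints. SMILES strings and labels are illustrative only;
# real work uses large assay datasets and more careful featurisation.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC", "C1CCCCC1"]
labels = [0, 1, 0, 1, 0]   # invented toxicity labels for illustration

def featurise(smi):
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(list(fp))

X = np.stack([featurise(s) for s in smiles])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

candidate = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"   # ibuprofen, as a screening example
prob = model.predict_proba(featurise(candidate).reshape(1, -1))[0, 1]
print(f"Predicted probability of toxicity: {prob:.2f}")
```

The value comes from running this kind of prediction over millions of candidate molecules cheaply, so that only the most promising ones move on to expensive lab work.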
In many cases, AI researchers combine different data modalities to build more complex models that use multiple sources of information to make predictions. One example is the use of CT scans together with clinical data to predict the severity of a COVID-19 case.
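A common way to combine modalities is a two-branch network: one branch encodes the image, another encodes the tabular clinical variables, and the two representations are concatenated before the final prediction. The sketch below shows that pattern in PyTorch with placeholder data and shapes; it is not the architecture from the referenced study, just an illustration of the general idea.

```python
# Sketch of a two-branch multimodal model: a small CNN encodes a CT slice,
# an MLP encodes tabular clinical variables (age, vitals, labs, ...), and the
# fused features feed a severity classifier. Shapes and data are placeholders.
import torch
import torch.nn as nn

class MultimodalSeverityNet(nn.Module):
    def __init__(self, n_clinical=10, n_classes=3):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                              # -> 16 image features
        )
        self.clinical_branch = nn.Sequential(
            nn.Linear(n_clinical, 32), nn.ReLU(),      # -> 32 clinical features
        )
        self.head = nn.Linear(16 + 32, n_classes)      # mild / moderate / severe

    def forward(self, image, clinical):
        fused = torch.cat([self.image_branch(image),
                           self.clinical_branch(clinical)], dim=1)
        return self.head(fused)

model = MultimodalSeverityNet()
ct_slices = torch.randn(4, 1, 256, 256)   # placeholder CT slices
clinical = torch.randn(4, 10)             # placeholder clinical variables
logits = model(ct_slices, clinical)
print(logits.shape)                        # torch.Size([4, 3])
```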
AI products in action
Let’s talk about a few examples of products that are already being deployed. This is not an exhaustive list but rather a look at some of the most impactful products.
- Enlitic has built a healthcare platform for applying artificial intelligence across the entire lifecycle, with tools for data standardisation, retrospective and real-time analysis, research, risk mitigation, and workflow simplification. This helps hospitals save a great deal of valuable time.
- IBM Watson Health is a subsidiary of IBM that has developed AI solutions using almost all of the technologies discussed above. Its tools support AI-assisted decision-making in 9 of the top 10 hospitals in the US.
- Owkin has created a diagnostic tool that helps pathologists determine the risk of relapse in breast cancer patients, enabling doctors to create better treatment plans as they gain more visibility into likely patient outcomes.
- NVIDIA Clara delivers AI tools aimed specifically at researchers. These are designed to accelerate drug discovery and support the missions of other biotech companies.
Going from research and modelling to a product actually used in hospitals is a very long journey. Even after the scientific challenges have been overcome, many legal and financial barriers must be cleared before these tools can be deployed. This is largely due to the strict regulations around healthcare combined with a lack of established legal frameworks for AI.
This is another reason why biotechnology companies are so capital-intensive: it can take many years of work before they generate revenue from their products. Despite the fact that most of these companies are likely to fail, VCs continue to pour billions into biotech, as the risk can be worth the reward.
Next steps and challenges
These products are only the beginning and serve as examples of the massive changes headed our way. Although most people involved are excited about the innovation on the horizon, it’s important to tread carefully and ask some difficult questions as integrating AI into everyday processes becomes the new normal in hospitals and clinical trials.
As these models rely on data, questions of bioethics about the source and use of data come into play.
- Who owns the data?
The collection of large amounts of personal data by AI systems raises concerns about privacy and data security. Privacy-preserving technologies such as federated learning have been proposed to mitigate this problem (a minimal sketch of the idea appears after this list), but the question of using patient data for AI is still not fully resolved.
- Bias and discrimination
AI algorithms can perpetuate existing biases in their training data, leading to biased outcomes and solutions that may not generalise. Treatments may end up optimised for only a subset of the population.
- Who is responsible?
It’s not clear who should take responsibility when things go wrong. This is an important issue, as the decisions these models inform have huge consequences. Would the hospitals be held accountable, or the companies behind the products?
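To give a sense of the federated learning idea mentioned above: each hospital trains a copy of the model on its own data, and only the model weights, never the patient records, are sent back and averaged. The sketch below shows one round of this "federated averaging" pattern with toy data; it is a simplification of real frameworks, which add secure aggregation, many training rounds, and privacy accounting.

```python
# Toy sketch of federated averaging: each "hospital" trains a local copy of a
# model on its own data, and only the weights are averaged centrally, so raw
# patient records never leave the site. Real frameworks (e.g. Flower, NVIDIA
# FLARE) add secure aggregation, many rounds, and privacy guarantees.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Linear(5, 1)     # trivial model standing in for something larger

def local_update(model, X, y, epochs=5):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model.state_dict()

# Each hospital's data stays local; only state_dicts travel.
hospital_data = [(torch.randn(32, 5), torch.randn(32, 1)) for _ in range(3)]

global_model = make_model()
local_weights = [local_update(global_model, X, y) for X, y in hospital_data]

# Federated averaging: element-wise mean of the local weights.
avg_state = {
    key: torch.stack([w[key] for w in local_weights]).mean(dim=0)
    for key in local_weights[0]
}
global_model.load_state_dict(avg_state)
```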
As AI advances and penetrates every aspect of the healthcare system, there has been a big push for open-source healthcare to help establish standards and enable collaboration. Collaboration-enabling technology matters because it allows data scientists and doctors to bridge the knowledge gap and work together to maximise their productivity.
Nonetheless, these are very exciting times to be a human, as we can expect to see some dramatic paradigm shifts within the next decade. Some researchers, such as David Sinclair at Harvard, are even using AI to take on the challenge of ageing itself.
REFERENCES:
https://www.mckinsey.com/industries/life-sciences/our-insights/what-are-the-biotech-investment-themes-that-will-shape-the-industry
https://vial.com/blog/articles/drug-development-lifecycle/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6716335/
https://www.ge.com/news/reports/software-please-doctors-looking-ai-speed-diagnosis
https://anolytics.home.blog/2020/07/20/what-are-the-medical-data-sets-that-i-can-use-to-train-ai-for-medical-diagnosis/