XAI in MedTech
Digitization in Healthcare
“The process of tracing back chains of infections was often still done with pen and paper”
The global COVID-19 pandemic revealed inefficiencies across many, if not most, industries.
Although Germany has handled the situation with relative success, we were confronted with the harsh reality that the status quo might not always lead the way into a bright future. This also holds true in healthcare.
Clear examples of such inefficient processes could be seen in parts of the German public healthcare system. Until recently, there had been no need to manage outbreaks more efficiently, as there had never been more than a few dozen cases at once. This changed at the beginning of the year, when the global community was confronted with the novel SARS-CoV-2 virus. Chains of infection were often still traced with pen and paper and afterwards transmitted via fax between regional healthcare administration offices. One way of coping with the rapidly rising case numbers was to hire more staff, but this proved to be a difficult task.
As in the case described above, many more instances of bureaucratic or tedious, repetitive work exist within the healthcare sector, and not all of them can be digitized as easily as the contact-tracing workflow that quarano addressed. Artificial intelligence, specifically deep learning, coupled with the falling price of computational power, now has greater potential than ever to automate tasks of high inherent complexity.
The IBM cancer project (https://www.research.ibm.com/cancer/) is a great example: its researchers have been working for years on applying image-recognition algorithms to oncology data. Another prominent instance of successfully putting AI into practice in medicine is ADA (https://ada.com/de/), a convenient smartphone app for symptom assessment.
Layer-wise Relevance Propagation
Side note: Explainable Artificial Intelligence, or XAI for short, is an umbrella term for methods and techniques that aim, as the name suggests, to demystify the inner workings of machine learning models. This helps, for instance, to reduce the risk of accidental biases or errors occurring when a model is used in production.
Layer-wise relevance propagation (LRP) is a novel technique within Explainable Artificial Intelligence (XAI) that, in essence, reverses the math responsible for producing a prediction, making it possible to understand the reasoning behind the decision-making process of neural networks.
In contrast, “traditional” deep learning models are referred to as black boxes: besides the prediction, no further information is returned, and blind trust is required when applying such a system in the real world. In other, less critical industries this does not pose an imminent threat. In healthcare, however, a missed error could prove fatal.
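To make the idea of "reversing the math" concrete, the core of LRP can be sketched in a few lines. The snippet below is purely illustrative, not our production implementation: it applies the so-called epsilon rule to a toy two-layer ReLU network with random weights, redistributing the output score backwards through the layers in proportion to how much each input contributed.

```python
import numpy as np

# Toy two-layer ReLU network with random, illustrative weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input (4 features) -> hidden (3 units)
W2 = rng.normal(size=(3, 1))   # hidden -> output score

x = np.array([1.0, 0.5, -0.3, 2.0])

# Forward pass (biases omitted for brevity).
a1 = np.maximum(0, x @ W1)     # hidden activations after ReLU
y = a1 @ W2                    # the network's prediction score

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a layer's outputs onto its
    inputs, in proportion to each contribution z_ij = a_i * w_ij
    (the LRP epsilon rule)."""
    z = a_in[:, None] * W                 # individual contributions
    s = z.sum(axis=0)                     # pre-activations of the layer
    s = s + eps * np.sign(s)              # stabilizer avoids division by 0
    return (z * (R_out / s)).sum(axis=1)  # relevance per input

R2 = y.flatten()               # start: all relevance sits at the output
R1 = lrp_epsilon(a1, W2, R2)   # propagate back to the hidden layer
R0 = lrp_epsilon(x, W1, R1)    # propagate back to the input features

# Relevance is (approximately) conserved: each layer's relevance
# sums to the output score, so R0 explains where the prediction
# "came from" feature by feature.
print(R0)
print(R0.sum(), y.item())
```

The key property on display is conservation: the per-feature scores in `R0` sum back up to (approximately) the prediction itself, which is what lets a reviewer see how strongly each input pushed the decision.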
A practical example
To give an example of what this explainability can look like in the field, imagine a scenario where many multi-page medical documents need to be read and informed decisions made based on the information they contain. In machine learning terms, this is a Natural Language Processing (NLP) problem. XAI adds smart highlighting to the solution, enabling medical professionals, or other domain experts, to immediately spot errors and intervene manually while still benefiting from the superior processing speed of an automated system. This is currently one of our main projects: our team of AI experts at DeepMetis is validating the technical feasibility of layer-wise relevance propagation in the context of data mining for medical studies. More information on our research will be released in the future.
The way forward
Adding explainability to machine learning offers immense potential for industries where certainty and verification are key, especially when lives depend on the decisions made. An increasing number of organizations are now successfully exploring XAI in medicine and healthcare. We at DeepMetis believe that explainable models have enormous potential to alleviate, or at least reduce, the risks posed by opaque machine learning models across all domains and industries.
Ferdinand is a high-tech R&D enthusiast and one of the co-founders of DeepMetis (www.deepmetis.com), an impact-driven AI and quantum computing research firm based in Berlin. He is also the co-initiator and chairman of quarano (www.quarano.de), an NGO providing software that digitizes processes in the public healthcare sector in light of the COVID-19 crisis.