Published in XAI in MedTech

Digitization in Healthcare

“The process of tracing back chains of infections was often still done with pen and paper”

The global COVID-19 pandemic revealed inefficiencies across many, if not most, industries.

Although Germany has handled the situation with relative success, we were confronted with the harsh reality that the status quo will not always carry us into a bright future. This also holds true in the healthcare space.

Clear examples of such inefficient processes could be seen in parts of the German public healthcare system. Until then, there had been no need to make outbreak management more efficient, as there had never been more than a few dozen simultaneous cases. That changed at the beginning of the year, when the global community was confronted with the novel SARS-CoV-2 virus. Chains of infection were often still traced with pen and paper and afterwards transmitted via fax between regional healthcare administration offices. One way of coping with the rapidly increasing number of infections was to hire more staff, but this proved to be a difficult task.

As in the case above, many more instances of bureaucratic or tediously repetitive work exist within the healthcare sector, and not all of them can be digitized as easily as we did with quarano in the previous scenario. Artificial intelligence, specifically deep learning coupled with the falling cost of computational power, now has greater potential than ever before to automate tasks of high inherent complexity.

The IBM cancer project is a great example of that: for years, the company has been applying image recognition algorithms to oncology data. Another prominent instance of successfully putting AI into practice in medicine is ADA, a convenient smartphone app for symptom assessment.

Layer-wise Relevance Propagation

Side note: Explainable Artificial Intelligence, or XAI for short, is an umbrella term for methods and techniques that aim to, as the name suggests, demystify the inner workings of machine learning models. This helps, for instance, to reduce the risk of accidental biases or errors occurring when a model is used in production.

Layer-wise relevance propagation (LRP) is a novel technique within Explainable Artificial Intelligence (XAI). In essence, it reverses the math responsible for making a prediction, allowing us to understand the reasoning behind a neural network's decision-making process.

In contrast, “traditional” deep learning algorithms are referred to as black-box models: besides the prediction, no further information is returned, and blind trust is required when applying such a system in the real world. In other, less critical industries this does not pose an imminent threat. In healthcare, however, a missed error could prove fatal.

Overview of layer-wise relevance propagation. Taken from Layer-wise Relevance Propagation for Deep Neural Network Architectures by Binder et al.
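To make the idea concrete, here is a minimal sketch of the LRP-epsilon rule on a toy two-layer ReLU network. All weights, inputs, and the network shape are illustrative values chosen for this sketch, not from the article or any real model. Relevance starts at the output and is redistributed layer by layer in proportion to each neuron's contribution, so the total relevance is (approximately) conserved back to the input features.

```python
import numpy as np

# Toy two-layer ReLU network with fixed, made-up weights.
W1 = np.array([[1.0, -0.5], [0.5, 1.0], [-1.0, 0.5]])  # 3 inputs -> 2 hidden
W2 = np.array([[1.0], [0.5]])                           # 2 hidden -> 1 output

def forward(x):
    a1 = np.maximum(0, x @ W1)   # hidden activations (ReLU)
    out = a1 @ W2                # network output (no final nonlinearity)
    return a1, out

def lrp_layer(a, W, R_next, eps=1e-6):
    """Redistribute relevance R_next from a layer's output back to its
    input, using the LRP-epsilon rule:
        R_i = a_i * sum_j (w_ij / (z_j + eps)) * R_j
    where z_j are the pre-activations of the next layer."""
    z = a @ W                             # pre-activations
    s = R_next / (z + eps * np.sign(z))   # stabilized contribution ratio
    return a * (s @ W.T)                  # relevance per input neuron

x = np.array([1.0, 2.0, 0.5])
a1, out = forward(x)

# Start with the prediction itself as the total relevance,
# then propagate it back through both layers.
R_hidden = lrp_layer(a1, W2, out)
R_input = lrp_layer(x, W1, R_hidden)

print("prediction:", out)
print("input relevances:", R_input)
print("conservation check:", R_input.sum(), "vs", out.sum())
```

The conservation property is the key selling point: the per-input relevances sum back to the prediction, so each input feature receives a share of the "credit" (positive or negative) for the decision.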

A practical example

To give an example of what this explainability can look like in the field, imagine a scenario where many multi-page medical documents need to be read and informed decisions made based on the information they contain. In machine learning terms, this is a Natural Language Processing (NLP) problem. XAI adds smart highlighting to the solution, enabling medical professionals, or other domain experts, to immediately spot errors and intervene manually while still benefiting from the superior processing speed of an automated system. This, among other things, is one of our main projects right now. Our team of AI experts at DeepMetis is currently validating the technical feasibility of layer-wise relevance propagation in the context of data mining for medical studies. More information on our research will be released in the future.
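The smart highlighting described above can be sketched in a few lines: once LRP (or any attribution method) has assigned a relevance score to each token, tokens above a threshold are emphasized for the reviewer. The tokens, scores, and threshold below are entirely hypothetical, just to illustrate the interaction pattern.

```python
# Hypothetical per-token relevance scores, as LRP might produce for one
# sentence of a medical document. Values are illustrative only.
tokens = ["Patient", "reports", "severe", "chest", "pain", "since", "Tuesday"]
relevance = [0.05, 0.02, 0.35, 0.40, 0.45, 0.01, 0.03]

def highlight(tokens, relevance, threshold=0.3):
    """Emphasize tokens whose relevance exceeds the threshold,
    mimicking the 'smart highlighting' a domain expert would see."""
    return " ".join(
        f"**{t}**" if r >= threshold else t
        for t, r in zip(tokens, relevance)
    )

print(highlight(tokens, relevance))
```

A reviewer scanning the output sees at a glance which words drove the model's decision and can intervene when the emphasis lands on the wrong evidence.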

Our concept applies LRP on NLP data.

The way forward

Adding explainability to machine learning offers an immense potential for industries where certainty and verification are key — especially when lives depend on the decision made. Right now an increasing number of organizations are successfully exploring XAI in medicine and healthcare. We, at DeepMetis, believe that explainable models have enormous potential to alleviate, or at least reduce, the risks imposed by opaque machine learning models across all domains and industries.

If you want to learn more about our work at DeepMetis, check out our blog or contact us.

Ferdinand is a high-tech R&D enthusiast and one of the co-founders of DeepMetis, an impact-driven AI and quantum computing research firm based in Berlin. He is also the co-initiator and chairman of quarano, an NGO providing software that digitizes processes in the public healthcare sector in light of the COVID-19 crisis.




DeepMetis is a Berlin-based deep tech venture that researches, develops, and deploys cross-sector tech solutions. We strive to contribute to the advancement of ethical and beneficial technologies addressing real-world challenges.

Ferdinand Biere

co-founder at DeepMetis
