Explainable AI in Genomic Medicine

Carla Orlandi · Published in LARUS · Jun 4, 2020

Explainable AI means that humans can understand why a conclusion was reached, or how a machine arrived at a particular decision or judgment.

Explainable AI (XAI) is fundamental for people to trust, manage and use AI models: people will not adopt AI systems whose reasoning they cannot follow. AI is therefore expected to present both its judgments and the reasons for them at the same time.

Explainable artificial intelligence is attracting much interest in medicine, where it has become a key requirement for adopting AI at all.

In fields such as health care, where mistakes can have devastating effects, the black-box nature of AI makes it difficult for physicians to trust it. The transparency and accuracy of AI models directly affect the decision-making of clinicians and the performance of pathologists. Such models have to answer the question: "why did the model predict that?"

Our partner Fujitsu Laboratories of America has developed the very first explainable AI, called Deep Tensor, capable of showing the reasons behind AI-generated findings, allowing human experts to validate the truth of AI-produced results and gain new insights.

Deep Tensor has been used to improve the efficiency of the survey work carried out by experts in the field of genomic medicine. Using a knowledge graph constructed from public databases in the field of bioinformatics and from a medical literature database, Fujitsu data scientists searched for knowledge that could provide corroborating evidence for phenomena whose relationships are only partially known, and checked whether links could be established (Figure 1).

Fig. 1: Effect of applying the developed technology to genomic medicine.
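
To make the idea concrete, here is a minimal sketch (in Python with networkx, not Fujitsu's implementation) of how such a knowledge graph might be assembled from (subject, relation, object) triples extracted from public databases and literature. All entity names below are illustrative placeholders rather than real database entries.

```python
# A minimal, illustrative knowledge graph built from triples,
# as one might extract them from public databases and literature.
import networkx as nx

# (subject, relation, object) triples; names are hypothetical placeholders
triples = [
    ("mutation:M1",   "located_in",    "gene:AGTR1"),    # variant and its gene
    ("drug:Losartan", "targets",       "gene:AGTR1"),    # drug targeting the gene
    ("drug:Losartan", "indicated_for", "disease:D1"),    # disease related to the drug
    ("paper:PMID_X",  "supports",      "drug:Losartan"), # literature evidence
]

kg = nx.DiGraph()
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

print(f"{kg.number_of_nodes()} nodes, {kg.number_of_edges()} edges")
```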

Deep Tensor was able to learn relationships between genetic mutations and pathogenicity from the public databases. The team then extracted the factors identified by the inference-factor identification technology, together with related information and academic papers, and formed a basis for each inference. The basis-forming example in Figure 2 shows the genetic mutation targeted for inference as a pentagonal node, the factors significantly contributing to the inference result as circular nodes, academically supporting knowledge extracted from the medical literature as square nodes, and disease candidates as triangular nodes.

Fig. 2: Example of forming a basis.
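
The role each node plays in such a basis can be carried directly in the graph data. The snippet below is again an illustrative sketch rather than Fujitsu's actual code: it mirrors the legend of Figure 2 by attaching a role attribute to each node (target mutation, inference factor, literature evidence, disease candidate), reusing the placeholder names from the previous snippet.

```python
# Illustrative "basis" subgraph with node roles mirroring Figure 2's legend.
import networkx as nx

basis = nx.Graph()
basis.add_node("mutation:M1",  role="target_mutation")    # pentagonal node
basis.add_node("gene:AGTR1",   role="inference_factor")   # circular node
basis.add_node("paper:PMID_X", role="literature")         # square node
basis.add_node("disease:D1",   role="disease_candidate")  # triangular node

# Solid-line edges: items of knowledge related on the knowledge graph
basis.add_edge("mutation:M1", "gene:AGTR1")
basis.add_edge("gene:AGTR1", "paper:PMID_X")
basis.add_edge("paper:PMID_X", "disease:D1")

# List each node with its role, as a user-facing explanation might
for node, data in basis.nodes(data=True):
    print(f"{data['role']:>18}  {node}")
```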

In the figure, the edges (solid lines) connecting nodes indicate that those items of knowledge are related on the knowledge graph. The broken lines, on the other hand, interconnect the genetic mutation and its gene, the drug Losartan targeting that gene, and the disease related to that drug, thereby presenting the relationship between the genetic mutation and the disease. In short, by starting with the genetic mutation and tracing the relationships between genes and drugs, and between drugs and diseases, on the knowledge graph, it becomes possible to construct a graph of associated knowledge extending to candidate diseases, as a basis that can be viewed by the user.
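
That path-tracing step can also be sketched in a few lines: starting from the mutation, we follow the gene and drug relationships until a candidate disease is reached. The graph and entity names are the same illustrative placeholders used above; only Losartan comes from the paper's example, and nothing here is Fujitsu's actual code.

```python
# Illustrative tracing of mutation -> gene -> drug -> disease paths.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("mutation:M1",   "gene:AGTR1", relation="located_in")
kg.add_edge("drug:Losartan", "gene:AGTR1", relation="targets")
kg.add_edge("drug:Losartan", "disease:D1", relation="indicated_for")

# Treat edges as undirected for traversal, since we move from the gene
# back to the drug that targets it and on to the drug's disease.
undirected = kg.to_undirected()

diseases = [n for n in kg if n.startswith("disease:")]
for disease in diseases:
    for path in nx.all_simple_paths(undirected, "mutation:M1", disease):
        print(" -> ".join(path))
# Output: mutation:M1 -> gene:AGTR1 -> drug:Losartan -> disease:D1
```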

For more information on the potential of this technology in medicine, please refer to the paper Explainable AI Through Combination of Deep Tensor and Knowledge Graph (M. Fuji et al.), from which this article is taken.
