The Quest for Transparency: How Quantum-inspired Tensor Networks Can Revolutionize AI Explainability

Multiverse Computing
Feb 22, 2024
Figure: A mutual-information heatmap over the model’s features, showing that some pairs are highly correlated while others are nearly independent. Brighter colors correspond to larger mutual-information values. A high mutual information between two tensors indicates a strong dependency: knowing the state of one provides significant information about the state of the other.

By Borja Aizpurua, Quantum Research Scientist

Explainable Artificial Intelligence (XAI) has emerged as a pivotal movement towards demystifying the decisions made by AI models, aiming to make these models not only precise and powerful but also transparent and understandable. This push for XAI is driven by the need to foster trust in AI systems by elucidating their decision-making processes and ensuring they meet increasing demands for regulatory compliance.

Techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (Shapley Additive Explanations) have been developed to bridge the gap between model accuracy and explainability. However, despite these advancements, many deep learning methods remain largely opaque, creating a challenge for achieving true interpretability.
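For contrast with the white-box approach described next, here is what a post-hoc explanation typically looks like: a minimal SHAP sketch on a toy tree model. The model, data, and feature count are illustrative assumptions, not taken from the article.

```python
# A minimal post-hoc explanation sketch with the shap library on a toy tree
# model. The model, data, and feature count are illustrative assumptions,
# not taken from the article.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # toy feature matrix
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # labels driven by features 0 and 2

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features via
# Shapley values, explaining the trained model from the outside.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # per-feature contributions for the first five samples
```

Note the asymmetry: the explanation is bolted on after training, while the model itself stays a black box.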

Against this backdrop, Multiverse Computing introduces its groundbreaking Matrix Product State (MPS) model. Unlike conventional AI models that often act as “black boxes,” this tensor-network generative model offers a “white box” approach: it achieves performance on par with the best available models while providing unmatched interpretability.

The MPS sets a new standard for transparency, with its tensors yielding clear, interpretable probabilities instead of opaque weights. This breakthrough is pivotal for tasks demanding a high degree of trust and understanding in the AI’s decision-making process, such as in finance, healthcare, and legal fields.
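To make the “white box” claim concrete, here is a minimal Python sketch of how an MPS assigns a directly readable probability to a discrete event by contracting its tensors under the Born rule, p(x) = |ψ(x)|² / Z. The random tensors and brute-force normalization are illustrative assumptions; a trained model learns its tensors from data and normalizes efficiently.

```python
# A minimal sketch of "white box" probability extraction from an MPS.
# The random tensors stand in for a trained model (an assumption): a real
# model would learn them from data and normalize efficiently.
import numpy as np

def random_mps(n_sites, phys_dim=2, bond_dim=4, seed=0):
    """Build a list of MPS tensors with shape (left_bond, physical, right_bond)."""
    rng = np.random.default_rng(seed)
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [rng.normal(size=(dims[i], phys_dim, dims[i + 1]))
            for i in range(n_sites)]

def amplitude(mps, x):
    """Contract the MPS along configuration x to obtain its amplitude psi(x)."""
    v = np.ones((1,))
    for tensor, xi in zip(mps, x):
        v = v @ tensor[:, xi, :]  # fix the physical index, absorb the bond
    return v.item()

n = 6
mps = random_mps(n)

# Brute-force normalization Z over all 2^n configurations (fine for a demo).
configs = [tuple(int(b) for b in np.binary_repr(i, n)) for i in range(2 ** n)]
Z = sum(amplitude(mps, c) ** 2 for c in configs)

# Born rule: p(x) = |psi(x)|^2 / Z, a directly readable probability.
x = (0, 1, 1, 0, 0, 1)
print(f"p{x} = {amplitude(mps, x) ** 2 / Z:.6f}")
```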

Our Solution and How It Works

Leveraging Adversary-Generated Threat Intelligence, our approach transcends conventional rule-based systems. Rule-based systems rely on specific guidance to identify threats and can explain a categorization only in terms of which rules fired. In contrast, the MPS solution represents knowledge within the model’s tensor-network structure, enabling a probabilistic understanding that captures complex correlations that are difficult to express as simple rules. It excels at learning ‘normal’ behavior patterns, which allows ‘abnormal’ activities to be flagged precisely.

The model’s ability to identify unknown attacks is anchored in its grasp of the data’s probability distribution and contextual correlations. The Negative Log Likelihood (NLL) metric is pivotal here, quantifying how far an event diverges from the expected norm: the higher the NLL, the more anomalous the event. Moreover, the tool offers two key capabilities (a minimal NLL-scoring sketch follows the list below):

1. Synthetic Data Generation: At times, real-world data may be insufficient or too generic to effectively train models or simulate complex scenarios. MPS addresses this by generating synthetic data that mimics real-world complexities, enhancing model training and offering a strategic advantage in cybersecurity through the simulation of deceptive activities to thwart attackers (a sampling sketch appears a little further below).

2. User-Friendly Risk Tolerance Analysis: Our analysis tool enables real-time monitoring and filtering of events by anomaly level. This lets analysts focus on the most significant threats, optimizing resource allocation and response times.
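As a rough illustration of the NLL scoring and risk-tolerance filtering described above, here is a minimal Python sketch. The toy event probabilities and the threshold value are assumptions for the demo; in practice, p(x) would come from the trained MPS.

```python
# A minimal sketch of NLL-based anomaly scoring with a risk-tolerance
# threshold. The toy event probabilities and the threshold value are
# assumptions; in practice p(x) comes from the trained MPS.
import numpy as np

def nll(p_x, eps=1e-12):
    """Negative log likelihood: the higher the NLL, the more anomalous."""
    return -np.log(p_x + eps)

rng = np.random.default_rng(1)
event_probs = rng.beta(0.5, 5.0, size=10)  # stand-in model probabilities

scores = nll(event_probs)
risk_tolerance = 5.0                       # user-chosen NLL cutoff (assumed)

for p, s in zip(event_probs, scores):
    status = "ANOMALY" if s > risk_tolerance else "normal"
    print(f"p={p:.4f}  NLL={s:.2f}  {status}")
```

Moving the cutoff up or down trades alert volume against sensitivity, which is exactly the risk-tolerance dial described in item 2.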

This multifaceted strategy ensures a balanced alert system, minimizing false positives while providing deep insights into anomaly detection. The advanced interpretability of MPS, alongside its synthetic data generation capabilities, marks a significant leap forward in understanding complex AI decisions.
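To make the synthetic data generation of item 1 concrete, the sketch below draws samples from the Born-rule distribution of a small random MPS. The tensors are illustrative assumptions, and brute-force enumeration only scales to toy sizes; real implementations sample efficiently site by site.

```python
# A minimal sketch of synthetic data generation: drawing samples from the
# Born-rule distribution of a small random MPS. The tensors are assumptions,
# and brute-force enumeration only scales to toy sizes; real implementations
# sample efficiently site by site.
import numpy as np

def random_mps(n_sites, phys_dim=2, bond_dim=4, seed=0):
    rng = np.random.default_rng(seed)
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [rng.normal(size=(dims[i], phys_dim, dims[i + 1]))
            for i in range(n_sites)]

def amplitude(mps, x):
    v = np.ones((1,))
    for tensor, xi in zip(mps, x):
        v = v @ tensor[:, xi, :]
    return v.item()

n = 6
mps = random_mps(n)
configs = [tuple(int(b) for b in np.binary_repr(i, n)) for i in range(2 ** n)]
probs = np.array([amplitude(mps, c) ** 2 for c in configs])
probs /= probs.sum()  # normalized Born-rule distribution p(x)

rng = np.random.default_rng(3)
for idx in rng.choice(len(configs), size=5, p=probs):
    print(configs[idx])  # synthetic events distributed like the learned p(x)
```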

The training dataset for this model comprised 674,704 events, of which 1,007 (roughly 0.15%) were incident-related. This realistic imbalance poses a unique challenge, underscoring the need for a robust anomaly detection system capable of discerning subtle patterns of cyber-threats within predominantly benign data traffic.

Expanding Explainability with Tensor Networks

The model’s interpretability includes direct probability extraction and von Neumann entropy analysis, but its true strength lies in transparently revealing the correlations learned from data. Mutual information computed within the MPS framework highlights feature interdependencies, as the mutual-information heatmap above demonstrates. This makes the model’s decision-making process transparent and also improves its performance: reordering the network so that strongly correlated features sit close together yields a simpler structure that is easier to interpret and more efficient to work with. This is particularly beneficial in sectors where understanding the rationale behind an AI’s decisions is as crucial as the results themselves.
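For a concrete sense of what a heatmap like the one at the top of this article encodes, the sketch below computes pairwise mutual information empirically on toy binary data. This is an assumption-laden stand-in: the paper extracts these quantities from the MPS itself rather than from raw counts.

```python
# A minimal sketch of the pairwise mutual-information analysis behind a
# heatmap like the one at the top of this article. The toy binary dataset
# is an assumption; the paper extracts these quantities from the MPS itself.
import numpy as np

def mutual_information(a, b):
    """Empirical mutual information I(A;B) between two binary columns."""
    joint = np.zeros((2, 2))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log(joint[i, j] / (pa[i] * pb[j]))
    return mi

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(2000, 5))  # 2000 events, 5 binary features
X[:, 1] = X[:, 0]                       # plant a strong dependency

heatmap = np.array([[mutual_information(X[:, i], X[:, j])
                     for j in range(5)] for i in range(5)])
print(np.round(heatmap, 3))  # bright (large) entries mark correlated pairs
```

The diagonal is each feature’s own entropy, while the bright off-diagonal entry between features 0 and 1 recovers the dependency planted in the data, exactly the kind of structure such a heatmap makes visible.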

The Impact of Improved Explainability

The success of the MPS model in cybersecurity underscores its potential across various industries. By capturing the probability distribution and inherent correlations within data, the model showcases its adaptability, promising significant benefits in fields requiring transparent AI solutions. In the business realm, the necessity for explainable AI becomes apparent in scenarios like credit scoring and healthcare decision-making.

For instance, in credit scoring, the ability to elucidate the reasoning behind decisions aids in refining risk models and ensuring equitable practices. Similarly, in healthcare, explainability supports the interpretation of diagnostic models, fostering trust and enabling actionable insights. These examples underscore the broader applicability of MPS across various domains, where transparency not only builds confidence but also ensures compliance and operational efficacy.

Setting New Benchmarks in AI Transparency

The quantum-inspired MPS model heralds a new era in machine learning, where the identification of threats and patterns is accompanied by clear, understandable logic. This paradigm shift towards explainable AI not only meets the current demand for transparency but also enhances the capability to generate synthetic data, thereby improving the training of complex models. As we continue to navigate the intricacies of AI applications, the advancements presented in “Tensor Networks for Explainable Machine Learning in Cybersecurity” (available on arXiv) offer a promising path toward more interpretable and reliable AI systems across various industries.


Multiverse Computing

Multiverse uses quantum and quantum-inspired software to tackle complex problems in finance, energy and manufacturing to deliver value today.