You should care about Explainable AI

Adélaïde Renié · Red River West · Jul 24, 2023

I am a VC analyst at Red River West, and I previously studied Artificial Intelligence at Telecom Paris. I am passionate about AI, but I thought it would be even more interesting to delve into a "counter-trend": the need for explainable AI.
In today’s business landscape, AI systems are playing an ever more significant role in decision-making, but their complexity often leads to a lack of transparency and understanding. This is where Explainable Artificial Intelligence (XAI) comes in. XAI aims to bridge the gap between the intricate inner workings of AI models and the need for transparency, interpretability, and trust in their decision-making.

Why do businesses need Explainable AI?

The need for explainability is twofold.
First, to comply with global guidelines and current regulations prioritizing fairness, equity, and ethics. Major scandals, like the LAPD’s use of PredPol for crime prediction, have highlighted the risks of biased AI systems. Remember Minority Report, Spielberg’s movie? Well, this looks uncomfortably close to that 2054 dystopia. The use of biased data can lead to unjust outcomes, particularly affecting minority communities. Transparent AI models can help identify and correct biases, ensuring fair and ethical decision-making. Explainability is vital for regulatory compliance, and it will be even more so with the upcoming AI Act, expected to be enforced in 2025.

Second, explainable AI creates value in business applications.

“Companies that attribute at least 20 percent of EBIT to their use of AI are more likely than others to follow best practices that enable explainability.” — McKinsey research

Understanding AI’s decision-making process enables better human oversight and intervention, mitigating operational and business risks. Moreover, explainability fosters trust among internal users and external customers, driving wider adoption of AI systems.

What is the technology behind this concept?

The concept of Explainable AI revolves around analyzing and understanding the input dataset, the model, and its outputs to provide human-understandable explanations for AI decisions. The first step in achieving explainability involves a thorough analysis of the dataset used for training the model. Dataset issues, such as biases and representativity, can significantly impact the model’s fairness and performance.

Explainability techniques like statistical analysis, sampling methods, visualization, and dimensionality reduction help assess data distribution, bias, and overall data quality. Understanding the dataset is crucial before proceeding to interpret the model’s decision-making process.
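
As a rough sketch of what this step can look like in practice (assuming a tabular dataset in a CSV file; the file name and the "gender" and "label" columns are purely illustrative):

```python
import pandas as pd

# Hypothetical training dataset; column names are illustrative.
df = pd.read_csv("training_data.csv")

# Class balance: a heavily skewed target is a first warning sign.
print(df["label"].value_counts(normalize=True))

# Representativity: how a sensitive attribute is distributed across classes.
print(pd.crosstab(df["gender"], df["label"], normalize="index"))

# Basic data-quality checks: missing values and per-feature statistics.
print(df.isna().mean().sort_values(ascending=False).head(10))
print(df.describe())
```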

The core of XAI lies in model explanation. This is where the challenge intensifies, especially for complex models like deep learning networks, which act as black boxes as layers stack up. To interpret model decisions, various techniques are used:

• Model-agnostic techniques like LIME and SHAP provide local explanations by approximating the model’s behavior in the vicinity of a specific instance. They assign a value to each feature, indicating its contribution to the prediction.

• Rule-based, attention-based, model simplification and feature importance techniques offer further insights into model behavior and its impact on predictions.

• Global surrogate techniques and concept-based techniques also aid in understanding the model at a broader level (a minimal surrogate sketch follows below).
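
As an illustration of that last point, here is a minimal global-surrogate sketch, with synthetic data and a random forest standing in for the black box: a shallow decision tree is trained to mimic the black-box predictions, and its rules give an approximate, human-readable view of the model’s overall behavior.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder "black box": a random forest trained on synthetic data.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a shallow decision tree trained to reproduce the
# black box's predictions (not the true labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")

# The surrogate's rules are directly readable by a human.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```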

Example of LIME techniques applied to a binary classification model

The model is 81% confident this is a bad wine. The values of alcohol, sulfates, and total sulfur dioxide increase the wine’s chance of being classified as bad. Volatile acidity is the only feature that decreases it.
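
For reference, here is a minimal sketch of how such a LIME explanation can be produced. It is only an illustration: scikit-learn’s built-in wine dataset (binarized into “bad”/“good”) stands in for the wine-quality data used above, and a random forest stands in for the classifier.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: sklearn's wine dataset, binarized into two classes.
data = load_wine()
X, y = data.data, (data.target == 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=["bad", "good"],
    mode="classification",
)

# LIME perturbs one instance, fits a simple local model around it, and
# reports each feature's contribution to the predicted class.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=6)
print(model.predict_proba(X_test[0].reshape(1, -1)))
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Each printed pair mirrors the bars in the figure above: a positive weight pushes the prediction toward one class, a negative weight toward the other.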

At Red River West, we leverage data and AI within our algorithmic sourcing platform. Right from the start, the tech team followed a similar approach by clearly highlighting which KPIs matter most in the calculation of a given score (the Growth score, for instance). Learn more here.

However, the trade-off between explainability and accuracy is a significant challenge. The more accurate a model, the more complex it tends to be, and the harder it is to interpret. Achieving full interpretability for deep learning models may not always be possible while maintaining high performance.
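
As a toy illustration of this trade-off, the sketch below compares an inherently interpretable model with a more complex ensemble on the same synthetic task; the actual gap varies by problem, and the simpler model can sometimes win.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable baseline: its coefficients can be read off directly.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# More complex model: often more accurate, much harder to inspect.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:", simple.score(X_test, y_test))
print("Gradient boosting accuracy:  ", complex_model.score(X_test, y_test))
```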

Interpretability versus performance trade-off for common ML algorithms.

Source: https://docs.aws.amazon.com/whitepapers/latest/model-explainability-aws-ai-ml/interpretability-versus-explainability.html

The Chief of AI at a healthcare & AI company I talked to mentioned: “Our clients [large pharmaceutical companies] prioritize efficiency over explainability. Due to the limited advancement of explainability techniques, we abandoned an interpretable model we had built and went back to using a CNN that identifies features not linked to any medical or real-life information. We estimate that a 100% explainable model would perform 15% lower.”

Moreover, fairness and equity present complex challenges, and defining an acceptable level of bias remains a subjective issue, intertwined with social and sociological considerations.

Explanations are not one-size-fits-all; their effectiveness depends on the context and on the user’s experience. Social sciences play a crucial role in developing effective and contextually relevant explanations. To measure what makes a good explanation, both mathematical and sociological metrics, including user confidence and understanding, must be considered. I had the chance to interview Astrid Bertrand, a Ph.D. candidate at Ecole Polytechnique whose work focuses on human-computer interaction (HCI) and Responsible AI. One of her studies showed, for instance, that NLP explanations do not improve users’ comprehension of investment advice from robo-advisors.

In conclusion, achieving comprehensive explainability for AI systems requires a multidimensional approach and a deep understanding of human-machine interaction.

What are the XAI market trends today?

The global market for Explainable AI (XAI) is experiencing rapid growth: its value reached USD 4Bn in 2021, with estimates pointing to a market size of USD 11Bn by 2027. The expansion of this market is primarily driven by the growing adoption of AI technologies across organizations.

Regulatory considerations also play a pivotal role in shaping the demand for XAI solutions. Governments and regulatory bodies worldwide have recognized the importance of addressing ethical concerns, ensuring fairness, and safeguarding consumer privacy in AI applications. Consequently, they are enacting measures, such as the AI Act in Europe, that mandate compliance with data protection laws, algorithmic transparency, and bias mitigation. Organizations are therefore seeking AI software that adheres to these regulatory requirements, further fueling the demand for Explainable AI.

In the XAI market, we can identify three categories of players.

AI & Data Consulting Firms

They provide comprehensive AI solutions and products tailored to specific organizational needs. Leading companies in this segment include QuantumBlack, BCG GAMMA, and others, offering end-to-end solutions covering various applications. These firms prioritize Explainable AI as a key element in delivering value to their clients. For instance, BCG has developed an open-source library called FACET, built on top of scikit-learn (the open-source ML library), which facilitates model interpretability and decision-making by revealing the underlying mechanisms of advanced machine learning models. According to a Senior Data Consultant at BCG whom we talked to:

“Explainability, throughout the development process of AI & data systems, is key to delivering the best value to clients.”

Global AI software and ML platforms

The second category encompasses Global AI software providers, such as Google AI, IBM, Dataiku, H2O.ai, Microsoft, and DataRobot. These major players have integrated Explainable AI features into their offerings to enhance user understanding and build trust in AI-driven decisions. For instance, Dataiku’s Explainability feature provides model interpretation, bias detection, and documentation, while IBM’s cloud-native data and AI platform incorporates AI governance for explainability, fairness, compliance, and risk management. H2O.ai offers an autoML cloud with a strong focus on transparency and accuracy throughout the machine learning lifecycle.

XAI and AI governance startups

The third category consists of stand-alone XAI software providers, which specialize in helping organizations understand, monitor, and mitigate biases while ensuring transparency in AI-driven decision-making processes. Prominent players in this category include Fiddler, AI-vidence, Holistic AI, and Credo.ai. Fiddler, established in California in 2018, is a leading AI governance and explainability solution, offering a comprehensive platform for ML system governance. AI-vidence, a French startup, aims to make AI systems transparent, fair, and understandable and is developing new open-source technology, while Holistic AI, a UK-based start-up founded in 2020, focuses on global AI governance, risk management, and compliance.

Despite the immense potential and growth of the XAI market, not all companies currently prioritize Explainable AI in their Data & Digital strategy. Nonetheless, with the emerging regulatory landscape and the increasing focus on AI governance, it is evident that Explainable AI will become a vital element for organizations across industries seeking to leverage AI technologies while ensuring transparency and compliance.

XAI landscape

What’s next?

Explainability remains a challenge in AI systems, with current techniques often falling short of satisfactory transparency and interpretability. Researchers are exploring new methods to explain deep learning algorithms, especially in advanced architectures with numerous layers and intricate designs. Effective visual analytics approaches tailored to these architectures are needed to enhance interpretability and understandability. Open challenges in XAI include tailoring interfaces, interactive explanations, human-machine teaming, addressing security and adversarial attacks, enhancing explainability in reinforcement learning, ensuring safety, enabling machine-to-machine explanations, and balancing privacy and explanation rights.

Large Language Models (LLMs) pose specific technical challenges, including their complexity with billions of parameters, lack of transparency, and non-deterministic behavior. Their contextual understanding and lack of explicit feature representation make it difficult to pinpoint the factors influencing their decisions. Moreover, their scale and resource-intensive nature hinder efficient explanation techniques. In Europe, Aleph Alpha and Mistral have communicated extensively on XAI as a differentiator versus US players (see Aleph Alpha’s Explainability feature).

In Silicon Valley, a petition championed by Elon Musk and experts called for a “pause” in artificial intelligence (AI) development, challenging the ideology of performance.

With the AI Act to be enforced in 2025, “organizations need to be ready,” according to David Cortes, CEO of AI-vidence. He believes a new label for ethical and responsible AI could soon emerge to guarantee AI Act compliance.

The “Labelia — Responsible and trusted AI” label is the first label in Europe in the field of responsible and trusted AI. To date, four organizations have obtained it: Axionable, MAIF, Artefact, and Apricity. We could imagine a new label for organizations whose core business is offering AI tools to companies, which would represent a competitive advantage.

In France, some CEOs and researchers have worried that such European AI regulation could endanger European competitiveness; they even signed a petition to raise awareness against the AI Act. Meanwhile, the US is also moving to make AI more ethical: since 5 July 2023, the NYC Bias Audit Law requires bias audits of automated tools used to hire candidates or promote employees. And the UK will host an international AI Safety Summit this autumn to tackle alignment. Just as the European GDPR paved the way for global data & privacy regulations, the AI Act could be an inspiration for other countries.

Conclusion

Global regulations, such as the AI Act proposed by the European Union, prioritize ethical and responsible AI development, emphasizing the need for regulatory compliance. With the rapid growth of the XAI market and the rising focus on AI governance and explainability, organizations are recognizing the significance of adopting AI software that adheres to these standards. However, challenges remain in achieving full interpretability, especially for complex models, necessitating ongoing research and innovative approaches to balance accuracy and explainability effectively. The future of AI lies in a multidimensional approach, where sociological and technical considerations converge to create a responsible and accountable AI ecosystem.

Standalone XAI platforms are emerging to address specific needs in the market. And companies whose core business is offering AI tools will have to comply and may pursue a “Responsible AI” label.

I have attempted to build a table showing the most relevant explainability strategies for each type of player in the market.

For entrepreneurs leveraging AI, which is now commonplace, it’s imperative not to delay integrating explainability into their product design. This crucial step will enable them to stand out from the competition and meet the expectations of both customers and regulators.

Customers using AI-based products, especially in industries such as healthcare and financial services where the stakes are high, will prefer transparent products, as transparency adds value and reduces long-term risk: while movie recommendation algorithms do not demand such scrutiny, for critical applications like cancer diagnosis or medical interventions, explainability matters a lot.

As responsible investors, Red River West actively supports our portfolio companies, like Ada Health (which leverages AI for medical diagnosis), in navigating these requirements. Encouraging the integration of explainability in their products not only provides the numerous benefits discussed earlier but also aligns with our commitment to responsible investment practices.

We also believe XAI will give birth to exciting new standalone companies… so if you’re building something in that sector, do reach out! Olivier and I would love to talk!

Adélaïde Renié
with help & support from Olivier Huez
