Explainable AI: A Business Perspective

Sparsha Devapalli
8 min read · Aug 15, 2020

--

AI has achieved remarkable momentum in touching and shaping our everyday lives — so much so that the current era is often referred to as the “Age of AI”. If harnessed appropriately, AI may deliver on the best of expectations across application sectors ranging from medical diagnosis to autonomous driving.

“With great power comes great responsibility” — Artificial Intelligence (AI) raises concerns on many fronts due to its potentially disruptive impact. These challenges include workforce displacement, loss of privacy, potential prejudices in decision-making, and uncontrolled automated systems and robots. Though significant, these issues are addressable with the right planning, supervision and governance.

When decisions derived from such systems ultimately affect humans’ lives, there is an imperative need to understand how such decisions are furnished by AI methods. Beneath the buzz around AI, however, data scientists may have trouble explaining why their algorithm (especially a deep neural network) reached a decision, and lay end-users may be reluctant to trust its predictions without contextual proof and reasoning.

When developing a machine learning (ML) model, considering interpretability as an additional design driver can improve its implementation for three reasons: veracity, trustworthiness and impartiality.

Figure: Concept of XAI

What is Explainable AI (XAI):

“Care to explain that to me?” is one of the great intimidating conversation starters in the business world. It puts the onus on the recipient to provide an effective rationale for the outcome in question. Ask the same question of an AI model, however, and the model is neither intimidated nor equipped with the aptitude to explain itself.

Machine learning algorithms are ubiquitous, complex and efficient, yet these models are often opaque, non-intuitive and abstruse. Explainable AI (XAI) is any machine learning technology that can accurately explain a prediction at the individual level. Simply put, it is the ability to elucidate a machine learning model’s outcomes.

Explainable models (e.g., decision trees) are easily understandable but often underperform precisely because they are simple. Accurate models (such as deep learning models) perform well but aren’t explainable because they are complicated. Where to land on this trade-off should depend on the application field of the algorithm and the end-user to whom it is accountable. Hence, explainable AI can be outlined as follows: given an audience, an explainable AI is one that produces details or reasons to make its functioning clear or easy to understand.
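The explainable end of this trade-off can be sketched with a toy rule-based classifier that reports exactly which rule produced each prediction. The loan-approval scenario, feature names and thresholds below are illustrative assumptions, not taken from any real model:

```python
# A minimal sketch of an interpretable-by-construction model: every
# prediction comes with the human-readable rule that produced it.

def predict_with_explanation(income, debt_ratio):
    """Approve or deny a hypothetical loan and report which rule fired."""
    if debt_ratio > 0.5:
        return "deny", "rule 1: debt_ratio > 0.5"
    if income < 30_000:
        return "deny", "rule 2: income < 30,000"
    return "approve", "default rule: all checks passed"

decision, reason = predict_with_explanation(income=45_000, debt_ratio=0.3)
print(decision, "because", reason)  # approve because default rule: all checks passed
```

A deep network would likely classify such cases more accurately, but it could not hand back a one-line rationale the way this rule list does; that is the trade-off in miniature.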

Figure: Different objectives for XAI sought by varied audience profiles

The above figure depicts different purposes of explainability in ML models sought by different audience profiles.

XAI is one of the prominent research programs at DARPA, expected to enable “third-wave AI systems” in which machines comprehend the context and environment they operate in and, over time, build underlying explanatory models that allow them to characterise real-world phenomena.

Figure: Essence of XAI

Explainable AI is an emerging and multifaceted concept that sits at the intersection of several areas of active research in machine learning and AI. It will be essential if future end-users, whether warfighters or medical practitioners, are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.

The need for XAI:

Gartner predicts that, by 2023, over 75% of large organisations will hire artificial intelligence specialists in behaviour forensics, privacy and customer trust to reduce brand and reputation risk. In its report “Top 10 Data and Analytics Technology Trends That Will Change Your Business”, the analyst firm foresees a strong future for explainable AI, a collection of models that make AI more transparent.

According to Article 14 of the European Union’s General Data Protection Regulation (GDPR), when a company uses automated decision-making tools it must provide the data subject with meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing. This provision underpins a citizen’s “right to explanation” of algorithmic decisions that “significantly” affect an individual.

The black box challenge surrounding machine learning has been discussed at length for years for one main reason — the need for trust. Why is the machine making this decision? On what basis is this decision being made?

It’s uncomfortable, if not dangerous, to make important business bets on a machine’s decisions that you don’t thoroughly understand. This is where the demand for explainable AI (XAI) originates. Critical implementations where XAI plays a crucial role include a self-driving car that must make a decision in a complex traffic situation, or a system supporting a cancer diagnosis.
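One widely used way to pry open such a black box is a local surrogate explanation, the idea behind tools such as LIME: perturb the input around one case and fit a simple model to the black box’s responses. The sketch below uses only the standard library; the black-box function and all numbers in it are stand-ins, not a real model:

```python
import random

def black_box(x):
    """Stand-in for an opaque model: an unexplained decision rule."""
    return 1.0 if x * x > 4.0 else 0.0

def local_slope(model, x0, radius=0.5, n=200, seed=0):
    """Least-squares slope of the model's output in a neighbourhood of x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [model(x) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Near x0 = 2 the decision flips from 0 to 1, so the local slope is positive;
# far from the boundary (x0 = 0) the output never changes and the slope is 0.
print(local_slope(black_box, x0=2.0) > 0)  # True
print(local_slope(black_box, x0=0.0))      # 0.0
```

The slope of the local surrogate answers “on what basis is this decision being made?” for one input at a time, without requiring access to the model’s internals.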

Beyond regulation, we have a social and professional ethical obligation to design models that are as fair, accountable and transparent as possible, that offer reliable results, and that can be queried to validate predictive decisions.

Goals of XAI:

The nomenclature below clarifies the distinctions and similarities among terms often used in the ethical-AI and XAI communities.

Intelligibility (understandability): the characteristic of a model that lets a human understand its function, without any need to explain its internal structure or the algorithmic means by which it processes data internally.

Interpretability: the ability to explain, or to provide meaning in, terms understandable to a human.

Explainability: an active characteristic of a model, denoting any action or procedure the model takes with the intent of clarifying or detailing its internal functions.

Transparency: a model is considered transparent if, by itself, it is understandable.

Confidence: a measure of robustness and stability assessed on a reliable model. Stable models should produce trustworthy interpretations and sustainable confidence evaluations; hence, an explainable model should carry information about the confidence of its working regime.

All these goals sit beneath the surface of the concept of explainability. Explainability and confidence are the prime objectives for our explainable AI models.
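The confidence goal can be sketched by querying several slightly different models and reporting how strongly they agree: a unanimous vote signals a stable working regime, a split vote signals low confidence. The ensemble of thresholds below is an illustrative stand-in for, say, bootstrapped models:

```python
def ensemble_predict(x, thresholds=(0.48, 0.50, 0.52, 0.55, 0.45)):
    """Majority vote over an illustrative ensemble, plus the agreement rate."""
    votes = [1 if x > t else 0 for t in thresholds]
    majority = 1 if sum(votes) > len(votes) / 2 else 0
    agreement = votes.count(majority) / len(votes)
    return majority, agreement

print(ensemble_predict(0.9))   # (1, 1.0)  unanimous vote: high confidence
print(ensemble_predict(0.51))  # (1, 0.6)  split vote: low confidence
```

Reporting the agreement rate alongside the prediction is one simple way for a model to carry information about the confidence of its working regime.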

Responsible Artificial Intelligence:

Enabling responsible application of AI technologies is one of the field’s foremost challenges as it transitions from research to practice. Several academies, government bodies and industry leaders are calling for technology creators to ensure that AI is used only in ways that benefit the human race, and to integrate responsibility into its technological foundations. Resolving these challenges, attaining these goals and enabling responsible development are vital to securing a future landscape in which AI can be widely accepted and used.

In the XAI realm, the notion of Responsible AI is a paradigm that imposes a series of AI principles to be met in large scale implementation with fairness, accountability, transparency, and privacy at its core.

Here are the key factors to consider when designing and deploying responsible AI systems:

Figure: Summary of the XAI vision and its impact on the principles of Responsible AI

The principles of Responsible AI can be summarised using the below key traits:

Governance — The cornerstones of responsible AI underscore the need for end-to-end enterprise governance.

Governance for AI enables an organisation to address important questions about the decision-making process of AI applications: identifying accountability; determining how AI aligns with business strategy; formulating regulations; and ensuring consistent and reproducible outcomes.

Ethics and Compliance — The primary goal is to help organisations develop AI that is morally responsible and compliant with relevant regulations.

Explainability and Robustness — Provide a vehicle for AI-driven decisions to be interpretable and easily explained to those affected.

In terms of resilience, next-generation AI systems are likely to be increasingly “self-aware”, with a built-in ability to detect and correct faults and inaccurate or unethical decisions.

Security — Help organisations develop AI systems that are secure to use.

The potentially catastrophic outcomes of AI data or systems being compromised or “hijacked” make it imperative to build security into the AI development process from the start, being sure to cover all AI systems, data, and communications.

Bias and Fairness — Address issues of bias and fairness so that organisations can develop AI systems that avoid prejudicial treatment of different groups stemming from implicit racial, gender or ideological bias in bad data, and that reach decisions which are fair in a well-communicated way.

Being able to account for the entire pipeline, from data collection through production and monitoring, is the most effective way to ensure biases aren’t creeping in at any point in the process.
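One concrete check that could run at the monitoring stage is the demographic parity gap: the difference in positive-outcome rates between two groups. This is only one of several possible fairness metrics, and the decision records below are made-up illustrative data:

```python
# A small sketch of a pipeline fairness check: the demographic parity gap,
# i.e. the gap in positive-outcome rates between two groups.

def positive_rate(records, group):
    """Fraction of records in `group` that received a positive decision."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

def demographic_parity_gap(records, group_a, group_b):
    """Absolute difference in approval rates between the two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
gap = demographic_parity_gap(decisions, "A", "B")
print(round(gap, 2))  # 0.5 -> a large gap that should trigger a review
```

In practice a team would set an alert threshold on such a metric and recompute it continuously over production decisions, not just once at training time.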

“Responsible AI” is perhaps an even newer phrase that we, along with others, are starting to use as an umbrella term for all the sub-disciplines mentioned above. We also regard compliance, whether with GDPR, CCPA, FCRA, ECOA, SR 11–7 or other regulations, as an additional and crucial aspect of responsible AI.

In today’s increasingly transparent, fast-moving and competitive marketplaces, implementing Responsible AI is not merely a nice-to-have, but imperative for success.

Interpretability Evaluation Methods:

There are currently three major ways to evaluate interpretability methods: application-grounded, human-grounded, and functionally grounded.

· Application-grounded evaluation requires humans to conduct experiments within a real-life application. For example, to evaluate an interpretation for diagnosing a certain disease, the best test is to have doctors perform diagnoses with it.

· Human-grounded evaluation involves simpler human-subject experiments: for example, people are given pairs of explanations and must select the one they consider to be of higher quality.

· Functionally grounded evaluation uses a proxy measure of explanation quality rather than human experiments, making it much less costly than the preceding two. The challenge, naturally, is determining which proxies to use; decision trees, for example, have often been deemed interpretable by proxy, but further work is needed.
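A functionally grounded proxy can be as simple as scoring an explanation’s sparsity, on the assumption that explanations citing fewer features are easier for people to read. The scoring rule and the feature weights below are illustrative assumptions, not an established metric:

```python
def sparsity_score(explanation, max_features=10):
    """Proxy quality in [0, 1]: higher means fewer features are cited."""
    used = sum(1 for weight in explanation.values() if abs(weight) > 1e-6)
    return 1.0 - used / max_features

dense = {"f%d" % i: 0.3 for i in range(10)}   # cites every feature
sparse = {"income": 0.9, "debt_ratio": -0.4}  # cites only two features

print(sparsity_score(dense))   # 0.0
print(sparsity_score(sparse))  # 0.8
```

Because no humans are involved, such a proxy is cheap to run over thousands of explanations, at the cost of only approximating what people actually find understandable.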

What does Explainable AI bring to your business?

The following are some of the ways in which XAI helps companies operate securely while offering enterprise value in its own right.

In terms of value for business applications, XAI benefits organisations in areas such as marketing, fraud detection, and anomaly detection.

Given this plethora of benefits, companies should design XAI into their machine learning business applications from the beginning, ensuring that advanced technology solutions are delivered to clients efficiently.

Conclusion:

Artificial intelligence should not become a powerful deity that we follow blindly without understanding its reasoning, but neither should we dismiss the beneficial insights it can offer. For a safer, more reliable adoption of AI, a seamless blend of human and artificial intelligence is needed.

As AI becomes more profound in our lives, explainable AI becomes even more important. If AI isn’t responsible, it isn’t truly intelligent.

To learn about the technical implementations of XAI, kindly refer to my blog.

References for XAI:

  1. DARPA, “Broad Agency Announcement: Explainable Artificial Intelligence (XAI)”, DARPA-BAA-16-53, August 10, 2016
  2. “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI”
  3. “Understanding Explainable AI”, Forbes
  4. Ryan Wesslen, “Explainable AI: Opening up the Black Box”
  5. “Responsible AI 2020”, Open Data Science
  6. Barry O’Sullivan, “The Impact of AI: Challenges & Policy Responses”
  7. Gartner, “Explainable AI”
