The next stage of human-machine collaboration

SYNCHRONIUM®
4 min read · Feb 20, 2019


Some AI-based services and tasks today are relatively trivial — such as a song recommendation on a streaming music platform.

However, AI is playing an expanding role in other areas with far greater human impact. Imagine you’re a doctor using AI-enabled sensors to examine a patient, and the system comes up with a diagnosis demanding urgent invasive treatment.

In situations such as this, an AI-driven decision on its own is not enough. We also need to know the reasons and rationale behind it. In other words, the AI has to “explain” itself by opening up its reasoning to human scrutiny.


Explainable AI, ready for takeoff

The transition to Explainable AI is already underway, and within three years, we expect it to dominate the AI landscape for businesses. It will empower humans to take corrective actions, if needed, based on the explanations machines give them. But how will it do this?

There are three ways a machine can convey the reasoning behind its AI-driven decisions:

1. Using the data behind the machine learning — justifying a decision through comparisons with similar examples

2. Using the model itself — mimicking the learning model by abstracting it into rules or combining it with semantics

3. Using a hybrid of data and model — offering metadata and feature-level explanations

“The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, and understanding.”

– FREDDY LECUE, Explainable AI Research Lead, Accenture Labs
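The first, data-driven approach can be sketched in a few lines: to justify a decision, the system retrieves the most similar past cases and shows them alongside their known outcomes. The data, case labels, and helper function below are illustrative assumptions, not part of any system the article describes.

```python
# A minimal sketch of example-based explanation: justify a decision by
# pointing to the nearest historical cases. All data here is synthetic.
import math

# Historical cases: (feature vector, known outcome)
history = [
    ((1.0, 0.2), "approve"),
    ((0.9, 0.4), "approve"),
    ((0.1, 0.9), "reject"),
    ((0.2, 0.8), "reject"),
    ((0.8, 0.1), "approve"),
]

def explain_by_example(x, k=3):
    """Return the k past cases nearest to x, as (distance, features, outcome)."""
    scored = sorted(
        (math.dist(x, feats), feats, outcome) for feats, outcome in history
    )
    return scored[:k]

for dist, feats, outcome in explain_by_example((0.95, 0.3)):
    print(f"similar case {feats} -> {outcome} (distance {dist:.2f})")
```

The "explanation" here is simply the evidence: a human can inspect the retrieved cases and judge whether the comparison is fair.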

Two use cases for Explainable AI

№1 — Detecting abnormal travel expenses
Most existing systems for reporting travel expenses apply pre-defined views, such as time period, service or employee group. While these systems aim to detect abnormal expenses systematically, they usually fail to explain why the claims singled out are judged to be abnormal.

To address this lack of visibility into the context of abnormal travel expense claims, Accenture Labs designed and built a travel expenses system incorporating Explainable AI. By combining a knowledge graph with machine learning technologies, the system delivers the insight needed to explain any abnormal claim in real time.
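The core idea — flag a claim and say which fields make it abnormal — can be illustrated with a far simpler detector than the knowledge-graph system described above. This sketch uses z-scores over synthetic claim history; the fields, values, and threshold are all assumptions for illustration, not Accenture's actual method.

```python
# A hedged sketch of an explainable expense-claim check: flag fields whose
# value deviates strongly from historical norms, and say why.
import statistics

history = {  # past claim amounts per field (synthetic)
    "hotel":  [120, 135, 110, 140, 125, 130],
    "meals":  [40, 35, 45, 38, 42, 37],
    "travel": [200, 180, 220, 210, 190, 205],
}

def explain_claim(claim, threshold=3.0):
    """Return a reason for each field more than `threshold` std devs from its mean."""
    reasons = []
    for field, value in claim.items():
        mean = statistics.mean(history[field])
        std = statistics.stdev(history[field])
        z = (value - mean) / std
        if abs(z) > threshold:
            reasons.append(
                f"{field}: {value} is {z:.1f} std devs above the typical {mean:.0f}"
            )
    return reasons

print(explain_claim({"hotel": 480, "meals": 41, "travel": 198}))
```

Unlike a bare anomaly score, the output names the offending field and its deviation, which is what an auditor would act on.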

№2 — Project risk management
Most large companies manage hundreds, if not thousands, of projects every year across multiple vendors, clients, and partners. Outcomes often diverge from the original estimates because of the complexity and risks inherent in these critical contracts.

This means decision-makers need systems that not only predict the risk tier of each contract or project but also give them an actionable explanation of those predictions. To address these challenges, Accenture Labs applied Explainable AI, developing a five-stage process to explain the risk tier of projects and contracts.
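One simple way to pair a risk-tier prediction with an actionable explanation is to score a contract with transparent rules and report which rules fired. The rules, fields, and tiers below are illustrative assumptions; the article does not describe the stages of Accenture's actual process.

```python
# A minimal sketch of an explainable risk-tier classifier: each rule that
# fires both raises the score and contributes a human-readable reason.
def risk_tier(contract):
    """Classify a contract into a tier and collect the rules that fired."""
    reasons = []
    score = 0
    if contract["value_musd"] > 50:
        score += 2
        reasons.append("contract value above $50M adds schedule and cost risk")
    if contract["vendors"] > 3:
        score += 1
        reasons.append("more than 3 vendors increases coordination risk")
    if contract["novel_technology"]:
        score += 2
        reasons.append("unproven technology raises delivery risk")
    tier = "high" if score >= 4 else "medium" if score >= 2 else "low"
    return tier, reasons

tier, why = risk_tier({"value_musd": 80, "vendors": 5, "novel_technology": False})
print(tier)           # the rules that fired are the explanation
for reason in why:
    print("-", reason)
```

A real system would learn such rules rather than hard-code them, but the principle is the same: every prediction arrives with the evidence behind it, so a decision-maker knows what to fix.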

Measuring effectiveness

Six measures can be applied to assess the value and effectiveness of an explanation. These measures capture the elements people need from an explanation, though they cannot necessarily all be achieved at once. While Explainable AI will use and expose techniques that address these questions, we, as humans, should still expect a trade-off between value and effectiveness.

Comprehensibility: How much effort is needed for a human to interpret it?

Succinctness: How concise is it?

Actionability: How actionable is the explanation? What can we do with it?

Reusability: Could it be interpreted or reused by another AI system?

Accuracy: How accurate is the explanation?

Completeness: Does the explanation cover the decision fully, or only partially?

A technology revolution with people at its heart

Explanation is fundamental to human reasoning: it guides our actions, shapes our interactions with others, and drives our efforts to expand our knowledge. AI promises to help us identify dangerous industrial sites, warn us of impending machine failures, recommend medical treatments, and make countless other decisions.

The promise of these systems won’t be realized unless we understand, trust and act on the recommendations they make. To make this possible, high-quality explanations are essential.

Source: Accenture Lab AI report 2018


SYNCHRONIUM®

Many minds and one mission. We’re innovating the Future™ by making business and life easier, smarter, and better than ever! Visit us at https://synchronium.io