Not Sure What Your AI Is Doing? Google Will Explain

By David Curry

Dec 4, 2019


Google Explainable AI is a new set of tools and resources to help developers better understand model behavior and detect and resolve bias, drift, and other gaps in their models.

While many organizations are already seeing positive results from their AI deployments, some are struggling to understand what the AI is doing and why.

Few businesses have strong AI expertise in-house, and most AI systems do not come with a simple interface that explains their decisions. The result is business leaders left confused by behavior they cannot interpret.

Google wants to help bridge the communication gap, with the launch of Explainable AI. The new set of tools and resources, available to its cloud platform customers, enables developers to better understand model behavior and “detect and resolve bias, drift, and other gaps” in their models.

In cases where an AI has gone off the rails or misinterpreted its data, Explainable AI will provide an analysis of what led to the decision. From that, developers should be able to correct the model and better understand the cause and effect.
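Explanations of this kind are typically produced through feature attribution: each input feature is scored by how much it pushed a particular prediction up or down. The snippet below is a minimal, generic sketch of that idea using the open-source shap library with a scikit-learn model; it illustrates the technique, not Google's Cloud Explainable AI API, and the dataset and model choices are placeholders.

```python
# pip install scikit-learn shap
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a toy tabular dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

# Explain a single prediction: each value is that feature's contribution
# (positive or negative) relative to the model's average prediction.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(data.data[:1])[0]

# Rank the features that most influenced this one prediction.
for name, value in sorted(zip(data.feature_names, attributions),
                          key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {value:+.2f}")
```

Google's service reports analogous per-feature attribution scores for models served on its cloud platform, so developers can see which inputs drove a given prediction.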

“While this capability allows AI models to reach incredible accuracy, inspecting the structure or weights of a model often tells you little about a model’s behavior,” said Tracy Frey, director of AI strategy, in a blog post. “This means that for some decision-makers, particularly those in industries where confidence is critical, the benefits of AI can be out of reach without interpretability.”

Google does note that while Explainable AI will provide an analysis of the model, it cannot make judgments about the input data itself. As explained in a Gartner webinar, one of the key factors in successful AI deployment is good data quality and governance.

“AI Explanations reflect the patterns the model found in the data, but they don’t reveal any fundamental relationships in your data sample, population, or application. We’re striving to make the most straightforward, useful explanation methods available to our customers while being transparent about its limitations,” said Frey.


Originally published at https://www.rtinsights.com.

