Watson OpenScale: Promoting Trust and Transparency When Climbing the AI Ladder

Eric Martens
5 min read · Oct 9, 2019


This article was written in collaboration with Julianna Delua

Climbing the AI ladder: How does that affect my business?

Businesses love the idea of putting data to work. Building and scaling AI with trust and transparency sounds great, right? As enterprises adopt machine learning to streamline customer service and routine tasks, their employees can deliver a better customer experience while freeing themselves up to work on more interesting problems.

IBM leads the industry in empowering enterprises to accelerate their journey to AI. Our prescriptive approach for collecting, organizing, analyzing, and infusing data can help your business prepare for and implement AI. Further, we are pioneering the automation of AI lifecycle management with AutoAI, part of Watson Studio and Watson Machine Learning and IBM's award-winning intelligent automation, as selected by 13 independent judges at AIconics.

Now that you have begun building those models, and possibly automating some model-generation tasks with AutoAI, taking that last step to production can be frightening. Turning over even a subset of business processes and decisions to machine-generated intelligence will always be a tough call. How can you trust that your models are making reasonable decisions that pass the smell test? And what do you need to demonstrate that to a skeptical business executive, user base, or even an auditor?

Explainability is foundational to AI trust and transparency

“Trust and transparency in AI” sounds too good to be true, but it’s been the focus of IBM Research and Watson developers for some time now. You may have seen the release of IBM’s open-source initiatives, AI Fairness 360 and AI Explainability 360. These free toolkits allow data scientists to identify potential issues with their models at build time.
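
As a quick taste of what these build-time checks look like, here is a minimal sketch using the open-source AI Fairness 360 (aif360) package. The toy data, the "age_group" protected attribute, and the group encodings are purely illustrative stand-ins for your own training data.

```python
# Minimal sketch with the open-source AI Fairness 360 toolkit (aif360).
# The toy DataFrame, protected attribute, and group encodings are illustrative;
# substitute your own training data and definitions of privileged groups.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy training data: "approved" is the label, "age_group" the protected attribute
# (1 = older applicants, 0 = younger applicants -- purely for illustration).
df = pd.DataFrame({
    "age_group": [1, 1, 1, 1, 0, 0, 0, 0],
    "income":    [60, 80, 75, 52, 40, 45, 50, 38],
    "approved":  [1,  1,  0,  1,  0,  0,  1,  0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["age_group"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_group": 1}],
    unprivileged_groups=[{"age_group": 0}],
)

# Disparate impact near 1.0 and statistical parity difference near 0.0 suggest
# both groups receive favorable outcomes at similar rates.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```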

IBM Watson OpenScale is the enterprise version of these capabilities for production models at runtime. When you configure Watson OpenScale, you can get detailed analysis on the factors that led your model to make a particular prediction, whether your model is a simple decision tree or a complex neural network. This explainability feature allows businesses to build more trust in their AI models.

Explainable transactions with Watson OpenScale

Watson OpenScale’s dashboards and user interfaces were built with business users in mind; they aim to take the mystery out of data science and make it understandable to anyone. You can check out the Watson OpenScale product tour to see how it monitors AI model performance and potential bias. What’s more, the true power of Watson OpenScale lies in its APIs, which let developers harness this technology in their own applications.

Watson OpenScale promotes trust for AI-powered apps

Let’s take a look at the example of an auto insurance company. To help its claims adjusters accelerate claim resolution for customers, this insurance company has begun using a machine learning model to evaluate claims for potential fraud. The model looks at factors such as the claim amount, mileage at the time of loss, and police report status, then flags potentially fraudulent filings.
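
To make the scenario concrete, here is a rough sketch of what scoring a single claim against such a model might look like. The endpoint URL, token, and field names are placeholder assumptions rather than the insurer's real schema; the payload shape follows the fields/values convention commonly used by Watson Machine Learning online deployments.

```python
# Illustrative only: the scoring URL, bearer token, and field names are placeholders.
import requests

SCORING_URL = "https://example.com/ml/v4/deployments/<deployment-id>/predictions"  # placeholder
TOKEN = "<bearer-token>"  # placeholder

# One claim, expressed in the fields/values style used by WML online deployments.
claim = {
    "input_data": [{
        "fields": ["claim_amount", "mileage_at_loss", "police_report_filed"],
        "values": [[12500.0, 87342, "yes"]],
    }]
}

response = requests.post(
    SCORING_URL,
    json=claim,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# Typically the response carries a predicted label (e.g. fraud / no fraud)
# and class probabilities for that claim.
print(response.json())
```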

The company has set up Watson OpenScale to monitor the model, which means that all data going into the model and all predictions coming out of it are saved. The company’s developers then use Watson OpenScale’s APIs to infuse this data into the application their adjusters use to review claims, so the model’s prediction is available on demand for each case.

OpenScale APIs allow app developers to infuse AI data into business applications
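
Here is a hedged sketch of that lookup using the ibm-watson-openscale Python SDK. The API key, payload data set ID, and record structure are placeholders, and the exact method names may vary between SDK versions, so treat it as a starting point rather than copy-paste code.

```python
# Sketch: fetch the model's logged inputs and predictions from Watson OpenScale's
# payload log so the claims app can show them on demand.
# ASSUMPTIONS: the API key and data set ID are placeholders, and method names /
# response structure may differ by SDK version -- check the SDK docs for yours.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

client = APIClient(authenticator=IAMAuthenticator(apikey="<api-key>"))  # placeholder

PAYLOAD_DATA_SET_ID = "<payload-logging-data-set-id>"  # placeholder

# Pull recent payload records (model inputs plus the predictions they produced).
records = client.data_sets.get_list_of_records(
    data_set_id=PAYLOAD_DATA_SET_ID,
    limit=100,
).result

# The adjusters' app can match records to a claim, for example by a transaction
# identifier carried in the scoring payload, and display the stored prediction.
for record in records.get("records", []):
    print(record)
```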

Additionally, Watson OpenScale’s explainability capabilities can highlight the aspects of a claim that triggered the model to flag it for potential fraud. Developers can use the Watson OpenScale APIs to get a detailed analysis of the model’s decision, providing a wealth of information for the claims adjuster.
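
Here is a similar hedged sketch for requesting an explanation of one flagged transaction. Again, the scoring ID is a placeholder, the client is the one created above, and the method names and response fields are assumptions that may differ across SDK versions.

```python
# Sketch: ask Watson OpenScale to explain a single scored transaction so the
# adjuster can see which claim attributes pushed the model toward a fraud flag.
# ASSUMPTIONS: `client` is the APIClient from the previous sketch; the scoring ID
# is a placeholder; method names and response fields may vary by SDK version.
SCORING_ID = "<scoring-transaction-id>"  # placeholder

# Kick off an explanation task for that transaction...
task = client.monitor_instances.explanation_tasks(scoring_ids=[SCORING_ID]).result

# ...then retrieve the finished explanation and surface it in the claims app.
explanation = client.monitor_instances.get_explanation_tasks(
    explanation_task_id=task.metadata.explanation_task_ids[0]
).result

# The explanation typically includes per-feature contributions (for example,
# claim amount or police report status) behind the fraud prediction.
print(explanation)
```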

Watson OpenScale detects and helps correct drift in accuracy

Watson OpenScale™ tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable, and compliant wherever your models were built or are running. However, changes in model behavior can be risky and costly. To help catch these issues before they happen, we introduced a drift monitor as part of Watson OpenScale. Drift is defined as a potential drop in model accuracy caused by changes in the incoming data. Watson OpenScale can detect when models in production struggle to correctly predict the intended outcomes.

Our approach does not rely on additional feedback data from production. Watson OpenScale can automatically identify the specific scenarios in which predictions are likely to be inaccurate. For example, let’s say Watson OpenScale has analyzed the model’s training data and determined that its credit risk predictions for younger demographics are less accurate than for other demographics. In this case, Watson OpenScale can identify how a sudden influx of applications from the younger demographic is likely to affect overall model accuracy, and even show whether this drop in accuracy is correlated with business performance indicators.

Detect drops in accuracy with Watson OpenScale
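
For intuition on how an accuracy drop can be estimated without waiting for labeled feedback, here is a conceptual sketch on synthetic data. It is not Watson OpenScale's actual implementation; it only illustrates the idea of training a secondary drift-detection model, from the original training data alone, to predict where the base model is likely to be wrong and then applying it to unlabeled production traffic.

```python
# Conceptual sketch only -- synthetic data, not OpenScale's internal algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))
# Noisy label so the base model makes some mistakes we can learn from.
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000)) > 0).astype(int)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.5, random_state=0)

# 1. The "base" business model (think: the credit-risk or fraud model).
base_model = LogisticRegression().fit(X_train, y_train)

# 2. A drift-detection model learns, from build-time data only, to predict
#    whether the base model gets a given record wrong (no production labels used).
base_errors = (base_model.predict(X_hold) != y_hold).astype(int)
drift_detector = RandomForestClassifier(random_state=0).fit(X_hold, base_errors)

# 3. In production, score incoming unlabeled records with the drift detector to
#    estimate the likely error rate. Here we simulate an influx of applications
#    drawn from a different region of feature space than the training data.
X_prod = rng.normal(loc=[0.8, -0.5, 0.0, 0.0, 0.0], scale=1.5, size=(500, 5))
estimated_error_rate = drift_detector.predict_proba(X_prod)[:, 1].mean()
print(f"Estimated error rate on current traffic: {estimated_error_rate:.1%}")
```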

Next steps in promoting AI trust and transparency

Are your machine learning models delivering value to your business? Do you trust that their predictions are grounded in reality? Can you explain those predictions to your customers and executives? Watson OpenScale can help you answer “yes” to these questions.

To learn more, visit the Watson OpenScale page on IBM Demos for a closer look at Watson OpenScale’s capabilities. You can also get your hands dirty with our 90-minute Watson OpenScale tutorial, which walks you step by step through building the insurance app from this article using free trial versions of IBM Cloud services.

When he’s not exploring the outdoors near Boulder, Colorado, Eric Martens builds demos and technical content for IBM Watson OpenScale and tries to keep up with the rapidly evolving AI landscape. Got questions? Contact him on LinkedIn.
