Trust and Transparency with IBM AI OpenScale

Manish Bhide
Published in Trusted AI
Feb 4, 2019

Imagine that you are watching a soccer match and someone asks you: who was the best soccer player of 2018? Pause and think about your answer before you read further. If you are a fan of Argentina you might say Messi, whereas if you are a fan of Portugal you might say Ronaldo. Someone else might say Neymar, De Bruyne or even Oblak. Each of these answers (including, most likely, yours) reveals an inherent bias of the person giving it: a bias towards a specific player, team or country. Bias is everywhere in everything we do; sometimes it is conscious, at other times it is unconscious. How many of you would have answered Lucy Bronze or Pernille Harder? Did you think of any female soccer player at all? That is unconscious bias at work: the assumption that the best soccer player had to be male!

When it comes to AI algorithms generating actionable insights, bias can have serious repercussions. Consider a bank leveraging AI to detect fraud, and imagine that the data scientist who built the model (let's call him John) used a dataset in which all the fraudulent transactions were carried out by people of a specific gender, ethnicity or income range. A model built from such skewed data is very likely to propagate those biases into its predictions! The irony is that the model will show very good metrics (precision and recall) on the test data, which follows the same distribution as the training data, making it hard for John to detect any problem. In the real world, however, such a model would incorrectly flag legitimate transactions as fraudulent or fail to detect genuinely fraudulent ones. The root cause is the bias present in the data; it has nothing to do with any conscious or subconscious bias on John's part. I am sure no one wants to be in John's shoes!
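To make John's predicament concrete, here is a minimal sketch (entirely synthetic data and hypothetical feature names) of how a model that has latched onto a protected attribute can still look excellent on a test set drawn from the same skewed distribution:

```python
# Synthetic test set sharing the training skew: every fraudulent
# transaction happens to belong to group "A".
test_set = [
    {"group": "A", "amount": 9500, "fraud": True},
    {"group": "A", "amount": 8700, "fraud": True},
    {"group": "A", "amount": 120,  "fraud": False},
    {"group": "B", "amount": 9600, "fraud": False},
    {"group": "B", "amount": 150,  "fraud": False},
    {"group": "B", "amount": 8900, "fraud": False},
]

def biased_model(txn):
    """What the model effectively learned: group membership predicts fraud."""
    return txn["group"] == "A" and txn["amount"] > 1000

correct = sum(biased_model(t) == t["fraud"] for t in test_set)
accuracy = correct / len(test_set)
print(f"test accuracy: {accuracy:.2f}")  # a perfect score, yet the rule is unfair
```

Every standard metric computed on this test set looks flawless, because the test data carries the very same skew; only data from the real world (or a deliberate fairness check) would expose the problem.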

An even bigger problem is that a model can behave in a biased manner even when its training data is unbiased. This can happen because of the different weights given to different features (something the algorithm learns while optimizing for accuracy) or because of nonlinear transformations applied to the data during training. Such issues are the root cause of the lack of trust in AI in the minds of business owners. Hence it is important not only to identify and remove bias at design time (when the model is built) but also to continuously monitor the model to ensure that it is not behaving in a biased manner once it is deployed in production.
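What might such runtime monitoring look like? The sketch below is a generic illustration (not IBM AI OpenScale's actual implementation): it keeps a sliding window of scored transactions and computes the disparate impact ratio, i.e. the rate of favourable outcomes ("not fraud") for a monitored group divided by the rate for a reference group. Values below roughly 0.8, the "four-fifths rule", are a commonly used warning threshold.

```python
from collections import deque

class FairnessMonitor:
    """Sliding-window disparate impact monitor (illustrative sketch)."""

    def __init__(self, window=1000, threshold=0.8):
        self.records = deque(maxlen=window)  # recent (group, favourable) pairs
        self.threshold = threshold

    def log(self, group, favourable):
        """Record one scored transaction and whether its outcome was favourable."""
        self.records.append((group, favourable))

    def disparate_impact(self, monitored, reference):
        """Favourable-outcome rate of `monitored` divided by that of `reference`."""
        def rate(g):
            outcomes = [fav for grp, fav in self.records if grp == g]
            return sum(outcomes) / len(outcomes) if outcomes else None
        r_mon, r_ref = rate(monitored), rate(reference)
        if r_mon is None or r_ref is None or r_ref == 0:
            return None  # not enough data to judge
        return r_mon / r_ref

    def is_biased(self, monitored, reference):
        di = self.disparate_impact(monitored, reference)
        return di is not None and di < self.threshold

# Example: log scored payloads, then check the ratio.
monitor = FairnessMonitor()
for favourable in [True] * 5 + [False] * 5:
    monitor.log("F", favourable)
for favourable in [True] * 9 + [False]:
    monitor.log("M", favourable)
print(monitor.disparate_impact("F", "M"))  # about 0.56, below the 0.8 threshold
```

Because the window only holds recent traffic, a model that drifts into biased behaviour weeks after deployment would still trip the threshold, which is exactly the failure mode that design-time checks cannot catch.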

IBM AI OpenScale is an IBM offering that allows enterprises to monitor their models and detect bias at runtime. It continuously monitors the deployed models and analyses their behaviour to detect and report any bias. IBM AI OpenScale not only detects bias but also finds its root cause, and it provides data that can be sent for manual labelling to fix the bias. It thus provides the guard rails to ensure that models continue to act in a fair and unbiased manner throughout their lifecycle.

The second big challenge with enterprise adoption of AI models is their black box nature. In other words, how do business owners trust that an AI model is making the right decision based on the right information? How do they explain the behaviour of an AI model? AI explainability is a big problem faced by enterprises today, and IBM AI OpenScale has capabilities that address it. Given a model prediction, IBM AI OpenScale provides two kinds of explanations of the model's behaviour. This helps line-of-business owners build trust in AI models and goes a long way in helping scale AI adoption in the enterprise.
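To give a feel for what a local explanation is, here is a sketch of one common technique, perturbation-based sensitivity (the idea behind methods such as LIME); IBM AI OpenScale's exact algorithms may differ. The scoring function and its weights below are hypothetical, purely for illustration: we nudge each feature of a single transaction and record how much the fraud score moves, so the features that move the score most are the ones driving this particular prediction.

```python
def fraud_score(txn):
    """Stand-in scoring function with made-up weights (for illustration only)."""
    return min(1.0, 0.0001 * txn["amount"]
                    + 0.3 * txn["foreign"]
                    + 0.05 * txn["hour_of_day"] / 24)

def explain(model, txn, numeric_features, delta=0.1):
    """Rank features by how much a 10% nudge changes the model's score."""
    base = model(txn)
    contributions = {}
    for feat in numeric_features:
        perturbed = dict(txn)
        perturbed[feat] = txn[feat] * (1 + delta)
        contributions[feat] = abs(model(perturbed) - base)
    return sorted(contributions.items(), key=lambda kv: -kv[1])

txn = {"amount": 5000, "foreign": 1, "hour_of_day": 2}
ranking = explain(fraud_score, txn, ["amount", "foreign", "hour_of_day"])
print(ranking[0][0])  # the feature most responsible for this prediction
```

An explanation like "this transaction was flagged mainly because of its amount and foreign origin" is something a line-of-business owner can sanity-check, which is precisely what builds trust in an otherwise opaque model.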

IBM AI OpenScale also has other capabilities such as automatic bias mitigation, accuracy monitoring of models, generation of new versions of models when accuracy drops, monitoring of the load on models, model lineage, etc. In this publication we will soon be adding posts that provide a deeper dive into the various features. Stay tuned!
