Finding Trust and Transparency in Artificial Intelligence

TL;DR: It’s possible, but there’s no magic wand.

--

I’ve written before about the racism and inequalities perpetuated by the use of artificial intelligence. This is one of many things that cause people to pause before delving into AI to solve real-world problems.

Dandelion seeds floating away in a sunset. Photo by Dawid Zawiła on Unsplash

If you work with artificial intelligence technologies, you are acutely aware of the implications and consequences of getting it wrong. I wish I could tell you there was a magic wand to be waved that could solve problems like ensuring fairness, removing racial and gender bias, increasing explainability, preventing adversarial attacks, and providing transparency. The reality is that if these are things you care about, and they very well should be, you are going to need to put in the work to understand how to tackle these very important problems. The good news is that there are teams of engineers, data scientists, and researchers at IBM who are building, contributing to, and maintaining open source projects and communities to support you in this endeavor.

The Importance of Industry Standards

Globally, more than 50% of enterprise organizations have some form of AI deployment in operation today, with the average company running at least four AI or ML projects. With this level of rapid adoption, it is imperative that the industry align on using this technology in responsible ways. To that end, IBM has joined the LF AI Foundation, under the Linux Foundation. The LF AI Foundation supports open source projects within the artificial intelligence, machine learning, and deep learning space. To build trust in the adoption of AI, LF AI has established the Trusted AI Committee. IBM joins a group of member organizations from across the globe to work toward a set of principles for trustworthy AI and to collect solid public use cases that show what this work looks like when it is done right.

As a committee member, I am fortunate not only to share what we are doing at IBM toward this end, but also to learn from the other member organizations and discover areas for collaboration and knowledge transfer.

Todd Moore presenting at Open Source Summit EU 2019 on Trusted AI and the LF AI Trusted AI Committee

Fairness and Bias Mitigation

Defining “what is fair” in the usage of AI is an incredibly difficult thing to do because it is contingent on your particular use case and defined by context. While people may have good intentions with their usage of AI, if they don’t identify systemic racism and cognitive biases that are present in their datasets or the way their models and algorithms are built, they can all too easily perpetuate and AUTOMATE inequalities that exist in society.

The AI Fairness 360 toolkit (AIF360) helps you toward this end. AIF360 is an open source Python library that includes a long list of metrics to help you define and measure individual and group fairness in your models and datasets. Once you’ve got a solid handle on the problem you need to solve, you can dive into the bias mitigation algorithms to actually change the outputs of your models.
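To make that concrete, here is a minimal sketch of what working with AIF360 can look like. It assumes the toolkit is installed and that the UCI Adult census files have been downloaded where the bundled AdultDataset loader expects them; the choice of “sex” as the protected attribute and Reweighing as the mitigation algorithm are illustrative, not prescriptive.

```python
# A minimal sketch: measure group fairness, then apply one mitigation algorithm.
# Assumes aif360 is installed and the UCI Adult data files are in place.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = AdultDataset()                    # bundled loader for UCI Adult census data
privileged = [{'sex': 1}]                   # illustrative group definitions
unprivileged = [{'sex': 0}]

# Measure group fairness before mitigation
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Disparate impact (before):", metric.disparate_impact())

# Reweighing is one of several pre-processing mitigation algorithms in AIF360
rw = Reweighing(privileged_groups=privileged, unprivileged_groups=unprivileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(transformed,
                                        privileged_groups=privileged,
                                        unprivileged_groups=unprivileged)
print("Disparate impact (after):", metric_after.disparate_impact())
```

The important part is the workflow: pick the fairness metric that matches your context, measure it, mitigate, and measure again.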

Kush Varshney presents about AIF360 at Reinforce Conference in March 2019. Kush also co-directs the IBM Science for Social Good initiative.

Improving Explainability

For many who want to take advantage of the efficiency gains from AI, the inner workings of machine learning algorithms are a mystery, and thus the outcomes of their models are hard to justify or explain. Right now, impressive accuracy is achieved through black-box machine learning models, such as deep neural networks and large ensembles, but few people can understand or explain their outcomes. To confidently use machine learning for high-stakes decisions, explainability and interpretability of the models are essential.

The AI Explainability 360 toolkit (AIX360) was built to increase transparency and to allow your machine learning models to explain their decisions. The toolkit includes many ways to explain your machine learning model’s outcomes: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, static vs. interactive; the appropriate choice depends on the platform and audience that needs the explanation.

Decision Tree to guide you in choosing the right algorithms for explaining the outcomes of your models. Find more on the AIX360 website.

Like AIF360, this toolkit is a Python package that includes a set of algorithms spanning these different dimensions of explanation, along with proxy explainability metrics.
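As a rough illustration, here is a sketch using Protodash, one of the data-level explainers in the toolkit, to pick a handful of prototypical examples that summarize a dataset. The random feature matrix is a stand-in for your own preprocessed data, and the explain(X, Y, m) call is my reading of the Protodash interface, so treat the exact arguments as an assumption to verify against the AIX360 docs.

```python
# A minimal sketch of a data-level explanation with AIX360's Protodash explainer,
# which selects a small set of prototypical rows that summarize a dataset.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))   # stand-in for a real, preprocessed dataset

explainer = ProtodashExplainer()
# Select 5 prototypes from X that best represent X itself
weights, prototype_idx, _ = explainer.explain(X, X, m=5)

print("Prototype rows:", prototype_idx)
print("Importance weights:", np.round(weights, 3))
```

Other explainers in the package work at the model level instead, but the pattern is similar: choose the explainer that matches your audience, fit or call it, and inspect the explanation it returns.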

Preventing Adversarial Attacks

Machine learning models are vulnerable to attack from hackers who can use a variety of approaches to make a model misclassify its inputs. Some attacks add specially crafted “adversarial noise” to an image, imperceptible to humans but enough to confuse a model; others “poison” a crowd-sourced training data set to achieve the attacker’s goals. An alarming example would be the insertion of noise into the image of a stop sign so that an autonomous vehicle mistakes it for something else. A hack like this could have deadly results!

Adversarial Robustness 360 Toolbox demo app showing a C&W attack on a picture of a Siamese cat: the image classifier now thinks it is an ambulance! Check it out here: https://art-demo.mybluemix.net/

The Adversarial Robustness 360 Toolbox (ART) provides the tools to build and deploy defenses and to test them with adversarial attacks. Defending machine learning models involves certifying and verifying model robustness, and hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and using runtime detection methods to flag inputs that may have been modified by an adversary. The attacks implemented in ART let you craft adversarial examples against your own models, which is exactly what you need to test your defenses against state-of-the-art threat models.
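To give a flavor of how ART is used, here is a minimal sketch that wraps a scikit-learn classifier and attacks it with the Fast Gradient Method, one of the evasion attacks in the toolbox. The module paths follow the ART 1.x layout, and the Iris data, logistic regression model, and eps value are purely illustrative.

```python
# A minimal sketch: wrap a trained model with ART and craft an evasion attack.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap the trained model so ART can query predictions and gradients
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# Fast Gradient Method: add small perturbations intended to flip predictions
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_adv = attack.generate(x=x)

clean_acc = (classifier.predict(x).argmax(axis=1) == y).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y).mean()
print(f"Accuracy on clean data: {clean_acc:.2f}, on adversarial data: {adv_acc:.2f}")
```

The defenses in ART (pre-processors, adversarial training, detectors) plug into the same wrapped classifier, so you can measure how much each one recovers the accuracy lost to an attack like this.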

Building Transparency

The growing interest in increasing efficiency with open data and deep learning doesn’t change the fact that not everyone is a data scientist or machine learning expert. Our team at CODAIT is making this process easier by compiling and cataloging open source tools and providing documentation that makes them easier to evaluate and adopt.

The Model Asset eXchange (MAX) is a marketplace of open source deep learning models. They are free, instantly deployable without having to write any code, and trainable. There are more than 30 models available, with more added regularly, covering a wide range of tasks: image segmentation, caption generation, audio classification, human pose estimation, even a toxic comment identifier. With the ability to search by industry, product, or service, it’s easy to find a model that fits your given use case.

You can find tutorials on IBM Developer that will help get your creative juices flowing for what you could do with MAX models. Our team is working on samples that illustrate how the models can be incorporated into web apps, IoT workflows, or serverless apps. On each model’s page you’ll find links to try these out in the “Example Usage” section.
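Once a MAX model’s Docker container is running, it exposes a simple REST endpoint for predictions, so calling it from your own code is straightforward. Here is a sketch of calling a locally running model from Python; the specific model (the Image Caption Generator), the port, and the image path are assumptions for illustration.

```python
# A minimal sketch of calling a locally running MAX model's REST API.
# Assumes a container is already up, e.g. the Image Caption Generator
# published to port 5000 on localhost.
import requests

url = "http://localhost:5000/model/predict"

with open("my_photo.jpg", "rb") as f:            # any local image
    response = requests.post(url, files={"image": f})

response.raise_for_status()
print(response.json())   # JSON payload with the model's predictions
```

Because every MAX model ships with its API documented on the model page, swapping one model for another is mostly a matter of changing the container you run and the fields you send.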

We show you a few fun ways to take advantage of MAX. There are models to help answer questions, recognize yoga poses, or even to make music with the flick of a wrist. Learn about these and more on IBM Developer.

Built as a companion to MAX, the Data Asset Exchange (DAX) is a similar marketplace of open source resources, but here you’ll find open data sets that are easily consumable in an enterprise environment. We’ve made it easy to evaluate the licensing terms for each data set and to build end-to-end deep learning workflows, from using the data to train models to deploying those models in standard ways. Our goal is for users to be able to easily train MAX models using data from DAX.

All the datasets in DAX fall under the Community Data License Agreement. This gives you confidence that the datasets we’ve collected are available for commercial use and come from a trusted source.
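Working with a DAX dataset typically means downloading an archive and loading its files into the tooling of your choice. Here is a sketch with pandas; the URL and file name are placeholders, since each dataset’s DAX page lists its actual download link and contents.

```python
# A minimal sketch of fetching a DAX dataset archive and loading it with pandas.
# The archive URL and the CSV name inside it are hypothetical placeholders.
import tarfile
import urllib.request
import pandas as pd

archive_url = "https://example.com/path/to/some-dax-dataset.tar.gz"  # placeholder
urllib.request.urlretrieve(archive_url, "dataset.tar.gz")

with tarfile.open("dataset.tar.gz") as tar:
    tar.extractall("dataset")

df = pd.read_csv("dataset/data.csv")   # placeholder file name inside the archive
print(df.head())
```

From there, the data can feed directly into a training workflow, including retraining one of the trainable MAX models.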

The CODAIT team’s goal is to make it straightforward to use DAX and MAX in conjunction with IBM AI products as well as other hybrid, multicloud AI tooling, both proprietary and open source. We want to give data scientists and developers well-curated data starting points, to make it easier for you to start developing your AI applications and solutions with confidence.

Continual Learning…

In my work promoting Trust and Transparency in AI here at the Center for Open Source Data and AI Technologies, I’m lucky to work directly with many people who are doing crucial work toward making it possible to leverage this amazing technology in responsible and explainable ways. You can learn about the work my team is doing on the CODAIT website.

My team is also lucky to support and help maintain the open source projects from IBM Research mentioned in this article. They continue to push the envelope in building tools and standards that advance responsible AI. In the scope of history, AI is still in its infancy, and we are excited about the opportunities ahead.
