Addressing Coded Bias with Responsible AI

Mark Kobe
Slalom Technology
Jan 19, 2022
The documentary film CODED BIAS exposed prejudices and threats to civil liberties in facial recognition algorithms and artificial intelligence

Like many artificial intelligence practitioners, I watched the documentary CODED BIAS and was deeply affected by the stories of how algorithms and datasets can embed the racial and gender biases of their developers. Biased algorithms and datasets affect real people's lives every day, shaping decisions about law enforcement, financial services, healthcare, and education.

The film strengthened my determination to make a positive change in the way AI is developed. We need broader awareness and action to address coded bias. If we do not make this change, AI and its benefits will not be broadly adopted in business or society, and the hard work (and good intentions) of the AI community will be lost!

The documentary CODED BIAS was produced and directed by Shalini Kantayya, an American filmmaker and environmental activist, who recently shared more about her motivation for the film. Kantayya describes algorithmic justice as a human rights issue and a battleground for civil rights and democracy in the 21st century. The issue is compounded by a lack of public understanding, and she believes that AI literacy could be the answer. “I really think that literacy is the spark that is going to make change… this is a moment where the cement is still wet,” she said during a virtual discussion of her film.

I strongly urge everyone in the artificial intelligence community to watch the documentary CODED BIAS and think about how you can support positive change. It inspired our team at Slalom to redouble our efforts to help organizations with Responsible AI and to collaborate with our partner Microsoft on this important mission. More on that in a moment!

Responsible AI For All

At Slalom, we believe that AI can unlock human potential and make a positive impact on the world… but only if it is accessible to all (not just the largest companies), inclusive of all (reflecting the diversity of humanity), and creates opportunity for all (not just business value but social and environmental change). This is the inspiration behind our “All for All” initiative.

At the same time, we know that technology itself does not change the world… people are required. It is not only about training models. It is about training people and organizations to address coded bias. Therefore, we believe Responsible Application must be an AI Core Principle to mitigate bias, ensure safety, and build trust with humans.

Slalom AI Core Principles

Responsible AI is not only the right thing to do… it is a business imperative. Customers choose organizations with trusted brands. If customers don’t trust your AI-powered product they simply won’t adopt it. Similarly, if you are rolling out AI in your business, your employees must understand and trust the application… or they won’t adopt it. Therefore, Responsible AI must be an integral part of the strategy for any AI investment.

Responsible AI Strategy

Whether you’re just thinking about AI or your company’s journey is already underway, you’ll want to make sure you are asking questions to ensure Responsible AI is part of your strategy…

Here are a few illustrative questions you should be asking; the list is certainly not exhaustive. The answers to these questions inform your strategy and must be operationalized in your processes and systems. It’s important to ask the hard questions before investing significant time and effort in building AI that people will not trust.

Clarify principles: How does your AI align with your company’s values and priorities? Have you stated a point of view on responsible application of AI/ML?

People: Is your organization aligned on how to plan AI projects with responsibility in mind? Do you engage diverse stakeholders and include multi-disciplinary views?

Explainability: Are model inputs, outputs, dependencies, and versions documented? Do you know which features the model uses and how important each one is? Are any of those features correlated with protected attributes? (A rough check for this is sketched after the list.)

Measurement: Do you track model performance in your products and monitor for drift against measures of bias? (A simple monitoring sketch also follows the list.)

Governance: Do you have stewards for data and models? Are you taking steps to maintain regulatory compliance? Are you prepared if regulations change? Do you have effective controls?

Data Selection: Are you training models on representative data, or do you need to seek additional data? How are you checking ground truth, counterfactuals, and edge cases? (A representativeness check is sketched below as well.)
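
To make the Explainability question a bit more concrete, here is a minimal sketch (Python with pandas) of one way to flag features that may act as proxies for a protected attribute. The correlation threshold, the file name, and the column names are hypothetical placeholders, and a correlation screen is only a first-pass check rather than a complete fairness review.

```python
# Minimal sketch: flag features whose correlation with a protected attribute
# is high enough that they may act as proxies for it.
# All column names, the file name, and the 0.3 threshold are illustrative.
import pandas as pd

def proxy_feature_report(df: pd.DataFrame, protected: str, threshold: float = 0.3) -> pd.Series:
    """Return features whose absolute correlation with `protected` exceeds `threshold`."""
    encoded = pd.get_dummies(df, drop_first=True, dtype=float)   # one-hot encode categoricals
    protected_cols = [c for c in encoded.columns if c.startswith(protected)]
    corr = encoded.corr()[protected_cols].abs().max(axis=1)      # strongest link to any protected level
    corr = corr.drop(index=protected_cols)                       # don't report the attribute itself
    return corr[corr > threshold].sort_values(ascending=False)

# Example usage (hypothetical file and column names):
# applications = pd.read_csv("loan_applications.csv")
# print(proxy_feature_report(applications, protected="gender"))
```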
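
For the Measurement question, the sketch below shows the kind of lightweight check you might run on each scoring batch: compute a simple bias measure (here, the gap in positive-prediction rates between groups, often called the demographic parity difference) and raise an alert when it drifts past a limit. The 0.1 limit and the group labels in the usage example are illustrative assumptions, not recommendations.

```python
# Minimal sketch: monitor a simple bias measure across scoring batches and
# alert when it exceeds an (illustrative) limit.
import pandas as pd

def demographic_parity_difference(preds: pd.Series, groups: pd.Series) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = preds.groupby(groups).mean()
    return float(rates.max() - rates.min())

def check_batch(preds: pd.Series, groups: pd.Series, limit: float = 0.1) -> None:
    """Print an alert if the parity gap for this batch exceeds the limit."""
    gap = demographic_parity_difference(preds, groups)
    status = "ALERT" if gap > limit else "ok"
    print(f"{status}: demographic parity gap = {gap:.3f} (limit {limit})")

# Example usage with made-up predictions and group labels:
# batch = pd.DataFrame({"approved": [1, 0, 1, 1, 0, 0],
#                       "group":    ["A", "A", "A", "B", "B", "B"]})
# check_batch(batch["approved"], batch["group"])
```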
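
And for Data Selection, here is a rough way to compare the group mix in your training data against a reference population. The group names and reference shares in the usage example are made up; in practice you would use census figures or other domain data appropriate to your use case, and representativeness is only one of several data checks worth running.

```python
# Minimal sketch: compare training-set group proportions to a reference
# population. Group names and reference shares are illustrative only.
import pandas as pd

def representation_gap(train_groups: pd.Series, reference_shares: dict) -> pd.DataFrame:
    """Show training-set share vs. reference share for each group."""
    train_share = train_groups.value_counts(normalize=True)
    report = pd.DataFrame({"train_share": train_share,
                           "reference_share": pd.Series(reference_shares)}).fillna(0.0)
    report["gap"] = report["train_share"] - report["reference_share"]
    return report.sort_values("gap")   # most under-represented groups first

# Example usage with hypothetical values:
# groups = pd.Series(["A"] * 700 + ["B"] * 250 + ["C"] * 50)
# print(representation_gap(groups, {"A": 0.6, "B": 0.3, "C": 0.1}))
```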

Your Responsible AI Journey

AI is evolving rapidly… and it can be daunting for organizations to navigate when they are already under tremendous pressure. Slalom helps customers adopt industry-leading Responsible AI practices as part of the AI for All Core Principles framework, which spans the illustrative activities of a Responsible AI program.

Conclusion

The innovation and promise of AI must be balanced with responsible application. As a community, we must work together to make a positive change. Not only for our organizations… but for our society, families, friends, and colleagues. To learn more about the ways Slalom and Microsoft help customers adopt Responsible AI, please watch our recorded webinar.

We look forward to working with our customers and the community to support this important mission. Please don’t hesitate to reach out with feedback or questions!

About Microsoft and Slalom

Microsoft and Slalom achieve more together through data and AI. Many organizations already use Microsoft products and services. Using Slalom’s modern culture of data framework, we partner with change-making clients to shape the future around Microsoft technology. As we look to the next two decades and beyond, we know that future will be built on Microsoft too.

About Slalom

Slalom is a global consulting firm focused on strategy, technology, and business transformation. In 41 markets around the world, Slalom’s teams have autonomy to move fast and do what’s right. They are backed by regional innovation hubs, a global culture of collaboration, and partnerships with the world’s top technology providers. Founded in 2001 and headquartered in Seattle, Slalom has organically grown to over 11,000 employees. Slalom has been named one of Fortune’s 100 Best Companies to Work For six years running and is regularly recognized by employees as a best place to work. Learn more at slalom.com.

Mark Kobe is a leader at Slalom and helps clients use AI to make a positive impact in their organization and in the world.