A Conversation on Mitigating Bias in Artificial Intelligence

By Alan Man

Introduction

In late 2019, researchers from across the US studied an algorithm that allocates resources in the healthcare industry. The algorithm operated on the (flawed) premise that healthcare expenditures could be used as a proxy for healthcare needs. The researchers found that it consistently allocated more resources to white patients than to similarly situated black patients: the algorithm was discriminating on the basis of race. The root cause of this bias was hidden in the data: black patients, more often than their white counterparts, had lower access to healthcare resources and, as a result, spent less on healthcare. The algorithm took an industry with baked-in systemic racism and perpetuated, perhaps even amplified, that racism.

The application of Artificial Intelligence (AI), such as this healthcare industry algorithm, is perhaps the biggest economic opportunity of our generation — estimated to contribute $15.7 trillion to the global economy by 2030. As a result, companies are working at full speed to reap the rewards by folding new algorithms into their systems and processes — sometimes without giving much thought to the wider systemic implications. But as the healthcare algorithm example — and countless others — illustrates, poorly designed algorithms may further harm those in our society who are most marginalized. As much as we like to use words like “machine” learning and “artificial” intelligence to describe these algorithms, they are ultimately designed by humans, and human bias can influence their outputs in unexpected ways.

An example of how biased datasets lead to biased algorithms, taken from the Mitigating Bias in Artificial Intelligence playbook and presentation at the webinar event.

The Launch of Mitigating Bias in Artificial Intelligence: An Equity Fluent Leadership Playbook

The Center for Equity, Gender and Leadership (EGAL) recently launched a playbook to help business leaders and those working in AI to identify and mitigate bias in AI. The playbook will help business leaders understand why bias exists in AI systems and its impacts, be aware of challenges to address bias, and execute evidence-based plays.

EGAL held a virtual event to launch the playbook in which EGAL’s Associate Director, Genevieve Smith, introduced the playbook and moderated a panel with experts and stakeholders from across the AI ecosystem. The panelists provided important lessons and ideas from their own experience on how business leaders can mitigate biases in AI systems, and what else is needed to catalyze larger change.

The panel included:

  • Saniye Gülser Corat — Director of Gender Equality, UNESCO
  • Donald Martin, Jr. — Sr. Technical Program Manager & Social Impact Technology Strategist, Google
  • Deborah Raji — Tech Fellow at the AI Now Institute, Research Fellow at the Partnership on AI, 2020 MIT Technology Review 35 Innovators Under 35
  • Katia Walsh — Chief Strategy & AI Officer, Levi Strauss & Co.

The Panel Discussion — Key learnings

Technology is neither good nor evil — therefore hire for good

The EGAL playbook outlines the critical role that business leaders play, including fostering an organizational culture that values and prioritizes responsibility and ethics related to AI. Starting the panel off, Walsh discussed how COVID-19 is affecting consumer and business activities, pointing out that we are in the midst of a period of extraordinary transformation as a result. She highlighted that there is a huge amount of capability in technologies under the AI umbrella, but that these technologies are application-agnostic: they can be used equally for good or evil. Making sure that we deploy these technologies responsibly means having the right teams in place.

For Walsh, it is critical to hire data scientists who genuinely want to make the world a better place, and to surround them with a culture and environment that reinforce that desire — it is when the right people and the right environment come together that we can ensure we are using technology in a productive and responsible way. An organizational culture that values responsibility and ethics starts with hiring.

Lived experience is important

Noting that bias can enter the algorithm development process even before a single line of code is written, Martin spoke about the need to include people from diverse backgrounds early on in the process of framing an AI model. He cited the aforementioned healthcare algorithm case as an example of a problem that was created in the very early phases of formulating the model and deciding which factors should be included. He explained that the algorithm was based on the misguided assumption that “if you have more complex healthcare needs, you would spend more on healthcare.”

Perhaps the assumption was made by a team of people whose own lived experience was that their healthcare spending correlated directly with their healthcare needs (from that vantage point, of course, it makes total sense!). Ultimately, though, the assumption lacked an understanding of the historical context behind healthcare disparity and systemic racism in America, and it led to an outcome that might have been avoided had people with different lived experiences been involved in formulating the model.
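
To make the spend-as-proxy failure concrete, here is a minimal, purely illustrative sketch (synthetic data and a simple stand-in model; this is not the actual algorithm from the study). It shows how, when spending is used as the label, equally sick patients from a group with less access to care end up with systematically lower scores:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic, purely illustrative data (not the real algorithm or dataset):
# two groups with the same distribution of true healthcare need, but group B
# faces access barriers and therefore spends less at every level of need.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)                  # true need, identical by design
access = np.where(group == 1, 0.6, 1.0)       # spending suppressed for group B
past_spend = need * access + rng.normal(0, 2, n)
future_spend = need * access + rng.normal(0, 2, n)  # the flawed proxy label

# Train on spending as a stand-in for need, exactly as the premise assumed.
model = LinearRegression().fit(past_spend.reshape(-1, 1), future_spend)
risk_score = model.predict(past_spend.reshape(-1, 1))

# Equally sick patients in group B receive systematically lower risk scores,
# so any resource-allocation rule built on these scores shortchanges them.
for g, name in [(0, "group A"), (1, "group B")]:
    m = group == g
    print(f"{name}: mean true need = {need[m].mean():.1f}, "
          f"mean risk score = {risk_score[m].mean():.1f}")
```

No line of this sketch inspects race directly, yet the scores diverge by group — the bias rides in on the proxy label, exactly the kind of early framing decision Martin described.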

Balance technical algorithm building with real-world context

Raji pointed out that bias can be introduced through the data used to train and evaluate models. She discussed examples where she had identified bias in facial recognition software — particularly against black women — largely because the datasets used to train the models consisted primarily of white faces. Interestingly, she went on to explain that highlighting this bias at one company triggered conversations about privacy and how facial recognition technology could be used — and even weaponized. This reinforces the need to balance the technical details of algorithm development with the real-world context in which those algorithms will be deployed.
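
One practical technique that follows from this line of auditing work is disaggregated evaluation: reporting error rates per demographic subgroup rather than a single aggregate number. The sketch below is a hypothetical illustration of the idea (the subgroups, records, and numbers are invented for demonstration and are not taken from Raji's audits):

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute accuracy per subgroup instead of one aggregate number.

    `records` is an iterable of (predicted, truth, subgroup) tuples, where
    subgroup is whatever stratification the audit uses (for example,
    intersections such as "darker-skinned women").
    """
    correct, total = defaultdict(int), defaultdict(int)
    for predicted, truth, subgroup in records:
        total[subgroup] += 1
        correct[subgroup] += int(predicted == truth)
    return {g: correct[g] / total[g] for g in total}

# Invented audit results: the aggregate accuracy (80%) hides the fact that
# one subgroup is served far worse than the other.
records = (
    [("match", "match", "lighter-skinned men")] * 95
    + [("no match", "match", "lighter-skinned men")] * 5
    + [("match", "match", "darker-skinned women")] * 65
    + [("no match", "match", "darker-skinned women")] * 35
)

for subgroup, accuracy in disaggregated_accuracy(records).items():
    print(f"{subgroup}: accuracy = {accuracy:.0%}")
```

A single headline accuracy figure would pass this hypothetical system; breaking the results out by subgroup is what surfaces the disparity.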

Bias is everywhere, so think deeply about its impact

Gülser rounded off the conversation with a discussion of AI examples that many of us are very familiar with — and the biases that exist in their current applications. AI assistants, such as Siri and Alexa, are almost always voiced by women. Further, these assistants have become known for their coy flirtations in response to sexually explicit or inappropriate questions from their users. Not only does this reinforce harmful gender stereotypes, it also sets a poor example for users (especially children) of how to interact with these products. Examples like this are an important reminder that bias can exist not only in the algorithms themselves, but also in the way businesses build products on top of the technology.

Gülser also highlighted the role of the UN in advancing responsible and ethical AI, as well as the partnerships and regulation the industry needs.

Conclusion

As the playbook rightfully concludes (paraphrased): while it is not possible to remove all bias completely, it is clear that bias in AI isn't simply a technical problem and so cannot be solved with technical solutions alone. Addressing bias in AI requires assessing the field more broadly; it requires seeing the big picture. As developers, users, and managers of AI systems, businesses play a central role in leading the charge, and the decisions of business leaders are of historic consequence. This is why addressing bias in AI is an issue for business leaders.

“The ultimate goal is to mitigate bias in AI to unlock value responsibly and equitably. By using the playbook, you will be able to understand why bias exists in AI systems and its impacts, be aware of challenges to address bias, and execute strategic plays.”

For more information, check out the Mitigating Bias in AI playbook; the panel discussion can be viewed below.


Center for Equity, Gender & Leadership (EGAL)

At the heart of UC Berkeley's Haas School of Business, the Center for Equity, Gender and Leadership educates equity-fluent leaders to ignite and accelerate change.