Addressing the Root Causes and Not the Symptoms of Bias in Tech

[Image: “Diversity of Colour and Gender,” via Strategic Finance[5]]

Googling ‘bias in tech’ gives you a long list of companies and platforms with biased Artificial Intelligence (AI), or biased tech in general, with OpenAI’s DALL-E 2 being the most recent. Are we surprised? Not really. Neither is OpenAI. If you scroll to the bottom of DALL-E 2’s main webpage, you will find a “Risks and Limitations” document detailing the bias in the system[1]. The document informs us that DALL-E 2’s researchers have attempted to address these issues but could not find an effective solution; each fix produced different trade-offs. So, for the time being, most of the results for ‘lawyer’ on DALL-E 2 are of white men, and most of the results for ‘flight attendant’ are of Asian women.

That said, OpenAI is doing something about it. They are openly discussing their system’s issues, and they are limiting the software’s release so that outside users can find faults and researchers can “red team” it, probing for as many flaws as possible. The team at OpenAI recognizes the vulnerabilities of their system and is trying to address them. Not everyone does that. Pointing out flaws in a system that people have spent countless hours on, and that companies have invested millions of dollars in, is not easy, but it has to be done if we want to work together for a more cohesive world. These imperfect systems do not have to be thrown out completely, but they must be examined for flaws, and those failings must be addressed.

Which brings us to the question of evaluating bias in tech. How do we do that? One approach is similar to OpenAI’s: give a limited number of people access to the tech so that they can offer an outsider’s perspective on potential issues. Another is to have different internal teams review the tech. However, these solutions address the symptoms of bias, not its root causes. It is important to remember that humans, and the historical data we produce, are the essence of the problem. We should be addressing the issue where it stems from.
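
To make that kind of evaluation concrete, here is a minimal sketch of one small piece of a bias audit: tally reviewer-annotated attributes of sampled outputs per prompt and flag heavy skew. Everything in it, the prompts, labels, and threshold, is illustrative and hypothetical, not drawn from DALL-E 2 or any other real system.

    from collections import Counter

    # Hypothetical reviewer annotations: perceived gender presentation of a
    # small sample of images generated for each occupation prompt. A real
    # audit would use far larger samples, multiple demographic attributes,
    # and trained annotators.
    samples = {
        "lawyer": ["man", "man", "man", "woman", "man", "man", "man", "woman"],
        "flight attendant": ["woman", "woman", "woman", "woman", "man", "woman", "woman", "woman"],
    }

    SKEW_THRESHOLD = 0.75  # arbitrary cutoff: flag prompts where one label dominates

    for prompt, labels in samples.items():
        counts = Counter(labels)
        top_label, top_count = counts.most_common(1)[0]
        share = top_count / len(labels)
        status = "SKEWED" if share >= SKEW_THRESHOLD else "ok"
        print(f"{prompt!r}: {dict(counts)} -> {top_label!r} at {share:.0%} [{status}]")

Even a crude tally like this makes skew visible and reviewable. But note that it only measures the symptom; the harder work, as argued below, happens upstream, with the people and the data.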

To get at the root causes, we should be looking at our internal teams and the people around us. One suggestion is to train developers, and all staff, on bias and anti-bias decision-making, so that they are aware of the problems their tech can create and can work to address those problems from the very beginning. This awareness can come in the form of standard trainings, quarterly conversations on bias, and / or monthly check-ins with DEI and HR team members, bringing anti-bias decision-making to the forefront of tech. When developing the training and conducting these conversations, remember that bias comes in many forms: race, ethnicity, sexual orientation, religion, language, culture, disability, and the list goes on. Ensure that you address all of these dimensions to avoid unwanted consequences.

Remember that developers and the tech team are not solely responsible for mitigating bias. This is a collective responsibility shared by every member of your team, and it should be led by an executive who champions the fight. According to an IBM study, there has been a 15% uptick since 2018 in companies pointing to non-technical executives as the “primary champions” of AI ethics[2], bringing the tally to 80%. Recognizing bias, in other words, is a companywide responsibility and should be treated as such.

Addressing biases within your teams will also help address the bias intrinsic to the tech being developed. The same IBM study found that even though 68% of companies recognize the importance of diverse teams in addressing bias, the “respondents indicated their AI teams are: 5.5 times less inclusive of women, 4 times less inclusive of LGBT+ individuals and 1.7 times less racially inclusive.”[3] These companies recognize the importance of diversity but are not implementing it. Change that. Ensure your team is diverse.

These steps will not completely eliminate bias in tech, but they will help mitigate the issue and start the process of addressing the root cause instead of solely focusing on the symptoms. And if you are a company that consumes tech, ensure that your vendors are thinking about bias during development. Ask for reports and information on the bias present in their offerings. No tech is flawless, so if a vendor tells you theirs is perfect, look for another vendor[4].

[1] Samuel, Sigal. “A New AI Draws Delightful and Not-so-Delightful Images.” Vox, Vox, 14 Apr. 2022, https://www.vox.com/future-perfect/23023538/ai-dalle-2-openai-bias-gpt-3-incentives.

[2] “Responsibility for AI Ethics Shifts from Tech Silo to Broader Executive Champions, Says IBM Study.” IBM Newsroom, https://newsroom.ibm.com/2022-04-14-Responsibility-for-AI-Ethics-Shifts-from-Tech-Silo-to-Broader-Executive-Champions,-says-IBM-Study.

[3] “Responsibility for AI Ethics Shifts from Tech Silo to Broader Executive Champions, Says IBM Study.” IBM Newsroom, https://newsroom.ibm.com/2022-04-14-Responsibility-for-AI-Ethics-Shifts-from-Tech-Silo-to-Broader-Executive-Champions,-says-IBM-Study.

[4] Dishman, Lydia. “How to Spot Human Bias in the Tech Your Company Uses.” Fast Company, Fast Company, 13 Apr. 2022, https://www.fastcompany.com/90740524/how-to-spot-human-bias-in-the-tech-your-company-uses.

[5]“Diversity of Colour and Gender.” Strategic Finance, 1 July 2019, https://sfmagazine.com/wp-content/uploads/07_2019_tech_practices.jpg. Accessed 17 Apr. 2022.
