The Representation Paradox

Nidhi Sinha
Women in Technology
5 min read · May 2, 2024
A young Black man looks into the camera. He is wearing a dressing gown and has a towel around his neck. He applies a cotton pad to his face and wears a ring on his right hand. A list of words and numbers describes the man’s expression: happiness 4.185, neutral 0.901, surprise 89.864, sadness 0.01, disgust 0.01, anger 5.021, and fear 0.01.
Image by Comuzi / © BBC / Better Images of AI / Mirror D / CC-BY 4.0.

Bias is one of the most discussed risks of AI, and increasing diversity in datasets is one of the most commonly proposed mitigations. With the concepts of inclusion and representation comes an interesting paradox:

When does representation become overexposure?

Sometimes, placing a spotlight on a group can be more like putting them under a microscope. Meredith Ringel Morris, in her paper “AI and Accessibility”, calls out how people with rare disabilities may opt out of being included in datasets entirely for fear of re-identification. As Ruha Benjamin astutely highlights in her book “Race After Technology”, increasing the number of Black and Brown people in these datasets does not just aid tools such as Facial ID for your phone; it also aids cybersurveillance. In fact, people of color are overrepresented in surveillance and predictive policing training data because they are repeatedly and unjustly targeted for alleged criminal activity. So, while bias is indeed a real issue for AI, there is an even more real, perhaps more uncomfortable issue to address: injustice.

Increasing diversity often feels like placing a bandaid over a bullet hole. A Bloomberg article highlighted the drastic biases between generative AI prompts and their outputs: the prompt “CEO” predominantly produced older white men, while the prompt “social worker” skewed towards women of color. These types of results reflect an uncomfortable truth about our societal biases.

However, the “obvious” solution of simply increasing representation in the dataset can lead to rather unfortunate, if not offensive, outcomes. For example, AI-generated images of the American Founding Fathers as Black and of a female Pope led to enough public outrage that Google had to pause its AI tool Gemini’s generation of images of people. These types of “representative” solutions avoid addressing the heart of the issue. Rather than surface-level “fixes”, we need to work together to address injustices throughout all types of systems, from academic to governmental, not just the AI ones.

When I first started learning about AI ethics, bias was the main topic. I already had a computer science degree, so I understood the technical components, but I learned how even technology can reproduce the human attributes of bias, toxicity, and harm. Still, the only solutions that people offered in these ethical AI discussions were technical: make the dataset more representative, don’t use a black-box model, use the proper data anonymization techniques. As I’ve progressed in my AI ethics career, these topics have broadened to include the importance of other aspects such as transparency and accountability. However, the focus we all place on bias is undeniable. Simply put, I don’t think that is enough. We need to incorporate a social element when designing AI: not just by asking the UI/UX developers about the interface, but also by considering the context in which the tool will be deployed and what the end audience actually needs. There is no way to design an AI system that accounts for all the nuances of a situation if the designers are not ready to listen to the people in that environment. Representation alone will not save us.

Representation alone will not save us.

I can’t help but notice the parallels between how corporations talk about AI bias mitigation and about DEI initiatives. DEI plays an important role in giving marginalized voices more of a platform at work, but it has also been rightfully criticized for its often surface-level approach to equity. Especially with how quickly DEI programs are being cut across the United States, one can reasonably assume that these ideals were never considered so important to these companies in the first place. Similarly, focusing all efforts on reducing bias in AI gives companies leeway to avoid facing the more difficult systemic problems in our society. These solutions, while conceptually important, fail to fully address systemic inequities. Are we working towards promoting products or people? The answer to this question tells us everything.

When we discuss bias, it often comes with the underlying assumption that the need to collect data is justified and that we simply need to focus on collecting the right data. This is a flawed premise. If our AI systems are inherently unsafe, or if the proper privacy guardrails are not in place, then putting in the right data only enables these AI systems and their developers to do more damage to marginalized groups. We have to consider whether we are aiming to recreate our existing world or imagine a totally new one.

From diagnostic tools to notetaking solutions, the potential applications for AI in healthcare are vast. However, simply mitigating the potential bias in these tools is not enough. While a major component of healthcare is accuracy, another is trust. Bogdana Rakova, a former Mozilla Fellow, has speculated on how to implement AI in healthcare in a way that empowers patients. Rakova’s project looks into building trust with patients by co-creating consent licenses for a theoretical generative AI assistant. Doing so centers human agency by asking individuals to determine how they want to interact with AI, rather than assuming every AI “solution” is wanted. These types of holistic approaches place the power back in people’s hands.

The most uncomfortable sentiment in Responsible AI is that sometimes AI should not be used at all. This is not a sentiment that brings in funding from investors, but it is crucial to protecting individuals from harm. Representation can only do so much in addressing these issues. Overpolicing methods such as surveillance, for example, target the issue of public safety but tend to exacerbate violence and fear. Harm reduction, education, and community engagement, by contrast, are all proven methods for reducing violent crime in neighborhoods. In scenarios like these, we need to pool resources into human-oriented solutions rather than try to force AI to fit.

The Detroit-based campaign Green Chairs, Not Green Lights (GCGL) underscores the importance of human-centric design beyond an AI context. Formed in response to Project Green Light, a mass surveillance program implemented by the Detroit Police Department, GCGL encourages investing in communities rather than in security. The campaign highlights how facial recognition can misidentify people of color, leading to wrongful arrests. While an AI-centric design approach would suggest better-quality data for these facial recognition datasets, the community-centric approach of GCGL considers what the families actually living in Detroit need to feel safer in their neighborhoods.

The representation paradox can be a difficult one to confront. If we don’t want biased, stereotyped, or harmful information about a group out there, it seems reasonable to conclude that we just need more of that group out there. But plastering someone’s face across a billboard (or, in this case, a dataset) is not the same thing as actually listening to them. People like to believe that technical problems just need better technical solutions. In my own work, I have seen how promising technical solutions can be for bridging gaps in explainability, interpretability, and yes, even bias. However, if we start from the perspective of addressing the overall social problem rather than the technical tool itself, then a social component is necessary.

People like to believe that technical problems just need better technical solutions.

Valuing history, culture, and community engagement will do far more to solve the big problems than trying to make AI fix everything on its own. We cannot erase the inequities embedded in our society throughout history, but neither can we absolve ourselves of aiming for better, even if that involves some uncomfortable conversations.


Working at the intersection of technology and ethics! Learn more about me @ https://worldofnidhi.com