Behind the Curtain

Unveiling AI’s Hidden Biases in the Quest for Clarity

Frejahartvig
HCAI@AU
3 min read · Mar 9, 2024


A robot with glowing eyes peeking out from behind a red velvet curtain.
Image by MidJourney (v6).

AI is becoming a big part of our daily lives, but it can mirror society’s biases, which makes clarity essential, especially in critical areas like health care or legal matters. By developing AI systems that explain themselves and by teaching people about the limits of those systems, we can make AI more trustworthy. Users, lawmakers, and the wider community all need to join forces to improve how we understand and manage AI.

Artificial Intelligence (AI) is becoming a bigger part of our lives every day, from chatbots like ChatGPT to systems that help doctors make important health decisions. However, it is important to remember that AI is not perfect. How can we make sure that the advice and solutions it gives us are not biased or wrong? Yuan et al.’s article from CHI 2023 (see the full citation at the end of this post) looks into how people perceive biases in AI and how their views shift with the stakes, from small daily hassles to big, life-changing choices. It highlights how important it is for users to trust and understand AI, and it calls for AI systems that are not only technically advanced but also socially aware, offering clear and easy-to-understand explanations.

To understand how people view AI biases, Yuan and her team interviewed 39 people, ranging from those who know little about how AI works to the engineers who build it. One of the most interesting findings from the interviews is the acknowledgement that AI, at its core, is a reflection of our society: a mirror that reflects our social biases back at us. This framing suggests that the issue is not just a technical one but also a matter of social change.

Robot looking into a mirror but seeing a busy city street filled with people.
Image created using ChatGPT.

However, this acknowledgement does not remove the need for AI systems to be explainable and trustworthy. People want to know how AI makes decisions, especially in important areas like healthcare or legal matters. The challenge is to explain AI in a way that is detailed enough for users to trust but simple enough to understand.

Interestingly, people worry more about bias and want clearer information when AI affects important aspects of their lives; if the problem is minor, they are more forgiving. For example, a photo search that mostly shows men as CEOs might not seem critical. However, when it comes to significant matters, like a credit decision where a system unfairly lowers credit limits for women, fairness and accuracy become crucial.

One solution could be to create AI systems that explain themselves and encourage users to reflect on their use and their biases. This involves not just making how AI works transparent but also teaching users about AI’s limitations and the bigger picture of its societal effects. This way, users can interact with AI in a more informed and thoughtful way.
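To make the idea of a system that “explains itself” a bit more concrete, here is a minimal sketch, not taken from the paper, of how a credit model could surface the reasons behind a decision. The dataset, feature names, and model are all invented for illustration, and the attribution method is the simplest one possible for a linear model (coefficient times feature value); a real system would need a far more careful approach.

```python
# A minimal sketch of a "self-explaining" credit decision.
# All data and feature names are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income (thousands), debt ratio, years of credit history]
X = np.array([
    [55, 0.2, 10],
    [30, 0.6, 2],
    [80, 0.1, 15],
    [25, 0.8, 1],
    [60, 0.3, 8],
    [20, 0.7, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = credit approved, 0 = declined

feature_names = ["income", "debt_ratio", "credit_history_years"]
model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant):
    """Print the decision plus per-feature contributions to the score,
    so the user can see which inputs pushed the model one way."""
    contributions = model.coef_[0] * applicant
    decision = model.predict([applicant])[0]
    print("Decision:", "approved" if decision == 1 else "declined")
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.2f}")

explain(np.array([28, 0.65, 2]))
```

Even a crude read-out like this gives users something to question: if `debt_ratio` dominates the score, a user can ask whether that input, and the data it was learned from, treats everyone fairly.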

Yuan et al. also note that tackling AI biases and improving transparency is not just a job for engineers but for everyone, including users, lawmakers, and society as a whole. As AI continues to advance, it is crucial that our understanding and regulations around it also evolve. This will ensure that AI benefits society while respecting individual rights and values.

In conclusion, making the AI decision-making process clear and involving users can lead to better understanding and trust. This approach helps us move towards a future where AI not only makes our lives easier but is also transparent and accountable.

You can read the full article here:
