Beware AI’s Hidden Biases: Why Artificial Intelligence is Never Neutral

Cat Mules
Published in halobureau
4 min read · Feb 16, 2023

As artificial intelligence (AI) becomes more integral to our daily lives, it is important to remember that it is never neutral. The biases, perspectives, and culture of its creators are deeply embedded in it. In the not-so-distant future, AI will be an indispensable part of our everyday routines, and as these systems become more complex and integrated, the issue of bias becomes more critical.

Might transparency be what’s missing from AI? The orb centrepiece in this artist's representation of ideal AI is strong and consistent; everything operates smoothly, with a constant flow of interaction surrounding it. Transparent AI helps build trust and accountability. Credit: Champ Panupong Te

Take one AI technology that is being used ever more widely and with growing sophistication: natural language processing (NLP). Most of us are becoming familiar with NLP through ChatGPT, a system that uses deep learning algorithms to generate text responses based on context and an understanding of language. NLP has been in commercial use for over two decades, however, in applications such as Google search, translation software, and virtual personal assistants (Siri, Alexa). ChatGPT has been ground-breaking because it was trained on a vast amount of diverse text data from across the internet; its underlying model builds on GPT-3, whose 175 billion parameters make it one of the most advanced AI language models. And its scope is said to be expanding with a recent multibillion-dollar investment from Microsoft, the world’s largest software company, in its maker, OpenAI.
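To make the idea concrete, here is a minimal sketch of NLP text generation using the open-source Hugging Face transformers library and the small GPT-2 model. The model choice and prompt are illustrative assumptions, not anything ChatGPT itself runs on:

```python
# A minimal sketch of NLP text generation, using the open-source
# Hugging Face `transformers` library and the small GPT-2 model.
# GPT-2 is a far smaller relative of the models behind ChatGPT,
# but the principle is the same: predict likely next words from context.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is never neutral because"  # illustrative prompt
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Even this toy model simply continues text from patterns in its training data, which is exactly where biases in that data creep in.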

There are side effects and risks with the adoption of any technology. The extent to which an AI system is biased depends on the data and algorithms used to train it, and on the motivations and biases of those who create and deploy it. If the data and training used to develop AI systems continue to be biased and to lack diverse representation, the outcomes could be dangerous.

This can already be seen with facial recognition technology (FRT). FRT systems use AI algorithms to rapidly identify individuals from their facial features. On one hand the technology is extremely convenient and effective: it is used for quick scans to unlock phones securely, to prove identities, and to eliminate the need to remember increasingly convoluted passwords. Yet there is repeated evidence (see here and here) that FRT is less accurate at identifying people from marginalised groups. Bias and lack of representation create feedback loops that disproportionately fuel negative stereotypes. This has real-world impacts, with FRT bias most visible in false positives, wrongful arrests, and denials of treatment.
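To see how such disparities are detected, here is a minimal sketch of the arithmetic behind a fairness audit: comparing false-positive rates across demographic groups. The group names and numbers are invented for illustration, not real FRT benchmark data:

```python
# A minimal sketch of a fairness audit: compare false-positive rates
# across demographic groups. The records below are invented for
# illustration only -- they are not real FRT benchmark data.
from collections import defaultdict

# Each record: (group, ground_truth_match, system_said_match)
results = [
    ("group_a", False, False), ("group_a", False, True),
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_b", False, True),  ("group_b", False, True),
    ("group_b", True,  True),  ("group_b", False, False),
]

negatives = defaultdict(int)        # true non-matches seen per group
false_positives = defaultdict(int)  # non-matches wrongly flagged as matches

for group, truth, predicted in results:
    if not truth:
        negatives[group] += 1
        if predicted:
            false_positives[group] += 1

for group in negatives:
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate = {rate:.0%}")
```

A persistent gap between groups in rates like these is exactly the kind of disparity the FRT accuracy studies report.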

A threat of AI is its potential for hidden impacts. The big data AI models are trained on, and the self-learning algorithms they use, make them able to impersonate and deceive. Their ability to perpetuate and amplify biases may affect an unforeseeable range of human activities: AI has already been used to pass professional exams, including medical licensing and business school tests. There are implications for criminal justice, health and safety. Cybersecurity and personal privacy will come under greater threat too, with criminals given the same abilities to create highly convincing phishing scams or impersonate individuals online.

One issue is the lack of transparency in AI decision-making. It can be difficult to understand why an AI system made a certain decision, which makes biases in the system hard to identify and correct. It might also be challenging to assign accountability, let alone blame, for the behaviour of AI as it develops and is integrated into society.

Initially, AI bias was thought of as a mere inconvenience, something that could be easily corrected. But as AI systems play a more prominent role in important decisions, such as immigration, research, communication, hiring, and lending, the impact of these biases is becoming much more significant. As AI systems spread, they are beginning to shape our societies and economies in ways we could never have imagined, and the speed and scale of that development have serious ethical implications.

In a world where AI bias has become more insidious, seemingly impartial algorithms are making decisions that are deeply unfair, perpetuating existing inequalities and creating new ones. For some, there is already a crisis of confidence in technology. While some believe AI can still be salvaged, others feel it is beyond repair.

Steps can be taken to mitigate bias in AI. The data sets used to train AI systems must be diverse and representative of society as a whole; the creation and use of AI systems must follow an inclusive approach; and the potential effects of AI applications must be weighed from all viewpoints, especially those of underrepresented groups, who should be meaningfully involved in the development process.
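As a sketch of what more representative training data can mean in practice, one common first step is to measure how groups are represented in a dataset and reweight underrepresented ones, a standard balancing technique. The groups and counts below are hypothetical:

```python
# A minimal sketch of one common mitigation step: measure how groups
# are represented in a training set and compute balancing weights so
# underrepresented groups count more during training. Groups and
# counts are hypothetical, for illustration only.
from collections import Counter

training_labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50

counts = Counter(training_labels)
total = sum(counts.values())
n_groups = len(counts)

# Weight each group inversely to its share of the data, so that
# every group contributes equally in aggregate.
weights = {g: total / (n_groups * c) for g, c in counts.items()}

for group, weight in sorted(weights.items()):
    print(f"{group}: {counts[group]} samples, weight = {weight:.2f}")
# group_a: 800 samples, weight = 0.42
# group_b: 150 samples, weight = 2.22
# group_c: 50 samples, weight = 6.67
```

Reweighting alone is no substitute for collecting genuinely representative data, but it makes under-representation visible and measurable.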

AI is never neutral. Technology will always be an expression of its creators’ values, including their biases, prejudices and motivations, both those they are aware of and those they are not. Ultimately, the extent to which AI becomes more or less biased will depend on the collective efforts of those who create and use AI systems, and on the regulatory and ethical frameworks in place to govern them. Staying vigilant about the reality of bias in AI will be critical to maximising its potential.
