AI Algorithms, Hidden Biases and Discrimination

Robert Panico
ILLUMINATION’S MIRROR
4 min read · Nov 7, 2023

Have you ever sat in a hospital waiting room, twiddling your thumbs for what feels like an eternity? Faced the agony of scheduling a critical specialist appointment that’s light years away, despite your urgent need? And those job offers flooding your inbox – why don’t they match your aspirations anymore? Brace yourself, because it’s the work of sneaky algorithms quietly discriminating against you.

In the realm of contemporary debates on race and racism, digital technologies have emerged as potent players, both intensifying and expanding racial inequalities worldwide. In 2020, the UN Special Rapporteur on racism, E. Tendayi Achiume, explored how digital technologies often perpetuate xenophobia and racial discrimination. These technologies, embedded with racial biases, breed discrimination in various facets of our lives.

The use of 'big data', which involves collecting, storing, and analysing vast amounts of digital information, has stirred the pot. It powers algorithms that shape outcomes across numerous domains, as documented in the 2018 report by the European Union Agency for Fundamental Rights, #BigData: Discrimination in data-supported decision making. The report underlines that while data can inform decisions and reduce human bias, these technologies should be scrutinised for their impact on specific groups.

Predictive modelling through machine learning and artificial intelligence (AI) is transforming industries like employment, education, healthcare, and criminal justice, with algorithms taking the reins. But there’s a catch – these algorithms are only as unbiased as the data they are fed. If data collection is skewed due to biased methods or subjective choices, the algorithms learn and replicate these biases.
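To make this concrete, here is a minimal, purely illustrative Python sketch with made-up hiring data (no real dataset or model is implied). If historical decisions were skewed against one group, even a trivial "model" that simply learns the historical decision rates will replicate that skew:

```python
from collections import defaultdict

# Hypothetical records: (qualified, group, hired).
# Past decisions were skewed against equally qualified "B" applicants.
history = [
    (True,  "A", True),  (True,  "A", True),  (True,  "A", True),
    (True,  "B", False), (True,  "B", False), (True,  "B", True),
    (False, "A", False), (False, "B", False),
]

def train(records):
    """'Learn' the historical hire rate for each (qualified, group) pair."""
    totals, hires = defaultdict(int), defaultdict(int)
    for qualified, group, hired in records:
        totals[(qualified, group)] += 1
        hires[(qualified, group)] += hired
    # Predict 'hire' whenever the historical hire rate exceeds 50%.
    return {key: hires[key] / totals[key] > 0.5 for key in totals}

model = train(history)
print(model[(True, "A")])  # True  -- qualified "A" applicants get offers
print(model[(True, "B")])  # False -- equally qualified "B" applicants do not
```

The point is not the (deliberately simplistic) learning rule: any model fitted to skewed decisions inherits the skew, because from the model's perspective the biased outcomes are the ground truth.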


A glaring example is in healthcare, where algorithms use past medical costs as a proxy for future healthcare needs. A 2019 study published in Science found that one widely used algorithm systematically discriminated against black patients, assigning them lower risk scores than equally sick white patients, because unequal access to care meant less money had historically been spent on their treatment.
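The proxy problem can be shown in a few lines of Python with invented numbers (not the study's data): two patients with identical medical need, one of whom, being historically under-served, generated lower past costs, and therefore receives a lower "risk" score.

```python
# Hypothetical patients: equal medical need, unequal historical access to care.
patients = [
    {"name": "P1", "chronic_conditions": 3, "past_costs": 9000},  # good access
    {"name": "P2", "chronic_conditions": 3, "past_costs": 4500},  # under-served
]

def risk_score(patient):
    # The flawed proxy: dollars previously spent stands in for medical need.
    return patient["past_costs"] / 1000

scores = {p["name"]: risk_score(p) for p in patients}
print(scores)  # P2 scores half as 'risky' as P1 despite identical need
```

Nothing in the scoring function mentions race or access to care, yet the choice of proxy quietly imports both.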

Furthermore, an AI tool for medical appointment scheduling unintentionally favoured some patients over others based on factors like ethnicity and socioeconomic status, causing longer waiting times for certain groups.

The impact of automation extends globally.

In India, smart sanitation systems threatened jobs typically held by the ‘Dalit’ caste, exacerbating economic disparities for a marginalised community.

In various scenarios, automated systems indirectly discriminate by reducing or eliminating positions, impacting marginalised groups disproportionately. The criminal justice system increasingly relies on digital technologies for predictive policing, leading to an overrepresentation of minority ethnicities in the justice system, perpetuating racial biases.

To address these challenges, regulation is essential, even if, in my opinion, it can sometimes slow innovation or even be used as a tool of control. Regulation can bring transparency to decision-making processes, ensure accountability in how algorithms are used, and promote diversity in AI development. The UK government has set a precedent with a mandatory transparency obligation for public sector organisations utilising algorithms.


However, regulations alone may not be enough. Establishing dedicated professional bodies at national, regional, and local levels is vital to continually assess, review, and update algorithms, minimising harm in the long term. While regulations protect fundamental rights, they shouldn’t stifle innovation and progress. Striking a balance is crucial, as governments must avoid overregulation that hampers societal advancement. Technology is a human creation, and while it can be neutral, the process behind its design and use introduces biases and ethical considerations.

If technology is to be made as neutral as possible, IT professionals need to be well-versed in recognising biases, addressing racial discrimination, and upholding ethical responsibility and accountability. Prominent figures like Geoffrey Hinton have raised concerns about the future misuse of AI, emphasising the need for investment in AI safety and control.

In conclusion, the realm of digital technologies holds incredible promise, but it's a double-edged sword. Regulation, combined with continuous assessment by dedicated professional bodies, can guide the responsible use of these technologies, ensuring they are a force for good rather than perpetuators of bias and discrimination.


Robert Panico
ILLUMINATION’S MIRROR

Coach, Mentor, Facilitator on a mission to empower vulnerable people.