How is data-driven AI perpetuating societal biases?

Kristina Kolesnikova
Towards Entrepreneurship
3 min read · Nov 15, 2020
Source: Forbes

Artificial Intelligence is revolutionising our world and has advanced exponentially in recent years. Views on this technology are polarised: some regard it as the solution to many of our inefficiencies, whilst others see it as the source of several anxieties.

Although AI is often associated with human-like machines, in reality it is simply software that simulates aspects of human intelligence. A subcategory of AI is machine learning: algorithms that improve over time as they are exposed to more data.
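To make that idea concrete, here is a minimal sketch (not from the original article) using scikit-learn and synthetic data: the same learning algorithm is trained on progressively larger samples, and its accuracy on unseen data tends to improve as it sees more examples.

```python
# A minimal sketch of "a model improves as it sees more data",
# using synthetic data and scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 500, 3500):  # train the same algorithm on more and more examples
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(accuracy_score(y_test, model.predict(X_test)), 3))
```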

Some tech figures, such as Elon Musk, are openly afraid of the consequences of a self-evolving AI, which they believe could become an existential threat. Speculating about apocalyptic scenarios in relation to new technologies can be dismissed as anti-progress or unrealistic, but it is sometimes needed, since a sense of urgency is what prompts regulation.

However, a more pressing issue is the bias embedded in AI and machine learning, especially when it comes to risk assessments, policing and border control. Although we are quick to shift responsibility for such emerging technologies onto the tech industry and its experts, the reality is that when technology enters the domain of social institutions, it becomes a political tool. It therefore requires greater oversight and control, since the futures of millions of people are at stake.

AI bias means that aspects of the data used to create the algorithms lead to biased results and discrimination. The data from our societies is inherently skewed by past prejudices and preconceptions, which are mirrored in the performance of such algorithms. One example is the use of algorithms in the US justice system. The US has more people in correctional facilities than any other country, and to remedy this the justice system has turned to AI and facial recognition systems.
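As a rough illustration of how skewed data propagates into an algorithm's outputs, the toy sketch below (synthetic data and scikit-learn, not any real system) trains a "risk" model on labels that were historically applied more often to one group. Two people with identical behaviour then receive different scores purely because of their group membership.

```python
# A toy sketch of historical bias being mirrored by a model: both groups have
# identical behaviour, but group B was historically flagged more often, and
# the model learns to reproduce that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
behaviour = rng.normal(size=n)           # identical distribution for both groups
# Historically biased label: group B flagged more often for the same behaviour
label = (behaviour + 0.8 * group + rng.normal(scale=0.5, size=n) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([behaviour, group]), label)

same_person = [[0.0, 0], [0.0, 1]]       # identical behaviour, different group
print(model.predict_proba(same_person)[:, 1])  # group B receives a higher "risk" score
```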

Some of the issues arising from facial recognition are that, firstly, the algorithms do not perform equally well across all demographic groups and, secondly, judging someone's likelihood of being a criminal offender based on their facial features is both ethically and morally wrong. In addition, algorithms used for risk assessment, such as COMPAS, are also questionable since they are developed using historical crime data. The Correctional Offender Management Profiling for Alternative Sanctions tool assesses how likely an offender is to reoffend. The correlations it finds in the data are not representative of reality and put low-income and minority groups at risk of receiving worse scores. Such reliance on technology makes it difficult to hold anyone accountable and shows why efficiency should not outweigh social justice.

On the other hand, parity does not always equal justice, as Kate Crawford argued in her talk on the politics of AI at the Royal Society. Simply removing the inefficiencies and biases from the data may have adverse effects on marginalised populations. For example, making facial recognition software more likely to recognise minority groups could in reality put them at greater risk of being prosecuted, because of the other biases that remain in the data.

Another famous example of technology perpetuating bias is Amazon's hiring algorithm. In 2015, it was discovered that the algorithm was biased against women because it had been trained on resumes submitted over the previous ten years, which came mostly from men. As a result, the algorithm was less likely to rate highly any application that mentioned anything related to women. In this manner, the gender gap was reinforced until the issue was identified and the algorithm was made neutral towards such terms. Although the problem was addressed, it is plausible that such algorithms still discriminate on some other basis.
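The mechanism can be illustrated with a deliberately tiny, hypothetical sketch (this is not Amazon's actual system or data): when the historical "hired" examples come mostly from men, even a simple text classifier can learn a negative weight for a term like "women's".

```python
# A toy, hypothetical sketch of the mechanism described above: with past
# hires dominated by one group, a text model learns to penalise terms
# associated with the other.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess club, software engineer",            # hired
    "software engineer, led robotics team",                    # hired
    "captain of the women's chess club, software engineer",    # rejected
    "women's coding society member, software engineer",        # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative: the term lowers the predicted score
```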

These examples show the difficulty of making AI neutral, especially when it is modelled on historical data and our past behaviour. Some possible ways to reduce these biases are, firstly, to make the field more transparent and, secondly, to introduce ethics and accountability into its development.

A final question we should be asking ourselves is whether this technology is helping us achieve the goal of living in a better and more just world. Perhaps a more important question is how we address and change the existing prejudices in our societies and resolve the underlying social issues.
