The Fault In Our Algorithms

Guru Gyan · Published in moneyguru · 3 min read · Feb 22, 2021

How can something made of ones and zeros have biases?

Just In

An audit conducted by researchers at the University of Washington's Information School found that Amazon carries large amounts of health misinformation across product categories including books, e-books, apparel, and health and personal care. The researchers said, "It (Amazon) not only hosts problematic health-related content but its recommendation algorithms drive engagement by pushing potentially dubious health products to users of the system."

The researchers added that their investigation found Amazon's algorithm had learnt problematic patterns from consumers' past viewing and buying behaviour. This is not the first time someone has uncovered problematic content on Amazon. Earlier in May, CNN found listings and advertisements for books and movies that promoted vaccine misinformation. Following the report, Amazon removed the anti-vaccine documentaries from its Prime Video service.

This shows that algorithms are not perfect, and that imperfection can be a problem.

Algorithms & Biases

Human beings have biases; that is a known and well-documented fact. But how do algorithms develop biases? Firstly, what is algorithmic bias? It is the lack of fairness that emerges in the output of a computer system. This lack of fairness comes in various forms, but we can summarise it as discrimination against one group based on a specific categorical distinction.

But how do biases get into algorithms? In many ways. The people building a program can introduce biases, or the algorithm can "learn" bad behaviour from its training data before launch or from users afterward, causing its results to warp over time.
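To make the "learning bad behaviour from training data" route concrete, here is a minimal sketch using an entirely hypothetical dataset: a naive resume-scoring model that ends up favouring a word simply because that word is skewed toward one group in its historical training examples. The data, words, and scoring rule are all invented for illustration.

```python
from collections import Counter

# Hypothetical historical hiring data: 1 = hired, 0 = rejected.
# The word "captured" happens to appear mostly in hired resumes,
# so the model learns it as a proxy signal.
training = [
    ("led team captured market share", 1),
    ("captured requirements executed roadmap", 1),
    ("executed product launch", 1),
    ("managed community outreach", 0),
    ("organised volunteer programme", 0),
    ("coordinated support team", 0),
]

hired_words, rejected_words = Counter(), Counter()
for text, label in training:
    (hired_words if label else rejected_words).update(text.split())

def score(resume: str) -> int:
    # Each word votes: +1 if seen more often in hired resumes, -1 otherwise.
    return sum(
        1 if hired_words[w] > rejected_words[w] else -1
        for w in resume.split()
    )

# Two otherwise identical resumes diverge only on the proxy word.
print(score("captured new accounts"))   # scores higher
print(score("developed new accounts"))  # scores lower
```

Nothing in the code mentions gender, yet the model still discriminates, because the skew was baked into the examples it learnt from. That is how bias enters without anyone explicitly programming it.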

The Harm

You might be wondering: does bias in algorithms actually hurt anyone? When human beings act on gender or racial bias, people get hurt. You might be surprised to learn that algorithmic bias ends up harming us too. But how?

Amazon had to stop using a hiring algorithm after finding that it gave preference to applicants based on words such as "executed" or "captured". Why is that a problem? Because these words appear more commonly on men's resumes, which means the algorithm was biased in favour of male candidates.

Researchers have also found that online job-finding services are less likely to refer women and people of colour for high-paying positions, because those job seekers do not match the typical profile of people already in those jobs, who are mostly White men.

One study found that algorithmic lending decisions were 40% less discriminatory than face-to-face interactions. Even so, the algorithm charged Black and Latino borrowers higher interest rates than their risk of default justified.

In November 2019, Apple came under fire after it was alleged that the algorithm behind credit decisions for the Apple Card was giving lower credit limits to women than to equally qualified men.

Zooming Out

The more we rely on AI in the world around us, the more careful we have to be about how these algorithms are designed. To prevent bias, the behaviour of AI applications must be regularly monitored and analysed, whether the application was developed in-house or acquired from an outside provider.

At the end of the day, AI reflects the views and values of the people who developed it, and buying AI applications from outside providers carries its own risks that call for caution. Companies should take this seriously and constantly examine whether their algorithms are fair and free of bias.

Head to moneyguru's Insight section to stay updated on all the major financial news of the day.
