Dealing with Biases in Artificial Intelligence: How Can We Make Algorithms Fair and Just?

Vanshika Singla
Published in The Startup
7 min read · Jun 17, 2020

What Role Do Algorithms Play in AI?

As artificial intelligence becomes more pervasive and entrenched in our lives, we face challenging questions about how to ensure that the future of AI is fair and accountable. Algorithms are simply the mathematical instructions that guide the functioning of an AI system: “when it comes to artificial intelligence, consider the algorithm a recipe.” Algorithms shape our lives daily; from our Netflix recommendations to our Facebook and Instagram advertisements, everything depends on an algorithm. They can generate flawed outcomes when the input dataset reflects personal or societal biases, or when the dataset lacks relevant information, and consequently produce biased output. Whether biased data is fed into an AI system intentionally or not, the result is discrimination by race, gender, and age, which imposes a sense of urgency on organizational decision-makers to pursue solutions.

AI has spread rapidly and widely across disciplines, from criminal justice and healthcare to financial services and human resources. As AI continues to gain popularity, its ethics come under scrutiny. Major corporations such as Google, Facebook, Microsoft, Amazon, IBM, and Apple have all suffered tangible and intangible losses because of algorithmic bias in their AI systems. Google’s algorithm has been accused of underrepresenting women in job-related image searches. Amazon’s recruiting algorithm revealed a bias against women. In light of these controversies, companies have begun to incorporate ethics into their AI philosophy. Microsoft has introduced “Responsible AI,” governed by a set of principles intended to make AI fair, reliable, and inclusive. Although an ethical AI movement has begun, we still have a long way to go. To accelerate this transformation, every stakeholder has a role to play. But above all, the organizations that use these tools (employers, managers, and executives) need to seriously pursue solutions that reduce algorithmic bias and, eventually, eliminate it. By confronting the current reality of AI, employers will be compelled to think about the who, why, and how of changing AI for the better.

So, what can you do?

How Can Diversity & Inclusion Help?

Joy Buolamwini describes algorithmic bias as “the coded gaze,” and rightfully so, given the power vested in the programmers who build an algorithm’s decision model. To create more inclusive code, the people behind the code matter. Ensuring that a diverse pool of individuals is involved in designing and testing an algorithm can help detect unintentional biases or surface key information that was previously missing from the dataset. At a baseline, a “diverse pool” means selecting qualified people from different social and cultural backgrounds, which helps address the lack of gender and racial diversity in tech. Employers must therefore tackle this gap by hiring data scientists, IT specialists, and computer programmers from varied geographical and cultural backgrounds. Hiring a diverse group of AI designers and testers would not only foster inclusive coding practices in the workplace but also help ensure that the input data carries as little racial or gender bias as possible. Once hiring managers begin thinking about “who codes,” they will realize the value of diversity in dealing with biases in AI.

Why Invest in Educational Measures?

In addition to diversity, education can play a contributing role in dealing with the biases present in AI systems. Ethics education in organizations can create cultural awareness among employees and expose them to different lifestyles and value systems within society. A deeper understanding and acceptance of diverse perspectives, behaviors, and attitudes would help everyone directly involved in the algorithm design process to consider bias in data that might have been overlooked in the past. Though this does not guarantee that algorithmic bias will be eliminated at its roots, it is certainly a step toward reducing the occurrence of social biases in the dataset.

Salesforce’s online learning platform, Trailhead, helps inform millions of people about “the technology of tomorrow,” from blockchain to AI. Trailhead recently introduced a module named Responsible Creation of Artificial Intelligence, which aims to educate everyone directly involved in the AI development process on building and deploying AI responsibly and on understanding its implications for consumers, businesses, and society as a whole. The module explores topics pertinent to detecting and eliminating bias from data and algorithms, promoting the ethical and effective use of intelligent technologies.

As employers and managers, you should make ethics education like this part of mandatory employee training, so that employees working in the organization’s technology function can make culturally informed decisions about detecting and eliminating bias.

Is Transparent, Explainable AI the Answer?

To fully combat social identity bias and discrimination attributable to AI systems, transparency is vital. The need for transparency gained ample attention after a husband and wife received vastly different credit limits on their Apple Card, despite the wife having a higher credit score. Since the credit limit is determined by an algorithm, Apple faced significant backlash over the biased behavior of its AI system. Words like “transparent,” “accurate,” “observable,” “responsible,” and “fair” started trending, and researchers began demanding clarity on how algorithms reach their decisions. Transparency in the general sense would mean revealing an algorithm’s input, programming, and output, but because organizations usually regard algorithms as intellectual property, they are hesitant to disclose the code behind them. Consider, though, where transparency matters most: customers benefit more from a system that explains the reasoning behind a particular decision than from one that merely exposes its inner workings.

For instance, if a customer is declined a bank loan or rejected for a job, then the loan-approval or hiring algorithm should explain why. The algorithm could state that it reached its conclusion because the loan applicant had too little in savings or inadequate credit references, and then provide the minimum savings and number of references it considers for loan applications. Similarly, if a candidate is rejected for a position, the algorithm should state that the applicant was rejected because of limited experience in that industry or the lack of a particular skill, and then provide the minimum years of experience or the relevant skill required. In both cases, the algorithm is not only justifying its decision but also offering guidance to improve future applications. This implies that employers and managers must be aware of what type of data is fed to the algorithmic model and how the model uses that data to generate an outcome. It is also your responsibility to communicate this understanding effectively and explain the algorithm’s decision to customers, so that they don’t question the accuracy and reliability of the AI system. Making AI more explainable also gives data scientists and programmers the opportunity to delve deeper and identify whether an algorithm’s decision is a result of biased input data.
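To make this concrete, here is a minimal sketch of what such reason-giving could look like. It is a toy, rule-based screen, not any real lender’s model; the field names and thresholds are assumptions chosen purely for illustration.

```python
# Hypothetical rule-based loan screen that pairs every denial with the
# unmet requirement, so the applicant knows exactly what to improve.
# MIN_SAVINGS and MIN_REFERENCES are illustrative thresholds, not real ones.

MIN_SAVINGS = 5_000      # assumed minimum savings, in dollars
MIN_REFERENCES = 2       # assumed minimum number of credit references

def screen_loan_application(savings: float, credit_references: int) -> dict:
    """Return an approve/deny decision plus a human-readable reason list."""
    reasons = []
    if savings < MIN_SAVINGS:
        reasons.append(
            f"Savings of ${savings:,.0f} are below the ${MIN_SAVINGS:,} minimum."
        )
    if credit_references < MIN_REFERENCES:
        reasons.append(
            f"Only {credit_references} credit reference(s) provided; "
            f"at least {MIN_REFERENCES} are required."
        )
    return {"approved": not reasons, "reasons": reasons}

print(screen_loan_application(savings=3_200, credit_references=1))
# {'approved': False, 'reasons': ['Savings of $3,200 are below the $5,000
#  minimum.', 'Only 1 credit reference(s) provided; at least 2 are required.']}
```

Real credit models are learned rather than hand-written, but the contract is the same: every adverse decision arrives with the specific requirement the applicant did not meet.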

Google integrated a new feature, the What-If Tool, into TensorBoard, its web-based machine learning visualization tool. With it, anyone can probe a machine learning model and generate explanations for its outcomes without writing code. IBM has introduced cloud-based AI tools that show customers which factors led the algorithm to a conclusion. In addition, the tools can analyze algorithmic decisions in real time to identify implicit biases and provide recommendations for dealing with them. KPMG has also begun experimenting with explainability tools developed in-house to better understand an algorithm’s decision-making process and give customers satisfactory answers about the decisions concerning them. Lastly, Bank of America and Capital One are in the midst of developing AI algorithms that can explain the rationale behind a particular banking outcome or decision.
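One of the simplest checks tools like these can run over a decision log is the disparate impact ratio, which compares selection rates across groups; under the common “four-fifths rule,” a ratio below 0.8 is a red flag. The sketch below is a generic, from-scratch illustration on made-up data, not the API of any product named above.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """decisions: iterable of (group, approved) pairs from a model's log.

    Returns (min selection rate / max selection rate, per-group rates).
    A ratio below 0.8 fails the common "four-fifths" screening rule.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical decision log: (applicant group, was the application approved?)
log = ([("A", True)] * 60 + [("A", False)] * 40
       + [("B", True)] * 35 + [("B", False)] * 65)

ratio, rates = disparate_impact_ratio(log)
print(rates)           # {'A': 0.6, 'B': 0.35}
print(f"{ratio:.2f}")  # 0.58 -> below 0.8, so flag the model for review
```

A failing ratio does not by itself prove the model is unfair, but it tells data scientists exactly where to start digging into the input data.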

To further assign accountability to AI systems, an AI platform called Grace offers organizations the opportunity to remodel their AI into a transparent, explainable, and ethical form. It provides technology companies and larger organizations with services for data monitoring, algorithm traceability, and model training and development. With the help of Grace, employers can ensure that their AI decision models are meticulously studied and examined for flaws, such as personal or societal biases being reflected in the system.

Such efforts encourage organizational leaders and managers to take a more principled stance on their AI use and maintain compliance with ethical standards in the long term. By defining transparency in terms of explainable outcomes, employers directly address the customer’s stake in AI and ensure that customers feel well informed about decisions made by an algorithm. Ultimately, the combined power of diversity, education, and transparency will yield benefits for every stakeholder and pave the way toward bias-free AI.

Looking Forward

Such a complicated issue demands multiple solutions, and though implementing all of these changes will take time, it is essential to an ethical future for intelligent technologies. So build diverse engineering and programming teams, seek out ethics education, learn to detect and remove bias, and create transparent, explainable algorithms. AI is here to stay, so we have to ensure its benefits are not reaped at the expense of society’s sense of fairness, integrity, and equality. In the words of Osonde Osoba, “if you want to build a better, fairer society, we need AI systems that reflect and amplify the better parts of our nature.”
