Dark Side of AI — Lack of Transparency Widens Inequality and Erodes Trust in Systems

Bipin Kumar
Published in SciTech Forefront
Jul 21, 2022 · 5 min read

As more of our world becomes connected and automated, we are entering a new era in which access to most services and opportunities offered by governments or private entities is pre-determined by computer/AI algorithms.

The issue with such systems is that they are deployed without our consultation or knowledge, nor are they accountable to any standards or regulations. These algorithms are typically trained on biased data collected without explicit consent, and the resulting harms usually fall on already marginalized people, such as Indigenous people and people of colour.

These algorithms are typically deployed on the assumption that computers and AI will remove discriminatory practices, but in reality they often automate inequality and racism.

Current regulations

The only comprehensive effort to regulate AI and automated decision-making systems in Canada is the Government of Canada’s Directive on Automated Decision-Making (“the Canada ADM Directive”). Many other governments, including the Government of Ontario, have begun considering AI and ADM regulations but have not yet passed or implemented comprehensive or dedicated rules.

The Government of Canada has introduced Bill C-11, An Act to enact the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act, which could impact privacy protections and AI systems. Similarly, Québec’s Bill 64 (PL 64) on data protection includes new provisions for automated decision-making. Finally, the Government of Ontario has embarked on a major initiative to develop a “Trustworthy AI” framework for the province.

Currently, the Government of Canada’s directive is very limited in scope: it applies neither to private companies nor to many sectors of government, such as law enforcement.

Why is it important?

AI applications influence our lives in a variety of ways. They shape the information we see online by predicting which content will engage us, drawing on our web activity and other data such as location from our phone’s GPS and financial records from banks and credit cards. They capture and analyze facial data to enforce laws or personalize advertisements, and they are even used to diagnose and treat cancer. In other words, AI affects many parts of our lives.

[Image: how AI affects various aspects of our lives]

Lack of transparency in AI can cause various types of individual harm, such as financial loss, loss of employment opportunities or access to credit, and loss of freedom, as well as collective social harms.

AI used to determine hiring decisions has been shown to amplify existing discrimination along gender and racial lines. Law enforcement agencies are rapidly adopting predictive policing and risk assessment technologies that reinforce patterns of unjust racial discrimination in the criminal justice system.
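
One practical way to surface this kind of bias is to audit a model’s selection rates across groups. Below is a minimal sketch in Python, using entirely synthetic data and hypothetical group labels, of the “four-fifths rule” check commonly applied in employment-discrimination analysis:

```python
# Minimal sketch: auditing a hiring model's selection rates by group.
# All data and group labels are synthetic, for illustration only.

def adverse_impact_ratio(decisions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's rate; values below 0.8 fail the common 'four-fifths rule'."""
    def rate(group):
        picked = [d for d, g in zip(decisions, groups) if g == group]
        return sum(picked) / len(picked)
    return rate(protected) / rate(reference)

# Synthetic hiring decisions: 1 = hired, 0 = rejected.
decisions = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = adverse_impact_ratio(decisions, groups, protected="B", reference="A")
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.75 -> below the 0.8 threshold
```

A ratio below 0.8 is a conventional red flag that the protected group is being selected at a disproportionately low rate, prompting a closer look at the model and the data it was trained on.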

AI systems shape the information we see on social media feeds and can perpetuate disinformation when they are optimized to prioritize attention-grabbing content. This can adversely affect political discourse and significantly damage civil society and democracy, as it becomes increasingly difficult to agree on a basic set of facts. In the US, a widely used healthcare algorithm falsely concluded that Black patients were healthier than equally sick white patients.
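
In that healthcare case, as later analysis reported, the algorithm used past healthcare costs as a proxy for health needs; because Black patients historically had less access to care, they incurred lower costs while being equally sick. Here is a minimal sketch, with entirely synthetic numbers rather than the real model or data, of how such a proxy label understates true need:

```python
# Minimal sketch: how a proxy label (past cost) can understate true need.
# Entirely synthetic numbers; not the real model or dataset.

# Two equally sick synthetic patients (same chronic-condition count), but
# historical under-access to care means one incurred lower annual costs.
patients = [
    {"id": "patient_a", "chronic_conditions": 4, "annual_cost": 12_000},
    {"id": "patient_b", "chronic_conditions": 4, "annual_cost": 7_000},
]

# A model trained to predict cost (the proxy) scores patient_b lower, so a
# program that allocates extra care to the highest "risk" scores treats
# patient_b as healthier despite identical true need.
by_proxy_score = sorted(patients, key=lambda p: p["annual_cost"], reverse=True)
print("Prioritized by cost proxy:", [p["id"] for p in by_proxy_score])
print("True need is identical:",
      {p["id"]: p["chronic_conditions"] for p in patients})
```

The remedy reported in that case was equally simple in principle: predict health directly, for example via the number of active chronic conditions, rather than cost.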

As AI becomes omnipresent in our society, it is increasingly essential that AI deployed in public services be free of bias, both to increase trust in these systems and to decrease inequality.

What can be done to reduce bias and increase transparency in AI?
