Tech for Good?

Developing Society-Friendly AI


How many times have you opened a social media app this week? We live in a world where technology reigns supreme, and our use of it is only increasing. The Social Dilemma, a recent Netflix docudrama, highlighted many issues of the technology era, such as the power AI and machine learning have to influence our individual actions and collective politics. Beyond this, machine learning has additional ethical issues that can exacerbate the worst parts of our society.

What is Machine Learning?

Machine Learning (ML) is a subset of AI that allows computers to progressively learn from data without a predetermined model or process. For example, say you wanted to eat only ripe fruit but couldn’t see the fruit’s color. Using photos of fruits at varying stages of ripeness, you could train a machine learning algorithm to label a fruit’s ripeness based on its physical characteristics. A more familiar place you’ve probably encountered machine learning is Netflix’s recommendation algorithm. Netflix tags its content with identifiers beyond genre and uses your activity, along with these tags, to suggest other movies or TV shows you may like.
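
To make the fruit example concrete, here is a minimal sketch, assuming the photos have already been reduced to two numeric features. The features, values, and labels are all invented for illustration:

```python
# Toy ripeness classifier. In practice, the features below would be
# extracted from the photos; here they are hypothetical numbers.
from sklearn.ensemble import RandomForestClassifier

# Each row: [firmness (0-10), sugar content (0-10)]
X_train = [
    [8.5, 2.0],  # hard and not sweet -> unripe
    [7.9, 2.5],  # hard and not sweet -> unripe
    [3.1, 8.2],  # soft and sweet -> ripe
    [2.4, 9.0],  # soft and sweet -> ripe
]
y_train = [0, 0, 1, 1]  # 0 = unripe, 1 = ripe

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Label a new fruit from its measured characteristics.
print(model.predict([[3.0, 8.5]]))  # -> [1], i.e. "ripe"
```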

At this point you might be thinking, “Nimi, machine learning seems pretty innocuous. What’s the big deal?” Beyond the fact that ML is sometimes used in ethical gray areas, it also carries a large risk of algorithmic bias.

What is the effect of algorithmic bias?

The outcome of an ML algorithm depends on its inputs. In the fruit example, your objective was to eat ripe fruit, but imagine the algorithm was given only pictures of bananas. Your model may reject a picture of a ripe orange because it was trained on a data set of bananas alone; it doesn’t know to classify a ripe orange as ripe fruit. When these biases take effect in real scenarios, they worsen the societal issues already reflected in the data.
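
As a toy illustration of that failure mode, here is a sketch (all values invented) where ripeness is learned from firmness alone. The rule works for bananas, which soften as they ripen, but misfires on a ripe orange, which stays firm:

```python
# Train only on bananas: soft bananas are ripe, firm ones are not.
from sklearn.tree import DecisionTreeClassifier

X_bananas = [[8.0], [7.5], [7.0], [3.0], [2.5], [2.0]]  # firmness (0-10)
y_bananas = [0, 0, 0, 1, 1, 1]                          # 0 = unripe, 1 = ripe

model = DecisionTreeClassifier(random_state=0).fit(X_bananas, y_bananas)

# A ripe orange is still firm, so the banana-trained rule rejects it.
ripe_orange = [[7.0]]
print(model.predict(ripe_orange))  # -> [0], "unripe": a confident wrong answer
```

The model isn’t malicious; it simply never saw anything outside its narrow training set.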

For example, Amazon created an ML tool to sort through resumes more quickly. The tool was trained on resumes submitted to Amazon over the previous 10 years, and because of the gender gap in the tech industry, most of those resumes came from men. As a result, the algorithm learned to prefer male applicants and discriminated against female candidates. The project was ultimately scrapped, but it brings to light the limitations of ML.

Biased algorithms are influencing decisions today within the criminal justice system. AI-based criminal justice tools are used across the United States to predict the risk of recidivism. These risk scores then influence the parameters by which justice is served: jail sentences, parole approvals, bond amounts, and more. The tools draw on a variety of factors, including past arrests, parents’ criminal history, and income. Race isn’t an explicit variable within the algorithm, yet a study by ProPublica found that COMPAS, one such tool, was nearly twice as likely to falsely label a Black defendant high-risk as a White defendant. This is due to biases in the data that reflect systemic issues. For example, because policing isn’t evenly distributed across communities, using past arrests as a variable over-represents Black individuals in the data sets. In this context, a machine learning-enabled tool doesn’t make decisions more accurate; it automates the biases our society already has.
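
A synthetic sketch can show how this proxy-variable effect works. Everything below is invented: two groups with identical underlying behavior, but one group’s behavior is recorded as an arrest more often. The model never sees group membership, yet its risk scores still differ by group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)       # demographic group; never given to the model
behavior = rng.binomial(1, 0.3, n)  # true behavior, identical across groups

# Uneven policing: the same behavior produces an arrest record
# more often in group 1 than in group 0.
p_arrest = np.where(group == 1, 0.9, 0.4)
past_arrests = rng.binomial(3, behavior * p_arrest)  # recorded history (the feature)
rearrested = rng.binomial(1, behavior * p_arrest)    # recorded outcome (the label)

model = LogisticRegression().fit(past_arrests.reshape(-1, 1), rearrested)
risk = model.predict_proba(past_arrests.reshape(-1, 1))[:, 1]

print(f"mean risk score, group 0: {risk[group == 0].mean():.2f}")
print(f"mean risk score, group 1: {risk[group == 1].mean():.2f}")  # noticeably higher
```

Dropping race as a feature doesn’t help, because the arrest record itself carries the bias.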

History is written by the victors, and judging by the biased outcomes we see, so are machine learning models.

We have the power to change this by building algorithms that promote fair and responsible applications of AI. Here are three ways we can minimize the negative effects of AI.

1. Create Diverse Teams

The lack of diversity in the tech industry plays a large role in how technology solutions affect society, and building diverse teams is one way to minimize these biases. Bring together technologists with varied life experiences so they can share their perspectives on an algorithm’s effects or point out limited data sets. Create roles for social scientists who can analyze how technology solutions may interact with society. Educate business stakeholders on how ethics must be applied to AI. If our teams can’t spot the biases within their own work, a machine learning algorithm won’t be able to combat bias either.

2. Spur Government Action

The tech industry is notoriously unregulated in the US, and the governance of technology is largely left to sector-specific laws. To drive a standardized approach, we should advocate for regulation. A standardized framework set by a governing body would give corporations a clear baseline for building responsible AI. An example of technology regulation in action is GDPR, the EU’s data protection regulation. It is a step toward ensuring consumer rights in technology, but a robust ethical framework is needed to regulate AI more specifically. Regulation would also increase organizations’ accountability for holding AI algorithms to a high standard of consumer protection.

3. Consider Not Using AI

There are some cases where we should consider not using AI at all. We’ve seen the extreme end of the spectrum in China’s use of AI to identify Uyghurs to be sent to concentration camps and to quell protests in Hong Kong, but those aren’t the only cases in which we should deem a use of AI immoral. We should also consider limiting AI within the criminal justice system; San Francisco, Oakland, and various other cities have already banned police use of facial recognition. The criminal justice system is a nuanced environment with serious consequences and many injustices that AI would automate rather than eliminate. Communities need to come together and decide what ethical AI means and how AI should be used.

It’s not enough to analyze the technology we build and try to use it fairly. It’s up to us to address the biases in our society as well. If technology accelerates the worst aspects of our society, let’s try to make the worst of us a bit better. In a just world, technology accelerates justice. In an inclusive world, technology accelerates inclusivity. Let’s drive progress at both ends to ensure technology solutions work for all of us.

My postings reflect my own views and do not represent the views of my employer.
