Simulating Inequality: An Assessment of the Implementation of Artificial Intelligence into Modern Society

A. Lamar Johnson
Published in Deconstruct Media
Jan 2, 2020

Introduction

The world has proven itself a frightening and unforgiving place. There have been famines, plagues, countless instances of subjugation, climate disasters, and so much more. Yet humanity has also made real progress, in leaps and bounds, especially in the articulation of basic human rights. There have been countries that have turned away from their stated ideals, and there have been nations that have come back from the brink of a human rights disaster. Today, another threat to the understanding of human rights is at the forefront: a growing concern about the ways in which Artificial Intelligence is developed. To many people this concern seems unfounded, but if history has anything to tell us, it is that every technological advance carries a multitude of consequences. Artificial Intelligence has enormous potential, but the technology now has the capacity to grow and learn faster than humans can, so it is imperative that humanity learn from its ancestral woes. The growing concern is not a sci-fi dystopian future in which robots take over the world and enslave humans; it is that the inequalities that already exist in the world will be ingrained into the framework of this nascent technology, threatening the very foundation upon which the International Bill of Rights stands.

I. Unintended Consequences

There is an old fable which states that at one point during British colonial rule over India, an issue began to circulate throughout the government: officials became gravely concerned about the number of venomous snakes in Delhi. In order to curb the problem, the government promised its citizens a bounty for every cobra killed. In the beginning it worked; there were fewer cobras in the area, and the people felt safer than before.

Per human nature, there were people who wanted to exploit a system set up with the sole purpose of helping the populace feel more secure. People realized that they could breed cobras and make a living off of the government's payouts to hunters. It was a scheme that, to some, proved itself the perfect form of income.

Of course, the government was eventually informed that people were exploiting the system through this illicit activity. In response, the government shut down the program, and the snake breeders released the cobras they had been breeding into the wild. In the end, the area the government wanted so badly to protect from these venomous creatures was infested with even more of them than before. The solution to their problem had transformed into an even bigger problem that plagued their citizens.

For this governing body, the road to poor outcomes was paved with good intentions. A simple government measure intended to solve a problem that was deeply aggravating and frightening for its citizens became an even bigger problem. This situation was coined the "Cobra Effect," and its theme of unintended consequences is intrinsically linked to the story and to this essay. Just as the path for Delhi ended in misfortune, humans today are posed with the same problem, only this time it is rooted in technology. Rather than reacting to a situation with the first idea that comes up, it is necessary to examine its implications thoroughly in the first place. Had the government discussed further what the future of that solution could be, both negative and positive, it could have avoided that outcome.

The main reason that this story is so closely linked with how Artificial Intelligence will shape Human Rights in the future is that there is a wide belief that this technology can solve all of the problems humans face, that it can save us from all of those unforgiving things in the world like climate change, famine, and plague. However, in keeping with the theme of this thesis, one idea still in its nascent years of research is bias. This concept will become clearer as these ideas expand throughout the thesis; a good introduction to the topic is what Olly Burton, the CEO of Future Advocacy, said about this growing intersectionality:

There is a possible future in which artificial intelligence drives inequality, inadvertently divides communities, and is even actively used to deny human rights. But there is an alternative future in which the ability of AI to propose solutions to increasingly complex problems is the source of great economic growth, shared prosperity, and the fulfillment of all human rights. This is not a spectator sport. Ultimately it will be the choices of businesses, governments, and individuals which determine which path humanity takes.

The idea to be conveyed is that policies and, more specifically, technologies that seem completely logical in every sense of the word have the ability and unseen potential to wreak havoc in the realm of Human Rights. In this specific case, the topic at hand is that the unintended consequences to be addressed herein may run against the very foundations of Human Rights. The story behind the Cobra Effect comes off as harmless in the grand scheme of things, but it is a small example of a motif of humanity that can prove itself quite disastrous. The question is: how can one mitigate unintended consequences in the name of Human Rights in a world in which this technology is ever-present and ever-changing?

II. History

In many countries, much like the United States, there is a dark history of inequality that continues to influence society today in both negative and positive ways. For AI to have that super-intelligence, it first requires the knowledge and expertise of humans. However, providing this information to an artificial intelligence can create unintended consequences strictly because of a nation's history, along with its failure to reconcile with that history. There is potential for the foundation of Artificial Intelligence to be intrinsically based upon the biases and injustices of past society in a way that is violative of universally accepted human rights.

In order to understand how a technology that humans create has the ability to impact society on such a large scale, it is important to understand the history of inequality in the world. For brevity, this thesis will mainly address the history of inequality in the United States. America provides a very interesting data set upon which to base the idea of this thesis. As the overarching example, racial inequality provides excellent insight into how the consequences of the past can help predict the consequences of the future intersectionality of Artificial Intelligence and Human Rights.

Within the wide subject of racial inequality in the United States, mass incarceration is a viable foundation among the important checkpoints of history to support this claim, as it is quite complicated to discuss racial inequality in the United States without addressing it. In 1994, President Bill Clinton signed into law a bill that intensified earlier War on Drugs policies. Under this policy regime, possession of powdered cocaine carried a lower charge than possession of crack cocaine. It was widely known that the users of crack cocaine were primarily African-Americans in low-income communities, while those who used powdered cocaine were wealthier and more influential people, despite the fact that the two are the exact same substance at their core.

Due to these draconian sentencing laws on crack cocaine specifically, which the government touted as the best way to win the war on drugs, there were incredibly disparate outcomes in communities of low-income white Americans as well as communities of color. Research explained that incarceration for drug offenses has very little impact on the offender, yet the law remained the same and, given the rate of incarceration, was specifically harmful to communities of color. The consequence was the decimation of these families; fifty-seven percent of those convicted of a drug offense were Black or Latino.

Moving forward, these communities still face many problems that directly stem from that era of policing, problems that have made it much harder for people within these communities of color to have the same chances to succeed as those outside of them. It has led to increases in violence and the stigmatization of people who live in those areas.

All of these problems were created by a government that either was unaware of the unfair stipulations and the effects the policies had on these communities or did not care; in either case, it happened. The stigma is still apparent and is in constant evolution.

III. The Connection

It is important for this concept to be impressed upon the reader. It allows the reader to understand the history in a way that sheds light on how prejudice and discrimination work. The above-mentioned history acts as a precursor to the in-depth topic of bias, eventually leading to an understanding of how biases and injustice from the past can influence the technology of the future.

This is what injects fear into many people. At a time in which humans already depend on artificial intelligence without even knowing it is in their phones, applications, and whole structural systems, what happens when humans begin developing more artificial intelligence to solve whatever problem they run into next? Is there someone who is bringing up those provocative questions? How can humans prevent the "Cobra Effect"? Is there a committee whose sole purpose is figuring out the impact a technology could have? Why should there be?

Further in this essay, the audience will find that there is a distinct connection between the concepts of discrimination and prejudice, how those behaviors feed into bias, and then how bias can negatively impact the nascent technology, Artificial Intelligence, and its effect on human rights globally.

IV. Project Construction

Throughout this thesis, the growing intersectionality of Artificial Intelligence and Human Rights will become an ever-present thought in the mind of the reader. The connection between the history of inequality in the world and how that history can affect the implementation of artificial intelligence in society will become clear.

IV.I Thesis

As the future of Artificial Intelligence technologies and the implications they pose to Human Rights come into focus, it will become clear that these threats have the capacity to cause a multitude of violations at the core of the International Bill of Rights, primarily due to the bias that gets built into AI. However, with the proper safeguards that many experts and philosophers in the fields of AI and Human Rights have suggested, this technology will prove itself fundamental in unearthing the very biases that hold society back.

What can we do to thwart the unintended consequences in the realm of human rights that come along with the development of artificial intelligence?

IV.II Methodology

The information herein will be addressed through a mixture of quantitative and qualitative approaches. While there will be statistical information addressing the biases that artificial intelligence has imposed, there will also be a much more in-depth look at the concept at hand and how it functions socially.

The aim in addressing this topic is a more theoretical one: to serve as a tool for understanding the best ways to approach new technologies in general, and AI specifically, especially as they relate to human rights. This methodology serves the function of the thesis because it provides a history that leads the reader to the future.

Conveying this essay completely requires the existing data on Artificial Intelligence, which led to an analysis of the report by Access Now, an organization that "defends and extends the digital rights of users at risk around the world". Access Now completed a study on Artificial Intelligence, so the report it created will prove invaluable to the whole. The policies and regulations set out in the International Bill of Rights are also addressed in this research. The subsequent information in this section details the exact manner in which this information will be presented and conveyed.

The following material is subdivided into chapters, sections, and subsections (see the index for an outline).

IV.II.I First Chapter

The first item to be addressed is general information about Artificial Intelligence: what it is and what it precisely means in this context. The second item is the history of inequality in the world, specifically, in this context, in America. The focus then turns to how these current problems relate to human rights.

As a cautionary tale, there will be a deep dive into Facebook and the happenings in Myanmar in order to explain the overarching concept of how poorly built AI can have a plethora of unintended consequences in the realm of curbing inequality and bias without the AI doing much in the first place, all due to irresponsible parties.

IV.II.II Second Chapter

Herein will be a discussion of why there needs to be attention on this topic: what about AI is so different from other technologies that have come to fruition in the past that makes it such a contentious subject. Some expert opinions appear at the beginning, mainly to underscore the importance placed on developing AI the correct way.

Bias and implicit bias will also be a large subject throughout this portion and the rest of the thesis, and will begin to explain the imminent threats they pose to this technology. Among the other topics, there is more discussion of data, including the concept of "Dirty Data," which perpetuates the problems to be addressed.

IV.II.III Third Chapter

This will be where the two different phenomena converge into one topic: the point at which it makes sense to look at the problem of AI perpetuating inequality through the lens of historical inequality and implicit bias. Additionally, there will be more information on how this has actually happened in the past, highlighting a few different applications of Artificial Intelligence. The last thing to be addressed in this section is the Chinese Social Credit System.

IV.II.IV Fourth Chapter

The fourth chapter allows all of what has been discussed in the first three chapters to really meld into one cohesive idea which will bring the reader to the same or a similar conclusion that has been outlined.

V. Glossary

Algorithm

“An algorithm is a step by step method of solving a problem. It is commonly used for data processing, calculation and other related computer and mathematical operations”.

Artificial Intelligence

In a very succinct way, this phrase is normally used as a catch-all term for the science of making machines smart. There are further breakdowns of the term; however, for the bounds of the research herein, it is only important to refer to the idea of Narrow AI, as the thesis is mainly discussing the intersection of this form of AI with the Human Rights field.

Weak or Narrow AI

Narrow AI is focused on one narrow task: the phenomenon that machines which are not intelligent enough to do their own work can be built in such a way that they seem smart. An example would be a poker game in which a machine beats a human because all rules and moves have been fed into the machine; each and every possible scenario needs to be entered beforehand manually. Each and every weak AI contributes to the building of strong AI.

Strong AI

It is important to take into account that Strong AI refers to "machines that can actually think and perform tasks on their own just like a human being. There are no proper existing examples for this, but some industry leaders are very keen on getting close to building a strong AI, which has resulted in rapid progress".

Big Data

“refers to the large, diverse sets of information that grow at ever-increasing rates. It encompasses the volume of information, the velocity or speed at which it is created and collected, and the variety or scope of the data points being covered. Big data often comes from multiple sources and arrives in multiple formats”.

Discrimination

"Treating a person or particular group of people differently, especially in a worse way from the way in which you treat other people, because of their skin colour, sex, sexuality, etc."

Feedback loop

"[…] is a system structure that causes output from one node to eventually influence input to that same node".

Human Rights

“Human rights are rights inherent to all human beings, whatever our nationality, place of residence, sex, national or ethnic origin, colour, religion, language, or any other status. We are all equally entitled to our human rights without discrimination. These rights are all interrelated, interdependent and indivisible.

Universal human rights are often expressed and guaranteed by law, in the forms of treaties, customary international law, general principles and other sources of international law. International human rights law lays down obligations of Governments to act in certain ways or to refrain from certain acts, in order to promote and protect human rights and fundamental freedoms of individuals or groups."

Machine Learning

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

Training Data

The material through which the computer learns how to process information. Machine learning uses algorithms — it mimics the abilities of the human brain to take in diverse inputs and weigh them, in order to produce activations in the brain, in the individual neurons. Artificial neurons replicate a lot of this process with software — machine learning and neural network programs that provide highly detailed models of how our human thought processes work.
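
To make the last few entries concrete, below is a minimal sketch of the "learning from training data" idea using scikit-learn. The tiny pass/fail dataset and the feature names are invented purely for illustration; the point is only that the rule separating the two classes is induced from examples rather than hand-coded, which is the distinction the definitions above describe.

```python
# A minimal, illustrative sketch (invented data): an algorithm learns a pattern
# from training data instead of being explicitly programmed with rules.
from sklearn.tree import DecisionTreeClassifier

# Training data: (hours_studied, hours_slept) -> passed the exam (1) or not (0)
X_train = [[1, 4], [2, 8], [6, 5], [8, 7], [3, 6], [9, 8]]
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X_train, y_train)  # the "learning" step
print(model.predict([[7, 6], [1, 7]]))                  # expected: [1 0]
```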

Chapter One: The Current State

Inside the Algorithm, AI, and Big Data

I. The Content

It is imperative that this section be introduced at the beginning in order to create a better understanding of the relation these subsequent topics have to the Human Rights field. Though the intersectional relationship that Artificial Intelligence, Big Data, and Algorithms have with Human Rights is quite nascent, with the rapid growth and development of technology in general, and of the aforementioned technologies specifically, this conversation is long overdue.

Law generally has to play "catch-up" with current technologies, as most developers are not working hand-in-hand with lawmakers unless there is some contractual agreement between the two parties. What the reader will come to find is that there is a growing concern about the possible consequences of such a trend with artificial intelligence; they will come to understand that the way law trails technology can pose some of the most devastating consequences, especially for the principles within the International Bill of Rights such as equality, nondiscrimination, political participation, privacy, and freedom of expression.

I.II Relation to Human Rights

The way in which these seemingly unconnected distinctions relate, especially in regard to Human Rights, has much to do with the existing prejudices and biases in society today as they relate to class, race, gender, sexuality, and more. Algorithms, Artificial Intelligence, and Big Data relate in such a way that failing to address all of them in the framework of this thesis would render it incomplete in conveying their implications as a collective in the realm of Human Rights.

While reading this work, the reader will come to see how interrelated these topics are by noticing the ways in which Artificial Intelligence can drive inequality. In order to make a form of artificial intelligence truly intelligent beyond the capacity of normal human critical thinking and analysis, the framework of such a project requires algorithms that can be fed information, which the algorithm then analyzes and from which it draws conclusions for whichever area it is placed in. The algorithms that companies build must utilize data in order to perform these analyses; without data, the algorithm is in a sense useless. That is the only way these algorithms can work efficiently and with few errors, which is where the concept of Big Data comes in.

As aforementioned, Artificial Intelligence requires data in order to come to the conclusions its creators intend. Big Data helps get to that final step in creating AI. Big data is aggregated by thousands of companies and institutions in the world, and that is only going to grow in the coming years. Technology companies, social networks, and the like have been storing user data for years, and until very recently there has not been significant legislation with regard to user data and the rights associated with it.

This conversation is in its nascent stages just as AI is; all the while, tech companies across the globe could have been, and most certainly have been in many cases, utilizing user data in a malicious way that has led to unintended consequences. Again, law has historically been reactive to whatever is happening in society, and in dealing with technology, that situation is no different.

Most developed countries are at a point at which technology is in constant flux. Legislators are not attuned to that flux due to the nature of their work, and the entrepreneurs may not have a clue as to what the implications of a certain technology are and what they mean for society, due to a lack of well-rounded education. This lack of understanding on both sides of the issue will create, and has already created, a plethora of unintended consequences seen on the world stage. Quite literally, all of these circumstances act as a slippery slope toward actions that are violative of Human Rights on an international platform, threatening one's ability to be seen as equal, to not be the victim of discriminatory practices, to have a right to privacy, to participate politically, and to express oneself in whichever way one desires.

From bias and inequality to unintended consequences, there is a base to this argument that must be addressed. It provides a broader understanding of how poorly built and managed AI and user data can lead to unintended consequences. Bias and prejudice play right into the same idea.

II. Facebook and Genocide

Many people are familiar with the social media platform known as Facebook; however, few may be aware that the tech giant has a developing history of being a foundation for crime, much less the violations of Human Rights that have come with it. Although Facebook is a platform, in other countries it is more than just a platform. What the reader will come to understand in the subsequent section is the origins of the tech giant and how it became involved in human rights abuses elsewhere in the world. The issues revolving around Facebook stem from much of what is to be discussed in this thesis as it relates to artificial intelligence, big data, and the misuse and poor development that lead to human rights abuses. In gaining an understanding of this concept, the connection to bias will become clearer.

II.I What Exactly is Facebook?

Facebook was created by a group of Harvard college students, most notably Mark Zuckerberg, at the time a sophomore from New York. He had become a campus celebrity by creating an online program called "Facemash," which allowed "users to objectify fellow students by comparing photos of their faces and selecting who they deemed as hotter". Soon after, Zuckerberg was reprimanded by the school for flagrant disregard of its privacy policies. The framework for this site developed into what people know today as Facebook in 2004.

Since opening to the public in 2006, the company's influence in the world and its profits have soared, making it a huge force to be reckoned with in the social media landscape. The company now has over 2.7 billion people on its platforms (including its subsidiaries, WhatsApp and Instagram). To put that number in perspective, Christianity accounts for 2.4 billion people, or thirty-three percent of the world's population, and a country as populous as China has only 1.4 billion people. Put simply, Facebook is quite literally the size of a nation-state or a religion; it is a virtual world.

Recently, Facebook got itself into trouble with the government in the United States over its dealings with a political consulting firm called Cambridge Analytica. The firm acted as a tool for figuring out the best ways to target a politician's audience so that the politician would in turn gain traction on the political front. The firm gathered enormous amounts of user data from Facebook: medication data, occupational information, sexual orientation, religious views, and so much more.

It essentially built user profiles describing the best ways to communicate with a certain audience based on aggregate data signifying whether someone was confrontational, tolerant, easy-going, and so on. One expert explained more succinctly that it was a company that would scrape data about people and form messaging strategies around those details, which it still does now. This already seemed incredibly unethical and arguably outright illegal. To clarify, this was data obtained from Facebook's platform, covering over 87 million users, including people outside the scope of a U.S. election.

This is vital information because it helps the reader understand the motives of tech companies and why these human rights concerns are coming into play. One big issue in the Cambridge Analytica situation is that data about people who completed an online quiz was used without their explicit consent. There was not proper oversight of this situation, in which Data Rights and Consumer Rights are of concern, so what happens when that same lack of oversight leads to murder, or, more specifically to human rights and using Facebook as the example, to genocide?

Many countries around the world use Facebook as their internet in every sense of the phrase; they communicate with their loved ones and friends, they buy and sell goods, play games, send information, etc — it is their connection with the world outside of their own. This creates an issue in the larger context of Data Rights and Consumer Rights, namely, Human Rights. Myanmar is one of those many countries.

II.II The Algorithm in General

The most important thing to know about Facebook and other social networking platforms, or content streaming platforms like YouTube or Netflix, and how they can contribute to human rights violations in the world, is the algorithm. It is appropriate to mention other social media and streaming platforms here because, although their algorithms are by no means the same, each company has the exact same core goal of keeping the user there for as much time as possible; that motivation in and of itself creates the parallel between the companies in those industries.

The algorithm in question is a tool that curates the content on one's News Feed, a user's main page showing updates from friends and followed pages, and it shows the user the content the algorithm has deduced the user would most likely enjoy viewing. Facebook used to sort all posts in chronological order; the algorithm instead allows the user to see what is most relevant to them as dictated by the AI. The way it decides what to present is by analyzing all of the information, or data, that the user provides to it.

These data are the user's likes, posts, quizzes, friend lists, and the time spent on the app, all the way down to how much time the user spent watching a video and when they paused it. This information is granular and may come off as insignificant; however, as briefly described in discussing Cambridge Analytica, it makes it possible to draw observations about one's character and personality with great accuracy. Every single like is not just telling the person on the other side that the user enjoyed their post; it is telling the algorithm about every detail of the photo and the time of day the user liked it, and the algorithm makes generalizations about that user in the process.

Further, the algorithm gets better as the user continues to provide more data. The algorithm keeps feeding the user more and more information, and what researchers found is that the algorithm can begin understanding what made a user upset or triggered them in some way; if that led them to look at information on Facebook longer or write more comments, it continued to inundate their news feed with that content. This is why YouTube, which, as previously mentioned, uses a similar method for its algorithm, has been under siege in recent months over the idea that it had a hand in radicalizing young adults and kids.
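
As a rough illustration of the feedback loop just described, here is a deliberately simplified sketch. The topics, watch times, and scoring rule are invented and bear no relation to Facebook's or YouTube's actual ranking systems; it only shows how ranking by past engagement, and then recording the engagement that ranking produces, narrows a feed toward whatever the user lingers on.

```python
# A highly simplified sketch (invented data and scoring) of an engagement-driven
# feed: the ranker promotes whatever topics the user lingered on before, and the
# resulting watch time feeds straight back into the ranking signal.
from collections import defaultdict

posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "outrage"},
    {"id": 3, "topic": "cooking"},
    {"id": 4, "topic": "outrage"},
]

engagement_by_topic = defaultdict(lambda: 1.0)  # equal prior interest in every topic

def rank_feed(posts):
    # Score each post by how much this user has engaged with its topic so far.
    return sorted(posts, key=lambda p: engagement_by_topic[p["topic"]], reverse=True)

def record_session(feed, seconds_watched):
    # The only signal recorded is watch time; it becomes tomorrow's ranking input.
    for post, seconds in zip(feed, seconds_watched):
        engagement_by_topic[post["topic"]] += seconds

for session in range(3):
    feed = rank_feed(posts)
    # Suppose the user lingers longest on upsetting content, as the surrounding
    # text describes; that behavior is all the ranker ever sees.
    watched = [30 if p["topic"] == "outrage" else 2 for p in feed]
    record_session(feed, watched)
    print([p["topic"] for p in feed])
# After a few sessions, the "outrage" posts sit at the top of every feed.
```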

The algorithm does what it is supposed to do, but it does not end there; much worse has come from it. The issues that follow are very much in the realm of algorithms; however, the intention is not to prove that algorithms are innately evil by any means. This technology is what made it much easier to find quick answers to questions on Google, and everyone loves that. The concept of human rights is not static; it always has shifted and always will, and today is a new era in which those fundamental ideas are under threat by something whose power is not fully understood by a large number of people. The situation that has been ongoing in Myanmar is a direct reflection of that very idea. The lack of oversight created an opening for pre-existing bias and inequality to flourish in many parts of the world, as it did in Myanmar.

II.III Myanmar

As a precursor to explaining what the implications of this technology could mean for society, it is important to introduce the idea by looking at how Facebook changed everything for the Rohingya population in Myanmar.

The Rohingya are an ethnic minority in the country of Myanmar, totaling about one million in early 2017. They are one of many minorities in the country, and they represent the largest share of Muslims there as well. This group of people has its own language and culture, and they claim to be descendants of Arab traders and other groups who have been in the region, Rakhine, for generations. The majority of the population of Myanmar are Buddhists; the state has systematically denied the Rohingya citizenship and even went so far as to exclude them from the 2014 census. Essentially, the majority of the citizens of Myanmar see the Rohingya as illegal immigrants, and they have for decades.

Due to the state’s reluctance to see them as natural citizens, they suffered from violence and discrimination which led many to flee from their oppression. Thousands escaped from “communal violence or alleged abuses by the security forces”.

As mentioned previously, there are many countries whose citizens depend on Facebook for much of everything in their lives, and that extends to applications that Facebook owns such as Instagram and WhatsApp. Myanmar is one of these countries. When posts began surfacing from fan accounts of famous pop stars and national heroes, Myanmar Facebook users' news feeds became inundated with them. These posts were rife with hateful words and attitudes toward the Rohingya. One was reported to have stated, "Islam is a global threat to Buddhism," while another "shared a false story about the rape of a Buddhist woman by a Muslim man". It was reported that there were many other posts similar to these, purporting falsehoods about the Rohingya, or, as it is more commonly called, Fake News.

What was found was that these posts were not being published by ordinary users but by Myanmar military personnel. This was a "systematic campaign" designed to target the Rohingya. It incited murder, rape, and what became known as the "largest forced migration in recent history". This information came as a surprise to Facebook's administration. Facebook took steps to remove the posts that leveled these false accusations against the Rohingya; however, the damage had been done.

By this time, anti-Rohingya sentiment had gone viral throughout Myanmar; there were mobs of Buddhists committing these atrocities against the Rohingya.

The atrocities were detailed to include

allegations of extrajudicial killings; enforced disappearances; torture and inhuman treatment; rape and other forms of sexual violence; forced labour; recruitment of children into armed forces; and indiscriminate or disproportionate attacks arising from conflicts between security forces and armed groups (OHCHR, 2018).

The military had created even more fake accounts under troll and high-profile names and continued to spread misinformation.

Facebook found out that it went even deeper than that: there were fan accounts, celebrity accounts, and beauty guru accounts that had accumulated hundreds of thousands of followers, all linked back to the Myanmar military. It was clear that this became one of the first examples "of an authoritarian government using the social network against its own people", and one of the first accounts of how the foundation of human rights was threatened due to the lack of oversight of how the Facebook algorithm works. It was later reported that over 700,000 Rohingya fled Myanmar that year. Facebook expedited the ethnic cleansing of Myanmar. The UN High Commissioner for Human Rights, Zeid Ra'ad Al Hussein, stated in an opening statement for the 38th session of the Human Rights Council that,

In Myanmar, as the Council is aware, there are clear indications of well-organised, widespread and systematic attacks continuing to target the Rohingyas in Rakhine State as an ethnic group, amounting possibly to acts of genocide if so established by a court of law (OHCHR, 2018)

This is an example of the malicious use of AI through entrenched societal discrimination and its effects on a given population. This was intentional algorithmic weaponization that had the ability to exacerbate tensions and create one of the greatest tragedies in recent history, built on the past prejudices and biases the majority of the nation held toward the Rohingya. The military knew exactly what it was doing, and this is more evidence that inequality can be exacerbated by social media platforms like Facebook; it happened in Myanmar, and the Rohingya are still suffering the consequences.

To add insult to injury, Facebook did not know about the posts, the active societal discrimination, or the ongoing ethnic cleansing of Myanmar until much later. It took a long time to figure out what was going on, a long time to take the posts down, and even longer to take the accounts down. Facebook even admitted in a statement that it "failed to do enough to prevent its platform being used to fuel political division and bloodshed in Myanmar".

The reason this information is vital to understanding how Artificial Intelligence threatens human rights is that the military utilized the Facebook algorithm to its own advantage. Dislike of the Rohingya was no secret at the time; their exclusion from the 2014 census is sufficient evidence of that. Understanding the way Facebook and its algorithm work allows the reader to draw that parallel.

What must be addressed is that this genocide, or ethnic cleansing, was not solely fueled by an algorithm. Given that this hatred or dislike was and is very much alive in Myanmar, people who saw these posts became emboldened by the sheer number of accounts spreading the misinformation, by reposts from peers, and by the more popular accounts spreading the same message. A community grew from those who disliked the Rohingya, and from this community, divisions between the Buddhists and the Muslims grew and violence ensued. All the algorithm did in that situation was spread the information further to those who would most likely want to see it. If someone liked the message and the data indicated they were Buddhist or associated with Buddhist culture, the algorithm fed that same information to them.

This is a pure example of what happens when a person, or a group of people, already holds prejudiced views: the algorithm siphons that information to them because it correlates with the data they have provided to the company in the past, in cases where they have demonstrated those sentiments before. This sparks more hatred as people share these posts and, in the process, become emboldened in believing that there are other people who think like them. Even worse, those who were looking for someone to blame for their own misfortunes began leaning toward extremism more and more, perpetuating hate.

Chapter Two: Sufficient Criteria

Existential Threat, Bias, and “Dirty Data”

I. What makes the risk of AI use different?

The technologies in use based on the power of AI have been clearly stated. The catastrophic impact of Facebook's misuse of users' Big Data has been acknowledged, and the ongoing ethnic cleansing of Myanmar and the implications of such a disaster have been accounted for. The consequences are extremely evident. Now there are more technologies that are currently evolving. New technologies are in the process of being created and coming to fruition, and they spell another plethora of potential consequences. The reason this thesis exists is that the mere presence of those consequences poses such a threat to the foundations of human rights.

Given that these consequences can be so hard to conceive, as Artificial Intelligence is such a new technology and humans are not yet aware of the power it can possess, the best action that can be taken at this point is to discuss those potential issues; that discussion is of the utmost importance. Engaging with the topic and figuring out the best course of action is the one thing humans can do to thwart those unintended consequences.

If what has been read so far was not enough to spark at least interest and curiosity about the implications, some of the greatest minds of today have discussed at length what the potential consequences of AI misuse and mismanagement could be, and many described those consequences in the realm of human rights.

The specific intent of this essay is to explain the current and future AI technologies and their implications in the realm of the International Bill of Rights, to discuss the mitigation and solutions that experts have outlined, and lastly to extrapolate on those ideas with the support of various studies on the intersectional relationship between Artificial Intelligence and Human Rights.

I.I The Experts

Some of the biggest names in the field of Artificial Intelligence have given their own accounts of what they presume could create an existential threat; one of those experts is Elon Musk, the CEO of Tesla and SpaceX. At a conference at MIT in 2014, he discussed AI and the existential risk it poses, comparing it to "summoning a demon". In an interview with Kara Swisher, host of Recode Decode, Musk discussed the idea of the "intelligence ratio". He made the claim that, as Artificial Intelligence continues to get smarter over years of grueling development, the "relative intelligence ratio is probably similar to that between a person and a cat," and he further advised that researchers and developers alike need to be careful about the advancement.

Nick Bostrom, a Swedish philosopher who researches existential risk and human enhancement ethics, wrote a book on this particular topic called Superintelligence: Paths, Dangers, Strategies, in which he discusses much of this subject in depth and through a more futurist lens. He used some of the same disaster rhetoric that Musk used in his interviews, stating that "we humans are like small children playing with a bomb".

Granted, these statements can come across as almost irrelevant to the main objective of this thesis, especially with regard to Human Rights violations, but the reason to address them is that humans really are playing with fire. Other experts on the same panel addressed this as well. Michael Kleeman, board member at the Institute for the Future, wrote, "The utilization of AI will be disproportionate and biased toward those with more resources. In general, it will reduce autonomy, and, coupled with big data, it will reduce privacy and increase social control". What Kleeman is stating hits at the core of this essay; there is a distinct message about AI that has not been discussed. It is not about robots taking over the world and instituting a "robocracy" of sorts in which humans would be slaves to their robotic authoritarian regime, no. The real problem is that humans will allow themselves to disregard their own biases, which they incidentally project into their creations via biased data, or fail to understand the extent to which their programs are damaging whole societies. That must be addressed first and foremost simply because it is affecting humans at this very moment.

One expert put it plainly in stating that he is concerned that "AI will magnify humanity's flaws". He further explained that this magnification would mean the technology being used as a tool of control. In addition, along with being used as a tool of control, it could very well be used as a tool for in-group dominance, presenting itself in situations in which one culture or ethnic group is dominant over another because of the ways the technology processes information.

At every step along this path to Artificial Intelligence, there are going to be roadblocks and decisions that need to be made. These decisions will either push humanity toward its ultimate utopian ideals or propel it into a trend that continues to conflict with the basic understanding of Human Rights. Along that path, many issues of human rights will arise, and that is precisely why this needs to be discussed using the Human Rights lens. The potential violations will come about with every new iteration, with every test and adoption, and it is severely important that these questions are on the minds of its developers and researchers.

In the next section, the concept of bias and, more specifically, implicit bias is a sufficient next step in understanding how this situation could occur. It is very easy to believe that a technology could not possess the ability to shift the basic understanding of human rights, but as a preemptive rebuttal, technology has always shaped human cultures and always will.

II. Bias

The best way to begin to gain a more developed concept of the impact of these technologies is to specifically discuss bias. It can be easy to believe that in the decision-making and processing of information that AI does, the possibility of bias should be sufficiently eliminated, given that the system is indifferent to the constructs that humans create, especially as they relate to gender, ethnicity, sexuality, and religion. However, even though AI is presently indifferent, there is a high likelihood that bias can be projected into the framework of AI through the data obtained from existing databases.

There is a general awareness of what prejudice and discrimination have done to society in the United States. There are many problems stemming from that period of U.S. history that have a tendency to spill over into life today. Other oppressed groups have undergone similar treatment, such as other ethnic minorities and women, who have held a sort of second-class citizenry for most of history and are, only in recent history, gaining a voice. Additionally, those in the LGBTQIA+ community have been subject to similar treatment in the past as well as the present.

Due to these issues of the past, bias toward these groups was created, rooted in cultural stereotypes and intrinsically embedded into the fabric of society. There is a general perception that women are biologically weaker than men, that black people commit crime at a higher rate than their ethnic counterparts, and so on. Even further, there are smaller ideas about the same specific groups that have been woven in as well, which have exacerbated the development of bias in society.

In America, as surely as everywhere else, there is bias. Many, if not all, people experience bias, an inclination or preference either for or against an individual or group that interferes with impartial judgment. Bias is normally something unfair or unjust in nature. Prejudice exists in many, if not all, societies, and the foundation for developing a bias is holding that prejudice or exhibiting a sort of discriminatory behavior. Bias has been very present all throughout American culture; it is seen on TV, in movies, in elections, in education, in sports, and in so many other aspects of society. In fact, this is another way of developing bias: when there is only one idea presented to a person, it is easy to draw conclusions from the implications of that idea. This is a concept woven right into society, creating a culture. It led to a society in which the biases people had developed in their subconscious, through constant inundation, reinterpretation, and reassertion, became implicit, meaning those biases could be triggered unbeknownst to the person perpetuating them.

II.I Implicit Bias

This is a very interesting phenomenon, simply because this is one of the ways in which consequences of the past have lived with a society and evolved just as nature does. Put simply, “Implicit bias is usually thought to affect individual behaviors, but it can also influence institutional practices”. It was further explained that “implicit bias is usually not deliberate [… but it is important to] consider how past biases and current lack of awareness might make an institution unfriendly to members of certain demographic groups”.

The bias that people had against these groups or individuals evolved into implicit bias. This is a situation in which one's thoughts or actions can be quite literally altered by a bias they possess without knowing it. The bias can be so tied into culture that it becomes increasingly arduous to untangle.

II.I.I The Data

To provide further support for the claim that bias exists, it helps to address one large issue many academics run into when researching this phenomenon. Many researchers have studied the link between police brutality and shootings in the U.S. and how they relate to race and racism; there is even a website which tracks all police shootings across America in order to create an accurate database. There is a growing understanding in the academic community that the data show that the circumstances in which a person is shot and killed by a police officer are not directly linked with overt racism on the part of the officer; rather, the negative behavior could be intrinsically linked with the officer's own implicit biases, most likely imposed on them for the majority of their lives. It affects the manner in which they address certain situations. As a result, the rate at which unarmed black people are killed by police is far greater than that of any other race. From centuries of inequality in one community grew ideas about that community's culture, which translated into biases that society acquired.

This explanation of how implicit bias is a driving force behind police shootings provides a basis for why the problem persists. Police officers have been adamant in maintaining the rhetoric they use when they discuss the reasons for the shootings. On a consistent basis, officers have not admitted to being overtly racist, instead indicating that they "feared for their life", but what triggered that fear? It seems as if no one has actually questioned that very idea. Accepting that they were fearful and that the shooting was therefore justified has seemed to be a sufficient defense of poor choices, but what is not being understood is that the police officer in that altercation could very well have seen the person as more dangerous simply because of the color of the suspect's skin, or their clothes, or some other innocuous feature. It becomes clear that one reason why African-American people are killed at a disproportionate rate, astonishingly "2.5 times more likely than white men to be killed by police", is that police officers, just like a great number of other people, have implicit biases. Consequently, their position in the world allows them to act on those biases with impunity much of the time. These are unconscious attributes that humans assign to different groups of people, and it is natural. These biases cause police officers to act irrationally in situations because they can see a black face and be triggered to react differently than they would if it were a white face. This is a problem that lies within the amygdala.

That is simply one of the many things that bias affects, but it definitely conveys the seriousness and elusiveness of the behavior. The impact spreads into other areas of life. A smaller version of this bias exists in families, regardless of ethnic background; parents in general rate their daughters as having lower math proficiency than their sons, regardless of how similar their performance in that subject is. This is a bias that has the strength to perpetuate a sexist culture in a given society in which women are generally seen as less than men.

II.II “Dirty Data”

It may be easy to question how exactly all of this information and history of bias and racial inequality relates to technology and, more specifically, to Artificial Intelligence, and why looking at it through the lens of racial inequality and bias is beneficial. However, this is exactly where the two once-separate realms begin to converge into one concept, and this lens allows the audience to make a specific connection to how artificial intelligence could further perpetuate racial inequalities, let alone other forms of inequality and discrimination (e.g. sex, gender, religion, etc.).

When AI experts aggregate the initial data for their projects, depending on what is being undertaken, the data can represent certain forms of inequality or bias within society, because the data obtained can be characterized as "Dirty Data," as Kara Swisher of Recode has put it in publications and interviews numerous times. In an interview with published author Kate Crawford, they discussed this concept and explained that, given the bias that police officers have had in the past, the data obtained from their work does not account for that bias, and so a system built on it will only process and implement more bad information. It was explained that "if we have dirty data actually forming our predictive policing systems, you're ingraining the sort of bias and discrimination that we've seen over decades into these systems that in many ways just are above repute". Any system based on AI algorithms can have the common biases society holds ingrained into its framework, and not just in the realm of police data; that is Dirty Data.
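
A toy sketch of the dynamic described above, using entirely invented numbers rather than any real predictive-policing product: if patrol allocation follows recorded arrests, and recorded arrests follow patrol allocation, the disparity baked into the historical record never corrects itself, even when the underlying behavior of the two districts is identical.

```python
# A minimal sketch (hypothetical numbers) of dirty data feeding a predictive loop:
# patrols go where past arrests were recorded, which produces more recorded
# arrests there, which justifies more patrols.
true_crime_rate = {"district_a": 0.05, "district_b": 0.05}  # identical underlying rates
patrol_share = {"district_a": 0.8, "district_b": 0.2}       # historical over-policing of A

arrest_history = {"district_a": 0, "district_b": 0}
for year in range(10):
    for district in arrest_history:
        # Recorded arrests track patrol presence, not the (identical) crime rates.
        arrest_history[district] += int(1000 * true_crime_rate[district] * patrol_share[district])
    # The "predictive" step: next year's patrols follow the arrest record so far.
    total = sum(arrest_history.values())
    patrol_share = {d: arrest_history[d] / total for d in arrest_history}

print(arrest_history)
# district_a ends with four times the recorded arrests of district_b, and the
# data alone gives no hint that the two districts were ever the same.
```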

A better way to explain this is to look at societies with grave inequalities, whether racial inequality in America, income inequality in India, gender inequality in Myanmar, or human rights concerns in general in places like North Korea and China. These countries, especially those discussed relatively often like Myanmar, are at risk of being impacted negatively by this dirty data. As mentioned in Chapter One, Section II.III, the Rohingya Muslims of Myanmar were not counted in the census. That means the government reached the point at which it was no longer willing to support these people and, even more, believed them to be so far outside its population that it was willing to write them out of it. This skews all forms of data, because they are not accounted for.

Especially in countries like the United States, there have been countless wins for disenfranchised communities; however, many implicit biases still remain. Rather than emerging in a more overt way, those biases are entangled in everyday life. This affects many people within minority communities, women of course, and those who have intersectional identities.

What is becoming a growing concern is that if a given society is not willing to address the discrimination and inequality that society as a whole has participated in, it will never be able to comprehend the ways in which its actions and decisions could be affected by its own biases, which further distances humans today from the reconciliation many societies like America so desperately need. Given that human technology has been, and for the foreseeable future will be, advancing exponentially, without this reconciliation and acknowledgment there will only be a regurgitation of those exact same biases onto specific groups of people by technology. Untangling years of bias and inequality from such a model of artificial intelligence will be an even more arduous task, and it could hinder any society from reaching that so desperately desired post-racial future. It is imperative to get past dirty data, and that is a challenge among many others. In later sections, it will become increasingly clear how this dirty data affects the outcomes of decisions and actions made by artificial intelligence.

As a reiteration, this idea can come off as completely "out there" or "far-fetched," due in part to how complex an undertaking it is to come to grips with the idea that technology could actually cause such profound division; however, the arc of history toward justice is not linear by any means; like humanity, it fluctuates. There is a growing understanding that in this technological age, there is another complex problem that human rights advocates and administrators alike will have to deal with. The question from this point is: why not address this issue before it becomes an issue? Why do governments allow companies the size of Facebook or Twitter to control and wield power in the public sphere like a governing body? Why have people allowed things to get to a point at which an algorithm could exacerbate a genocide and perpetuate the ethnic cleansing of a whole country? This clearly was not the plan, but now people have the power to see the destruction and ask why it happened in the first place.

II.III Garbage in, Garbage out

As has been thoroughly explained, bias has played out in a plethora of situations, and inequality remains so prevalent that many developed countries cannot seem to move past it. The biases that are harbored and the inequality that is at play can, and most likely will, be present in the data of any artificial intelligence that performs statistical, data-driven algorithmic decision-making. It all stems from the dirty data the system was trained on. More succinctly, developers could be building their own personal biases into the framework of these creations and "into the parameters they consider or the labels they define," and "although this rarely occurs intentionally, unintentional bias at the system level is common". Just as bias can be unintentional, unintended consequences arise as well.

Sometimes developers may conflate correlation with causation, and it is easy to see how that kind of confusion matters in a world in which an algorithm is making all kinds of important decisions. Another failure mode is when developers "choose to include parameters that are proxies for bias". This one is especially dangerous because it comes off as a plausible solution; yet given the historical precedent this essay has framed, it is reasonable to believe that factors like income, education, or location can act as proxies for racial bias. These factors are so closely bound up with race that, due to their proximity, an AI algorithm would essentially produce the same results as if race were accounted for directly, only this time under the guise of being unbiased.
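To make the proxy problem concrete, the toy sketch below (in Python, with entirely synthetic data and invented variable names) shows how a score that is never shown the protected attribute can still reproduce the bias attached to it when a correlated feature, here a flagged ZIP code, is used instead.

```python
# Toy sketch (synthetic data, invented names): a score that never sees the
# protected attribute can still reproduce its bias through a proxy feature.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                       # protected attribute, never shown to the "model"
# ZIP code flag is strongly correlated with group membership (the proxy)
flagged_zip = np.where(group == 1, rng.random(n) < 0.8, rng.random(n) < 0.2)
# Historical labels reflect heavier enforcement in flagged ZIP codes, not behavior
labeled = (flagged_zip & (rng.random(n) < 0.5)) | (~flagged_zip & (rng.random(n) < 0.2))

# Naive "risk score": the historical label rate for the person's ZIP-code type
score_flagged = labeled[flagged_zip].mean()
score_other = labeled[~flagged_zip].mean()
scores = np.where(flagged_zip, score_flagged, score_other)

print(f"average score, group 1: {scores[group == 1].mean():.2f}")
print(f"average score, group 0: {scores[group == 0].mean():.2f}")
```

The two groups end up with very different average scores even though the protected attribute never appears as an input; the ZIP code carries the bias in through the back door.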

More often than not, historical data is biased in and of itself, and there are other specific situations in which dirty data can arise. One is when the data "are not representative of the target population," which is known as selection bias. In that case the recommendations or directives an AI provides would favor certain groups of people over others, a situation easily imagined in a place like Myanmar or in the U.S. Other situations include input data that are simply poorly selected, or data that are incomplete.

What AccessNow found in its report on Artificial Intelligence and Human Rights is that "biased data and biased parameters are the rule rather than the exception," and further that since "data are produced by humans, the information carries all the natural human bias within it". The report's fatalism, however, does not sit well with the premise of this thesis. Granted, this essay lays out the harms that AI and bias can do to a given population in the future, but the goal is not to fear-monger people into becoming luddites of some sort and reject AI and big data altogether, as the tone of the AccessNow report suggests; it is to emphasize that humanity is at the point of creating, in a large sense, the "last great human invention". AccessNow also asserts that there is no cure for this bias within AI systems, and that is a statement this essay will tackle. With how fast technology has been evolving, it will be easy to get where humans want to go in terms of technological progress; the idea is to keep those advancements from having such grave consequences and to mitigate the ones that happen to be unforeseen.

The rest of this essay, as it reaches the third and final chapter, will discuss many of the technologies that exist today and how they have already posed a probable threat to human rights, depending on who develops them and what is done with them. That chapter will be used in part to build an understanding of technology today and its impact on human rights, of the projects now in development, and, last, of future ideas, once believed to be science fiction, that are becoming reality, and what they say about the future of human rights in the world.

Chapter Three: The Intersection

Artificial Intelligence and Human Rights

I. The Impact of AI on Human Rights

This third chapter, on the intersection of these two fields, includes a discussion of the ways in which technologies deployed today have been on the fringes of human rights violations. Near-future inventions that have been questioned for their implications will also be discussed. The last topic addressed concerns technologies once seen only in science fiction, or in science-fiction media for that matter, that are now within the reach of scientists and researchers. Drawing on two reports, the AccessNow report and one on Ethically Aligned Design, the chapter also includes the solutions experts have found, as well as other forms of mitigation developed through the informed opinion built upon the research from which this essay was created.

In each section, different technologies are addressed along with their implications for internationally agreed-upon rights. The discussion focuses mainly on the right to privacy and data protection; the right to freedom of movement; freedom of expression, thought, religion, assembly, and association; the rights to equality and non-discrimination; and the rights to political participation and self-determination. Discussing the rights established via the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil & Political Rights (ICCPR), and the International Covenant on Economic, Social, and Cultural Rights (ICESCR) is necessary in order to convey the potential violation of these rights and how the use of Artificial Intelligence can further erode the foundation on which they rest.

Inequality birthed bias into the world, and that bias, with the assistance of social and cultural norms, evolved into the more implicit forms of bias addressed in the previous sections. From those implicit biases and the inequality that has persisted in many societies, any data drawn from those populations has the distinct potential to be classified as "dirty data". What will soon come to the fore is how bias and dirty data get translated into a given algorithm, both maliciously and innocuously. Through the UDHR and similar texts, it will become clear that many rising technologies based on artificial intelligence can be turned against their users without the knowledge, or even the intent, of their creators. This is a problem that researchers and AI experts must grapple with as they design these life-altering creations.

I.II Narrow v. General

There are two different forms of Artificial Intelligence as currently understood. When people discuss AI and the potential existential threat it poses, they are mainly talking about the strong version of AI, in which these creations may communicate and think more like humans. While that is a general and valid concern, it is not within the scope of this essay. Before humanity reaches the point at which there are multiple versions of Strong or General AI and people live in a society in which AI usage and proximity to AI is normalized, other processes must be undergone.

Weak Artificial Intelligence, or Narrow AI as it will be described in this context, is personified by the AI that exists today. Narrow AI refers to the single-task application of artificial intelligence for uses such as image recognition, language translation, and autonomous vehicles. Strong or General AI, by contrast, is a future hope for many researchers: AI that can "exhibit intelligent behavior across a range of cognitive tasks". That idea is reported to still be decades away.

II. The Sentencing Algorithm and its other forms

The first item on the list is the sentencing algorithm, an example of what Narrow AI is and of how bias can be innate in these types of systems. As previously mentioned, looking at this topic through the lens of inequality, and more specifically racial inequality, can come off as confusing or beside the point; yet this is exactly where history parallels, quite disturbingly, the implications of one form of artificial intelligence. Because many of these biases and the consequences of those inequalities are still alive today, the United States has become more segregated. In terms of schools, and some would even argue housing, the United States is by some measures more segregated than it was in the 1950s during peak Jim Crow. From these problems came mass incarceration, a phenomenon purported to exist because of the racial inequalities and biases in society today. Now there is an algorithm that can project that very reality.

The United States has the highest incarceration rate in the entire world, and it was reported that by the end of 2016, "1 in 38 adult Americans was under some form of correctional supervision". It is widely known and accepted that African Americans account for roughly a third of that correctional population. On a positive note, however, many politicians and researchers have recognized this, so there have been attempts to reduce these prison numbers. A common critique of America is that it claims to be so free yet has the largest population of people under correctional supervision in the world, which seemingly goes against the fundamental beliefs of Americans.

It is no secret that police departments "use predictive algorithms to strategize about where to send their ranks". These algorithms allow them to understand where the most "action" normally is and where issues may arise next. A similar algorithm is used for sentencing individuals.

This tool, known as the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, is designed for one specific task and is therefore a form of Narrow AI. That task is to "take in the details of a defendant's profile and spit out a recidivism score", which is "a single number estimating the likelihood that he or she will reoffend". The technology comes off as a perfect fix for the bias a judge may hold toward a person, depending on the defendant's identity and background and the judge's relationship to that identity. Having a computer produce this score can at first seem like the best way to create an environment in which defendants are not judged negatively on their skin color or some other factor. However, that was not the case.

The tool takes the defendant's profile and rates the likelihood of reoffending based on historical crime data, which allows it to pick up patterns of past criminal behavior in any category the machine sees fit. The technology creates statistical correlations, which are "nowhere near the same as causations". What these risk assessment scores actually do is "turn correlative insights into causal scoring mechanisms". The author's point is that the increased criminal behavior recorded for people of certain identities is due to structural issues that have not come into the foreground of modern politics. This is exactly what was referred to previously: there are already situations in which artificial intelligence can negatively impact inequality while simultaneously being purported to be a solution, ergo, the "Cobra Effect".
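To illustrate how correlative insight hardens into a causal-looking score, here is a deliberately toy simulation in Python. It is not COMPAS, whose internals are proprietary; every number and name below is invented. Two neighborhoods have identical underlying behavior, but one is patrolled more heavily, so it generates more arrest records, which raise its "risk score," which in turn attracts more patrols.

```python
# Toy simulation (all numbers invented; this is not COMPAS): two neighborhoods
# with identical underlying behavior, where the "risk score" is just the
# historical arrest rate and patrols are allocated according to that score.
import numpy as np

rng = np.random.default_rng(1)
population = {"A": 5_000, "B": 5_000}
true_offense_rate = 0.10                 # identical in both neighborhoods
patrol_share = {"A": 0.7, "B": 0.3}      # A starts out more heavily patrolled
arrests = {"A": 0, "B": 0}

for year in range(5):
    for hood in population:
        offenses = rng.binomial(population[hood], true_offense_rate)
        # only offenses that occur where officers are present become arrest records
        arrests[hood] += rng.binomial(offenses, patrol_share[hood])
    scores = {h: arrests[h] / population[h] for h in population}   # "risk scores"
    total = sum(scores.values())
    # the feedback loop: next year's patrols follow this year's scores
    patrol_share = {h: scores[h] / total for h in population}

print(scores)   # neighborhood A ends up with a far higher "risk" than B
```

After a few iterations the heavily patrolled neighborhood carries a far higher score even though nothing about the residents' behavior differs; the score measures the policing, not the people.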

These risk assessment tools are not only used by police; the same kinds of algorithms have systematically barred African-Americans from public funds, schools, jobs, and many other facets of society, often without the knowledge of the algorithm's creator or of the person employing it.

In sum, certain groups of citizens, in this specific case low-income and African-American communities, have been disproportionately targeted by law enforcement. To add insult to injury, they are at further risk of high recidivism scores based solely on the circumstances under which they were born rather than the content of their character. That is an injustice waiting to unfold.

On a positive note, there is evidence that people are trying to do better. Whole institutions are uprooting the norms of their systems to ensure that the people who work under them do not allow their biases and misconceptions to cloud their judgment. That is amazing in every way; there just needs to be more of it. The more discussion and the more reform, the better. On the other side is a dystopian future in which AI developments predict crime and re-instill over-policing in minority and low-income communities, perpetuating a corrupt system. What happens when bias is so engrained into policing and sentencing AI that bias and inequality become incomprehensible to humans because they have been so normalized? There are choices to be made and discussions to be had, and that is the notion upon which the idea of this essay was conceived.

II.I The Impact on Human Rights

The first document on human rights that comes to the fore is the International Covenant on Civil and Political Rights, in particular Articles 26 and 27. The first begins, "[a]ll persons are equal before the law and are entitled without any discrimination to the equal protection of the law". Immediately, the sentencing algorithm infringes on this idea. Even without the disparate effects of the sentencing algorithm, there is a profound amount of evidence indicating that low-income and minority communities have been disproportionately targeted by law enforcement in places like the U.S. The algorithm, in defense of this thesis, is what would place the problem in the realm of a human rights violation.

Article 26 continues, "[in] this respect, the law shall prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status". These protections are "guaranteed". The problem is that if governing bodies cannot acknowledge the inequality and bias that exist in their societies, a future in which these forms of oppression go unchecked by larger authorities is very possible.

Article 27 is a particularly interesting aspect of this agreement in that it goes beyond the scope provided so far in looking at where bias has stemmed from. It begins, "In those States in which ethnic, religious or linguistic minorities exist," and declares that people belonging to those minorities are never to be denied their right to "enjoy their own culture, to profess and practice their own religion, or to use their own language". Places like Myanmar, as previously discussed, have ethnic and religious minorities within their borders. What happens when such a state gets hold of this technology and hands down much harsher sentences to the Rohingya, if any remain after the events that have been transpiring?

This poses profound questions about how humans will use the advanced technology and how they are to prevent malicious uses of it, or at least keep themselves from inadvertently utilizing the tools in a malicious manner. Additionally, and most notably, at what point will the stakeholders of these technologies recognize the negative impact some of these creations may have on the world?

III. Facial Recognition

In recent years the topic of Facial Recognition has generated a great deal of discussion in the public sphere. For better or for worse, it is already a hotly debated topic today, and unraveling what it is and how it works is incredibly important to understanding how it relates to the issue at hand.

Facial Recognition is software, based on artificial intelligence, that maps a person's facial features in a manner that allows it to recognize the person it is looking at based on data previously supplied to the algorithm. This biometric software is capable of "uniquely identifying or verifying a person by comparing and analyzing patterns based on the person's facial contours". What is remarkable is that this technology has been getting more and more accurate with every update, and its use is increasing around the globe. While there are cities in the United States, such as San Francisco, that have outright banned its use, the technology will continue to be deployed, and as an AI-based system it has the same capacity to be used as a tool and as a weapon.
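For readers who want a sense of the mechanics, the sketch below shows, in Python, the matching step in broad strokes: an image is reduced to a numeric "embedding" vector, and two faces are declared a match when their vectors are similar enough. The embed() function here is a placeholder invented for this sketch; a real system would use a trained deep network, and the 0.6 threshold is an arbitrary illustrative value.

```python
# Conceptual sketch of the matching step in facial recognition. embed() is a
# placeholder invented here; a real system would use a trained deep network.
import numpy as np

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a learned model mapping a face image to a 128-d vector."""
    seed = abs(hash(face_pixels.tobytes())) % (2**32)
    return np.random.default_rng(seed).standard_normal(128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(probe_img: np.ndarray, enrolled_img: np.ndarray,
                   threshold: float = 0.6) -> bool:
    # A "match" is declared when the embeddings are close enough; the threshold
    # is where false matches are traded off against false non-matches.
    return cosine_similarity(embed(probe_img), embed(enrolled_img)) >= threshold
```

The threshold choice is where error rates, and any disparities in those error rates across demographic groups, effectively get decided.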

III.I Algorithmic Sexuality

While some of the developments discussed so far originated with governing bodies, there are also independent researchers running their own experiments and developing their own software, either in collaboration with a university or company or entirely on their own. One such development was completed at Stanford University in the United States by Michal Kosinski and Yilun Wang. In 2017, they created an artificial-intelligence-powered algorithm that could "correctly distinguish between gay and straight men 81% of the time, and 74% for women", using Facial Recognition. The work, published in the Journal of Personality and Social Psychology, was "based on a sample of more than 35,000 images that men and women publicly posted on a US dating website". They used AI to pinpoint features on a person's face by utilizing an incredibly "sophisticated mathematical system that learns to analyze visuals based on a large data set", or, as they described it, Deep Neural Networks.

The algorithm processed the information by drawing generalizations about characteristics that only a computer could aggregate, such as grooming styles and whether someone appeared more masculine or feminine. The study also found a plethora of other ways of identifying someone's sexuality based on features such as nose length, forehead size, and jaw width. The authors concluded that "faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain".

Eventually, they fed the algorithm even more data from people's profiles and pictures, and once the algorithm was given five pictures of each person, it predicted the sexuality of men with 91% accuracy and of women with 83%. Like any tool, this algorithm in the wrong hands could lead to further harm to marginalized groups of people.

III.I.I The Impact

After reviewing the research and the results of this study, it is easy for any reader, especially those who have struggled with their own sexuality, to see the benefit this tool might have. One imaginable future for an algorithm like this is one in which millions of people who self-identify within the LGBTQIA+ community voluntarily enter their facial data, or more simply "selfies", into the model via social media. A person who happens to be questioning themselves could then enter a picture of their own face and learn that, based on all the other people within this community, they most likely fall into this sexuality or that gender. That could be a great tool, especially for teenagers who are just starting to figure out who they are. Unfortunately, a society that is broadly open to that concept does not yet exist.

The first idea that became solidified was that there are countries with zero tolerance for any form of relationship outside of the traditional one in which a biological man marries and has children with a biological woman, let alone premarital sexual relationships or gay relationships. Many countries strictly adhere to a society in which those forms of relationships are punishable by law, and in many others, even where the law states clearly that those in the LGBTQIA+ community are protected, there are situations in which citizens take it upon themselves to carry out extra-judicial killings and muggings.

Specifically, there are nine countries with national laws that criminalize "forms of gender expression that target transgender and gender non-conforming people". In regards to same-sex relations, some countries punish certain acts, while others have more general laws that can be characterized as vague and are often subject to varying interpretation.

Iran greatly exhibits the exact notion being conveyed here. Under the Islamic Penal Code of Iran, performing any type of homosexual act can bring a sentence ranging from "31 lashes (for homosexual acts other than anal sex or thigh sex) to 100 lashes to death". That is clearly one of the more extreme ends of the problem, but it poses an interesting question. Given the general ideas of "western" freedom, it is safe to assume that most people are not okay with such punishment and that, even if they do not necessarily agree with the LGBTQIA+ community, people do not deserve lashes, let alone death, for being themselves; it is safe to assume this is not in accordance with modern human rights law. A situation could arise in which a governing body inquires about purchasing, or even developing, its own algorithm based on the same principles as Kosinski and Wang's project. That could take its efforts to rid the country of gay people to a new extreme. People will use this technology as a tool for good and for evil.

Specifically, in regards to human rights, there are already serious problems with the first usage of this data. Is it even ethical to use people's data to create this algorithm, even if they have a public profile with their sexual orientation listed? How does the creator of this technology ensure that it is really the person with that specific face trying to find out their results? While countries like Iran and Nigeria may apply this software for corporal punishment, individuals elsewhere may use it to target people in their own communities if they harbor biases toward that group. The potential consequences of this system far outweigh its benefits.

This can be taken a step further, and it is being taken further right now. Whole nations are planning to use facial recognition software with the specific intent of making their countries more secure while simultaneously enabling the government to actively spy on its citizens. That could turn the issues seen with a single research algorithm that identifies a user as homosexual with roughly 90% accuracy into something much more nefarious.

What becomes cause for concern is reading the Universal Declaration of Human Rights, specifically Article 18. A few key phrases in this article must be pointed out. It begins, "Everyone shall have the right to freedom of thought, conscience[...]". At first it may be hard to see where the problem lies; however, on further inspection of the implications of the sexuality algorithm, such a system would strip a citizen's freedom even to think about a same-sex relationship, because the algorithm would take in facial information and output a label indicating their sexual orientation without them ever acting on, or even contemplating, such acts. It could become a new form of predictive policing that corrupt and despotic regimes use to "cleanse" their societies in the ways they deem necessary.

III.II The Chinese Social Credit System

In China, there is a system dubbed the "Social Credit System", which will use the big data the government has collected from its citizens and abroad, along with heavily utilized CCTV footage, to create a surveillance state. The system, set to debut in 2020, has been called "the most ambitious experiment in digital social control ever undertaken". The main purpose behind it is that the Chinese government wants to begin monitoring, rating, and regulating everything about the state, including its citizens, its finances, and even their "social, moral, and possibly, political behavior".

One of the biggest reasons this technology can work is Facial Recognition. The government also believes the system would be a tool to grow its economy and ultimately make China the most powerful nation in the world. Referring back to Chapter One, I.II, big data, algorithms, and AI are intertwined; there really cannot be one without the others. One can create an AI, but algorithms are needed in the framework, and the only thing that makes that artificial intelligence actually intelligent is data. The more data the AI has, the better it will be at performing its core tasks.

The system is set up so that every citizen is assigned a social credit score, similar to the financial credit scores used in many countries. That score increases or decreases contingent on their behavior and financial status. The data gathered to keep adjusting the scores comes from a plethora of sources, as broad as government records and as personal as social media usage, internet history, shopping tendencies, and interactions in the digital world. As aforementioned, the system also relies heavily on facial recognition to work.

There are actions one can take to make one's score go up, such as positively influencing one's neighbors, taking care of elderly family, engaging in charity work, praising the government on social media, and having a good financial history. Actions that decrease one's score include illegal protesting, traffic violations (such as drunk driving and jaywalking), posting anti-government messages on social media, and participating in anything deemed to be a cult.

On the positive side of the score, meaning a citizen is above 1,000 points, they gain access to benefits such as admission priority for schools and employment, priority placement on public housing lists, tax breaks, cheaper public transport, and much more. Falling under that threshold, however, carries consequences such as denial of licenses and access to social services, less access to credit, exclusion from domestic and international travel, no access to private schools, and even public shaming, in which the person's face and information are displayed in public spaces.
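As a way of making the mechanics concrete, here is a purely hypothetical sketch in Python of the kind of rule-based scoring the reporting describes. Every point value, event name, and the 1,000-point threshold are invented for illustration; nothing here reflects the actual implementation.

```python
# Purely hypothetical sketch: the point values, event names, and threshold are
# invented for illustration and do not reflect any actual implementation.
SCORE_RULES = {
    "charity_work": +30,
    "caring_for_elderly_family": +20,
    "praising_government_online": +10,
    "traffic_violation": -20,
    "illegal_protest": -50,
    "anti_government_post": -50,
}
BENEFITS_THRESHOLD = 1000   # above: priority schooling, housing, tax breaks
                            # below: blacklisting risk, travel and credit limits

def update_score(score: int, events: list[str]) -> int:
    """Apply each recorded event's point value to the citizen's score."""
    for event in events:
        score += SCORE_RULES.get(event, 0)
    return score

def tier(score: int) -> str:
    return "benefits tier" if score >= BENEFITS_THRESHOLD else "restricted tier"

score = update_score(1000, ["traffic_violation", "anti_government_post"])
print(score, tier(score))   # 930 restricted tier
```

The simplicity is the point: once behavior is reduced to a lookup table of rewards and penalties, whoever writes the table governs the behavior.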

III.II.I The Uighurs

There is a group of people within China who have systematically been victims of racial and religious discrimination and of targeting by law enforcement in a manner similar to, and most likely more degrading than, the situation faced by African-Americans and those of low-income status. They are known as the Uighurs, and for centuries they have been persecuted; it has been reported that Chinese officials have silenced academics from that culture who have tried to explain their people's plight. This creates a growing concern about how this systemic bias will affect the relationship the Uighurs have with the state and with society.

Implementing this technology can put China in a position to exacerbate the growing inequality between the state and the Uighurs. One way this may work is "blacklisting". The best way to explain this phenomenon is to look at the case of Liu Hu, who was blacklisted and jailed. Hu is a journalist working in mainland China who writes pieces addressing censorship and government corruption. Because of this work, he was arrested and fined, and then eventually blacklisted. Liu found that he was on the "List of Dishonest Persons Subject to Enforcement by Supreme People's Court" and was therefore not qualified to purchase things such as plane or train tickets. He even began having trouble buying property and accessing public funds. He explained that the truly frightening aspect of this problem is that there is legitimately nothing a blacklisted person can do: "you are stuck in the middle of nowhere". One researcher explained that "there are no genuine protections for the people and entities subject to the system".

It has been widely reported that China does not adhere to international human rights in the same way other countries do. Taking that as fact, a situation in which the Uighurs become blacklisted at a higher rate than other Chinese citizens is incredibly likely. If the government is already allowing this form of discrimination, it can be taken as a given that the logical next step is to use facial recognition software to target members of that community through increased surveillance or even physical policing, as in the United States.

The story of Liu Hu underscores many problems with the idea of a social credit system. It seems the system is not in place to maintain the safety of the public, but to control what people do and say, in a way that, as many researchers have found, is highly violative of their human rights.

III.II.II Into the Weeds

In this section and the subsequent one, it is necessary to dive deep into the problem of the social credit system. Through the research done for this essay, it became clear that this system, due to come to fruition in 2020, encapsulates what this essay is about, especially as it relates to the western concept of human rights, an important distinction to make.

There is a large subset of problems that fall within the realm of facial recognition, and the main reason to cover the topic at length is its proximity to humans today. It is quite a revolutionary technology, and upon learning that there were countries in talks to build a system using facial recognition for so many societal applications, one could only begin to imagine the implications of that very notion.

There are wonderful things that Facial Recognition software can do, especially when police services need to find a suspect they have been looking for, which has happened in the past. Facial recognition is on the phones of many people in the world, and there are circumstances that call for its use, such as identifying the body of someone who carries no identification. This technology could come into use in many ways. However, there are malicious uses as well, and it is necessary to ask whether the utilization of this advanced algorithm places any undue burden on citizens. That discussion seems to be far out of the picture for the government in China at this point.

Aside: it is also important to note, in using China as an example of how this technology and AI can be used to exacerbate inequality, that China has a history of such abuses of power. Coming out of the era of Mao Zedong, it is understandable how that form of government could have become normalized in Chinese society; nonetheless, it is a path that can be taken by any state regardless of its past. This is what makes focusing on the social credit system so valuable: it offers an outsider's perspective on a very possible future in the age of rapid technological advancement.

IV. Rights Now

As specified in the previous section on Algorithmic Sexuality, the international protections for the rights to equality and non-discrimination are in the same position here. The amount of data China holds on its citizens is what gives it the power to use this technology in this way. An AI that only carries out the tasks it is designated to complete, fed historical data that takes no account of the past inequalities suffered by the Uighurs, will have the same effect on those people as the sentencing algorithm had on low-income and black communities in the United States.

Their right to freedom of thought, conscience, and religion is violated in this context too, as stated in Article 18 of the Universal Declaration of Human Rights. Under this stipulation, the Uighurs are a protected class of people, and whether or not the discrimination is intentional, they still have the right to have those concerns addressed. With this technology, it will be that much harder even to confront the government in the first place.

As discussed in the details of the credit system, a citizen's score goes down whenever they criticize the government. In a given situation, that may bring them that much closer to being blacklisted as dishonest and suffering the shame of having their face displayed in public as the worst kind of citizen.

Another topic that does not receive enough attention, based on the research gathered for this essay, is a person's right to political participation and self-determination. Article 25 of the International Covenant on Civil and Political Rights states specifically that "[every] citizen has the right and opportunity to take part in the conduct of public affairs[...]". In stating this, it characterizes participation as an innate right of citizenship and further specifies that the participation can occur either through a representative or directly. Breaking the article down this thoroughly is imperative to convey its density and the amount of ground it covers. It further indicates that citizens are able to "vote and to be elected at genuine periodic elections [...] which shall be guaranteeing the free expression of the will of the electors[...]" and further dictates the necessity of equality and of genuine public service to the country. China has systematically gone against every one of those ideals.

Studies have already shown that groups of people clamoring for democratic rule in China have been silenced by the government. One instance involved the government actively thwarting one of its citizens from participating in an electoral contest: Zhang Shangen was blocked at every turn in his quest for local election. The candidate stated that the government was manipulating everything in society, and other candidates claimed they could never get elected simply because of how the system works. These are people with a different ideology from the current government's, and rather than listening to its population, the government remains in control of what it determines is correct and what is order.

In regards to data rights, for China the era of privacy is over; that is to say, it is the end of the western idea of privacy in the east. The data acquired from millions of CCTV cameras, along with the artificial intelligence required for facial recognition, in combination with the government's efforts to thwart democracy, creates a legal vacuum in which citizens no longer have any power over what their government does.

It has also become increasingly apparent that technology like Facial Recognition is gaining even greater capabilities. There is newer software that uses a 3D model of the person's face and claims a greater level of accuracy. Working from "a real-time 3D image of a person's facial surface, 3D facial recognition uses distinctive features of the face — where rigid tissue and bone is most apparent, such as the curves of the eye socket, nose and chin — to identify the subject". This newly developed software has the potential to make facial recognition much stronger and much more accurate than before.

What became clear in reviewing this is that there were no parallel advancements aimed at perfecting the technology so that it would not misidentify people, especially black women. While there are necessary advancements in the technology itself, there seems to be little to no discussion of the biases previously described and how they can affect a given population. Granted, advancements may eventually rid the technology of that problem, but without the conversation, the interval before that happens, and the discourse during it, may become untenable. Decades of backlash against police brutality from minority communities in the U.S. have already exhibited this, a situation normally answered with self-righteous sentiments about one's own racial history and background.

It is a pattern that could easily be replicated in a way that parallels how implicit bias works in society today, which is the main argument of this essay. Going forward, there will be a more thorough analysis of everything that has just been discussed, what it all means, and how everything connects into one narrative.

Chapter Four: Pathways of the Future

Implications, Solutions, and Further Research

Through all that has been addressed in this essay, the history, the discussion of concepts such as big data, algorithms, and bias, and many other topics, one central idea has continually underscored the argument: historical events can act as a predictor of the future. Not so much that history repeats itself, but that there are patterns of human behavior exhibited over time, one of them being bias, something that most, if not all, humans have. It can be reasonably assumed that bias has evolved into implicit bias, and the argument this essay makes is that bias, and in many cases implicit bias, in combination with historical inequality, can be projected into the framework of some of these technologies in a way that perpetuates, and in many cases exacerbates, the inequality and bias that already exist.

Through the research conducted for this project, across all of the mishaps and the trials and tribulations technology has undergone in recent years, it became increasingly noticeable that the solutions were virtually the same at their core. With that being said, one issue that had to be addressed was how to talk about solutions without redundancy. The solutions can be broken down into ideas drawn from the Three Pillars of the Ethically Aligned Design conceptual framework, published by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: Universal Human Values, Political Self-Determination and Data Agency, and Technical Dependability. Much of what needed to be discussed about these technologies fell under the umbrella dictated by these three pillars. For the scope of this essay, however, it is only necessary to discuss the first two.

I. Universal Human Values and Self-Determination

These first two pillars underscore much of what this essay is about. In developing a project based on Artificial Intelligence, this idea has to be central: understanding the ways the technology can be used for both good and ill and how that use would impact the users or the population at large. Too often things of this nature come down to monetary gain, so it is understandable how this concern might hold little interest for those developing a platform or device purported to have the possibility of changing the world for the better.

Diving further into these ideas, the pillars highlight well-being, transparency, accountability, and data agency. If there is an algorithm that uses one's data or information in a malicious way, as in the case of Cambridge Analytica, there can be serious consequences. The case that comes most readily to mind is Algorithmic Sexuality, which could be devastating in a culture in which any form of homosexuality is penalized. If a bad actor gets their hands on the technology, there could be countless violations of human rights, so it is important to have human values in mind at all times. Perhaps the Iranian government obtains the technology and uses it for its own cleansing purposes. Perhaps a company chooses not to hire homosexual people by running this algorithm over its candidates' profiles to weed out anyone the algorithm places in that identity. That, of course, is a more explicit form of the bias, a sheer attempt by a bad actor to perpetuate inequality; but it can also happen in ways that go unnoticed in development, as it did with the sentencing algorithm.

II. From Principles to Practice

The idea behind these pillars is to break things down far enough that all potential problems can surface and that there are specific, articulated reasons for creating the technology that serve the betterment of society. Developers following them go through a rigorous process to ensure the technology does not have a disparate impact on people and communities.

The framework's authors created a method by which developers and creators alike can plan their development accordingly, and they provided a foundation upon which to build the idea. The guidelines they outline are exactly what the technology community needs in order to prevent futures in which bias is so deeply engrained into technology that there is no way to break the cycle.

One researcher, Joyce Xu, worked specifically on the biases that algorithms based in artificial intelligence can have, and from her very technical research she found that "Bias may be a human problem but amplification of bias is a technical problem — a mathematically explainable and controllable byproduct of the way models are trained". In saying this, she underscores the importance that "dirty data" holds in this essay. She sought ways to remove it on the mathematical side, something that needs to be researched even further but that has shown promising results, and she found there are different methods by which bias can be kept from taking hold in the technology. Although most of her conclusions are too mathematical to cover here, one aspect of her research is worth mentioning: a solution involving "dynamic upsampling of training data based on learned latent representations". In essence, the AI learns representations of groups and rebalances the data it is trained on so that the outcomes are balanced, rather than trying to account for the bias after the fact. The model learns the sensitive attributes hiding in the dirty data and accounts for them as such.
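A heavily simplified sketch of the rebalancing intuition is shown below in Python. It upsamples under-represented groups using an explicit group label; Xu's actual proposal is more sophisticated, inferring the groups dynamically from learned latent representations rather than relying on a labelled column, so treat this only as an illustration of the general idea.

```python
# Simplified sketch of the rebalancing idea using an explicit group label;
# the actual proposal infers groups from learned latent representations.
import numpy as np

def upsample_to_balance(X: np.ndarray, y: np.ndarray, group: np.ndarray, rng=None):
    """Return a training set in which every group appears equally often."""
    rng = rng or np.random.default_rng(0)
    labels, counts = np.unique(group, return_counts=True)
    target = counts.max()
    picks = []
    for g in labels:
        idx = np.flatnonzero(group == g)
        # sample with replacement until the group reaches the target size
        picks.append(rng.choice(idx, size=target, replace=True))
    picks = np.concatenate(picks)
    return X[picks], y[picks]
```

Balancing the training distribution this way attacks the amplification step, though it does nothing about bias already baked into the labels themselves.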

III. Informed Opinions

From all of the research conducted for this essay, a few things still need to be addressed. It has become apparent that there are small steps that can be taken to resolve some of these problems of bias. They include figuring out a method of policing artificially intelligent systems, or creating a process by which the decisions made by AI can be monitored so that how the AI reached its conclusion is not left unknown; however, these are not ends in themselves. The solutions many experts have discussed, such as including more diversity in the workplace to avoid these problems, are accounted for here. They are substantial changes that will alter how these technologies affect people.

There is also the argument that the best way to rid the world of the problems tech is causing is for large tech companies, Facebook chief among them, to be broken up into smaller entities. While that argument is incredibly tantalizing given the effect Facebook has had on the world, it would likely stifle innovation and probably produce a Cobra Effect of its own: once the companies are broken up, it becomes a much more arduous task to regulate everything they are doing. Additionally, it can be reasonably assumed that companies as big as Facebook cannot simply be made governable by an outside entity at all.

III.I Diversity begets diversity

The issue with many of these proposals is that they either do not have a large enough effect on the company or the technology, or they rely on information that is completely conceptual at this point. One action that could reasonably create a wave of impact is diversity. Research has already shown that diversity, especially as it relates to race and gender, is incredibly beneficial for a company. The best part is that this is a positive-sum outcome: there are economic benefits as well as social ones.

Additionally, diversity along the proposed lines would be a great way to facilitate the discussion of these ideas. As the lack of diversity within technology-concentrated areas such as Silicon Valley in California has been uncovered, it can be reasonably assumed that this sector would benefit greatly from such diversity. This seems like a great idea in concept; however, a few qualms must be addressed. The diversity many researchers have argued for should be taken a step further: include diversity across age and wealth brackets, engage the communities in which companies are located, and host community events at which citizens can be present and raise any concerns in an open and honest discussion. This would greatly lessen many of the disparate impacts some of these technologies have. If the community of everyday citizens can be involved in the development of something, especially as it relates to artificial intelligence, that is the optimal way to complete such an endeavor.

Diversity of this kind could greatly enhance the quality of the work and the attention to detail. Giving everyday citizens a platform in the development could greatly reduce bias because it could be addressed before the concept ever becomes a reality.

III.II Reconciliation and Restructure

A larger proposal, and one way to address the problem of bias and inequality being perpetuated and exacerbated through Artificial Intelligence, is to achieve genuine reconciliation with that problem. For too long the history of inequality in America and American history have been treated as two different things, as if those situations of inequality belonged to a different America while the triumphs belonged to the America people live in today, and that is wholly untrue.

This reconciliation is a collective action issue, and history has shown that no matter how much progress is made in the realm of human rights, there are still situations in which it becomes easier and easier to slide back into the habit of treating people unequally. The idea to highlight here is that this technology is truly interesting in that it can perform actions without the direct supervision of a human, and while that is a wonderful feat, it carries an element of worry. Reconciliation will come from diversity, and with that diversity in the making of an artificial-intelligence-based technology comes a greater opportunity to rid the technology of biases that would otherwise not be addressed. This reconciliation would do away with "dirty data" and its impact on existing levels of inequality and bias. Reconciliation and acknowledgment of these past faults will lead to more diversity in the workplace, especially in technology, and these biases will be much easier to catch up front rather than discovered as unintended consequences.

On the policy side, there are many actions that can be undertaken to avoid many of those consequences. One of the main ones that came up throughout the research was regulation of how this technology can be made and the process by which it is approved for public use. Europe has laws such as the General Data Protection Regulation, better known as the GDPR, and while it does not explicitly protect against what is being discussed in this essay, it is incredibly similar in spirit. This regulation, as its name states, quite literally protects the data and online privacy of users within Europe. Its enforcement arrived around the same time as the data scandal involving Facebook and Cambridge Analytica, and it is an example of how the law tends to work retroactively; the biggest challenge in facing AI is getting ahead of the technology and working to predict negative outcomes.

III.III The Case for Something New

On one hand, existing international rights agreements could be strengthened with updates that address technology and its implementation. However, the best way to combat both unintentional and intentional abuses of artificial intelligence may be an entirely new organization, with the requisite authority, to oversee technology usage and development. Governments, especially in the U.S., have shown they are not equipped with the specific knowledge to regulate tech by any means; the U.S. Senate's hearings on Facebook, Social Media Privacy, and the Use and Abuse of Data were the epitome of that.

Given that technology and the development of technologies like AI are so widespread, a global effort is necessary for policing these abuses and forming the correct questions when there is potential harm to societies. The once popular motto of "move fast and break things" has to be tamed. This body could work with local, regional, and national governments; fund projects centered on well-being and privacy; and give employees across the world more bargaining power when technologies are being made that they do not agree with. There needs to be a supranational organization, like the EU, whose main objective is to oversee technology development in member countries, to ensure these technologies cannot be used as malicious devices, to ensure companies are asking the right questions in their research and development, to ensure they are accounting for bias, and much more.

III.IV Shortcomings and Further Research

One of the shortcomings of this essay is that, as mentioned in the beginning, it deals only with Narrow AI and not general or strong AI. One reason is that this essay is geared toward what is going on now and how those issues can be projected into the future. Another is that strong AI is incredibly undeveloped at this time: who is to say that a strong artificial intelligence would be able to decode these forms of inequality in its processing? Who is to say that it will? There is a great deal to unpack in that idea, and it quickly becomes too technical for this essay. Keeping the general focus on Narrow AI, on how bias in that form gets perpetuated, and on its future impact on a given society, was beneficial in keeping the concept grounded.

Additionally, this essay is not meant to be used as a tool for detailing exact policy measures or structural change. It is instead a way of looking at the current scale of Artificial Intelligence, at what it causes, and at how human bias and the inequality that exists in the world today can be perpetuated, and in many cases exacerbated, by the technology that is supposed to solve the problem. This essay is a theoretical take on the problem at hand; the probable solutions laid out underscore that very notion: an ideological look at the problem.

The main reason for an ideological treatment is simply that no one knows what this technology is going to do to society. It is completely new, which is why there are so many sci-fi thrillers about artificially intelligent beings taking over the world. The industry is in its nascent phase, and that is precisely why the theoretical approach is so helpful: it moves the mind from an area of pure pragmatism to one that allows the reader to theorize about how humans develop bias, how it becomes engrained to the point of occurring unknowingly, and what humans can do to avoid the problem of exacerbated inequality altogether. Looking at the topic through the lens of artificial intelligence is useful because it shows how ideas and concepts that were once unrelated are now closer in relationship than they have ever been. Artificial Intelligence may increase inequality in many ways this essay has not even addressed; bias is one problem that can cause a plethora of unintended consequences for a modern society, just as it already has, and the scale of the problem will only increase with the advancement of these technologies and an unwillingness to approach the foundational issues with reconciliation in mind.

Further, this idea could be a subset of other potentially harmful developments in the realm of AI. There are other problems worth exploring even when looking solely at how bias shapes AI. Job automation is the first that comes to mind, because it too is a facet of artificial intelligence that may exacerbate inequality. Further research could also come from psychology: there is an increased need to understand how humans can combat bias in a healthy manner. Research in artificial intelligence development itself could produce a program that specifically looks for patterns of inequality in data, or one that provides recommendations for avoiding increases in inequality or bias. These solutions are much further away in time than the technology, so it is imperative that the discussion begin now, and that there are people dedicated to creating those solutions, predicting those outcomes, and actively working to ensure their developments do not disproportionately impact their users.

IV. Conclusion

The main takeaway at the end of this essay is the narrative it poses: the inequality of the past affects the data that gets used in algorithms, and from that dirty data come the issues of perpetuating and exacerbating inequality and bias. This can all happen knowingly or unknowingly, in the same way that discrimination and bias can occur.

What is so profoundly interesting about these issues with human rights and artificial intelligence, and about discussing the bias of the past, is that they can legitimately create a new form of inequality, one even harder to dismantle and untangle than what exists now. Even in a world with minimal racial and gender biases, or at least without overt expression of them, bias could be engrained into a model of artificial intelligence by the way it was created. There is a possibility that despite all of the positive change societies around the world are going through, the transformational battle to eradicate bias and inequality will only persist, again without it being obvious to those who are not negatively impacted by it.

Without properly vetting the data and ensuring that any bias and inequality present in it is acknowledged rather than intrinsically added, dismantling that bias becomes an even more daunting task. If a machine that is supposed to be smarter than any human in how efficiently it can process data created a situation in which it continually marginalized these groups of people, civil unrest would ensue. It would create another situation in which it takes decades to convince those who are not oppressed that the machine is causing undue harm to a certain population.

Amazing things have been done with the help of AI: lives have been saved, work has become easier, and entertainment is easier than ever, with algorithms that generate content the user will like. It is all remarkable, not to mention the advances that will happen before the year even ends. With the immense number of advancements to come, however, comes a plethora of consequences. This is not meant to drive the production of AI down by any means, because this technology has the power to be incredibly useful for human progress in the long run. But are humans ready for it? Until the reconciliation of the past happens, in whatever form that may be, this technology will continue to pose unfettered threats to modern society just as it has.

The point of this essay is that there are challenges. Humans have the capability to confront those challenges, and if society as a whole wants to progress, especially in technology, with minimal harm, those are problems that will have to be dealt with; otherwise, the idea of history always repeating itself, as cliche and untrue as it may be, may actually become a reality. Society and democracy are fragile things, but they are both resilient beyond belief. To prevent the dismantling of human rights, it is necessary to use that resilience now and to recognize when these technologies are causing problems. Human rights have clearly shown themselves to be fragile, and it is of the utmost importance that society take every step it can to protect them from harm and distress.

The arc of justice in history is never constant, nor always positive. It is in constant flux, moving back and forth, and it is up to society to keep pushing it toward justice every day. Technology can either push humanity in that direction, or humans can squander the opportunity. There are two roads, and we must take the one less traveled.
