AI On Trial

Leyla Khasiyeva
1001Epochs Publications
4 min read · Jul 5, 2021

We can see the traces and handiwork of machines and complicated algorithms in more and more places every day. With the help of constantly advancing technology, we can understand and make reasonably accurate predictions about many things, notably consumer behavior and preferences. These advancements have raised ethical and legal concerns for many people, and a major part of those concerns relates to data usage. What is the source of the data? Is it biased? What can AI do with that data? Where does that data end up being used? These are some of the still-unsettled questions around the topic.

First, let’s address the question: what is AI after all? “AI describes the capacity of a computer to perform the tasks commonly associated with human beings. It includes the ability to review, discern meaning, generalize, learn from past experience and find patterns and relations to respond dynamically to changing situations” (lexology.com). Look at all the things an artificial intelligence system is capable of doing: discerning meaning, generalizing, finding patterns… But how does it do all of that?

Photo by Possessed Photography on Unsplash

As the term “machine learning” suggests, a machine learns by reviewing and discerning meaning from the data it is fed. One of the most mind-blowing recent fruits of machine learning is GPT-3 (Generative Pre-trained Transformer 3). GPT-3 can hold a conversation with the humans interacting with it, and can even impersonate a historical figure of our choice. For example, the system can pretend to be Einstein, giving users the chance to chat with the impersonated figure. But how does it do that? Basically, the machine has ‘learned’ who Einstein is: his contributions, his theories, even his personality. But where does it learn all this from? From its training data, which spans hundreds of billions of words scraped from the web, digitized books, and Wikipedia. Yes, you heard me right: more text than any human being could read in a lifetime. I don’t know about you, but this fact alarmed me the first time I read about it. Whether it should concern us, though, depends more on another question: what can AI do with all that information?
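To make that “impersonation” less magical, here is a minimal sketch of what talking to an Einstein persona through OpenAI’s API looked like around GPT-3’s release. The prompt wording, engine choice, and generation settings are my own illustrative assumptions; the impersonation is just the model continuing text in the style the prompt describes.

```python
# A minimal sketch, assuming you have an OpenAI API key, using the
# Completion endpoint as it existed in the GPT-3 era.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have API access

# The "impersonation" is just conditioning: we describe the persona in
# the prompt, and the model continues the conversation in that style.
prompt = (
    "The following is a conversation with Albert Einstein.\n"
    "Human: What inspired your theory of relativity?\n"
    "Einstein:"
)

response = openai.Completion.create(
    engine="davinci",      # a GPT-3 model available at the time
    prompt=prompt,
    max_tokens=100,
    temperature=0.7,
    stop=["Human:"],       # stop before the model writes our next line
)

print(response.choices[0].text.strip())
```

Nothing in the model “knows” it is Einstein; it has simply seen enough text about him to continue the conversation convincingly.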

Photo by Markus Spiske on Unsplash

To address such concerns early on, the European Commission’s High-Level Expert Group on Artificial Intelligence released the Draft Ethics Guidelines for Trustworthy AI in December 2018. It states that the development and deployment of AI systems should respect fundamental rights and applicable regulation, and should adhere to core ethical principles. Moreover, a system should be as technically robust as possible, to prevent any unintentional harm to human beings.

These guidelines try to guarantee some reliability and trust in AI; however, they do not answer all the questions it raises. Since AI can now learn, find patterns, and discern meaning on its own, does that make it responsible for its actions? If something unfortunate happens as a result of an AI’s work, is the programmer responsible, or the developer, or the AI itself? No worries, I am not expecting an answer from you. After all, even the authorities in Arizona struggled to decide whom to hold accountable when an Uber self-driving car hit and killed a pedestrian, Elaine Herzberg, after its software failed to recognize her as one.

Photo by C Joyful on Unsplash

We said that for an AI to perform its tasks, it has to learn from the data fed to it. No matter how cool the results are, AI systems are still computers built by humans, and they depend on programmers and developers for many things, especially in their earlier stages. Those earlier stages include data curation and labeling, which is also done by humans. So what does that mean? Our computer is what it eats. In other words, an AI only learns from the data a person chooses to expose it to; it cannot perform independently of that data. So anyone can make an AI serve their own purposes by picking and choosing the data to feed it, as the toy sketch below illustrates.
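To see how much the curator’s choices matter, here is a toy sketch with entirely fabricated data: the same learning algorithm, asked the same question, reaches opposite conclusions depending on which examples it was fed.

```python
# A toy illustration (not any production system): identical learners,
# differently curated datasets, opposite conclusions.
from sklearn.linear_model import LogisticRegression

# Feature: hours of screen time. Label: "productive" (1) or not (0).
# Curator A only keeps examples where more screen time looked helpful...
X_a = [[1], [2], [6], [8]]
y_a = [0, 0, 1, 1]
# ...while curator B only keeps examples where it looked harmful.
X_b = [[1], [2], [6], [8]]
y_b = [1, 1, 0, 0]

model_a = LogisticRegression().fit(X_a, y_a)
model_b = LogisticRegression().fit(X_b, y_b)

# Same algorithm, same question, opposite answers:
print(model_a.predict([[7]]))  # [1] -- "productive"
print(model_b.predict([[7]]))  # [0] -- "not productive"
```

Neither model is “wrong” by its own lights; each faithfully learned exactly what its curator showed it.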

Since we feed AI data from and about our community, it can end up mirroring the behaviors of our society. COMPAS, a machine learning algorithm developed to predict the likelihood that a person will commit another crime, is a good example. COMPAS, having been trained on past court and arrest records as the reference for its predictions, disproportionately labeled Black defendants as high risk while assigning white defendants a much lower chance of reoffending. AI, being a good student, dutifully generalized the data and found the patterns in the data set presented to it, reproducing and mirroring the racial bias that already exists in our society.
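Here is a deliberately simplified sketch of that mechanism, with fabricated numbers rather than anything from the real COMPAS system: two groups behave identically, but one group’s offenses are recorded more often, and the model faithfully learns the recording bias as “risk.”

```python
# A simplified sketch of bias absorption, using made-up data, not the
# real COMPAS inputs or methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two groups with *identical* underlying reoffense behavior...
group = rng.integers(0, 2, n)        # group 0 or group 1
reoffended = rng.random(n) < 0.3     # same 30% base rate for everyone

# ...but the historical labels record *re-arrests*, and group 1 was
# policed more heavily, so their reoffenses were recorded more often.
caught_prob = np.where(group == 1, 0.9, 0.5)
label = reoffended & (rng.random(n) < caught_prob)

X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, label)

# The model assigns group 1 a much higher "risk" for the same behavior:
print(model.predict_proba([[0], [1]])[:, 1])  # roughly [0.15, 0.27]
```

The model never saw anyone’s actual behavior, only the biased record of it, and that is exactly what it learned to predict.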

Now, considering all these concerns, should AI be granted legal personhood, which would consequently give it a set of legal rights and obligations? Should an AI be able to hire a lawyer, sue, or enjoy freedom of speech? Or will public concern stop companies from going too far with their AI systems?
