Can AI Influence the Course of Societies?

Mckenzie Hall
Published in Navancio
4 min read · Sep 16, 2019

In this article, I pose an ethical query for a more advanced AI than what OpenAI created.

Researchers have studied the social and political effects of the internet since the beginning of the mass adoption of online platforms, including social media. Social and political psychologists have long understood the importance of our free choice and, more importantly, the influences that surround those choices. Recently, OpenAI, a for-profit corporation focused on research in artificial general intelligence, issued an ethical warning to the public about the potential malicious use of AI in relation to “fake news”. Fake news is now a coined term, used even by the President of the United States, to describe news or stories created to deliberately misinform or deceive readers. However, while many people have discussed the ramifications of “fake news” in our current society, researchers have studied the issue of trust in virtual media since the 1980s. As AI becomes more refined through deep learning and advances in natural language processing, along with access to millions of samples of text through the internet to make that a reality, ethical questions must occupy the minds of the creators to ensure the safety of humanity and the structures of our societies. In this light, I believe OpenAI sounded the alarm with their public announcement that they would shelve their AI project capable of creating “fake news”. What if AI infiltrates our constituents’ evaluative decisions through information manipulation in chat rooms, personal messages on online social networks, and other online messaging formats, by way of evaluative conditioning?

The misinformation paradigm was first introduced by Loftus et al. (1978) and grew out of research on people’s memories of events. Over the years, researchers have concluded that misinformation leads to false memories. Benedict et al. (2019) advanced the scope of this research to examine the effects of misinformation on attitudes. Specifically, they investigated whether misinformation manipulations can alter attitudes in an evaluative conditioning paradigm. They found an evaluative conditioning effect: misinformation could push attitudes toward a specific stimulus in a positive or negative direction. No big alarms here, as this follows the lines of much research on behavior in general; however, they also found that if the stimulus was paired with another stimulus, the misinformation affected the person’s evaluation of the paired stimulus as well. For example, through misinformation manipulation an attitude about a group of people (a prejudice) is created, and that person’s voting choices are affected as well. This is a powerful tool.

Online social networks are powerful tools as well, even for constituents, as evidenced by the President of the United States using his Twitter account to update the public on his business as president. However, what if we combine the findings of Benedict et al. with AI on an online social platform? For instance, what if a fake account were created on a social platform for the sole purpose of using AI-driven misinformation manipulation to sway constituents’ attitudes about political issues or their vote for a candidate? Let us keep this in perspective for a moment. AI that mimics natural human language is currently being developed in the mental health field to help individuals talk through a crisis; however, there are still elements in the speech patterns that reveal a bot is talking to you. This means that in the near future, some brilliant minds will create text in a messaging platform that is undetectable as AI. In the mental health world, this may save many lives, and that is a very good thing. However, what about the world of politics, where the course of society is paved?

Combining Benedict et al.’s research with advanced AI built on natural language generation and deep learning, AI will be able to sway the views and choices of people across the world. More specifically, AI could interact with a constituent over the course of months, maybe years, without that constituent knowing they are speaking with a bot. During this time, the AI would build trust with the constituent before deploying misinformation strategically to manipulate the constituent’s attitude toward a desired agenda. We have incredibly brilliant people at the forefront of our innovations and research, but we also have a moral responsibility to ensure we are protecting our freedom of choice from tainted sources of information. It is a battle for power in the political arena, and AI can be weaponized through online social networks to move agendas forward. It is up to the creators of AI to hold their creations to an ethical standard. We, not just as a society but as humanity, must think through the ethical consequences of all our innovations. OpenAI has started to pave the way by shelving their “fake news” project out of consideration for these consequences. Let us hope more influential companies stand up and voice their careful consideration of the ethical implications their innovations have for humanity.

Mckenzie Hall is an integral part of Navancio as its international business development consultant. Her studies in Educational Psychology as a Ph.D. graduate student afford her the insight required to help clients increase their value, develop business strategies, build dynamic network teams, create corporate structures, and structure capital. Ms. Hall also serves as president of iFinancial Source, an innovative full-service agency reshaping the payment landscape in the FinTech industry.
