AI Content Generation: Society’s Ticking Time Bomb

Jai Narayanan
SI 410: Ethics and Information Technology
Feb 18, 2023 · 7 min read

I still vividly remember the awe I felt the first time I tried ChatGPT. As I asked the chatbot a seemingly endless stream of random questions in an attempt to gauge its intelligence, I was filled with excitement and wonder as I watched it generate responses in real time that were eerily similar to those of a human being. It was like searching a question on Google, except the answer was generated on the fly. But as I delved deeper into my conversations with the bot, I noticed something peculiar: practically all of its responses referenced sources that didn't actually exist. Intrigued and confused, I took to the internet, where I stumbled across forums of others describing similar experiences with ChatGPT. They showed examples of the bot inventing its own sources, often linking to nonexistent URLs on the sites of research organizations and news outlets.

This discovery left me with a growing sense of dread. If this technology, in its infancy, could so easily generate forged responses that seemed accurate, what would it be capable of in the future? Would it be able to create its own fake research papers and cite those as sources instead? It seems to already be getting to that point, given that it was able to create fake scientific abstracts that fooled some scientists.

Now, I'm not trying to say AI is some doomsday technology that will project us into a Terminator-like reality. As a computer scientist, I am enthralled by the potential of AI and wholeheartedly believe in the incredible set of applications and advancements that the technology promises. But at the same time, as an information consumer, I can't help but feel anxious about the inevitable misuse of this technology to propel disinformation campaigns and create unnecessary conflict and mistrust. Because AI content generation will inevitably integrate into all of our information channels, we must proactively address the serious risks to the integrity and accuracy of the information we consume by implementing systems, at both the organizational and individual level, that address widespread AI-generated misinformation and disinformation.

What is misinformation, and why is it so bad?

Misinformation is false information that ends up being spread. It is unintentionally inaccurate and generally arises from a lack of due diligence in assessing the credibility of information. Disinformation is a type of misinformation that is deliberate: bad actors create and spread false information for a variety of reasons, ranging from monetary gain to political goals. While misinformation lacks malicious intent, it can be just as detrimental as disinformation.

The danger of misinformation and disinformation comes from their impact on our information ecosystem: the set of complex systems that enable the flow of information between media, consumers, curators, and sharers. Every time we read a news article or post or share on social media, we are communicating with other information consumers and providers. Information ecosystems are essential in keeping us up to date, and they have played an important role in the advancement of many progressive movements. Misinformation and disinformation threaten the health and integrity of these systems by mixing false and inaccurate information in with the real thing. Misinformation spreads like wildfire and often breeds distrust in a variety of systems, greatly affecting our ability to improve public health, address climate change, and establish stable political institutions.

While the spread of misinformation has always plagued society, the last decade has seen an explosion in the quantity and severity of false information. As technology improves, our avenues of communication multiply and become more efficient. Social media, news outlets, and messaging apps provide rapidly growing forms of communication that connect our information ecosystem at a global scale. While these new technologies have granted our societies the means to communicate more effectively, they also propel the reach of misinformation and disinformation to alarming levels. As Claire Wardle points out in Fake news. It's complicated., past attempts to influence public opinion relied on 'one-to-many' broadcast technologies, like radio or television, but social networks now consist of 'many-to-many' relationships, where each individual account can be connected to hundreds, if not thousands, of others. Combine this with social media engagement algorithms that allow propaganda to be targeted directly at specific users, and misinformation has the perfect vector to spread exponentially. On Twitter, for instance, false information travels faster than true stories.

Where does AI fall into the picture?

Making AI that can write lengthy articles, draw high-quality pictures, and replicate conversations has always been a sought-after goal in the field of artificial intelligence. In the past, the difficulty of building algorithms capable of creative tasks, combined with the high cost of development, kept the growth of the technology primarily with media outlets and research organizations. Over time, though, the combination of improved computing power, cheaper hardware, and a better understanding of the field has made this technology far more accessible. In just the past year, a wave of AI content generation technologies has been opened to the public, granting everyone the ability to use powerful, industry-grade tools like ChatGPT. This opens many positive avenues, but it also creates a glaring misinformation problem.

Generative AI, however smart it may seem, cannot inherently distinguish whether the premise it is given is accurate or inaccurate. If you wanted an article on why the Earth is flat, you could easily get the AI to write one. Combine this with the voice and video deepfaking technologies coming into existence, and anyone has the tools to create convincing but inaccurate information.

Media manipulation, as explained in The Media Manipulation Casebook, is the sociotechnical process whereby motivated actors leverage specific conditions or features within an information ecosystem in an attempt to generate public attention and influence public discourse through deceptive, creative, or unfair means. The 2016 American presidential election is a prominent example of this trend: during the election period, a Russian-backed disinformation campaign reached 126 million Americans with politically oriented misinformation stories via Facebook, likely having a profound impact on voter decisions. In this "information war", AI technology is the next big disinformation weapon.

Motivated actors, particularly those acting for political, propaganda, or monetary purposes, will be capable of creating inaccurate articles and posts that seem almost indistinguishable from real sources. If a bad actor wants to paint a political candidate in a bad light, they could simply use recordings of the candidate's voice to generate an AI voice that sounds nearly identical. Think this type of technology is still years in the future? Think again. For just $5, you can access software that takes a short sample of a person's speech and uses it to create a "clone", which can then be told to say anything. For example, take a look at this satirical State of the Union speech, where deepfaking technology was used to make Joe Biden say completely random things. While that was just for fun, it helps us imagine how this technology might be misused. In fact, we don't have to imagine much: the technology is already being misused, with Chinese state-aligned actors using AI to create deepfake news anchors that peddle Chinese propaganda on social media. It's eerie how realistic these AI-generated news anchors looked; they even fooled researchers into thinking they were paid actors until further investigation showed they were AI-generated. As time progresses, bad actors will get more and more creative with how they use this technology, allowing them to spark disinformation campaigns that erode the public's trust in our societal institutions by faking information and spreading it far across the information landscape.

So what can be done about this?

As hopeless as I've made this situation seem, it isn't all doom and gloom. There are things that can be done to combat misinformation and protect the integrity of our information ecosystem, even from the advanced misinformation that artificial intelligence can generate. But for any countermeasures to be successful, they need to be implemented now. The solution involves every group in the ecosystem taking responsibility and accountability for the information that travels through it.

First and foremost, the makers of the easily accessible AI technologies being used to generate this content have a responsibility to integrate fact checking into their systems. Before ChatGPT gives a response, or deepfaking software is used to generate a voice, the generation algorithm should perform some level of fact checking on what it produces. Apart from the algorithm creators, it's also the responsibility of information distributors, like social media apps and news sources, to have a fact-checking system on their platforms. They should tag blatant misinformation and limit the reach of those posts (a rough sketch of what that could look like follows below). They should also target the dissemination sources of misinformation by addressing the use of bots to spread information at large scales.
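To make the "tag and limit reach" idea concrete, here is a minimal sketch of where such a step could sit in a platform's posting pipeline. Everything in it is an assumption for illustration: the check_claims placeholder, the thresholds, and the Post fields are hypothetical, not any real platform's or fact-checking service's API.

```python
from dataclasses import dataclass

@dataclass
class FactCheckResult:
    claim: str
    verdict: str       # hypothetical labels: "false", "unsupported", "supported"
    confidence: float  # 0.0 - 1.0

@dataclass
class Post:
    post_id: str
    text: str
    warning_label: str | None = None
    reach_multiplier: float = 1.0  # 1.0 = normal distribution in feeds

def check_claims(text: str) -> list[FactCheckResult]:
    """Placeholder for a real fact-checking backend (human reviewers,
    a claim-matching database, or a classifier). Returns nothing here."""
    return []

def moderate(post: Post) -> Post:
    """Tag likely misinformation and reduce how widely it is recommended,
    rather than silently removing it."""
    findings = check_claims(post.text)
    confidently_false = [f for f in findings
                         if f.verdict == "false" and f.confidence >= 0.9]
    if confidently_false:
        post.warning_label = "Disputed: rated false by fact checkers"
        post.reach_multiplier = 0.1   # sharply limit algorithmic amplification
    elif any(f.verdict == "unsupported" for f in findings):
        post.warning_label = "Unverified claim"
        post.reach_multiplier = 0.5
    return post
```

The sketch only shows where a check would plug in; the hard part is the check_claims step itself, which is exactly the difficulty the next paragraph gets at.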

Addressing the problem at the information creator and provider stages is important, but at best only somewhat effective. As Claire Wardle points out, it's hard to do things like identify bot accounts and decide what counts as misinformation versus genuine information. Platforms will always face an uphill battle, as it's simply easier for bad actors to craft misinformation that slips past their checks.

As a result, the majority of the work in preventing the spread of misinformation must come from us, the individuals who consume and spread information. It is our responsibility to fact-check for ourselves and to hold ourselves accountable for not mindlessly sharing false information with our networks. We shouldn't fall into the trap of reposting something simply because we agree with it and it confirms our beliefs. Instead, we should maintain some level of skepticism and think critically before making decisions that spread information through the ecosystem.

At the end of the day, we as consumers are the main vector through which misinformation spreads, and by practicing smart consumption and thinking critically when something seems false, we can help keep our information landscape clean.
