New Era of AI-Generated Content and Its Impact on Society

Published in ReadyAI.org · 4 min read · Jan 21, 2024

By: Rooz Aliabadi, Ph.D.

Generating realistic yet artificial content has become remarkably easy, usually requiring nothing more than a mouse click or a few prompts, with no expertise in coding or AI. This has led to amusing outcomes, like a TikTok channel where a digitally created Tom Cruise, adorned in a purple robe, dances to Tiny Dancer for Paris Hilton, who holds a toy dog; the channel has attracted over 5 million followers. Nevertheless, this marks a significant shift for societies that traditionally viewed images, videos, and audio as nearly indisputable evidence of authenticity. Telephone scammers (as is already happening in many places worldwide, notably in China) can now replicate a loved one’s voice from just ten seconds of audio. Similarly, artificially produced versions of celebrities like Tom Hanks, Taylor Swift, and even Pope Francis are seen endorsing questionable products online. Furthermore, fake videos of political figures are proliferating (Argentina’s recent presidential election is a striking example), signaling a new era of digital deception. This is especially alarming as many democratic societies, including India and the US, enter election cycles in 2024.

A digitally created Tom Cruise on a TikTok channel

Historically, from the advent of the printing press to the rise of the internet, emerging technologies have repeatedly facilitated the spread of falsehoods and the impersonation of credible sources. But we humans have traditionally depended on specific cues to detect deceit; an excess of typos in an email, for instance, often hints at a phishing attempt. More recently, AI-generated images of people have frequently been given away by oddly rendered hands or unnatural eye movements, and fake videos or audio may exhibit synchronization issues. Nowadays, dubious content immediately triggers skepticism among those aware of AI’s capabilities.

Thousands of people become victims of phishing scams every year

But the challenge is that detecting fake content is becoming increasingly difficult. Fakes of every kind are improving as AI technology advances, strengthened by more powerful computing and ever-larger datasets. Embedding AI-powered fake-detection tools in web browsers to flag computer-generated content sounds promising, but unfortunately it is not that simple. As we hear in our discussions with AI educators worldwide, the ongoing battle between content creation and detection is tipping in favor of the creators of fake content. AI models will likely soon be capable of producing flawlessly realistic fakes: digital replicas indistinguishable from actual recordings of events. The most advanced detection systems might then find no flaws left to detect. And while models developed by regulated entities might be mandated to include watermarks (the EU AI Act and similar legislation point in this direction), that does not solve the problem posed by open-source models, which scammers can modify and run from their own laptops, making regulation and detection even more challenging.

It might become impossible to prevent a situation where any photo can be altered into explicit content by someone running an open-source model, possibly from their own home, and then used for blackmail, a concern already highlighted by law enforcement agencies and a topic I frequently address with schools when I discuss cyberbullying and its impact on children. There is also the possibility that someone could fabricate a video of a high-ranking government official, such as a president or prime minister, declaring a nuclear strike, causing a brief global panic. And the likelihood of con artists successfully masquerading as family members is set to increase.

I believe that, over time, societies will adapt to these deceptive practices. People will come to understand that rich content such as images, audio, or video recordings is not definitive proof of an event, any more than a drawing is. The era of open-source intelligence, which depends on crowdsourced information being trustworthy, may prove fleeting. The inherent credibility of online content will diminish, making the source of information as crucial as the content itself. If reliable sources can maintain secure identities through their URLs, email addresses, and social media platforms, the value of reputation and provenance will grow more significant than ever before.

This concept, though it might seem strange, aligns with historical experience: the period in which mass-produced content was widely trusted is the exception, not the rule. The emerging challenge of identifying AI’s subtle manipulations does not spell the end for generative AI and the ideas it enables. I believe that in the coming years the most successful fake content, whether images, voice, or video, will be the openly humorous and comical kind, but it will take all of us time to adjust to these changes.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.


ReadyAI.org

ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.