Social Media Manipulation in the Era of AI

AI has opened a potential propaganda gold mine. Social media platforms should redouble their efforts to counter spoofed accounts.

RAND
Sep 5, 2024


Digital illustration of pixelated hands manipulating digital profiles. Images by fullvector/Adobe Stock and Whale Design/Getty Images

Li Bicheng never would have aroused the interest of RAND researchers in his early career. He was a Chinese academic, a computer scientist. He held patents for an online pornography blocker. Then, in 2019, he published a paper that should have raised alarms worldwide.

In it, he sketched out a plan for using artificial intelligence to flood the internet with fake social media accounts. They would look real. They would sound real. And they could nudge public opinion without anyone really noticing. His coauthor was a member of the Chinese military’s political warfare unit.

Li’s vision provides a glimpse of what the future of social media manipulation might look like. In a recent paper, RAND researchers argue that such a system would pose a direct threat to democratic societies around the world. There is no evidence that China has acted on Li’s proposal, they noted, but that should not give anyone any comfort.

“If they do a good enough job,” said William Marcellino, a senior behavioral scientist at RAND, “I’m not sure we would know about it.”

China has never really been known for the sophistication of its online disinformation efforts. It has an army of internet trolls working across a vast network of fake social media accounts. Their posts are often easy to spot. They sometimes appear in the middle of the night in the United States — working hours in China. They know the raw-nerve issues to touch, but they often use phrases that no native English speaker would use. One recent post about abortion called for legal protections for all “preborn children.”

Li Bicheng saw a way to fix all of that. In his 2019 paper, he described an AI system that would create not just posts, but personas. Accounts generated by such a system might spend most of the time posting about fake jobs, hobbies, or families, researchers warned. But every once in a while, they could slip in a reference to Taiwan or to the social wrongs of the United States. They would not require an army of paid trolls. They would not make mistakes. And little by little, they could seek to bend public opinion on issues that matter to China.

In a nation as hyperpolarized as the United States, the demand for authentic-sounding memes and posts supporting one controversial side or another will always be high. Li’s system would provide a virtually never-ending supply.

His paper had a touch of science fiction to it when it appeared in a Chinese national defense journal in 2019. Then, three years later, an AI model known as ChatGPT made its public debut. And everything changed.

ChatGPT and other AI systems like it are known as large language models (LLMs). They ingest huge amounts of text (around 10 trillion words, in the case of GPT-4) and learn to mimic human speech. They are “very good at saying what might be said,” RAND researchers wrote, “based on what has been said before.”

You could, for example, ask an LLM to write a tweet in southern-accented American English about its favorite NASCAR driver. And it could respond: “Can’t wait to see my boy Kyle Busch tearing up the asphalt at Bristol Motor Speedway. He’s a true legend. #RowdyNation.”
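
To make concrete how little effort such a post takes, here is a minimal sketch of that same request issued programmatically. The OpenAI Python SDK and the model name are illustrative assumptions, not tools named by the researchers; any capable LLM would serve.

```python
# Minimal sketch: generating a persona-styled post with one API call.
# The SDK and model name are illustrative, not from the RAND paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Write a tweet in southern-accented American English "
                "about your favorite NASCAR driver."
            ),
        },
    ],
)

# Print the generated post, e.g. something like the example above.
print(response.choices[0].message.content)
```

The point is not any single post but the marginal cost: wrapped in a loop with varied prompts, a script like this could churn out thousands of distinct, idiomatic posts per day for pennies.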

LLMs can respond to jokes and cultural references. They can engage users in back-and-forth debates. Some multimodal models can generate photo-quality images and, increasingly, audio and video. If a country like China wanted to create a social-media manipulation system like Li Bicheng described, a multimodal LLM would be the way to do it.

Freshmen from Huaiyin Normal University perform a military training demonstration in Huai’an, Jiangsu Province, China, October 22, 2022. Photo by Cynthia Lee/Alamy Stock Photo

“The evidence suggests that parts of the Chinese government are interested in this,” said Nathan Beauchamp-Mustafaga, a China expert and senior policy researcher at RAND. The rise of LLMs, he added, “doesn’t necessarily make it more likely that China will try to interfere in the 2024 U.S. elections. But if Beijing does decide to get involved, it would very likely make any potential interference much more effective.”

China is not the only U.S. adversary exploring the potential propaganda gold mine that AI has opened. Earlier this summer, investigators took down a sophisticated Russian “bot farm.” It was using AI to create fake accounts on X, the social media platform formerly known as Twitter. Those accounts had individual biographies and profile pictures and could post content, comment on other posts, and build up followers. The programmers behind the effort called them souls. Their purpose, law enforcement officials said, was to “assist Russia in exacerbating discord and trying to alter public opinion.”

But China provides a useful case study, in part because its disinformation efforts seem to be getting bolder. U.S. officials believe fake Chinese accounts tried to sway a handful of congressional races in the 2022 midterms. Taiwan officials have also accused China of producing a flurry of fake news videos just before their presidential election this year. Some featured AI-generated hosts — including, in one strange case, Santa Claus.

Pro-China accounts have spread AI images of world leaders screaming and crying. They claimed last year that the U.S. had started a devastating wildfire in Hawaii by testing a “weather weapon.” They used AI-generated photos, showing a hurricane of fire and smoke bearing down on houses and high-rises, to draw attention to those posts. Another meme from a suspected Chinese account showed the Statue of Liberty with a torch in one hand and a rifle in the other. But, coming from way back in 2023, it was easier to spot as a fake than some more-recent AI images. The statue had seven fingers on its right hand.

The most recent U.S. threat assessment notes that China is demonstrating a “higher degree of sophistication” in its influence operations. And it warns: “The PRC (People’s Republic of China) may attempt to influence the U.S. elections in 2024 at some level.”

“AI is soon going to be everywhere,” Beauchamp-Mustafaga said. “The Chinese government has not publicly embraced Li Bicheng’s vision, of course; it denies doing anything like this at all. But we have to assume that AI manipulation is ubiquitous, it’s proliferating, and we’re going to have to learn to live with it. That’s a really scary thing.”

Social media platforms like Facebook and X should redouble their efforts to identify, attribute, and remove fake accounts, researchers concluded. Media companies and other legitimate content creators should develop digital watermarks or other ways to show that their pictures and videos are real. Federal regulators should at least weigh the pros and cons of requiring social media companies to verify their users’ identities, much like banks do.
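
To make the watermarking idea concrete, here is a minimal sketch in which a publisher signs a media file’s bytes so that platforms can verify the content is unaltered. The keypair workflow is an illustrative assumption, not a deployed standard; real provenance efforts such as C2PA embed signed metadata and are far more elaborate.

```python
# Minimal provenance sketch: a publisher signs an image's bytes, and
# anyone holding the public key can check the file was not altered.
# Illustrative only; not a description of any deployed watermark standard.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair and sign the image bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"...raw image file bytes..."  # placeholder content
signature = private_key.sign(image_bytes)

# Platform side: verify the signature before labeling content authentic.
try:
    public_key.verify(signature, image_bytes)
    print("Provenance verified: bytes match the publisher's signature.")
except InvalidSignature:
    print("Verification failed: content was altered or is unsigned.")
```

A scheme like this proves only that the bytes came from a given publisher unchanged; it says nothing about whether the content is true, which is why the researchers pair it with account verification and public skepticism.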

But all of those steps are going to take time and require trade-offs. Getting them right will require an open and informed public conversation. That needs to start now, the researchers wrote, not after “another foreign (or domestic) attack on the U.S. democratic process in the 2024 election.” In the meantime, they added, the best defense is likely going to be a heavy dose of skepticism from anyone who ventures onto social media.

“Human beings have spent hundreds of thousands of years interacting with our environment through our senses,” Marcellino said. “Now those senses can be fooled.”

“If you get steamed up over something,” he added, “if you see it and just get immediately outraged, you should probably stop and ask yourself, ‘Am I maybe taking the bait?’”

This originally appeared on rand.org on August 29, 2024.

RAND

We help improve policy and decisionmaking through research and analysis. We are nonprofit, nonpartisan, and committed to the public interest.