AI for Bad: The Fat Line Between Parody and Propaganda

E.R. Burgess
Credtent on Content
5 min read · Aug 5, 2024


© 2024 Christina Burgess

If you missed this story, your news outlet failed you. Elon Musk, the CEO of too many massive companies, retweeted a piece of manipulated media that portrayed Vice President Kamala Harris as making false and insulting statements about herself and President Biden. I won’t restate them here.

What did Mr. Musk say of the video, which contained racist and sexist comments? “This is amazing” with a laughing face emoji. I should note that the individual who made this AI deepfake video labeled it as a ‘parody,’ a detail Mr. Musk didn’t bother to mention when he retweeted it to his millions of followers.

A huge amount of engagement ensued (the coin of the realm these days), including California Governor Gavin Newsom announcing his plan to ban this kind of manipulated media: media made to look like a legitimate ad. Mr. Musk simply said, “Parody is legal in the US.”

Deepfake Dangers

The carelessness with which Mr. Musk shared this video is pretty standard among casual users of social media. People see something they like and they share it, often with a pointless addition like the juvenile one added here by the world’s richest man. Does the casual user think about the kind of damage they are doing? Not just to the people depicted in these videos, but to society itself, when they spread what is simply disinformation.

Photo by Hartono Creative Studio on Unsplash

AI tools have made this kind of voice manipulation easy to produce, and social media platforms need to change how they handle the spread of such content. The serious issues here are myriad:

  • Disclosures Should Retweet — Why didn’t the parody label on the original tweet carry over through the retweet? Any labeling, community notes, and similar disclosures could and should carry over with every share; otherwise, they are largely meaningless in the chaos of the Twitter/X platform. Without enforcing this rule, it’s far too easy for bad actors to game the system by retweeting problematic content to distance it from the appropriate warnings and labels (see the sketch after this list).
  • Company Leaders Should Not Break Company Policy — Does Mr. Musk know that his post violates his own company’s policies? Does he care? For that matter, how is Twitter/X’s safety team (if anyone is left on it) supposed to address this issue when the owner is one of the worst offenders? Keep in mind, Mr. Musk insists that Twitter/X is a good source for timely news coverage. How can that be true if its owner breaks its rules because he thinks a deepfake is funny?
  • Manipulating Media for Political Purposes Is Not Funny — Is Mr. Musk so unserious a person that he does not see how his action manipulates voters by validating lies and disinformation? Claiming that ‘it’s just a parody’ does not change the fact that he is A) misrepresenting another person to make them look bad and B) sending a clear message to bad actors that deepfakes are welcome on his platform. Go ahead, post something puerile, and it may even get celebrated and reshared by someone with millions of followers.
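To make the first point concrete, here is a minimal sketch of labels carrying over on a reshare. The Post and Label shapes and the repost() helper are illustrative assumptions, not any real platform API; the point is simply that disclosures travel with the content instead of being dropped at the reshare boundary.

```typescript
// Illustrative sketch (assumed types, not a real platform API): labels attached
// to an original post survive every reshare, so a repost cannot be used to
// strip a "parody" or "manipulated media" disclosure.

type Label = { kind: "parody" | "manipulated-media" | "community-note"; note?: string };

interface Post {
  id: string;
  authorId: string;
  text: string;
  labels: Label[];    // disclosures attached to this post
  repostOf?: string;  // id of the original post, if this is a reshare
}

// Create a reshare that inherits every disclosure from the original post.
function repost(original: Post, authorId: string, comment = ""): Post {
  return {
    id: crypto.randomUUID(),
    authorId,
    text: comment,
    labels: [...original.labels],  // carry the labels over, unchanged
    repostOf: original.id,
  };
}

// Example: the original carries a "parody" label; the reshare keeps it.
const original: Post = { id: "1", authorId: "creator", text: "...", labels: [{ kind: "parody" }] };
const share = repost(original, "reposter", "This is amazing");
console.log(share.labels); // [ { kind: "parody" } ]
```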

The “Parody” Excuse

Fair-minded people can see that the individual who made this video did not have humor in mind when they created it. When you watch parody sketches on Saturday Night Live, or watched comedian Sarah Cooper act out former President Trump’s statements during the 2020 presidential election as if a young Black woman were saying them, you are given the context that comedy is the goal, because it is obviously not the person being parodied who is speaking.

That is parody. No one will think Sarah Cooper is Donald Trump. More importantly, Trump actually made those strange comments about preferring electrocution to ‘getting near the shark.’

In the video Mr. Musk shared, using AI to make the Vice President insult herself might play to the dark humor of her opposition’s die-hard fans, but this is not parody in intention or outcome.

The video is instead cut to look like a traditional political ad, using real footage and a deepfake voice of the Vice President. Whether intended or not, the effect is to confuse voters and sow chaos in a political system where the media cycle moves so quickly that the average voter cannot keep up with the flood of information force-fed to them every day.

Dr. Galen Buckwalter, the head of Credtent’s Credibility Council, says that deepfake technology requires context to understand intention. “If someone sends you an unlabeled deepfake of themself, that’s their prerogative. If someone sends you an unlabeled deepfake of someone else, that’s unethical. If someone sends you an unlabeled deepfake intended to manipulate your opinion, that’s propaganda.”

Twitter/X Needs Better Ethical Standards

Meet Credtent, a Public Benefit Corporation devoted to a scientific approach to credible content and to ensuring creators control their content in the Age of AI. Deepfakes like the one Mr. Musk shared are an ideal example of why we need to reform how content is allowed to spread online. One of our primary goals is to make thoughtful recommendations to industries about how they should handle AI tools and use them in an ethical fashion.

To that end, we call upon CEO Linda Yaccarino to address Twitter/X’s lack of meaningful policies for handling deepfakes on the platform.

Disclosure is so important that Credtent has introduced Content Origin Badges to provide every type of creator a simple way to label their content based on their intentional use of AI in their creation process.

© Credtent 2024
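As a rough illustration, a badge like this can travel with the work as simple, machine-readable metadata. Only the Human-Created label is named in this article; the other origin values and the ContentOriginBadge shape below are assumptions for the sketch, not Credtent’s actual badge specification.

```typescript
// Illustrative sketch only: a content origin declaration expressed as metadata.
// "Human-Created" appears in this article; the other origin values are assumed.

type ContentOrigin = "Human-Created" | "AI-Assisted" | "AI-Generated";

interface ContentOriginBadge {
  origin: ContentOrigin;  // the creator's declared use of AI
  declaredBy: string;     // who made the declaration
  declaredAt: string;     // ISO timestamp of the declaration
}

// Record a creator's declaration of how the content was made.
function declareOrigin(creator: string, origin: ContentOrigin): ContentOriginBadge {
  return { origin, declaredBy: creator, declaredAt: new Date().toISOString() };
}

// Example: an author declares an article Human-Created, and the badge can be
// embedded or displayed wherever the content is published.
const badge = declareOrigin("E.R. Burgess", "Human-Created");
console.log(`${badge.origin}, declared by ${badge.declaredBy}`);
```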

While this is hardly a solution for every problem Mr. Musk created with his casual retweet, at least these disclosures will help people see that the content they encounter online is not always a reflection of reality; sometimes it is something algorithmically cooked up with AI, possibly with a bad actor behind it.

These labels are not a panacea for every possible confusion, but they give platforms a way to do the right thing: provide people with the tools to understand the media they experience in this fast-paced Age of AI.

Please help yourself: Credtent’s Content Origin Badges are free to use and clearly defined here and on our website. Credtent also helps companies determine appropriate labeling through our Content Origin Certification service, a high-tech but human-driven means of judging whether content can be labeled Human-Created. For more information about that option, please contact us at sales@credtent.org.

Hear the story of an early Credtent creator just below.

This content is certified Human-Created Content. © Credtent 2024

E.R. Burgess

CEO/Founder - Credtent.org, AI Product Leader, Data-Driven Content Marketer, Writer, Game & Gamification Designer (board & video)