Originally published in California Publisher, Winter 2020.
by Jason Shepard
Disinformation is likely to play a central role in the 2020 presidential election, and California lawmakers are trying to combat falsehoods with new laws that prohibit distribution of so-called deepfake videos.
Deepfakes, a word added to the Collins Dictionary in 2019, are digitally altered videos depicting real people doing and saying things they did not do.
“Deepfakes distort the truth, making it difficult to distinguish between legitimate and fake media and more likely that people will accept content that aligns with their views,” wrote Assemblymember Marc Berman (D-Palo Alto), who authored the new laws that took effect on Jan. 1, 2020.
Assembly Bill 730 prohibits the distribution of “materially deceptive audio or visual material” that seeks to injure candidates and sway voters within 60 days of an election, while a second bill, Assembly Bill 602, provides a cause of action for victims of pornographic deepfakes. Experts say fake sex videos, in which people’s faces are digitally transposed into sex scenes, make up the majority of deepfakes to date.
The California laws follow action in Texas, where lawmakers passed a criminal law banning deepfakes within 30 days of an election.
Technological advances make it easier to produce sophisticated deepfakes. Because people tend to believe audio and video recordings, experts say deepfakes are more pernicious than other types of disinformation.
“Creators of deep fakes count on us to rely on what our eyes and ears are telling us, and therein lies the danger,” law professor Danielle Keats Citron said at a House Intelligence Committee hearing in June, the same month an anti-deepfakes bill was introduced in Congress. The DEEP FAKES Accountability Act, sponsored by New York Democrat Yvette Clarke, would make it a federal crime to knowingly distribute fake videos with the intent to deceive and harm.
The rise of deepfakes in mass communications comes as the “functioning of the marketplace of ideas is under serious strain” with the decline of traditional media and as “falsehoods spread like wildfire on social networks,” Citron testified. She said deepfakes have significant implications for individuals and society.
“Under assault will be reputations, political discourse, elections, journalism, national security, and truth as the foundation of democracy,” Citron said.
Policing the truth on social media
We are only beginning to see the implications of deepfakes and their ability to go viral on social media.
In one high-profile example in 2019, a video showing House Speaker Nancy Pelosi slurring her words while criticizing President Donald Trump during an interview spread quickly on social media, garnering millions of views within 48 hours. The video distorted reality by slowing down the audio and video to alter Pelosi’s appearance. Experts say it was a crude example of what’s likely to come.
The Pelosi video’s distribution raised questions about what social media companies should be doing to stop deepfakes. YouTube took down the video, while Facebook allowed it to be shared but labeled it as false.
While a federal law, Section 230 of the Communications Decency Act, gives internet service providers, such as Facebook, Twitter and YouTube, immunity from legal liability for their users’ content, internet companies are under increasing pressure to voluntarily adopt policies to stop falsehoods from spreading on their networks.
In October, the campaign of Democratic presidential candidate Elizabeth Warren paid for an ad on Facebook that claimed Facebook had endorsed Donald Trump for re-election. The stunt highlighted how easy it is for candidates to lie on the world’s most popular social network.
“Facebook changed their ads policy to allow politicians to run ads with known lies — explicitly turning the platform into a disinformation-for-profit machine,” Warren wrote on Twitter.
Facebook, for its part, has acknowledged the difficult line it is walking. It already bans and removes content that violates about 20 different categories, from pornography and copyright violations to depictions of self-harm.
But disinformation is a broader, and more difficult, category to police.
“While I worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true,” Facebook CEO Mark Zuckerberg said in an October speech at Georgetown University in which he defended Facebook’s embrace of broad free speech principles.
Still, under pressure, in January Facebook announced it would prohibit users from posting some deepfakes that deceive viewers.
Truth and the First Amendment
The First Amendment creates a high hurdle for government censorship, even for falsehoods. If California enforces its deepfakes law aggressively, the law is likely to be challenged as a violation of free speech.
AB 602 may be easier to defend against a First Amendment challenge, given that pornographic deepfakes generally involve individuals seeking to shame and harass private people with fake, public sex videos.
AB 730 focuses on political speech about public figures, something that the Supreme Court has long said is at the core of the First Amendment.
Individuals can use existing libel and privacy law to combat harmful false speech, but broader categorical bans on falsity are more problematic under First Amendment principles.
For example, in U.S. v. Alvarez in 2012, the Supreme Court struck down the Stolen Valor Act, a federal law that criminalized lying about receiving military honors. The Court’s plurality said the law did not satisfy the strict scrutiny needed to pass constitutional muster as a content-based regulation of speech.
In his controlling opinion, Justice Anthony Kennedy rejected the government’s argument that false speech receives no First Amendment protection.
“Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth,” Justice Kennedy wrote, referencing George Orwell’s 1984.
“The remedy for speech that is false is speech that is true. This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth,” Kennedy wrote.
The Alvarez precedent suggests laws like California’s AB 730 will be difficult to reconcile with First Amendment principles.
But the devil will be in the details. Based on the Alvarez precedent, California would have to show its deepfakes law was narrowly tailored to achieve a compelling government interest.
Much of Kennedy’s analysis in Alvarez focused on the harmless nature of Alvarez’s lies, suggesting falsity could be punished if it was “used to gain a material advantage.” And Kennedy spoke only for a plurality. Two justices would have applied a lower level of scrutiny, and three justices had no problem finding the falsehoods outside First Amendment protection.
AB 730 replaces an older law, called the Truth in Political Advertising Act, that prohibited campaigns from superimposing images of candidates with the intent to mislead voters.
The new law prohibits the distribution of “materially deceptive audio or visual media” done with “actual malice” and “with the intent to injure a candidate’s reputation or to deceive a voter in voting for or against a candidate.” The law also requires that the videos falsely appear to be authentic and that they cause reasonable people to have a “fundamentally different understanding” than the original, unedited footage.
The law exempts materials that are identified as having been manipulated, and exempts news organizations and videos that constitute satire or parody.
Candidates subject to deepfakes can seek injunctions barring distribution, and they can seek damages, attorney’s fees, and court costs.
The California News Publishers Association opposed the bill, calling an earlier version “unnecessary” and “ineffective” and saying it “places an unconstitutional burden on political speech, which lies at the core of the First Amendment’s protections.”
The American Civil Liberties Union also opposed the bill.
“Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech,” Kevin Baker, legislative director of the ACLU of California, wrote in a letter.
However, not all First Amendment scholars agree the laws are unconstitutional. Erwin Chemerinsky, dean of the UC Berkeley School of Law, told the Legislature he believed the bill was narrowly tailored to meet First Amendment standards.
“The Court has explained that the importance of preventing wrongful harm to reputation and of protecting the marketplace of ideas justifies the liability for the false speech,” Chemerinsky wrote. “AB 730 serves these purposes and uses exactly this legal standard. It prohibits deep fakes in the political realm, applies only where false images were knowingly or recklessly created and disseminated.”
A post-truth era
Deepfakes are just the latest problem in the post-truth era.
In his recent book “Enemy of the People,” veteran journalist Marvin Kalb laments the erosion of trust in “fact-based news” that has been driven by President Trump’s “political authoritarianism.”
President Trump has a prolific penchant for disinformation. In his first 1,055 days in office, the Washington Post’s Fact Checker documented 15,413 false or misleading claims in the president’s public statements.
While California has often been a legal trendsetter, it remains unclear whether its deepfakes laws will be an effective tool for keeping lies from swaying voters.
Not only can deepfakes make people believe lies, they also may cause people to stop believing the truth. Scholars have called this effect the “liar’s dividend”: once people know deepfakes exist, they can dismiss even authentic recordings as fake.
Case in point: President Trump suggested the Access Hollywood video of him bragging about assaulting women was a fake.
Kalb also quotes Leslie Stahl, the veteran CBS News journalist, who said in 2018 she once asked Trump why he kept publicly attacking the press.
“I do it to discredit you all, and demean you all, so that when you write negative stories about me, no one will believe you,” Trump said.
Jason M. Shepard, Ph.D., is chair of the Department of Communications at CSU Fullerton. His primary research expertise is in media law, and he teaches courses in journalism, and media law, history and ethics. Contact him at email@example.com or Twitter at @jasonmshepard.