SB890: Content Takedowns for Victims of Crime

Alan P. Kyle
Published in Golden Data
3 min read · Mar 14, 2020
AP Photo/Rich Pedroncelli

Because information spreads online at massive scale, lawmakers regularly try to curb the spread of certain harmful content. California State Senator Pan aims to do this for victims of crimes who appear in online content. SB890 was introduced on January 27, 2020 and has a long way to go before it becomes law. Given its significant shortcomings, the bill likely won't make it very far. Its problems include technological challenges, unintended effects, and a likely federal preemption.

Bill Summary:

This bill would require a social media internet website, as defined, to remove a photograph or video recording uploaded or posted to the internet website by the perpetrator of a crime that depicts the crime within 2 hours after receiving a request from a victim of the crime that was depicted in that photograph or video, or from the family of a victim if the victim has died, to remove that content, as specified. The bill would impose a civil penalty of $1,000 for each day that the photograph or video recording remains available on the internet website upon a social media internet website that fails to comply with these provisions. The bill would specify that the penalty imposed pursuant to these provisions would be deposited in the Restitution Fund in the State Treasury to be used, upon appropriation by the Legislature, for indemnification of victims of crimes.

Technological challenges:

Two hours to review and take down content is not feasible. Content online, especially newsworthy content, is often replicated quickly. Removing those copies is a technical challenge because the content may be intentionally or unintentionally altered and slip past automated content moderation systems. The more newsworthy the crime, the more copies there will be, and the harder it becomes to remove them all.
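To see why altered copies are hard to catch, here is a minimal Python sketch (an illustration, not any platform's actual pipeline) of a naive exact-hash blocklist. Changing even a single byte of a file produces a completely different hash, so a re-encoded, trimmed, or watermarked re-upload sails past an exact-match check; platforms lean on perceptual hashing instead, which tolerates some alteration but is still far from perfect.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the bytes of a flagged video file (hypothetical content).
original = b"...bytes of the flagged upload..."

# A re-upload that has been trimmed, re-encoded, or watermarked may differ
# by only a few bytes, but its cryptographic hash is entirely different.
altered = original + b"\x00"

# Exact-match blocklist built from the hash of the flagged upload.
blocklist = {sha256_digest(original)}

print(sha256_digest(original) in blocklist)  # True:  exact copy is caught
print(sha256_digest(altered) in blocklist)   # False: altered copy slips past
```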

To use an extreme case as an example: in the first 24 hours, Facebook removed roughly 1.5 million uploads of the Christchurch shooting video, with about 1.2 million copies blocked automatically at the point of upload. A strong effort, but it still means that some 300,000 copies slipped through the cracks. Effective and timely content moderation can be extremely difficult.

Unintended effects:

How do you verify who a victim is? What if a victim is not identifiable in the piece of content? How do you verify that a piece of content was uploaded by the perpetrator? Who determines whether a crime was committed? These questions almost certainly cannot be answered within the proposed two-hour window.

Without a strict verification standard, this law would have a chilling effect on user-generated content as companies think twice about what they host. The companies best positioned to hit that two-hour mark are the tech giants with all the resources in the world. The law would disproportionately burden smaller companies because of the overhead of building the systems and hiring the staff needed to comply. The more complex the regulation, the more entrenched big tech becomes.

Federal challenge:

This bill conflicts with Section 230 of the Communications Decency Act because it seeks to hold interactive computer service providers liable for user-generated content. Save for exceptions like federal criminal law and intellectual property claims, Section 230 provides immunity from liability for content that users post. Inconsistent state laws are expressly preempted by subsection (e)(3): “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”
