Defining Harms Without Stifling Startups

By Rachel Wolbers

In the world of Downton Abbey, a sideways look from the Dowager Countess was enough to cause everlasting reputational harm.

In today’s highly connected world, a wayward glance has rapidly been replaced by an often hostile Internet. Just last week the British government released a new white paper that sets out plans for a broad package of measures intended to keep citizens safe online. In the U.K., nearly nine in ten adults are online, and an even greater share of children and young adults use the Internet, with 99 percent of 12- to 15-year-olds active online.

Determining how to hold Internet platforms responsible for online harms is one of the biggest issues facing regulators today.

The EU has already enacted problematic Internet regulations, first with the General Data Protection Regulation (GDPR) on privacy, and more recently with the EU Copyright Directive, which requires companies to build upload filters to proactively block copyrighted materials. These vaguely crafted laws have disadvantaged startups with small legal teams. As the U.K. and the U.S. craft content moderation proposals, it is critical that they consider the impact their frameworks will have on the promotion of innovation and competition.

Startups are not broadly opposed to increased regulation. In fact, clear and predictable legal frameworks allow small companies to scale faster and boost investment in innovative products. To increase startup activity, lawmakers worldwide should aim to craft sensible regulations that continue to protect Internet safe harbors, for three reasons: defining harms is likely to result in ambiguous language, technology can’t erase all harmful content, and increased government bureaucracy will disproportionately impact small companies.

The U.K.’s recent white paper tells us what we already know — defining “online harms” is tricky. Content like child pornography, terrorist recruitment videos, and illegal drug sales is harmful, but the U.K. proposal shifts blame onto Internet companies for harms such as disrupted sleep schedules and childhood obesity. Unless we’re going to condemn the Internet for all societal problems, we need to find a reasonable middle ground.

In the U.S., while both chambers of Congress are addressing harms, the contradictory manner in which they are going about the effort demonstrates how difficult finding balance will be. Last week, the House heard testimony surrounding the rise of white nationalism on the Internet. At the same time, the Senate held a hearing about the censorship of conservatives online.

Drawing that line is also difficult, even for sophisticated technical content moderation tools.

Facebook testified to the House Judiciary Committee last week that the company is able to remove 99 percent of child exploitation and terrorist propaganda before users report it, but only 52 percent of hate speech. This is because it is much easier for a platform to identify content that is clearly harmful, like pornography or disinformation, than it is to identify more subjective content like bullying or intimidation. It is imperative that lawmakers understand this distinction if they expect technology to solve the issue. Computers are bad at context, and, until they get better, content moderation will be imperfect.

The U.K. Secretary of State for Digital, Culture, Media and Sport and the Home Secretary propose setting up a new regulatory body to determine whether companies have violated a new “duty of care” for online platforms. The proposed penalties for breaching this ambiguous duty of care range from fines to site blocking to jail time for company executives. As we have seen in the privacy debate, creating new and potentially conflicting legal regimes for online platforms chills innovation and leaves small companies with the largest compliance burden. Additionally, increased legal exposure for merely knowing that harmful content exists will deter startups from seeking it out and removing it.

Because the definition of harm is ambiguous, technology is imperfect, and compliance burdens fall hardest on small companies, it is imperative that lawmakers protect reasonable safe-harbor provisions for well-intentioned startups.

In the U.S., startups rely on Section 230 of the Communications Decency Act, which allows companies to actively moderate user-generated content and protects them from frivolous litigation. This “Good Samaritan” protection gives startups a clear framework to proactively monitor for and remove harmful content.

Holding online companies liable for harmful content, without a liability safe harbor, will mean that fewer companies curate content on their platforms to ensure the safety of users. If you think big platforms are making society worse, the answer should be competition. Startups are best positioned to stop harmful content and build innovative platforms, but only if governments protect safe harbors, like Section 230, in future regulation.

Rachel Wolbers is policy director of Engine, a policy, advocacy and research organization that supports startups as an engine for economic growth.