Using AI to ensure regulatory compliance with the EU’s Digital Services Act

Hetal Bhatt
Published in Spectrum Labs
6 min read · Jan 30, 2023

In a little over a year, the European Union’s Digital Services Act (DSA) will apply in full across the bloc. Online platforms and services active in the EU will need to comply with the DSA by February 17, 2024, or face fines of up to 6% of their annual global turnover.

The DSA sets out an extensive list of requirements that online platforms must follow to comply. In a nutshell, its directives focus on three key areas:

  1. Implementing measures to ensure online safety.
  2. Providing ways to report illegal content.
  3. Publishing regular transparency reports to demonstrate compliance.

Since the DSA prioritizes the removal of illegal content, it is imperative for platforms to have reliable mechanisms in place to identify unlawful posts or behavior in their communities. While basic automation such as keyword lists can be effective against cut-and-dried violations like profanity or hate speech, more complex forms of harmful and illegal content can only be detected at scale with Contextual AI, or with scores of human moderators.

How Contextual AI makes online platforms safer and scales compliance

Complying with DSA regulations is a heavy lift. Any online platform that wishes to operate in Europe will need to make significant changes to how it monitors user safety and logs the safety-related actions taken within its community. If such systems aren’t deployed efficiently, they can become a massive expense, and failure to comply still carries fines of up to 6% of turnover.

With automation, these systems can be more efficient. With Contextual AI, they can be more efficient, easier to scale, and more effective.

Unlike basic content moderation tools like profanity filters or domain blacklists, Contextual AI can recognize more complex toxic behaviors like child grooming, radicalization, and solicitation of contraband. Finding and removing unlawful behavior is crucial to DSA compliance, as the regulation requires platforms to take down illegal content in a timely manner. Complex harmful behaviors often occur without any flagged keywords and can only be detected when the full context of the conversation is parsed:

Example Phrase: “Is your mom home?”

Example Context: Male, 29 years old, profile less than 30 days old, messaging a female, 11 years old, in a private chat at 3pm on a weekday; no prior chat history between users.

Contextual AI analyzes all nuances of user interactions to better detect harmful behavior.
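To make that concrete, here is a minimal sketch of how a message and its surrounding metadata might be flattened into the kind of signals a contextual model scores together. All of the names, fields, and thresholds are hypothetical illustrations, not Spectrum Labs’ actual feature set.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MessageEvent:
    text: str
    sender_age: int
    recipient_age: int
    sender_account_age_days: int
    is_private_chat: bool
    prior_messages_between_users: int
    sent_at: datetime

def context_features(event: MessageEvent) -> dict:
    """Combine the message text with its surrounding metadata.
    A keyword filter sees only `text`; a contextual model scores
    all of these signals together."""
    return {
        "text": event.text,
        "age_gap_years": event.sender_age - event.recipient_age,
        "recipient_is_minor": event.recipient_age < 18,
        "new_account": event.sender_account_age_days < 30,
        "private_chat": event.is_private_chat,
        "no_prior_history": event.prior_messages_between_users == 0,
        "weekday_afternoon": event.sent_at.weekday() < 5
        and 14 <= event.sent_at.hour < 17,
    }

# The phrase from the example above is harmless on its own; the
# surrounding context is what makes it a grooming signal.
event = MessageEvent(
    text="Is your mom home?",
    sender_age=29,
    recipient_age=11,
    sender_account_age_days=12,
    is_private_chat=True,
    prior_messages_between_users=0,
    sent_at=datetime(2023, 1, 30, 15, 0),  # 3pm on a Monday
)
print(context_features(event))
```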

With Contextual AI, online platforms can uncover a wide range of online toxicity and act on it quickly. Speed matters because the DSA and accompanying EU regulations require sites to remove illegal content within strict timeframes; terrorist content, for example, must be taken down within one hour of being flagged. Platforms using Contextual AI can moderate at scale by configuring automated actions for frequently detected types of content, leaving only the most severe cases to human review. When human moderators aren’t bogged down processing routine violations, they can review urgent cases more quickly.
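In practice, that routing logic can be as simple as a policy table keyed by behavior type. The sketch below uses hypothetical labels and confidence thresholds; the one-hour figure for terrorist content comes from the EU rules mentioned above, while everything else is a placeholder a platform would tune.

```python
from enum import Enum

class Action(Enum):
    AUTO_REMOVE = "auto_remove"
    HUMAN_REVIEW = "human_review"
    ALLOW = "allow"

# label -> (confidence needed for automatic removal,
#           statutory removal deadline in minutes, if any)
POLICY = {
    "terrorist_content": (0.90, 60),        # one-hour EU deadline
    "child_grooming":    (0.85, 24 * 60),
    "hate_speech":       (0.90, 24 * 60),
    "profanity":         (0.95, None),
}

def route(label: str, confidence: float) -> Action:
    """Auto-action routine, high-confidence detections; queue
    ambiguous ones for a human; let the rest through."""
    threshold, _deadline = POLICY.get(label, (1.01, None))
    if confidence >= threshold:
        return Action.AUTO_REMOVE
    if confidence >= 0.50:
        return Action.HUMAN_REVIEW
    return Action.ALLOW

print(route("terrorist_content", 0.93))  # Action.AUTO_REMOVE
print(route("child_grooming", 0.60))     # Action.HUMAN_REVIEW
```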

Contextual AI also continuously learns from real-world datasets and improves its detection capabilities. It draws on a data vault of user-generated content (UGC) from multiple platforms across the globe, constantly refining its detection models for different types of content and behavior; essentially, a Contextual AI system’s accuracy depends on the size of its data vault and the variety of UGC within it. And because it doesn’t require the frequent manual updates that keyword lists do, Contextual AI frees up company resources for higher-level tasks.

When it comes to detection, Contextual AI analyzes all of a user’s actions on a platform over an extended period of time. That includes signals like chat history, behavioral patterns, and other metadata that surface illicit activity which traditional automation would overlook or misinterpret. The result is a smarter, more nuanced approach to content moderation: more effective at detecting truly toxic behavior, with fewer humans subjected to reviewing severely toxic content.

Another key feature of Contextual AI is user reputation scoring. On most platforms, a small number of users create a disproportionate share of the harmful content. With user reputation scoring, moderators can remove the bulk of illegal and toxic behavior by pinpointing those users and taking action against them. This makes a platform’s overall moderation faster and more efficient, addressing harmful content at the source rather than chasing hundreds or thousands of individual posts.
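A toy version of the idea: track a per-user score that confirmed violations erode, and flag accounts that sink below a floor. The weights and thresholds here are invented for illustration, not Spectrum Labs’ actual scoring model.

```python
from collections import defaultdict

class ReputationTracker:
    """Toy per-user reputation score: confirmed violations erode it,
    and accounts below a floor get flagged for direct action."""

    # Hypothetical severity weights, not a real production model.
    WEIGHTS = {"profanity": 1, "hate_speech": 5, "child_grooming": 25}

    def __init__(self, start: float = 100.0):
        self.scores = defaultdict(lambda: start)

    def record_violation(self, user_id: str, label: str) -> float:
        self.scores[user_id] -= self.WEIGHTS.get(label, 1)
        return self.scores[user_id]

    def flagged_users(self, floor: float = 50.0) -> list:
        # Repeat offenders sink fast; acting on them removes harm
        # at the source instead of post by post.
        return [uid for uid, score in self.scores.items() if score < floor]

tracker = ReputationTracker()
for _ in range(11):
    tracker.record_violation("user_42", "hate_speech")
print(tracker.flagged_users())  # ['user_42']
```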

On average, Contextual AI parses 5x more content than traditional moderation efforts, vastly increasing coverage and drastically reducing response times. For platforms, it is the most comprehensive way to ensure a safe user experience in compliance with the DSA and similar regulations.

How Contextual AI creates more accurate data for transparency reports

Another major DSA requirement is for online platforms to publish regular transparency reports of operational data that show their continued compliance with the regulation.

Contextual AI can gauge the full spectrum of user behavior — both good and bad. Through features like user-level moderation and 360° analytics, Contextual AI gives the most thorough and exhaustive picture of user conduct on a platform. To comply with online safety regulations, it is of utmost importance for companies to know exactly what’s happening in their online communities.

DSA transparency reports must include an extensive set of metrics. A small sample of these includes the following:

All orders from EU member states to act against illegal content, including:

  • The specific type of illegal content.
  • Which member state submitted the order.
  • The average time it took to act on each order.

Complaints received through an online service’s own internal complaint system:

  • Why each complaint was filed.
  • Decisions taken upon each complaint.
  • The average time it took to act on each complaint.
  • The number of instances where original decisions were reversed.

For most platforms, this is not a process that can be done manually.

But systems backed by Contextual AI can methodically log every detection and each subsequent action taken against harmful content, whether it was handled proactively or spurred by an outside complaint. By feeding this backend data into a comprehensive analytics pipeline, online platforms can effectively automate (or “templatize”) the process of compiling their transparency reports and track the same metrics over time to show how their performance has improved.
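As a sketch of what that logging might look like, the snippet below defines a hypothetical audit record and computes one of the metrics listed above (average time-to-action per content category). The field names and record shape are assumptions for illustration, not a real schema.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import Optional

@dataclass
class ModerationRecord:
    """One row of the audit trail a transparency report is built from."""
    content_type: str            # e.g. "hate_speech"
    source: str                  # "proactive_detection" or "user_complaint"
    member_state: Optional[str]  # set when an EU member-state order applies
    received_at: datetime
    actioned_at: datetime
    decision: str                # e.g. "removed", "restored_on_appeal"

def avg_response_minutes(records: list, content_type: str) -> Optional[float]:
    """Average time-to-action for one content category, one of the
    metrics the DSA report requirements above call for."""
    deltas = [
        (r.actioned_at - r.received_at).total_seconds() / 60
        for r in records
        if r.content_type == content_type
    ]
    return mean(deltas) if deltas else None

record = ModerationRecord(
    content_type="hate_speech",
    source="user_complaint",
    member_state="DE",
    received_at=datetime(2024, 3, 1, 9, 0),
    actioned_at=datetime(2024, 3, 1, 9, 45),
    decision="removed",
)
print(avg_response_minutes([record], "hate_speech"))  # 45.0
```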

Along with helping to automate the process, Contextual AI also makes transparency reports more accurate.

With Contextual AI’s exact data and 360° analytics, companies can show which behaviors were most common on their platforms and how they responded to each category of harmful content. For example, a platform would want to show that it prioritized acting against high-risk behaviors (e.g., child grooming or hate speech) over simple profanity. By meticulously listing data points like response times for different types of content, a company’s transparency report would not just fulfill DSA requirements but also demonstrate its commitment to online safety.

Contextual AI is a powerhouse of data and analytics. That insight makes transparency reports more precise and gives online platforms a more accurate picture of their communities, so they can properly calibrate their overall strategy for ensuring user safety.

Additional resources on DSA regulatory compliance

For further information on the DSA’s requirements and how online platforms can approach regulatory compliance, check out Spectrum Labs’ previous posts on the topic.
