Expert’s Corner with Lauren Tharp from Tech Coalition

Checkstep · Published in Checkpoint · 6 min read · Oct 13, 2022

For this month’s Expert’s Corner we had the pleasure of interviewing Lauren Tharp from the Tech Coalition.

The Tech Coalition is a global alliance of leading technology firms that have come together to combat online sexual abuse and exploitation of children. Because member companies share the same goals and face many of the same challenges, the Coalition believes that collaborating to develop and scale solutions offers the most promising path towards a global solution to the problem.

Lauren Tharp is a Technical Program Manager based in Richmond, VA. She focuses on helping companies adopt technologies to combat online child exploitation and abuse, and on facilitating collaboration among a diverse set of industry members. Prior to joining the Tech Coalition, she worked as a Product Leader in the podcasting space, where she learned the importance of Trust and Safety through the lens of brand safety. The answers included are not intended to represent any member of the TC or affiliated partners.


1. What is the mission of the Tech Coalition? What is the idea behind it?

The Tech Coalition facilitates the global tech industry’s fight against the online sexual abuse and exploitation of children. We are the place where tech companies all over the world come together on this important issue, recognizing that this is not an issue that can be tackled in isolation. We work relentlessly to coach, support, and inspire our industry Members to work together as a team to do their utmost to achieve this goal.

Every half second, a child makes their first click online — and the tools we all value most about the internet — our ability to create, share, learn and connect — are the same tools that are being exploited by those who seek to harm children. In this increasingly digital world, the technology industry bears a special responsibility to ensure that its platforms are not used to facilitate the sexual exploitation and abuse of children. Child protection is one place where our Members do not compete, but rather they work together to pool their collective knowledge, experience, and advances in technology to help one another keep children safe.

An example of how our work comes together is tech innovation, which is also the work I’m most passionate about. Our Tech Innovation working group exists first and foremost to increase our members’ technical capabilities to combat online CSAM. That means that even the smallest startups have access to the same knowledge and tools for detecting and preventing CSAM as the largest tech companies in the world. We help members adopt existing technologies, such as technology to find known CSAM images and videos. We also fund pilots to innovate on new solutions, such as machine learning to detect novel CSAM and reduce the need for human review. And we work closely with THORN, who have been invaluable partners in developing technology, as well as other subject matter experts throughout the industry to push innovation for our members.

2. With more and more children spending time online, online child safety should always be a priority for social media platforms. Have you seen specific trends / patterns in terms of child abuse? Are they getting harder to detect?

I’d say there are two major factors making it harder to readily detect online child sexual exploitation and abuse (OCSEA). The first is access. Many of us spend a significant amount of time online, where we engage not only with trusted family and friends but also with strangers. This has largely been a success: think about cold outreach for a new job or finding peers who share a niche hobby. But the tradeoff is that this access has also made it easier for bad actors to make contact with or groom children online. Recent studies have shown that nearly 40% of children have been approached by adults whom they believed were trying to “befriend and manipulate them”. So I think we will continue to face the challenge of how to safeguard children online as bad actors subvert protective measures at an increasingly rapid pace.

The second factor is new content. Online CSAM has often taken the form of photos or pre-recorded videos, and so detection technology was developed around those formats. As users adopt new technologies such as live streaming, podcasting, direct messaging platforms, and gaming channels, the detection tools have to be trained for those use cases. The difficulty is in keeping up with the pace of that change, and in anticipating where abuse might occur next.

3. When thinking about moderation, human moderation alone is unable to deal with the scale of the issue. What types of AI do you see innovating in this space to help companies keep up with increasing volumes?

Human moderators are such an important part of keeping the Internet safe and free of child abuse imagery, but as noted, the scale of the problem requires innovative solutions (not to mention the psychological toll endured by many content moderators). To address these challenges, many companies use hashing technologies across photos and videos to detect and remove known CSAM. Hashing works by creating a digital “fingerprint” of photos or videos that have been deemed CSAM by a human moderator. These hashes are then stored in various databases that can be used across industry to automatically detect when the content is shared. The high degree of accuracy means less human moderation on content that we already know is violating.
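To make the fingerprinting idea concrete, here is a minimal sketch of hash-based matching. The hash set, file paths, and function names are placeholders invented for illustration; this is not the design of any specific vendor system.

```python
import hashlib
from pathlib import Path

# Placeholder digests; real deployments pull hash sets from shared industry
# databases, and those are typically perceptual hashes rather than plain
# SHA-256 digests.
KNOWN_HASHES: set[str] = {
    "0" * 64,  # dummy value for illustration only
}


def fingerprint(path: Path) -> str:
    """Compute an exact-match digest of the file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def is_known(path: Path) -> bool:
    """True if this exact file has already been confirmed as violating."""
    return fingerprint(path) in KNOWN_HASHES
```

Note that production systems generally rely on perceptual hashing, so re-encoded or lightly edited copies still match; a cryptographic digest like the one above only catches byte-for-byte copies.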

While hashing is excellent at preventing the spread of known CSAM, it cannot detect new photos or videos. This is where classifiers come into play. Classifiers use machine learning to automatically detect whether content falls into various categories, such as nudity, age, sexual acts, and more. By combining these categories, companies can make quick decisions about which content should be escalated for review, versus content that does not meet the definition of CSAM.
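As a simplified sketch of how such classifier outputs might be combined into an escalation decision, consider the following. The category names, scores, and threshold are assumptions made for illustration, not a real moderation policy.

```python
from dataclasses import dataclass


@dataclass
class ClassifierScores:
    """Hypothetical per-category probabilities from upstream ML models."""
    nudity: float
    minor_present: float
    sexual_act: float


# Illustrative threshold; real systems tune this per platform and per
# category, and route borderline cases to trained human reviewers.
ESCALATE_AT = 0.8


def should_escalate(scores: ClassifierScores) -> bool:
    """Escalate for human review when high-risk signals occur together."""
    return (
        scores.minor_present >= ESCALATE_AT
        and max(scores.nudity, scores.sexual_act) >= ESCALATE_AT
    )


# Example: high "minor present" and "nudity" scores together trigger escalation.
print(should_escalate(ClassifierScores(nudity=0.93, minor_present=0.88, sexual_act=0.12)))
```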

4. Organizations like NCMEC, Thorn and IWF help with detection of child sexual abuse material (CSAM), but what about child grooming? This is often harder to detect. How can platforms better prepare themselves?

Grooming is a complex topic for many reasons, including the fact that there is no standard definition of what constitutes grooming, and it can vary by platform, language, culture, and other factors. In short, grooming is all about context. A classic example is the phrase “Do you want to meet on Friday?” which could be appropriate on a dating app, but is potentially inappropriate on a children’s gaming platform, again, depending on the context.

As a result, an organization’s approach to grooming detection must evolve over time to accommodate new trends. We typically recommend that companies start with keyword lists, such as the CSAM Keyword Hub, which the Tech Coalition developed in partnership with THORN. The hub pulls known terms and slang related to online grooming and child sexual abuse so that organizations can begin to filter terms and adjust their content moderation strategies.
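The sketch below shows the general shape of term-list filtering. The terms and loading format are placeholders, not the actual contents or schema of the CSAM Keyword Hub.

```python
import re

# Placeholder terms; a real deployment would load a vetted list such as
# the CSAM Keyword Hub rather than hard-coding examples like these.
WATCHLIST = {"example-term-1", "example-term-2"}

PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(term) for term in WATCHLIST) + r")\b",
    re.IGNORECASE,
)


def flagged_terms(text: str) -> list[str]:
    """Return any watch-listed terms found, for routing to moderators."""
    return PATTERN.findall(text)


print(flagged_terms("nothing to see here"))           # []
print(flagged_terms("contains example-term-1 once"))  # ['example-term-1']
```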

Increasingly, we see a shift towards using AI to detect grooming by analyzing text (such as in public conversation channels) and by noticing behavioral signals (such as an adult user randomly befriending 100 minors in the span of an hour). The Coalition is working on training grooming classifiers for specific platform use cases, and continues to fund research to understand perpetrators’ grooming strategies.
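To illustrate the kind of behavioral signal described above, here is a minimal sketch of a burst-contact check. The event shape, field names, and threshold are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical event shape: (requester_is_adult, target_is_minor, sent_at).
FriendRequest = tuple[bool, bool, datetime]

WINDOW = timedelta(hours=1)
MAX_MINOR_CONTACTS = 20  # illustrative threshold, not a recommended value


def suspicious_burst(requests: list[FriendRequest], now: datetime) -> bool:
    """Flag an adult account that contacts many minors within one hour."""
    recent = [
        r for r in requests
        if r[0] and r[1] and now - r[2] <= WINDOW
    ]
    return len(recent) >= MAX_MINOR_CONTACTS
```

In practice a rule like this would only be one signal among many, feeding a reviewer queue rather than taking automated action on its own.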

5. Smaller platforms tend to have limited resources for dealing with online harms, be it child safety or hateful content. What advice would you give them?

My primary advice would be to reach out! The Tech Coalition has a robust set of resources, mentorship opportunities, innovative tooling, webinars, and much more to help even the smallest companies get started. Additionally, many tech companies like Google and Meta offer free tooling and support to ensure platforms of any size can start preventing abusive content from being shared. But if I could offer a true first step, it would be to start learning about the scale and nature of the problem as early as possible. If you allow content to be uploaded or conversations to occur, please consider child safety within your design and product flows to ensure children can use your platform without fear of harm.

If you would like more information or are exploring options for AI enhanced moderation for your platform, contact us at contact@checkstep.com. Alternatively, you can also visit our website www.checkstep.com.
