Expert’s Corner with Andrew Johnston, CEO of Recluse Laboratories

Checkstep · Published in Checkpoint · 4 min read · May 30, 2022

Our expert this month is ex-FBIer Andrew Johnston. Andrew is the CEO and co-founder of Recluse Laboratories. Drawing on twelve years of experience in cybersecurity, he has worked with clients both in an incident response capacity and in proactive services. In addition to his private sector work, Andrew served in the Federal Bureau of Investigation’s Cyber and Counterterrorism divisions, where he performed field work and provided technical expertise to criminal and national security investigations.

Andrew Johnston is the CEO and Co-Founder of Recluse Laboratories.

1. What was your motivation behind starting Recluse Labs?

We started Recluse Labs to solve problems endemic to the threat intelligence industry. Threat intelligence, whether conducted in private industry, academia, or government, relies on a set of highly skilled individuals. These individuals are tasked with gaining access to adversarial communities, building believable identities, and extracting actionable information to form intelligence. Such individuals are few and far between, and the effect is that threat intelligence is often incredibly limited.

We’re passionate about cybersecurity and data science, and we believed that combining the two could give us far greater reach than organizations significantly larger than ours. Since then, we’ve been working with industry peers, law enforcement, and intelligence experts to develop an automated, AI-enabled platform for collecting and analyzing intelligence.

2. Are there specific patterns that online platforms should be mindful of when tracking terrorist groups?

One of the more interesting patterns is the mobility of many terrorist groups from one platform to another. In the past few years, there has been plenty of media coverage of up-and-coming social media platforms being swarmed with profiles promoting ISIL and other terrorist groups. Oftentimes, this creates significant challenges for a nascent platform, especially one that aims to have less restrictive moderation than some of the major players. It is worth noting that a strategy of aggressive banning doesn’t appear to be effective; terrorists have become accustomed to treating profiles as disposable and regularly create new accounts.

Consequently, the best approach to tracking and disrupting terrorist use of a platform has to occur at the content level. Naturally, simple solutions such as banning a set of words don’t scale well, especially for a platform that caters to a multilingual audience. Likewise, human-centric approaches simply can’t scale to handle the volume and velocity of content that a healthy social media platform generates on a regular basis. Multilingual machine learning solutions are really the only answer to this problem that can both meet the scale and effectively identify novel terrorist content. We’ve dedicated a lot of research to developing terrorist content recognition systems that can meet the needs of social platforms, governments, and researchers.
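To make that concrete, here is a minimal sketch of what a model-based, multilingual filter could look like, assuming the open-source Hugging Face transformers library and a publicly available XLM-RoBERTa model fine-tuned for zero-shot classification. The model name, labels, threshold, and example posts are illustrative assumptions, not the system Recluse or Checkstep actually deploys.

```python
# Minimal sketch: one multilingual classifier instead of per-language keyword lists.
# Assumes the Hugging Face `transformers` library; model, labels, and threshold
# are illustrative, not a description of any production system.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # multilingual NLI model, used as an example
)

candidate_labels = ["terrorist propaganda", "news reporting", "benign discussion"]

posts = [
    "Join us, brothers, and answer the call to fight.",      # English
    "Únete a nosotros, hermanos, y responde al llamado.",    # Spanish
]

for post in posts:
    result = classifier(post, candidate_labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    # Route high-risk items to human review rather than auto-removing them.
    if top_label == "terrorist propaganda" and top_score > 0.7:
        print(f"FLAG for review ({top_score:.2f}): {post}")
    else:
        print(f"OK ({top_label}, {top_score:.2f}): {post}")
```

The point of the sketch is the design, not the specific model: a single classifier scores content in many languages and escalates borderline cases to moderators, rather than relying on brittle word lists.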

3. Quite recently, a known terrorist group, the Taliban, took control of the Afghan government. What should a platform’s stance on it be?

This is a hard question to navigate, as the answer will likely vary greatly depending on the platform’s philosophy. There is merit to the argument that the Taliban are a significant geopolitical force and that banning their content as a matter of policy prevents people from seeing the whole story. Moreover, hosting such content gives other users an opportunity to criticize, fact-check, and analyze it, which could enable users who would otherwise be influenced by the propaganda to see the counterargument on the same screen.

Conversely, hosting such content means maintaining very clear guidelines on when it crosses the line into being unacceptable. Left unchecked, those groups are known to publish disgusting or violent content that could alienate users and pose legal risks. Platforms then find themselves in the position of having to define what constitutes “terrorism” and merits removal. An improper definition could have side effects that impact benign marginalized groups.

In contrast, simply banning content promoting terror groups such as the Taliban keeps the rules clearer, but doesn’t fully solve the problem. There are innumerable terror groups, and what precisely constitutes terrorism (and who is guilty of committing it) can be highly culturally specific.

Given that Recluse’s mission heavily involves building and deploying artificial intelligence to combat terrorism, we had to consider this question early on. We settled on targeting groups where there is a global consensus that they are terror groups. This definition ensures that we are never beholden to the political zeitgeist of any given country. Although this higher burden of consensus could inhibit us from targeting some groups we may personally find abhorrent, it ensures that we can operate without having to derive a “terrorist calculus” to evaluate every group.

4. Child grooming and trafficking can be hard to track online. The New York Times did a piece on how Facebook often categorizes minors as adults when it is unable to determine their age. What are your thoughts on this?

Identifying someone’s age is an incredibly difficult task; we all know someone who looks half their age. Consequently, even the best categorization system is going to have some level of error, regardless of how that system is implemented. That said, in the case of child exploitation, false positives and false negatives have significantly different impacts. Identifying potential child exploitation is paramount, whereas the penalty of a false positive is primarily additional workload for the moderation team. Of course, there is a balance to be had: a system with excessive errors is nearly as useless as having no system at all.

Thankfully, we don’t have to rely on our ability to identify minors as our sole tool for deplatforming child predators. Identifying and targeting language and behavior consistent with abusers can enable platforms to attack the problem at its source. Fused with algorithms designed to identify minor users and child sexual abuse material, such techniques can better protect vulnerable groups even in the face of imprecise categorization.
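As an illustration of that kind of signal fusion, the sketch below combines hypothetical per-account scores (estimated likelihood the user is a minor, a grooming-language score, and a known-content hash match) into a single escalation decision, with a deliberately low threshold because a missed case is far costlier than extra moderator workload. All names, weights, and thresholds are illustrative assumptions, not a description of any platform’s actual pipeline.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Hypothetical per-account risk signals, each already scaled to [0, 1]."""
    minor_likelihood: float         # output of an age-estimation model
    grooming_language_score: float  # output of a grooming-language classifier
    csam_hash_match: bool           # match against a known-content hash list


def needs_human_review(signals: AccountSignals,
                       review_threshold: float = 0.4) -> bool:
    """Fuse independent signals into one escalation decision.

    The threshold is deliberately low: a false positive costs moderator
    time, while a false negative can mean missed child exploitation.
    """
    if signals.csam_hash_match:
        # A hash match is treated as conclusive and escalated immediately.
        return True
    # Weighted combination; grooming language weighs more than age alone,
    # so imprecise age categorization does not decide the outcome by itself.
    fused = 0.4 * signals.minor_likelihood + 0.6 * signals.grooming_language_score
    return fused >= review_threshold


# Example: an account that looks adult but uses grooming-like language is
# still escalated, because the combined score crosses the low threshold.
print(needs_human_review(AccountSignals(0.2, 0.8, False)))  # True
```

The design choice the sketch is meant to show is exactly the one described above: no single imperfect signal, such as age estimation, is asked to carry the whole decision.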

If you would like more information or are exploring options for AI-enhanced moderation for your platform, contact us at contact@checkstep.com. Alternatively, you can visit our website, www.checkstep.com.
