Facebook is using AI that simulates bad actors to make its real-world platforms more secure

The bots will pave the way for big changes to Facebook’s security

Cody DeBos
Artificial Intelligence in Plain English



Facebook is known for having a bot problem. Whether they’re tampering with elections or harassing everyday users, these bots wreak havoc on the social platform.

To address the problem, Facebook engineers are taking a unique approach: using artificial intelligence (AI) to simulate how bad actors behave in the real world. Notably, the experiment takes place on a parallel version of Facebook, which makes the simulation far more accurate and helps researchers test new ways of stopping harmful behavior.

Dub Dub

Facebook has affectionately named its simulator WW, pronounced “Dub Dub.” It is built on the social media giant’s real codebase and gets its name because it is a truncated version of WWW, the World Wide Web. Earlier this year, Facebook published a paper detailing how WW works, and it recently shared more details at a press roundtable.

The project is being led by Facebook engineer Mark Harman and the company’s AI department in London, England. Harman told journalists that WW is a highly flexible tool that may help Facebook limit a wide range of harmful behavior on its site.

In real life, bad actors typically start by combing through a user’s friend group to find potential targets and weak points. To replicate that behavior in the WW simulator, Facebook created a group of “innocent” bots and a set of “bad” bots that explore the network trying to find them.
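To picture that traversal, here is a minimal sketch of how a simulated “bad” bot might walk a friend graph outward from its own account, collecting potential targets within a few hops. The graph, the `find_targets` helper, and the hop limit are illustrative assumptions, not Facebook’s actual code.

```python
from collections import deque

# Hypothetical friend graph: user ID -> set of friend IDs.
FRIEND_GRAPH = {
    "scammer": {"alice"},
    "alice": {"scammer", "bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob"},
}

def find_targets(graph, start, max_hops=2):
    """Breadth-first walk outward from a bad bot's account,
    collecting every account reachable within max_hops."""
    seen = {start}
    targets = []
    queue = deque([(start, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for friend in graph.get(user, ()):
            if friend not in seen:
                seen.add(friend)
                targets.append(friend)
                queue.append((friend, hops + 1))
    return targets

print(find_targets(FRIEND_GRAPH, "scammer"))  # ['alice', 'bob', 'carol']
```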

Simulating user behavior with AI agents is common in machine learning research. However, the WW project is noteworthy because of where it is carried out: inside a codebase that replicates the real version of Facebook. The company calls the approach “web-based simulation.”

Harman says, “Unlike in a traditional simulation, where everything is simulated, in web-based simulation, the actions and observations are actually taking place through the real infrastructure, and so they’re much more realistic.”

It is worth noting that the WW simulation doesn’t visually resemble Facebook. Researchers aren’t watching bots move around a clean replica interface. Instead, WW records bot interactions as numerical data.
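As an illustration of what that numerical record might look like, the sketch below logs each bot action as a structured event instead of rendering it in an interface. The field names and the metric are assumptions made up for this example.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One simulated bot action, stored as plain numbers and IDs
    rather than rendered in a Facebook-like interface."""
    timestamp: float
    actor_id: int
    target_id: int
    action: str      # e.g. "friend_request", "private_message"
    succeeded: bool

log = [
    Interaction(0.0, actor_id=101, target_id=202, action="friend_request", succeeded=True),
    Interaction(1.5, actor_id=101, target_id=202, action="private_message", succeeded=False),
]

# Researchers could then aggregate logs like this into metrics,
# e.g. how often bad bots successfully reach innocent bots.
success_rate = sum(i.succeeded for i in log) / len(log)
print(f"success rate: {success_rate:.0%}")  # 50%
```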

Crucially, none of the bots in the simulation can interact with the real Facebook platform, even though they run on the same infrastructure.

“They actually can’t, by construction, interact with anything other than bots,” says Harman. That’s good news for those worried about AI “bad” bots leaking onto Facebook.
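One way to read “can’t, by construction, interact with anything other than bots” is that the isolation is enforced at the point where a bot addresses another account, rather than by filtering afterward. A hypothetical sketch of that idea:

```python
class BotSandboxError(Exception):
    pass

BOT_IDS = {101, 102, 103}  # accounts that exist only inside WW (hypothetical)

def send_message(sender_id: int, recipient_id: int, text: str) -> None:
    # The action travels through the same messaging code path a real user
    # would hit, but both ends of it must be simulated bots.
    if sender_id not in BOT_IDS or recipient_id not in BOT_IDS:
        raise BotSandboxError("WW bots may only interact with other bots")
    deliver(sender_id, recipient_id, text)  # stand-in for the real infrastructure

def deliver(sender_id, recipient_id, text):
    print(f"{sender_id} -> {recipient_id}: {text}")

send_message(101, 102, "hello")      # allowed: bot to bot
# send_message(101, 999, "hello")    # raises BotSandboxError: 999 is not a bot
```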

No Changes Yet

Although it is certainly an exciting premise, WW is still in the research stage, so the simulation hasn’t yet resulted in any real-life changes to Facebook. Harman and his team are running tests to determine whether the bots can accurately simulate human behavior.

Changes will only be made if the bots can do so with high enough fidelity to justify the adjustments. Even so, Harman believes the WW project will lead to changes on the platform by the end of the year.

Engineers are testing a variety of ways to stop the bad bots from harassing the innocent ones, such as limiting the number of private messages and posts a bot can send each minute.

Harman compared the strategy to the work of a city planner trying to reduce speeding on a busy road. The planner would run traffic-flow simulations and then experiment with additions like speed bumps and stop signs to see how they affect the results. The WW project lets Facebook engineers do essentially the same thing, but with user behavior instead of traffic.

“We apply ‘speed bumps’ to the actions and observations our bots can perform, and so quickly explore the possible changes that we could make to the products to inhibit harmful behavior without hurting normal behavior,” says Harman.
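A software “speed bump” like the per-minute message cap mentioned above can be modeled as a sliding-window rate limiter. This is a generic sketch of the mechanism, not Facebook’s implementation:

```python
import time
from collections import deque

class RateLimit:
    """Allow at most `limit` actions per `window` seconds per actor,
    acting as a software 'speed bump' on how fast a bot can message or post."""
    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.history: dict[int, deque] = {}

    def allow(self, actor_id: int) -> bool:
        now = time.monotonic()
        events = self.history.setdefault(actor_id, deque())
        # Drop timestamps that have aged out of the window.
        while events and now - events[0] > self.window:
            events.popleft()
        if len(events) < self.limit:
            events.append(now)
            return True
        return False

messages_per_minute = RateLimit(limit=5, window=60.0)
sent = sum(messages_per_minute.allow(actor_id=101) for _ in range(20))
print(sent)  # 5: the remaining 15 attempts are blocked by the speed bump
```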

He adds that the project can be rapidly scaled to explore more strategies and their effectiveness. Harman says, “We can scale this up to tens or hundreds of thousands of bots and therefore, in parallel, search many, many different possible… constraint vectors.”
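That parallel search over “constraint vectors” resembles a grid search: each bot population runs under one combination of limits, and engineers compare how much harmful behavior each combination blocks. In this sketch both the constraint values and the scoring function are invented placeholders:

```python
from itertools import product
from multiprocessing import Pool

# Hypothetical constraint vector: (messages/min cap, posts/min cap, requests/day cap).
MESSAGE_CAPS = [1, 5, 20]
POST_CAPS = [1, 5, 20]
REQUEST_CAPS = [10, 50]

def run_simulation(constraints):
    """Placeholder: in WW this would run a bot population under the given
    caps and measure harmful vs. normal behavior. Here we simply fake a
    score in which tighter caps block more harm."""
    msg_cap, post_cap, req_cap = constraints
    harm_blocked = 100 / (msg_cap + post_cap + req_cap)
    return constraints, harm_blocked

if __name__ == "__main__":
    vectors = list(product(MESSAGE_CAPS, POST_CAPS, REQUEST_CAPS))
    with Pool() as pool:                      # evaluate constraint vectors in parallel
        results = pool.map(run_simulation, vectors)
    best = max(results, key=lambda r: r[1])
    print(f"best constraint vector: {best[0]} (score {best[1]:.1f})")
```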

Plenty of Potential

While the WW project hasn’t yielded actionable results just yet, its potential is enormous. Analyzing how the bots interact with Facebook’s infrastructure could help engineers efficiently pinpoint weaknesses in the system.

Sometimes, the bots are trained to act in specific ways that replicate real-life behavior. At other times, they are given a goal and left to decide their own actions for achieving it. The latter method, a form of reinforcement learning in which the bots learn by trial and error rather than by imitation, can yield some unexpected behaviors.
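A minimal way to picture a goal-driven bot is a bandit-style learner: the bot is told only to maximize a reward and discovers on its own which action works best. The actions and payoff rates below are invented for illustration:

```python
import random

# Toy environment: the bot wants to maximize "scam success" and must
# learn which of three actions works best purely by trial and error.
ACTIONS = ["friend_request", "private_message", "wall_post"]
TRUE_SUCCESS_RATE = {"friend_request": 0.1, "private_message": 0.6, "wall_post": 0.3}

q = {a: 0.0 for a in ACTIONS}   # estimated value of each action
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                   # how often the bot explores at random

random.seed(0)
for step in range(5000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)        # explore a random action
    else:
        action = max(ACTIONS, key=q.get)       # exploit the best-known action
    reward = 1.0 if random.random() < TRUE_SUCCESS_RATE[action] else 0.0
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]  # running-average update

print(max(ACTIONS, key=q.get))  # expected: 'private_message', discovered rather than programmed
```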

As the bots work toward their goals, they sometimes behave in ways engineers don’t anticipate. When that unexpected behavior reveals a new way to create mischief, the underlying vulnerability can be addressed before real bad actors discover it.

“At the moment, the main focus is training the bots to imitate things we know happen on the platform. But in theory and in practice, the bots can do things we haven’t seen before,” says Harman. “That’s actually something we want, because we ultimately want to get ahead of the bad behavior rather than continually playing catch up.”

Harman went on to note that the researchers have already seen some unexpected behavior from the bots. However, he didn’t share any further details to avoid giving human scammers clues on how to exploit Facebook’s infrastructure.

The end goal, according to Harman, “is to find a mechanism that will thwart a real user that has a similar intention.”

Aside from helping to close these vulnerabilities, the WW project could take some pressure off human moderators. The simulation can also examine subjective situations, such as posts flagged for potential abuse.

For example, it can study how moderators handled similar complaints in the past and use that history to decide how Facebook’s standards should apply to new, borderline cases.
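One hedged way to picture learning from past moderator decisions is a text classifier trained on previously reviewed posts. The library choice (scikit-learn) and the toy data are illustrative assumptions, not Facebook’s moderation pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented corpus of previously moderated posts and the decisions
# human moderators made about them (1 = violates standards, 0 = fine).
past_posts = [
    "send me your password to win a prize",
    "click this link for free money",
    "happy birthday, hope you have a great day",
    "great to see everyone at the reunion",
]
past_decisions = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_posts)
model = LogisticRegression().fit(X, past_decisions)

# Apply the learned approximation of past decisions to a new complaint.
new_post = ["win free money, just send your password"]
prob = model.predict_proba(vectorizer.transform(new_post))[0, 1]
print(f"estimated violation probability: {prob:.2f}")
```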

While the end result of this process will still be “an approximation,” according to Harman, it will reduce the workload for human moderators.


Originally published at https://www.theburnin.com on July 23, 2020.
