Media Guide: Factcheck.me
How does Factcheck.me work?
We use the Twitter Streaming API to collect a simple random sample of tweets that match the topic we have been requested to track. We collect the images and URLs from these tweets and include them in our dashboard. Then we analyze the collected accounts using our proprietary machine learning model to determine whether they are exhibiting “bot-like behavior”.
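To give a concrete sense of what a collection pipeline like this can look like, here is a minimal sketch in Python against the Streaming API’s filter endpoint, assuming the tweepy library (3.x interface). The tracked terms, sampling rate, and credentials are placeholders for illustration, not Factcheck.me’s actual configuration or code.

```python
import random
import tweepy

# Placeholder configuration -- not Factcheck.me's real settings.
TRACK_TERMS = ["example topic"]  # the topic we have been requested to track
SAMPLE_RATE = 0.1                # keep roughly 1 in 10 matching tweets

CONSUMER_KEY = "..."             # Twitter API credentials (placeholders)
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_SECRET = "..."

class TopicListener(tweepy.StreamListener):
    """Pulls images, URLs, and the posting account from matching tweets."""

    def on_status(self, status):
        # Down-sample the stream so what we keep approximates a simple
        # random sample of the tweets that matched the tracked topic.
        if random.random() > SAMPLE_RATE:
            return
        entities = status.entities
        urls = [u["expanded_url"] for u in entities.get("urls", [])]
        images = [m["media_url_https"] for m in entities.get("media", [])]
        # In a real pipeline these would be stored for the dashboard and the
        # account queued for analysis; here we simply print them.
        print(status.user.screen_name, urls, images)

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
stream = tweepy.Stream(auth=auth, listener=TopicListener())
stream.filter(track=TRACK_TERMS)
```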
Machine Learning is a form of Artificial Intelligence that is useful for getting a machine to discover trends in data. Using a training set of high-confidence bot accounts and verified Twitter accounts, we were able to build a model that analyzes Twitter accounts and identifies patterns that would be impossible for a human to notice. Our program collects and analyzes samples over a rolling 24-hour window.
(To learn more about our model, click here)
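The model itself is proprietary, but the general workflow described above, training a classifier on labeled examples of high-confidence bot accounts and verified human accounts, can be sketched with a generic off-the-shelf classifier. The random-forest choice, the feature count, and the synthetic data below are assumptions made purely for illustration; they are not the actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: each row is one account's feature vector
# (join date, follower count, tweeting rate, ...); each label is 1 for
# a high-confidence bot account and 0 for a verified human account.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))       # placeholder features
y = rng.integers(0, 2, 1000)     # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# A generic ensemble classifier stands in for the proprietary model.
model = RandomForestClassifier(n_estimators=200)
model.fit(X_train, y_train)

# Scoring a new account yields a probability that it exhibits
# "bot-like behavior" over the rolling analysis window.
new_account = rng.random((1, 20))
print("P(bot-like) =", model.predict_proba(new_account)[0, 1])
```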
What does it mean that an account is exhibiting “bot-like behavior”?
There are hundreds of inputs for each profile that help the model classify it as exhibiting political bot-like behavior. Join date, follower count, tweeting rate, retweeting rate, and tweet text are just a handful of the traits the model looks at. These inputs can be found on a user’s public Twitter profile.
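As a rough illustration of how a few of these traits can be derived from the fields Twitter returns for a public profile, consider the sketch below. The function name, the particular features chosen, and the assumption that a sample of recent tweets is available alongside the user object are ours for illustration; the real model uses hundreds of inputs.

```python
from datetime import datetime, timezone

def profile_features(user, recent_tweets):
    """Derive a few illustrative traits from a public Twitter profile.

    `user` is a dict shaped like the Twitter API's user object and
    `recent_tweets` is a list of that user's recent tweet dicts; both
    are assumptions made for this sketch.
    """
    created_at = datetime.strptime(user["created_at"],
                                   "%a %b %d %H:%M:%S %z %Y")
    account_age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    retweets = sum(1 for t in recent_tweets if "retweeted_status" in t)

    return {
        "account_age_days": account_age_days,      # join date, as account age
        "followers": user["followers_count"],      # follower count
        # lifetime tweets per day of account age (tweeting rate)
        "tweet_rate": user["statuses_count"] / account_age_days,
        # share of recent activity that is retweets (retweeting rate)
        "retweet_ratio": retweets / max(len(recent_tweets), 1),
    }
```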
“Bot-like behavior” means that an account is behaving in ways a human user typically would not. Our model, of course, cannot verify the true identity of a Twitter user. Accounts that are classified as “exhibiting bot-like behavior” are most likely fully automated or semi-automated.
Sometimes, accounts run by humans are classified as “exhibiting bot-like behavior” when they tweet or retweet content with the consistency and frequency of a machine. These users are likely in violation of Twitter’s anti-spam policies, and their accounts have the same effect as fully or semi-automated accounts: they promote hyper-polarized content at machine-like rates, distorting our national political conversation.
(For a more in-depth analysis, click here)
Where can I find these accounts with bot-like behavior “in the wild”?
We notice that many of these accounts participate on Twitter by retweeting, promoting, and amplifying content from high-profile Twitter accounts.
Accounts exhibiting bot-like behavior often dedicate themselves to retweeting and reposting memes and URLs.
Does RoBhat Labs know who is responsible for these accounts?
Factcheck.me does not currently determine the origin of accounts exhibiting bot-like behavior.
Who is susceptible to the effects of bots?
Members of all political groups are vulnerable to these sorts of attacks and campaigns. Information campaigns exploit basic and universal human psychology.