Instagram has an Animal Abuse Problem
I used Mechanical Turk workers to reverse engineer Instagram’s flagging system. Here’s what I found.
Warning: The videos described in this article are highly disturbing.
Three months ago I stumbled upon an awful video on Instagram. It was security footage from a mall: a child had climbed over the railing of an escalator, lost his grip, and fell a long way to the ground. The clip ended abruptly, so it was unclear what happened after that.
I immediately regretted clicking it.
Then a funny thing happened. Since I had watched and interacted with the video, Instagram’s algorithm thought I liked it. It started suggesting similar graphic videos. Worse ones. Disturbed but curious, I watched and then flagged those as well, hoping the suggestions would stop.
They did not.
Unbeknownst to me at the time, by watching the videos in their entirety, I was effectively telling Instagram I wanted to see more bad stuff.
After a couple of weeks, I revisited some of the videos I had flagged earlier. None had been removed. At most, they displayed a warning message like this.
Why were none of these videos getting removed? I wanted to find out what Instagram considered inappropriate, but I needed more data. Before I could draw any conclusions about their flagging practices, the flagging system had to be stress-tested more methodically than my ad hoc reporting.
So I did an experiment¹.
I picked 9 videos which appeared to violate Instagram’s terms of service and sorted them into 3 categories:
- Violence/Gore
- Animal Cruelty
- Pornography
3 videos per category, 9 total. Not a huge sample size, but hopefully enough to lend some insight into what Instagram deems acceptable content.
Each video was flagged about once a day until one of the following things happened (a rough sketch of these stopping rules follows the list):
1) The account that posted the video went private
2) The video received 9 or 10 repeated flags
3) Instagram removed the video
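For the curious, those stopping rules boil down to a simple daily loop. Here is a minimal sketch in Python, purely for illustration: the experiment itself was run by hand through Mechanical Turk, and every name in the snippet (Video, daily_check, is_private, is_removed) is hypothetical rather than part of any real tool I used.

```python
# Purely illustrative sketch of the daily stopping rules.
# The actual experiment was carried out manually by Mechanical Turk workers.

from dataclasses import dataclass

MAX_FLAGS = 10  # stop after roughly 9-10 repeated flags


@dataclass
class Video:
    url: str
    category: str      # "Violence/Gore", "Animal Cruelty", or "Pornography"
    flags: int = 0
    outcome: str = ""  # filled in once a stopping condition is met


def daily_check(video: Video, is_private: bool, is_removed: bool) -> bool:
    """Record one day's flag and return True if the experiment should stop."""
    if is_private:
        video.outcome = "account went private"
        return True
    if is_removed:
        video.outcome = "video removed by Instagram"
        return True
    video.flags += 1  # one Mechanical Turk worker flags the video today
    if video.flags >= MAX_FLAGS:
        video.outcome = "hit the flag limit, still online"
        return True
    return False
```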
I used Amazon Mechanical Turk workers to flag the videos. The instructions required each worker to take screenshots of every step in the process to prove they had completed the task.
Each report came from a different person. So instead of me repeatedly flagging something, as I had done earlier, I now had multiple real users from all over the world flagging videos.²
It was a better model of a real-world environment.
By the end of the experiment, 66 people had each flagged 1 of the 9 test videos.
The results of the experiment were surprising.
Two of the three pornography videos were removed within 24 hours; the third was removed later. To me, this proves the company is diligent about certain types of content when it wants to be. Pornography is removed quickly on Instagram.
But to my amazement, none of the Violence/Gore or Animal Cruelty videos were removed.
At first I thought maybe content moderators had a long queue of videos to work through before getting to mine. But a month later, my test videos were all still viewable.
That the animal cruelty videos were still viewable was the most shocking part. But the violence/gore videos were still up too, and those were terrible in their own way.
In one of these videos, a man appears to punish female Nigerian soldiers by repeatedly whipping them across the backside. They can clearly be heard crying out in pain. In another video, a woman walking on the freeway is hit by a car and thrown to the other side of the road. She was almost certainly killed.
One month later, both were still viewable.
I did see a couple of videos outside of my experiment that appeared and then disappeared, presumably because they were removed by moderators. One was footage of a man being beheaded; another was a gruesome autopsy video. Each was pulled within a day or two.
But this raises the question: why would those videos get removed but not the ones I tested?
An autopsy clip is certainly gruesome and worthy of being pulled. But is it worse than footage of a rat being set on fire? Is it not removed because there is no blood? That makes no sense to me.
What the fuck is going on at Instagram?
Putting a warning message in front of a video showing animal abuse does not absolve Instagram of its responsibility to remove it. On one account, I counted 87 videos with these warning messages. By hiding behind them, Instagram can give the appearance of giving a shit while leaving itself room to hit its growth milestones.
The accounts I tested all have tens of thousands of followers, in some cases hundreds of thousands. It’s no surprise they aren’t being punished.
This is not a free speech issue. It’s about Instagram enforcing their own guidelines which clearly state “Sharing graphic images for sadistic pleasure or to glorify violence is never allowed.”
I honestly don’t expect Zuckerberg to care about this issue. The man worships at the altar of engagement. Growth at all costs.
But their advertisers should care. I’d like to ask Samsung, Heinz, and Mercedes-Benz if they know their ads are being shown next to videos of dogs and cats getting abused.
[1] Instagram’s Animal Abuse Problem (WARNING: NSFW — Contains highly graphic videos)
[2] I don’t know for a fact that having multiple people flag a video gives it priority in the content moderation software Instagram uses. Their site says, “The number of times something is reported doesn’t determine whether or not it’s removed from Instagram.” So while having 10 different people flag a video won’t increase the likelihood that it gets removed, one would hope those 10 flags would at least help it rise to the top of their queue.