Why does the AI recognize darker skin as more “violent”?
I made a game to trick Google’s AI, only to find out that it seems to be biased against people with darker skin color.
--
The game in question is called “Is this violence? Am I too sexy?” and was initially meant to be a lighthearted game about tricking the AI into thinking that violence or something sexy is happening in the picture. Pictures are taken in real time through the webcam and sent to Google Cloud Vision, which returns the likelihood that the picture contains violence or nudity. The player’s goal is to maximize this score without committing actual violence or actually going nude.
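To give a sense of the round trip, here is a minimal sketch of how a webcam frame could be scored against Cloud Vision’s SafeSearch detection, using OpenCV and the google-cloud-vision Python client. The game’s actual code may differ, and the mapping from Google’s likelihood buckets to a numeric score is my own placeholder:

```python
# pip install opencv-python google-cloud-vision
import cv2
from google.cloud import vision

# Placeholder mapping from Cloud Vision's likelihood buckets to game points.
POINTS = {"VERY_UNLIKELY": 0, "UNLIKELY": 1, "POSSIBLE": 2, "LIKELY": 3, "VERY_LIKELY": 4}

def score_frame(client: vision.ImageAnnotatorClient) -> dict:
    """Capture one webcam frame and get SafeSearch likelihoods for violence and raciness."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from the webcam")

    _, jpeg = cv2.imencode(".jpg", frame)          # encode the frame as JPEG bytes
    image = vision.Image(content=jpeg.tobytes())
    annotation = client.safe_search_detection(image=image).safe_search_annotation

    return {
        "violence": POINTS.get(vision.Likelihood(annotation.violence).name, 0),
        "racy": POINTS.get(vision.Likelihood(annotation.racy).name, 0),
    }

if __name__ == "__main__":
    print(score_frame(vision.ImageAnnotatorClient()))
```

The API returns likelihood buckets (from VERY_UNLIKELY to VERY_LIKELY) rather than a raw probability, so any numeric “score” shown to the player is a mapping layered on top of those buckets.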
Recent developments in machine learning have greatly advanced the performance of visual recognition systems that detect people and objects in an image. Tech companies offer visual recognition on arbitrary images as part of their cloud services, and services such as Google Cloud Vision and IBM Watson Visual Recognition are used every day by many third-party apps. One of the most heavily used functions is detecting explicit content such as pornography or violence.
This technology also powers Google SafeSearch, which decides what is appropriate to show in your image search results or on social media. These functions are clearly important for keeping harmful content off the web, but they raise a question: how does an AI judge subjective notions such as sexiness and violence, and is it possible to trick it? That was the starting point for this game.
Halfway through playtesting, we noticed distinct patterns depending on who was playing: people with darker skin consistently scored higher on violence, while people with lighter skin consistently scored higher when faking the racy score. Puzzled by this, I started testing how the scores change with skin color by photoshopping stock photos of people in obviously violent scenes.
According to Google’s documentation, racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas, while violent content depicts killing, shooting, or blood and gore. I chose a picture of a person holding a gun because Google’s website gives it as an example of an image that scores high on violence.
And here were the results.
*The skin color in the pictures below was altered to test how the AI would see the same picture differently.
The results suggest that there may indeed be a difference in how racy or violent the AI judges a person to be based on their skin color. When I converted a lighter-skinned person to a darker skin color, the violence score increased while the racy (sexiness) score decreased. Converting a darker-skinned person to a lighter skin color produced the opposite: the violence score decreased and the racy score increased. You can also try the system yourself on Google Cloud Vision’s demo page here.
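If you would rather test this programmatically than through the demo page, a comparison could look roughly like the sketch below, which sends two locally edited variants of the same photo to SafeSearch and prints the likelihood buckets. The file names are placeholders for whatever edited images you use:

```python
# pip install google-cloud-vision
from google.cloud import vision

def safe_search(client: vision.ImageAnnotatorClient, path: str) -> dict:
    """Return SafeSearch likelihood buckets for a local image file."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return {
        "violence": vision.Likelihood(annotation.violence).name,
        "racy": vision.Likelihood(annotation.racy).name,
    }

if __name__ == "__main__":
    client = vision.ImageAnnotatorClient()
    # Placeholder names for the lighter- and darker-skinned edits of the same photo.
    for variant in ("gun_photo_lighter.jpg", "gun_photo_darker.jpg"):
        print(variant, safe_search(client, variant))
```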
The possibility of biased datasets in the first place
Since the SafeSearch feature is not meant to be used in this way, it would be unfair to conclude at this point that there is a problem with Google Cloud Vision. The score changes might have nothing to do with skin color at all, and more testing would be needed to find out for sure.
But one can suspect that an unwanted bias slipped into the selection of the training data; there is a chance that the violence category was trained on data in which people with darker skin color were over-represented. These things are hard to verify, however, because the system is a black box. In computer science, the opaque and hard-to-interpret nature of AI decision making is often referred to as the “black box” problem: once a machine learning model is trained, it can be difficult to understand why it gives a particular output for a given set of inputs.
Using play to find false positives and uncover the black box
This is not a problem specific to Google’s AI but one that all autonomous decision-making systems share. There is no such thing as an unbiased human, and the same applies to the machines we make. Especially when dealing with subjective notions, it becomes impossible not to be biased, because bias is in the nature of subjective decision making itself.
My long-term aim with this project is not to criticize big tech companies or the technology itself, but to use play to keep these unwanted biases out. One effective approach to uncovering the black box is to collect false-positive cases that reveal unwanted biases in the system. I believe that by playing with the system we can find many such false positives, which can then help developers in the long run.
This project was part of my thesis at the Delft University of Technology and was done in partnership with Waag, a public research institution in Amsterdam.
My next step might be to make a web version of this game to nudge people around the world into generating false positives. If any developers on the Google Cloud Vision team can help me out with this, I would appreciate it very much ;)
The game “Is this violence? Am I too sexy?” will be exhibited as part of the Play-Well exhibition at the Wellcome Gallery in London on Feb 15 and 16.